Invasion of The Ethical Licenses

Filed under: OSS, Legal

About 23 years ago, I created the Debian Free Software Guidelines to help the Debian developers decide what software was permissible to include in Debian, which aspired to be 100% Free Software, and what should be consigned to a “non-free” repository upon which Debian would never depend. Nine months later, those guidelines became the Open Source Definition, and I announced Open Source to the world.

                        
[...]
                        
                        Despite the seeming impossibility of its enforcement, the Vaccine License is the most professionally constructed of this pack, carefully targeting the approval process of the Open Source Initiative – and IMO missing it. But all three licenses appear to be unlikely to obtain the agreement of a court in enforcement, and scaling their requirements would be a sort of full-employment act for lawyers.

Let’s work through how these licenses would be enforced.

When these licenses are enforced, the copyright holder is the plaintiff, a fancy word for someone who makes a complaint. Their complaint is that the defendant, the licensee, committed a tort, a violation of civil law. The tort is copyright infringement.

The important point here is that the complaint isn’t that the license was violated, the complaint is that the defendant did not have a license at all, and is infringing copyright. The defendant then has to prove that they did have a license, and that they were obeying the license’s terms, or that the court should for some reason not honor those terms.

Licenses are also contracts, and thus the claim can also be breach of contract. But contracts require the consent of both parties – the copyright holder, and the licensee. Real consent is indicated by signing the contract, but that doesn’t ever happen with this sort of license. Instead, there is a lesser indication of consent by the action of using, distributing, or modifying the software.

Read more

More in Tux Machines

Pinebook Pro Review: A $200 laptop that’s only for cool people.

There’s a $200 laptop out in the wild now that has been getting a lot of buzz in the Fediverse. It’s called the Pinebook Pro and it ships with a customized version of Debian Stretch with the MATE desktop. If you don’t know what that means, it’s Linux. This is a Linux laptop. But that’s not all… it also has a few other tricks up its sleeve, like a bootable MicroSD card slot so you can easily run other operating systems off a cheap memory card whenever you feel like it. Now, this is being sold at cost mainly as a gift to the Free (as in Freedom) Open Source Software (FOSS) community, so it’s not really meant for normal people. If you just want to open web pages like Facebook or Google Docs, you’re probably better off with a Chromebook or MacBook. If you believe in freedom and like to seriously learn about technology, keep reading… The Pinebook Pro is serious fun! Read more

Kernel: LWN's Latest Free Articles and Linux Support for "Data Streaming Accelerator" (DSA)

  • The 2019 Automated Testing Summit

    As with the first ATS, this edition was organized by Tim Bird and Kevin Hilman. Bird welcomed everyone to the conference, then turned things over to Hilman for something of an overview of the "kernel testing landscape". Hilman started by noting that there were some gatherings and discussions at the Linux Plumbers Conference (LPC) in September, which he described in an email to the automated-testing mailing list.

    There were some themes that came out of those discussions, he said, which led to the title of his talk (slides [PDF]): "The bugs are too fast (and why we can't catch them)". He gave a brief summary of the new kernel unit-testing frameworks that were discussed at LPC in order to bring attendees up to date on what kernel developers have been up to. The existing kernel test efforts, including kselftest, Linux Test Project (LTP), syzbot, and others, are likely pretty familiar to attendees, he said. The KUnit framework (LWN article from March) has been merged into linux-next; it is a fast way to test kernel functionality in an architecture-independent way and can be run in user space with user-mode Linux (UML). The Kernel Test Framework (KTF) is another unit-test framework that has been posted for comments. Since KUnit is headed for the mainline, though, the KTF project will need to figure out how to add its functionality to KUnit, Hilman said, since there won't be multiple unit-test frameworks in the mainline. (A brief sketch of what a KUnit test looks like appears at the end of this group of items.)

    He then turned to the various testing initiatives that are currently active. The Intel 0-Day test service is probably the longest running; it is "mostly Intel focused". The Linaro Linux kernel functional testing (LKFT) has "quite a bit of in-depth testing but on a narrower set of hardware". The Red Hat continuous kernel integration (CKI) project has been around for a while, but has only recently been seen more publicly, he said; it is focused on testing stable kernels. And, of course, there is KernelCI that he cofounded; it was officially announced as a Linux Foundation project earlier in the week.

  • Emulated iopl()

    Operating systems and computing hardware both carry a lot of their history with them. The x86 I/O-port mechanism is one piece of that history; it is rarely used by hardware designed in the last 20 years, but it must still be supported. That doesn't mean that this support can't be cleaned up and improved, though, especially when the old implementation turns out to have some unpleasant properties. An example can be seen in the iopl() patch set from Thomas Gleixner.

    On most architectures, I/O is handled through memory-mapped I/O (MMIO) regions. A peripheral device will make a set of registers available as a range of memory; that range is then mapped into the processor's address space. Device drivers can then interact with the device by reading from and writing to those registers using normal memory accesses (or something close to that). This mechanism is flexible and it allows, for example, a set of registers to be mapped into a user-space process if the need arises; user-space drivers generally depend on this capability.

    Back in the early days of the x86 architecture, though, things were done a little differently. A separate address space was created for up to 65536 I/O ports, which have to be accessed via special instructions. Even devices that could map memory ranges for other purposes would use I/O ports for their control interfaces. The instructions for accessing I/O ports are necessarily privileged, so user-space code cannot normally use them. (A sketch of the user-space interface for requesting that privilege appears at the end of this group of items.)

  • Statistics from the 5.4 development cycle

    As of this writing, just over 14,000 non-merge changesets have found their way into the mainline repository for the 5.4 release; that is a bit less than we saw for 5.3, but more than most of the other recent kernels. The final 5.4 release is approaching, so it must be time for our usual look at where the code merged in this development cycle came from. It's mostly business as usual in the kernel community, modulo an appearance from none other than Hulk Robot. Those 14,000 changesets were contributed by 1,802 developers, which is just short of the 1,846 who contributed to 5.3; there is still time, though, for 5.4 to set a new record for the number of contributors — a surprising number of developers wait until the end of the release cycle to fix something. Of the developers seen so far, 266 made their first contribution to the kernel in this cycle. The combined work from these developers increased the size of the kernel by 393,000 lines.

  • Analyzing kernel email

    Digging into the email that provides the cornerstone of Linux kernel development is an endeavor that has become more popular over the last few years. There are some practical reasons for analyzing the kernel mailing lists and for correlating that information with the patches that actually reach the mainline, including tracking the path that patches take—or don't take. Three researchers reported on some efforts they have made on kernel email analysis at the 2019 Embedded Linux Conference Europe (ELCE), held in late October in Lyon, France.

    The presentation (slides [PDF]) actually listed four speakers, though one could not make it to ELCE. The three present were Ralf Ramsauer, from the Technical University of Applied Sciences Regensburg, Sebastian Duda, of Friedrich–Alexander University Erlangen–Nürnberg, and Wolfgang Mauerer, of Siemens AG in Munich. Lukas Bulwahn, who is a hobbyist active in the Linux Foundation ELISA Project and employed at BMW AG, was unable to attend.

    In the introduction, Mauerer jokingly suggested that the goal of the research was to understand more "than the NSA already knows" about the behavior of kernel developers. Really, though, the presentation was meant partly as a request for comments; the researchers have been observing the kernel community for some time and have been pulling out pieces they find interesting, but they would be happy to hear other ideas on the kinds of analysis that would be useful to the community.

  • Intel Details New Data Streaming Accelerator For Future CPUs - Linux Support Started

    The "Data Streaming Accelerator" (DSA) is a new block on future Intel CPUs that hasn't been talked about much publicly... Until now. Intel's open-source crew has begun detailing DSA for future Intel CPUs that will offer high-performance data movement and transformation operations. The Linux driver enablement has begun.
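
To connect the testing-summit notes above to some actual code: below is a minimal sketch of what a KUnit test can look like, using the framework's documented building blocks (KUNIT_CASE(), KUNIT_EXPECT_EQ(), and kunit_test_suite()). The example_add() function and the suite name are made-up placeholders for illustration, not code from the kernel tree.

    /* Illustrative only: a self-contained KUnit test module. The function,
     * case, and suite names below are made up for this example. */
    #include <kunit/test.h>

    /* A trivial function under test, standing in for real kernel code. */
    static int example_add(int a, int b)
    {
            return a + b;
    }

    /* Each test case gets a struct kunit context to record expectations. */
    static void example_add_test(struct kunit *test)
    {
            KUNIT_EXPECT_EQ(test, 3, example_add(1, 2));
            KUNIT_EXPECT_EQ(test, 0, example_add(-2, 2));
    }

    /* The suite groups the cases; KUnit runs them when the test module is
     * built into a kernel, for example under user-mode Linux. */
    static struct kunit_case example_test_cases[] = {
            KUNIT_CASE(example_add_test),
            {}
    };

    static struct kunit_suite example_test_suite = {
            .name = "example",
            .test_cases = example_test_cases,
    };
    kunit_test_suite(example_test_suite);

Because tests like this need no real hardware, they can run quickly in user space (for instance under UML), which is what makes them attractive for the fast feedback loops discussed at the summit.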
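
And to make the port-I/O discussion in the emulated iopl() item more concrete, here is a minimal sketch of how a user-space program gains access to legacy x86 I/O ports through the existing iopl() interface in <sys/io.h>. The sketch assumes an x86 machine and root privileges, and the port it reads (0x64, the legacy keyboard-controller status port) is just an arbitrary example.

    /* Illustrative only: user-space port I/O on x86 Linux through the
     * long-standing <sys/io.h> interface. Requires root (CAP_SYS_RAWIO)
     * and only works on x86. */
    #include <stdio.h>
    #include <sys/io.h>     /* iopl(), inb(), outb() */

    int main(void)
    {
            /* Raise the I/O privilege level so the in/out instructions are
             * permitted; this is the call whose kernel-side implementation
             * the patch set reworks. */
            if (iopl(3) < 0) {
                    perror("iopl");
                    return 1;
            }

            /* Read one byte from port 0x64, the legacy keyboard-controller
             * status port -- an arbitrary example of a legacy I/O port. */
            unsigned char status = inb(0x64);
            printf("port 0x64 status byte: 0x%02x\n", status);
            return 0;
    }

The point of Gleixner's patch set is not to change this user-visible interface, but to clean up how the kernel provides it internally.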

Red Hat: Application Migration, Departure, OpenShift Commons Gathering and More

  • Application Migration with Container-native virtualization

    More and more frequently, modern applications are choosing a container-first development and deployment paradigm built on the foundation of Kubernetes. However, not all applications are fully modernized and containerized microservices. Many applications are a hybrid of architectures and technologies that have existed for years, even decades. This can add complexity, both to the application architecture and management overhead, when a container-based, cloud-native application component needs to access existing functionality which is virtual machine based.

    Container-native virtualization provides flexibility during the modernization process so that you can focus on the most critical aspects first, while still being able to access, manage, and consume VM-based aspects using the new Kubernetes-centric tools. Based on the KubeVirt project, recently accepted by the CNCF, Container-native virtualization manages both virtual machines and containers through a single control plane, saving time, resources, and budget.

    Red Hat Container-native virtualization delivers KubeVirt functionality directly to OpenShift customers and helps to manage both virtual machines and OpenShift deployments from a single platform. This single platform simplifies the management of virtual machines and containers with a common Kubernetes interface that standardizes orchestration, networking, and storage management while also supporting the long-term move to containers.

  • Alberto Ruiz: Hanging the Red Hat

    After 6+ wonderful years at Red Hat, I’ve decided to hang the fedora to go and try new things. For a while I’ve been craving a new challenge and I’ve felt the urge to try other things outside of the scope of Red Hat, so with great hesitation I’ve finally made the jump. I am extremely proud of the work done by the teams I have had the honour to run as engineering manager; I met wonderful people, I’ve worked with extremely talented engineers, and learned lots. I am particularly proud of the achievements of my latest team, from growing the bootloader team and improving our relationship with GRUB upstream, to our wins at teaching Lenovo how to do upstream hardware support, to improvements in Thunderbolt, Miracast, Fedora/RHEL VirtualBox guest compatibility… the list goes on and credit goes mostly to my amazing team. Thanks to this job I have been able to reach out to other upstreams beyond GNOME, like Fedora, LibreOffice, the Linux Kernel, Rust, GRUB… it has been an amazing ride and I’ve met wonderful people in each one of them.

  • Recap: OpenShift Commons Gathering at Kubecon/NA San Diego [Videos Uploaded]

    The OpenShift Commons Gathering in San Diego brought together more than 550 Kubernetes and Cloud Native experts from all over the world to discuss container technologies, best practices for cloud native application developers and the open source software projects that underpin the OpenShift ecosystem.

  • IBM Kicks Up Kubernetes Compatibility With Open Source

Antoine Beaupré: a quick review of file watchers

File watchers. I always forget about those and never use them, but I constantly feel like I need them. So I made this list to stop searching everywhere for those things which are surprisingly hard to find in a search engine. Read more
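
As a companion to that list: on Linux, most of these tools ultimately sit on top of the kernel's inotify API, so a minimal hand-rolled watcher is a useful reference point. The sketch below watches a single directory for create, modify, and delete events; the event selection and the lack of recursion are deliberate simplifications.

    /* Illustrative only: a tiny file watcher built directly on the Linux
     * inotify API; it watches one directory given on the command line. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(int argc, char *argv[])
    {
            if (argc < 2) {
                    fprintf(stderr, "usage: %s <directory>\n", argv[0]);
                    return 1;
            }

            int fd = inotify_init1(0);
            if (fd < 0) {
                    perror("inotify_init1");
                    return 1;
            }

            /* Ask for create, modify, and delete events in the directory. */
            if (inotify_add_watch(fd, argv[1],
                                  IN_CREATE | IN_MODIFY | IN_DELETE) < 0) {
                    perror("inotify_add_watch");
                    return 1;
            }

            /* Each read() returns one or more packed inotify_event records. */
            char buf[4096]
                    __attribute__((aligned(__alignof__(struct inotify_event))));
            for (;;) {
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0)
                            break;
                    char *p = buf;
                    while (p < buf + len) {
                            struct inotify_event *ev = (struct inotify_event *)p;
                            printf("event mask 0x%x on %s\n", (unsigned)ev->mask,
                                   ev->len ? ev->name : argv[1]);
                            p += sizeof(struct inotify_event) + ev->len;
                    }
            }
            close(fd);
            return 0;
    }

Command-line tools such as inotifywait (from inotify-tools) or entr wrap this same mechanism behind a more convenient interface, which is usually what you want day to day.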