Linux 5.3, LWN's Kernel Coverage and the Linux Foundation

  • Linux 5.3 Enables "-Wimplicit-fallthrough" Compiler Flag

    The recent work on enabling "-Wimplicit-fallthrough" behavior for the Linux kernel has culminated in Linux 5.3, where this compiler feature can finally be enabled universally.

    The -Wimplicit-fallthrough flag on GCC 7 and newer warns about switch statements where case fall-through behavior could lead to bugs or unexpected behavior.
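
    As a minimal illustration (not kernel code), the sketch below shows a switch statement with intentional fall-through and two ways of annotating it so that -Wimplicit-fallthrough stays quiet: a marking comment, as used throughout the kernel tree for 5.3, or GCC's fallthrough statement attribute. Removing either annotation makes the compiler warn.

        /* Illustrative only; build with: gcc -Wimplicit-fallthrough -c example.c */
        #include <stdio.h>

        static void handle(int event)
        {
            switch (event) {
            case 0:
                printf("reset\n");
                break;
            case 1:
                printf("start\n");
                /* fall through */
            case 2:
                printf("run\n");
                __attribute__((fallthrough));
            default:
                printf("done\n");
            }
        }

        int main(void)
        {
            handle(1);      /* prints "start", "run", "done" */
            return 0;
        }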

  • EXT4 For Linux 5.3 Gets Fixes & Faster Case-Insensitive Lookups

    The EXT4 file-system updates have already landed for the Linux 5.3 kernel merge window that opened this week.

    For Linux 5.3, EXT4 maintainer Ted Ts'o sent in primarily a hearty serving of fixes. The fixes range from addressing Coverity warnings to typo corrections and other items for this mature and widely used Linux file-system.
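
    On the case-insensitive side, the faster lookups apply to directories that have the casefold attribute set on a filesystem created with the casefold feature (mkfs.ext4 -O casefold). The sketch below is an illustration, not code from the ext4 patches: it does in C what "chattr +F" does from the shell, using the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls and the FS_CASEFOLD_FL flag from the kernel's UAPI headers. The directory must be empty, and the kernel and headers must be new enough (5.2+) to know the flag.

        #include <fcntl.h>
        #include <linux/fs.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fd, flags;

            if (argc != 2) {
                fprintf(stderr, "usage: %s <empty-directory>\n", argv[0]);
                return 1;
            }
            fd = open(argv[1], O_RDONLY | O_DIRECTORY);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
                perror("FS_IOC_GETFLAGS");
                return 1;
            }
            flags |= FS_CASEFOLD_FL;        /* the same bit "chattr +F" toggles */
            if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
                perror("FS_IOC_SETFLAGS");
                return 1;
            }
            close(fd);
            return 0;
        }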

  • Providing wider access to bpf()

    The bpf() system call allows user space to load a BPF program into the kernel for execution, manipulate BPF maps, and carry out a number of other BPF-related functions. BPF programs are verified and sandboxed, but they are still running in a privileged context and, depending on the type of program loaded, are capable of creating various types of mayhem. As a result, most BPF operations, including the loading of almost all types of BPF program, are restricted to processes with the CAP_SYS_ADMIN capability — those running as root, as a general rule. BPF programs are useful in many contexts, though, so there has long been interest in making access to bpf() more widely available. One step in that direction has been posted by Song Liu; it works by adding a novel security-policy mechanism to the kernel.
    This approach is easy enough to describe. A new special device, /dev/bpf, is added, with the core idea that any process that has permission to open this file will be allowed "to access most of sys_bpf() features" — though what comprises "most" is never really spelled out. A non-root process that wants to perform a BPF operation, such as creating a map or loading a program, will start by opening this file. It then must perform an ioctl() call (BPF_DEV_IOCTL_GET_PERM) to actually enable its ability to call bpf(). That ability can be turned off again with the BPF_DEV_IOCTL_PUT_PERM ioctl() command.

    Internally to the kernel, this mechanism works by adding a new field (bpf_flags) to the task_struct structure. When BPF access is enabled, a bit is set in that field. If this patch goes forward, that detail is likely to change since, as Daniel Borkmann pointed out, adding an unsigned long to that structure for a single bit of information is unlikely to be popular; some other location for that bit will be found.
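
    A rough sketch of the user-space flow described above, as it might look on a kernel carrying this patch set: the ioctl() command names come from the posting, but the header that would define them, the absence of an ioctl argument, and the open flags are assumptions made here for illustration.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        /*
         * BPF_DEV_IOCTL_GET_PERM and BPF_DEV_IOCTL_PUT_PERM would be defined by
         * the UAPI headers shipped with the proposed patch; they are not part of
         * mainline kernels, so this sketch will not build against one.
         */

        int main(void)
        {
            int fd = open("/dev/bpf", O_RDWR);  /* the permission check happens here */
            if (fd < 0) {
                perror("open /dev/bpf");
                return 1;
            }
            /* enable this task's ability to call bpf() */
            if (ioctl(fd, BPF_DEV_IOCTL_GET_PERM) < 0) {
                perror("BPF_DEV_IOCTL_GET_PERM");
                return 1;
            }

            /* ... create maps, load programs, etc. via bpf() here ... */

            /* drop the ability again when finished */
            ioctl(fd, BPF_DEV_IOCTL_PUT_PERM);
            close(fd);
            return 0;
        }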

  • The io.weight I/O-bandwidth controller

    Part of the kernel's job is to arbitrate access to the available hardware resources and ensure that every process gets its fair share, with "its fair share" being defined by policies specified by the administrator. One resource that must be managed this way is I/O bandwidth to storage devices; if due care is not taken, an I/O-hungry process can easily saturate a device, starving out others. The kernel has had a few I/O-bandwidth controllers over the years, but the results have never been entirely satisfactory. But there is a new controller on the block that might just get the job done.
    There are a number of challenges facing an I/O-bandwidth controller. Some processes may need a guarantee that they will get at least a minimum amount of the available bandwidth to a given device. More commonly in recent times, though, the focus has shifted to latency: a process should be able to count on completing an I/O request within a bounded period of time. The controller should be able to provide those guarantees while still driving the underlying device at something close to its maximum rate. And, of course, hardware varies widely, so the controller must be able to adapt its operation to each specific device.

    The earliest I/O-bandwidth controller allows the administrator to set maximum bandwidth limits for each control group. That controller, though, will throttle I/O even if the device is otherwise idle, causing the loss of I/O bandwidth. The more recent io.latency controller is focused on I/O latency, but as Tejun Heo, the author of the new controller, notes in the patch series, this controller really only protects the lowest-latency group, penalizing all others if need be to meet that group's requirements. He set out to create a mechanism that would allow more control over how I/O bandwidth is allocated to groups.
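
    As a concrete (and assumed) illustration of the interface such a weight-based controller exposes, the cgroup-v2 io.weight file takes a default weight plus optional per-device overrides, and a group's share of a contended device scales with its weight relative to its siblings. The sketch below writes a lower-than-default weight for a hypothetical "batch" group; the cgroup path is made up for the example.

        #include <stdio.h>

        int main(void)
        {
            /* hypothetical cgroup; weights are relative and 100 is the default */
            FILE *f = fopen("/sys/fs/cgroup/batch/io.weight", "w");

            if (!f) {
                perror("fopen");
                return 1;
            }
            /* "default N" covers all devices; "MAJ:MIN N" overrides a single device */
            fprintf(f, "default 50\n");
            if (fclose(f) != 0) {
                perror("fclose");
                return 1;
            }
            return 0;
        }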

  • TurboSched: the return of small-task packing

    CPU scheduling is a difficult task in the best of times; it is not trivial to pick the next process to run while maintaining fairness, minimizing energy use, and using the available CPUs to their fullest potential. The advent of increasingly complex system architectures is not making things easier; scheduling on asymmetric systems (such as the big.LITTLE architecture) is a case in point. The "turbo" mode provided by some recent processors is another. The TurboSched patch set from Parth Shah is an attempt to improve the scheduler's ability to get the best performance from such processors.
    Those of us who have been in this field for far too long will, when seeing "turbo mode", think back to the "turbo button" that appeared on personal computers in the 1980s. Pushing it would clock the processor beyond its original breathtaking 4.77MHz rate to something even faster — a rate that certain applications were unprepared for, which is why the "go slower" mode was provided at all. Modern turbo mode is a different thing, though, and it's not just a matter of a missing front-panel button. In short, it allows a processor to be overclocked above its rated maximum frequency for a period of time when the load on the rest of the system allows it.

    Turbo mode can thus increase the CPU cycles available to a given process, but there is a reason why the CPU's rated maximum frequency is lower than what turbo mode provides. The high-speed mode can only be sustained as long as the CPU temperature does not get too high and, crucially (for the scheduler), the overall power load on the system must not be too high. That, in turn, implies that some CPUs must be powered down; if all CPUs are running, there will not be enough power available for any of those CPUs to go into the turbo mode. This mode, thus, is only usable for certain types of workloads and will not be usable (or beneficial) for many others.

  • EdgeX Foundry Announces Production Ready Release Providing Open Platform for IoT Edge Computing to a Growing Global Ecosystem

    EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge IoT computing independent of hardware, silicon, application, cloud, or operating system, today announced the availability of its “Edinburgh” release. Created collaboratively by a global ecosystem, EdgeX Foundry’s new release is a key enabler of digital transformation for IoT use cases and is a platform for real-world applications both for developers and end users across many vertical markets. EdgeX community members have created a range of complementary products and services, including commercial support, training, and customer pilot programs, as well as plug-in enhancements for device connectivity, applications, data and system management, and security.

    Launched in April 2017, and now part of the LF Edge umbrella, EdgeX Foundry is an open source, loosely-coupled microservices framework that provides the choice to plug and play from a growing ecosystem of available third party offerings or to augment proprietary innovations. With a focus on the IoT Edge, EdgeX simplifies the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications.

More in Tux Machines

10 Linux Distributions and Their Targeted Users

As a free and open-source operating system, Linux has spawned several distributions over time, spreading its wings to encompass a large community of users. From desktop/home users to enterprise environments, Linux has ensured that each category has something to be happy about. [...]

Debian is renowned for being a mother to popular Linux distributions such as Deepin, Ubuntu, and Mint, which have provided solid performance, stability, and an unparalleled user experience. The latest stable release is Debian 10.5, an update of Debian 10, colloquially known as Debian Buster. Note that Debian 10.5 does not constitute a new version of Debian Buster; it is only an update of Buster with the latest updates and added software applications, along with security fixes that address pre-existing security issues. If you already have a Buster system, there’s no need to discard it. Simply perform a system upgrade using the APT package manager.

The Debian project provides over 59,000 software packages and supports a wide range of PCs, with each release encompassing a broader array of system architectures. It strives to strike a balance between cutting-edge technology and stability.

Debian provides three salient development branches: Stable, Testing, and Unstable. The Stable version, as the name suggests, is rock-solid and enjoys full security support but, unfortunately, does not ship with the very latest software applications. Nevertheless, it is ideal for production servers owing to its stability and reliability, and it also makes the cut for relatively conservative desktop users who do not mind forgoing the very latest software packages. Debian Stable is what you would usually install on your system.

Debian Testing is a rolling release and provides the latest software versions that are yet to be accepted into the stable release. It is a development phase of the next stable Debian release. It’s usually fraught with instability issues and might easily break. Also, it doesn’t get its security patches in a timely fashion. The latest Debian Testing release is Bullseye.

The Unstable distro is the active development phase of Debian. It is an experimental distro and acts as a perfect platform for developers who are actively contributing code until it transitions to the ‘Testing’ stage.

Overall, Debian is used by millions of users owing to its package-rich repository and the stability it provides, especially in production environments.

Read more

LWN on Linux and Linux Foundation Bits

  • Modernizing the tasklet API

    Tasklets offer a deferred-execution method in the Linux kernel; they have been available since the 2.3 development series. They allow interrupt handlers to schedule further work to be executed as soon as possible after the handler itself. The tasklet API has its shortcomings, but it has stayed in place while other deferred-execution methods, including workqueues, have been introduced. Recently, Kees Cook posted a security-inspired patch set (also including work from Romain Perier) to improve the tasklet API. This change is uncontroversial, but it provoked a discussion that might lead to the removal of the tasklet API in the (not so distant) future.

    The need for tasklets and other deferred-execution mechanisms comes from the way the kernel handles interrupts. An interrupt is (usually) caused by some hardware event; when it happens, the execution of the current task is suspended and the interrupt handler takes the CPU. Before the introduction of threaded interrupts, the interrupt handler had to perform the minimum necessary operations (like accessing the hardware registers to silence the interrupt) and then call an appropriate deferred-work mechanism to take care of just about everything else that needed to be done. Threaded interrupts, yet another import from the realtime preemption work, move the handler to a kernel thread that is scheduled in the usual way; this feature was merged for the 2.6.30 kernel, by which time tasklets were well established.

    An interrupt handler will schedule a tasklet when there is some work to be done at a later time. The kernel then runs the tasklet when possible, typically when the interrupt handler finishes or when the task returns to user space. The tasklet callback runs in atomic context, inside a software interrupt, meaning that it cannot sleep or access user-space data, so not all work can be done in a tasklet handler. Also, the kernel only allows one instance of any given tasklet to be running at any given time; multiple different tasklet callbacks can run in parallel. Those limitations of tasklets are not present in more recent deferred-work mechanisms like workqueues. Even so, the current kernel contains more than a hundred users of tasklets.

    Cook's patch set changes the parameter type for the tasklet's callback. In current kernels, callbacks take an unsigned long value that is specified when the tasklet is initialized. This is different from other kernel mechanisms with callbacks; the preferred way in current kernels is to use a pointer to a type-specific structure. The change Cook proposes goes in that direction by passing the tasklet context (struct tasklet_struct) to the callback. The goal behind this work is to avoid a number of problems, including the need to cast the unsigned long value to a different type (without proper type checking) in the callback. The change allows the removal of the (now) redundant data field from the tasklet structure. Finally, this change mitigates possible buffer-overflow attacks that could overwrite the callback pointer and the data field. This is likely one of the primary objectives, as the work was first posted (in 2019) on the kernel-hardening mailing list.
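
    A before-and-after sketch of the callback change, using made-up driver names; tasklet_setup() and from_tasklet() are the helpers this rework introduced for initializing a tasklet and recovering the containing structure. This is kernel-module code, not a standalone program.

        #include <linux/interrupt.h>

        struct my_dev {
            struct tasklet_struct tasklet;
            /* ... device state ... */
        };

        /* Old style: context arrives as an opaque unsigned long and is cast back */
        static void my_dev_do_work_old(unsigned long data)
        {
            struct my_dev *dev = (struct my_dev *)data;
            /* ... deferred work using dev ... */
        }

        /* New style: the tasklet pointer itself identifies the context */
        static void my_dev_do_work(struct tasklet_struct *t)
        {
            struct my_dev *dev = from_tasklet(dev, t, tasklet);
            /* ... deferred work using dev ... */
        }

        static void my_dev_setup(struct my_dev *dev)
        {
            /* old: tasklet_init(&dev->tasklet, my_dev_do_work_old, (unsigned long)dev); */
            tasklet_setup(&dev->tasklet, my_dev_do_work);
        }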

  • Android kernel notes from LPC 2020

    Todd Kjos started things off by introducing the Android Generic Kernel Image (GKI) effort, which is aimed at reducing Android's kernel-fragmentation problem in general. It is the next step for the Android Common Kernel, which is based on the mainline long-term support (LTS) releases with a number of patches added on top. These patches vary from Android-specific, out-of-tree features to fixes cherry-picked from mainline releases. The end result is that the Android Common Kernel diverges somewhat from the LTS releases on which it is based.

    From there, things get worse. Vendors pick up this kernel and apply their own changes — often significant, core-kernel changes — to create a vendor kernel. The original-equipment manufacturers begin with that kernel when creating a device based on the vendor's chips, but then add changes of their own to create the OEM kernel that is shipped with a device to the consumer. The end result of all this patching is that every device has its own kernel, meaning that there are thousands of different "Android" kernels in use.

    There are a lot of costs to this arrangement, Kjos said. Fragmentation makes it harder to ensure that all devices are running current kernels — or even that they get security updates. New platform releases require a new kernel, which raises the cost of upgrading an existing device to a new Android version. Fixes applied by vendors and OEMs often do not make it back into the mainline, making things worse for everybody.

    The Android developers would like to fix this fragmentation problem; the path toward that goal involves providing a single generic kernel in binary form (the GKI) that all devices would use. Any vendor-specific or device-specific code that is not in the mainline kernel will need to be shipped in the form of kernel modules to be loaded into the GKI. That means that Android is explicitly encouraging vendor modules, Kjos said; the result is a cleaner kernel without the sorts of core-kernel modifications that ship on many devices now. This policy has already resulted in more vendors actively working to upstream their code. That code often does not take the form that mainline developers would like to see; some of it is just patches exporting symbols. That has created some tension in the development community, he said.

  • Vibrant Networking, Edge Open Source Development On Full Display at Open Networking & Edge Summit

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today marked significant progress in the open networking and edge spaces. In advance of the Open Networking and Edge Summit happening September 28-30, Linux Foundation umbrella projects LF Edge and LF Networking demonstrate recent achievements highlighting trends that set the stage for next-generation technology. [...] “We are thrilled to announce a number of milestones across our networking and edge projects, which will be on virtual display at ONES next week,” said Arpit Joshipura, general manager, Networking, Edge and IoT, at the Linux Foundation. “Indicative of what’s coming next, our communities are laying the groundwork for markets like cloud native, 5G, and edge to explode in terms of open deployments.”

GNU Projects: GnuPG, GCC and GNU Parallel 20200922 ('Ginsburg')

  • OpenPGP in Rust: the Sequoia project

    In 2018, three former GnuPG developers began work on Sequoia, a new implementation of OpenPGP in Rust. OpenPGP is an open standard for data encryption, often used for secure email; GnuPG is an implementation of that standard. The GPLv2-licensed Sequoia is heading toward version 1.0, with a handful of issues remaining to be addressed. The project's founders believe that there is much to be desired in GnuPG, which is the de facto standard implementation of OpenPGP today. They hope to fix this with a reimplementation of the specification using a language with features that will help protect users from common types of memory bugs.

    While GnuPG is the most popular OpenPGP implementation — especially for Linux — there are others, including OpenKeychain, OpenPGP.js, and RNP. OpenPGP has been criticized for years (such as this blog post from 2014, and another from 2019); the Sequoia project is working to build modern OpenPGP tooling that addresses many of those complaints. Sequoia has already been adopted by several other projects, including keys.openpgp.org, OpenPGP CA, koverto, Pijul, and KIPA.

    Sequoia was started by Neal H. Walfield, Justus Winter, and Kai Michaelis; each worked on GnuPG for about two years. In a 2018 presentation [YouTube] (slides [PDF]) Walfield discussed their motivations for the new project. In his opinion, GnuPG is "hard to modify" — mostly due to its organic growth over the decades. Walfield pointed out the tight coupling between components in GnuPG and the lack of unit testing as specific problem areas. As an example, he noted that the GnuPG command-line tool and the corresponding application libraries do not have the same abilities; there are things that can only be done using the command-line tool.

  • BPF in GCC

    The BPF virtual machine is being used ever more widely in the kernel, but it has not been a target for GCC until recently. BPF is currently generated using the LLVM compiler suite. Jose E. Marchesi gave a pair of presentations as part of the GNU Tools track at the 2020 Linux Plumbers Conference (LPC) that provided attendees with a look at the BPF for GCC project, which started around a year ago. It has made some significant progress, but there is, of course, more to do. There are three phases envisioned for the project. The first is to add the BPF target to the GNU toolchain. Next up is to ensure that the generated programs pass the kernel's verifier, so that they can be loaded into the kernel. That will also require effort to keep it working, Marchesi said, because the BPF world moves extremely fast. The last phase is to provide additional tools for BPF developers, beyond just a compiler and assembler, such as debuggers and simulators.
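
    For a sense of what such a toolchain consumes, here is a minimal BPF program in restricted C, written without libbpf helpers; the section names follow common loader conventions, and the compiler invocation in the comment assumes a GCC cross-compiler built for the bpf-unknown-none target (the exact name of the installed binary may differ on a given system).

        /* Hypothetical build: bpf-unknown-none-gcc -O2 -c filter.c -o filter.o */

        /* license string that loaders pass to the kernel at load time */
        char _license[] __attribute__((section("license"))) = "GPL";

        /* a trivial socket filter that drops every packet */
        __attribute__((section("socket")))
        int drop_all(void *ctx)
        {
            (void)ctx;      /* a real filter would inspect the packet here */
            return 0;       /* 0 = keep zero bytes, i.e. drop the packet */
        }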

  • GNU Parallel 20200922 ('Ginsburg') released

    GNU Parallel 20200922 ('Ginsburg') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Arduino and Linux in Devices/Embedded

  • Making a random sound diffuser with Arduino

    Humans are generally quite bad at coming up with random patterns, so when Jeremy Cook wanted to make a sound diffuser with angled blocks of wood, he created a “pseudorandomness console” using an Arduino Uno and an LCD shield. [...] Code for this unique randomization is available on GitHub, with a quick explanation in the video above. You can see the final assembly at around the 4:38 mark, showing a process of applying glue, pressing a button to generate a value, and then placing triangles accordingly.

  • Marcin 'hrw' Juszkiewicz: 8 years of my work on AArch64

    Back in 2012, AArch64 was something new and still largely unknown. There was no toolchain support (no gcc, binutils, or glibc), and I was assigned to get some things running around it. [...] First we had nothing. Then I added the AArch64 target to OpenEmbedded. The same month, Arm released the Foundation model, so anyone was able to play with an AArch64 system. No screen, just storage, serial, and network, but it was enough for some to start building whole distributions like Debian, Fedora, OpenSUSE, and Ubuntu. At that moment, several patches were shared by all distributions, as that was faster than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in various distributions.

  • Intel's 10nm Elkhart Lake Atom chips feature Cortex-M7 and triple 4K

    Software support includes Yocto Project, Ubuntu, Wind River Linux LTS, Android 10, and Windows 10 IoT Enterprise. There is also support for Intel’s OpenVINO toolkit, with pre-optimized libraries for AI, ML, and computer vision acceleration.

  • Intel Unveils Atom x6000E Series, Celeron and Pentium Elkhart Lake IoT Edge Processors

    We’ve been expecting Intel Elkhart Lake processors for more than a year, and the company has now officially announced the “IoT-enhanced processors” with a new Atom x6000E Series, as well as some Celeron and Pentium N/J parts. Last year, we thought Elkhart Lake would succeed Gemini Lake, but the new 11th generation 10nm processors may not be found in many consumer devices, as they target IoT edge applications with additional artificial intelligence (AI), security, functional safety, and real-time capabilities.

  • Intel Introduces IoT-Enhanced Processors to Increase Performance, AI, Security
  • ADATA XPG GAIA MINI PC is based on the Linux-friendly Intel NUC 9 Extreme Kit
  • Ensemble Graphics Toolkit to speed Linux GUI development

    Microchip has launched a GUI development toolkit for its portfolio of 32-bit microprocessors (MPUs) running Linux, helping designers of industrial, medical, consumer and automotive graphical displays to reduce development cost and time-to-market.

  • GUI toolkit for Linux enhances 32-bit microprocessor capabilities

    Microchip Technology offers a new GUI development toolkit for its portfolio of 32-bit MPUs running Linux, assisting designers of industrial, medical, consumer and automotive graphical displays to lower development cost and time-to-market.