Linux 5.3, LWN's Kernel Coverage and the Linux Foundation

Filed under: Linux
  • Linux 5.3 Enables "-Wimplicit-fallthrough" Compiler Flag

    The recent work on preparing the Linux kernel for "-Wimplicit-fallthrough" has culminated in Linux 5.3, where this compiler feature can finally be enabled universally.

    The -Wimplicit-fallthrough flag on GCC 7 and newer warns about switch cases that fall through into the next case, a pattern that can hide bugs and lead to unexpected behavior.
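
    As a minimal, hypothetical illustration (not code from the kernel tree), the warning fires on a case that silently runs into the next one, while a "fall through" comment of the kind used throughout the kernel silences it where the behavior is intentional:

        /* fallthrough.c: build with gcc -c -Wimplicit-fallthrough fallthrough.c */
        int classify(int c)
        {
            int score = 0;

            switch (c) {
            case 0:
                score += 1;        /* no break and no marker: GCC 7+ warns here */
            case 1:
                score += 2;
                /* fall through */ /* recognized marker: no warning emitted */
            case 2:
                score += 4;
                break;
            default:
                score = -1;
            }
            return score;
        }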

  • EXT4 For Linux 5.3 Gets Fixes & Faster Case-Insensitive Lookups

    The EXT4 file-system updates have already landed for the Linux 5.3 kernel merge window that opened this week.

    For Linux 5.3, EXT4 maintainer Ted Ts'o sent in primarily a hearty serving of fixes, ranging from addressing Coverity warnings to correcting typos and other small items for this mature and widely used Linux file-system.
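
    The faster lookups build on the case-insensitive directory support EXT4 gained earlier, which is opt-in per directory. A hedged sketch of enabling it on an empty directory (illustrative error handling; it assumes recent kernel headers and a file-system created with the casefold feature, e.g. mkfs.ext4 -O casefold):

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>
        #include <unistd.h>

        /* Set the casefold attribute so name lookups under this directory
         * become case-insensitive. Sketch only; real code should also
         * verify that the directory is empty before setting the flag. */
        int make_casefold(const char *dir)
        {
            int fd = open(dir, O_RDONLY | O_DIRECTORY);
            int flags;

            if (fd < 0)
                return -1;
            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
                close(fd);
                return -1;
            }
            flags |= FS_CASEFOLD_FL;
            if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
                close(fd);
                return -1;
            }
            close(fd);
            return 0;
        }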

  • Providing wider access to bpf()

    The bpf() system call allows user space to load a BPF program into the kernel for execution, manipulate BPF maps, and carry out a number of other BPF-related functions. BPF programs are verified and sandboxed, but they are still running in a privileged context and, depending on the type of program loaded, are capable of creating various types of mayhem. As a result, most BPF operations, including the loading of almost all types of BPF program, are restricted to processes with the CAP_SYS_ADMIN capability — those running as root, as a general rule. BPF programs are useful in many contexts, though, so there has long been interest in making access to bpf() more widely available. One step in that direction has been posted by Song Liu; it works by adding a novel security-policy mechanism to the kernel.
    This approach is easy enough to describe. A new special device, /dev/bpf, is added, with the core idea that any process that has the permission to open this file will be allowed "to access most of sys_bpf() features" — though what comprises "most" is never really spelled out. A non-root process that wants to perform a BPF operation, such as creating a map or loading a program, will start by opening this file. It then must perform an ioctl() call (BPF_DEV_IOCTL_GET_PERM) to actually enable its ability to call bpf(). That ability can be turned off again with the BPF_DEV_IOCTL_PUT_PERM ioctl() command.

    Internally to the kernel, this mechanism works by adding a new field (bpf_flags) to the task_struct structure. When BPF access is enabled, a bit is set in that field. If this patch goes forward, that detail is likely to change since, as Daniel Borkmann pointed out, adding an unsigned long to that structure for a single bit of information is unlikely to be popular; some other location for that bit will be found.
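
    A hedged sketch of the proposed flow from user space follows; the two ioctl() command names come from the patch posting, and their definitions would be supplied by the patch's UAPI header, so this builds against no released kernel:

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        /* BPF_DEV_IOCTL_GET_PERM and BPF_DEV_IOCTL_PUT_PERM are defined only
         * by Song Liu's (unmerged) patch, not by any released kernel UAPI. */

        int run_unprivileged_bpf(void)
        {
            int fd = open("/dev/bpf", O_RDWR);

            if (fd < 0)
                return -1;    /* process lacks permission on /dev/bpf */
            if (ioctl(fd, BPF_DEV_IOCTL_GET_PERM) < 0) {
                close(fd);
                return -1;
            }
            /* bpf(BPF_MAP_CREATE, ...), program loads, and so on are now
             * permitted for this task, for the patch's notion of "most". */
            ioctl(fd, BPF_DEV_IOCTL_PUT_PERM);    /* drop the ability again */
            close(fd);
            return 0;
        }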

  • The io.weight I/O-bandwidth controller

    Part of the kernel's job is to arbitrate access to the available hardware resources and ensure that every process gets its fair share, with "its fair share" being defined by policies specified by the administrator. One resource that must be managed this way is I/O bandwidth to storage devices; if due care is not taken, an I/O-hungry process can easily saturate a device, starving out others. The kernel has had a few I/O-bandwidth controllers over the years, but the results have never been entirely satisfactory. But there is a new controller on the block that might just get the job done.
    There are a number of challenges facing an I/O-bandwidth controller. Some processes may need a guarantee that they will get at least a minimum amount of the available bandwidth to a given device. More commonly in recent times, though, the focus has shifted to latency: a process should be able to count on completing an I/O request within a bounded period of time. The controller should be able to provide those guarantees while still driving the underlying device at something close to its maximum rate. And, of course, hardware varies widely, so the controller must be able to adapt its operation to each specific device.

    The earliest I/O-bandwidth controller allows the administrator to set maximum bandwidth limits for each control group. That controller, though, will throttle I/O even if the device is otherwise idle, causing the loss of I/O bandwidth. The more recent io.latency controller is focused on I/O latency, but as Tejun Heo, the author of the new controller, notes in the patch series, this controller really only protects the lowest-latency group, penalizing all others if need be to meet that group's requirements. He set out to create a mechanism that would allow more control over how I/O bandwidth is allocated to groups.
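
    As with other cgroup-v2 controllers, that policy would be expressed through files in the cgroup hierarchy. A hedged sketch of giving one group twice the default share (the cgroup path and weight value are illustrative; "batch" must be an existing group with the io controller enabled):

        #include <stdio.h>

        int main(void)
        {
            /* The default weight is 100; "default 200" asks for twice
             * the default share of the device's I/O capacity. */
            FILE *f = fopen("/sys/fs/cgroup/batch/io.weight", "w");

            if (!f) {
                perror("fopen io.weight");
                return 1;
            }
            fprintf(f, "default 200\n");
            return fclose(f) ? 1 : 0;
        }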

  • TurboSched: the return of small-task packing

    CPU scheduling is a difficult task in the best of times; it is not trivial to pick the next process to run while maintaining fairness, minimizing energy use, and using the available CPUs to their fullest potential. The advent of increasingly complex system architectures is not making things easier; scheduling on asymmetric systems (such as the big.LITTLE architecture) is a case in point. The "turbo" mode provided by some recent processors is another. The TurboSched patch set from Parth Shah is an attempt to improve the scheduler's ability to get the best performance from such processors.
    Those of us who have been in this field for far too long will, when seeing "turbo mode", think back to the "turbo button" that appeared on personal computers in the 1980s. Pushing it would clock the processor beyond its original breathtaking 4.77MHz rate to something even faster — a rate that certain applications were unprepared for, which is why the "go slower" mode was provided at all. Modern turbo mode is a different thing, though, and it's not just a matter of a missing front-panel button. In short, it allows a processor to be overclocked above its rated maximum frequency for a period of time when the load on the rest of the system allows it.

    Turbo mode can thus increase the CPU cycles available to a given process, but there is a reason why the CPU's rated maximum frequency is lower than what turbo mode provides. The high-speed mode can only be sustained as long as the CPU temperature does not get too high and, crucially (for the scheduler), the overall power load on the system must not be too high. That, in turn, implies that some CPUs must be powered down; if all CPUs are running, there will not be enough power available for any of those CPUs to go into the turbo mode. This mode, thus, is only usable for certain types of workloads and will not be usable (or beneficial) for many others.

  • EdgeX Foundry Announces Production Ready Release Providing Open Platform for IoT Edge Computing to a Growing Global Ecosystem

    EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge IoT computing independent of hardware, silicon, application, cloud, or operating system, today announced the availability of its “Edinburgh” release. Created collaboratively by a global ecosystem, EdgeX Foundry’s new release is a key enabler of digital transformation for IoT use cases and is a platform for real-world applications both for developers and end users across many vertical markets. EdgeX community members have created a range of complementary products and services, including commercial support, training, customer pilot programs, and plug-in enhancements for device connectivity, applications, data and system management, and security.

    Launched in April 2017, and now part of the LF Edge umbrella, EdgeX Foundry is an open source, loosely-coupled microservices framework that provides the choice to plug and play from a growing ecosystem of available third party offerings or to augment proprietary innovations. With a focus on the IoT Edge, EdgeX simplifies the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications.

More in Tux Machines

Audiocasts/Shows: Linux in the Ham Shack and Linux Headlines

  • LHS Episode #302: The End of Kenwood

    Welcome to Episode 302 of Linux in the Ham Shack. In this short-topics episode, the hosts discuss the potential end of Kenwood in the amateur radio market, emcomm in Montucky, Storm Area 51, HF on satellites, a huge update for PulseAudio, the Linux 5.3 kernel and much more. Thank you for listening and have a fantastic week.

  • 09/19/2019 | Linux Headlines

    Fresh init system controversy at the Debian project, a more scalable Samba, and a big release for LLVM. Plus GitHub's latest security steps and a new version of OBS Studio.

Android Leftovers

When Diverse Network ASICs Meet A Unifying Operating System

And it has also been a decade since switch upstart Arista Networks launched its Extensible Operating System, or EOS, which is derived from Linux. [...] The cross-platform nature of ArcOS, coupled with its ability to run in any function on the network, could turn out to be the key differentiator. A lot of these other NOSes were point solutions that could only be deployed in certain parts of the network, and that just creates animosity with the incumbent vendors that dominate the rest of the networking stack. Given the mission-critical nature of networking in the modern datacenter, it costs a great deal to qualify a new network operating system, and it can take a lot of time. If ArcOS can run across more platforms, qualify faster, and do more jobs in the network, then, says Garg, it has a good chance of shaking up switching and routing. “That totally changes the business conversation and the TCO advantages that we can bring to a customer across the entirety of their network.”

Server: Kubernetes/OpenShift, OpenStack, and Red Hat's Ansible

  • 9 steps to awesome with Kubernetes/OpenShift presented by Burr Sutter

    Burr Sutter gave a terrific talk in India in July, where he laid out the terms, systems, and processes needed to set up Kubernetes for developers. This is an introductory presentation, which may be useful for your larger community of Kubernetes users once you’ve already set up User Provisioned Infrastructure (UPI) in Red Hat OpenShift for them, though it does go into the deeper details of actually running a cluster. To follow along, Burr created an accompanying GitHub repository, so you too can learn how to set up an awesome Kubernetes cluster in just 9 steps.

  • Weaveworks Named a Top Kubernetes Contributor

    But anyone who knows the history of Weaveworks might not be too surprised by this. Weaveworks has been a major champion of Kubernetes since the very beginning. It might not be too much of a coincidence that Weaveworks was incorporated only a few weeks after Kubernetes was open sourced, five years ago. In addition, the very first elected chair of the CNCF’s Technical Oversight Committee, responsible for technical leadership of the Cloud Native Computing Foundation, was our CEO, Alexis Richardson (@monadic), soon to be succeeded by the awesome Liz Rice (@lizrice) of Aqua Security.

  • Improving trust in the cloud with OpenStack and AMD SEV

    This post contains an exciting announcement, but first I need to provide some context! Ever heard that joke “the cloud is just someone else’s computer”? Of course it’s a gross over-simplification, but there’s more than a grain of truth in it. And that raises the question: if your applications are running in someone else’s data-centre, how can you trust that they’re not being snooped upon, or worse, invasively tampered with?

  • Red Hat OpenStack Platform 15 Enhances Infrastructure Security and Cloud-Native Integration Across the Open Hybrid Cloud

    Red Hat, Inc., the world's leading provider of open source solutions, today announced the general availability of Red Hat OpenStack Platform 15, the latest version of its highly scalable and agile cloud Infrastructure-as-a-Service (IaaS) solution. Based on the OpenStack community’s "Stein" release, Red Hat OpenStack Platform 15 adds performance and cloud security enhancements and expands the platform’s ecosystem of supported hardware, helping IT organizations to more quickly and more securely support demanding production workloads. Given the role of Linux as the foundation for hybrid cloud, customers can also benefit from a more secure, flexible and intelligent Linux operating system underpinning their private cloud deployments with Red Hat Enterprise Linux 8.

  • Red Hat Ansible Automation Accelerates Past Major Adoption Milestone, Now Manages More Than Four Million Customer Systems Worldwide

    Red Hat, Inc., the world's leading provider of open source solutions, today announced that more than four million customer systems worldwide are now automated by Red Hat Ansible Automation. Customers, including Energy Market Company, Microsoft, Reserve Bank of New Zealand and Surescripts, all use Red Hat Ansible Automation to automate and orchestrate their IT operations, helping to expand automation across IT stacks. According to a blog post by Chris Gardner with Forrester Research, who was the author of The Forrester Wave™: Infrastructure Automation Platforms, Q3 2019, "Infrastructure automation isn’t just on-premises or the cloud. It’s at the edge and everywhere in between." Since its launch in 2013, Red Hat Ansible Automation has provided a single tool to help organizations automate across IT operations and development, including infrastructure, networks, cloud, security and beyond.