
Linux 5.3, LWN's Kernel Coverage and the Linux Foundation

  • Linux 5.3 Enables "-Wimplicit-fallthrough" Compiler Flag

    The recent work on enabling "-Wimplicit-fallthrough" behavior for the Linux kernel has culminated in Linux 5.3, where this compiler feature can finally be enabled universally.

    The -Wimplicit-fallthrough flag on GCC 7 and newer warns of cases where switch-case fall-through behavior could lead to bugs or unexpected behavior.
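
    As a minimal illustration (not actual kernel code), here is a switch statement that -Wimplicit-fallthrough would flag, along with the attribute that marks the fall-through as intentional; the kernel wraps this attribute in a "fallthrough" macro:

    ```c
    #include <stdio.h>

    /* Build with: gcc -Wimplicit-fallthrough fallthrough.c
     * Without the attribute below, GCC 7+ warns that control may fall
     * through from case 1 into case 2. */
    int main(void)
    {
        int n = 1, weight = 0;

        switch (n) {
        case 1:
            weight += 10;
            __attribute__((fallthrough)); /* intentional: case 1 also does case 2's work */
        case 2:
            weight += 1;
            break;
        default:
            break;
        }

        printf("weight=%d\n", weight); /* prints weight=11 */
        return 0;
    }
    ```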

  • EXT4 For Linux 5.3 Gets Fixes & Faster Case-Insensitive Lookups

    The EXT4 file-system updates have already landed for the Linux 5.3 kernel merge window that opened this week.

    For Linux 5.3, EXT4 maintainer Ted Ts'o sent in primarily a hearty serving of fixes, ranging from Coverity warnings being addressed to typo corrections and other items for this mature and widely used Linux file-system.

  • Providing wider access to bpf()

    The bpf() system call allows user space to load a BPF program into the kernel for execution, manipulate BPF maps, and carry out a number of other BPF-related functions. BPF programs are verified and sandboxed, but they are still running in a privileged context and, depending on the type of program loaded, are capable of creating various types of mayhem. As a result, most BPF operations, including the loading of almost all types of BPF program, are restricted to processes with the CAP_SYS_ADMIN capability — those running as root, as a general rule. BPF programs are useful in many contexts, though, so there has long been interest in making access to bpf() more widely available. One step in that direction has been posted by Song Liu; it works by adding a novel security-policy mechanism to the kernel.
    This approach is easy enough to describe. A new special device, /dev/bpf, is added, with the core idea that any process with permission to open this file will be allowed "to access most of sys_bpf() features" — though what comprises "most" is never really spelled out. A non-root process that wants to perform a BPF operation, such as creating a map or loading a program, will start by opening this file. It must then perform an ioctl() call (BPF_DEV_IOCTL_GET_PERM) to actually enable its ability to call bpf(). That ability can be turned off again with the BPF_DEV_IOCTL_PUT_PERM ioctl() command.

    Internally to the kernel, this mechanism works by adding a new field (bpf_flags) to the task_struct structure. When BPF access is enabled, a bit is set in that field. If this patch goes forward, that detail is likely to change since, as Daniel Borkmann pointed out, adding an unsigned long to that structure for a single bit of information is unlikely to be popular; some other location for that bit will be found.
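
    The proposed user-space sequence could be sketched roughly as follows. Note that /dev/bpf and the BPF_DEV_IOCTL_* commands exist only in Song Liu's patch set, and the request codes below are invented placeholders, so this illustrates the call sequence rather than working code for any released kernel:

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Placeholder request codes: the real values would be defined by the
     * (unmerged) patch set, not by any existing kernel header. */
    #define BPF_DEV_IOCTL_GET_PERM _IO('b', 1)
    #define BPF_DEV_IOCTL_PUT_PERM _IO('b', 2)

    int main(void)
    {
        int fd = open("/dev/bpf", O_RDWR);
        if (fd < 0) {
            /* Expected on a stock kernel: the device comes from the patch set. */
            perror("open /dev/bpf");
            return 0;
        }

        if (ioctl(fd, BPF_DEV_IOCTL_GET_PERM) == 0) {
            /* bpf() operations such as map creation would now be permitted
             * for this task; do the work, then drop the permission again. */
            ioctl(fd, BPF_DEV_IOCTL_PUT_PERM);
        }

        close(fd);
        return 0;
    }
    ```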

  • The io.weight I/O-bandwidth controller

    Part of the kernel's job is to arbitrate access to the available hardware resources and ensure that every process gets its fair share, with "its fair share" being defined by policies specified by the administrator. One resource that must be managed this way is I/O bandwidth to storage devices; if due care is not taken, an I/O-hungry process can easily saturate a device, starving out others. The kernel has had a few I/O-bandwidth controllers over the years, but the results have never been entirely satisfactory. But there is a new controller on the block that might just get the job done.
    There are a number of challenges facing an I/O-bandwidth controller. Some processes may need a guarantee that they will get at least a minimum amount of the available bandwidth to a given device. More commonly in recent times, though, the focus has shifted to latency: a process should be able to count on completing an I/O request within a bounded period of time. The controller should be able to provide those guarantees while still driving the underlying device at something close to its maximum rate. And, of course, hardware varies widely, so the controller must be able to adapt its operation to each specific device.

    The earliest I/O-bandwidth controller allows the administrator to set maximum bandwidth limits for each control group. That controller, though, will throttle I/O even if the device is otherwise idle, causing the loss of I/O bandwidth. The more recent io.latency controller is focused on I/O latency, but as Tejun Heo, the author of the new controller, notes in the patch series, this controller really only protects the lowest-latency group, penalizing all others if need be to meet that group's requirements. He set out to create a mechanism that would allow more control over how I/O bandwidth is allocated to groups.

  • TurboSched: the return of small-task packing

    CPU scheduling is a difficult task in the best of times; it is not trivial to pick the next process to run while maintaining fairness, minimizing energy use, and using the available CPUs to their fullest potential. The advent of increasingly complex system architectures is not making things easier; scheduling on asymmetric systems (such as the big.LITTLE architecture) is a case in point. The "turbo" mode provided by some recent processors is another. The TurboSched patch set from Parth Shah is an attempt to improve the scheduler's ability to get the best performance from such processors.
    Those of us who have been in this field for far too long will, when seeing "turbo mode", think back to the "turbo button" that appeared on personal computers in the 1980s. Pushing it would clock the processor beyond its original breathtaking 4.77MHz rate to something even faster — a rate that certain applications were unprepared for, which is why the "go slower" mode was provided at all. Modern turbo mode is a different thing, though, and it's not just a matter of a missing front-panel button. In short, it allows a processor to be overclocked above its rated maximum frequency for a period of time when the load on the rest of the system allows it.

    Turbo mode can thus increase the CPU cycles available to a given process, but there is a reason why the CPU's rated maximum frequency is lower than what turbo mode provides. The high-speed mode can only be sustained as long as the CPU temperature does not get too high and, crucially (for the scheduler), the overall power load on the system must not be too high. That, in turn, implies that some CPUs must be powered down; if all CPUs are running, there will not be enough power available for any of those CPUs to go into the turbo mode. This mode, thus, is only usable for certain types of workloads and will not be usable (or beneficial) for many others.

  • EdgeX Foundry Announces Production Ready Release Providing Open Platform for IoT Edge Computing to a Growing Global Ecosystem

    EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge IoT computing independent of hardware, silicon, application, cloud, or operating system, today announced the availability of its “Edinburgh” release. Created collaboratively by a global ecosystem, EdgeX Foundry’s new release is a key enabler of digital transformation for IoT use cases and is a platform for real-world applications both for developers and end users across many vertical markets. EdgeX community members have created a range of complementary products and services, including commercial support, training and customer pilot programs and plug-in enhancements for device connectivity, applications, data and system management and security.

    Launched in April 2017, and now part of the LF Edge umbrella, EdgeX Foundry is an open source, loosely-coupled microservices framework that provides the choice to plug and play from a growing ecosystem of available third party offerings or to augment proprietary innovations. With a focus on the IoT Edge, EdgeX simplifies the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications.

More in Tux Machines

Graphics: Coming Next in AMD and Mesa 20.3

  • AMD Ryzen 5000 leak shows a powerful APU to strike back at Intel’s Tiger Lake

    This popping up in Linux now suggests that we could see these Ryzen 5000 chips sooner rather than later. Currently, their anticipated debut is early 2021, but maybe it’ll be very early 2021; perhaps at CES? Or could we see a reveal possibly even this year? Who knows, and of course all this is pure guesswork, although the latter still seems rather unlikely. Whatever the case, Ryzen 5000 APUs for notebooks aren’t far away now, and will of course go up against Intel’s Tiger Lake CPUs which have already been revealed, and will start pitching up in laptops before the end of 2020 (we already know that some notebooks will be arriving in November). These 11th-gen mobile chips from Intel look to be shaping up very impressively from what we’ve seen thus far, and of course come with Xe integrated graphics, which represents a big step forward for gaming on a laptop – and that’s why RDNA 2 graphics will be key for AMD with its incoming Van Gogh APUs.

  • AMD Linux Kernel Patch Confirms Next-Gen Van Gogh APUs With DDR5 And RDNA2

    After a Linux kernel patch with 275K lines of code came out on Friday, the people over at Phoronix began to snoop around for any hidden information. Among the lines of code, they discovered that the upcoming Van Gogh APUs from AMD will have Navi 2 GPUs and will use DDR5 system memory.

  • Mesa 20.3 Can Now Consume SPIR-V Binaries Generated By LLVM's libclc

    Libclc is the LLVM library for OpenCL C programming language support and goes along with Clang's OpenCL front-end. Jesse Natalie of Microsoft saw his two-month-old merge request land on Friday, making it possible for Mesa OpenCL code to consume SPIR-V binaries produced by libclc. Ultimately this code in part allows converting a libclc SPIR-V library into a set of NIR functions. Earlier this year the effort was started by Red Hat's David Airlie to support a SPIR-V library generated from libclc for implementing OpenCL runtime functions. Microsoft, though, pursued the work over the finish line as part of their effort to get OpenCL over Direct3D 12 (and OpenGL).

France’s open data lab launches study into open source and education

Etalab, the French governmental open data lab, has begun a study on the importance of open source software in higher education and research. The study will identify open source use in education, and compare institutional strategies on open data, open access, and the sovereignty of education.

Openwashing of Failing Swift by Apple

FreeBSD 12.2-BETA3 Now Available

The third BETA build of the 12.2-RELEASE release cycle is now available.

Installation images are available for:

o 12.2-BETA3 amd64 GENERIC
o 12.2-BETA3 i386 GENERIC
o 12.2-BETA3 powerpc GENERIC
o 12.2-BETA3 powerpc64 GENERIC64
o 12.2-BETA3 powerpcspe MPC85XXSPE
o 12.2-BETA3 armv6 RPI-B
o 12.2-BETA3 armv7 BANANAPI
o 12.2-BETA3 armv7 BEAGLEBONE
o 12.2-BETA3 armv7 CUBIEBOARD
o 12.2-BETA3 armv7 CUBIEBOARD2
o 12.2-BETA3 armv7 CUBOX-HUMMINGBOARD
o 12.2-BETA3 armv7 RPI2
o 12.2-BETA3 armv7 WANDBOARD
o 12.2-BETA3 armv7 GENERICSD
o 12.2-BETA3 aarch64 GENERIC
o 12.2-BETA3 aarch64 RPI3
o 12.2-BETA3 aarch64 PINE64
o 12.2-BETA3 aarch64 PINE64-LTS

Note regarding arm SD card images: For convenience for those without
console access to the system, a freebsd user with a password of
freebsd is available by default for ssh(1) access.  Additionally,
the root user password is set to root.  It is strongly recommended
to change the password for both users after gaining access to the
system.

Installer images and memory stick images are available here:

    https://download.freebsd.org/ftp/releases/ISO-IMAGES/12.2/

The image checksums follow at the end of this e-mail.

If you notice problems you can report them through the Bugzilla PR
system or on the -stable mailing list.

If you would like to use SVN to do a source based update of an existing
system, use the "releng/12.2" branch.

A summary of changes since 12.2-BETA2 includes:

o An installation issue with certctl(8) has been fixed.

o Read/write kstats for ZFS datasets have been added from OpenZFS.

o The default vm.max_user_wired value has been increased.

o The kern.geom.part.check_integrity sysctl(8) has been extended to work
  on GPT partitions.

o The cxgbe(4) firmware has been updated to version 1.25.0.0.

o Fixes for the em(4) and igb(4) drivers have been applied.

o A potential NFS server crash has been fixed.

o A lock order reversal between the NFS server and the server-side krpc
  has been addressed.

A list of changes since 12.1-RELEASE is available in the releng/12.2
release notes:

    https://www.freebsd.org/releases/12.2R/relnotes.html

Please note, the release notes page is not yet complete, and will be
updated on an ongoing basis as the 12.2-RELEASE cycle progresses.

=== Virtual Machine Disk Images ===

VM disk images are available for the amd64, i386, and aarch64
architectures.  Disk images may be downloaded from the following URL
(or any of the FreeBSD download mirrors):

    https://download.freebsd.org/ftp/releases/VM-IMAGES/12.2-BETA3/

The partition layout is:

    ~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
    ~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
    ~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)

The disk images are available in QCOW2, VHD, VMDK, and raw disk image
formats.  The image download size is approximately 135 MB and 165 MB
respectively (amd64/i386), decompressing to a 21 GB sparse image.

Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
loader file is needed for qemu-system-aarch64 to be able to boot the
virtual machine images.  See this page for more information:

    https://wiki.freebsd.org/arm64/QEMU

To boot the VM image, run:

    % qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt  \
	-bios QEMU_EFI.fd -serial telnet::4444,server -nographic \
	-drive if=none,file=VMDISK,id=hd0 \
	-device virtio-blk-device,drive=hd0 \
	-device virtio-net-device,netdev=net0 \
	-netdev user,id=net0

Be sure to replace "VMDISK" with the path to the virtual machine image.

=== Amazon EC2 AMI Images ===

FreeBSD/amd64 EC2 AMIs are available in the following regions:

  af-south-1 region: ami-085b7b5b76d8f88e1
  eu-north-1 region: ami-0d2aaf811cd455b5d
  ap-south-1 region: ami-0c85211fa78c701f5
  eu-west-3 region: ami-08c4c388a19042fb3
  eu-west-2 region: ami-030841f586c12d392
  eu-south-1 region: ami-035fcb9515104859e
  eu-west-1 region: ami-0d5e826250c10cd3a
  ap-northeast-2 region: ami-01adc51da511ea8fc
  me-south-1 region: ami-04b2ddbedee42d57a
  ap-northeast-1 region: ami-0e5b3fc6777cd037d
  sa-east-1 region: ami-08be6405809912e60
  ca-central-1 region: ami-0c954a7d72d7b483c
  ap-east-1 region: ami-04377808aeca208a7
  ap-southeast-1 region: ami-02e1e04501c308c0b
  ap-southeast-2 region: ami-0e9ae229b9ca55677
  eu-central-1 region: ami-002e88141d3b00ee2
  us-east-1 region: ami-0c678fade90df8f04
  us-east-2 region: ami-0967c088cbf208659
  us-west-1 region: ami-0dafae7edc2b2f376
  us-west-2 region: ami-07e4d062d094f5364

FreeBSD/aarch64 EC2 AMIs are available in the following regions:

  af-south-1 region: ami-07c05f6349125a1c7
  eu-north-1 region: ami-041e507b80cb59335
  ap-south-1 region: ami-064907659b94c4823
  eu-west-3 region: ami-000c4a31405be8e94
  eu-west-2 region: ami-0debbacd03a24e562
  eu-south-1 region: ami-0c358e05477cd8b6b
  eu-west-1 region: ami-0fc48c1fef0e255f0
  ap-northeast-2 region: ami-06bd715c00c4237b7
  me-south-1 region: ami-04a671aa9611f8a74
  ap-northeast-1 region: ami-008e0fa8be5e5c44c
  sa-east-1 region: ami-03c2f687354f086b4
  ca-central-1 region: ami-0647aa16bc62701a3
  ap-east-1 region: ami-08f54406159203762
  ap-southeast-1 region: ami-007e5e33e3e4d9152
  ap-southeast-2 region: ami-0a028a4f5beeed373
  eu-central-1 region: ami-072e09d78436cf375
  us-east-1 region: ami-0218fa187d85dc688
  us-east-2 region: ami-06e8312e95743ce1a
  us-west-1 region: ami-0211983509f75ee9b
  us-west-2 region: ami-038188157f971a711

=== Vagrant Images ===

FreeBSD/amd64 images are available on the Hashicorp Atlas site, and can
be installed by running:

    % vagrant init freebsd/FreeBSD-12.2-BETA3
    % vagrant up

=== Upgrading ===

The freebsd-update(8) utility supports binary upgrades of amd64 and i386
systems running earlier FreeBSD releases.  Systems running earlier
FreeBSD releases can upgrade as follows:

	# freebsd-update upgrade -r 12.2-BETA3

During this process, freebsd-update(8) may ask the user to help by
merging some configuration files or by confirming that the automatically
performed merging was done correctly.

	# freebsd-update install

The system must be rebooted with the newly installed kernel before
continuing.

	# shutdown -r now

After rebooting, freebsd-update needs to be run again to install the new
userland components:

	# freebsd-update install

It is recommended to rebuild and install all applications if possible,
especially if upgrading from an earlier FreeBSD release, for example,
FreeBSD 11.x.  Alternatively, the user can install misc/compat11x and
other compatibility libraries; afterwards, the system must be rebooted
into the new userland:

	# shutdown -r now

Finally, after rebooting, freebsd-update needs to be run again to remove
stale files:

	# freebsd-update install