Kernel: Linux 5.3, DragonFlyBSD Takes Linux Bits, LWN Paywall Expires for Recent Articles

  • Ceph updates for 5.3-rc1
    Hi Linus,
    The following changes since commit 0ecfebd2b52404ae0c54a878c872bb93363ada36:
    Linux 5.2 (2019-07-07 15:41:56 -0700)
    are available in the Git repository at: tags/ceph-for-5.3-rc1
    for you to fetch changes up to d31d07b97a5e76f41e00eb81dcca740e84aa7782:
    ceph: fix end offset in truncate_inode_pages_range call (2019-07-08 14:01:45 +0200)
    There is a trivial conflict caused by commit 9ffbe8ac05db
    ("locking/lockdep: Rename lockdep_assert_held_exclusive() ->
    lockdep_assert_held_write()"). I included the resolution in
  • Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    Ceph for Linux 5.3 brings an addition to speed up reads, discards, and snap-diffs on sparse images; snapshot creation time is now exposed to support features like "restore previous versions"; support for security xattrs (currently limited to SELinux); a previously missing feature bit has been addressed so the kernel client's Ceph features are now "luminous"; better consistency with Ceph FUSE; and a change in time granularity from 1us to 1ns. There are also bug fixes and other work as part of the Ceph code for Linux 5.3. As maintainer Ilya Dryomov put it, "Lots of exciting things this time!"

  • The NVMe Patches To Support Linux On Newer Apple Macs Are Under Review

    At the start of the month we reported on out-of-tree kernel work to support Linux on the newer Macs. Those patches were focused on supporting Apple's NVMe drive behavior by the Linux kernel driver. That work has been evolving nicely and is now under review on the kernel mailing list.

    Sent out on Tuesday was a set of three patches to the Linux kernel's NVMe code for handling the Apple hardware of the past few years so that Linux can drive these SSDs.

    On Apple systems from 2018 onward, the I/O queue sizing/handling is unusual and in other areas the controllers do not properly follow the NVMe specification. These patches take care of those quirks while hopefully not regressing existing NVMe controller support.

  • DragonFlyBSD Pulls In The Radeon Driver Code From Linux 4.4

    While the Linux 4.4 kernel is quite old (January 2016), DragonFlyBSD has now re-based its AMD Radeon kernel graphics driver against that release. It is at least a big improvement over its previous Radeon code, which was derived from Linux 3.19.

    DragonFlyBSD developer François Tigeot continues doing a good job herding the open-source Linux graphics driver support to this BSD. With the code that landed on Monday, DragonFlyBSD's Radeon DRM is based upon the state found in the Linux 4.4.180 LTS tree.

  • Destaging ION

    The Android system has shipped a couple of allocators for DMA buffers over the years; first came PMEM, then its replacement ION. The ION allocator has been in use since around 2012, but it remains stuck in the kernel's staging tree. The work to add ION to the mainline started in 2013; at that time, the allocator had multiple issues that made inclusion impossible. Recently, John Stultz posted a patch set introducing DMA-BUF heaps, an evolution of ION designed to do exactly that: get the Android DMA-buffer allocator into the mainline Linux kernel.

    Applications interacting with devices often require a memory buffer that is shared with the device driver. Ideally, it would be memory mapped and physically contiguous, allowing direct DMA access and minimal overhead when accessing the data from both sides at the same time. ION's main goal is to support that use case; it implements a unified way of defining and sharing such memory buffers, while taking into account the constraints imposed by the devices and the platform.

  • clone3(), fchmodat4(), and fsinfo()

    The kernel development community continues to propose new system calls at a high rate. Three ideas that are currently in circulation on the mailing lists are clone3(), fchmodat4(), and fsinfo(). In some cases, developers are just trying to make more flag bits available, but there is also some significant new functionality being discussed.

    The clone() system call creates a new process or thread; it is the actual machinery behind fork(). Unlike fork(), clone() accepts a flags argument to modify how it operates. Over time, quite a few flags have been added; most of these control what resources and namespaces are to be shared with the new child process. In fact, so many flags have been added that, when CLONE_PIDFD was merged for 5.2, the last available flag bit was taken. That puts an end to the extensibility of clone().

  • Soft CPU affinity

    On NUMA systems with a lot of CPUs, it is common to assign parts of the workload to different subsets of the available processors. This partitioning can improve performance while reducing the ability of jobs to interfere with each other. The partitioning mechanisms available in current kernels might just do too good a job in some situations, though, leaving some CPUs idle while others are overutilized. The soft affinity patch set from Subhra Mazumdar is an attempt to improve performance by making that partitioning more porous.

    In current kernels, a process can be restricted to a specific set of CPUs with either the sched_setaffinity() system call or the cpuset mechanism. Either way, any process so restricted will only be able to run on the specified CPUs regardless of the state of the system as a whole. Even if the other CPUs in the system are idle, they will be unavailable to any process that has been restricted to run elsewhere. That is normally the desired behavior; a system administrator who has partitioned a system in this way probably has some other use in mind for those CPUs.

    But what if the administrator would rather relax the partitioning in cases where the fenced-off CPUs are idle and going to waste? The only alternative currently is to not partition the system at all and let processes roam across all CPUs. One problem with that approach, beyond losing the isolation between jobs, is that NUMA locality can be lost, resulting in reduced performance even with more CPUs available. In theory the AutoNUMA balancing code in the kernel should address that problem by migrating processes and their memory to the same node, but Mazumdar notes that it doesn't seem to work properly when memory is spread out across the system. Its reaction time is also said to be too slow, and the cost of the page scanning required is high.

More in Tux Machines

Late Coverage of Confidential Computing Consortium

  • Microsoft Partners With Google, Intel, And Others To Form Data Protection Consortium

    The software maker joined Google Cloud, Intel, IBM, Alibaba, Arm, Baidu, Red Hat, Swisscom, and Tencent to establish the Confidential Computing Consortium, a group committed to providing better private data protection, promoting the use of confidential computing, and advancing open source standards among members of the technology community.

  • #OSSUMMIT: Confidential Computing Consortium Takes Shape to Enable Secure Collaboration

    At the Open Source Summit in San Diego, California on August 21, the Linux Foundation announced the formation of the Confidential Computing Consortium. Confidential computing is an approach using encrypted data that enables organizations to share and collaborate, while still maintaining privacy. Among the initial backers of the effort are Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

    “The context of confidential computing is that we can actually use the data encrypted while programs are working on it,” John Gossman, distinguished engineer at Microsoft, said during a keynote presentation announcing the new effort.

    Initially there are three projects that are part of the Confidential Computing Consortium, with an expectation that more will be added over time. Microsoft has contributed its Open Enclave SDK, Red Hat is contributing the Enarx project for Trusted Execution Environments and Intel is contributing its Software Guard Extensions (SGX) software development kit. Lorie Wigle, general manager, platform security product management at Intel, explained that Intel has had a capability built into some of its processors called software guard which essentially provides a hardware-based capability for protecting an area of memory.

Graphics: Mesa Radeon Vulkan Driver and SPIR-V Support For OpenGL 4.6

  • Mesa Radeon Vulkan Driver Sees ~30% Performance Boost For APUs

    Mesa's RADV Radeon Vulkan driver just saw a big performance optimization land to benefit APUs like Raven Ridge and Picasso, i.e. systems with no dedicated video memory. The change by Feral's Alex Smith puts the uncached GTT type at a higher index than the visible VRAM type for these configurations without dedicated VRAM, namely APUs.

  • Intel Iris Gallium3D Is Close With SPIR-V Support For OpenGL 4.6

    This week saw OpenGL 4.6 support finally merged for Intel's i965 Mesa driver; it will be part of the upcoming Mesa 19.2 release. Not landed yet but coming soon is OpenGL 4.6 support for the newer Intel "Iris" Gallium3D driver. Iris Gallium3D has been at OpenGL 4.5 and is quite close to OpenGL 4.6 thanks to the NIR support shared with the rest of the Intel open-source graphics stack. It's looking less likely that OpenGL 4.6 support will be back-ported to Mesa 19.2 for Iris, but we'll see.

The GPD MicroPC in 3 Minutes [Video Review]

In it I tackle the GPD MicroPC with Ubuntu MATE 19.10. I touch on the same points made in my full text review, but with the added bonus of moving images to illustrate my points, rather than words.

Read more

Also: WiringPi - Deprecated

today's howtos