Nepal sets example with school laptops

Filed under: OLPC

More in Tux Machines

Graphics: Dav1d AV1 Acceleration, AMDVLK and Sway 1.4

  • Dav1d AV1 Decoder Begins Adding AVX-512 Optimizations For Intel Ice Lake

    Ahead of the forthcoming dav1d 0.6 release, this already quite speedy open-source AV1 video decoder has begun implementing AVX-512 optimizations targeting Intel Ice Lake processors.

  • AMDVLK 2020.Q1.1 Brings Some Performance Tuning, Still On Vulkan 1.1

    Out this morning is AMDVLK 2020.Q1.1, AMD's first official open-source Vulkan driver code drop of the new year. While the Radeon Software Adrenalin Edition driver for Windows was recently updated with Vulkan 1.2 support, this AMDVLK release remains on Vulkan 1.1, though it is at least updated for compliance with API version 1.1.130. Hopefully the next code drop will officially expose Vulkan 1.2 support. Meanwhile, Mesa's RADV Radeon Vulkan driver has supported Vulkan 1.2 since just hours after the specification was unveiled.

  • Sway 1.4 Wayland Compositor Brings VNC Support, Initial Bits For MATE Panel Support

    Sway 1.4 is out today as the newest version of this i3-inspired Wayland compositor with a growing following. The release consists of nearly 200 changes from over 50 contributors, showing the significant progress of a compositor that has been quick to pick up features over the past few years.

LWN and Oracle on Linux 5.x Kernel

  • Grabbing file descriptors with pidfd_getfd()

    In response to a growing desire for ways to control groups of processes from user space, the kernel has added a number of mechanisms that allow one process to operate on another. One piece that is currently missing, though, is the ability for a process to snatch a copy of an open file descriptor from another. That gap may soon be filled if the pidfd_getfd() system-call patch set from Sargun Dhillon is merged.

    One thing that is possible in current kernels is to open a file that another process also has open; the information needed to do that is in each process's /proc directory. That does not work, though, for file descriptors referring to pipes, sockets, or other objects that do not appear in the filesystem hierarchy. Just as importantly, opening a new file in this way creates a new entry in the file table; it is not the entry corresponding to the file descriptor in the process of interest. That distinction matters if the objective is to modify that particular file descriptor.

    One use case mentioned in the patch series is using seccomp to intercept attempts to bind a socket to a privileged port. A privileged supervisor process could, if it so chose, grab the file descriptor for that socket from the target process and actually perform the bind — something the target process would not have the privilege to do on its own. Since the grabbed file descriptor is essentially identical to the original, the bind operation will be visible to the target process as well.

    For the sufficiently determined, it is actually possible to extract a file descriptor from another process now: attach to that process with ptrace(), stop it from executing, inject code that opens a connection to the supervisor process and sends the file descriptor via an SCM_RIGHTS datagram, then run that code. This solution might justly be said to be slightly lacking in elegance. It also requires stopping the target process, which is likely to be unwelcome. (A minimal sketch of the proposed pidfd_getfd() call appears after this list.)

  • configfd() and shifting bind mounts

    The 5.2 kernel saw the addition of an extensive new API for the mounting (and remounting) of filesystems; this article covered an early version of that API. Since then, work in this area has mostly focused on enabling filesystems to support it fully. James Bottomley took a look at the API as part of redesigning his shiftfs filesystem and found it to be incomplete. What has followed is a significant set of changes that promise to simplify the mount API — though it turns out that "simple" is often in the eye of the beholder.

    The mount API work replaces the existing, complex mount() system call with a half-dozen or so new system calls. An application calls fsopen() to open a filesystem stored somewhere, or fspick() to open an already-mounted filesystem. Calls to fsconfig() set various parameters related to the mount; fsmount() then mounts the filesystem within the kernel, and move_mount() attaches the result somewhere in the filesystem hierarchy. A couple more calls fill in other parts of the interface. The intent is for this set of system calls to replace mount() entirely with something more flexible, capable, and maintainable. (The basic call sequence is sketched after this list.)

    Back in November, Bottomley discovered one significant gap in the new API: it cannot be used to set up a read-only bind mount. The problem is that bind mounts are special; they do not represent a filesystem directly. Instead, they can be thought of as a view of a filesystem that is mounted elsewhere. There is no superblock associated with a bind mount, which turns out to be a problem for the new API, since fsconfig() is designed to operate on superblocks. An attempt to call fsconfig() on a bind mount will end up modifying the original mount, which is almost certainly not what the caller had in mind. So there is no way to set the read-only flag for a bind mount.

    David Howells, the creator of the new mount API, responded that what is needed is yet another system call, mount_setattr(), which would change attributes of mounts. That would work for the read-only case, Bottomley said, but it falls down in more complex situations, such as his proposed UID-shifting bind mount. Instead, he said, the file-descriptor-based configuration mechanism provided by fsconfig() is well suited to the job, but it needs to be made more widely applicable. He suggested making the interface more generic so that it could be used in both situations (and beyond).

  • Accelerating netfilter with hardware offload, part 1

    Supporting network protocols at high speeds in pure software is getting increasingly difficult, with 25-100Gb/s interfaces available now and 200-400Gb/s starting to show up. At 100Gb/s, packet processing must happen in 200 CPU cycles or less, which does not leave much room for processing at the operating-system level. Fortunately, some operations can be performed by hardware, including checksum verification and offloading parts of the packet send and receive paths. As modern hardware adds more functionality, new options are becoming available. The 5.3 kernel includes a patch set from Pablo Neira Ayuso that added support for offloading some packet filtering with netfilter. This patch set not only adds the offload support, but also refactors the existing offload paths in the generic code and the network-card drivers. More work came in the following kernel releases, so this seems like a good moment to review the recent advancements in offloading in the network stack. (An illustrative offload-flagged ruleset appears after this list.)

  • Linux Kernel Developments Since 5.0: Features and Developments of Note

    Last year, I covered features in Linux kernel 5.0 that we thought were worth highlighting. Unbreakable Enterprise Kernel 6 is based on stable kernel 5.4 and was recently made available as a developer preview, so now is as good a time as any to review developments that have occurred since 5.0. The features below are listed roughly in chronological order; the order carries no other significance.

    BPF spinlock patches: BPF (Berkeley Packet Filter) spinlock patches give BPF programs increased control over concurrency (a sketch appears after this list). Learn more about BPF and how to use it in this seven-part series by Oracle developer Alan Maguire.

    Btrfs ZSTD compression: The Btrfs filesystem now supports multiple ZSTD (Zstandard) compression levels. See this commit for information about the feature and the performance characteristics of the various levels.

    Memory compaction improvements: Memory compaction has been reworked, resulting in significant improvements in compaction success rates and in the CPU time required. In benchmarks that try to allocate transparent huge pages in deliberately fragmented virtual memory, the number of pages scanned for migration was reduced by 65%, and the number of pages scanned by the free scanner by 97.5%.
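
Below is a minimal sketch of the pidfd_getfd() flow described in the first item above. It is an assumption-laden illustration, not code from the patch set: it presumes a kernel providing both pidfd_open() (Linux 5.3) and the proposed pidfd_getfd() call, libc headers that define the corresponding SYS_ numbers, and ptrace-level privilege over the target process.

    /* Sketch: obtain a pidfd for the target process, then pull a
     * duplicate of one of its file descriptors into this process.
     * Assumes SYS_pidfd_open and SYS_pidfd_getfd are defined by the
     * installed headers; PID and fd number are illustrative inputs. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <fd>\n", argv[0]);
            return 1;
        }
        pid_t pid = atoi(argv[1]);      /* target process */
        int targetfd = atoi(argv[2]);   /* descriptor to grab */

        int pidfd = syscall(SYS_pidfd_open, pid, 0);
        if (pidfd < 0) {
            perror("pidfd_open");
            return 1;
        }

        /* The returned descriptor refers to the same open file
         * description as the target's, so an operation like bind()
         * performed here is visible to the target as well. */
        int fd = syscall(SYS_pidfd_getfd, pidfd, targetfd, 0);
        if (fd < 0) {
            perror("pidfd_getfd");
            return 1;
        }
        printf("local fd %d now shares target's fd %d\n", fd, targetfd);
        return 0;
    }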
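
The configfd() discussion rests on the new mount API's call sequence, sketched here for an ordinary (non-bind) mount. The tmpfs filesystem, the "size" option, and the /mnt path are illustrative choices; the sketch assumes kernel and libc headers new enough to provide the SYS_ numbers and the constants in <linux/mount.h>, and it must run as root.

    /* Sketch of the new mount API: fsopen() creates a filesystem
     * context, fsconfig() sets options and creates the superblock,
     * fsmount() yields a detached mount object, and move_mount()
     * attaches it to the hierarchy. */
    #define _GNU_SOURCE
    #include <fcntl.h>          /* AT_FDCWD */
    #include <linux/mount.h>    /* FSCONFIG_*, MOVE_MOUNT_* */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        int fsfd = syscall(SYS_fsopen, "tmpfs", 0);
        if (fsfd < 0) { perror("fsopen"); return 1; }

        /* Roughly equivalent to mount -o size=16m. */
        if (syscall(SYS_fsconfig, fsfd, FSCONFIG_SET_STRING,
                    "size", "16m", 0) < 0 ||
            syscall(SYS_fsconfig, fsfd, FSCONFIG_CMD_CREATE,
                    NULL, NULL, 0) < 0) {
            perror("fsconfig");
            return 1;
        }

        int mfd = syscall(SYS_fsmount, fsfd, 0, 0);
        if (mfd < 0) { perror("fsmount"); return 1; }

        /* Attach the detached mount at /mnt. */
        if (syscall(SYS_move_mount, mfd, "", AT_FDCWD, "/mnt",
                    MOVE_MOUNT_F_EMPTY_PATH) < 0) {
            perror("move_mount");
            return 1;
        }
        return 0;
    }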
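
The netfilter offload work is driven from ordinary nftables rules: marking a base chain with the offload flag asks the kernel to push its rules down to the NIC. The ruleset below is a hedged example; the device name and address range are placeholders, the NIC driver must actually support the offload (rule insertion fails otherwise), and the device's TC offload capability generally has to be enabled first, e.g. with ethtool.

    # Illustrative nftables ruleset using the "flags offload" chain
    # flag added in 5.3. eth0 and the address range are placeholders.
    table netdev filter {
        chain ingress {
            type filter hook ingress device eth0 priority 0; flags offload;
            ip saddr 192.0.2.0/24 drop
        }
    }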
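
Finally, a sketch of the BPF spinlock feature from the Oracle item: the bpf_spin_lock() and bpf_spin_unlock() helpers serialize updates to a map value that embeds a struct bpf_spin_lock. The map layout, section name, and counting logic here are illustrative assumptions, not taken from the article or from Alan Maguire's series; it builds with clang -target bpf against libbpf.

    /* Sketch: concurrent invocations of this program update a shared
     * map value without racing, by taking the embedded spinlock. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct counter {
        struct bpf_spin_lock lock;
        long packets;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, struct counter);
    } counters SEC(".maps");

    SEC("tc")
    int count_packets(struct __sk_buff *skb)
    {
        __u32 key = 0;
        struct counter *c = bpf_map_lookup_elem(&counters, &key);
        if (!c)
            return 0;

        /* The verifier enforces that the lock is held only around a
         * short, helper-free critical section like this one. */
        bpf_spin_lock(&c->lock);
        c->packets++;
        bpf_spin_unlock(&c->lock);
        return 0;
    }

    char _license[] SEC("license") = "GPL";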

Lakka 2.3.2 with RetroArch 1.8.4

The Lakka team wishes everyone a happy new year and welcomes 2020 with a new update and a new tier-based release system! This new Lakka update, 2.3.2, contains RetroArch 1.8.4 (was 1.7.2), some new cores, and a handful of core updates. Read more

It is time to end the DMCA anti-circumvention exemptions process and put a stop to DRM

Although it is accurate, one aspect of the process is missing from that description: the length. While the process kicks off every three years, the respite between rounds of fighting for exemptions, whether previously granted or newly requested, is much shorter. As you can see from the timeline of events from the 2018 round of the exemptions process, the process stretches on for months and months. For each exemption we have to prepare research, documents, and comments through wave after wave of submission periods.

For the 2018 exemptions round, the first announcements from the United States Copyright Office came in July of 2017, for a process that concluded in October of 2018. Fifteen months, every three years. If you do the math, that means we spend about 40% of the time fighting just to ensure that exemptions we already won continue, and that new exemptions will be granted. If the timeline from the last round holds up, then we're only a few short months away from starting this whole circus back up again.

"Circus" is an apt label for this whole process. It is not meant to be an effective mechanism for protecting the rights of users: it is a method for eating up the time and resources of those who are fighting for justice. If we don't step up, users could lose the ability to control their own computing and software. It's like pushing a rock up a mile-long hill only to have it pushed back down again when we've barely had a chance to catch our breath.

Read more