LWN on Security and Kernel (Paywall Has Ended)

Filed under: Linux, Security
  • A container-confinement breakout

    The recently announced container-confinement breakout for containers started with runc is interesting from a few different perspectives. For one thing, it affects more than just runc-based containers: privileged LXC-based containers (and likely others) are also vulnerable, though the LXC variety is harder to compromise than the runc one. It also, once again, shows that privileged containers are difficult—perhaps impossible—to create in a secure manner. Beyond that, it exploits some Linux kernel interfaces in novel ways, and the fixes use a perhaps lesser-known system call that was added to Linux less than five years ago (sketched at the end of this item).

    The runc tool implements the container runtime specification of the Open Container Initiative (OCI), so it is used by a number of different containerization solutions and orchestration systems, including Docker, Podman, Kubernetes, CRI-O, and containerd. The flaw, which uses the /proc/self/exe pseudo-file to gain control of the host operating system (and thus anything else running on the host, including other containers), has been assigned CVE-2019-5736. It is a massive hole for containers that run with access to the host root user ID (i.e. UID 0), which, sadly, covers most of the containers being run today.

    There are a number of sources of information on the flaw, starting with the announcement from runc maintainer Aleksa Sarai linked above. The discoverers, Adam Iwaniuk and Borys Popławski, put out a blog post about how they found the hole, including some false steps along the way. In addition, Christian Brauner, one of the LXC maintainers who worked with Sarai on the runc fix, described the problems with privileged containers and how CVE-2019-5736 applies to LXC containers. There is a proof of concept (PoC) attached to Sarai's announcement, along with a more detailed PoC that he posted the day after the discoverers' blog post appeared.
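
    The lesser-known system call in question is memfd_create(), which was added in Linux 3.17 in 2014. The code below is not the actual runc fix, just a minimal sketch of the core idea: the binary copies itself into a sealed, anonymous memory file and re-executes from it, so that an attacker who later opens /proc/[pid]/exe reaches only an immutable copy, never the host binary. Error handling is abbreviated.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <unistd.h>

        static int reexec_from_sealed_copy(char **argv, char **envp)
        {
            int exe = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
            /* memfd_create() (Linux 3.17) makes an anonymous, file-like
             * memory region that supports sealing. */
            int memfd = memfd_create("cloned-binary",
                                     MFD_CLOEXEC | MFD_ALLOW_SEALING);
            struct stat st;

            if (exe < 0 || memfd < 0 || fstat(exe, &st) < 0)
                return -1;
            if (sendfile(memfd, exe, NULL, st.st_size) != st.st_size)
                return -1;
            close(exe);

            /* Seal the copy: no further writes, resizes, or new seals. */
            if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SEAL | F_SEAL_SHRINK |
                                          F_SEAL_GROW | F_SEAL_WRITE) < 0)
                return -1;

            /* A real implementation must first check that it is not
             * already running from the sealed copy, or this would
             * re-exec forever. */
            fexecve(memfd, argv, envp);
            return -1;  /* only reached if fexecve() failed */
        }

    Because the seals forbid any modification, even a root-owned container process that obtains the descriptor cannot overwrite the running binary.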

  • The Thunderclap vulnerabilities

    It should come as no surprise that plugging untrusted devices into a computer system can lead to a wide variety of bad outcomes—though often enough it works just fine. We have reported on a number of these kinds of vulnerabilities (e.g. BadUSB in 2014) along the way. So it will not shock readers to find out that another vulnerability of this type has been discovered, though it may not sit well that, even after years of vulnerable plug-in buses, there are still no solid protections against these rogue devices. The most recent entrant in this space targets the Thunderbolt interface; the vulnerabilities found have been dubbed "Thunderclap".

    There are several different versions of Thunderbolt, either using Mini DisplayPort connectors (Thunderbolt 1 and 2) or USB Type-C (Thunderbolt 3). According to the long list of researchers behind Thunderclap, all of those are vulnerable to the problems they found. Beyond that, PCI Express (PCIe) peripherals are also able to exploit the Thunderclap vulnerabilities, though they are a bit less prone to hotplugging. Thunderclap is the subject of a paper [PDF] and web site. It is more than just a bunch of vulnerabilities, however: the researchers have also developed and released a hardware and software research platform. A high-level summary of the Thunderclap paper was posted to the Light Blue Touchpaper blog by Theo Markettos, one of the researchers, at the end of February.

  • Core scheduling

    Kernel developers are used to having to defend their work when posting it to the mailing lists, so when a longtime kernel developer describes their own work as "expensive and nasty", one tends to wonder what is going on. The patch set in question is core scheduling from Peter Zijlstra. It is intended to make simultaneous multithreading (SMT) usable on systems where cache-based side channels are a concern, but even its author is far from convinced that it should actually become part of the kernel.

    SMT increases performance by turning one physical CPU into two virtual CPUs that share the hardware; while one is waiting for data from memory, the other can be executing. Sharing a processor this closely has led to security issues and concerns for years, and many security-conscious users disable SMT entirely. The disclosure of the L1 terminal fault vulnerability in 2018 did not improve the situation; for many, SMT simply isn't worth the risks it brings with it.

    But performance matters too, so there is interest in finding ways to make SMT safe (or safer, at least) to use in environments with users who do not trust each other. The coscheduling patch set posted last September was one attempt to solve this problem, but it did not get far and has not been reposted. One obstacle to this patch set was almost certainly its complexity; it operated at every level of the scheduling domain hierarchy, and thus addressed more than just the SMT problem.

    Zijlstra's patch set is focused on scheduling at the core level only, meaning that it is intended to address SMT concerns but not to control higher-level groups of physical processors as a unit. Conceptually, it is simple enough. On kernels where core scheduling is enabled, a core_cookie field is added to the task structure; it is an unsigned long value. These cookies are used to define the trust boundaries; two processes with the same cookie value trust each other and can be allowed to run simultaneously on the same core.
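
    A minimal sketch of that idea, in ordinary C rather than the actual kernel code from the patch set (the real struct task_struct carries far more state):

        #include <stdbool.h>

        /* Simplified stand-in for the scheduler's task structure. */
        struct task {
            unsigned long core_cookie;  /* tasks sharing a value trust each other */
            /* ... */
        };

        /* Two tasks may occupy sibling SMT threads of the same core only
         * when their cookies match; otherwise one sibling must sit idle
         * or run something from the same trust group. */
        static bool may_share_core(const struct task *a, const struct task *b)
        {
            return a->core_cookie == b->core_cookie;
        }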

  • A kernel unit-testing framework

    For much of its history, the kernel has had little in the way of formal testing infrastructure. It is not entirely an exaggeration to say that testing is what the kernel community kept users around for. Over the years, though, that situation has improved; internal features like kselftest and services like the 0day testing system have increased our test coverage considerably. The story is unlikely to end there, though; the next addition to the kernel's testing arsenal may be a unit-testing framework called KUnit.

    The KUnit patches, currently in their fourth revision, have been developed by Brendan Higgins at Google. The intent is to enable the easy and rapid testing of kernel components in isolation — unit testing, in other words. That distinguishes KUnit from the kernel's kselftest framework in a couple of significant ways. Kselftest is intended to verify that a given feature works in a running kernel; the tests run in user space and exercise the kernel that the system booted. They can thus be thought of as a sort of end-to-end test, ensuring that specific parts of the entire system are behaving as expected. These tests are important to have, but they do not necessarily test specific kernel subsystems in isolation from all of the others, and they require actually booting the kernel to be tested.

    KUnit, instead, is designed to run more focused tests, and they run inside the kernel itself. To make this easy to do in any setting, the framework makes use of user-mode Linux (UML) to actually run the tests. That may come as a surprise to those who think of UML as a dusty relic from before the kernel had proper virtualization support (its home page is hosted on SourceForge and offers a bleeding-edge 2.6.24 kernel for download), but UML has been maintained over the years. It makes a good platform for something like KUnit, since tests can be run without rebooting the host system or setting up virtualization.
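
    Roughly, a KUnit test is a kernel-space function that receives a struct kunit context, with assertions expressed as macros; cases are grouped into a suite. The sketch below is based on the posted patches, so details may differ from the final API, and the function under test, my_add(), is invented for illustration:

        #include <kunit/test.h>

        /* A trivial test case; my_add() is a hypothetical function
         * under test. */
        static void my_add_test(struct kunit *test)
        {
            KUNIT_EXPECT_EQ(test, 3, my_add(1, 2));
            KUNIT_EXPECT_EQ(test, 0, my_add(-1, 1));
        }

        static struct kunit_case my_test_cases[] = {
            KUNIT_CASE(my_add_test),
            {}
        };

        static struct kunit_suite my_test_suite = {
            .name = "my-add-tests",
            .test_cases = my_test_cases,
        };
        kunit_test_suite(my_test_suite);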

  • Two topics in user-space access

    Kernel code must often access data that is stored in user space. Most of the time, this access is uneventful, but it has its dangers and must be done with due care. A couple of recent discussions have made it clear that this care is not always being taken, and that not all kernel developers fully understand how user-space access should be performed. The good news is that kernel developers are currently working on a set of changes to make user-space access safer in the future.
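
    The usual form that care takes is to go through the kernel's uaccess helpers rather than dereferencing a user-space pointer directly. A short sketch (struct my_args and fetch_args() are invented for illustration):

        #include <linux/errno.h>
        #include <linux/uaccess.h>

        struct my_args {
            int flags;
            unsigned long addr;
        };

        /* copy_from_user() validates the user pointer and handles any
         * fault during the copy; it returns the number of bytes that
         * could not be copied, so nonzero means failure. */
        static long fetch_args(struct my_args __user *uptr, struct my_args *out)
        {
            if (copy_from_user(out, uptr, sizeof(*out)))
                return -EFAULT;
            return 0;
        }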

More in Tux Machines

New features in OpenStack Neutron

OpenStack is the open source cloud infrastructure software project that provides compute, storage, and networking services for bare-metal, container, and VM workloads. To get a sense of the core functionality and additional services, check out the OpenStack map. The platform has a modular architecture that works across industry segments because infrastructure operators can choose the components they need to manage their infrastructure in the way that best supports their application workloads. The modules are also pluggable, providing further flexibility and ensuring they can work with a specific storage backend or software-defined networking (SDN) controller. Neutron is the OpenStack project that provides a de-facto standard REST API to manage and configure networking services and make them available to other components such as Nova.

today's leftovers

  • Full Circle Weekly News #125
  • Why Open19 Designs Matter for Edge Computing [Ed: Openwashing Microsoft without even any source code]
    On the opening day of this year's Data Center World in Phoenix, Yuval Bachar, LinkedIn's principal engineer of data center architecture, was on hand to explain why the social network's Open19 Project will be an important part of data centers' move to the edge.
  • Course Review: Applied Hardware Attacks: Rapid Prototyping & Hardware Implants
    Everyone learns in different ways. While Joe is happy to provide as much help as a student needs, his general approach probably caters most to those who learn by doing. Lecture is light and most of the learning happens during the lab segments. He gives enough space that you will make mistakes and fail, but not so badly that you never accomplish your objective. If you read the lab manual carefully, you will find adequate hints to get you in the right direction. On the other hand, if you’re a student who wants to sit in a classroom and listen to an instructor lecture for the entire time, you are definitely in the wrong place. If you do not work on the labs, you will get very, very little out of the course.

    The rapid prototyping course is a good introduction to using the 3D printer and PCB mill for hardware purposes, and would be valuable even for those building hardware instead of breaking it. It really opened my eyes to the possibilities of these technologies. On the other hand, I suspect that the hardware implants course has limited application. It’s useful to learn what is possible, but unless you work in secure hardware design or offensive security that would use hardware implants, it’s probably not something directly applicable to your day to day.
  • Nulloy – Music Player with Waveform Progress Bar
    I’ve written a lot about multimedia software, including a wide range of music players, some built with web technologies, others using popular widget toolkits like Qt and GTK. I want to look at another music player today. You may not have heard of this one, as development stalled for a few years. But it’s still under development, and it offers some interesting features. It’s called Nulloy. The software is written in the C++ programming language, with the user interface using the Qt widget toolkit. Its first release was back in 2011.
  • A Complete List of Google Drive Clients for Linux

Security Leftovers

SmartArt and Contributors to LibreOffice

  • SmartArt improvements in LibreOffice, part 4
    I recently dived into the SmartArt support of LibreOffice, which is the component responsible for displaying complex diagrams from PPTX. I focus on the case when only the document model and the layout constraints are given, not a pre-rendered result. First, thanks to our partner SUSE for working with Collabora to make this possible.
  • Things to know if you are a new contributor to LibreOffice code
    When I began contributing code to LibreOffice, I faced some issues because I didn't know several facts that the other active contributors knew. This blog post summarizes some of those facts, and I hope it will be useful for other new contributors!