
Linux Foundation and Kernel Development

  • Containers Microconference Accepted into 2018 Linux Plumbers Conference

    The Containers Micro-conference at Linux Plumbers is the yearly gathering of container runtime developers, kernel developers and container users. It is the one opportunity to have everyone in the same room to both look back at the past year in the container space and discuss the year ahead.

    In the past, topics such as use of cgroups by containers, system call filtering and interception (Seccomp), improvements/additions of kernel namespaces, interaction with the Linux Security Modules (AppArmor, SELinux, SMACK), TPM based validation (IMA), mount propagation and mount API changes, uevent isolation, unprivileged filesystem mounts and more have been discussed in this micro-conference.

  • LF Deep Learning Foundation Advances Open Source Artificial Intelligence With Major Membership Growth

    The LF Deep Learning Foundation, an umbrella organization of The Linux Foundation that supports and sustains open source innovation in artificial intelligence, machine learning, and deep learning, today announced five new members: Ciena, DiDi, Intel, Orange and Red Hat. The support of these new members will provide additional resources to the community to develop and expand open source AI, ML and DL projects, such as the Acumos AI Project, the foundation's comprehensive platform for AI model discovery, development and sharing.

  • A quick history of early-boot memory allocators

    One might think that memory allocation during system startup should not be difficult: almost all of memory is free, there is no concurrency, and there are no background tasks that will compete for memory. Even so, boot-time memory management is a tricky task. Physical memory is not necessarily contiguous, its extents change from system to system, and detecting those extents may not be trivial. With NUMA, things are even more complex because, in order to satisfy allocation locality, the exact memory topology must be determined. To cope with this, sophisticated mechanisms for memory management are required even during the earliest stages of the boot process.

    One could ask: "so why not use the same allocator that Linux uses normally from the very beginning?" The problem is that the primary Linux page allocator is a complex beast and it, too, needs to allocate memory to initialize itself. Moreover, the page-allocator data structures should be allocated in a NUMA-aware way. So another solution is required to get to the point where the memory-management subsystem can become fully operational.

    In the early days, Linux didn't have an early memory allocator; in the 1.0 kernel, memory initialization was not as robust and versatile as it is today. Every subsystem initialization call, or simply any function called from start_kernel(), had access to the starting address of the single block of free memory via the global memory_start variable. If a function needed to allocate memory it just increased memory_start by the desired amount. By the time v2.0 was released, Linux had been ported to five more architectures, but boot-time memory management remained as simple as in v1.0, with the only difference being that the extents of the physical memory were detected by the architecture-specific code. It should be noted, though, that hardware in those days was much simpler and memory configurations could be detected more easily. (A minimal sketch of this bump-style allocation appears after this list.)

  • Teaching the OOM killer about control groups

    The kernel's out-of-memory (OOM) killer is summoned when the system runs short of free memory and is unable to proceed without killing one or more processes. As might be expected, the policy decisions around which processes should be targeted have engendered controversy for as long as the OOM killer has existed. The 4.19 development cycle is likely to include a new OOM-killer implementation that targets control groups rather than individual processes, but it turns out that there is significant disagreement over how the OOM killer and control groups should interact.

    To simplify a bit: when the OOM killer is invoked, it tries to pick the process whose demise will free the most memory while causing the least misery for users of the system. The heuristics used to make this selection have varied considerably over time — it was once remarked that each developer who changes the heuristics makes them work for their use case while ruining things for everybody else. In current kernels, the heuristics implemented in oom_badness() are relatively simple: sum up the amount of memory used by a process, then scale it by the process's oom_score_adj value. That value, found in the process's /proc directory, can be tweaked by system administrators to make specific processes more or less attractive as an OOM-killer target. (A short illustration of adjusting this value appears after this list.)

    No OOM-killer implementation is perfect, and this one is no exception. One problem is that it does not pay attention to how much memory a particular user has allocated; it only looks at specific processes. If user A has a single large process while user B has 100 smaller ones, the OOM killer will invariably target A's process, even if B is using far more memory overall. That behavior is tolerable on a single-user system, but it is less than optimal on a large system running containers on behalf of multiple users.
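The memory_start scheme described in the early-boot allocator item above is essentially a bump allocator: hand out the current start of free memory and advance the pointer. Below is a minimal C sketch of that idea, under the assumption of a single contiguous block of free memory; boot_alloc, memory_start and memory_end are illustrative names for this sketch, not the actual 1.0 kernel code.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative globals: architecture-specific setup code would fill
     * these in with the extents of the single block of free memory. */
    static uintptr_t memory_start;  /* first free byte */
    static uintptr_t memory_end;    /* end of the free block */

    /* Bump-style boot allocation: align the current free pointer, check
     * that the request fits, and advance the pointer past it.
     * align must be a power of two. */
    static void *boot_alloc(size_t size, size_t align)
    {
        uintptr_t addr = (memory_start + align - 1) & ~(align - 1);

        if (addr + size > memory_end)
            return NULL;            /* out of boot memory */

        memory_start = addr + size;
        return (void *)addr;
    }

Nothing allocated this way is ever freed; once the real page allocator is initialized, whatever remains between memory_start and memory_end is handed over to it, which is part of why such a simple scheme is only workable during early boot.

The oom_score_adj value mentioned in the OOM-killer item lives in /proc/&lt;pid&gt;/oom_score_adj and ranges from -1000 (never kill) to +1000 (kill first). As a small illustration, the hypothetical helper below writes such a value for a given process from C; raising the score of your own process needs no special privileges, while lowering it requires CAP_SYS_RESOURCE.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write an oom_score_adj value (-1000..1000) for the given pid.
     * Higher values make the process a more attractive OOM-killer target. */
    static int set_oom_score_adj(pid_t pid, int adj)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)pid);
        f = fopen(path, "w");
        if (!f)
            return -1;

        fprintf(f, "%d", adj);
        return fclose(f);
    }

    int main(void)
    {
        /* Example: make the current process a more willing OOM victim. */
        return set_oom_score_adj(getpid(), 500) == 0 ? 0 : 1;
    }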
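Note that these two snippets are sketches for illustration only; the kernel's own boot allocators and the OOM killer's accounting are considerably more involved than either example suggests.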

More in Tux Machines

The 5 Best Linux Distros for Laptops

Maybe you’ve just purchased a brand new laptop. Or maybe you have an older laptop sitting in your closet that you’d like to bring back to life. Either way, the best Linux distros for laptops are those that offer good driver support and make the most of the hardware found in most laptops. People buy laptops for a specific purpose. That may be software development, creating graphic content, gaming, or office work. The Linux distros below are well suited to run on any laptop. Read more

Graphics: Freedreno Gallium3D and NVIDIA

  • Freedreno Gallium3D Lands MSAA Support For Qualcomm Adreno 600 Series
    While Qualcomm was busy hosting its Tech Summit this week in Hawaii, independent open-source developers were pressing ahead with their reverse-engineered Qualcomm Adreno 3D graphics driver support. Rob Clark of Red Hat and Kristian Kristensen of Google landed their latest Freedreno Gallium3D driver improvements into Mesa 19.0. The most notable addition was multi-sample anti-aliasing (MSAA) support for the Adreno 600 series hardware. There is also now EXT_multisampled_render_to_texture support exposed by this Gallium3D driver. Besides that work, there were also fixes and other changes.
  • NVIDIA Tegra X2 & Xavier Get HDMI Audio With Linux 4.21
    While it's not as exciting as full 3D open-source driver support would be, the upcoming Linux 4.21 kernel brings some mainline Tegra improvements, including HDMI audio support for the X2 and Xavier SoCs. Thierry Reding of NVIDIA sent in the Tegra DRM driver updates this week for the upcoming Linux 4.21 cycle. He commented, "These changes contain a couple of minor fixes for host1x and the Falcon library in Tegra DRM. There are also a couple of missing pieces that finally enable support for host1x, VIC and display on Tegra194. I've also added a patch that enables audio over HDMI using the SOR which has been tested, and works, on both Tegra186 and Tegra194."

Powers of two, powers of Linux: 2048 at the command line

Hello and welcome to today's installment of the Linux command-line toys advent calendar. Every day, we look at a different toy for your terminal: it could be a game or any simple diversion that helps you have fun. Maybe you have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone. Today's toy is a command-line version of one of my all-time favorite casual games, 2048 (which itself is a clone of another clone). Read more

More Radeon RX 590 Ubuntu Benchmarks - See How Your Linux GPU Performance Compares

Published on Friday were my Radeon RX 590 Linux benchmarks, now that the kinks in the support for this latest Polaris refresh have been worked out (at least in patch form). Here are some complementary data points from some of the OpenGL tests outside of the Steam games, for those curious about RX 590 performance in other workloads or wanting to see how their own GPU performance compares to these results. The Radeon RX 590 continues running well with the patched Linux 4.20 kernel build (hopefully the last patch needed for the RX 590 will make it into 4.20 mainline soon), and in user space this system was running Mesa 19.0 from the Padoka PPA on Ubuntu 18.04 LTS. Read more