Linux Foundation and Kernel Development

  • Containers Microconference Accepted into 2018 Linux Plumbers Conference

    The Containers Microconference at Linux Plumbers is the yearly gathering of container runtime developers, kernel developers, and container users. It is the one opportunity to have everyone in the same room, both to look back at the past year in the container space and to discuss the year ahead.

    In the past, topics such as the use of cgroups by containers, system call filtering and interception (seccomp), improvements and additions to kernel namespaces, interaction with the Linux Security Modules (AppArmor, SELinux, SMACK), TPM-based validation (IMA), mount propagation and mount API changes, uevent isolation, unprivileged filesystem mounts, and more have been discussed in this microconference.
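
    The seccomp item above is the most code-adjacent of these topics. As a rough, hedged illustration (not drawn from the article), here is a minimal user-space sketch in C of the kind of BPF syscall filter a container runtime installs; it assumes an x86-64 Linux system and blocks a single system call, mkdir(2), with EPERM:

      #include <errno.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <linux/filter.h>
      #include <linux/seccomp.h>
      #include <sys/prctl.h>
      #include <sys/stat.h>
      #include <sys/syscall.h>

      int main(void)
      {
          struct sock_filter filter[] = {
              /* load the syscall number from the seccomp data */
              BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                       offsetof(struct seccomp_data, nr)),
              /* if it is mkdir, fall through to the EPERM action ... */
              BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mkdir, 0, 1),
              BPF_STMT(BPF_RET | BPF_K,
                       SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
              /* ... otherwise allow the call through */
              BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = {
              .len = sizeof(filter) / sizeof(filter[0]),
              .filter = filter,
          };

          /* required before an unprivileged process may install a filter */
          prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
          prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

          if (mkdir("/tmp/seccomp-blocked", 0700) == -1)
              perror("mkdir");  /* prints: Operation not permitted */
          return 0;
      }

    A real filter would also check seccomp_data.arch before trusting the syscall number, and container runtimes typically allow-list calls rather than blocking a single one.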

  • LF Deep Learning Foundation Advances Open Source Artificial Intelligence With Major Membership Growth

    The LF Deep Learning Foundation, an umbrella organization of The Linux Foundation that supports and sustains open source innovation in artificial intelligence, machine learning, and deep learning, today announced five new members: Ciena, DiDi, Intel, Orange and Red Hat. The support of these new members will provide additional resources to the community to develop and expand open source AI, ML and DL projects, such as the Acumos AI Project, the foundation's comprehensive platform for AI model discovery, development and sharing.

  • A quick history of early-boot memory allocators

    One might think that memory allocation during system startup should not be difficult: almost all of memory is free, there is no concurrency, and there are no background tasks that will compete for memory. Even so, boot-time memory management is a tricky task. Physical memory is not necessarily contiguous, its extents change from system to system, and the detection of those extents may not be trivial. With NUMA things are even more complex because, in order to satisfy allocation locality, the exact memory topology must be determined. To cope with this, sophisticated mechanisms for memory management are required even during the earliest stages of the boot process.

    One could ask: "so why not use the same allocator that Linux uses normally from the very beginning?" The problem is that the primary Linux page allocator is a complex beast and it, too, needs to allocate memory to initialize itself. Moreover, the page-allocator data structures should be allocated in a NUMA-aware way. So another solution is required to get to the point where the memory-management subsystem can become fully operational.

    In the early days, Linux didn't have an early memory allocator; in the 1.0 kernel, memory initialization was not as robust and versatile as it is today. Every subsystem initialization call, or simply any function called from start_kernel(), had access to the starting address of the single block of free memory via the global memory_start variable. If a function needed to allocate memory it just increased memory_start by the desired amount. By the time v2.0 was released, Linux was already ported to five more architectures, but boot-time memory management remained as simple as in v1.0, with the only difference being that the extents of the physical memory were detected by the architecture-specific code. It should be noted, though, that hardware in those days was much simpler and memory configurations could be detected more easily.
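
    As a rough illustration (the helper name and memory_end below are assumptions, not code from the article), the v1.0-era scheme described above amounts to a bump allocator; a minimal sketch in C:

      /* Set up by architecture-specific code early in boot. */
      unsigned long memory_start;  /* first free byte */
      unsigned long memory_end;    /* end of the single free block */

      /* Hypothetical helper: carve size bytes off the free block by
       * bumping memory_start forward. Nothing is ever freed. */
      static void *early_alloc(unsigned long size)
      {
          void *p;

          /* keep every allocation word-aligned */
          memory_start = (memory_start + sizeof(long) - 1) &
                         ~(sizeof(long) - 1UL);
          if (memory_start + size > memory_end)
              return NULL;  /* out of boot-time memory */

          p = (void *)memory_start;
          memory_start += size;  /* the entire "allocator" is this bump */
          return p;
      }

    In the early kernels this bump was typically open-coded at each call site rather than wrapped in a helper like this one.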

  • Teaching the OOM killer about control groups

    The kernel's out-of-memory (OOM) killer is summoned when the system runs short of free memory and is unable to proceed without killing one or more processes. As might be expected, the policy decisions around which processes should be targeted have engendered controversy for as long as the OOM killer has existed. The 4.19 development cycle is likely to include a new OOM-killer implementation that targets control groups rather than individual processes, but it turns out that there is significant disagreement over how the OOM killer and control groups should interact.

    To simplify a bit: when the OOM killer is invoked, it tries to pick the process whose demise will free the most memory while causing the least misery for users of the system. The heuristics used to make this selection have varied considerably over time — it was once remarked that each developer who changes the heuristics makes them work for their use case while ruining things for everybody else. In current kernels, the heuristics implemented in oom_badness() are relatively simple: sum up the amount of memory used by a process, then scale it by the process's oom_score_adj value. That value, found in the process's /proc directory, can be tweaked by system administrators to make specific processes more or less attractive as an OOM-killer target.

    No OOM-killer implementation is perfect, and this one is no exception. One problem is that it does not pay attention to how much memory a particular user has allocated; it only looks at specific processes. If user A has a single large process while user B has 100 smaller ones, the OOM killer will invariably target A's process, even if B is using far more memory overall. That behavior is tolerable on a single-user system, but it is less than optimal on a large system running containers on behalf of multiple users.
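
    To make the current heuristic concrete, here is a hedged sketch in C of the scoring logic described above; the struct and field names are hypothetical stand-ins, not the kernel's actual oom_badness() code:

      struct task_stats {        /* stand-in for the relevant task state */
          long rss_pages;        /* resident set size, in pages */
          long swap_pages;       /* swap entries in use */
          long pgtable_pages;    /* page-table overhead, in pages */
          int  oom_score_adj;    /* /proc/<pid>/oom_score_adj, -1000..1000 */
      };

      /* Higher score means a more attractive OOM-kill target. */
      static long badness(const struct task_stats *t, unsigned long totalpages)
      {
          long points;

          if (t->oom_score_adj == -1000)
              return 0;  /* administrator marked this task unkillable */

          /* the memory footprint of the process */
          points = t->rss_pages + t->swap_pages + t->pgtable_pages;

          /* oom_score_adj shifts the score by up to +/- totalpages */
          points += t->oom_score_adj * (long)(totalpages / 1000);

          return points > 0 ? points : 1;
      }

    The victim is simply the process with the highest score, which is why 100 small processes can each score below one large process even when their combined footprint is far larger.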

More in Tux Machines

OSS Leftovers

  • How an affordable open source eye tracker is helping thousands communicate
    In 2015, while sitting in a meeting at his full-time job, Julius Sweetland posted to Reddit about a project he had quietly been working on for years: an application that would help people with motor neurone disease communicate using just their eyes. He forgot about the post for a couple of hours before friends messaged him to say he'd made the front page. Now, three years on, Optikey, the open source eye-tracking communication tool, is being used by thousands of people, largely through word-of-mouth recommendations. Sweetland was speaking at GitHub Universe at the Palace of Fine Arts in San Francisco, and he took some time to speak with Techworld about the project. [...] Originally, Sweetland's exposure to open source had largely been through the consumption of tools such as the GIMP. "I knew of the concept, I didn't really know how the nuts and bolts worked, I was always a little blasé about how do you make money from something like that... but flipping it around again, I'm still coming from the point of view that there's no money in my product, so I still don't understand how people make money in open source...
  • Fission open source serverless framework gets updated
    Platform9 just released updates to Fission.io, the open source, Kubernetes-native serverless framework, with new features enabling developers and IT operations to improve the quality and reliability of serverless applications. The new features include Automated Canary Deployments to reduce the risk of failed releases, Prometheus integration for automated monitoring and alerts, and fine-grained cost and performance optimization capabilities. With this latest release, Fission offers the most complete set of features to allow Dev and Ops teams to safely adopt serverless and benefit from the speed, cost savings, and scalability of this cloud-native development pattern in any environment, whether in the public cloud or on-premises.
  • Alphabet’s DeepMind open-sources key building blocks from its AI projects
  • United States: It's Ten O'Clock: Do You Know Where Your Software Developers Are? [Ed: Smith Gambrell & Russell LLP are liars. In the article, Dana Hustins says the FSF "purport[s] to convert others' proprietary software into open source software". They paint the GPL as a conspiracy of some kind to entrap proprietary software developers.]
  • Transatomic Power To Open Source IP Regarding Advanced Molten Salt Reactors [Ed: There's no such thing as "IP", Duane Morris LLP. There are copyrights, trademarks, patents, etc., and Transatomic basically made its code free.]
  • Code Review--an Excerpt from VM Brasseur's New Book Forge Your Future with Open Source
    Even new programmers can provide a lot of value with their code reviews. You don't have to be a Rockstar Ninja 10x Unicorn Diva programmer with years and years of experience to have valuable insights. In fact, you don't even have to be a programmer at all. You just have to be knowledgeable enough to spot patterns. While you won't be able to do a complete review without programming knowledge, you may still spot things that could use some work or clarification. If you're not a Rockstar Ninja 10x Unicorn Diva programmer, not only is your code review feedback still valuable, but you can also learn a great deal in the process: code layout, programming style, domain knowledge, best practices, neat little programming tricks you'd not have seen otherwise, and sometimes antipatterns (or "how not to do things"). So don't let the fact that you're unfamiliar with the code, the project, or the language hold you back from reviewing code contributions. Give it a go and see what there is to learn and discover.

Security Leftovers

Android Leftovers

Ubuntu 18.10 (Cosmic Cuttlefish) Is Now Available to Download

After six months in development, Ubuntu 18.10 (Cosmic Cuttlefish) is finally here, and you can download the ISO images right now for all official flavors, including Kubuntu, Xubuntu, Lubuntu, Ubuntu MATE, Ubuntu Budgie, Ubuntu Kylin, and Ubuntu Studio; images are built for the 64-bit architecture, with 32-bit images offered only for Lubuntu and Xubuntu. The Ubuntu Server edition is also out, and it's supported on more hardware architectures than Ubuntu Desktop, including 64-bit (amd64), ARM64 (AArch64), IBM System z (s390x), PPC64el (PowerPC 64-bit little-endian), and Raspberry Pi 2 (ARMhf). A live Ubuntu Server flavor is also available, but only for 64-bit computers.

Read more

Also: Ubuntu Linux 18.10 arrives