
Linux Foundation and Kernel Development

  • Containers Microconference Accepted into 2018 Linux Plumbers Conference

    The Containers Microconference at Linux Plumbers is the yearly gathering of container runtime developers, kernel developers and container users. It is the one opportunity to have everyone in the same room to both look back at the past year in the container space and discuss the year ahead.

    In the past, topics such as the use of cgroups by containers, system call filtering and interception (seccomp), improvements/additions of kernel namespaces, interaction with the Linux Security Modules (AppArmor, SELinux, SMACK), TPM-based validation (IMA), mount propagation and mount API changes, uevent isolation, unprivileged filesystem mounts and more have been discussed in this microconference.

  • LF Deep Learning Foundation Advances Open Source Artificial Intelligence With Major Membership Growth

    The LF Deep Learning Foundation, an umbrella organization of The Linux Foundation that supports and sustains open source innovation in artificial intelligence, machine learning, and deep learning, today announced five new members: Ciena, DiDi, Intel, Orange and Red Hat. The support of these new members will provide additional resources to the community to develop and expand open source AI, ML and DL projects, such as the Acumos AI Project, the foundation's comprehensive platform for AI model discovery, development and sharing.

  • A quick history of early-boot memory allocators

    One might think that memory allocation during system startup should not be difficult: almost all of memory is free, there is no concurrency, and there are no background tasks that will compete for memory. Even so, boot-time memory management is a tricky task. Physical memory is not necessarily contiguous, its extents change from system to system, and the detection of those extents may not be trivial. With NUMA things are even more complex because, in order to satisfy allocation locality, the exact memory topology must be determined. To cope with this, sophisticated mechanisms for memory management are required even during the earliest stages of the boot process.

    One could ask: "so why not use the same allocator that Linux uses normally from the very beginning?" The problem is that the primary Linux page allocator is a complex beast and it, too, needs to allocate memory to initialize itself. Moreover, the page-allocator data structures should be allocated in a NUMA-aware way. So another solution is required to get to the point where the memory-management subsystem can become fully operational.

    In the early days, Linux didn't have an early memory allocator; in the 1.0 kernel, memory initialization was not as robust and versatile as it is today. Every subsystem initialization call, or simply any function called from start_kernel(), had access to the starting address of the single block of free memory via the global memory_start variable. If a function needed to allocate memory, it just increased memory_start by the desired amount. By the time v2.0 was released, Linux had already been ported to five more architectures, but boot-time memory management remained as simple as in v1.0, with the only difference being that the extents of physical memory were detected by architecture-specific code. It should be noted, though, that hardware in those days was much simpler and memory configurations could be detected more easily. (A minimal sketch of this bump-allocation scheme appears after this list.)

  • Teaching the OOM killer about control groups

    The kernel's out-of-memory (OOM) killer is summoned when the system runs short of free memory and is unable to proceed without killing one or more processes. As might be expected, the policy decisions around which processes should be targeted have engendered controversy for as long as the OOM killer has existed. The 4.19 development cycle is likely to include a new OOM-killer implementation that targets control groups rather than individual processes, but it turns out that there is significant disagreement over how the OOM killer and control groups should interact.

    To simplify a bit: when the OOM killer is invoked, it tries to pick the process whose demise will free the most memory while causing the least misery for users of the system. The heuristics used to make this selection have varied considerably over time — it was once remarked that each developer who changes the heuristics makes them work for their use case while ruining things for everybody else. In current kernels, the heuristics implemented in oom_badness() are relatively simple: sum up the amount of memory used by a process, then scale it by the process's oom_score_adj value. That value, found in the process's /proc directory, can be tweaked by system administrators to make specific processes more or less attractive as an OOM-killer target. (A simplified sketch of this heuristic also appears after this list.)

    No OOM-killer implementation is perfect, and this one is no exception. One problem is that it does not pay attention to how much memory a particular user has allocated; it only looks at specific processes. If user A has a single large process while user B has 100 smaller ones, the OOM killer will invariably target A's process, even if B is using far more memory overall. That behavior is tolerable on a single-user system, but it is less than optimal on a large system running containers on behalf of multiple users. (The last sketch after this list shows how group-aware selection would change this outcome.)
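To make the memory_start scheme from the early-boot allocator item concrete, here is a minimal sketch of such a bump allocator in C. It is illustrative only: the variable names, the alignment parameter and the bounds check are assumptions of this sketch, and the real v1.0-era code differed in detail.

    #include <stddef.h>
    #include <stdint.h>

    static uintptr_t memory_start;  /* first free byte, set by arch-specific code */
    static uintptr_t memory_end;    /* end of the single block of free memory */

    /* Allocate by simply bumping memory_start forward; nothing is ever freed.
     * align must be a power of two. */
    static void *early_alloc(size_t size, size_t align)
    {
        uintptr_t addr = (memory_start + align - 1) & ~(uintptr_t)(align - 1);

        if (addr + size > memory_end)
            return NULL;            /* out of early memory */

        memory_start = addr + size;
        return (void *)addr;
    }

Whatever remains free once the primary page allocator is initialized is then handed over to it, which is, roughly, how the early allocator gets out of the way once the memory-management subsystem becomes fully operational.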
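The oom_badness() heuristic described in the OOM-killer item can likewise be sketched in a few lines. This is a simplified illustration, not the kernel's actual implementation, which handles additional cases (unkillable tasks, memory-control-group constraints and so on); the structure and field names are assumptions of the sketch.

    #define OOM_SCORE_ADJ_MIN (-1000)

    struct task_usage {
        long rss_pages;        /* resident memory */
        long swap_pages;       /* swapped-out memory */
        long pagetable_pages;  /* page-table overhead */
        int  oom_score_adj;    /* from /proc/<pid>/oom_score_adj */
    };

    /* A higher score makes a task a more attractive OOM-killer target. */
    static long badness(const struct task_usage *t, long totalpages)
    {
        long points;

        if (t->oom_score_adj == OOM_SCORE_ADJ_MIN)
            return 0;          /* task was marked unkillable */

        /* Sum the task's memory footprint... */
        points = t->rss_pages + t->swap_pages + t->pagetable_pages;

        /* ...then bias it by oom_score_adj, scaled to total memory. */
        points += (long)t->oom_score_adj * totalpages / 1000;

        return points > 0 ? points : 0;
    }

Writing -1000 to a process's oom_score_adj file thus exempts it from the OOM killer entirely, while +1000 makes it the preferred victim almost regardless of its actual usage.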
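Finally, to see why targeting control groups changes the picture, consider a hypothetical group-aware selection pass; the names here are invented for the example, and the real 4.19 work operates on the kernel's cgroup structures rather than a flat array.

    struct group_usage {
        const char *name;
        long total_pages;      /* summed usage of all member processes */
    };

    /* Pick the group using the most memory overall rather than the single
     * biggest process. Assumes n >= 1. */
    static const struct group_usage *
    pick_victim_group(const struct group_usage *groups, int n)
    {
        const struct group_usage *victim = &groups[0];

        for (int i = 1; i < n; i++)
            if (groups[i].total_pages > victim->total_pages)
                victim = &groups[i];

        return victim;
    }

In the user A/user B scenario above, per-process selection keeps killing A's single large process, whereas group-level accounting makes B's group, with the larger aggregate usage, the target.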

More in Tux Machines

Server: HTTP Clients, IIS DoS and 'DevOps' Hype From Red Hat

  • What are good command line HTTP clients?
    "The whole is greater than the sum of its parts" is a famous quote from Aristotle, the Greek philosopher and scientist. It is particularly pertinent to Linux: in my view, one of Linux's biggest strengths is its synergy. The usefulness of Linux doesn't derive only from its huge raft of open source command-line utilities, but from the synergy generated by using them together, sometimes in conjunction with larger applications.

    The Unix philosophy spawned a "software tools" movement focused on developing concise, basic, clear, modular and extensible code that can be reused in other projects. This philosophy remains an important element of many Linux projects. Good open source developers writing utilities seek to make sure each utility does its job as well as possible and works well with other utilities. The goal is that users have a handful of tools, each of which seeks to excel at one thing. Some utilities work well independently.

    This article looks at four open source command-line HTTP clients. These clients let you download files over the internet from the command line, but they can also be used for many more interesting purposes, such as testing, debugging and interacting with HTTP servers and web applications. Working with HTTP from the command line is a worthwhile skill for HTTP architects and API designers. If you need to play around with an API, HTTPie and curl will be invaluable.
  • Microsoft publishes security alert on IIS bug that causes 100% CPU usage spikes
    The Microsoft Security Response Center yesterday published a security advisory about a denial-of-service (DoS) issue impacting IIS (Internet Information Services), Microsoft's web server technology.
  • 5 things to master to be a DevOps engineer
    There's an increasing global demand for DevOps professionals: IT pros who are skilled in software development and operations. In fact, the Linux Foundation's Open Source Jobs Report ranked DevOps as the most in-demand skill, and DevOps career opportunities are thriving worldwide.

    The main focus of DevOps is bridging the gap between development and operations teams by reducing painful handoffs and increasing collaboration. This is not accomplished by making developers work on operations tasks or by making system administrators work on development tasks. Instead, both of these roles are replaced by a single role, DevOps, that works on tasks within a cooperative team. As Dave Zwieback wrote in DevOps Hiring, "organizations that have embraced DevOps need people who would naturally resist organization silos."

Purism's Privacy and Security-Focused Librem 5 Linux Phone to Arrive in Q3 2019

Initially planned to ship in early 2019, the revolutionary Librem 5 mobile phone was first delayed to April 2019, and it has now suffered one more delay due to the CPU choices the development team had to make to deliver a stable and reliable device that won't overheat or discharge too quickly. Purism had to choose between the i.MX8M Quad and i.MX8M Mini processors for its Librem 5 Linux-powered smartphone, and after much trial and error it decided to go with the i.MX8M Quad CPU, as manufacturer NXP recently released a new software stack that solves the previous power-consumption and heating issues.

Qt Creator 4.9 Beta released

We are happy to announce the release of Qt Creator 4.9 Beta! There are many improvements and fixes included in Qt Creator 4.9; I'll mention just some highlights in this blog post. Please refer to our change log for a more thorough overview.

Hack Week - Browsersync integration for Online

Recently my LibreOffice work has been mostly focused on LibreOffice Online. It's nice to see how it is growing, with new features and a better UI. But when I was working on improving the toolbars (e.g. folding the menubar or reorganizing items), I noticed one annoying thing from the developer perspective: after every small change, I had to restart the server to provide updated content for the browser. It takes a few seconds to switch windows and kill the old server, and then running a new one requires some tests to pass.

Last week, during the Hack Week funded by Collabora Productivity, I was able to work on my own projects. It was a good opportunity to try to improve the process mentioned above. I had heard about Browsersync before, so I decided to try it out. It is a tool which can automatically reload the .css and .js files in use in all browser sessions when it detects a change. To make this work, Browsersync can start a proxy server that watches the files on the original server and sends events to the browser clients when needed.