today's leftovers

  • Fedora 34 Looking To Tweak Default zRAM Configuration

    Last year, with Fedora 33, zRAM was switched on by default. The rationale was that using a compressed zRAM drive for swap space leads to better performance and, in turn, a better user experience. Some spins of Fedora had been using swap-on-zRAM by default for many releases, and since F33 it has been used for all spins. Now with Fedora 34 the configuration is being further refined.

    With Fedora 33, the zRAM configuration was limited to a 0.5 fraction of RAM or 4GB, whichever is smaller, while for Fedora 34 the zram-fraction will be 1.0 and the maximum zRAM size set to 8GiB.

    [...]

    This in particular should help the Fedora 34 experience for systems with minimal amounts of RAM.
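
    For reference, on systems using the zram-generator package these settings live in /etc/systemd/zram-generator.conf. Below is a minimal sketch of a Fedora 34-style configuration; it assumes the zram-fraction and max-zram-size keys with the size given in megabytes, so check the zram-generator documentation for the exact syntax on your release:

        [zram0]
        # Size the zram device at 1.0 x RAM, capped at 8GiB (value in MiB).
        zram-fraction = 1.0
        max-zram-size = 8192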

  • A Zoological guide to kernel data structures

    Recently I was working on a BPF feature which aimed to provide a mechanism to display any kernel data structure for debugging purposes. As part of that effort, I wondered what the limits are. How big is the biggest kernel data structure? What's the typical kernel data structure size?

    [...]

    A lot of the articles we read about the Linux kernel talk about size, but in the context of numbers of lines of code, files, commits and so on. These are interesting metrics, but here we're going to focus on data structures, and we're going to use two fantastic tools in our investigation...

    [...]

    Zooming out again, what's interesting about the pattern of structure size frequency is that it seems to reflect the inherent cost of large data structures: they pay a tax in terms of memory utilization, so we see many small data structures, and the falloff as we approach larger sizes is considerable.

    This pattern is observed elsewhere, bringing us back to the zoological title of this post. If we look at the frequency of animal species grouped by their size, we see a similar pattern of exponential decay as we move from smaller to larger species sizes. For more info see https://en.wikipedia.org/wiki/Body_size_and_species_richness. If metabolic cost is a factor in determining this pattern in nature, we can observe a similar "metabolic cost" in memory utilization for larger data structures in the Linux kernel also. A related observation - that smaller species (such as insects) exist in much larger numbers than larger species in nature - would be interesting to investigate for the Linux kernel, but that would require observing data structure utilization in running systems, which is a job for another day!
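
    The excerpt does not name the tools it uses, but one common way to survey structure sizes is pahole from the dwarves package. Here is a minimal sketch of that kind of survey, assuming "pahole --sizes vmlinux" prints one structure per line with the name and size in the first two tab-separated fields:

        # Rough sketch: bucket kernel structure sizes reported by pahole into
        # power-of-two bins to see how frequency falls off with size.
        # Assumes "pahole --sizes <vmlinux>" prints one structure per line with
        # the name and size in the first two tab-separated fields.
        import subprocess
        import sys
        from collections import Counter

        vmlinux = sys.argv[1] if len(sys.argv) > 1 else "vmlinux"
        output = subprocess.run(["pahole", "--sizes", vmlinux],
                                capture_output=True, text=True, check=True).stdout

        bins = Counter()
        for line in output.splitlines():
            fields = line.split("\t")
            if len(fields) < 2 or not fields[1].isdigit():
                continue
            size = int(fields[1])
            bucket = 1
            while bucket < size:
                bucket *= 2
            bins[bucket] += 1

        for bucket in sorted(bins):
            print(f"<= {bucket:6d} bytes: {bins[bucket]} structs")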

  • Destination Linux 208: Mythbusting Linux Misconceptions

    This week on Destination Linux, we’re going to bust some myths as we talk about some Linux Misconceptions. Then we’re going to review some information on openSUSE and the interesting facts revealed in its most recent community poll. We’ve also got our famous tips, tricks and software picks. All of this and so much more this week on Destination Linux.

  • Software Is Eating Every Layer Of The Datacenter

    Software may be eating the world, as Marc Andreessen correctly asserted nearly a decade ago, but some parts of the world are crunchier than others and take some time for the hardware to be smashed open and for software to flow in and out of it.

    We have been watching with great interest since around 2008 or so as merchant silicon came to switching and routing and as control of hardware was broken free from control of software, much as the X86 platform emerged as a common computing substrate a decade earlier. Initial attempts at creating portable and compatible operating systems for switching and routing had their issues, but a second wave of network operating systems is emerging and, we think, will eventually become the way that networking is done in the datacenter, breaking the hegemony of proprietary operating systems just as happened in compute in the past. A decade of open systems Unix platforms on dozens of chip architectures really just helped create the conditions that allowed Linux on X86 to become the dominant platform in the datacenter. And, ironically, now that Linux dominates, different hardware, including various kinds of accelerators as well as new CPUs, can be slipped easily in and out of compute in the datacenter without huge disruption to Linux.

    The same thing is starting to happen with network operating systems, including the SONiC/SAI effort championed by Microsoft, the ArcOS platform from Arrcus, and whatever Nvidia ultimately cooks up through the combination of Mellanox Technologies and Cumulus Networks. Cisco Systems is now a supplier of merchant silicon with its Silicon One router chips, which debuted in December 2019 and which were augmented with switch chips last October. Every switch ASIC vendor has created some form of programmability for its packet processing engines, with P4 as advanced by Barefoot Networks (now part of Intel) being the darling but by no means the only way to achieve programmability, and now we see the industry rallying behind the concept of the Data Processing Unit, or DPU, which among other things manages network and storage virtualization and increasingly runs the compute hypervisor, offloading these functions from the CPUs in host systems.

  • The Fridge: Ubuntu Weekly Newsletter Issue 665

    Welcome to the Ubuntu Weekly Newsletter, Issue 665 for the week of January 3 – 9, 2021.

  • The 2020 CC Global Summit Keynotes Are Here!

    In addition to the 170+ sessions hosted at last year’s virtual event, we hosted three keynotes that helped us think through how to connect the events of 2020 with our work—and find a path forward in hope and optimism. We’re excited to share these recordings of the keynotes with you today!

  • Wikipedia’s future lies in poorer countries

    The number of people actively editing Wikipedia articles in English, its most-used language, peaked in 2007 at 53,000, before starting a decade-long decline. That trend spawned fears that the site would atrophy into irrelevance. Fortunately for Wikipedia’s millions of readers, the bleeding has stopped: since 2015 there have been around 32,000 active English-language editors. This stabilising trend is similar for other languages of European origin.

    Meanwhile, as more people in poorer countries gain [Internet] access, Wikipedia is becoming a truly global resource. The encyclopedia’s sub-sites are organised by language, not by nationality. However, you can estimate the typical wealth of speakers of each language by averaging the GDP per head of the countries they live in, weighted by the number of speakers in each country. (For Portuguese, this would be 80% Brazil, 5% Portugal and 15% other countries; for Icelandic, it is almost entirely Iceland.)
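
    As a concrete sketch of that weighting, here is how the estimate could be computed; the country shares follow the Portuguese example above, while the GDP-per-head figures are purely illustrative placeholders rather than data from the article:

        # Weighted average GDP per head for speakers of a language:
        # sum over countries of (share of speakers) x (GDP per head).
        # The shares follow the Portuguese example above; the dollar figures
        # are illustrative placeholders, not data from the article.
        portuguese = {
            "Brazil":   {"share": 0.80, "gdp_per_head": 9_000},
            "Portugal": {"share": 0.05, "gdp_per_head": 24_000},
            "Other":    {"share": 0.15, "gdp_per_head": 12_000},
        }

        estimate = sum(c["share"] * c["gdp_per_head"] for c in portuguese.values())
        print(f"Estimated GDP per head of Portuguese speakers: ${estimate:,.0f}")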

More in Tux Machines

digiKam 7.7.0 is released

After three months of active maintenance and another bug triage, the digiKam team is proud to present version 7.7.0 of its open source digital photo manager. See below for the list of the most important features coming with this release. Read more

Dilution and Misuse of the "Linux" Brand

Samsung, Red Hat to Work on Linux Drivers for Future Tech

The metaverse is expected to uproot system design as we know it, and Samsung is one of many hardware vendors re-imagining data center infrastructure in preparation for a parallel 3D world. Samsung is working on new memory technologies that provide faster bandwidth inside hardware for data to travel between CPUs, storage and other computing resources. The company also announced it was partnering with Red Hat to ensure these technologies have Linux compatibility. Read more

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install Go on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library, and it helps you build reliable, fast, and efficient software that scales. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. In this guide, we are going to learn how to install golang 1.19beta on Ubuntu 22.04. Go 1.19 is still in beta and not yet released as stable, and there is still a lot of work in progress, including the documentation.

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project to help you test your ansible roles. I’m using molecule for automatically testing the ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn’t know, MongoDB is a high-performance, highly scalable document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, MongoDB structures data in a JSON-like format inside records referred to as documents. The open-source nature of MongoDB makes it an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9. You can follow the same instructions for CentOS and Rocky Linux.
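
    To illustrate the document model described above, here is a minimal sketch using the pymongo driver; it assumes a MongoDB server already listening on the default localhost:27017, and the database and collection names are placeholders:

        # Minimal sketch: store and retrieve a JSON-like document with pymongo.
        # Assumes a MongoDB server on localhost:27017; the database and
        # collection names below are placeholders.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["exampledb"]        # databases and collections are
        articles = db["articles"]       # created lazily on first insert

        # A document is a JSON-like dictionary rather than a row in a table.
        result = articles.insert_one({
            "title": "Install MongoDB on AlmaLinux 9",
            "tags": ["mongodb", "almalinux", "nosql"],
            "published": True,
        })

        print(articles.find_one({"_id": result.inserted_id}))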

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. Started as an alternative to WordPress and it went on to become an alternative to Substack by focusing on membership and newsletter. The creators of Ghost offer managed Pro hosting but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of Ngnix reverse proxy, you can also use another software called Traefik with Docker. It is a popular open-source cloud-native application proxy, API Gateway, Edge-router, and more. I use Traefik to secure my websites using an SSL certificate obtained from Let's Encrypt. Once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.