today's leftovers
-
Fedora 34 Looking To Tweak Default zRAM Configuration
Last year, with Fedora 33, zRAM was switched on by default. The rationale was that using a compressed zRAM drive for swap space leads to better performance and, in turn, a better user experience. Some spins of Fedora have used swap-on-zRAM by default for many releases, and since F33 it has been used for all spins. Now with Fedora 34 the configuration is being further refined.
With Fedora 33, the zRAM configuration was limited to a 0.5 fraction of RAM or 4GB, whichever is smaller, while for Fedora 34 the zram-fraction will be 1.0 and the maximum zRAM size set to 8GiB.
[...]
This in particular should help the Fedora 34 experience for systems with minimal amounts of RAM.
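In configuration-file terms, the change described above might look something like the following sketch of a zram-generator config. The key names assume the zram-generator project's format (sizes in MiB); the exact file Fedora ships may differ:

```ini
# /etc/systemd/zram-generator.conf (illustrative)
[zram0]
# Fedora 33-era defaults: half of RAM, capped at 4 GiB
# zram-fraction = 0.5
# max-zram-size = 4096

# Fedora 34 proposal: zRAM sized equal to RAM, capped at 8 GiB
zram-fraction = 1.0
max-zram-size = 8192
```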
-
A Zoological guide to kernel data structures
Recently I was working on a BPF feature which aimed to provide a mechanism to display any kernel data structure for debugging purposes. As part of that effort, I wondered what the limits are. How big is the biggest kernel data structure? What's the typical kernel data structure size?
[...]
A lot of the articles we read about the Linux kernel talk about size, but in the context of numbers of lines of code, files, commits and so on. These are interesting metrics, but here we're going to focus on data structures, and we're going to use two fantastic tools in our investigation...
[...]
Zooming out again, what's interesting about the pattern of structure size frequency is that it seems to reflect the inherent cost of large data structures; they pay a tax in terms of memory utilization, so we see many small data structures, and the falloff as we approach larger sizes is considerable.
This pattern is observed elsewhere, bringing us back to the zoological title of this post. If we look at the frequency of animal species grouped by their size, we see a similar pattern of exponential decay as we move from smaller to larger species sizes. For more info see https://en.wikipedia.org/wiki/Body_size_and_species_richness. If metabolic cost is a factor in determining this pattern in nature, we can observe a similar "metabolic cost" in memory utilization for larger data structures in the Linux kernel also. A related observation - that smaller species (such as insects) exist in much larger numbers than larger species in nature - would be interesting to investigate for the Linux kernel, but that would require observing data structure utilization in running systems, which is a job for another day!
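The tools the author used aren't named in this excerpt, but one way to reproduce this kind of size survey is to feed the output of pahole's `--sizes` mode (tab-separated struct name, size in bytes, and hole count) into a short script. A sketch, assuming pahole and a kernel image with debug info are available:

```python
from collections import Counter

def size_histogram(pahole_lines, bucket=64):
    """Bucket struct sizes from `pahole --sizes` output
    (tab-separated fields: name, size in bytes, alignment holes)."""
    hist = Counter()
    for line in pahole_lines:
        parts = line.split("\t")
        if len(parts) < 2:
            continue
        try:
            size = int(parts[1])
        except ValueError:
            continue  # skip headers or malformed lines
        hist[(size // bucket) * bucket] += 1
    return hist

# Example with a few made-up entries; on a real system you would run
# something like: pahole --sizes vmlinux
sample = ["task_struct\t9792\t5", "inode\t632\t2",
          "dentry\t192\t0", "page\t64\t0"]
for start, count in sorted(size_histogram(sample).items()):
    print(f"{start:>6}-{start + 63:<6} bytes: {count} structs")
```

Plotting the bucket counts from a full kernel build is one way to see the exponential falloff the author describes.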
-
Destination Linux 208: Mythbusting Linux Misconceptions
This week on Destination Linux, we’re going to bust some myths as we talk about some Linux misconceptions. Then we’re going to review some information on openSUSE and the interesting facts revealed in its most recent community poll. We’ve also got our famous tips, tricks and software picks. All of this and so much more this week on Destination Linux.
-
Software Is Eating Every Layer Of The Datacenter
Software may be eating the world, as Marc Andreessen correctly asserted nearly a decade ago, but some parts of the world are crunchier than others and take some time for the hardware to be smashed open and for software to flow in and out of it.
We have been watching with great interest since around 2008 or so as merchant silicon came to switching and routing and how control of hardware was broken free from control of software, much as the X86 platform emerged as a common computing substrate a decade earlier. Initial attempts at creating portable and compatible operating systems for switching and routing had their issues, but a second wave of network operating systems is emerging and, we think, will eventually become the way that networking is done in the datacenter, breaking the hegemony of proprietary operating systems as happened in compute in the past. A decade of open systems Unix platforms on dozens of chip architectures really just helped create the conditions that allowed Linux on X86 to become the dominant platform in the datacenter. And, ironically, now that Linux dominates, different hardware, including various kinds of accelerators as well as new CPUs, can be slipped easily in and out of compute in the datacenter without huge disruption to Linux.
The same thing is starting to happen with network operating systems, including the SONiC/SAI effort championed by Microsoft, the ArcOS platform from Arrcus, and whatever Nvidia ultimately cooks up through the combination of Mellanox Technologies and Cumulus Networks. Cisco Systems is now a supplier of merchant silicon with its Silicon One router chips, which debuted in December 2019 and which were augmented with switch chips last October. Every switch ASIC vendor has created some form of programmability for its packet processing engines, with P4 as advanced by Barefoot Networks (now part of Intel) being the darling but by no means the only way to achieve programmability, and now we see the industry rallying behind the concept of the Data Processing Unit, or DPU, which among other things manages network and storage virtualization and increasingly runs the compute hypervisor, offloading these functions from the CPUs in host systems.
-
The Fridge: Ubuntu Weekly Newsletter Issue 665
Welcome to the Ubuntu Weekly Newsletter, Issue 665 for the week of January 3 – 9, 2021.
-
The 2020 CC Global Summit Keynotes Are Here!
In addition to the 170+ sessions hosted at last year’s virtual event, we hosted three keynotes that helped us think through how to connect the events of 2020 with our work—and find a path forward in hope and optimism. We’re excited to share these recordings of the keynotes with you today!
-
Wikipedia’s future lies in poorer countries
The number of people actively editing Wikipedia articles in English, its most-used language, peaked in 2007 at 53,000, before starting a decade-long decline. That trend spawned fears that the site would atrophy into irrelevance. Fortunately for Wikipedia’s millions of readers, the bleeding has stopped: since 2015 there have been around 32,000 active English-language editors. This stabilising trend is similar for other languages of European origin.
Meanwhile, as more people in poorer countries gain [Internet] access, Wikipedia is becoming a truly global resource. The encyclopedia’s sub-sites are organised by language, not by nationality. However, you can estimate the typical wealth of speakers of each language by averaging the GDP per head of the countries they live in, weighted by the number of speakers in each country. (For Portuguese, this would be 80% Brazil, 5% Portugal and 15% other countries; for Icelandic, it is almost entirely Iceland.)
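The weighting scheme described above is just a speaker-weighted mean of per-country GDP per head. A minimal sketch: the 80/5/15 split for Portuguese comes from the article, while the dollar amounts are placeholders invented for illustration.

```python
def weighted_gdp_per_head(countries):
    """Speaker-weighted average GDP per head.
    `countries` maps country -> (share of speakers, GDP per head)."""
    total_share = sum(share for share, _ in countries.values())
    return sum(share * gdp for share, gdp in countries.values()) / total_share

# Portuguese, per the article's 80/5/15 split; GDP figures are placeholders.
portuguese = {
    "Brazil":   (0.80, 9_000),
    "Portugal": (0.05, 24_000),
    "other":    (0.15, 6_000),
}
print(f"${weighted_gdp_per_head(portuguese):,.0f} per head")  # → $9,300 per head
```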