Planet Debian - https://planet.debian.org/

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2019

Tuesday 20th of August 2019 09:38:28 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2 extra hours from June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).
Evolution of the situation

July was different from other months. First, some people have been on actual vacations, while 4 of the above 14 contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. Also, a new promotional video about Debian LTS, aimed at potential sponsors, was shown there for the first time.

DebConf19 was also a success with respect to on-boarding new contributors: we found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable, as Buster has been released and Jessie has thus become oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Dirk Eddelbuettel: RcppQuantuccia 0.0.3

Tuesday 20th of August 2019 12:45:00 AM

A maintenance release of RcppQuantuccia arrived on CRAN earlier today.

RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release was triggered by some work CRAN is doing on updating C++ standards for code in the repository. Notably, under C++11 some constructs such as ptr_fun, bind1st, bind2nd, … are now deprecated, and CRAN prefers the code base to not issue such warnings (as e.g. now seen under clang++-9). So we updated the corresponding code in a good dozen or so places to the (more current and compliant) code from QuantLib itself.

We also took this opportunity to significantly reduce the footprint of the sources and the installed shared library of RcppQuantuccia. One (unexported) feature was pricing models via Brownian Bridges based on quasi-random Sobol sequences. But the main source file for these sequences comes in at several megabytes in size, and allocates a large number of constants. So in this version the file is excluded, making the current build of RcppQuantuccia lighter in size and more suitable for the (simpler, popular and trusted) calendar functions. We also added a new holiday to the US calendar.

The complete list of changes follows.

Changes in version 0.0.3 (2019-08-19)
  • Updated Travis CI test file (#8).

  • Updated US holiday calendar data with G H Bush funeral date (#9).

  • Updated C++ use to not trigger warnings [CRAN request] (#9).

  • Comment-out pragmas to suppress warnings [CRAN Policy] (#9).

  • Change build to exclude Sobol sequence reducing file size for source and shared library, at the cost of excluding market models (#10).

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jaskaran Singh: GSoC Final Report

Tuesday 20th of August 2019 12:00:00 AM
Introduction:

The Debian Patch Porting System aims to systematize and partially automate the security patch porting process.

In this Google Summer of Code (2019), I wrote a webcrawler to extract security patches for a given security vulnerability identifier. This webcrawler or patch-finder serves as the first step of the Debian Patch Porting System.

The Patch-finder should recognize numerous vulnerability identifiers. These identifiers can be security advisories (DSA, GLSA, RHSA), vulnerability identifiers (OVAL, CVE), etc. So far, it can identify CVE, DSA (Debian Security Advisory), GLSA (Gentoo Linux Security Advisory) and RHSA (Red Hat Security Advisory).

Each vulnerability identifier has a list of entrypoint URLs associated with it. These URLs are used to initiate the patch finding.

Vulnerabilities that are not CVEs are generic vulnerabilities. If a generic vulnerability is given, its “aliases” (i.e. CVEs that are related to the generic vulnerability) are determined. This method was chosen because CVEs are quite possibly the most widely used vulnerability identifiers and thus would have the most patches associated with them. Once the aliases are determined, the entrypoint URLs of the aliases are crawled for the patch-finding.

The Patch-finder is based on the web crawling and scraping framework Scrapy.
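
To give a flavour of what such a spider looks like, here is a minimal, hypothetical sketch using Scrapy; the class name, the example entrypoint URL and the patch-link heuristics are purely illustrative and are not the actual patch-finder code:

    import scrapy


    class PatchSpiderSketch(scrapy.Spider):
        """Follow links from entrypoint URLs and yield anything that looks like a patch."""
        name = "patch-sketch"

        def __init__(self, vuln_id="CVE-2019-0000", start_urls=None, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.vuln_id = vuln_id
            # Entrypoint URLs associated with the vulnerability identifier.
            self.start_urls = start_urls or [
                f"https://security-tracker.debian.org/tracker/{vuln_id}",
            ]

        def parse(self, response):
            for href in response.css("a::attr(href)").getall():
                url = response.urljoin(href)
                if url.endswith((".patch", ".diff")) or "/commit/" in url:
                    # Looks like a patch link: record it.
                    yield {"vuln_id": self.vuln_id, "patch_url": url}
                else:
                    # Otherwise follow the link recursively; the real patch-finder
                    # restricts this to a certain area of interest, as noted below.
                    yield response.follow(url, callback=self.parse)

Such a sketch can be run with scrapy runspider, passing the vulnerability identifier as a spider argument (-a vuln_id=…).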

What was done:

During these three months, I have:

  • Used Scrapy to implement a spider to collect patch links.
  • Implemented a recursive patch-finding process. Any links that the patch-finder finds on a page (in a certain area of interest, of course) that are not patch links are followed.
  • Implemented a crawler to extract patches from Debian Packages.
  • Implemented a crawler to extract patches from a given GitHub repository.

Here’s a link to the patch-finder’s GitHub repository, which I have used for GSoC.

TODO:

There is a lot more stuff to be done, from solving small bugs to implementing major features. Some of these issues are on the project’s GitHub issue tracker here. Following is a summary of these issues and a few more ideas:

  • A way to uniquely identify patches. This is so that the same patches are not scraped and collected.
  • A Database, and a corresponding database API.
  • Store patches in the database, along with any other information.
  • Collect not only patches but other information relevant to the vulnerability.
  • Integrate the Github crawler/parser in the crawling process.
  • A way to check the relevancy of the patch to the vulnerability. A naive solution is, of course, to simply check for mention of the vulnerability ID in the patch description.
  • Efficient page filters. Certain links should not be crawled because it is obvious they will not yield any patches, for example homepages.
  • A better way to scrape links, rather than using a URL’s corresponding xpath.
  • A more efficient testing framework.
  • More crawlers/parsers.
Personal Notes:

Google Summer of Code has been a super comfortable and fun experience for me. I’ve learnt tonnes about Python, Open Source and Software Development. My mentors Luciano Bello and László Böszörményi have been immensely helpful and have guided me through these three months.

I plan to continue working on this project and hopefully develop it to a state where Debian and everyone who needs it can use it conveniently.

Jonathan Dowland: Shared notes and TODO lists

Monday 19th of August 2019 07:55:35 PM

When it comes to organising myself, I've long been anachronistic. I've relied upon paper notebooks for most of my life. In the last 15 years I've stuck to a particular type of diary/notebook hybrid, with a week-to-view on the left-hand side of pages and lined notebook pages on the right.

This worked well for me for my own personal stuff but obviously didn't work well for family things that need to be shared. Trying to find systems that work for both my wife and me has proven really challenging. The best we've come up with so far is a shared (IMAP) account and Apple's notes apps.

On iOS, Apple's low-frills note-taking app lets you synchronise your notes with a mail account (over IMAP). It stores them individually in HTML format, one email per note page, in a mailbox called "Notes". You can set up note syncing to the same account from multiple devices, and so we have a "family" mailbox set up on both my phone and my wife's. I can also get at the notes using any other mail client if I need to.
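
As a small demonstration of that last point, the following sketch reads the notes with Python's standard imaplib; the host name and credentials are placeholders, and it assumes the notes live in a mailbox literally named "Notes" as described above:

    import email
    import imaplib

    # Placeholders: substitute your own IMAP server and credentials.
    HOST, USER, PASSWORD = "imap.example.org", "family@example.org", "app-password"

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("Notes", readonly=True)   # the mailbox the Notes app syncs to
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            # One message per note; the subject is the note's first line,
            # the body is HTML.
            print(msg["Subject"])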

This works surprisingly well, but not perfectly. In particular synchronising changes to notes can go wrong if we both have the same note page open at the same time. The failure mode is not the worst: it duplicates the note into two; but it's still a problem.

Can anyone recommend a simple, more robust system for sharing notes — and task lists — between people? For task lists, it would be lovely (but not essential) if we could tick things off. At the moment we manage that just as free-form text.

Holger Levsen: 20190818-cccamp

Monday 19th of August 2019 06:49:01 PM
Home again

Two days ago I finally arrived home again and was greeted with this very nice view when entering the area:

(These images were taken yesterday from inside the venue.)

To give an idea of scale, the Pesthörnchen flag on top is 2m wide.

Since today, there's also a rainbow flag next to the Pesthörnchen one. I'm very much looking forward to the next days, though buildup is big fun already.

Antoine Beaupré: KNOB attack: Is my Bluetooth device insecure?

Monday 19th of August 2019 05:58:49 PM

A recent attack against Bluetooth, called KNOB, made waves last week. In essence, it allows an attacker to downgrade the security of a Bluetooth connection so much that it's possible for the attacker to break the encryption key and spy on all the traffic. The attack is so devastating that some have described it as the "stop using bluetooth" flaw.

This is my attempt at answering my own lingering questions about "can I still use Bluetooth now?" Disclaimer: I'm not an expert in Bluetooth at all, and just base this analysis on my own (limited) knowledge of the protocol, and some articles (including the paper) I read on the topic.

Is Bluetooth still safe?

It really depends what "safe" means, and what your threat model is. I liked how the Ars Technica article put it:

It's also important to note the hurdles—namely the cost of equipment and a surgical-precision MitM—that kept the researchers from actually carrying out their over-the-air attack in their own laboratory. Had the over-the-air technique been easy, they almost certainly would have done it.

In other words, the active attack is really hard to do, and the researchers didn't actually do one at all! It's a theoretical flaw, at this point, and while it's definitely possible, it's not what the researchers did:

The researchers didn't carry out the man-in-the-middle attack over the air. They did, however, root a Nexus 5 device to perform a firmware attack. Based on the response from the other device—a Motorola G3—the researchers said they believe that both attacks would work.

This led some researchers to (boldly) say they would still use a Bluetooth keyboard:

Dan Guido, a mobile security expert and the CEO of security firm Trail of Bits, said: "This is a bad bug, although it is hard to exploit in practice. It requires local proximity, perfect timing, and a clear signal. You need to fully MitM both peers to change the key size and exploit this bug. I'm going to apply the available patches and continue using my bluetooth keyboard."

So, what's safe and what's not, in my much humbler opinion?

Keyboards: bad

The attack is a real killer for Bluetooth keyboards. If an active attack is leveraged, it's game over: everything you type is visible to the attacker, and that includes, critically, passwords. In theory, one could even inject keyboard events into the channel, which allows basically arbitrary code execution on the host.

Some, however, made the argument that it's probably easier to implant a keylogger in the device than actually do that attack, but I disagree: this requires physical access, while the KNOB attack can be done remotely.

How far this can be done, by the way, is still open to debate. The Telegraph claimed "a mile" in a click-bait title, but I think such an attacker would need to be much closer for this to work, more in the range of "meters" than "kilometers". But it still means "a black van sitting outside your house" instead of "a dude breaking into your house", which is a significant difference.

Other input devices: hum

I'm not sure mice and other input devices are such a big deal, however. Extracting useful information from those mice moving around the screen is difficult without seeing what's behind that screen.

So unless you use an on-screen keyboard or have special input devices, I don't think those are such a big deal when spied upon.

They could be leveraged with other attacks to make you "click through" some things an attacker would otherwise not be able to do.

Speakers: okay

I think I'll still keep using my Bluetooth speakers. But that's because I don't have much confidential audio I listen to. I listen to music, movies, and silly cat videos; not confidential interviews with victims of repression that should absolutely have their identities protected. And if I ever come across such material, I now know that I should not trust that speaker.

Otherwise, what's an attacker going to do here: listen to my (ever decreasing) voicemail (which is transmitted in cleartext by email anyways)? Listen to that latest hit? Meh.

Do keep in mind that some speakers have microphones in them as well, so that's not the entire story...

Headsets and microphones: hum

Headsets and microphones are another beast, as they can listen to other things in your environment. I do feel much less comfortable using those devices now. What makes the entire thing really iffy is that some speakers do have microphones in them, and all of a sudden everything around you can listen in on your entire life.

(It seems like a given, with "smart home assistants" these days, but I still like to think my private conversations at home are private, in general. And I generally don't want to be near any of those "smart" devices, to be honest.)

One mitigating circumstance here is that the attack needs to happen during the connection (or pairing? still unclear) negotiation, which doesn't happen that often if everything works correctly. Unfortunately, this happens rather often with exactly those devices: speakers and headsets. That's because many of those devices stupidly have low limits on the number of devices they can pair with. For example, the Bose Soundlink II can only pair with 8 other devices. If you count three devices per person (laptop, workstation, phone), you quickly hit the limit when you move the device around. So I end up re-pairing that device quite often.

And that would be if the attack takes place during the pairing phase. As it turns out, the attack window is much wider: the attack happens during the connection stage (see Figure 1, page 1049 in the paper), after devices have paired. This actually happens way more often than just during pairing. Any time your speaker or laptop goes to sleep, it disconnects. Then to start using the device again, the BT layer will renegotiate that key size, and the attack can happen again.

(I have written the authors of the paper to clarify at which stage the attack happens and will update this post when/if they reply. Update: Daniele Antonioli has confirmed the attack takes place at connect phase.)

Fortunately, the Bose Soundlink II has no microphone, which I'm thankful for. But my Bluetooth headset does have a microphone, which makes me less comfortable.

File and contact transfers: bad

Bluetooth, finally, is also used to transfer stuff other than audio of course. It's clunky, weird and barely works, but it's possible to send files over Bluetooth, and some headsets and car controllers will ask your permission to list your contacts so that "smart" features like "OK Google, call dad please" will work.

This attack makes it possible for an attacker to steal your contacts, when connecting devices. It can also intercept file transfers and so on.

That's pretty bad, to say the least.

Unfortunately, the "connection phase" mitigation described above is less relevant here. It's less likely you'll be continuously connecting two phones (or your phone and laptop) together for the purpose of file transfers. What's more likely is you'll connect the devices for the explicit purpose of the file transfer, and therefore an attacker has a window for attack at every transfer.

I don't really use the "contacts" feature anyways (because it creeps me the hell out in the first place), so that's not a problem for me. But the file transfer problem will certainly give me pause the next time I ever need to feel the pain of transferring files over Bluetooth again, which I hope is "never".

It's interesting to note the parallel between this flaw, which will mostly affect Android file transfers, and the recent disclosure of flaws with Apple's Airdrop protocol, which was similarly believed to be secure, even though it was opaque and proprietary. Now, think a bit about how Airdrop uses Bluetooth to negotiate part of the protocol, and you can feel like I feel: that everything in security just somehow keeps crashing down and we don't seem to be able to make any progress at all.

Overall: meh

I've always been uncomfortable with Bluetooth devices: the pairing process has no sort of authentication whatsoever. The best you get is to enter a pin, and it's often "all zeros" or some trivially easy thing to bruteforce. So Bluetooth security has always felt like a scam, and I especially never trusted keyboards with passwords, in particular.

Like many branded attacks, I think this one might be somewhat overstated. Yes, it's a powerful attack, but Bluetooth implementations are already mostly proprietary junk that is undecipherable from the opensource world. There are no or very few open hardware implementations, so it's somewhat expected that we find things like this.

I also found the response from the Bluetooth SIG particularly alarming:

To remedy the vulnerability, the Bluetooth SIG has updated the Bluetooth Core Specification to recommend a minimum encryption key length of 7 octets for BR/EDR connections.

7 octets is 56 bits. That's the equivalent of DES, which was broken in 56 hours over 20 years ago. That's far from enough. But what's more disturbing is that this key size negotiation protocol might be there "because 'some' governments didn't want other governments to have stronger encryption", i.e. it would be a backdoor.
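
To put 56 bits in perspective, here is a back-of-the-envelope calculation; the guess rate is an assumption, not a measured figure:

    # How long does an exhaustive search of a 56-bit key take?
    keys = 2 ** 56                 # 7 octets = 56 bits of key material
    rate = 10 ** 9                 # assumption: one billion guesses per second
    days = keys / rate / 86400
    print(f"{keys:.2e} keys, about {days:.0f} days at {rate:.0e} guesses per second")
    # -> 7.21e+16 keys, about 834 days at 1e+09 guesses per second
    # Hardware 100 times faster than that assumption finishes in roughly 8 days,
    # consistent with DES having been brute-forced decades ago.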

The 7-byte lower bound might also be there because of Apple lobbying. Their AirPods were implemented as not-standards-compliant and already have that lower 7-byte bound, so by fixing the standard to match one Apple implementation, they would reduce the cost of their recall/replacements/upgrades.

Overall, this behavior of the standards body is what should make us suspicious of any Bluetooth device going forward, and question the motivations of the entire Bluetooth standardization process. We can't use 56-bit keys anymore, and I can't believe I need to explicitly say so, but it seems that's where we're at with Bluetooth these days.

Jonathan Dowland: NAS upgrade

Monday 19th of August 2019 03:13:20 PM

After 5 years of continuous service, the mainboard in my NAS recently failed (at the worst possible moment). I opted to replace the mainboard with a more modern version of the same idea: ASRock J4105-ITX featuring the Intel J4105, an integrated J-series Celeron CPU, designed to be passively cooled, and I've left the rest of the machine as it was.

In the process of researching which CPU/mainboard to buy, I was pointed at the Odroid-H2: a single-board computer (SBC) designed/marketed at a similar sector to things like the Raspberry Pi (but featuring the exact same CPU as the mainboard I eventually settled on). I've always felt that the case I'm using for my NAS is too large, but didn't want to spend much money on a smaller one. The Odroid-H2 has a number of cheap, custom-made cases for different use-cases, including one for NAS-style work with a very small footprint: the "Case 1". Unfortunately this case positions two disk drives flat, one vertically above the other, and both above the SBC. I was concerned that one drive would heat the other, and that both would cumulatively heat the SBC in that orientation. The case is designed with a fan, but I want to avoid requiring one. I have too many bad memories of trying to control the heat in my first NAS, the Thecus n2100, which (by default) oriented the drives in the same way (and for some reason it never occurred to me to rotate that device into the "toaster" orientation).

I've mildly revised my NAS page to reflect the change. Interestingly most of the niggles I was experiencing were all about the old mainboard, so I've moved them on a separate page (J1900N-D3V) in case they are useful to someone.

At some point in the future I hope to spend a little bit of time on the software side of things, as some of the features of my set up are no longer working as they should: I can't remote-decrypt the main disk via SSH on boot, and the first run of any backup fails due to some kind of race condition in the systemd unit dependencies. (The first attempt does not correctly mount the backup partition; the second attempt always succeeds).

Russ Allbery: Review: Spinning Silver

Monday 19th of August 2019 03:07:00 AM

Review: Spinning Silver, by Naomi Novik

Publisher: Del Rey
Copyright: 2018
ISBN: 0-399-18100-8
Format: Kindle
Pages: 465

Miryem is the daughter of the village moneylender and the granddaughter (via her mother) of a well-respected moneylender in the city. Her grandfather is good at his job. Her father is not. He's always willing to loan the money out, but collecting it is another matter, and the village knows that and takes advantage of it. Each year is harder than the one before, in part because they have less and less money and in part because the winter is getting harsher and colder. When Miryem's mother falls ill, that's the last straw: she takes her father's ledger and goes to collect the money her family is rightfully owed.

Rather to her surprise, she's good at the job in all the ways her father is not. Daring born of desperation turns into persistent, cold anger at the way her family had been taken advantage of. She's good with numbers, has an eye for investments, and is willing to be firm and harden her heart where her father was not. Her success leads to good food, a warmer home, and her mother's recovery. It also leads to the attention of the Staryk.

The Staryk are the elves of Novik's world. They claim everything white in the forest, travel their own mysterious ice road, and raid villages when they choose. And, one night, one of the Staryk comes to Miryem's house and leaves a small bag of Staryk silver coins, challenging her to turn them into the gold the Staryk value so highly.

This is just the start of Spinning Silver, and Miryem is only one of a broadening cast. She demands the service of Wanda and her younger brother as payment for their father's debt, to the delight (hidden from Miryem) of them both since this provides a way to escape their abusive father. The Staryk silver becomes jewelry with surprising magical powers, which Miryem sells to the local duke for his daughter. The duke's daughter, in turn, draws the attention of the czar, who she met as a child when she found him torturing squirrels. And Miryem finds herself caught up in the world of the Staryk, which works according to rules that she can barely understand and may be a trap that she cannot escape.

Novik makes a risky technical choice in this book and pulls it off beautifully: the entirety of Spinning Silver is written in first person with frequently shifting narrators that are not signaled outside of the text. I think there were five different narrators in total, and I may be forgetting some. Despite that, I was never confused for more than a paragraph about who was speaking due to Novik's command of the differing voices. Novik uses this to great effect to show the inner emotions and motivations of the characters without resorting to the distancing effect of wandering third-person.

That's important for this novel because these characters are not emotionally forthcoming. They can't be. Each of them is operating under sharp constraints that make too much emotion unsafe: Wanda and her brother are abused, the Duke's daughter is valuable primarily as a political pawn and later is juggling the frightening attention of the czar, and Miryem is carefully preserving an icy core of anger against her parents' ineffectual empathy and is trying to navigate the perilous and trap-filled world of the Staryk. The caution and occasional coldness of the characters does require the reader do some work to extrapolate emotions, but I thought the overall effect worked.

Miryem's family is, of course, Jewish. The nature of village interactions with moneylenders makes that obvious before the book explicitly states it. I thought Novik built some interesting contrasts between Miryem's navigation of the surrounding anti-Semitism and her navigation of the rules of the Staryk, which start off as far more alien than village life but become more systematic and comprehensible than the pervasive anti-Semitism as Miryem learns more. But I was particularly happy that Novik includes the good as well as the bad of Jewish culture among unforgiving neighbors: a powerful sense of family, household religious practices, Jewish weddings, and a cautious but very deep warmth that provides the emotional core for the last part of the book.

Novik also pulls off a rare feat in the plot structure by transforming most of the apparent villains into sympathetic characters and, unlike A Song of Ice and Fire, does this without making everyone awful. The Staryk, the duke, and even the czar are obvious villains on first appearances, but in each case the truth is more complicated and more interesting. The plot of Spinning Silver is satisfyingly complex and ever-changing, with just the right eventual payoffs for being a good (but cautious and smart!) person.

There were places where Spinning Silver got a bit bleak, such as when the story lingered a bit too long on Miryem trying and failing to navigate the Staryk world while getting herself in deeper and deeper, but her core of righteous anger and the protagonists' careful use of all the leverage that they have carried me through. The ending is entirely satisfying and well worth the journey. Recommended.

Rating: 8 out of 10

Markus Koschany: My Free Software Activities in July 2019

Sunday 18th of August 2019 10:19:01 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

DebConf 19 in Curitiba

I attended DebConf 19 in Curitiba, Brazil from 16.7.2019 to 28.7.2019. I gave two talks, about games in Debian and about the Long Term Support project, together with Hugo Lefeuvre, Chris Lamb and Holger Levsen. The Games talk in particular had some immediate positive impact: in response to it, Reiner Herrmann and Giovanni Mascellani provided patches for release-critical bugs related to GCC 9 and the Python 2 removal, and we could already fix some of the more important problems for our current release cycle.

I had a lot of fun in Brazil and again met a couple of new and interesting people.  Thanks to all who helped organizing DebConf 19 and made it the great event it was!

Debian Games
  • We are back in business which means packaging new upstream versions of popular games. I packaged new versions of atomix, dreamchess and pygame-sdl2,
  • uploaded minetest 5.0.1 to unstable and backported it later to buster-backports,
  • uploaded new versions of freeorion and warzone2100 to Buster,
  • fixed bug #931415 in freeciv and #925866 in xteddy,
  • became the new uploader of enemylines7.
  • I reviewed and sponsored patches from Reiner Herrmann to port several games to python3-pygame including whichwayisup, funnyboat and monsterz,
  • and from Giovanni Mascellani for ember and enemylines7.
Debian Java
  • I packaged new upstream versions of robocode, jboss-modules, jboss-jdeparser2, wildfly-common, commons-dbcp2, jboss-logging-tools, jboss-logmanager, libpdfbox2-java, jboss-logging, jboss-xnio, libjide-oss-java, sweethome3d, sweethome3d-furniture, pdfsam, libsambox-java, libsejda-java, jackson-jr, jackson-dataformat-xml, libsmali-java and apktool.
Misc
  • I updated the popular Firefox/Chromium addons ublock-origin, https-everywhere and privacybadger and also packaged new upstream versions of wabt and binaryen which are both required for building webassembly files from source.
Debian LTS

This was my 41st month as a paid contributor and I have been paid to work 18.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1854-1. Issued a security update for libonig fixing 1 CVE.
  • DLA-1860-1. Issued a security update for libxslt fixing 4 CVE.
  • DLA-1846-2. Issued a regression update for unzip to address a Firefox build failure.
  • DLA-1873-1. Issued a security update for proftpd-dfsg fixing 1 CVE.
  • DLA-1886-1. Issued a security update for openjdk-7 fixing 4 CVE.
  • DLA-1890-1. Issued a security update for kde4libs fixing 1 CVE.
  • DLA-1891-1. Reviewed and sponsored a security update for openldap fixing 2 CVE prepared by Ryan Tandy.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my fourteenth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 15.07.2019 until 21.07.2019 and I triaged CVEs in openjdk7, libxslt, libonig, php5, wireshark, python2.7, libsdl1.2, patch, suricata and libssh2.
  • ELA-143-1. Issued a security update for libonig fixing 1 CVE.
  • ELA-145-1.  Issued a security update for libxslt fixing 2 CVE.
  • ELA-151-1. Issued a security update for linux fixing 3 CVE.
  • ELA-154-1. Issued a security update for openjdk-7 fixing 4 CVE.

Thanks for reading and see you next time.

Michael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Saturday 17th of August 2019 04:47:29 PM

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers can generally make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
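
As an illustration of that collapsing step, here is a small sketch; the declaration format, package names and helper function are hypothetical, and a real package manager would write the accounts out through the proper system interfaces rather than print them:

    # Sketch: collapse declaratively requested system users from several packages
    # into a single update of the account database.
    def collapse_user_requests(packages, existing):
        """Return the de-duplicated, sorted list of users that still need creating."""
        wanted = {user for users in packages.values() for user in users}
        return sorted(wanted - existing)

    # Five packages each declare the system user(s) they need (hypothetical data):
    requests = {
        "postgresql": ["postgres"],
        "nginx": ["www-data"],
        "openssh-server": ["sshd"],
        "mosquitto": ["mosquitto"],
        "prometheus": ["prometheus"],
    }
    existing_users = {"root", "www-data"}

    # One pass over /etc/passwd instead of five maintainer scripts each editing it:
    print(collapse_user_requests(requests, existing_users))
    # -> ['mosquitto', 'postgres', 'prometheus', 'sshd']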

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.
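
A minimal sketch of such a wrapper for the host-key case follows; the paths and flags are illustrative, and real implementations typically hook this into the service manager instead:

    #!/usr/bin/env python3
    # Do the former hook's work lazily, at service start, rather than at
    # package installation time.
    import os
    import subprocess
    import sys

    HOST_KEY = "/etc/ssh/ssh_host_ed25519_key"

    def ensure_host_key():
        # Generate the key only if it is missing, e.g. on the first boot of an
        # image that must not ship pre-baked keys.
        if not os.path.exists(HOST_KEY):
            subprocess.run(
                ["ssh-keygen", "-q", "-t", "ed25519", "-N", "", "-f", HOST_KEY],
                check=True,
            )

    if __name__ == "__main__":
        ensure_host_key()
        # Hand over to the real daemon with whatever arguments we were given.
        os.execv("/usr/sbin/sshd", ["sshd", "-D", *sys.argv[1:]])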

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Michael Stapelberg: Linux package managers are slow

Saturday 17th of August 2019 04:36:31 PM

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance.

See Appendix B for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

  distribution   package manager   data     wall-clock time   rate
  Fedora         dnf               107 MB   29s               3.7 MB/s
  NixOS          Nix               15 MB    14s               1.1 MB/s
  Debian         apt               15 MB    4s                3.7 MB/s
  Arch Linux     pacman            6.5 MB   3s                2.1 MB/s
  Alpine         apk               10 MB    1s                10.0 MB/s

qemu (large C program)

  distribution   package manager   data     wall-clock time   rate
  Fedora         dnf               266 MB   1m8s              3.9 MB/s
  Arch Linux     pacman            124 MB   1m2s              2.0 MB/s
  Debian         apt               159 MB   51s               3.1 MB/s
  NixOS          Nix               262 MB   38s               6.8 MB/s
  Alpine         apk               26 MB    2.4s              10.8 MB/s


The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases; otherwise they hopefully would have removed it altogether. The amount of metadata seems excessive for the use case of installing a single package, which I consider the main use-case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence executing arbitrary package maintainer-provided code (hooks and triggers), all tested package managers need to install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so (see the sketch after this list), i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
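
As a sketch of those last three points, an append-only package store can make installation atomic without fsync by writing each image to a temporary file and then renaming it into place; the store path and function below are hypothetical:

    import os
    import tempfile

    STORE = "/var/lib/pkgstore"   # hypothetical append-only package store

    def install_image(name, data):
        """Add a package image to the store atomically, skipping fsync."""
        fd, tmp = tempfile.mkstemp(dir=STORE, prefix=".incoming-")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)            # sequential write, no fsync on purpose
            final = os.path.join(STORE, name)
            os.rename(tmp, final)        # atomic within one file system
            return final
        except BaseException:
            os.unlink(tmp)
            raise

    # A crash can only leave a stray ".incoming-*" file behind, which a later
    # run can delete; readers never observe a half-written package image.
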
Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

  name         scope    package file format                 hooks/triggers
  AppImage     apps     image: ISO9660, SquashFS            no
  snappy       apps     image: SquashFS                     yes: hooks
  FlatPak      apps     archive: OSTree                     no
  0install     apps     archive: tar.bz2                    no
  nix, guix    distro   archive: nar.{bz2,xz}               activation script
  dpkg         distro   archive: tar.{gz,xz,bz2} in ar(1)   yes
  rpm          distro   archive: cpio.{bz2,lz,xz}           scriptlets
  pacman       distro   archive: tar.xz                     install
  slackware    distro   archive: tar.{gz,xz}                yes: doinst.sh
  apk          distro   archive: tar.gz                     yes: .post-install
  Entropy      distro   archive: tar.bz2                    yes
  ipkg, opkg   distro   archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

Appendix A: related work

There are a couple of recent developments going into the same direction:

Appendix B: measurement details ack

Details for each package manager:

Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash [root@722e6df10258 /]# time dnf install -y ack Fedora Modular 30 - x86_64 4.4 MB/s | 2.7 MB 00:00 Fedora Modular 30 - x86_64 - Updates 3.7 MB/s | 2.4 MB 00:00 Fedora 30 - x86_64 - Updates 17 MB/s | 19 MB 00:01 Fedora 30 - x86_64 31 MB/s | 70 MB 00:02 […] Install 44 Packages Total download size: 13 M Installed size: 42 M […] real 0m29.498s user 0m22.954s sys 0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix 39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28' unpacking channels... created 2 symlinks in user environment installing 'perl5.28.2-ack-2.28' these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked): /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2 /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48 /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27 /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31 /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53 /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16 /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28 copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'... copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'... copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'... copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'... copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'... copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'... copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'... copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'... building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'... created 56 symlinks in user environment real 0m 14.02s user 0m 8.83s sys 0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep) Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB] Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB] Fetched 8502 kB in 2s (4764 kB/s) […] The following NEW packages will be installed: ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26 The following packages will be upgraded: perl-base 1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded. Need to get 8238 kB of archives. After this operation, 42.3 MB of additional disk space will be used. […] real 0m9.096s user 0m2.616s sys 0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base [root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack) :: Synchronizing package databases... core 132.2 KiB 1033K/s 00:00 extra 1629.6 KiB 2.95M/s 00:01 community 4.9 MiB 5.75M/s 00:01 […] Total Download Size: 0.07 MiB Total Installed Size: 0.19 MiB […] real 0m3.354s user 0m0.224s sys 0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine / # time apk add ack fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz (1/4) Installing perl-file-next (1.16-r0) (2/4) Installing libbz2 (1.0.6-r7) (3/4) Installing perl (5.28.2-r1) (4/4) Installing ack (3.0.0-r0) Executing busybox-1.30.1-r2.trigger OK: 44 MiB in 18 packages real 0m 0.96s user 0m 0.25s sys 0m 0.07s

qemu

Details for each package manager:

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash [root@722e6df10258 /]# time dnf install -y qemu Fedora Modular 30 - x86_64 3.1 MB/s | 2.7 MB 00:00 Fedora Modular 30 - x86_64 - Updates 2.7 MB/s | 2.4 MB 00:00 Fedora 30 - x86_64 - Updates 20 MB/s | 19 MB 00:00 Fedora 30 - x86_64 31 MB/s | 70 MB 00:02 […] Install 262 Packages Upgrade 4 Packages Total download size: 172 M […] real 1m7.877s user 0m44.237s sys 0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix 39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0' unpacking channels... created 2 symlinks in user environment installing 'qemu-4.0.0' these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked): […] real 0m 38.49s user 0m 26.52s sys 0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86) Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB] Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB] Fetched 8574 kB in 1s (6716 kB/s) […] Fetched 151 MB in 2s (64.6 MB/s) […] real 0m51.583s user 0m15.671s sys 0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base [root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu) :: Synchronizing package databases... core 132.2 KiB 751K/s 00:00 extra 1629.6 KiB 3.04M/s 00:01 community 4.9 MiB 6.16M/s 00:01 […] Total Download Size: 123.20 MiB Total Installed Size: 587.84 MiB […] real 1m2.475s user 0m9.272s sys 0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine / # time apk add qemu-system-x86_64 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz […] OK: 78 MiB in 95 packages real 0m 2.43s user 0m 0.46s sys 0m 0.09s

Michael Stapelberg: distri: a Linux distribution to research fast package management

Saturday 17th of August 2019 04:36:31 PM

Over the last year or so I have worked on a research linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research linux distribution development, i.e. try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

Frequently, I was noticing a large gap between the actual speed of an operation (e.g. doing an update) and the possible speed based on back of the envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, through my work on distri, I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share sub directory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.

Note: Only a few exchange directories are also available in the package build environment (as opposed to run-time).

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

Note: To create the illusion of a writable search path at package build-time, $DESTDIR/ro/share and $DESTDIR/ro/lib are diverted to $DESTDIR/$PREFIX/share and $DESTDIR/$PREFIX/lib, respectively.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even at high speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh on their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number (see the sketch after this list).

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.
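
A minimal sketch of that revision-based selection (referenced in point 2 above), assuming package names of the form seen earlier, e.g. zsh-amd64-5.6.2-3 with the trailing number being the distri revision; the parsing is purely illustrative, not distri's actual implementation:

    # Pick the winning candidate for an exchange directory entry.
    def distri_revision(package):
        # The monotonically increasing distri revision is the final dash-separated field.
        return int(package.rsplit("-", 1)[1])

    def select_winner(candidates):
        return max(candidates, key=distri_revision)

    print(select_winner(["zsh-amd64-5.6.2-3", "zsh-amd64-5.6.2-16"]))
    # -> zsh-amd64-5.6.2-16 (numeric comparison: revision 16 beats revision 3)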

Fast installation also makes other use-cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Note: To saturate links above 1 Gbps, transfer packages without compression.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).
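
A sketch of that mirroring workflow; the exact arguments these distri subcommands take are not covered in this post, so only the bare commands are shown:

# periodically refresh the local copy from the package store being mirrored
distri update

# make the local package store available to clients on your network
distri export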

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Cyril Brulebois: Sending HTML messages with Net::XMPP (Perl)

Saturday 17th of August 2019 06:15:00 AM
Executive summary

It’s perfectly possible! Jump to the HTML demo!

Longer version

This started with a very simple need: wanting to improve the notifications I’m receiving from various sources. Those include:

  • changes or failures reported during Puppet runs on my own infrastructure, and at a customer’s;
  • build failures for the Debian Installer;
  • changes in banking amounts;
  • and lately: build status for jobs in a customer’s Jenkins instance.

I’ve been using plaintext notifications for a number of years but I decided to try and pimp them a little by adding some colors.

While the XMPP-sending details are usually hidden in a local module, here’s a small self-contained example: connecting to a server, sending credentials, and then sending a message to someone else. Of course, one might want to tweak the Configuration section before trying to run this script…

#!/usr/bin/perl

use strict;
use warnings;

use Net::XMPP;

# Configuration:
my $hostname = 'example.org';
my $username = 'bot';
my $password = 'call-me-alan';
my $resource = 'demo';
my $recipient = 'human@example.org';

# Open connection:
my $con = Net::XMPP::Client->new();
my $status = $con->Connect(
    hostname => $hostname,
    connectiontype => 'tcpip',
    tls => 1,
    ssl_ca_path => '/etc/ssl/certs',
);
die 'XMPP connection failed' if ! defined($status);

# Log in:
my @result = $con->AuthSend(
    hostname => $hostname,
    username => $username,
    password => $password,
    resource => $resource,
);
die 'XMPP authentication failed' if $result[0] ne 'ok';

# Send plaintext message:
my $msg = 'Hello, World!';
my $res = $con->MessageSend(
    to => $recipient,
    body => $msg,
    type => 'chat',
);
die('ERROR: XMPP message failed') if $res != 0;

For reference, here’s what the XML message looks like in Gajim’s XML console (on the receiving end):

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>Hello, World!</body>
</message>

Issues start when one tries to send some HTML message, e.g. with the last part changed to:

# Send plaintext message:
my $msg = 'This is a <b>failing</b> test';
my $res = $con->MessageSend(
    to => $recipient,
    body => $msg,
    type => 'chat',
);

as that leads to the following message:

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a &lt;b&gt;failing&lt;/b&gt; test</body>
</message>

So tags are getting encoded and one gets to see the uninterpreted “HTML code”.

Trying various things to embed that inside <body> and <html> tags, with or without namespaces, led nowhere.

Looking at a message sent from Gajim to Gajim (so that I could craft an HTML message myself and inspect it), I’ve noticed it goes this way (edited to concentrate on important parts):

<message xmlns="jabber:client" to="human@example.org/Gajim" type="chat">
  <body>Hello, World!</body>
  <html xmlns="http://jabber.org/protocol/xhtml-im">
    <body xmlns="http://www.w3.org/1999/xhtml">
      <p>Hello, <strong>World</strong>!</p>
    </body>
  </html>
</message>

Two takeaways here:

  • The message is sent both in plaintext and in HTML. It seems Gajim archives the plaintext version, as opening the history/logs only shows the textual version.

  • The fact that the HTML message is under a different path (/message/html as opposed to /message/body) means that one cannot use the MessageSend method to send HTML messages…

This was verified by checking the documentation and code of the Net::XMPP::Message module. It comes with various getters and setters for attributes. Those are then automatically collected when the message is serialized into XML (through the GetXML() method). Trying to add handling for a new HTML attribute would mean being extra careful as that would need to be treated with $type = 'raw'…

Oh, wait a minute! While using git grep in the sources, looking for that raw type thing, I’ve discovered what sounded promising: an InsertRawXML() method, that doesn’t appear anywhere in either the code or the documentation of the Net::XMPP::Message module.

It’s available, though! Because Net::XMPP::Message is derived from Net::XMPP::Stanza:

use Net::XMPP::Stanza;

use base qw( Net::XMPP::Stanza );

which then in turn comes with this function:

##############################################################################
#
# InsertRawXML - puts the specified string onto the list for raw XML to be
#                included in the packet.
#
##############################################################################

Let’s put that aside for a moment and get back to the MessageSend() method. It wants parameters that can be passed to the Net::XMPP::Message SetMessage() method, and here is its entire code:

###############################################################################
#
# MessageSend - Takes the same hash that Net::XMPP::Message->SetMessage
#               takes and sends the message to the server.
#
###############################################################################
sub MessageSend
{
    my $self = shift;

    my $mess = $self->_message();
    $mess->SetMessage(@_);
    $self->Send($mess);
}

The first assignment is basically equivalent to my $mess = Net::XMPP::Message->new();, so what this function does is create a Net::XMPP::Message for us, pass all parameters to it, and hand the resulting object over to the Send() method. All in all, that’s merely a proxy.

HTML demo

The question becomes: what if we were to create that object ourselves, tweak it a little, and then pass it directly to Send(), instead of using the slightly limited MessageSend()? Let’s see what the rewritten sending part would look like:

# Send HTML message:
my $text = 'This is a working test';
my $html = 'This is a <b>working</b> test';

my $message = Net::XMPP::Message->new();
$message->SetMessage(
    to => $recipient,
    body => $text,
    type => 'chat',
);
$message->InsertRawXML("<html><body>$html</body></html>");

my $res = $con->Send($message);

And tada!

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a working test</body>
  <html>
    <body>This is a <b>working</b> test</body>
  </html>
</message>

I’m absolutely no expert when it comes to XMPP standards, and one might need/want to set some more metadata like xmlns but I’m happy enough with this solution that I thought I’d share it as is. ;)

Bits from Debian: Debian celebrates 26 years, Happy DebianDay!

Friday 16th of August 2019 06:12:00 PM

26 years ago today in a single post to the comp.os.linux.development newsgroup, Ian Murdock announced the completion of a brand new Linux release named Debian.

Since that day we’ve been into outer space, typed over 1,288,688,830 lines of code, spawned over 300 derivatives, were enhanced with 6,155 known contributors, and filed over 975,619 bug reports.

We are home to a community of thousands of users around the globe, we gather to host our annual Debian Developers Conference DebConf which spans the world in a different country each year, and of course today's many DebianDay celebrations held around the world.

It's not too late to throw an impromptu DebianDay celebration or to go and join one of the many celebrations already underway.

As we celebrate our own anniversary, we also want to celebrate our many contributors, developers, teams, groups, maintainers, and users. It is all of your effort, support, and drive that continue to make Debian truly: The universal operating system.

Happy DebianDay!

Jonathan McDowell: DebConf19: Brazil

Friday 16th of August 2019 05:46:54 PM

My first DebConf was DebConf4, held in Porto Alegre, Brazil back in 2004. Uncle Steve did the majority of the travel arrangements for 6 of us to go. We had some mishaps which we still tease him about, but it was a great experience. So when I learnt DebConf19 was to be in Brazil again, this time in Curitiba, I had to go. So last November I realised flights were only likely to get more expensive, that I’d really kick myself if I didn’t go, and so I booked my tickets. A bunch of life happened in the meantime that meant the timing wasn’t particularly great for me - it’s been a busy 6 months - but going was still the right move.

One thing that struck me about DC19 is that a lot of the faces I’m used to seeing at a DebConf weren’t there. Only myself and Steve from the UK DC4 group made it, for example. I don’t know if that’s due to the travelling distances involved, or just the fact that attendance varies and this happened to be a year where a number of people couldn’t make it. Nonetheless I was able to catch up with a number of people I only really see at DebConfs, as well as getting to hang out with some new folk.

Given how busy I’ve been this year and expect to be for at least the next year I set myself a hard goal of not committing to any additional tasks. That said DebConf often provides a welcome space to concentrate on technical bits. I reviewed and merged dkg’s work on WKD and DANE for the Debian keyring under debian.org - we’re not exposed to the recent keyserver network issues due to the fact the keyring is curated, but providing additional access to our keyring makes sense if it can be done easily. I spent some time with Ian Jackson talking about dgit - I’m not a user of it at present, but I’m intrigued by the potential for being able to do Debian package uploads via signed git tags. Of course I also attended a variety of different talks (and, as usual, at times the schedule conflicted such that I had a difficult choice about which option to choose for a particular slot).

This also marks the first time I did a non-team related talk at DebConf, warbling about my home automation (similar to my NI Dev Conf talk but with some more bits about the Debian involvement thrown in):

In addition I co-presented a couple of talks for teams I’m part of:

I only realised late in the week that 2 talks I’d normally expect to attend, a Software in the Public Interest BoF and a New Member BoF, were not on the schedule, but to be honest I don’t think I’d have been able to run either even if I’d realised in advance.

Finally, DebConf wouldn’t be DebConf without playing with some embedded hardware at some point, and this year it was the Caninos Loucos Labrador. This is a Brazilian grown single board ARM based computer with a modular form factor designed for easy integration into bigger projects. There’s nothing particularly remarkable about the hardware and you might ask why not just use a Pi? The reason is that import duties in Brazil make such things prohibitively expensive - importing a $35 board can end up costing $150 by the time shipping, taxes and customs fees are all taken into account. The intent is to design and build locally, as components can be imported with minimal taxes if the final product is being assembled within Brazil. And Mercosul allows access to many other South American countries without tariffs. I’d have loved to get hold of one of the boards, but they’ve only produced 1000 in the initial run and really need to get them into the hands of people who can help progress the project rather than those who don’t have enough time.

Next year DebConf20 is in Haifa - a city I’ve spent some time in before - but I’ve made the decision not to attend; rather than spending a single 7-10 day chunk away from home I’m going to aim to attend some more local conferences for shorter periods of time.

François Marier: Passwordless restricted guest account on Ubuntu

Friday 16th of August 2019 03:10:00 AM

Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.

Create a user that can login without a password

First of all, I created a new user with a random password (using pwgen -s 64):

adduser guest
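
For reference, the throw-away password mentioned above can be generated like this (one password of 64 random characters, pasted at adduser's password prompt):

pwgen -s 64 1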

Then following these instructions, I created a new group and added the user to it:

addgroup nopasswdlogin
adduser guest nopasswdlogin

In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:

auth sufficient pam_succeed_if.so user ingroup nopasswdlogin

Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.

Privacy settings

In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session and then opened gnome-control-center. I set the following in the privacy section:

Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center:

and configured it to clear everything on exit:

Create a password-less system keyring

In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.

Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.

I also made sure to disable saving of passwords, payment methods and addresses in the browser too.

Restrict user account further

Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:

[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no

If you know of any other restrictions that could be added, please leave a comment!

Julian Andres Klode: APT Patterns

Thursday 15th of August 2019 01:55:47 PM

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you to find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.
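
On the command line, that combined pattern would be used the same way as the examples near the end of this post; quoting it keeps the shell from interpreting the parentheses (a sketch):

apt purge '?and(?automatic,?obsolete)'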

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into an APT::CacheFilter::Matcher, an object that can match against packages.

This is useful because the syntactic structure of the pattern can be seen without having to know which patterns have arguments and which do not - basically, for the parser, ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments; it insists that you add them when they are required and omit them when the pattern accepts none, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of config-files, or ~m for automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not

apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: Basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge, patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is, packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples
apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)
the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fallback to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

Steve Kemp: That time I didn't find a kernel bug, or did I?

Tuesday 13th of August 2019 06:00:44 PM

Recently I saw a post to the linux kernel mailing-list containing a simple fix for a use-after-free bug. The code in question originally read:

hdr->pkcs7_msg = pkcs7_parse_message(buf + buf_len, sig_len);
if (IS_ERR(hdr->pkcs7_msg)) {
        kfree(hdr);
        return PTR_ERR(hdr->pkcs7_msg);
}

Here the bug is obvious once it has been pointed out:

  • A structure is freed.
    • But then it is dereferenced, to provide a return value.

This is the kind of bug that would probably have been obvious to me if I'd happened to read the code myself. However, a patch had been submitted, so job done? I did have some free time so I figured I'd scan for similar bugs. Writing a trivial perl script to look for similar things didn't take too long, though it is a bit shoddy (a rough sketch of the idea follows the list):

  • Open each file.
  • If we find a line containing "free(.*)", record the line and the thing that was freed.
  • The next time we find a return, look to see if the return value uses the thing that was free'd.
    • If so that's a possible bug. Report it.
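
For illustration, here is a rough shell approximation of that heuristic (my actual tool was a quick Perl one-off, so treat this purely as a sketch of the idea rather than the code I ran):

find . -name '*.c' -print0 | xargs -0 awk '
    FNR == 1 { freed = "" }                  # reset state for each new file
    /free\(/ {
        freed = $0
        sub(/.*free\(/, "", freed)           # keep the text after the last "free("
        sub(/\).*/, "", freed)               # ...up to the closing parenthesis
    }
    /return/ && freed != "" {
        if (index($0, freed))
            print FILENAME ":" FNR ": possible use of freed value " freed
        freed = ""
    }
'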

Of course my code is nasty, but it looked like it immediately paid off. I found this snippet of code in linux-5.2.8/drivers/media/pci/tw68/tw68-video.c:

if (hdl->error) {
        v4l2_ctrl_handler_free(hdl);
        return hdl->error;
}

That looks promising:

  • The structure hdl is freed, via a dedicated freeing-function.
  • But then we return the member error from it.

Chasing down the code I found that linux-5.2.8/drivers/media/v4l2-core/v4l2-ctrls.c contains the code for the v4l2_ctrl_handler_free call and while it doesn't actually free the structure - just some members - it does reset the contents of hdl->error to zero.

Ahah! The code I've found looks for an error, and if one was found returns zero, meaning the error is lost. I can fix it by changing it to this:

if (hdl->error) {
        int err = hdl->error;

        v4l2_ctrl_handler_free(hdl);
        return err;
}

I did that. Then I looked more closely to see if I was missing something. The code I've found lives in the function tw68_video_init1; that function is called only once, and the return value is ignored!

So, that's the story of how I scanned the Linux kernel for use-after-free bugs and contributed nothing to anybody.

Still fun though.

I'll go over my list more carefully later, but nothing else jumped out as being immediately bad.

There is a weird case I spotted in ./drivers/media/platform/s3c-camif/camif-capture.c with a similar pattern. In that case the function involved is s3c_camif_create_subdev which is invoked by ./drivers/media/platform/s3c-camif/camif-core.c:

ret = s3c_camif_create_subdev(camif);
if (ret < 0)
        goto err_sd;

So I suspect there is something odd there:

  • If there's an error in s3c_camif_create_subdev
    • Then handler->error will be reset to zero.
    • Which means that return handler->error will return 0.
    • Which means that the s3c_camif_create_subdev call should have returned an error, but won't be recognized as having done so.
    • i.e. "0 < 0" is false.

Of course the error-value is only set if this code is hit:

hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
                              sizeof(hdl->buckets[0]),
                              GFP_KERNEL | __GFP_ZERO);
hdl->error = hdl->buckets ? 0 : -ENOMEM;

Which means that the registration of the sub-device fails if there is no memory, and at that point what can you even do?

It's a bug, but it isn't a security bug.

Ricardo Mones: When your mail hub password is updated...

Tuesday 13th of August 2019 03:30:02 PM
don't forget to run postmap on your /etc/postfix/sasl_passwd
(repeat 100 times sotto voce or until falling asleep, whatever happens first).
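
For the record, the command in question is simply:

postmap /etc/postfix/sasl_passwd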

Sven Hoexter: Debian/buster on HPE DL360G10 - interfaces change back to ethX

Tuesday 13th of August 2019 01:24:17 PM

For as yet unknown reasons, some recently installed HPE DL360G10 systems running buster changed the interface names back from the expected "onboard" based names eno5 and eno6 to ethX after a reboot.

My current workaround is a link file which kind of enforces the onboard scheme.

$ cat /etc/systemd/network/101-onboard-rd.link
[Link]
NamePolicy=onboard kernel database slot path
MACAddressPolicy=none

The hosts are running the latest buster kernel

Linux foobar 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

A downgrade of the kernel did not change anything. So I currently like to believe this is not related to a kernel change.

I tried to collect some information on one of the broken systems while it was in the broken state:

root@foobar:~# SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/eth0
=== trie on-disk ===
tool version: 241
file size: 9492053 bytes
header size 80 bytes
strings 2069269 bytes
nodes 7422704 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Skipping overridden file '/usr/lib/systemd/network/99-default.link'.
Parsed configuration file /etc/systemd/network/99-default.link
Created link configuration context.
ID_NET_DRIVER=i40e
eth0: No matching link configuration found.
Builtin command 'net_setup_link' fails: No such file or directory
Unload module index
Unloaded link configuration context.

root@foobar:~# udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx48df37944ab0
ID_OUI_FROM_DATABASE=Hewlett Packard Enterprise
ID_NET_NAME_ONBOARD=eno5
ID_NET_NAME_PATH=enp93s0f0

The most interesting hint right now seems to be that /sys/class/net/eth0/name_assign_type is invalid, while on systems before the reboot that breaks it, and after setting the .link file fix, it contains a 4.

Since those hosts were initially installed with buster there are no remains of any ethX related configuration present. If someone has an idea what is going on write a mail (sven at stormbind dot net), or blog on planet.d.o.

I found a vaguely similar bug report for a Dell PE server in #929622, though that was a change from 4.9 (stretch) to the 4.19 stretch-bpo kernel and the device names were not changed back to the ethX scheme, and Ben found a reason for it inside the kernel. Also the hardware is different using bnxt_en, while I've tg3 and i40e in use.
