Planet Debian

Julian Andres Klode: Migrating away from apt-key

21 hours 11 min ago

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post apt-key world.

The manual page already provides all you need to know for replacing apt-key add usage:

Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either “gpg” or “asc” as file extension

So it’s kind of surprising people need step by step instructions for how to copy/download a file into a directory.

I’ll also discuss the alternative security snakeoil approach with signed-by that’s become popular. Maybe we should not have added signed-by, people seem to forget that debs still run maintainer scripts as root.

Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation

Assume you currently have:

wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -

To translate this directly for bionic and newer, you can use:

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc

or to avoid downloading as root:

wget -qO- https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc

Older releases (in fact, all releases) only support unarmored keyrings with a .gpg file extension. If you care about them, provide an unarmored key, and use

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg

Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg file instead, which is supported on all systems.
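Since the advice is for the repository owner to ship a .gpg in the first place, here's how that conversion can be done once, server-side, instead of on every user's machine. This is a sketch of my own, not from the original email; the myrepo filenames are placeholders, and the stand-in key material is only there so the sketch runs end-to-end.

```shell
# Sketch (mine, not from the post): a repository owner converting the
# armored key they already publish (myrepo.asc, a placeholder name) into
# the unarmored .gpg that every apt release supports.

# Stand-in for your existing armored key, so this sketch is runnable;
# in reality myrepo.asc is your real exported public key:
printf 'stand-in key material' | gpg --enarmor > myrepo.asc

# The actual one-time conversion:
gpg --dearmor < myrepo.asc > myrepo.gpg
```

Users then never need gpg installed at all; they just download the .gpg file directly.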

wget might not be available everywhere so you can use apt-helper:

sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc

or, to avoid downloading as root:

/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by

People say it’s good practice not to use trusted.gpg.d and instead to install the file elsewhere, then refer to it from the sources.list entry using signed-by=<path to the file>. This looks a lot safer, because now your key can’t sign other, unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it’s better for publicity :)
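For concreteness, a sources.list entry using signed-by looks like this (a hypothetical example of my own; the keyring path, URL, suite, and component are all placeholders):

```
deb [signed-by=/usr/share/keyrings/myrepo-archive-keyring.gpg] https://myrepo.example/ stable main
```

With signed-by set, apt will only accept Release files for this repository that are signed by a key in that specific keyring, rather than by anything in /etc/apt/trusted.gpg.d.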

As an example, here are the instructions to install signal-desktop. As mentioned, the gpg --dearmor use in there is not a good idea, and I’d personally not tell people to modify /usr, as it’s supposed to be managed by the package manager, but we don’t have an /etc/apt/keyrings or similar at the moment; it’s fine, though, if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).

# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.

# 1. Install our official public software signing key
wget -O- | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null

# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list

# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop

I do wonder why they do wget | gpg --dearmor, pipe that into the file and then cat | sudo tee it, instead of having that all in one pipeline. Maybe they want nicer progress reporting.

Scenario-specific guidance

We have three scenarios:

For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are, sort of, the vendor, so it can be globally trusted.

Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium or provide a repository configuration deb (let’s call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo> depends on how many packages are in the repo, I guess.
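To illustrate the repository-configuration-deb approach, the two files such a package might ship could look like this. This is my own sketch, not from the email; myrepo-repo.deb and every name and URL in it are made up:

```
# Contents of the hypothetical myrepo-repo.deb:
#   /usr/share/keyrings/myrepo.gpg        - the unarmored signing key
#   /etc/apt/sources.list.d/myrepo.list   - the sources snippet, containing:
deb [signed-by=/usr/share/keyrings/myrepo.gpg] https://myrepo.example/ stable main
```

After installing the deb, an apt update followed by apt install <package inside the repo> is all a user needs.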

Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see with signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don’t have another directory inside /etc (or /usr/local), so it’s hard to suggest something else. There’s no significant benefit from actually using signed-by, though, so it’s kind of extra work for little gain.

Addendum: Future work

This part is new, just for this blog post. Let’s look at upcoming changes and how they make things easier.

Bundled .sources files

Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (a format which has been available for some time now):

Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----

Then you can just provide a .sources file to users; they place it into /etc/apt/sources.list.d/, and everything magically works.

We’ll probably also add a nice apt add-source command for it, I guess.

Well, python-apt’s aptsources package still does not support deb822 sources, and never will; for backwards-compatibility reasons we’ll need an aptsources2 for that, and then port software-properties and other users to it.

OpenPGP vs aptsign

We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It’s temporarily named aptsign, but it’s a generic signer for single-section deb822 files, similar to signify/minisign.

We believe that this solves the security nightmare that our OpenPGP integration is while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.

Russ Allbery: Review: The Magician's Nephew

Sunday 20th of June 2021 04:03:00 AM

Review: The Magician's Nephew, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #6
Publisher: Collier Books
Copyright: 1955
Printing: 1978
ISBN: 0-02-044230-0
Format: Mass market
Pages: 186

The Magician's Nephew is the sixth book of the Chronicles of Narnia in the original publication order, but it's a prequel, set fifty years before The Lion, the Witch and the Wardrobe. It's therefore put first in the new reading order.

I have always loved world-building and continuities and, as a comics book reader (Marvel primarily), developed a deep enjoyment of filling in the pieces and reconstructing histories from later stories. It's no wonder that I love reading The Magician's Nephew after The Lion, the Witch and the Wardrobe. The experience of fleshing out backstory with detail and specifics makes me happy. If that's also you, I recommend the order in which I'm reading these books.

Reading this one first is defensible, though. One of the strongest arguments for doing so is that it's a much stronger, tighter, and better-told story than The Lion, the Witch and the Wardrobe, and therefore might start the series off on a better foot for you. It stands alone well; you don't need to know any of the later events to enjoy this, although you will miss the significance of a few things like the lamp post and you don't get the full introduction to Aslan.

The Magician's Nephew is the story of Polly Plummer, her new neighbor Digory Kirke, and his Uncle Andrew, who fancies himself a magician. At the start of the book, Digory's mother is bed-ridden and dying and Digory is miserable, which is the impetus for a friendship with Polly. The two decide to explore the crawl space of the row houses in which they live, seeing if they can get into the empty house past Digory's. They don't calculate the distances correctly and end up in Uncle Andrew's workroom, where Digory was forbidden to go. Uncle Andrew sees this as a golden opportunity to use them for an experiment in travel to other worlds.


The Magician's Nephew, like the best of the Narnia books, does not drag its feet getting started. It takes a mere 30 pages to introduce all of the characters, establish a friendship, introduce us to a villain, and get both of the kids into another world. When Lewis is at his best, he has an economy of storytelling and a grasp of pacing that I wish was more common.

It's also stuffed to the brim with ideas, one of the best of which is the Wood Between the Worlds.

Uncle Andrew has crafted pairs of magic rings, yellow and green, and tricks Polly into touching one of the yellow ones, causing her to vanish from our world. He then uses her plight to coerce Digory into going after her, carrying two green rings that he thinks will bring people back into our world, and not incidentally also observing that world and returning to tell Uncle Andrew what it's like. But the world is more complicated than he thinks it is, and the place where the children find themselves is an eerie and incredibly peaceful wood, full of grass and trees but apparently no other living thing, and sprinkled with pools of water.

This was my first encounter with the idea of a world that connects other worlds, and it remains the most memorable one for me. I love everything about the Wood: the simplicity of it, the calm that seems in part to be a defense against intrusion, the hidden danger that one might lose one's way and confuse the ponds for each other, and even the way that it tends to make one lose track of why one is there or what one is trying to accomplish. That quiet forest filled with pools is still an image I use for infinite creativity and potential. It's quiet and nonthreatening, but not entirely inviting either; it's magnificently neutral, letting each person bring what they wish to it.

One of the minor plot points of this book is that Uncle Andrew is wrong about the rings because he's wrong about the worlds. There aren't just two worlds; there are an infinite number, with the Wood as a nexus, and our reality is neither the center nor one of an important pair. The rings are directional, but relative to the Wood, not our world. The kids, who are forced to experiment and who have an open mind, figure this out quickly, but Uncle Andrew never shifts his perspective. This isn't important to the story, but I've always thought it was a nice touch of world-building.

Where this story is heading, of course, is the creation of Narnia and the beginning of all of the stories told in the rest of the series. But before that, the kids' first trip out of the Wood is to one of the best worlds of children's fantasy: Charn.

If the Wood is my mental image of a world nexus, Charn will forever be my image of a dying world: black sky, swollen red sun, and endless abandoned and crumbling buildings as far as the eye can see, full of tired silences and eerie noises. And, of course, the hall of statues, with one of the most memorable descriptions of history and empire I've ever read (if you ignore the racialized description):

All of the faces they could see were certainly nice. Both the men and women looked kind and wise, and they seemed to come of a handsome race. But after the children had gone a few steps down the room they came to faces that looked a little different. These were very solemn faces. You felt you would have to mind your P's and Q's, if you ever met living people who looked like that. When they had gone a little farther, they found themselves among faces they didn't like: this was about the middle of the room. The faces here looked very strong and proud and happy, but they looked cruel. A little further on, they looked crueller. Further on again, they were still cruel but they no longer looked happy. They were even despairing faces: as if the people they belonged to had done dreadful things and also suffered dreadful things.

The last statue is of a fierce, proud woman that Digory finds strikingly beautiful. (Lewis notes in an aside that Polly always said she never found anything specially beautiful about her. Here, as in The Silver Chair, the girl is the sensible one and things would have gone better if the boy had listened to her, a theme that I find immensely frustrating because Susan was the sensible one in the first two books of the series but then Lewis threw that away.)

There is a bell in the middle of this hall, and the pillar that holds that bell has an inscription on it that I think every kid who grew up on Narnia knows by heart.

Make your choice, adventurous Stranger;
Strike the bell and bide the danger,
Or wonder, till it drives you mad,
What would have followed if you had.

Polly has no intention of striking the bell, but Digory fights her and does it anyway, waking Jadis from where she sat as the final statue in the hall and setting off one of the greatest reimaginings of a villain in children's literature.

Jadis will, of course, become the White Witch who holds Narnia in endless winter some thousand Narnian years later. But the White Witch was a mediocre villain at best, the sort of obvious and cruel villain common in short fairy tales where the author isn't interested in doing much characterization. She exists to be evil, do bad things, and be defeated. She has a few good moments in conflict with Aslan, but that's about it. Jadis in this book is another matter entirely: proud, brilliant, dangerous, and creative.

The death of everything on Charn was Jadis's doing: an intentional spell, used to claim a victory of sorts from the jaws of defeat by her sister in a civil war. (I find it fascinating that Lewis puts aside his normally sexist roles here.) Despite the best attempts of the kids to lose her both in Charn and in the Wood (which is inimical to her, in another nice bit of world-building), she manages to get back to England with them. The result is a remarkably good bit of villain characterization.

Jadis is totally out of her element, used to a world-spanning empire run with magic and (from what hints we get) vaguely medieval technology. Her plan to take over their local country and eventually the world should be absurd and is played somewhat for laughs. Her magic, which is her great weapon, doesn't even work in England. But Jadis learns at a speed that the reader can watch. She's observant, she pays attention to things that don't fit her expectations, she changes plans, and she moves with predatory speed. Within a few hours in London she's stolen jewels and a horse and carriage, and the local police seem entirely overmatched. There's no way that one person without magic should be a real danger to England around the turn of the 20th century, but by the time the kids manage to pull her back into the Wood, you're not entirely sure England would have been safe.

A chaotic confrontation, plus the ability of the rings to work their magic through transitive human contact, ends up with the kids, Uncle Andrew, Jadis, a taxicab driver and his horse all transported through the Wood to a new world. In this case, literally a new world: Narnia at the point of its creation.

Here again, Lewis translates Christian myth, in this case the Genesis creation story, into a more vivid and in many ways more beautiful story than the original. Aslan singing the world into existence is an incredible image, as is the newly-created world so bursting with life that even things that normally could not grow will do so. (Which, of course, is why there is a lamp post burning in the middle of the western forest of Narnia for the Pevensie kids to find later.) I think my favorite part is the creation of the stars, but the whole sequence is great.

There's also an insightful bit of human psychology. Uncle Andrew can't believe that a lion is singing, so he convinces himself that Aslan is not singing, and thus prevents himself from making any sense of the talking animals later.

Now the trouble about trying to make yourself stupider than you really are is that you very often succeed.

As with a lot in Lewis, he probably meant this as a statement about faith, but it generalizes well beyond the religious context.

What disappointed me about the creation story, though, is the animals. I didn't notice this as a kid, but this re-read has sensitized me to how Lewis consistently treats the talking animals as less than humans even though he celebrates them. That happens here too: the newly-created, newly-awakened animals are curious and excited but kind of dim. Some of this is an attempt to show that they're young and are just starting to learn, but it also seems to be an excuse for Aslan to set up a human king and queen over them instead of teaching them directly how to deal with the threat of Jadis who the children inadvertently introduced into the world.

The other thing I dislike about The Magician's Nephew is that the climax is unnecessarily cruel. Once Digory realizes the properties of the newly-created world, he hopes to find a way to use that to heal his mother. Aslan points out that he is responsible for Jadis entering the world and instead sends him on a mission to obtain a fruit that, when planted, will ward Narnia against her for many years. The same fruit would heal his mother, and he has to choose Narnia over her. (It's a fairly explicit parallel to the Garden of Eden, except in this case Digory passes.)

Aslan, in the end, gives Digory the fruit of the tree that grows, which is still sufficient to heal his mother, but this sequence made me angry when re-reading it. Aslan knew all along that what Digory is doing will let him heal his mother as well, but hides this from him to make it more of a test. It's cruel and mean; Aslan could have promised to heal Digory's mother and then seen if he would help Narnia without getting anything in return other than atoning for his error, but I suppose that was too transactional for Lewis's theology or something. Meh.

But, despite that, the only reason this is not the best Narnia book is that The Voyage of the Dawn Treader is the only Narnia book that also nails the ending. The Magician's Nephew, up through Charn, Jadis's rampage through London, and the initial creation of Narnia, is fully as good, perhaps better. It sags a bit at the end, partly because it tries too hard to make the Narnian animals humorous and partly because of the unnecessary emotional torture of Digory. But this still holds up as the second-best Narnia book, and one I thoroughly enjoyed re-reading. If anything, Jadis and Charn are even better than I remembered.

Followed by the last book of the series, the somewhat notorious The Last Battle.

Rating: 9 out of 10

Sean Whitton: transient-caps-lock

Sunday 20th of June 2021 12:26:03 AM

If you’re writing a lot of Common Lisp and you want to follow the convention of using all uppercase to refer to symbols in docstrings, comments etc., you really need something better than the shift key. Similarly if you’re writing C and you have VARIOUS_LONG_ENUMS.

The traditional way is a caps lock key. But that means giving up a whole keyboard key, all of the time, just for block capitalisation, which one hardly uses outside of programming. So a better alternative is to come up with some Emacs thing to get block capitalisation, as Emacs key binding is much more flexible than system keyboard layouts, and can let us get block capitalisation without giving up a whole key.

The simplest thing would be to bind some sequence of keys to just toggle caps lock. But I came up with something a bit fancier. With the following, you can type M-C, and then you get block caps until the point at which you’ve probably finished typing your symbol or enum name.

(defun spw/transient-caps-self-insert (&optional n)
  (interactive "p")
  (insert-char (upcase last-command-event) n))

(defun spw/activate-transient-caps ()
  "Activate caps lock while typing the current whitespace-delimited word(s).

This is useful for typing Lisp symbols and C enums which consist of several
all-uppercase words separated by hyphens and underscores, such that M-- M-u
after typing will not upcase the whole thing."
  (interactive)
  (let* ((map (make-sparse-keymap))
         (deletion-commands '(delete-backward-char
                              paredit-backward-delete
                              backward-kill-word
                              paredit-backward-kill-word
                              spw/unix-word-rubout
                              spw/paredit-unix-word-rubout))
         (typing-commands (cons 'spw/transient-caps-self-insert
                                deletion-commands)))
    (substitute-key-definition 'self-insert-command
                               #'spw/transient-caps-self-insert
                               map
                               (current-global-map))
    (set-transient-map
     map
     (lambda ()
       ;; try to determine whether we are probably still about to try to type
       ;; something in all-uppercase
       (and (member this-command typing-commands)
            (not (and (eq this-command 'spw/transient-caps-self-insert)
                      (= (char-syntax last-command-event) ?\ )))
            (not (and (or (bolp)
                          (= (char-syntax (char-before)) ?\ ))
                      (member this-command deletion-commands))))))))

(global-set-key "\M-C" #'spw/activate-transient-caps)

A few notes:

  • I have caps lock on left-ctrl on standard keyboard layouts, and on the sequence keypd-capslock-keypd on the Kinesis Advantage. These are about equally inconvenient, but good enough for those rare cases one needs caps lock outside of Emacs.

  • I also had the idea of binding this to Delete, because I don’t use that at all in Emacs, but Delete is relatively hard to hit on conventional keyboards, sometimes missing, and might not work in some text terminals.

  • I’ve actually trained myself to set the mark, type my symbol or enum and then just C-x C-u to upcase it, so I’m not actually using the above, but I thought someone else might like it, so still worth posting.
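For comparison, the "simplest thing" mentioned above (a plain caps-lock toggle bound to a key) might look like the following. This is my own sketch, not Sean's code; all the my/ names are made up, and it reuses the same trick of remapping self-insert-command:

```elisp
(defun my/caps-lock-self-insert (&optional n)
  "Insert the last typed character upcased, N times."
  (interactive "p")
  (insert-char (upcase last-command-event) n))

(define-minor-mode my/caps-lock-mode
  "A plain toggle: upcase everything typed while the mode is on."
  :lighter " CAPS"
  :keymap (let ((map (make-sparse-keymap)))
            ;; While the mode is active, ordinary typing goes through
            ;; our upcasing command instead of `self-insert-command'.
            (define-key map [remap self-insert-command]
                        #'my/caps-lock-self-insert)
            map))

(global-set-key (kbd "C-c u") #'my/caps-lock-mode)
```

The transient version above is nicer in practice because it switches itself off; with this minor mode you have to remember to toggle it back.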

Andrew Cater: Debian 10.10 media checking - RELEASE - 202106192215

Saturday 19th of June 2021 10:12:49 PM

And that's it for another release. A few bugs but nothing show-stopping.

Thanks again to Sledge, RattusRattus and Isy, to Linux-Fan and schweer.

As the testing page notes, there's a bug in some arm64 installs - the fix should come out via debian-security shortly but you might want to be aware of this.

 Here's to the next one: only a short time until Bullseye

Andrew Cater: Debian 10.10 media checking - 202106191837 - We're doing quite well

Saturday 19th of June 2021 07:42:11 PM

Linux-Fan and Schweer have just left us: Schweer has confirmed that all the Debian-Edu images are fine and working to his satisfaction.

 After a short break for food, we're all back in on testing: the Cambridge folks are working hard.  There have been questions on IRC about the release in Libera.Chat as well. Always good to do this: at some point in the next couple of months, we'll be doing this for Debian 11 [Bullseye] :)

Thanks as ever to all behind the scenes making each point release happen and to those folks supporting LTS and ELTS. It takes a huge amount of bug fixing, sometimes on the fly as issues are discovered, to make it work this seamlessly.

Andrew Cater: Fixing Wayland failing to start when a desktop environment is installed but your machine needs firmware. ...

Saturday 19th of June 2021 06:32:32 PM

This came up in an install that I was just doing for Debian 10.10 media testing.

 I hadn't seen this before and it would be disconcerting to other people, though it is a known bug, I think.

 I was installing an image that had no network and no firmware. KDE failed to run and dropped me to a text mode prompt. This was because the Zotac SBC I'm using requires Radeon R600 firmware to work. There was a warning message on screen to that effect.

 The way round this was to plug in a network cable and edit /etc/apt/sources.list.

 Editing /etc/apt/sources.list was to add contrib and non-free to the appropriate lines to allow me to install  firmware-linux-nonfree and firmware-misc-nonfree which includes the appropriate AMD firmware for the embedded Radeon chipset.

 Since the machine hadn't been connected to a network at install time, I also needed to run a dhclient command to obtain a network lease and allow me to install the non-free metapackages over the network.

Result: success: a full KDE desktop. [The machine is an old Zotac SBC with embedded graphics hardware: AMD E350 - specifically, it requires firmware-amd-graphics and amd64-microcode].

Chris Lamb: Raiders of the Lost Ark: 40 Years On

Saturday 19th of June 2021 05:01:27 PM

"Again, we see there is nothing you can possess which I cannot take away."

The cinema was a rare and expensive treat in my youth, so I first came across Raiders of the Lost Ark by recording it from television onto a poor quality VHS. I only mention this as it meant I watched a slightly different film to the one intended, as my copy somehow missed off the first 10 minutes. For those not as intimately familiar with the film as me, this is just in time to see Belloq demand Dr. Jones hand over the Peruvian head (see above), just in time to learn that Indy loathes snakes, and just in time to see the inadvertent reproduction of two Europeans squabbling over the spoils of a foreign land.

What this truncation did to my interpretation of the film (released forty years ago today on June 19th 1981) is interesting to explore. Without Jones' physical and moral traits being demonstrated on-screen (as well as missing the weighing of the gold head and the rollercoaster boulder scene), it actually made the idea of 'Indiana Jones' even more of a mythical archetype. The film wisely withholds Jones' backstory, but my director's cut deprived him of even more, and counterintuitively imbued him with even more of a legendary hue as the elision made his qualities an assumption beyond question. Indiana Jones, if you can excuse the cliché, needed no introduction at all.


Good artists copy, great artists steal. And oh boy, does Raiders steal. I've watched this film about twenty times over the past two decades and it's now firmly entered into my personal canon. But watching it on its fortieth anniversary was different, not least because I could situate it in a broader cinematic context. For example, I now see the Gestapo officer in Major Strasser from Casablanca (1942), in fact just as I can with many of Raiders' other orientalist tendencies: not only in its breezy depictions of backwards sand people, but also of North Africa as an entrepôt and playground for a certain kind of Western gangster. The opening as well, set in an equally reductionist pseudo-Peru, now feels like Werner Herzog's Aguirre, the Wrath of God (1972) — but without, of course, any self-conscious colonial critique.

The imagery of the ark appears to be borrowed from James Tissot's The Ark Passes Over the Jordan, part of the fin de siècle fascination with the occult and (ironically enough, given the background of Raiders' director) a French Catholic revival.

I can now also appreciate some of the finer edges that make this film just so much damn fun to watch. For instance, the comic book conceit that Jones and Belloq are a 'shadowy reflection' of one other and that they need 'only a nudge' to make one like the other. As is the idea that Belloq seems to be actually enjoying being evil. I also spotted Jones rejecting the martini on the plane. This feels less like a comment on corrupting effect of alcohol (he drinks rather heavily elsewhere in the film), but rather a subtle distancing from James Bond. This feels especially important given that the action-packed cold open is, let us be honest for a second, ripped straight from the 007 franchise.

John Williams' soundtracks are always worth mentioning. The corny Raiders March does almost nothing for me, but the highly-underrated 'Ark theme' certainly does. I delight in its allusions to Gregorian chant, the diabolus in musica and the Hungarian minor scale, fusing the Christian doctrine of the Holy Trinity (the stacked thirds, get it?), the ars antiqua of the Middle Ages with an 'exotic' twist that the Russian Five associated with central European Judaism.

The best use of the ark leitmotif is, of course, when it is opened. Here, Indy and Marion are saved by not opening their eyes whilst the 'High Priest' Belloq and the rest of the Nazis are all melted away. I'm no Biblical scholar, but I'm almost certain they were alluding to Leviticus 16:2 here:

The Lord said to Moses: “Tell your brother Aaron that he is not to come whenever he chooses into the Most Holy Place behind the curtain in front of the atonement cover on the ark, or else he will die, for I will appear in the cloud above the mercy seat.”

But would it be too much of a stretch to also see the myth of Orpheus and Eurydice too? Orpheus's wife would only be saved from the underworld if he did not turn around until he came to his own house. But he turned round to look at his wife, and she instantly slipped back into the depths:

For he who overcome should turn back his gaze
Towards the Tartarean cave,
Whatever excellence he takes with him
He loses when he looks on those below.

Perhaps not, given that Marion and the ark are not lost in quite the same way. But whilst touching on gender, it was interesting to update my view of archaeologist René Belloq. To countermand his slight queer coding (a trope of Disney villains such as Scar, Jafar, Cruella, etc.), there is a rather clumsy subplot involving Belloq repeatedly (and half-heartedly) failing to seduce Marion. This disavows any idea that Belloq isn't firmly heterosexual, essential for the film's mainstream audience, but it is especially important in Raiders because, if we recall the relationship between Belloq and Jones: 'it would take only a nudge to make you like me'. (This would definitely put a new slant on 'Top men'.)

However, my favourite moment is where the Nazis place the ark in a crate in order to transport it to the deserted island. En route, the swastikas on the side of the crate spontaneously burn away, and a disturbing noise is heard in the background. This short scene has always fascinated me, partly because it's the first time in the film that the power of the ark is demonstrated first-hand, but also because it gives the object an other-worldly nature that, to the best of my knowledge, has no parallel in the rest of cinema.

Still, I had always assumed that the ark disfigured the swastikas because of their association with the Nazis, interpreting the act as God's condemnation of the Third Reich. But now I catch myself wondering whether the ark would have disfigured any iconography as a matter of principle or whether their treatment was specific to the swastika. We later get a partial answer to this question, as the 'US Army' inscriptions in the Citizen Kane warehouse remain untouched.

Far from being an insignificant concern, the filmmakers appear to have wandered into a highly-contested theological debate. As in, if the burning of the swastika is God's moral judgement of the Nazi regime, then God is clearly both willing and able to intervene in human affairs. So why did he not, to put it mildly, prevent Auschwitz? From this perspective, Spielberg appears to be limbering up for some of the academic critiques surrounding Holocaust representations that will follow Schindler's List (1993).


Given my nostalgic and somewhat ironic attachment to Raiders, it will always be difficult for me to objectively appraise the film. Even so, it feels like it is underpinned by an earnest attempt to entertain the viewer, largely absent in the affected cynicism of contemporary cinema. And when considered in the totality of Hollywood's output, its tonal and technical flaws are not actually that bad — or at least its flaws, such as Marion's muddled characterisation and the film's breezy chauvinism, clearly have far worse counterparts elsewhere.

Perhaps the most remarkable thing about the film in 2021 is that it hasn't changed that much at all. It spawned one good sequel (The Last Crusade), one bad one (The Temple of Doom), and one hardly worth mentioning at all, yet these adventures haven't affected the original Raiders in any meaningful way. In fact, if anything has affected the original text it is, once again, George Lucas himself, as knowledge of the impending backlash around the Star Wars prequels adds an inadvertent paratext to all his earlier works.

Yet in a 1978 discussion prior to the creation of Raiders, you can get a keen sense of how Lucas' childlike enthusiasm will always result in something either extremely good or something extremely bad — somehow no middle ground is quite possible. Yes, it's easy to rubbish his initial ideas — 'We'll call him Indiana Smith!' — but hasn't Lucas actually captured the essence of a heroic 'Americana' here, and isn't the final result simply a difference of degree, not kind?

Andrew Cater: Debian 10.10 release 202106191548

Saturday 19th of June 2021 03:49:05 PM

 Late blogging on this one.

Even as we wait for the final release of Bullseye [Debian 11], we're still producing updates for Debian 10 [Buster].

Today has thrown up a few problems: working with Steve, RattusRattus and Isy in Cambridge, Schweer and Linux-Fan somewhere else in the world.

A couple of build problems meant that we started later than we otherwise might have done, and a couple of image runs have had to be redone. We're there now and happily running tests.

As ever, it's good to be doing this. With practice, I can now repeat mistakes with 100% reliability and in shorter time :)

More updates later.

Joachim Breitner: Leaving DFINITY

Saturday 19th of June 2021 10:51:24 AM

Last Thursday was my last working day at DFINITY. There are various reasons why I felt that after almost three years the DFINITY Foundation isn’t quite the right place for me anymore, and this plan has been in the making for a while. Primarily, there are personal pull factors that strongly suggest that I’ll take a break from full time employment, so I decided to see the launch of the Internet Computer through and then leave.

DFINITY has hired some amazing people, and it was a great pleasure to work with them. I learned a lot (some Rust, a lot of Nix, and just how merciless Conway’s law is), and I dare say I had the opportunity to do some good work, contributing my part to make the Internet Computer a reality.

I am especially proud of the Interface Specification and the specification-driven design principles behind it. It even comes with a model reference implementation and acceptance test suite, and although we didn’t quite get to do formalization, those familiar with the DeepSpec project will recognize some influence of their concept of “deep specifications”.

Besides that, there is of course my work on the Motoko programming language, where I built the backend, and the Candid interoperability layer, where I helped with the formalism, formulated a generic soundness criterion for Interface Description Languages in a higher-order setting, and formally verified it in Coq. Fortunately, all of this work is now Free Software or at least Open Source.

With so much work poured into this project, I continue to care about it, and you'll see me post on the developer forum and hack on Motoko. As the Internet Computer becomes gradually more open, I hope I can be gradually more involved again. But even without me contributing full-time I am sure that DFINITY and the Internet Computer will do well; when I left there were still plenty of smart, capable and enthusiastic people forging ahead.

So what’s next?

So far, I have rushed every professional transition in my life: When starting my PhD, when starting my postdoc, when starting my job at DFINITY, and every time I regretted it. So this time, I will take a proper break and will explore the world a bit (as far as that is possible given the pandemic). I will stresslessly contribute to various open source projects. I also hope to do more public outreach and teaching, writing more blog posts again, recording screencasts and giving talks and lectures. If you want to invite me to your user group/seminar/company/workshop, please let me know! Also, I might be up for small interesting projects in a while.

Beyond these, I have no concrete plans and am looking forward to the inspiration I might get from hiking through the Scandinavian wilderness. If you happen to stumble across my tent, please stop for a tea.

Gunnar Wolf: Fighting spam on roundcube with modsecurity

Friday 18th of June 2021 04:40:49 PM

Every couple of months, one of my users falls prey to phishing attacks, and sends their login/password data to an unknown somebody who poses as… Well, as me, their always-friendly and always-helpful systems administrator.

What follows is, of course, me spending a week trying to get our systems out of all of the RBLs/DNSBLs. But, no matter how fast I act, there's always disruption and lost mail (bounced or classified as spam) for my users.

Most of my users use the Webmail I have configured on our institute’s servers, Roundcube, for which I have the highest appreciation. Only that… Of course, when a user yields their username and password to an attacker, it is very successful at… Sending huge amounts of unrequested mail, leading to my server losing its reputation ☹

This week, I set up two mitigation strategies. The first and most straightforward was to ask Roundcube to disallow sending mails with over ten recipients. In a Debian install, this is as easy as setting the following variable in /etc/roundcube/

$config['max_recipients'] = 10;

However, a diligent spammer can still clog the server by sending many, many, many, many requests — maybe each of them with ten recipients only; last weekend, I got a new mail every three seconds or so.

Adding rate limit to a specific Roundcube action is not easy, however, or at least it took me quite a bit of headbanging to get it right ☹. Roundcube is a very AJAX-y system where all (most, at least) actions are received by /index.php and there is quite a bit of parsing to do to understand the actions done. When sending a mail, of course, it is done using the POST HTTP verb, and the URI-specified variables include _task=mail&_unlock=loading<message_id> (of course, with changing message IDs).

After some poking here and there, I turned to SpiderLabs' ModSecurity… Only that I am not yet well versed in writing rules for it. But after quite a bit of reading, poking, breaking… I was able to come up with the following rules:

# How often does the limit counter expire ⇒ ratelimit_client=60,
# every 60 seconds
SecRule REQUEST_LINE "@rx POST.*_task=mail&_unlock" \
    "id:10,phase:2,nolog,pass,setuid:%{tx.ua_hash},setvar:user.ratelimit_client=+1,expirevar:user.ratelimit_client=60"

# How many requests do we allow in the specified time period? ⇒
# @gt 3, 3 requests
SecRule user:ratelimit_client "@gt 2" \
    "chain,id:100009,phase:2,deny,status:429,setenv:RATELIMITED,log,msg:RATE-LIMITED"
SecRule REQUEST_LINE "@rx POST.*_task=mail&_unlock"

The first rule matches request lines that use the POST verb and include the _task=mail&_unlock fragment in the URL. It increments the ratelimit_client user variable, but expires it after 60 seconds.

The second rule verifies whether the above specified variable (do note that it's user: instead of user.) is greater than 2. If so, it sets the deny action, an HTTP return status of 429 (Too Many Requests), and logs the reason why this request was denied (rate-limited).

And… Given the way Roundcube works, this even works transparently! If a user hits the limit, the mail sending component will just wait and, after a while, time out. Then, the user can click Send again. If legitimate users are too productive and try to send over three mails in a minute, they won’t lose any of it; spammers will (hopefully!) find it unbearably slow and give up.
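The combined effect of setvar and expirevar can be sketched in plain Python: a per-user counter that resets once its 60-second window expires, with requests denied as soon as the counter goes above the threshold. The class below only illustrates the logic; it is not ModSecurity's actual implementation.

```python
import time

class RateLimiter:
    """Fixed-window counter mimicking setvar/expirevar above."""
    def __init__(self, limit=2, window=60):
        self.limit = limit        # deny once the counter exceeds this
        self.window = window      # seconds until the counter expires
        self.counters = {}        # user -> (count, expiry timestamp)

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        count, expiry = self.counters.get(user, (0, now + self.window))
        if now >= expiry:                      # expirevar fired: reset
            count, expiry = 0, now + self.window
        count += 1                             # setvar:user.ratelimit_client=+1
        self.counters[user] = (count, expiry)
        return count <= self.limit             # False -> deny with HTTP 429

rl = RateLimiter()
print([rl.allow("u", now=t) for t in (0, 1, 2, 61)])  # [True, True, False, True]
```

The third request inside the window is denied, and the counter quietly resets once the window has passed, which is exactly why legitimate users only experience a short delay.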

Logging is quite informative; I will probably later restrict it to show fewer parts (even if just for privacy's sake, as it logs the full request!). For a complex permissions framework such as mod_security, having information such as the following is most welcome in order to find a possibly misbehaving rule:

--76659f4b-H--
Message: Access denied with code 429 (phase 2). Pattern match "POST.*_task=mail&_unlock" at REQUEST_LINE. [file "/etc/modsecurity/rate_limit_sender.conf"] [line "20"] [id "100009"] [msg "RATELIMITED BOT"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client] ModSecurity: Access denied with code 429 (phase 2). Pattern match "POST.*_task=mail&_unlock" at REQUEST_LINE. [file "/etc/modsecurity/rate_limit_sender.conf"] [line "20"] [id "100009"] [msg "RATELIMITED BOT"] [hostname ""] [uri "/roundcube/"] [unique_id "YMzJLR9jVDMGsG@18kB1qAAAAAY"]
Action: Intercepted (phase 2)
Stopwatch: 1624033581838813 1204 (- - -)
Stopwatch2: 1624033581838813 1204; combined=352, p1=29, p2=140, p3=0, p4=0, p5=94, sr=81, sw=89, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.9.3 (
Server: Apache
WebApp-Info: "default" "-" ""
Engine-Mode: "ENABLED"

I truly, truly hope this is the last time my server falls in the black pits of DNSBL/RBL lists ☹

Enrico Zini: Playbooks, host vars, group vars

Friday 18th of June 2021 01:58:45 PM

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Host variables

Ansible allows specifying per-host variables, and I like that. Let's try to model a host as a dataclass:

@dataclass
class Host:
    """
    A host to be provisioned.
    """
    name: str
    type: str = "Mitogen"
    args: Dict[str, Any] = field(default_factory=dict)

    def _make_system(self) -> System:
        cls = getattr(transilience.system, self.type)
        return cls(, **self.args)

This should have enough information to create a connection to the host, and can be subclassed to add host-specific dataclass fields.

Host variables can then be provided as default constructor arguments when instantiating Roles:

# Add host/group variables to role constructor args
host_fields = { for f in fields(host)}
for field in fields(role_cls):
    if in host_fields:
        role_kwargs.setdefault(, getattr(host,

role = role_cls(**role_kwargs)

Group variables

It looks like I can model groups and group variables by using dataclasses as mixins:

@dataclass
class Webserver:
    server_name: str = ""

@dataclass
class Srv1(Webserver):
    ...

Doing things like filtering all hosts that are members of a given group can be done with a simple isinstance or issubclass test.


Playbooks

So far Transilience executes on one host at a time, while Ansible can execute on a whole host inventory.

Since most of the work of running a playbook is I/O bound, we can parallelize across hosts using threads, without worrying too much about the performance impact of the GIL.

Let's introduce a Playbook class as the main entry point for a playbook:

class Playbook:
    def setup_logging(self):
        ...

    def make_argparser(self):
        description = inspect.getdoc(self)
        if not description:
            description = "Provision systems"
        parser = argparse.ArgumentParser(description=description)
        parser.add_argument("-v", "--verbose", action="store_true",
                            help="verbose output")
        parser.add_argument("--debug", action="store_true",
                            help="verbose output")
        return parser

    def hosts(self) -> Sequence[Host]:
        """
        Generate a sequence with all the systems on which the playbook
        needs to run
        """
        return ()

    def start(self, runner: Runner):
        """
        Start the playbook on the given runner.

        This method is called once for each system returned by systems()
        """
        raise NotImplementedError(f"{self.__class__.__name__}.start is not implemented")

    def main(self):
        parser = self.make_argparser()
        self.args = parser.parse_args()
        self.setup_logging()

        # Start all the runners in separate threads
        threads = []
        for host in self.hosts():
            runner = Runner(host)
            self.start(runner)
            t = threading.Thread(target=runner.main)
            threads.append(t)
            t.start()

        # Wait for all threads to complete
        for t in threads:
            t.join()

And an actual playbook will now look something like this:

from dataclasses import dataclass
import sys
from transilience import Playbook, Host

@dataclass
class MyServer(Host):
    srv_root: str = "/srv"
    site_admin: str = ""

class VPS(Playbook):
    """
    Provision my VPS
    """
    def hosts(self):
        yield MyServer(name="server", args={
            "method": "ssh",
            "hostname": "",
            "username": "root",
        })

    def start(self, runner):
        runner.add_role("fail2ban")
        runner.add_role("prosody")
        runner.add_role(
                "mailserver",
                postmaster="enrico",
                myhostname="",
                aliases={...})

if __name__ == "__main__":
    sys.exit(VPS().main())

It looks quite straightforward to me, works on any number of hosts, and has a proper command line interface:

./provision --help
usage: provision [-h] [-v] [--debug]

Provision my VPS

optional arguments:
  -h, --help     show this help message and exit
  -v, --verbose  verbose output
  --debug        verbose output

Elana Hashman: I'm hosting a Bug Scrub for Kubernetes SIG Node

Thursday 17th of June 2021 03:20:00 PM

It's been a long while since I last hosted a BSP, but 'tis the season.

Kubernetes SIG Node will be holding a bug scrub on June 24-25, and this is a great opportunity for you to get involved if you're interested in contributing to Kubernetes or SIG Node!

We will be hosting a global event with region captains for all timezones. I am one of the NASA captains (~17:00-01:00 UTC) and I'll be leading the kickoff. We will be working on Slack and Zoom. I hope you'll be able to drop in!

Details I'm an existing contributor, what should I work on?

Work on triaging and closing SIG Node bugs. We have a lot of bugs!!

The goal of our event is to categorize, clean up, and resolve some of the 450+ issues in k/k for SIG Node.

Check out the event docs for more instructions.

I'm a new contributor and want to join but I have no idea what I'm doing!

At some point, that was all of us!

This is a great opportunity to get involved if you've never contributed to Kubernetes. We'll have dedicated mentors available to coordinate and help out new contributors.

If you've never contributed to Kubernetes before, I recommend you check out the Getting Started and Contributor Guide resources in advance of the event. You will want to ensure you've signed the contributor license agreement (CLA).

Remember, you don't have to code to make valuable contributions! Triaging the bug tracker is a great example of this.

See you there!

Happy hacking.

Enrico Zini: Reimagining Ansible variables

Thursday 17th of June 2021 01:52:15 PM

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

While experimenting with Transilience, I've been giving some thought about Ansible variables.

My gripes

I like the possibility to define host and group variables, and I like to have a set of variables that are autodiscovered on the target systems.

I do not like to have everything all blended in a big bucket of global variables.

Let's try some more prototyping.

My fiddlings

First, Role classes could become dataclasses, too, and declare the variables and facts that they intend to use (typed, even!):

class Role(role.Role):
    """
    Postfix mail server configuration
    """
    # Postmaster username
    postmaster: str = None
    # Public name of the mail server
    myhostname: str = None
    # Email aliases defined on this mail server
    aliases: Dict[str, str] = field(default_factory=dict)

Using dataclasses.asdict() I immediately gain context variables for rendering Jinja2 templates:

class Role:
    # [...]

    def render_file(self, path: str, **kwargs):
        """
        Render a Jinja2 template from a file, using as context all Role
        fields, plus the given kwargs.
        """
        ctx = asdict(self)
        ctx.update(kwargs)
        return self.template_engine.render_file(path, ctx)

    def render_string(self, template: str, **kwargs):
        """
        Render a Jinja2 template from a string, using as context all Role
        fields, plus the given kwargs.
        """
        ctx = asdict(self)
        ctx.update(kwargs)
        return self.template_engine.render_string(template, ctx)

I can also model results from fact gathering into dataclass members:

# From ansible/module_utils/facts/system/
@dataclass
class Platform(Facts):
    """
    Facts from the platform module
    """
    ansible_system: Optional[str] = None
    ansible_kernel: Optional[str] = None
    ansible_kernel_version: Optional[str] = None
    ansible_machine: Optional[str] = None
    # [...]
    ansible_userspace_architecture: Optional[str] = None
    ansible_machine_id: Optional[str] = None

    def summary(self):
        return "gather platform facts"

    def run(self, system: transilience.system.System):
        super().run(system)
        # ... collect facts

I like that this way, one can explicitly declare what variables a Facts action will collect, and what variables a Role needs.

At this point, I can add machinery to allow a Role to declare what Facts it needs, and automatically have the fields from the Facts class added to the Role class. Then, when facts are gathered, I can make sure that their fields get copied over to the Role classes that use them.

In a way, variables become role-scoped, and Facts subclasses can be used like some kind of Role mixin, that contributes only field members:

# Postfix mail server configuration
@role.with_facts([actions.facts.Platform])
class Role(role.Role):
    # Postmaster username
    postmaster: str = None
    # Public name of the mail server
    myhostname: str = None
    # Email aliases defined on this mail server
    aliases: Dict[str, str] = field(default_factory=dict)

    # All fields from actions.facts.Platform are inherited here!

    def have_facts(self, facts):
        # self.ansible_domain comes from actions.facts.Platform
        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"mail.{self.ansible_domain}",
                  "-n", "--apache"],
            creates=f"/etc/letsencrypt/live/mail.{self.ansible_domain}/fullchain.pem"
        ), name="obtain mail.* certificate")

        # the template context will have the Role variables, plus the variables
        # of all the Facts the Role uses
        with self.notify(ReloadPostfix):
            self.add(builtin.copy(
                dest="/etc/postfix/",
                content=self.render_file("roles/mailserver/templates/"),
            ), name="configure /etc/postfix/")

One can also fill in variables when instantiating Roles, making parameterized generic Roles possible and easy:

runner.add_role(
    "mailserver",
    postmaster="enrico",
    myhostname="",
    aliases={
        "me": "enrico",
    },
)

Outcomes

I like where this is going: having well defined variables for facts and roles, means that the variables that get into play can be explicitly defined, well known, and documented.

I think this design lends itself quite well to role reuse:

  • Roles can use variables without risking interfering with each other.
  • Variables from facts can have well defined meanings across roles.
  • Roles are classes, and can easily be made inheritable.

I have a feeling that, this way, it may be much easier to create generic libraries of Roles that one can reuse to compose complex playbooks.

Since roles are just Python modules, we even already know how to package and distribute them!

Next step: Playbooks, host vars, group vars.

Raphaël Hertzog: Submit your ideas for Debian and +1 those that you find important

Thursday 17th of June 2021 01:36:25 PM

A while ago, I got a request from Kentaro Hayashi on the project I use to manage funding requests addressed to Freexian. He was keen to see some improvements on the way reimbursement requests are handled in Debian. In my opinion, the idea is certainly good but he’s not part of the treasurer team and was not willing to implement the project either, so it was not really ready to be submitted to us.

To be able to fund a useful project, we need either someone who is willing to do the work and try to push it further in Debian, or a Debian team interested in the result of the project (in which case we can try to find someone willing to implement it). Here, it's a bit sad that the treasurer team didn't comment at all… but in general, what should we do with such suggestions?

It would still be interesting to have a list of such suggestions and have Debian developers be able to advocate (+1) those suggestions. It's in this spirit that Kentaro created the "Grow your ideas" project. Browse the list of issues, submit your own, and +1 those that you find important.

Joey Hess: typed pipes in every shell

Thursday 17th of June 2021 01:09:02 AM

Powershell and nushell take unix piping beyond raw streams of text to structured or typed data. Is it possible to keep a traditional shell like bash and still get typed pipes?

I think it is possible, and I'm now surprised no one seems to have done it yet. This is a fairly detailed design for how to do it. I've not implemented it yet. RFC.

Let's start with a command called typed. You can use it in a pipeline like this:

typed foo | typed bar | typed baz

What typed does is discover the types of the commands to its left and its right, while communicating the type of the command it runs back to them. Then it checks if the types match, and runs the command, communicating the type information to it. Pipes are unidirectional, so it may seem hard to discover the type to the right, but I'll explain how it can be done in a minute.

Now suppose that foo generates json, and bar filters structured data of a variety of types, and baz consumes csv and pretty-prints a table. Then bar will be informed that its input is supposed to be json, and that its output should be csv. If bar didn't support json, typed foo and typed bar would both fail with a type error.

Writing "typed" in front of everything is annoying. But it can be made a shell alias like "t". It is also possible to wrap programs using typed:

cat >~/bin/foo <<EOF
#!/usr/bin/typed /usr/bin/foo
EOF

Or a program could import a library that uses typed, so it natively supports being used in typed pipelines. I'll explain one way to make such a library later on, once some more details are clear.

Which gets us back to a nice simple pipeline, now automatically typed.

foo | bar | baz

If one of the commands is not actually typed, the other ones in the pipe will treat it as having a raw stream of text as input or output. Which will sometimes result in a type error (yay, I love type errors!), but in other cases can do something useful.

find | bar | baz   # type error, bar expected json or csv
foo | bar | less   # less displays csv

So how does typed discover the types of the commands to the left and right? That's the hard part. It has to start by finding the pids to its left and right. There is no really good way to do that, but on Linux, it can be done: Look at what /proc/self/fd/0 and /proc/self/fd/1 link to, which contains the unique identifiers of the pipes. Then look at other processes' fd/0 and fd/1 to find matching pipe identifiers. (It's also possible to do this on OSX, I believe. I don't know about BSDs.)
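A rough, Linux-only sketch of that discovery step: read the pipe identifier behind our own stdin, then scan /proc for a process whose stdout is the same pipe. The function names are mine, not part of any published tool.

```python
import os

def pipe_id(pid, fd):
    """Return the pipe identifier a process fd points at, or None."""
    try:
        target = os.readlink(f"/proc/{pid}/fd/{fd}")
    except OSError:
        return None
    return target if target.startswith("pipe:") else None

def left_neighbour():
    """Find the pid whose stdout is the pipe feeding our stdin."""
    my_stdin = pipe_id("self", 0)
    if my_stdin is None:
        return None  # stdin is a tty or a file: start of the pipeline
    for entry in os.listdir("/proc"):
        if entry.isdigit() and int(entry) != os.getpid():
            # the left neighbour's stdout is the same pipe as our stdin
            if pipe_id(entry, 1) == my_stdin:
                return int(entry)
    return None
```

A real implementation would fan out from nearby pids instead of scanning all of /proc, as described in the next paragraph.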

Searching through all processes would be a bit expensive (around 15 ms with an average number of processes), but there's a nice optimisation: The shell will have started the processes close together in time, so the pids are probably nearby. So look at the previous pid, and the next pid, and fan outward. Also, check isatty to detect the beginning and end of the pipeline and avoid scanning all the processes in those cases.

To indicate the type of the command it will run, typed simply opens a file with an extension of ".typed". The file can be located anywhere, and can be an already existing file, or can be created as needed (eg in /run). Once it discovers the pid at the other end of a pipe, typed first looks at /proc/$pid/cmdline to see if it's also running typed. If it is, it looks at its open file handles to find the first ".typed" file. It may need to wait for the file handle to get opened, which is why it needs to verify the pid is running typed.

There also needs to be a way for typed to learn the type of the command it will run. Reading /usr/share/typed/$command.typed is one way. Or it can be specified at the command line, which is useful for wrapper scripts:

cat >~/bin/bar <<EOF
#!/usr/bin/typed --type="JSON | CSV" --output-type="JSON | CSV" /usr/bin/bar
EOF

And typed communicates the type information to the command that it runs. This way a command like bar can know what format its input should be in, and what format to use as output. This might be done with environment variables, eg INPUT_TYPE=JSON and OUTPUT_TYPE=CSV

I think that's everything typed needs, except for the syntax of types and how the type checking works. Which I should probably not try to think up off the cuff. I used Haskell ADT syntax in the example above, but don't think that's necessarily the right choice.

Finally, here's how to make a library that lets a program natively support being used in a typed pipeline. It's a bit tricky, because the program has to actually run typed, since typed checks /proc/$pid/cmdline as detailed above. So: check an environment variable. When it's not set yet, set it, and exec typed, passing it the path to the program, which it will re-exec. This should be done before the program does anything else.
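A sketch of that bootstrap trick (the TYPED_WRAPPED variable name, and the split into a pure helper so the logic is testable, are my own choices):

```python
import os
import sys

def typed_argv(environ, argv, executable):
    """Return the argv to re-exec under typed, or None if already wrapped."""
    if environ.get("TYPED_WRAPPED") == "1":
        return None           # typed already re-executed us: carry on
    environ["TYPED_WRAPPED"] = "1"
    return ["typed", executable] + argv

def ensure_typed():
    # Call this before the program does anything else
    new_argv = typed_argv(os.environ, sys.argv, sys.executable)
    if new_argv is not None:
        os.execvp(new_argv[0], new_argv)  # typed will exec the program again
```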

This work was sponsored by Mark Reidenbach on Patreon.

Julien Danjou: Python Tools to Try in 2021

Wednesday 16th of June 2021 03:30:19 PM

The Python programming language is one of the most popular and in-demand. It is free, has a large community, is suited to projects of varying complexity, is easy to learn, and opens up great opportunities for programmers. To work with it comfortably, you need special Python tools that simplify your work. We have selected the best Python tools that will be relevant in 2021.

Popular Python Tools 2021

Python tools make life much easier for any developer and provide ample opportunities for creating effective applications or sites. These solutions help to automate different processes and minimize routine tasks.

In fact, their functionality varies considerably. Some are made for full-fledged complex multi-level development, while others have a simplified interface that allows you to develop individual modules and blocks. Before choosing a tool, you need to define objectives and understand goals. In this case, it will become clear what exactly to use.


Mailtrap

As you may probably know, in order to send an email, you need SMTP (Simple Mail Transfer Protocol). This is because you can't just send a letter directly to the recipient. It needs to be sent to a server, from which the recipient will download the letter using IMAP or POP3.
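For example, a minimal sketch of sending a message over SMTP with Python's standard smtplib and email libraries (the host, port, addresses and credentials below are placeholders):

```python
import smtplib
from email.message import EmailMessage

# Build the message
msg = EmailMessage()
msg["From"] = ""
msg["To"] = ""
msg["Subject"] = "Hello"
msg.set_content("Testing SMTP delivery.")

def send(message, host="", port=587,
         user="user", password="secret"):
    # Hand the letter to the SMTP server; the recipient's server
    # holds it until it is fetched over IMAP or POP3
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(message)

# send(msg)  # not run here: requires a reachable SMTP server
```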

Mailtrap provides an opportunity to send emails in Python. Moreover, Mailtrap provides a REST API to access current emails. It can be used to automate email testing, which will improve your email marketing campaigns. For example, you can check the password recovery form in a Selenium test and immediately see whether an email was sent to the correct address. Then take the new password from the email and try to log in to the site with it. Cool, isn't it?

  • All emails are in one place.
  • Mailtrap provides multiple inboxes.
  • Shared access is present.
  • It is easy to set up.
  • RESTful API

No visible disadvantages were found.


Django

Django is a free and open-source full-stack framework. It is one of the most important and popular frameworks among Python developers. It helps you move from a prototype to a ready-made working solution in a short time, since its main task is to automate processes and speed up work through associations and libraries. It's a great choice for a product launch.

You can use Django if at least a few of the following points interest you:

  • There is a need to develop the server-side of the API.
  • You need to develop a web application.
  • In the course of work, many changes are made, you have to constantly deploy the application and make edits.
  • There are many complex tasks that are difficult to solve on your own, and you will need the help of the community.
  • ORM support is needed to avoid accessing the database directly.
  • There is a need to integrate new technologies such as machine learning.

Django is a great Python Web Framework that does its job. It is not for nothing that it is one of the most popular, and is actively used by millions of developers.


Django has quite a few advantages. It contains a large number of ready-made solutions, which greatly simplifies development. Admin panel, database migration, various forms, user authentication tools are extremely helpful. The structure is very clear and simple.

A large community helps to solve almost any problem. Thanks to ORM, there is a high level of security and it is comfortable to work with databases.


Despite its powerful capabilities, Django's Python Web Framework has drawbacks. It is very massive, monolithic, therefore it develops slowly. Despite the many generic modules, the development speed of Django itself is reduced.


CherryPy

CherryPy is a micro-framework. It is designed to solve specific problems and is capable of running the program on any operating system. CherryPy is used in the following cases:

  • To create an application with small code size.
  • There is a need to manage several servers at the same time.
  • You need to monitor the performance of applications.

CherryPy refers to Python Frameworks, which are designed for specific tasks. It's clear, user-friendly, and ideal for Android development.


CherryPy Python tool has a friendly and understandable development environment. This is a functional and complete framework, which can be used to build good applications. The source code is open, so the platform is completely free for developers, and the community, although not too large, is very responsive, and always helps to solve problems.


There are not so many cons to this Python tool. It is not capable of performing complex tasks and functions, it is intended more for specific solutions, for example, for the development of certain plugins or modules.


Pyramid

The Python Pyramid tool is designed for programming complex objects and solving multifunctional problems. It is used by professional programmers and is traditionally used for identification and routing. It is aimed at a wide audience and is capable of developing API prototypes.

It is used in the following cases:

  • You need problem indicator tools to make timely adjustments and edits.
  • You use several programming languages at once.
  • You work with reporting, financial calculations, and forecasting.
  • You need to quickly create a simple application.

At the same time, the Python Web Framework Pyramid allows you to create complex applications with rich functionality, such as translation software.


Pyramid does an excellent job of developing basic applications quickly. It is quite flexible and easy to learn. In fact, the key to the success of this framework is that it is completely based on fundamental principles, using simple and basic programming techniques. It is minimalistic, but at the same time offers users a lot of freedom of action. It is able to work with both small applications and powerful multifunctional programs.


It is difficult to deviate from the basic principles. This Python tool makes the decision for you. Simple programs are very easy to implement. But to do something complex and large-scale, you have to completely immerse yourself in the study of the environment and obey it.


Grok

Grok is a Python tool that works with templates. Its main task is to eliminate repetition in the code. If an element is repeated, the template that was created earlier is simply applied. This greatly simplifies and speeds up the work.

Grok suits developers in the following cases:

  • If a programmer has little experience and is not yet ready to develop their own modules.
  • There is a need to quickly develop a simple application.
  • The functionality of the application is simple, straightforward, and the interface does not play a key role.

The Grok framework is a descendant of the earlier Zope 3. It has a simplified structure, easy installation of modules, more capabilities, and better flexibility. It is designed for developing small applications: it is not intended for complex work, but thanks to its functionality it lets you implement a project quickly.


The Grok community is not very large, as this Python tool has never gained widespread popularity. Nevertheless, Python adepts use it for comfortable development. Complex tasks cannot be implemented on it, since its possibilities are quite limited.

Grok is one of the best Python Web Frameworks. It is understandable and has enough features for comfortable development.


Web2Py is a Python tool that has its own IDE, which includes a code editor, debugger, and deployment tools. It works out of the box with no configuration or installation, provides a high level of data security, and is suitable for work on various platforms.

Web2Py is great in the following cases:

  • When there is a need to develop something on different operating systems.
  • If there is no way to install and configure the framework.
  • When a high level of data security is required, for example, when developing financial applications or sales performance management tools.
  • If you need to carefully track bugs right during development, and not during the testing phase.

Web2Py is capable of working with different protocols, has a built-in error tracker, and has a backward compatibility feature that helps to work on the basis of previous versions of the framework. This means that code maintenance becomes much easier and cheaper. It's free, open-source, and very flexible.
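To give a feel for the model (a sketch of mine, not from the article): in web2py a "controller" is just a plain function whose returned dict becomes the context for a view template of the same name. The function and message below are hypothetical:

```python
# Sketch of a web2py controller, e.g. controllers/default.py.
# Inside web2py this file is executed in the framework's environment,
# which supplies objects like `request` and `response` automatically.

def index():
    # The returned dict is handed to the matching view template
    # (e.g. views/default/index.html) for rendering.
    return dict(message="Hello from web2py")
```

Because controllers are ordinary functions returning ordinary dicts, they are easy to test in isolation, which fits the framework's emphasis on catching bugs during development.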


Among the many Python tools, few require the latest version of the language. Web2Py is one of those: it requires a recent Python 3 and will not run on older versions. Therefore, you need to keep an eye on updates.

Web2Py does an excellent job of its tasks. It is quite simple and accessible to everyone.


BlueBream was formerly called Zope 3. It copes well with tasks of medium and high complexity and is suitable for working on serious projects.


The BlueBream build system is quite powerful and suitable for complex tasks. You can create functional applications on it, and the principle of reuse of components makes the code easier. At the same time, the speed of development increases. The software can be scaled, and a transactional object database provides an easy path to store it. This means that queries are processed quickly and database management is simple.


This is not a very flexible framework, so it is better to know in advance exactly what is required of it. In addition, it cannot withstand heavy loads: with 1,000 simultaneous users it can crash and produce errors. Therefore, it should be used for narrow problems.

Python frameworks are often designed for specific tasks. BlueBream is one of these and is suitable for applications where database management plays a key role.


Python tools come in different forms and have vastly different capabilities. There are quite a few of them, but in 2021 these are the most popular and in-demand ones. Experienced programmers always choose several development tools for their comfortable work.

Abhijith PA: Changing LCD screen of car infotainment system

Wednesday 16th of June 2021 05:53:00 AM

I have a 2013 model used car that I bought two years ago. It came with a 7-inch touch screen infotainment system on its dashboard, with features like navigation, Bluetooth phone connectivity, and a good FM/AM radio. Except for the radio, I rarely use navigation or Bluetooth phone sync. After a couple of months the touch screen started to become unresponsive. Since all the important things such as call termination, mute, and volume control have physical switches, I was happy with it.

During a periodic car check-up at a local workshop, the mechanic pulled the battery terminals, locking the infotainment system. Now it asks for a 4-digit pass code. It's one of their anti-theft mechanisms, and in order to enter those digits you need a responsive touch screen. So now I am completely locked out.

I visited the service center for this car to get it changed; turns out they don't repair it, only replace the whole unit, which would cost me Rs 50,000. Considering my usage is restricted to the radio, that price is way too much.

So my next options were,

  1. Use a normal cheap car radio player. I dropped this plan, since stereo players come in very small sizes and I would need plastic placeholders to close off the rest of the area left by the 7-inch screen. This could mess up the aesthetics of the dashboard, and it would also affect the value of the car if I sell it in the future.

  2. Use a 7-inch third-party infotainment system available in the market. I enquired with a local car accessories shop and dropped this plan as well, because most of them are Android and come with an enormous number of pre-installed apps which I am never going to use anyway. And I don't need one more device that needs to be connected to the Internet. Also, the wiring at the back of these devices is different from what I already have, so it would need rewiring.

  3. The last option was to change the infotainment system parts on my own. I was under the impression that these units are made in-house, tightly assembled, and sealed with screws you have never seen in your life and will never be able to open up, let alone fix parts.

A couple of articles showed me that car infotainment system sizes and the wiring at the back are actually standardized. This particular system's OEM is LG.

After gaining confidence from several YouTube videos and articles, I chose the third option and started to disassemble the head unit. Luckily I had all the needed screwdriver heads. After taking everything apart, I could see that the back of the touch screen glass had bulged and was pushing against the display. The back of the display had a label with its manufacturer's name, Innolux Display, and its model number, AT070TN94. You might know this company from all those Raspberry Pi displays in the market; Tesla also uses Innolux displays in their infotainment units.

I was able to spot an online reseller in Hong Kong. I only needed a touch screen, but to be on the safe side I also placed an order for the display (the price difference was negligible). It took one month to reach me due to COVID restrictions. I assembled everything together and collected the radio pass code the very same day I received the item. Everything is working now. Yay!

The touch screen + display cost me Rs 6,000. I could easily get an average Android infotainment system for that price, but it was the price I paid for avoiding e-waste.

So if you have a broken head unit in your pre-2015-era car, try to change the parts yourself. It's not as complicated as our mobile phones.

Steinar H. Gunderson: Nikon D4 repair

Tuesday 15th of June 2021 07:45:25 PM
“Various error messages saved on sequence and aperture control unit in front module. Error also appears on testing here. The service period for this product is expired, and Nikon will not deliver parts. Thus, we can unfortunately not repair your camera. Charged for inspection.”

So, seriously, Nikon, I could understand it if this were a dinky $200 compact, or a phone which could no longer keep up with the burdens of the ecosystem it was part of, but this is a 2012 flagship DSLR. You pay $6000, and yet you can't even get parts nine years later? I'm usually not the one to complain the loudest about “planned obsolescence”, but I think stocking up on parts should be possible. :-)

Supposedly, a third-party repair shop still has D4 parts and the know-how to switch them without messing things up (which I don't). So with some luck, I'll get five more years or so out of it. Oh well, 131k exposures isn't bad at any rate…

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2021

Tuesday 15th of June 2021 12:04:55 PM

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In May, we again put aside 2100 EUR to fund Debian projects, but no proposals for new projects were received. Please do not hesitate to submit a proposal if there is a project that could benefit from the funding!

We’re looking forward to receive more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In May, 12 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned and 12h from April), thus carrying over 19h to June.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 16h (out of 13.5h assigned plus 4.5h from April), thus is carrying over 2h for June.
  • Chris Lamb did 18h (out of 18h assigned).
  • Holger Levsen coordinated/managed the LTS team; he did 5.5h and gave back 6.5h to the pool.
  • Markus Koschany did 15h (out of 29.75h assigned and 15h from April), thus carrying over 29.75h to June.
  • Ola Lundqvist did 12h (out of 12h assigned and 4.5h from April), thus carrying over 4.5h to June.
  • Roberto C. Sánchez did 7.5h (out of 27.5h assigned and 27h from April), and gave back 47h to the pool.
  • Sylvain Beucler did 29.75h (out of 29.75h assigned).
  • Thorsten Alteholz did 29.75h (out of 29.75h assigned).
  • Utkarsh Gupta did 29.75h (out of 29.75h assigned).
Evolution of the situation

In May we released 33 DLAs and mostly skipped our public IRC meeting at the end of the month. In June we’ll have another team meeting using video, as outlined on our LTS meeting page.
Also, two months ago we announced that Holger would step back from his coordinator role and today we are announcing that he is back for the time being, until a new coordinator is found.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Ben Hutchings: Debian LTS work, May 2021

Monday 14th of June 2021 09:47:37 PM

In May I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 4.5 hours from earlier months. I worked 16 hours and will carry over the remainder.

I finished reviewing the futex code in the PREEMPT_RT patchset for Linux 4.9, and identified several places where it had been mis-merged with the recent futex security fixes. I sent a patch for these upstream, which was accepted and applied in v4.9.268-rt180.

I have continued updating the Linux 4.9 package to later upstream stable versions, and backported some missing security fixes. I have still not made a new upload, but intend to do so this week.