Planet Debian - https://planet.debian.org/

Jonathan Dowland: vim-css-color

Friday 30th of September 2022 07:48:06 PM

Last year I wrote about a subset of the vim plugins I was using, specifically those created by master craftsman Tim Pope. Off and on since then I've reviewed the other plugins I use and tried a few others, so I thought I'd write about them.

automatic colour name colouring

vim-css-color is a simple plugin that recognises colour names specified in CSS style, e.g. 'red', '#ff0000', 'rgb(255,0,0)', and colours them accordingly. True to its name, once installed it's active when editing CSS files, but many other file types are also supported, and extending it further is not hard.

Reproducible Builds (diffoscope): diffoscope 223 released

Friday 30th of September 2022 12:00:00 AM

The diffoscope maintainers are pleased to announce the release of diffoscope version 223. This version includes the following changes:

[ Chris Lamb ]
* The cbfstools utility is now provided in Debian via the coreboot-utils Debian package, so we can enable that functionality within Debian. (Closes: #1020630)

[ Mattia Rizzolo ]
* Also include coreboot-utils in Build-Depends and Test-Depends so it is available for the tests.

[ Jelle van der Waa ]
* Add support for file 5.43.

You can find out more by visiting the project homepage.

Antoine Beaupré: Detecting manual (and optimizing large) package installs in Puppet

Thursday 29th of September 2022 07:05:40 PM

Well this is a mouthful.

I recently worked on a neat hack called puppet-package-check. It is designed to warn about manually installed packages, to make sure "everything is in Puppet". But it turns out it can (probably?) dramatically decrease Puppet's bootstrap time when it needs to install a large number of packages.

Detecting manual packages

On a cleanly managed workstation, it looks like this:

root@emma:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
0 unmanaged packages found

A messy workstation will look like this:

root@curie:/home/anarcat/bin# ./puppet-package-check -v listing puppet packages... listing apt packages... loading apt cache... 288 unmanaged packages found apparmor-utils beignet-opencl-icd bridge-utils clustershell cups-pk-helper davfs2 dconf-cli dconf-editor dconf-gsettings-backend ddccontrol ddrescueview debmake debootstrap decopy dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dnsdiag dropbear-initramfs ebtables efibootmgr elpa-lua-mode entr eog evince figlet file file-roller fio flac flex font-manager fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi freetype2-demos ftp fwupd-amd64-signed gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdisk gdm3 gdu gedit gedit-plugins gettext-base git-debrebase gnome-boxes gnote gnupg2 golang-any golang-docker-credential-helpers golang-golang-x-tools grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gtypist gvfs-backends hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-gtk3 ibus-libpinyin ibus-pinyin im-config imediff img2pdf imv initramfs-tools input-utils installation-birthday internetarchive ipmitool iptables iptraf-ng jackd2 jupyter jupyter-nbextension-jupyter-js-widgets jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools keyboard-configuration kfind konsole krb5-locales kwin-x11 leiningen lightdm lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 lynx lz4json magic-wormhole mailscripts mailutils manuskript mat2 mate-notification-daemon mate-themes mime-support mktorrent mp3splt mpdris2 msitools mtp-tools mtree-netbsd mupdf nautilus nautilus-sendto ncal nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache npm2deb ntfs-3g ntpdate nvme-cli nwipe obs-studio okular-extra-backends openstack-clients openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby ruby-dev ruby-feedparser ruby-magic ruby-mocha ruby-ronn rygel-playbin rygel-tracker s-tui sanoid saytime scrcpy scrcpy-server screenfetch scrot sdate sddm seahorse shim-signed sigil smartmontools smem smplayer sng sound-juicer sound-theme-freedesktop spectre-meltdown-checker sq ssh-audit sshuttle stress-ng strongswan strongswan-swanctl syncthing system-config-printer system-config-printer-common system-config-printer-udev systemd-bootchart systemd-container tardiff task-desktop task-english task-ssh-server tasksel tellico texinfo texlive-fonts-extra texlive-lang-cyrillic texlive-lang-french texlive-lang-german texlive-lang-italian texlive-xetex tftp-hpa thunar-archive-plugin tidy tikzit tint2 tintin++ tipa tpm2-tools traceroute tree trocla ucf udisks2 unifont unrar-free upower usbguard uuid-runtime vagrant-cachier vagrant-libvirt virt-manager vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas wireshark xapian-tools xclip xdg-user-dirs-gtk xlax xmlto xsensors xserver-xorg xsltproc xxd xz-utils yubioath-desktop zathura zathura-pdf-poppler zenity zfs-dkms zfs-initramfs zfsutils-linux zip zlib1g zlib1g-dev 157 old: apparmor-utils 
clustershell davfs2 dconf-cli dconf-editor ddccontrol ddrescueview decopy dnsdiag ebtables efibootmgr elpa-lua-mode entr figlet file-roller fio flac flex font-manager freetype2-demos ftp gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdu gedit git-debrebase gnote golang-docker-credential-helpers golang-golang-x-tools gtypist hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-pinyin imediff input-utils internetarchive ipmitool iptraf-ng jackd2 jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools kfind konsole leiningen lightdm lynx lz4json magic-wormhole manuskript mat2 mate-notification-daemon mktorrent mp3splt msitools mtp-tools mtree-netbsd nautilus nautilus-sendto nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache ntpdate nwipe obs-studio openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby-feedparser ruby-magic ruby-mocha ruby-ronn s-tui saytime scrcpy screenfetch scrot sdate seahorse shim-signed sigil smem smplayer sng sound-juicer spectre-meltdown-checker sq ssh-audit sshuttle stress-ng system-config-printer system-config-printer-common tardiff tasksel tellico texlive-lang-cyrillic texlive-lang-french tftp-hpa tikzit tint2 tintin++ tpm2-tools traceroute tree unrar-free vagrant-cachier vagrant-libvirt vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas xdg-user-dirs-gtk xlax xmlto xsensors xxd yubioath-desktop zenity zip 131 new: beignet-opencl-icd bridge-utils cups-pk-helper dconf-gsettings-backend debmake debootstrap dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dropbear-initramfs eog evince file fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi fwupd-amd64-signed gdisk gdm3 gedit-plugins gettext-base gnome-boxes gnupg2 golang-any grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gvfs-backends ibus-gtk3 ibus-libpinyin im-config img2pdf imv initramfs-tools installation-birthday iptables jupyter jupyter-nbextension-jupyter-js-widgets keyboard-configuration krb5-locales kwin-x11 lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 mailscripts mailutils mate-themes mime-support mpdris2 mupdf ncal npm2deb ntfs-3g nvme-cli okular-extra-backends openstack-clients plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap ruby ruby-dev rygel-playbin rygel-tracker sanoid scrcpy-server sddm smartmontools sound-theme-freedesktop strongswan strongswan-swanctl syncthing system-config-printer-udev systemd-bootchart systemd-container task-desktop task-english task-ssh-server texinfo texlive-fonts-extra texlive-lang-german texlive-lang-italian texlive-xetex thunar-archive-plugin tidy tipa trocla ucf udisks2 unifont upower usbguard uuid-runtime virt-manager wireshark xapian-tools xclip xserver-xorg xsltproc xz-utils zathura zathura-pdf-poppler zfs-dkms zfs-initramfs zfsutils-linux zlib1g zlib1g-dev

Yuck! That's a lot of shit to go through.

Notice how the packages get sorted between "old" and "new" packages. This is because popcon is used as a tool to mark which packages are "old". If you have unmanaged packages, the "old" ones are likely things that you can uninstall, for example.

If you don't have popcon installed, you'll also get this warning:

popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

The error can otherwise be safely ignored, but you won't get "help" prioritizing the packages to add to your manifests.

Note that the tool ignores packages that were "marked" (see apt-mark(8)) as automatically installed. This implies that you might have to do a little bit of cleanup the first time you run this, as Debian doesn't necessarily mark all of those packages correctly on first install. For example, here's what it looks like on a clean install, after Puppet ran:

root@angela:/home/anarcat# ./bin/puppet-package-check -v listing puppet packages... listing apt packages... loading apt cache... 127 unmanaged packages found ca-certificates console-setup cryptsetup-initramfs dbus file gcc-12-base gettext-base grub-common grub-efi-amd64 i3lock initramfs-tools iw keyboard-configuration krb5-locales laptop-detect libacl1 libapparmor1 libapt-pkg6.0 libargon2-1 libattr1 libaudit-common libaudit1 libblkid1 libbpf0 libbsd0 libbz2-1.0 libc6 libcap-ng0 libcap2 libcap2-bin libcom-err2 libcrypt1 libcryptsetup12 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libedit2 libelf1 libext2fs2 libfdisk1 libffi8 libgcc-s1 libgcrypt20 libgmp10 libgnutls30 libgpg-error0 libgssapi-krb5-2 libhogweed6 libidn2-0 libip4tc2 libiw30 libjansson4 libjson-c5 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 liblocale-gettext-perl liblockfile-bin liblz4-1 liblzma5 libmd0 libmnl0 libmount1 libncurses6 libncursesw6 libnettle8 libnewt0.52 libnftables1 libnftnl11 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnss-systemd libp11-kit0 libpam-systemd libpam0g libpcre2-8-0 libpcre3 libpcsclite1 libpopt0 libprocps8 libreadline8 libselinux1 libsemanage-common libsemanage2 libsepol2 libslang2 libsmartcols1 libss2 libssl1.1 libssl3 libstdc++6 libsystemd-shared libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo6 libtirpc-common libtirpc3 libudev1 libunistring2 libuuid1 libxtables12 libxxhash0 libzstd1 linux-image-amd64 logsave lsb-base lvm2 media-types mlocate ncurses-term pass-extension-otp puppet python3-reportbug shim-signed tasksel ucf usr-is-merged util-linux-extra wpasupplicant xorg zlib1g popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

Normally, there should be no unmanaged packages here. But because of the way Debian is installed, a lot of libraries and some core packages are marked as manually installed, and are of course not managed through Puppet. There are two solutions to this problem:

  • really manage everything in Puppet (argh)
  • mark packages as automatically installed

I typically choose the second path and mark a ton of stuff as automatic. Then they will either be auto-removed or stop being listed. In the above scenario, one could mark all libraries as automatically installed with:

apt-mark auto $(./bin/puppet-package-check | grep -o 'lib[^ ]*')

... but if you trust that most of that stuff is actually garbage that you don't really want installed anyways, you could just mark it all as automatically installed:

apt-mark auto $(./bin/puppet-package-check)

In my case, that ended up keeping basically all libraries (because of course they're installed for some reason) and auto-removing this:

dh-dkms discover-data dkms libdiscover2 libjsoncpp25 libssl1.1 linux-headers-amd64 mlocate pass-extension-otp pass-otp plocate x11-apps x11-session-utils xinit xorg

You'll notice xorg in there: yep, that's bad. Not what I wanted. But for some reason, on other workstations, I did not actually have xorg installed. Turns out having xserver-xorg is enough, and that one has dependencies. So now I guess I just learned to stop worrying and live without X(org).
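
Before committing to a bulk apt-mark auto like the one above, it can be worth double-checking what is currently marked and previewing what apt would actually remove. A minimal sketch using standard apt tooling (nothing specific to puppet-package-check):

apt-mark showmanual | wc -l    # packages apt currently considers manually installed
apt-mark showauto | wc -l      # packages already marked as automatically installed
apt-get --simulate autoremove  # dry run: shows what would be removed, changes nothing

The simulate run is cheap insurance against surprises like the xorg removal above.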

Optimizing large package installs

But that, of course, is not all. Why make things simple when you can have an unreadable title that is trying to be both syntactically correct and click-baity enough to flatter my vain ego? Right.

One of the challenges in bootstrapping Puppet with large package lists is that it's slow. Puppet lists packages as individual resources and will basically run apt install $PKG on every package in the manifest, one at a time. While the overhead of apt is generally small, when you add things like apt-listbugs, apt-listchanges, needrestart, triggers and so on, it can take forever setting up a new host.
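
As an aside, part of that overhead comes from those hooks, and they can be silenced for a given apt run, assuming apt-listbugs and apt-listchanges honour their usual frontend environment variables (a sketch; the package names are placeholders):

APT_LISTBUGS_FRONTEND=none APT_LISTCHANGES_FRONTEND=none \
  apt-get install -y package1 package2

That only helps for runs you launch yourself, though; the per-package apt invocations Puppet performs still add up.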

So for initial installs, it can actually make sense to skip the queue and just install everything in one big batch.

And because the above tool inspects the packages installed by Puppet, you can run it against a catalog and get a full list of all the packages Puppet would install, even before Puppet has run at all.

So when reinstalling my laptop, I basically did this:

apt install puppet-agent/experimental
puppet agent --test --noop
apt install $(./puppet-package-check --debug \
    2>&1 | grep ^puppet\ packages | sed 's/puppet packages://;s/ /\n/g' |
    grep -v -e onionshare -e golint -e git-sizer -e github-backup -e hledger \
            -e xsane -e audacity -e chirp -e elpa-flycheck -e elpa-lsp-ui \
            -e yubikey-manager -e git-annex -e hopenpgp-tools -e puppet \
    ) puppet-agent/experimental

That massive grep was because there are currently a lot of packages missing from bookworm. Those are all packages that I have in my catalog but that still haven't made it to bookworm. Sad, I know. I eventually worked around that by adding bullseye sources so that the Puppet manifest actually ran.

The point here is that this improves the Puppet run time a lot. All packages get installed at once, and you get a nice progress bar. Then you actually run Puppet to deploy configurations and all the other goodies:

puppet agent --test

I wish I could tell you how much faster that ran. I don't know, and I will not go through a full reinstall just to please your curiosity. The only hard number I have is that it installed 444 packages (which exploded into 10,191 packages with dependencies) in a mere 10 minutes. That might also be with the packages already downloaded.

In any case, I have that gut feeling it's faster, so you'll have to just trust my gut. It is, after all, much more important than you might think.

Similar work

The blueprint system does something similar to this:

It figures out what you’ve done manually, stores it locally in a Git repository, generates code that’s able to recreate your efforts, and helps you deploy those changes to production

That tool has unfortunately been abandoned for a decade at this point.

Also note that the AutoRemove::RecommendsImportant and AutoRemove::SuggestsImportant apt settings are relevant here. If set to true (the default), a package will not be auto-removed if it is (respectively) a Recommends or Suggests of another package (as opposed to a normal Depends). In other words, if you want to also auto-remove packages that are only Suggests, you would, for example, add this to apt.conf:

AutoRemove::SuggestsImportant false;
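
To see what your system currently has for these options, apt-config can print the effective values:

apt-config dump | grep -i AutoRemove

If nothing relevant shows up, the default (true) applies.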

Paul Wise has tried to make the Debian installer and debootstrap properly mark packages as automatically installed in the past, but his bug reports were rejected. The other suggestions in this section are also from Paul, thanks!

Russell Coker: Links September 2022

Thursday 29th of September 2022 12:55:58 PM

Tony Kern wrote an insightful document about the crash of a B-52 at Fairchild air base in 1994 as a case study of failed leadership [1].

Cory Doctorow wrote an insightful medium article “We Should Not Endure a King” describing the case for anti-trust laws [2]. We need them badly.

An insightful Guardian article about how reasonable responses to the bad situations people find themselves in get diagnosed as mental health problems [3]. Providing better mental healthcare is good, but the government should also work on poverty etc.

Cory Doctorow wrote an insightful Locus article about some of the issues that have to be dealt with in applying anti-trust legislation to tech companies [4]. We really need this to be done.

Ars Technica has an interesting article about Stable Diffusion, an open source ML system for generating images [5]; the results it can produce are very impressive. One interesting thing is that the license has a set of conditions for usage which precludes exploiting or harming minors or generating false information [6]. This means it will need to go in the non-free section of Debian at best.

Dan Wang wrote an interesting article on optimism as human capital [7] which covers the reasons that people feel inspired to create things.

Related posts:

  1. Links September 2020 MD5 cracker, find plain text that matches MD5 hash [1]....
  2. Links Aug 2022 Armor is an interesting technology from Manchester University for stopping...
  3. Links July 2022 Darren Hayes wrote an interesting article about his battle with...

Jelmer Vernooij: Northcape 4000

Wednesday 28th of September 2022 10:00:00 PM

This summer, I signed up to participate in the Northcape 4000 <https://www.northcape4000.com/>, an annual 4000km bike ride between Rovereto (in northern Italy) and the northernmost point of Europe, the North Cape.

The Northcape event has been held for several years, and while it always ends on the North Cape, the route there varies. Last year's route went through the Baltics, but this year's was perhaps as direct as possible - taking us through Italy, Austria, Switzerland, Germany, the Czech Republic, Germany again, Sweden, Finland and finally Norway.

The ride is unsupported, meaning you have to find your own food and accommodation and can only avail yourself of resupply and sleeping options on the route that are available to everybody else as well. The event is not meant to be a race (unlike the Transcontinental, which starts on the same day), so there is a minimum time to finish it in (10 days) and a maximum (21 days).

Unfortunately, this meant skipping some other events I'd wanted to attend (DebConf, MCH).

Ian Jackson: Hippotat (IP over HTTP) - first advertised release

Wednesday 28th of September 2022 08:12:48 PM

I have released version 1.0.0 of Hippotat, my IP-over-HTTP system. To quote the README:

You’re in a cafe or a hotel, trying to use the provided wifi. But it’s not working. You discover that port 80 and port 443 are open, but the wifi forbids all other traffic.

Never mind, start up your hippotat client. Now you have connectivity. Your VPN and SSH and so on run over Hippotat. The result is not very efficient, but it does work.

Story

In early 2017 I was in a mountaintop cafeteria, hoping to do some work on my laptop. (For Reasons I couldn't go skiing that day.) I found that the local wifi was badly broken: it had a severe port block. I had to use my port 443 SSH server to get anywhere. My usual arrangements punt everything over my VPN, which uses UDP of course, and I had to bodge several things. Using a web browser directly on the wifi worked normally, of course - otherwise the other guests would have complained. This was not the first experience like this I'd had, but this time I had nothing much else to do but fix it.

In a few furious hacking sessions, I wrote Hippotat, a tool for making my traffic look enough like “ordinary web browsing” that it gets through most stupid firewalls. That Python version of Hippotat served me well for many years, despite being rather shonky, extremely inefficient in CPU (and therefore battery) terms and not very productised.

But recently things have started to go wrong. I was using Twisted Python and there was what I think must be some kind of buffer handling bug, which started happening when I upgraded the OS (getting newer versions of Python and the Twisted libraries). The Hippotat code, and the Twisted APIs, were quite convoluted, and I didn’t fancy debugging it.

So last year I rewrote it in Rust. The new Rust client did very well against my existing servers. To my shame, I didn’t get around to releasing it.

However, more recently I upgraded the server hosts my Hippotat daemons run on to recent Debian releases. They started to be affected by the bug too, rendering my Rust client unusable. I decided I had to deploy the Rust server code.

This involved some packaging work. Having done that, it’s time to release it: Hippotat 1.0.0 is out.

The package build instructions are rather strange

My usual approach to releasing something like this would be to provide a git repository containing a proper Debian source package. I might also build binaries, using sbuild, and I would consider actually uploading to Debian.

However, despite me taking a fairly conservative approach to adding dependencies to Hippotat, still a couple of the (not very unusual) Rust packages that Hippotat depends on are not in Debian. Last year I considered tackling this head-on, but I got derailed by difficulties with Rust packaging in Debian.

Furthermore, the version of the Rust compiler itself in Debian stable is incapable of dealing with recent versions of very many upstream Rust packages, because many packages’ most recent versions now require the 2021 Edition of Rust. Sadly, Rust’s package manager, cargo, has no mechanism for trying to choose dependency versions that are actually compatible with the available compiler; efforts to solve this problem have still not borne the needed fruit.

The result is that, in practice, currently Hippotat has to be built with (a) a reasonably recent Rust toolchain such as found in Debian unstable or obtained from Rust upstream; (b) dependencies obtained from the upstream Rust repository.

At least things aren’t completely terrible: Rustup itself, despite its alarming install rune, has a pretty good story around integrity, release key management and so on. And with the right build rune, cargo will check not just the versions, but the precise content hashes, of the dependencies to be obtained from crates.io, against the information I provide in the Cargo.lock file. So at least when you build it you can be sure that the dependencies you’re getting are the same ones I used myself when I built and tested Hippotat. And there’s only 147 of them (counting indirect dependencies too), so what could possibly go wrong?

Sadly the resulting package build system cannot work with Debian’s best tool for doing clean and controlled builds, sbuild. Under the circumstances, I don’t feel I want to publish any binaries.




Vincent Fourmond: Version 3.1 of QSoas is out

Wednesday 28th of September 2022 01:29:04 PM
The new version of QSoas has just been released! It brings in a host of new features, as did the releases before it, but maybe the most important change is the following...

Binary images now freely available!

From now on, all the binary images for new versions of QSoas will be freely available from the download page. You can download the precompiled versions of QSoas for MacOS or Windows. So now, you have no reason anymore not to try!
My aim with making the binaries freely available is also to simplify the release process for me and therefore increase the rate at which new versions are released.

Improvements to the fit interface

Some work went into improving the fit interface, in particular for the handling of fit trajectories when doing parameter space exploration, for difficult fits with many parameters and many local minima. The fit window now features real menus, along with tabs and a way to display the terminal (see the menus and the tabs selection on the image).
Individual fits have also been improved, with, among others, the possibility to easily simulate voltammograms with the kinetic-system fits, and the handling of Marcus-Hush-Chidsey (or Marcus "distribution of states") kinetics for electron transfers.

Column and row names

This release greatly improves the handling of column and row names, including commands to easily modify them, the possibility to use Ruby formulas to change them, and a much better way to read and write them to data files. Mastering the use of column names (and, to a lesser extent, row names) can greatly simplify data handling, especially when dealing with files with a large number of columns.

Complex numbers

Version 3.1 brings in support for formulas handling complex numbers. Although it is not possible to store complex numbers directly into datasets, it is easy to separate them into real and imaginary parts to your liking.

Scripting improvements

Two important improvements for scripting are included in version 3.1. The first is the possibility to define virtual files inside a script file, which makes it easy to define subfunctions to run using commands like run-for-each. The second is the possibility to define variables to be reused later (like the script arguments) using the new command let.

There are a lot of other new features and improvements; look for the full list there.

About QSoas
QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.1. You can download its source code or precompiled versions for MacOS and Windows there. Alternatively, you can clone from the GitHub repository.

François Marier: Upgrading from chan_sip to res_pjsip in Asterisk 18

Tuesday 27th of September 2022 11:30:00 PM

After upgrading to Ubuntu Jammy and Asterisk 18.10, I saw the following messages in my logs:

WARNING[360166]: loader.c:2487 in load_modules: Module 'chan_sip' has been loaded but was deprecated in Asterisk version 17 and will be removed in Asterisk version 21.
WARNING[360174]: chan_sip.c:35468 in deprecation_notice: chan_sip has no official maintainer and is deprecated. Migration to
WARNING[360174]: chan_sip.c:35469 in deprecation_notice: chan_pjsip is recommended. See guides at the Asterisk Wiki:
WARNING[360174]: chan_sip.c:35470 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Migrating+from+chan_sip+to+res_pjsip
WARNING[360174]: chan_sip.c:35471 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Configuring+res_pjsip

and so I decided it was time to stop postponing the overdue migration of my working setup from chan_sip to res_pjsip.

It turns out that it was not as painful as I expected, though the conversion script bundled with Asterisk didn't work for me out of the box.

Debugging

Before you start, one very important thing to note is that the SIP debug information you used to see when running this in the asterisk console (asterisk -r):

sip set debug on

now lives behind this command:

pjsip set logger on

SIP phones

The first thing I migrated was the config for my two SIP phones (Snom 300 and Snom D715).

The original config for them in sip.conf was:

[2000] ; Snom 300
type=friend
qualify=yes
secret=password123
encryption=no
context=full
host=dynamic
nat=no
directmedia=no
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

[2001] ; Snom D715
type=friend
qualify=yes
secret=password456
encryption=no
context=full
host=dynamic
nat=no
directmedia=yes
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

and that became the following in pjsip.conf:

[transport-udp]
type = transport
protocol = udp
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0

[2000]
type = aor
max_contacts = 1

[2000]
type = auth
username = 2000
password = password123

[2000]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = no
mailboxes = 10@internal
auth = 2000
outbound_auth = 2000
aors = 2000

[2001]
type = aor
max_contacts = 1

[2001]
type = auth
username = 2001
password = password456

[2001]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = yes
mailboxes = 10@internal
auth = 2001
outbound_auth = 2001
aors = 2001

The different direct_media line between the two phones has to do with how they each connect to my Asterisk server and whether or not they have access to the Internet.
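
To double-check that both endpoints were loaded and that the phones have registered, the pjsip CLI show commands are handy, e.g. from a shell:

asterisk -rx "pjsip show endpoints"   # endpoints with their contacts and status
asterisk -rx "pjsip show aors"        # address-of-record entries and max_contacts

The endpoint listing includes the registered contact and its reachability, which makes it easy to spot a phone that failed to register.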

Internal calls

For some reason, my internal calls (from one SIP phone to the other) didn't work when using "aliases". I fixed it by changing this blurb in extensions.conf from:

[speeddial]
exten => 1000,1,Dial(SIP/2000,20)
exten => 1001,1,Dial(SIP/2001,20)

to:

[speeddial]
exten => 1000,1,Dial(${PJSIP_DIAL_CONTACTS(2000)},20)
exten => 1001,1,Dial(${PJSIP_DIAL_CONTACTS(2001)},20)

I have not yet dug into what this changes or why it's necessary and so feel free to leave a comment if you know more here.
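
If you want to poke at it yourself, the Asterisk console can show the built-in help for that dialplan function and list the contacts it works from:

asterisk -rx "core show function PJSIP_DIAL_CONTACTS"   # built-in description of the function
asterisk -rx "pjsip show contacts"                      # currently registered contacts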

PSTN trunk

Once I had the internal phones working, I moved to making and receiving phone calls over the PSTN, for which I use VoIP.ms with encryption.

I had to change the following in my sip.conf:

[general]
register => tls://555123_myasterisk:password789@vancouver2.voip.ms
externhost=myasterisk.dyn.example.com
localnet=192.168.0.0/255.255.0.0
tcpenable=yes
tlsenable=yes
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
tlscapath=/etc/ssl/certs/

[voipms]
type=peer
host=vancouver2.voip.ms
secret=password789
defaultuser=555123_myasterisk
context=from-voipms
disallow=all
allow=ulaw
allow=g729
insecure=port,invite
canreinvite=no
trustrpid=yes
sendrpid=yes
transport=tls
encryption=yes

to the following in pjsip.conf:

[transport-tls]
type = transport
protocol = tls
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0
cert_file = /etc/asterisk/asterisk.cert
priv_key_file = /etc/asterisk/asterisk.key
ca_list_path = /etc/ssl/certs/
method = tlsv1_2

[voipms]
type = registration
transport = transport-tls
outbound_auth = voipms
client_uri = sip:555123_myasterisk@vancouver2.voip.ms
server_uri = sip:vancouver2.voip.ms

[voipms]
type = auth
password = password789
username = 555123_myasterisk

[voipms]
type = aor
contact = sip:555123_myasterisk@vancouver2.voip.ms

[voipms]
type = identify
endpoint = voipms
match = vancouver2.voip.ms

[voipms]
type = endpoint
context = from-voipms
disallow = all
allow = ulaw
allow = g729
from_user = 555123_myasterisk
trust_id_inbound = yes
media_encryption = sdes
auth = voipms
outbound_auth = voipms
aors = voipms
rtp_symmetric = yes
rewrite_contact = yes
send_rpid = yes
timers = no

The TLS method line is needed since the default in Debian OpenSSL is too strict. The timers line is to prevent outbound calls from getting dropped after 15 minutes.
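
Once the config is loaded, the state of the outbound registration to VoIP.ms can be checked from the Asterisk console:

asterisk -rx "pjsip show registrations"   # should show the voipms registration as Registered

Seeing it registered there is a good sanity check before testing inbound and outbound calls.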

Finally, I changed the Dial() lines in these extensions.conf blurbs from:

[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(SIP/2000&SIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(SIP/voipms/${EXTEN})
exten => _00X.,n,Hangup()

to:

[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(PJSIP/2000&PJSIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(PJSIP/1${EXTEN}@voipms)
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _00X.,n,Hangup()

Note that it's not just a matter of replacing SIP/ with PJSIP/; it was also necessary to use a channel format supported by pjsip, since SIP/trunkname/extension isn't supported by pjsip.

Steve McIntyre: Firmware again - updates, how I'm voting and why!

Tuesday 27th of September 2022 05:46:00 PM
Updates

Back in April I wrote about issues with how we handle firmware in Debian, and I also spoke about it at DebConf in July. Since then, we've started the General Resolution process - this led to a lot of discussion on the debian-vote mailing list and we're now into the second week of the voting phase.

The discussion has caught the interest of a few news sites along the way.

My vote

I've also had several people ask me how I'm voting myself, as I started this GR in the first place. I'm happy to oblige! Here's my vote, sorted into preference order:

[1] Choice 5: Change SC for non-free firmware in installer, one installer
[2] Choice 1: Only one installer, including non-free firmware
[3] Choice 6: Change SC for non-free firmware in installer, keep both installers
[4] Choice 2: Recommend installer containing non-free firmware
[5] Choice 3: Allow presenting non-free installers alongside the free one
[6] Choice 7: None Of The Above
[7] Choice 4: Installer with non-free software is not part of Debian

Why have I voted this way?

Fundamentally, my motivation for starting this vote was to ask the project for clear positive direction on a sensible way forward with non-free firmware support. Thus, I've voted all of the options that do that above NOTA. On those terms, I don't like Choice 4 here - IMHO it leaves us in the same unclear situation as before.

I'd be happy for us to update the Social Contract for clarity, and I know some people would be much more comfortable if we do that explicitly here. Choice 1 was my initial personal preference as we started the GR, but since then I've been convinced that also updating the SC would be a good idea, hence Choice 5.

I'd also rather have a single image / set of images produced, for the two reasons I've outlined before. It's less work for our images team to build and test all the options. But, much more importantly: I believe it's less likely to confuse new users.

I appreciate that not everybody agrees with me here, and this is part of the reason why we're voting!

Other Debian people have also blogged about their voting choices (Gunnar Wolf and Ian Jackson so far), and I thank them for sharing their reasoning too.

For the avoidance of doubt: my goal for this vote was simply to get a clear direction on how to proceed here. Although I proposed Choice 1 (Only one installer, including non-free firmware), I also seconded several of the other ballot options. Of course I will accept the will of the project when the result is announced - I'm not going to do anything silly like throw a tantrum or quit the project over this!

Finally

If you're a DD and you haven't voted already, please do so - this is an important choice for the Debian project.


Bits from Debian: New Debian Developers and Maintainers (July and August 2022)

Monday 26th of September 2022 02:00:00 PM

The following contributors got their Debian Developer accounts in the last two months:

  • Sakirnth Nagarasa (sakirnth)
  • Philip Rinn (rinni)
  • Arnaud Rebillout (arnaudr)
  • Marcos Talau (talau)

The following contributors were added as Debian Maintainers in the last two months:

  • Xiao Sheng Wen
  • Andrea Pappacoda
  • Robin Jarry
  • Ben Westover
  • Michel Alexandre Salim

Congratulations!

Sergio Talens-Oliag: Kubernetes Static Content Server

Sunday 25th of September 2022 10:12:00 PM

This post describes how I’ve put together a simple static content server for kubernetes clusters using a Pod with a persistent volume and multiple containers: an sftp server to manage contents, a web server to publish them with optional access control and another one to run scripts which need access to the volume filesystem.

The sftp server runs using MySecureShell, the web server is nginx and the script runner uses the webhook tool to publish endpoints to call them (the calls will come from other Pods that run backend servers or are executed from Jobs or CronJobs).

Note:

This service has been developed for Kyso and the version used in our current architecture includes an additional container to index documents for Elasticsearch, but as it is not relevant for the description of the service as a general solution I've decided to ignore it in this post.

History

The system was developed because we had a NodeJS API with endpoints to upload files and store them on S3 compatible services that were later accessed via HTTPS, but the requirements changed and we needed to be able to publish folders instead of individual files using their original names and apply access restrictions using our API.

Thinking about our requirements the use of a regular filesystem to keep the files and folders was a good option, as uploading and serving files is simple.

For the upload I decided to use the sftp protocol, mainly because I already had an sftp container image based on mysecureshell prepared; once we settled on that we added sftp support to the API server and configured it to upload the files to our server instead of using S3 buckets.

To publish the files we added an nginx container configured to work as a reverse proxy that uses the ngx_http_auth_request_module to validate access to the files (the sub request is configurable; in our deployment we have configured it to call our API to check if the user can access a given URL).

Finally we added a third container when we needed to execute some tasks directly on the filesystem (using kubectl exec with the existing containers did not seem a good idea, as that is not supported by CronJob objects, for example).

The solution we found, avoiding NIH syndrome (i.e. writing our own tool), was to use the webhook tool to provide the endpoints to call the scripts; for now we have three:

  • one to get the disk usage of a PATH,
  • one to hardlink all the files that are identical on the filesystem,
  • one to copy files and folders from S3 buckets to our filesystem.
Container definitions

mysecureshell

The mysecureshell container can be used to provide an sftp service with multiple users (although the files are owned by the same UID and GID) using standalone containers (launched with docker or podman) or in an orchestration system like kubernetes, as we are going to do here.

The image is generated using the following Dockerfile:

ARG ALPINE_VERSION=3.16.2
FROM alpine:$ALPINE_VERSION as builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN apk update &&\
  apk add --no-cache alpine-sdk git musl-dev &&\
  git clone https://github.com/sto/mysecureshell.git &&\
  cd mysecureshell &&\
  ./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man\
  --localstatedir=/var --with-shutfile=/var/lib/misc/sftp.shut --with-debug=2 &&\
  make all && make install &&\
  rm -rf /var/cache/apk/*
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
COPY --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell
COPY --from=builder /usr/bin/sftp-* /usr/bin/
RUN apk update &&\
  apk add --no-cache openssh shadow pwgen &&\
  sed -i -e "s|^.*\(AuthorizedKeysFile\).*$|\1 /etc/ssh/auth_keys/%u|"\
  /etc/ssh/sshd_config &&\
  mkdir /etc/ssh/auth_keys &&\
  cat /dev/null > /etc/motd &&\
  add-shell '/usr/bin/mysecureshell' &&\
  rm -rf /var/cache/apk/*
COPY bin/* /usr/local/bin/
COPY etc/sftp_config /etc/ssh/
COPY entrypoint.sh /
EXPOSE 22
VOLUME /sftp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]

Note:

Initially the container used the mysecureshell package included in alpine, but we wanted to be able to create hardlinks from the client and that support is only available on the master branch of the source repository; that is why we are compiling our own binary using a multi-stage Dockerfile.

Note that we are cloning the source from a fork that includes this pull request because we had to fix a couple of minor issues to make the ln command work as expected.

The /etc/sftp_config file is used to configure the mysecureshell server to have all the user homes under /sftp/data, only allow them to see the files under their home directories as if they were at the root of the server, and close idle connections after 5m of inactivity:

etc/sftp_config

# Default mysecureshell configuration
<Default>
   # All users will have access their home directory under /sftp/data
   Home /sftp/data/$USER
   # Log to a file inside /sftp/logs/ (only works when the directory exists)
   LogFile /sftp/logs/mysecureshell.log
   # Force users to stay in their home directory
   StayAtHome true
   # Hide Home PATH, it will be shown as /
   VirtualChroot true
   # Hide real file/directory owner (just change displayed permissions)
   DirFakeUser true
   # Hide real file/directory group (just change displayed permissions)
   DirFakeGroup true
   # We do not want users to keep forever their idle connection
   IdleTimeOut 5m
</Default>
# vim: ts=2:sw=2:et

The entrypoint.sh script is the one responsible for preparing the container for the users included in the /secrets/user_pass.txt file (it creates the users with their HOME directories under /sftp/data and a /bin/false shell, and creates the key files from /secrets/user_keys.txt if available).

The script expects a couple of environment variables:

  • SFTP_UID: UID used to run the daemon and for all the files, it has to be different than 0 (all the files managed by this daemon are going to be owned by the same user and group, even if the remote users are different).
  • SFTP_GID: GID used to run the daemon and for all the files, it has to be different than 0.

And can use the SSH_PORT and SSH_PARAMS values if present.

It also requires the following files (they can be mounted as secrets in kubernetes):

  • /secrets/host_keys.txt: Text file containing the ssh server keys in mime format; the file is processed using the reformime utility (the one included on busybox) and can be generated using the gen-host-keys script included on the container (it uses ssh-keygen and makemime).
  • /secrets/user_pass.txt: Text file containing lines of the form username:password_in_clear_text (only the users included on this file are available on the sftp server, in fact in our deployment we use only the scs user for everything).

And optionally can use another one:

  • /secrets/user_keys.txt: Text file that contains lines of the form username:public_ssh_ed25519_or_rsa_key; the public keys are installed on the server and can be used to log into the sftp server if the username exists on the user_pass.txt file.
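
As a sketch of how these files can be provided to the Pod in kubernetes (the secret name here is made up, not the one used in the real deployment), a single Secret with all three keys can be created from local files and later mounted on /secrets:

kubectl create secret generic scs-sftp-secrets \
  --from-file=host_keys.txt=host_keys.txt \
  --from-file=user_pass.txt=user_pass.txt \
  --from-file=user_keys.txt=user_keys.txt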

The contents of the entrypoint.sh script are:

entrypoint.sh #!/bin/sh set -e # --------- # VARIABLES # --------- # Expects SSH_UID & SSH_GID on the environment and uses the value of the # SSH_PORT & SSH_PARAMS variables if present # SSH_PARAMS SSH_PARAMS="-D -e -p ${SSH_PORT:=22} ${SSH_PARAMS}" # Fixed values # DIRECTORIES HOME_DIR="/sftp/data" CONF_FILES_DIR="/secrets" AUTH_KEYS_PATH="/etc/ssh/auth_keys" # FILES HOST_KEYS="$CONF_FILES_DIR/host_keys.txt" USER_KEYS="$CONF_FILES_DIR/user_keys.txt" USER_PASS="$CONF_FILES_DIR/user_pass.txt" USER_SHELL_CMD="/usr/bin/mysecureshell" # TYPES HOST_KEY_TYPES="dsa ecdsa ed25519 rsa" # --------- # FUNCTIONS # --------- # Validate HOST_KEYS, USER_PASS, SFTP_UID and SFTP_GID _check_environment() { # Check the ssh server keys ... we don't boot if we don't have them if [ ! -f "$HOST_KEYS" ]; then cat <<EOF We need the host keys on the '$HOST_KEYS' file to proceed. Call the 'gen-host-keys' script to create and export them on a mime file. EOF exit 1 fi # Check that we have users ... if we don't we can't continue if [ ! -f "$USER_PASS" ]; then cat <<EOF We need at least the '$USER_PASS' file to provision users. Call the 'gen-users-tar' script to create a tar file to create an archive that contains public and private keys for users, a 'user_keys.txt' with the public keys of the users and a 'user_pass.txt' file with random passwords for them (pass the list of usernames to it). EOF exit 1 fi # Check SFTP_UID if [ -z "$SFTP_UID" ]; then echo "The 'SFTP_UID' can't be empty, pass a 'GID'." exit 1 fi if [ "$SFTP_UID" -eq "0" ]; then echo "The 'SFTP_UID' can't be 0, use a different 'UID'" exit 1 fi # Check SFTP_GID if [ -z "$SFTP_GID" ]; then echo "The 'SFTP_GID' can't be empty, pass a 'GID'." exit 1 fi if [ "$SFTP_GID" -eq "0" ]; then echo "The 'SFTP_GID' can't be 0, use a different 'GID'" exit 1 fi } # Adjust ssh host keys _setup_host_keys() { opwd="$(pwd)" tmpdir="$(mktemp -d)" cd "$tmpdir" ret="0" reformime <"$HOST_KEYS" || ret="1" for kt in $HOST_KEY_TYPES; do key="ssh_host_${kt}_key" pub="ssh_host_${kt}_key.pub" if [ ! -f "$key" ]; then echo "Missing '$key' file" ret="1" fi if [ ! 
-f "$pub" ]; then echo "Missing '$pub' file" ret="1" fi if [ "$ret" -ne "0" ]; then continue fi cat "$key" >"/etc/ssh/$key" chmod 0600 "/etc/ssh/$key" chown root:root "/etc/ssh/$key" cat "$pub" >"/etc/ssh/$pub" chmod 0600 "/etc/ssh/$pub" chown root:root "/etc/ssh/$pub" done cd "$opwd" rm -rf "$tmpdir" return "$ret" } # Create users _setup_user_pass() { opwd="$(pwd)" tmpdir="$(mktemp -d)" cd "$tmpdir" ret="0" [ -d "$HOME_DIR" ] || mkdir "$HOME_DIR" # Make sure the data dir can be managed by the sftp user chown "$SFTP_UID:$SFTP_GID" "$HOME_DIR" # Allow the user (and root) to create directories inside the $HOME_DIR, if # we don't allow it the directory creation fails on EFS (AWS) chmod 0755 "$HOME_DIR" # Create users echo "sftp:sftp:$SFTP_UID:$SFTP_GID:::/bin/false" >"newusers.txt" sed -n "/^[^#]/ { s/:/ /p }" "$USER_PASS" | while read -r _u _p; do echo "$_u:$_p:$SFTP_UID:$SFTP_GID::$HOME_DIR/$_u:$USER_SHELL_CMD" done >>"newusers.txt" newusers --badnames newusers.txt # Disable write permission on the directory to forbid remote sftp users to # remove their own root dir (they have already done it); we adjust that # here to avoid issues with EFS (see before) chmod 0555 "$HOME_DIR" # Clean up the tmpdir cd "$opwd" rm -rf "$tmpdir" return "$ret" } # Adjust user keys _setup_user_keys() { if [ -f "$USER_KEYS" ]; then sed -n "/^[^#]/ { s/:/ /p }" "$USER_KEYS" | while read -r _u _k; do echo "$_k" >>"$AUTH_KEYS_PATH/$_u" done fi } # Main function exec_sshd() { _check_environment _setup_host_keys _setup_user_pass _setup_user_keys echo "Running: /usr/sbin/sshd $SSH_PARAMS" # shellcheck disable=SC2086 exec /usr/sbin/sshd -D $SSH_PARAMS } # ---- # MAIN # ---- case "$1" in "server") exec_sshd ;; *) exec "$@" ;; esac # vim: ts=2:sw=2:et

The container also includes a couple of auxiliary scripts, the first one can be used to generate the host_keys.txt file as follows:

$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt

Where the script is as simple as:

bin/gen-host-keys

#!/bin/sh
set -e
# Generate new host keys
ssh-keygen -A >/dev/null
# Replace hostname
sed -i -e 's/@.*$/@mysecureshell/' /etc/ssh/ssh_host_*_key.pub
# Print in mime format (stdout)
makemime /etc/ssh/ssh_host_*
# vim: ts=2:sw=2:et

And there is another script to generate a .tar file that contains auth data for the list of usernames passed to it (the file contains a user_pass.txt file with random passwords for the users, public and private ssh keys for them and the user_keys.txt file that matches the generated keys).

To generate a tar file for the user scs we can execute the following:

$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar

To see the contents and the text inside the user_pass.txt file we can do:

$ tar tvf /tmp/scs-users.tar
-rw-r--r-- root/root        21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root       822 2022-09-11 15:55 user_keys.txt
-rw------- root/root       387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root        85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root      3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root      3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root       729 2022-09-11 15:55 id_rsa-scs.pub
$ tar xfO /tmp/scs-users.tar user_pass.txt
scs:20JertRSX2Eaar4x

The source of the script is:

bin/gen-users-tar

#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
USER_KEYS_FILE="user_keys.txt"
USER_PASS_FILE="user_pass.txt"
# ---------
# MAIN CODE
# ---------
# Generate user passwords and keys, return 1 if no username is received
if [ "$#" -eq "0" ]; then
  return 1
fi
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
for u in "$@"; do
  ssh-keygen -q -a 100 -t ed25519 -f "id_ed25519-$u" -C "$u" -N ""
  ssh-keygen -q -a 100 -b 4096 -t rsa -f "id_rsa-$u" -C "$u" -N ""
  # Legacy RSA private key format
  cp -a "id_rsa-$u" "id_rsa-$u.pem"
  ssh-keygen -q -p -m pem -f "id_rsa-$u.pem" -N "" -P "" >/dev/null
  chmod 0600 "id_rsa-$u.pem"
  echo "$u:$(pwgen -s 16 1)" >>"$USER_PASS_FILE"
  echo "$u:$(cat "id_ed25519-$u.pub")" >>"$USER_KEYS_FILE"
  echo "$u:$(cat "id_rsa-$u.pub")" >>"$USER_KEYS_FILE"
done
tar cf - "$USER_PASS_FILE" "$USER_KEYS_FILE" id_* 2>/dev/null
cd "$opwd"
rm -rf "$tmpdir"
# vim: ts=2:sw=2:et

nginx-scs

The nginx-scs container is generated using the following Dockerfile:

ARG NGINX_VERSION=1.23.1
FROM nginx:$NGINX_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN rm -f /docker-entrypoint.d/*
COPY docker-entrypoint.d/* /docker-entrypoint.d/

Basically we are removing the existing docker-entrypoint.d scripts from the standard image and adding a new one that configures the web server as we want using a couple of environment variables:

  • AUTH_REQUEST_URI: URL to use for the auth_request, if the variable is not found on the environment auth_request is not used.
  • HTML_ROOT: Base directory of the web server, if not passed the default /usr/share/nginx/html is used.

Note that if we don’t pass the variables everything works as if we were using the original nginx image.

The contents of the configuration script are:

docker-entrypoint.d/10-update-default-conf.sh #!/bin/sh # Replace the default.conf nginx file by our own version. set -e if [ -z "$HTML_ROOT" ]; then HTML_ROOT="/usr/share/nginx/html" fi if [ "$AUTH_REQUEST_URI" ]; then cat >/etc/nginx/conf.d/default.conf <<EOF server { listen 80; server_name localhost; location / { auth_request /.auth; root $HTML_ROOT; index index.html index.htm; } location /.auth { internal; proxy_pass $AUTH_REQUEST_URI; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI \$request_uri; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } EOF else cat >/etc/nginx/conf.d/default.conf <<EOF server { listen 80; server_name localhost; location / { root $HTML_ROOT; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } EOF fi # vim: ts=2:sw=2:et

As we will see later the idea is to use the /sftp/data or /sftp/data/scs folder as the root of the web published by this container and create an Ingress object to provide access to it outside of our kubernetes cluster.
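
Before wiring up the Ingress, a quick sanity check that the web container is actually serving the volume contents can be done with a port-forward (the Pod name is a placeholder):

kubectl port-forward pod/scs-pod 8080:80 &
curl -I http://localhost:8080/

An answer from nginx confirms the container and volume are wired correctly; the Ingress then only adds external routing (and any access control) on top.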

webhook-scs

The webhook-scs container is generated using the following Dockerfile:

ARG ALPINE_VERSION=3.16.2 ARG GOLANG_VERSION=alpine3.16 FROM golang:$GOLANG_VERSION AS builder LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>" ENV WEBHOOK_VERSION 2.8.0 ENV WEBHOOK_PR 549 ENV S3FS_VERSION v1.91 WORKDIR /go/src/github.com/adnanh/webhook RUN apk update &&\ apk add --no-cache -t build-deps curl libc-dev gcc libgcc patch RUN curl -L --silent -o webhook.tar.gz\ https://github.com/adnanh/webhook/archive/${WEBHOOK_VERSION}.tar.gz &&\ tar xzf webhook.tar.gz --strip 1 &&\ curl -L --silent -o ${WEBHOOK_PR}.patch\ https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/${WEBHOOK_PR}.patch &&\ patch -p1 < ${WEBHOOK_PR}.patch &&\ go get -d && \ go build -o /usr/local/bin/webhook WORKDIR /src/s3fs-fuse RUN apk update &&\ apk add ca-certificates build-base alpine-sdk libcurl automake autoconf\ libxml2-dev libressl-dev mailcap fuse-dev curl-dev RUN curl -L --silent -o s3fs.tar.gz\ https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/$S3FS_VERSION.tar.gz &&\ tar xzf s3fs.tar.gz --strip 1 &&\ ./autogen.sh &&\ ./configure --prefix=/usr/local &&\ make -j && \ make install FROM alpine:$ALPINE_VERSION LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>" WORKDIR /webhook RUN apk update &&\ apk add --no-cache ca-certificates mailcap fuse libxml2 libcurl libgcc\ libstdc++ rsync util-linux-misc &&\ rm -rf /var/cache/apk/* COPY --from=builder /usr/local/bin/webhook /usr/local/bin/webhook COPY --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs COPY entrypoint.sh / COPY hooks/* ./hooks/ EXPOSE 9000 ENTRYPOINT ["/entrypoint.sh"] CMD ["server"]

Again, we use a multi-stage build because in production we wanted to support a functionality that is not already on the official versions (streaming the command output as a response instead of waiting until the execution ends); this time we build the image applying the PATCH included on this pull request against a released version of the source instead of creating a fork.

The entrypoint.sh script is used to generate the webhook configuration file for the existing hooks using environment variables (basically the WEBHOOK_WORKDIR and the *_TOKEN variables) and launch the webhook service:

entrypoint.sh #!/bin/sh set -e # --------- # VARIABLES # --------- WEBHOOK_BIN="${WEBHOOK_BIN:-/webhook/hooks}" WEBHOOK_YML="${WEBHOOK_YML:-/webhook/scs.yml}" WEBHOOK_OPTS="${WEBHOOK_OPTS:--verbose}" # --------- # FUNCTIONS # --------- print_du_yml() { cat <<EOF - id: du execute-command: '$WEBHOOK_BIN/du.sh' command-working-directory: '$WORKDIR' response-headers: - name: 'Content-Type' value: 'application/json' http-methods: ['GET'] include-command-output-in-response: true include-command-output-in-response-on-error: true pass-arguments-to-command: - source: 'url' name: 'path' pass-environment-to-command: - source: 'string' envname: 'OUTPUT_FORMAT' name: 'json' EOF } print_hardlink_yml() { cat <<EOF - id: hardlink execute-command: '$WEBHOOK_BIN/hardlink.sh' command-working-directory: '$WORKDIR' http-methods: ['GET'] include-command-output-in-response: true include-command-output-in-response-on-error: true EOF } print_s3sync_yml() { cat <<EOF - id: s3sync execute-command: '$WEBHOOK_BIN/s3sync.sh' command-working-directory: '$WORKDIR' http-methods: ['POST'] include-command-output-in-response: true include-command-output-in-response-on-error: true pass-environment-to-command: - source: 'payload' envname: 'AWS_KEY' name: 'aws.key' - source: 'payload' envname: 'AWS_SECRET_KEY' name: 'aws.secret_key' - source: 'payload' envname: 'S3_BUCKET' name: 's3.bucket' - source: 'payload' envname: 'S3_REGION' name: 's3.region' - source: 'payload' envname: 'S3_PATH' name: 's3.path' - source: 'payload' envname: 'SCS_PATH' name: 'scs.path' stream-command-output: true EOF } print_token_yml() { if [ "$1" ]; then cat << EOF trigger-rule: match: type: 'value' value: '$1' parameter: source: 'header' name: 'X-Webhook-Token' EOF fi } exec_webhook() { # Validate WORKDIR if [ -z "$WEBHOOK_WORKDIR" ]; then echo "Must define the WEBHOOK_WORKDIR variable!" >&2 exit 1 fi WORKDIR="$(realpath "$WEBHOOK_WORKDIR" 2>/dev/null)" || true if [ ! -d "$WORKDIR" ]; then echo "The WEBHOOK_WORKDIR '$WEBHOOK_WORKDIR' is not a directory!" >&2 exit 1 fi # Get TOKENS, if the DU_TOKEN or HARDLINK_TOKEN is defined that is used, if # not if the COMMON_TOKEN that is used and in other case no token is checked # (that is the default) DU_TOKEN="${DU_TOKEN:-$COMMON_TOKEN}" HARDLINK_TOKEN="${HARDLINK_TOKEN:-$COMMON_TOKEN}" S3_TOKEN="${S3_TOKEN:-$COMMON_TOKEN}" # Create webhook configuration { print_du_yml print_token_yml "$DU_TOKEN" echo "" print_hardlink_yml print_token_yml "$HARDLINK_TOKEN" echo "" print_s3sync_yml print_token_yml "$S3_TOKEN" }>"$WEBHOOK_YML" # Run the webhook command # shellcheck disable=SC2086 exec webhook -hooks "$WEBHOOK_YML" $WEBHOOK_OPTS } # ---- # MAIN # ---- case "$1" in "server") exec_webhook ;; *) exec "$@" ;; esac

The entrypoint.sh script generates the configuration file for the webhook server by calling functions that print a YAML section for each hook, and optionally adds rules that validate access to them by comparing the value of an X-Webhook-Token header against predefined values.

The expected token values are taken from environment variables: we can define a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN) and a fallback value (COMMON_TOKEN); if no token variable is defined for a hook, no check is done and anybody can call it.
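For example, with the webhook port forwarded to localhost (as we do in the usage examples at the end of this post), a call to a protected hook would pass the token in that header; in this sketch my-common-token stands for whatever value was assigned to DU_TOKEN or COMMON_TOKEN:

$ curl -s -H "X-Webhook-Token: my-common-token" "http://localhost:9000/hooks/du?path=."

If the header is missing or its value does not match, the webhook server rejects the request and the script is never executed.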

The Hook Definition documentation explains the options you can use for each hook; the ones we have right now do the following:

  • du: runs on the $WORKDIR directory, passes the value of the path query parameter as the first argument to the script and sets the variable OUTPUT_FORMAT to the fixed value json (we use that to print the output of the script in JSON format instead of text).
  • hardlink: runs on the $WORKDIR directory and takes no parameters.
  • s3sync: runs on the $WORKDIR directory and sets a lot of environment variables from values read from the JSON encoded payload sent by the caller (all the values must be sent by the caller even if they are assigned an empty value; if any of them is missing the hook fails without calling the script); we also set the stream-command-output value to true to make the script show its output as it is working (we patched the webhook source to be able to use this option). See the example call right after this list.
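As an illustration of the payload format (a sketch only; it assumes the webhook port is forwarded to localhost as in the examples at the end of this post, and that s3sync.json is a file with the structure shown in the Jobs section), a direct call to the s3sync hook could look like this:

$ curl -s -X POST -H "Content-Type: application/json" --data @s3sync.json "http://localhost:9000/hooks/s3sync"

Remember that every key the hook expects has to be present in the JSON document, even if its value is empty.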
The du hook script

The du hook script code checks if the argument passed is a directory, computes its size using the du command and prints the results in text format or as a JSON dictionary:

hooks/du.sh #!/bin/sh set -e # Script to print disk usage for a PATH inside the scs folder # --------- # FUNCTIONS # --------- print_error() { if [ "$OUTPUT_FORMAT" = "json" ]; then echo "{\"error\":\"$*\"}" else echo "$*" >&2 fi exit 1 } usage() { if [ "$OUTPUT_FORMAT" = "json" ]; then echo "{\"error\":\"Pass arguments as '?path=XXX\"}" else echo "Usage: $(basename "$0") PATH" >&2 fi exit 1 } # ---- # MAIN # ---- if [ "$#" -eq "0" ] || [ -z "$1" ]; then usage fi if [ "$1" = "." ]; then DU_PATH="./" else DU_PATH="$(find . -name "$1" -mindepth 1 -maxdepth 1)" || true fi if [ -z "$DU_PATH" ] || [ ! -d "$DU_PATH/." ]; then print_error "The provided PATH ('$1') is not a directory" fi # Print disk usage in bytes for the given PATH OUTPUT="$(du -b -s "$DU_PATH")" if [ "$OUTPUT_FORMAT" = "json" ]; then # Format output as {"path":"PATH","bytes":"BYTES"} echo "$OUTPUT" | sed -e "s%^\(.*\)\t.*/\(.*\)$%{\"path\":\"\2\",\"bytes\":\"\1\"}%" | tr -d '\n' else # Print du output as is echo "$OUTPUT" fi # vim: ts=2:sw=2:et:ai:sts=2
The hardlink hook script

The hardlink hook script is really simple: it just runs the util-linux version of the hardlink command on its working directory:

hooks/hardlink.sh #!/bin/sh hardlink --ignore-time --maximize .

We use that to reduce the size of the stored content; to manage versions of files and folders we keep each version in a separate directory, and when one or more files have not changed this script turns them into hardlinks to the same file on disk, reducing the space used.
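To see the effect locally (a quick sketch, assuming the util-linux version of hardlink is installed on the host), two identical files end up sharing the same inode after the command runs:

$ echo "same content" > a.txt
$ cp a.txt b.txt
$ hardlink --ignore-time --maximize .
$ ls -li a.txt b.txt   # both entries now show the same inode number and a link count of 2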

The s3sync hook script

The s3sync hook script uses the s3fs tool to mount a bucket and synchronise data between a folder inside the bucket and a directory on the filesystem using rsync; all values needed to execute the task are taken from environment variables:

hooks/s3sync.sh #!/bin/ash set -euo pipefail set -o errexit set -o errtrace # Functions finish() { ret="$1" echo "" echo "Script exit code: $ret" exit "$ret" } # Check variables if [ -z "$AWS_KEY" ] || [ -z "$AWS_SECRET_KEY" ] || [ -z "$S3_BUCKET" ] || [ -z "$S3_PATH" ] || [ -z "$SCS_PATH" ]; then [ "$AWS_KEY" ] || echo "Set the AWS_KEY environment variable" [ "$AWS_SECRET_KEY" ] || echo "Set the AWS_SECRET_KEY environment variable" [ "$S3_BUCKET" ] || echo "Set the S3_BUCKET environment variable" [ "$S3_PATH" ] || echo "Set the S3_PATH environment variable" [ "$SCS_PATH" ] || echo "Set the SCS_PATH environment variable" finish 1 fi if [ "$S3_REGION" ] && [ "$S3_REGION" != "us-east-1" ]; then EP_URL="endpoint=$S3_REGION,url=https://s3.$S3_REGION.amazonaws.com" else EP_URL="endpoint=us-east-1" fi # Prepare working directory WORK_DIR="$(mktemp -p "$HOME" -d)" MNT_POINT="$WORK_DIR/s3data" PASSWD_S3FS="$WORK_DIR/.passwd-s3fs" # Check the mountpoint if [ ! -d "$MNT_POINT" ]; then mkdir -p "$MNT_POINT" elif mountpoint "$MNT_POINT"; then echo "There is already something mounted on '$MNT_POINT', aborting!" finish 1 fi # Create password file touch "$PASSWD_S3FS" chmod 0400 "$PASSWD_S3FS" echo "$AWS_KEY:$AWS_SECRET_KEY" >"$PASSWD_S3FS" # Mount s3 bucket as a filesystem s3fs -o dbglevel=info,retries=5 -o "$EP_URL" -o "passwd_file=$PASSWD_S3FS" \ "$S3_BUCKET" "$MNT_POINT" echo "Mounted bucket '$S3_BUCKET' on '$MNT_POINT'" # Remove the password file, just in case rm -f "$PASSWD_S3FS" # Check source PATH ret="0" SRC_PATH="$MNT_POINT/$S3_PATH" if [ ! -d "$SRC_PATH" ]; then echo "The S3_PATH '$S3_PATH' can't be found!" ret=1 fi # Compute SCS_UID & SCS_GID (by default based on the working directory owner) SCS_UID="${SCS_UID:=$(stat -c "%u" "." 2>/dev/null)}" || true SCS_GID="${SCS_GID:=$(stat -c "%g" "." 2>/dev/null)}" || true # Check destination PATH DST_PATH="./$SCS_PATH" if [ "$ret" -eq "0" ] && [ -d "$DST_PATH" ]; then mkdir -p "$DST_PATH" || ret="$?" fi # Copy using rsync if [ "$ret" -eq "0" ]; then rsync -rlptv --chown="$SCS_UID:$SCS_GID" --delete --stats \ "$SRC_PATH/" "$DST_PATH/" || ret="$?" fi # Unmount the S3 bucket umount -f "$MNT_POINT" echo "Called umount for '$MNT_POINT'" # Remove mount point dir rmdir "$MNT_POINT" # Remove WORK_DIR rmdir "$WORK_DIR" # We are done finish "$ret" # vim: ts=2:sw=2:et:ai:sts=2
Deployment objects

The system is deployed as a StatefulSet with one replica.

Our production deployment is done on AWS and to be able to scale we use EFS for our PersistentVolume; the idea is that the volume has no size limit, its AccessMode can be set to ReadWriteMany and we can mount it from multiple instances of the Pod without issues, even if they are in different availability zones.

For development we use k3d and we are also able to scale the StatefulSet for testing because, although we use a ReadWriteOnce PVC, it points to a hostPath that is backed by a folder mounted on all the compute nodes, so in reality Pods on different k3d nodes use the same folder on the host.
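For reference, that shared folder is declared when the k3d cluster is created; a minimal sketch of the idea (the path and cluster name are illustrative, not the ones we really use) would be:

$ k3d cluster create scs-dev --volume "$PWD/volumes:/volumes@all"

With that option the same host directory is available as /volumes inside every node of the cluster, which is what makes the hostPath trick described above work.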

secrets.yaml

The secrets file contains the files used by the mysecureshell container; they can be generated using kubernetes pods as follows (we are only creating the scs user):

$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \ --image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt" $ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \ --image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"

Once we have the files we can generate the secrets.yaml file as follows:

$ tar xf ./users.tar user_keys.txt user_pass.txt $ kubectl --dry-run=client -o yaml create secret generic "scs-secrets" \ --from-file="host_keys.txt=host_keys.txt" \ --from-file="user_keys.txt=user_keys.txt" \ --from-file="user_pass.txt=user_pass.txt" > ./secrets.yaml

The resulting secrets.yaml will look like the following file (the base64 would match the content of the files, of course):

secrets.yaml apiVersion: v1 data: host_keys.txt: TWlt... user_keys.txt: c2Nz... user_pass.txt: c2Nz... kind: Secret metadata: creationTimestamp: null name: scs-secrets
pvc.yaml

The persistent volume claim for a simple deployment (one with only one instance of the statefulSet) can be as simple as this:

pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: scs-pvc labels: app.kubernetes.io/name: scs spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi

In this definition we don’t set the storageClassName, so the default one is used.
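In case of doubt, the class that will be used can be checked with a standard kubectl call; the one annotated as (default) in the listing is the one this PVC ends up bound to:

$ kubectl get storageclass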

Volumes in our development environment (k3d)

In our development deployment we create the following PersistentVolume as required by the Local Persistence Volume Static Provisioner (note that the /volumes/scs-pv directory has to be created by hand; in our k3d system we mount the same host directory on the /volumes path of all the nodes and create the scs-pv directory manually before deploying the persistent volume):

k3d-pv.yaml apiVersion: v1 kind: PersistentVolume metadata: name: scs-pv labels: app.kubernetes.io/name: scs spec: capacity: storage: 8Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete claimRef: name: scs-pvc storageClassName: local-storage local: path: /volumes/scs-pv nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: node.kubernetes.io/instance-type operator: In values: - k3s
Note:

The nodeAffinity section is required but in practice the current definition selects all k3d nodes.
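If needed, the label the selector matches on can be checked directly; as a sanity check (the exact value may differ on other setups, on our k3d clusters every node reports k3s):

$ kubectl get nodes -L node.kubernetes.io/instance-type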

And to make sure that everything works as expected we update the PVC definition to add the right storageClassName:

k3d-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: scs-pvc labels: app.kubernetes.io/name: scs spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi storageClassName: local-storage
Volumes in our production environment (aws)

In the production deployment we don’t create the PersistentVolume (we are using the aws-efs-csi-driver which supports Dynamic Provisioning) but we add the storageClassName (we set it to the one mapped to the EFS driver, i.e. efs-sc) and set ReadWriteMany as the accessMode:

efs-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: scs-pvc labels: app.kubernetes.io/name: scs spec: accessModes: - ReadWriteMany resources: requests: storage: 8Gi storageClassName: efs-sc
statefulset.yaml

The definition of the statefulSet is as follows:

statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: scs labels: app.kubernetes.io/name: scs spec: serviceName: scs replicas: 1 selector: matchLabels: app: scs template: metadata: labels: app: scs spec: containers: - name: nginx image: stodh/nginx-scs:latest ports: - containerPort: 80 name: http env: - name: AUTH_REQUEST_URI value: "" - name: HTML_ROOT value: /sftp/data volumeMounts: - mountPath: /sftp name: scs-datadir - name: mysecureshell image: stodh/mysecureshell:latest ports: - containerPort: 22 name: ssh securityContext: capabilities: add: - IPC_OWNER env: - name: SFTP_UID value: '2020' - name: SFTP_GID value: '2020' volumeMounts: - mountPath: /secrets name: scs-file-secrets readOnly: true - mountPath: /sftp name: scs-datadir - name: webhook image: stodh/webhook-scs:latest securityContext: privileged: true ports: - containerPort: 9000 name: webhook-http env: - name: WEBHOOK_WORKDIR value: /sftp/data/scs volumeMounts: - name: devfuse mountPath: /dev/fuse - mountPath: /sftp name: scs-datadir volumes: - name: devfuse hostPath: path: /dev/fuse - name: scs-file-secrets secret: secretName: scs-secrets - name: scs-datadir persistentVolumeClaim: claimName: scs-pvc

Notes about the containers:

  • nginx: As this is an example the web server is not using an AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web (to get to the files uploaded for the scs user we will need to use /scs/ as a prefix on the URLs).
  • mysecureshell: We are adding the IPC_OWNER capability to the container to be able to use some of the sftp-* commands inside it, but they are not really needed, so adding the capability is optional.
  • webhook: We are launching this container in privileged mode to be able to use s3fs-fuse, as it will not work otherwise for now (see this kubernetes issue); if the functionality is not needed the container can be executed with regular privileges; besides, as we are not enabling public access to this service we don’t define *_TOKEN variables (if required the values should be read from a Secret object).

Notes about the volumes:

  • the devfuse volume is only needed if we plan to use the s3fs command on the webhook container; if not, we can remove the volume definition and its mounts.
service.yaml

To be able to access the different services on the statefulset we publish the relevant ports using the following Service object:

service.yaml apiVersion: v1 kind: Service metadata: name: scs-svc labels: app.kubernetes.io/name: scs spec: ports: - name: ssh port: 22 protocol: TCP targetPort: 22 - name: http port: 80 protocol: TCP targetPort: 80 - name: webhook-http port: 9000 protocol: TCP targetPort: 9000 selector: app: scs
ingress.yaml

To download the scs files from the outside we can add an ingress object like the following (the definition is for testing using the localhost name):

apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: scs-ingress labels: app.kubernetes.io/name: scs spec: ingressClassName: nginx rules: - host: 'localhost' http: paths: - path: /scs pathType: Prefix backend: service: name: scs-svc port: number: 80
Deployment

To deploy the statefulSet we create a namespace and apply the object definitions shown before:

$ kubectl create namespace scs-demo namespace/scs-demo created $ kubectl -n scs-demo apply -f secrets.yaml secret/scs-secrets created $ kubectl -n scs-demo apply -f pvc.yaml persistentvolumeclaim/scs-pvc created $ kubectl -n scs-demo apply -f statefulset.yaml statefulset.apps/scs created $ kubectl -n scs-demo apply -f service.yaml service/scs-svc created $ kubectl -n scs-demo apply -f ingress.yaml ingress.networking.k8s.io/scs-ingress created

Once the objects are deployed we can check that all is working using kubectl:

$ kubectl -n scs-demo get all,secrets,ingress NAME READY STATUS RESTARTS AGE pod/scs-0 3/3 Running 0 24s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s NAME READY AGE statefulset.apps/scs 1/1 24s NAME TYPE DATA AGE secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s secret/scs-secrets Opaque 3 39s NAME CLASS HOSTS ADDRESS PORTS AGE ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s

At this point we are ready to use the system.
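If something does not look right, the logs of each container can be inspected individually; for example, a couple of quick checks (the container names come from the StatefulSet definition shown earlier):

$ kubectl -n scs-demo logs scs-0 -c mysecureshell
$ kubectl -n scs-demo logs scs-0 -c webhook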

Usage examples
File uploads

As previously mentioned, in our system the idea is to use the sftp server from other Pods, but to test it we are going to do a kubectl port-forward and connect to the server using our host client and the password we have generated (it is in the user_pass.txt file, inside the users.tar archive):

$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 & Forwarding from 127.0.0.1:2020 -> 22 Forwarding from [::1]:2020 -> 22 $ PF_PID=$! $ sftp -P 2020 scs@127.0.0.1 1 Handling connection for 2020 The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \ established. ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088. This key is not known by any other names Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \ hosts. scs@127.0.0.1's password: ********** Connected to 127.0.0.1. sftp> ls -la drwxr-xr-x 2 sftp sftp 4096 Sep 25 14:47 . dr-xr-xr-x 3 sftp sftp 4096 Sep 25 14:36 .. sftp> !date -R > /tmp/date.txt 2 sftp> put /tmp/date.txt . Uploading /tmp/date.txt to /date.txt date.txt 100% 32 27.8KB/s 00:00 sftp> ls -l -rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt sftp> ln date.txt date.txt.1 3 sftp> ls -l -rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt -rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1 sftp> put /tmp/date.txt date.txt.2 4 Uploading /tmp/date.txt to /date.txt.2 date.txt 100% 32 27.8KB/s 00:00 sftp> ls -l 5 -rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt -rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1 -rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt.2 sftp> exit $ kill "$PF_PID" [1] + terminated kubectl -n scs-demo port-forward service/scs-svc 2020:22
  1. We connect to the sftp service on the forwarded port with the scs user.
  2. We put a file we have created on the host on the directory.
  3. We do a hard link of the uploaded file.
  4. We put a second copy of the file we created locally.
  5. In the file listing we can see that the first two files now have a link count of two, as they are hard links to the same file.
File retrievals

If our ingress is configured right we can download the date.txt file from the URL http://localhost/scs/date.txt:

$ curl -s http://localhost/scs/date.txt Sun, 25 Sep 2022 17:21:51 +0200
Use of the webhook container

To finish this post we are going to show how we can call the hooks directly, from a CronJob and from a Job.

Direct script call (du)

In our deployment the direct calls are done from other Pods; to simulate that we are going to do a port-forward and call the script with an existing PATH (the root directory) and a bad one:

$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null & $ PF_PID=$! $ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")" $ echo $JSON {"path":"","bytes":"4160"} $ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")" $ echo $JSON {"error":"The provided PATH ('foo') is not a directory"} $ kill $PF_PID

As we only have files in the base directory we print the disk usage of the . PATH; the output is in JSON format because the webhook configuration exports OUTPUT_FORMAT with the value json.

Cronjobs (hardlink)

As explained before, the webhook container can be used to run cronjobs; the following one uses an alpine container to call the hardlink script each minute (that setup is for testing, obviously):

webhook-cronjob.yaml apiVersion: batch/v1 kind: CronJob metadata: name: hardlink labels: cronjob: 'hardlink' spec: schedule: "* */1 * * *" concurrencyPolicy: Replace jobTemplate: spec: template: metadata: labels: cronjob: 'hardlink' spec: containers: - name: hardlink-cronjob image: alpine:latest command: ["wget", "-q", "-O-", "http://scs-svc:9000/hooks/hardlink"] restartPolicy: Never

The following console session shows how we create the object, allow a couple of executions and remove it (in production we keep it, but running once a day instead of every minute):

$ kubectl -n scs-demo apply -f webhook-cronjob.yaml 1 cronjob.batch/hardlink created $ kubectl -n scs-demo get pods -l "cronjob=hardlink" -w 2 NAME READY STATUS RESTARTS AGE hardlink-27735351-zvpnb 0/1 Pending 0 0s hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s hardlink-27735351-zvpnb 0/1 Completed 0 2s ^C $ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb 3 Mode: real Method: sha256 Files: 3 Linked: 1 files Compared: 0 xattrs Compared: 1 files Saved: 32 B Duration: 0.000220 seconds $ sleep 60 $ kubectl -n scs-demo get pods -l "cronjob=hardlink" 4 NAME READY STATUS RESTARTS AGE hardlink-27735351-zvpnb 0/1 Completed 0 83s hardlink-27735352-br5rn 0/1 Completed 0 23s $ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn 5 Mode: real Method: sha256 Files: 3 Linked: 0 files Compared: 0 xattrs Compared: 0 files Saved: 0 B Duration: 0.000070 seconds $ kubectl -n scs-demo delete -f webhook-cronjob.yaml 6 cronjob.batch "hardlink" deleted
  1. This command creates the cronjob object.
  2. This checks the pods with our cronjob label, we interrupt it once we see that the first run has been completed.
  3. With this command we see the output of the execution; as this is the first run we see that date.txt.2 has been replaced by a hardlink (the summary does not name the file, but it is the only candidate given the contents of the original upload).
  4. After waiting a little bit we check the pods executed again to get the name of the latest one.
  5. The log now shows that nothing was done.
  6. As this is a demo, we delete the cronjob.
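As mentioned above, in production the CronJob stays deployed but runs only once a day; as an illustration (the schedule value 0 4 * * * means 04:00 every day), the existing object could be adjusted without recreating it using something like:

$ kubectl -n scs-demo patch cronjob hardlink -p '{"spec":{"schedule":"0 4 * * *"}}'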
Jobs (s3sync)

The following job can be used to synchronise the contents of a directory in an S3 bucket with the SCS Filesystem:

job.yaml apiVersion: batch/v1 kind: Job metadata: name: s3sync labels: cronjob: 's3sync' spec: template: metadata: labels: cronjob: 's3sync' spec: containers: - name: s3sync-job image: alpine:latest command: - "wget" - "-q" - "--header" - "Content-Type: application/json" - "--post-file" - "/secrets/s3sync.json" - "-O-" - "http://scs-svc:9000/hooks/s3sync" volumeMounts: - mountPath: /secrets name: job-secrets readOnly: true restartPolicy: Never volumes: - name: job-secrets secret: secretName: webhook-job-secrets

The file with parameters for the script must be something like this:

s3sync.json { "aws": { "key": "********************", "secret_key": "****************************************" }, "s3": { "region": "eu-north-1", "bucket": "blogops-test", "path": "test" }, "scs": { "path": "test" } }

Once we have both files we can run the Job as follows:

$ kubectl -n scs-demo create secret generic webhook-job-secrets \ 1 --from-file="s3sync.json=s3sync.json" secret/webhook-job-secrets created $ kubectl -n scs-demo apply -f webhook-job.yaml 2 job.batch/s3sync created $ kubectl -n scs-demo get pods -l "cronjob=s3sync" 3 NAME READY STATUS RESTARTS AGE s3sync-zx2cj 0/1 Completed 0 12s $ kubectl -n scs-demo logs s3sync-zx2cj 4 Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data' sending incremental file list created directory ./test ./ kyso.png Number of files: 2 (reg: 1, dir: 1) Number of created files: 2 (reg: 1, dir: 1) Number of deleted files: 0 Number of regular files transferred: 1 Total file size: 15,075 bytes Total transferred file size: 15,075 bytes Literal data: 15,075 bytes Matched data: 0 bytes File list size: 0 File list generation time: 0.147 seconds File list transfer time: 0.000 seconds Total bytes sent: 15,183 Total bytes received: 74 sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec total size is 15,075 speedup is 0.99 Called umount for '/root/tmp.jiOjaF/s3data' Script exit code: 0 $ kubectl -n scs-demo delete -f webhook-job.yaml 5 job.batch "s3sync" deleted $ kubectl -n scs-demo delete secrets webhook-job-secrets 6 secret "webhook-job-secrets" deleted
  1. Here we create the webhook-job-secrets secret that contains the s3sync.json file.
  2. This command runs the job.
  3. Checking the label cronjob=s3sync we get the Pods executed by the job.
  4. Here we print the logs of the completed job.
  5. Once we are finished we remove the Job.
  6. And also the secret.
Final remarks

This post has been longer than I expected, but I believe it can be useful for someone; in any case, next time I’ll try to explain something shorter or will split it into multiple entries.

Shirish Agarwal: Rama II, Arthur C. Clarke, Aliens

Sunday 25th of September 2022 09:07:45 AM
Rama II

This would be more of a short post about the current book I am reading. People who have seen Arrival would probably be more at home, and people who have seen Avatar would also be familiar with the theme or concept I am sharing about. Before I go into detail, it seems that Arthur C. Clarke wanted to use a powerful god or mythological character for the name, and that is somehow how the RAMA series started.

Now the first book in the series explores an extraterrestrial spaceship that earth people see/connect with. The spaceship is going somewhere and is doing an Earth flyby so humans don’t have much time to explore the spaceship and it is difficult to figure out how the spaceship worked. The spaceship is around 40 km. long. They don’t meet any living Ramans but mostly automated systems and something called biots.

As I’m still reading it, I can’t really say what happens next, although in Rama, or Rama I, the powers that be want to destroy it while in the end they don’t. Whether they could have destroyed it or not would be another argument entirely. What people need to realize is that the book is a giant ‘What IF’ scenario.

Aliens

If there were any intelligent life in the Universe, I don’t think they will take the pain of visiting Earth. And the reasons are far more mundane than anything else. Look at how we treat each other. One of the largest democracies on Earth, The U.S. has been so divided. While the progressives have made some good policies, the Republicans are into political stunts, consider the political stunt of sending Refugees to Martha’s Vineyard. The ex-president also made a statement that he can declassify anything just by thinking about it. Now understand this, a refugee is a legal migrant whose papers would be looked into by the American Govt. and till the time he/she/their application is approved or declined they can work, have a house, or do whatever to support themselves. There is a huge difference between having refugee status and being an undocumented migrant. And it isn’t as if the Republicans don’t know this, they did it because they thought they will be able to get away with it.

Neither of the above episodes shows us in a good light. If we treat others like that, how can we expect to be treated? And refugees always have a hard time, not just in the U.S.; the UK, you name it. The UK just some months ago announced a controversial deal where they will send refugees to Rwanda while their refugee application is accepted or denied, and most of them would be denied.

The Indian Government is more of the same. A friend, a casual acquaintance Nishant Shah shared the same issues as I had shared a few weeks back even though he’s an NRI. So, it seems we are incapable of helping ourselves as well as helping others. On top of it, we have the temerity of using the word ‘alien’ for them.

Now, just for a moment, imagine you are an intelligent life form, one that could coax energy from the stars. Why would you come to Earth, where the people at large have already destroyed more than half of the atmosphere and are still arguing about it with the other half? On top of it, we see a list of authoritarian figures like Putin and Xi Jinping whose whole idea is to hold on to power for as long as they can, damn the consequences. Mr. Modi is no different; he is the dumbest of the lot and that’s saying something. Most of the projects made by him are in disarray, the Pune Metro in my city being an example, and this when Pune was the first applicant to apply for a Metro. Just like the UK, India too has tanked the economy under his guidance. Every time they come closer to target dates, the targets are put far into the future; for example, now they have said 2040 for a good economy. And just like in other countries, he has some following even though he has a record of failure in every sector: the economy, education, defense, the list is endless. There isn’t a single accomplishment by him other than screwing with other religions. Most of my countrymen also don’t really care or bother to see how the economy grows and how exports play a crucial part, otherwise they would be more alert. Also, just like the UK, India too gave tax cuts to the wealthy; most people don’t understand how economies function and the PM doesn’t care. The media too is subservient, and because nobody asks the questions, nobody seems to be accountable :(.

Religion

There is another aspect that has also come to the fore: just like in medieval times, I see a great fervor for religion happening here, especially since the pandemic, and people are much more insecure than ever before. Before, I used to think that insecurity and religious appeal only happen in the uneducated, and I was wrong. I have friends who are highly educated and yet are still blinded by religion. In many such cases or situations, I find their faith to be a sham. If you have faith, then there shouldn’t be any room for doubt or insecurity. And if you are not in doubt or insecure, you won’t need to talk about your religion. The difference between the two is like that of a person who has satiated their thirst and hunger: that person would be in a relaxed mode, while the other person would continue to create drama as there is no peace in their heart.

Another fact is that none of the major religions, whether it is Christianity, Islam, Buddhism or even Hinduism, has allowed for the existence of extraterrestrials. We have already labeled them as ‘aliens’ even before meeting them, based on nothing but our imagination. And more often than not, we end up killing them. There are and have been scores of movies that have explored the idea: Independence Day, Aliens, Arrival, the list goes on and on. And because our religions have never thought about the idea of ETs and how they will affect us, if ETs do come, all the religions and religious practices would panic and die. That is possibly why even the 1947 Roswell Incident has been covered up.

If the above were not enough, the bombing of Hiroshima and Nagasaki by the Americans will always be a black mark against humanity. From the alien perspective, if you look at the technology that they have vis-a-vis what we have, they will probably think of us as spoilt babies, and they wouldn’t be wrong. Spoilt babies with nuclear weapons are not exactly a healthy mix.

Ian Jackson: Please vote in favour of the Debian Social Contract change

Saturday 24th of September 2022 07:08:19 PM

tl;dr: Please vote in favour of the Debian Social Contract change, by ranking all of its options above None of the Above. Rank the SC change options above corresponding options that do not change the Social Contract.

Vote to change the SC even if you think the change is not necessary for Debian to prominently/officially provide an installer with-nonfree-firmware.

Why vote for SC change even if I think it’s not needed?

I’m addressing myself primarily to the reader who agrees with me that Debian ought to be officially providing with-firmware images. I think it is very likely that the winning option will be one of the ones which asks for an official and prominent with-firmware installer.

However, many who oppose this change believe that it would be a breach of Debian’s Social Contract. This is a very reasonable and arguable point of view. Indeed, I’m inclined to share it.

If the winning option is to provide a with-firmware installer (perhaps, only a with-firmware installer) those people will feel aggrieved. They will, quite reasonably, claim that the result of the vote is illegitimate - being contrary to Debian’s principles as set out in the Social Contract, which require a 3:1 majority to change.

There is even the possibility that the Secretary may declare the GR result void, as contrary to the Constitution! (Sadly, I am not making this up.) This would cast Debian into (yet another) acrimonious constitutional and governance crisis.

The simplest answer is to amend the Social Contract to explicitly permit what is being proposed. Holger’s option F and Russ’s option E do precisely that.

Amending the SC is not an admission that it was legally necessary to do so. It is practical politics: it ensures that we have clear authority and legitimacy.

Aren’t we softening Debian’s principles?

I think prominently distributing an installer that can work out of the box on the vast majority of modern computers would help Debian advance our users’ freedom.

I see user freedom as a matter of practical capability, not theoretical purity. Anyone living in the modern world must make compromises. It is Debian’s job to help our users (and downstreams) minimise those compromises and retain as much control as possible over the computers in their life. Insisting that a user buys different hardware, or forcing them to a different distro, does not serve that goal.

I don’t really expect to convince anyone with such a short argument, but I do want to make the point that providing an installer that users can use to obtain a lot of practical freedom is also, for many of us, a matter of principle.




Gunnar Wolf: 6237415

Friday 23rd of September 2022 04:03:26 PM

Years ago, it was customary for some of us to state publicly the way we think at the time of Debian General Resolutions (GRs). And even if we didn’t, vote lists were open (except when voting for people, i.e. when electing a DPL), so if interested we could understand what our different peers thought.

This is the first time, though, that a Debian vote is protected under voting secrecy. I think it is sad we chose that path, as I liken a GR vote more to a voting process within a general assembly of a cooperative than to a countrywide one; I feel that understanding who is behind each posture helps us better understand the project as a whole.

But anyway, I’m digressing… Even though I remained quiet during much of the discussion period (I was preparing and attending a conference), I am very much interested in this vote — I am the maintainer for the Raspberry Pi firmware, and am a seconder for two of the proposals. Many people know me for being quite inflexible in my interpretation of what should be considered Free Software, and I’m proud of it. But still, I believe it to be fundamental for Debian to be able to run on the hardware most users have.

So… My vote was as follows:

[6] Choice 1: Only one installer, including non-free firmware
[2] Choice 2: Recommend installer containing non-free firmware
[3] Choice 3: Allow presenting non-free installers alongside the free one
[7] Choice 4: Installer with non-free software is not part of Debian
[4] Choice 5: Change SC for non-free firmware in installer, one installer
[1] Choice 6: Change SC for non-free firmware in installer, keep both installers
[5] Choice 7: None Of The Above

For people reading this who are not into Debian’s voting processes: Debian uses the cloneproof Schwartz sequential dropping Condorcet method, which means we don’t only choose our favorite option (which could lead to suboptimal strategic voting outcomes), but we rank all the options according to our preferences.

To read this vote, we should first locate the position of “None of the above”, which for my ballot is #5. Let me reorder the ballot according to my preferences:

[1] Choice 6: Change SC for non-free firmware in installer, keep both installers
[2] Choice 2: Recommend installer containing non-free firmware
[3] Choice 3: Allow presenting non-free installers alongside the free one
[4] Choice 5: Change SC for non-free firmware in installer, one installer
[5] Choice 7: None Of The Above
[6] Choice 1: Only one installer, including non-free firmware
[7] Choice 4: Installer with non-free software is not part of Debian

That is, I don’t agree either with Steve McIntyre’s original proposal, Choice 1 (even though I seconded it; that means I think it’s very important to have this vote, and that, as a first proposal, it’s better than the status quo — maybe it’s contradictory that I prefer it to the status quo but ranked it below NotA; well, more on that when I present Choice 5).

My least favorite option is Choice 4, presented by Simon Josefsson, which represents the status quo: I don’t want Debian to be left without an installer that can be run on most modern hardware with a reasonably good user experience (i.e. network support — or the ability to boot at all!)

Slightly above my acceptability threshold, I ranked Choice 5, presented by Russ Allbery. Debian’s voting and its constitution rub each other in interesting ways, so the Project Secretary has to run the votes as they are presented… but he has interpreted Choice 1 to be incompatible with the Social Contract (as there would no longer be a DFSG-free installer available), and if it wins, it could lead him to having to declare the vote invalid. I don’t want that to happen, and that’s why I ranked Choice 1 below None of the above.

[update/note] Several people have asked me to back that the Secretary said so. I can refer to four mails: 2022.08.29, 2022.08.30, 2022.09.02, 2022.09.04.

Other than that, Choice 6 (proposed by Holger Levsen), Choice 2 (proposed by me) and Choice 3 (proposed by Bart Martens) are very similar; the main difference is that Choice 6 includes a modification to the Social Contract expressing that:

The Debian official media may include firmware that is otherwise not part of the Debian system to enable use of Debian with hardware that requires such firmware.

I believe choices 2 and 3 to be mostly the same, with Choice 2 being more verbose in explaining the reasoning than Choice 3.

Oh! And there are always some more bits to the discussion… For example, given they hold modifications to the Social Contract, both Choice 5 and Choice 6 need a 3:1 supermajority to be valid.

So, lets wait until the beginning of October to get the results, and to implement the changes they will (or not?) allow. If you are a Debian Project Member, please vote!

Steve Kemp: Lisp macros are magical

Friday 23rd of September 2022 02:30:30 PM

In my previous post I introduced yet another Lisp interpreter. When it was posted there was no support for macros.

Since I've recently returned from a visit to the UK, and caught COVID-19 while I was there, I figured I'd see if my brain was fried by adding macro support.

I know lisp macros are awesome, it's one of those things that everybody is told. Repeatedly. I've used macros in my emacs programming off and on for a good few years, but despite that I'd not really given them too much thought.

If you know anything about lisp you know that it's all about the lists, the parenthesis, and the macros. Here's a simple macro I wrote:

(define if2 (macro (pred one two) `(if ~pred (begin ~one ~two))))

The standard lisp if function allows you to write:

(if (= 1 a) (print "a == 1") (print "a != 1"))

There are three arguments supplied to the if form:

  • The test to perform.
  • A single statement to execute if the test was true.
  • A single statement to execute if the test was not true.

My if2 macro instead has three arguments:

  • The test to perform.
  • The first statement to execute if the test was true.
  • The second statement to execute if the test was true.
  • i.e. There is no "else", or failure, clause.

This means I can write:

(if2 blah (one..) (two..))

Rather than:

(if blah (begin (one..) (two..)))

It is simple, clear, and easy to understand, and a good building-block for writing a while function:

(define while-fun (lambda (predicate body) (if2 (predicate) (body) (while-fun predicate body))))

There you see that if the condition is true then we call the supplied body, and then recurse. Doing two actions as a result of the single if test is a neat shortcut.

Of course we need to wrap that up in a macro, for neatness:

(define while (macro (expression body) (list 'while-fun (list 'lambda '() expression) (list 'lambda '() body))))
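Before using it, it helps to expand a call by hand; going by the definition above, the loop from the final example below, (while (> a 0) (begin (print "(while) loop - iteration %s" a) (set! a (- a 1) true))), is rewritten by the macro into:

(while-fun (lambda () (> a 0)) (lambda () (begin (print "(while) loop - iteration %s" a) (set! a (- a 1) true))))

Wrapping the condition and the body in zero-argument lambdas is what allows while-fun to re-evaluate both of them on every iteration, instead of receiving their values just once.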

Now we're done, and we can run a loop five times like so:

(let ((a 5)) (while (> a 0) (begin (print "(while) loop - iteration %s" a) (set! a (- a 1) true))))

Output:

(while) loop - iteration 5 (while) loop - iteration 4 (while) loop - iteration 3 (while) loop - iteration 2 (while) loop - iteration 1

We've gone from using lists to having a while-loop, with a couple of simple macros and one neat recursive function.

There are a lot of cute things you can do with macros, and now I'm starting to appreciate them a little more. Of course it's not quite as magical as FORTH, but damn close!

Reproducible Builds (diffoscope): diffoscope 222 released

Friday 23rd of September 2022 12:00:00 AM

The diffoscope maintainers are pleased to announce the release of diffoscope version 222. This version includes the following changes:

[ Mattia Rizzolo ] * Use pep517 and pip to load the requirements. (Closes: #1020091) * Remove old Breaks/Replaces in debian/control that have been obsoleted since bullseye

You find out more by visiting the project homepage.

Jonathan Dowland: Nine Inch Nails, Cornwall, June

Thursday 22nd of September 2022 10:09:30 AM

In June I travelled to see Nine Inch Nails perform two nights at the Eden Project in Cornwall. It'd been eight years since I last saw them live and when they announced the Eden shows, I thought it might be the only chance I'd get to see them for a long time. I committed, and sods law, a week or so later they announced a handful of single-night UK club shows. On the other hand, on previous tours where they'd typically book two club nights in each city, I've attended one night and always felt I should have done both, so this time I was making that happen.

Newquay

approach by air

Towan Beach (I think)

For personal reasons it's been a difficult year, so it was nice to treat myself to a mini holiday. I stayed in Newquay, a seaside town with many similarities to the North East coast, as well as many differences. It's much bigger, and although we have a thriving surfing community in Tynemouth, Newquay has it on another level. They also have a lot more tourism, which is a double-edged sword: in Newquay, besides surfing, there was not a lot to do. There are a lot of tourist tat shops, and bars and cafes (some very nice ones), but no book shops, no record shops, and very few of the quaint, unique boutique places we enjoy up here and possibly take for granted.

If you want tie-dyed t-shirts though, you're sorted.

Nine Inch Nails have a long-established, independently fan-run forum called Echoing The Sound. There is now also an official Discord server. I asked on both whether anyone was around in Newquay and wanted to meet up: not many people were! But I did meet a new friend, James, for a quiet drink. He was due to share a taxi with Sarah, who was flying in but her flight was delayed and she had to figure out another route.

Eden Project

the Eden Project

The Eden Project, the venue itself, is a fascinating place. I didn't realise until I'd planned most of my time there that the gig tickets granted you free entry into the Project on the day of the gig as well as the day after. It was quite tricky to get from Newquay to the Eden project, I would have been better off staying in St Austell itself perhaps, so I didn't take advantage of this, but I did have a couple of hours total to explore a little bit at the venue before the gig on each night.

Friday 17th (sunny)

Once I got to the venue I managed to meet up with several names from ETS and the Discord: James, Sarah (who managed to re-arrange flights), Pete and his wife (sorry I missed your name), Via Tenebrosa (she of crab hat fame), Dave (DaveDiablo), Elliot and his sister and finally James (sheapdean), someone who I've been talking to online for over a decade and finally met in person (and who taped both shows). I also tried to meet up with a friend from the Debian UK community (hi Lief) but I couldn't find him!

Support for Friday was Nitzer Ebb, who I wasn't familiar with before. There were two men on stage, one operating instruments, the other singing. It was a tough time to warm up the crowd, the venue was still very empty and it was very bright and sunny, but I enjoyed what I was hearing. They're definitely on my list. I later learned that the band's regular singer (Doug McCarthy) was unable to make it, and so the guy I was watching (Bon Harris) was standing in for full vocal duties. This made the performance (and their subsequent one at Hellfest the week after) all the more impressive.

Via (with crab hat), Sarah, me (behind). pic by kraw

(Day) and night one, Friday, was very hot and sunny and the band seemed a little uncomfortable exposed on stage with little cover. Trent commented as such at least once. The setlist was eclectic, and I finally heard some of my white whale songs. Highlights for me were The Perfect Drug, which was unplayed from 1997-2018 and has now become a staple, and the second ever performance of Everything, the first being a few days earlier. Also notable were three cuts in a row from the last LP, Bad Witch, Heresy and Love Is Not Enough.

Saturday 18th (rain)

with Elliot, before

Day/night 2, Saturday, was rainy all day. Support was Yves Tumor, who were an interesting clash of styles: a Prince/Bowie-esque inspired lead clashing with a rock-out lead guitarist styling himself similarly to Brian May.

I managed to find Sarah, Elliot (new gig best-buddy), Via and James (sheapdean) again. Pete was at this gig too, but opted to take a more relaxed position than the rail this time. I also spent a lot of time talking to a Canadian guy on a press pass (both nights) whose name I'm ashamed to have forgotten.

The dank weather had Nine Inch Nails in their element. I think night one had the more interesting setlist, but night two had the best performance, hands down. Highlights for me were mostly a string of heavier songs (in rough order of scarcity, from common to rarely played): wish, burn, letting you, reptile, every day is exactly the same, the line begins to blur, and finally, happiness in slavery, the first UK performance since 1994. This was a crushing set.

A girl in front of me was really suffering with the cold and rain after waiting at the venue all day to get a position on the rail. I thought she was going to pass out. A roadie with NIN noticed, and came over and gave her his jacket. He said if she waited to the end of the show and returned his jacket he'd give her a setlist, and true to his word, he did. This was a really nice thing to happen and really gave the impression that the folks who work on these shows are caring people.

Yep I was this close

A fuckin' rainbow! Photo by "Lazereth of Nazereth"

Afterwards

Night two did have some gentler songs and moments to remember: a re-arranged Sanctified (which ended a nineteen-year hiatus in 2013), And All That Could Have Been (recorded 2002, first played 2018), and La Mer, during which the rain broke and we were presented with a beautiful pink-hued rainbow. They then segued into Less Than, providing the comic moment of the night when Trent noticed the rainbow mid-song; now a meme that will go down in NIN fan history.

Wrap-up

This was a blow-out, once in a lifetime trip to go and see a band who are at the top of their career in terms of performance. One problem I've had with NIN gigs in the past is suffering gig flashback to them when I go to other (inferior) gigs afterwards, and I'm pretty sure I will have this problem again. Doing both nights was worth it, the two experiences were very different and each had its own unique moments. The venue was incredible, and Cornwall is (modulo tourist trap stuff) beautiful.

Simon Josefsson: Privilege separation of GSS-API credentials for Apache

Tuesday 20th of September 2022 06:40:05 AM

To protect web resources with Kerberos you may use Apache HTTPD with mod_auth_gssapi — however, all web scripts (e.g., PHP) run under Apache will have access to the Kerberos long-term symmetric secret credential (keytab). If someone can get it, they can impersonate your server, which is bad.

The gssproxy project makes it possible to introduce privilege separation to reduce the attack surface. There is a tutorial for RPM-based distributions (Fedora, RHEL, AlmaLinux, etc), but I wanted to get this to work on a DPKG-based distribution (Debian, Ubuntu, Trisquel, PureOS, etc) and found it worthwhile to document the process. I’m using Ubuntu 22.04 below, but have tested it on Debian 11 as well. I have adopted the gssproxy package in Debian, and testing this setup is part of the scripted autopkgtest/debci regression testing.

First install the required packages:

root@foo:~# apt-get update root@foo:~# apt-get install -y apache2 libapache2-mod-auth-gssapi gssproxy curl

This should give you a working and running web server. Verify it is operational under the proper hostname, I’ll use foo.sjd.se in this writeup.

root@foo:~# curl --head http://foo.sjd.se/
HTTP/1.1 200 OK

The next step is to create a keytab containing the Kerberos V5 secrets for your host; the exact steps depend on your environment (usually kadmin ktadd or ipa-getkeytab), but use the string “HTTP/foo.sjd.se” and then confirm using something like the following.

root@foo:~# ls -la /etc/gssproxy/httpd.keytab -rw------- 1 root root 176 Sep 18 06:44 /etc/gssproxy/httpd.keytab root@foo:~# klist -k /etc/gssproxy/httpd.keytab -e Keytab name: FILE:/etc/gssproxy/httpd.keytab KVNO Principal ---- -------------------------------------------------------------------------- 2 HTTP/foo.sjd.se@GSSPROXY.EXAMPLE.ORG (aes256-cts-hmac-sha1-96) 2 HTTP/foo.sjd.se@GSSPROXY.EXAMPLE.ORG (aes128-cts-hmac-sha1-96) root@foo:~#

The file should be owned by root and not be in the default /etc/krb5.keytab location, so Apache’s libapache2-mod-auth-gssapi will have to use gssproxy to use it.
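If the keytab was created somewhere else and copied into place, it may be worth making sure the ownership and permissions match what is shown above, for example:

root@foo:~# chown root:root /etc/gssproxy/httpd.keytab
root@foo:~# chmod 600 /etc/gssproxy/httpd.keytab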

Then configure gssproxy to find the credential and use it with Apache.

root@foo:~# cat<<EOF > /etc/gssproxy/80-httpd.conf [service/HTTP] mechs = krb5 cred_store = keytab:/etc/gssproxy/httpd.keytab cred_store = ccache:/var/lib/gssproxy/clients/krb5cc_%U euid = www-data process = /usr/sbin/apache2 EOF

For debugging, it may be useful to enable more gssproxy logging:

root@foo:~# cat<<EOF > /etc/gssproxy/gssproxy.conf [gssproxy] debug_level = 1 EOF root@foo:~#

Restart gssproxy so it finds the new configuration, and monitor syslog as follows:

root@foo:~# tail -F /var/log/syslog & root@foo:~# systemctl restart gssproxy

You should see something like this in the log file:

Sep 18 07:03:15 foo gssproxy[4076]: [2022/09/18 05:03:15]: Exiting after receiving a signal
Sep 18 07:03:15 foo systemd[1]: Stopping GSSAPI Proxy Daemon…
Sep 18 07:03:15 foo systemd[1]: gssproxy.service: Deactivated successfully.
Sep 18 07:03:15 foo systemd[1]: Stopped GSSAPI Proxy Daemon.
Sep 18 07:03:15 foo gssproxy[4092]: [2022/09/18 05:03:15]: Debug Enabled (level: 1)
Sep 18 07:03:15 foo systemd[1]: Starting GSSAPI Proxy Daemon…
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Kernel doesn't support GSS-Proxy (can't open /proc/net/rpc/use-gss-proxy: 2 (No such file or directory))
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Problem with kernel communication! NFS server will not work
Sep 18 07:03:15 foo systemd[1]: Started GSSAPI Proxy Daemon.
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Initialization complete.

The NFS-related errors are due to a default gssproxy configuration file; they are harmless and if you don’t use NFS with GSS-API you can silence them like this:

root@foo:~# rm /etc/gssproxy/24-nfs-server.conf
root@foo:~# systemctl try-reload-or-restart gssproxy

The log should now indicate that it loaded the keytab:

Sep 18 07:18:59 foo systemd[1]: Reloading GSSAPI Proxy Daemon…
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: Received SIGHUP; re-reading config.
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: Service: HTTP, Keytab: /etc/gssproxy/httpd.keytab, Enctype: 18
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: New config loaded successfully.
Sep 18 07:18:59 foo systemd[1]: Reloaded GSSAPI Proxy Daemon.

To instruct Apache — or actually, the MIT Kerberos V5 GSS-API library used by mod_auth_gssapi, loaded by Apache — to use gssproxy instead of /etc/krb5.keytab as usual, Apache needs to be started in an environment that has GSS_USE_PROXY=1 set. The background is covered by the gssproxy-mech(8) man page and explained by the gssproxy README.

When systemd is used the following can be used to set the environment variable, note the final command to reload systemd.

root@foo:~# mkdir -p /etc/systemd/system/apache2.service.d root@foo:~# cat<<EOF > /etc/systemd/system/apache2.service.d/gssproxy.conf [Service] Environment=GSS_USE_PROXY=1 EOF root@foo:~# systemctl daemon-reload
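To double-check that the drop-in is picked up after the daemon-reload, something like the following should report GSS_USE_PROXY=1 among the unit's environment settings:

root@foo:~# systemctl show apache2.service -p Environment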

The next step is to configure a GSS-API protected Apache resource:

root@foo:~# cat<<EOF > /etc/apache2/conf-available/private.conf <Location /private> AuthType GSSAPI AuthName "GSSAPI Login" Require valid-user </Location> EOF

Enable the configuration and restart Apache — the suggested use of reload is not sufficient, because then it won’t be restarted with the newly introduced GSS_USE_PROXY variable. This only applies the first time; after that first restart you may use reload again.

root@foo:~# a2enconf private
Enabling conf private.
To activate the new configuration, you need to run:
systemctl reload apache2
root@foo:~# systemctl restart apache2

When you have debug messages enabled, the log may look like this:

Sep 18 07:32:23 foo systemd[1]: Stopping The Apache HTTP Server…
Sep 18 07:32:23 foo gssproxy[4182]: [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4651) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:
Sep 18 07:32:23 foo gssproxy[4182]: message repeated 4 times: [ [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4651) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:]
Sep 18 07:32:23 foo systemd[1]: apache2.service: Deactivated successfully.
Sep 18 07:32:23 foo systemd[1]: Stopped The Apache HTTP Server.
Sep 18 07:32:23 foo systemd[1]: Starting The Apache HTTP Server…
Sep 18 07:32:23 foo gssproxy[4182]: [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4657) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:
root@foo:~# Sep 18 07:32:23 foo gssproxy[4182]: message repeated 8 times: [ [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4657) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:]
Sep 18 07:32:23 foo systemd[1]: Started The Apache HTTP Server.

Finally, set up a dummy test page on the server:

root@foo:~# echo OK > /var/www/html/private

To verify that the server is working properly you may acquire tickets locally and then use curl to retrieve the GSS-API protected resource. The "--negotiate" enables SPNEGO and "--user :" asks curl to use username from the environment.

root@foo:~# klist Ticket cache: FILE:/tmp/krb5cc_0 Default principal: jas@GSSPROXY.EXAMPLE.ORG Valid starting Expires Service principal 09/18/22 07:40:37 09/19/22 07:40:37 krbtgt/GSSPROXY.EXAMPLE.ORG@GSSPROXY.EXAMPLE.ORG root@foo:~# curl --negotiate --user : http://foo.sjd.se/private OK root@foo:~#

The log should contain something like this:

Sep 18 07:56:00 foo gssproxy[4872]: [2022/09/18 05:56:00]: Client [2022/09/18 05:56:00]: (/usr/sbin/apache2) [2022/09/18 05:56:00]: connected (fd = 10)[2022/09/18 05:56:00]: (pid = 5042) (uid = 33) (gid = 33)[2022/09/18 05:56:00]:
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 1 (GSSX_INDICATE_MECHS) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 9 (GSSX_ACCEPT_SEC_CONTEXT) for service "HTTP", euid: 33,socket: (null)

The Apache log will look like this; notice the authenticated username shown.

127.0.0.1 - jas@GSSPROXY.EXAMPLE.ORG [18/Sep/2022:07:56:00 +0200] "GET /private HTTP/1.1" 200 481 "-" "curl/7.81.0"

Congratulations, and happy hacking!

Matthew Garrett: Handling WebAuthn over remote SSH connections

Tuesday 20th of September 2022 02:17:22 AM
Being able to SSH into remote machines and do work there is great. Using hardware security tokens for 2FA is also great. But trying to use them both at the same time doesn't work super well, because if you hit a WebAuthn request on the remote machine it doesn't matter how much you mash your token - it's not going to work.

But could it?

The SSH agent protocol abstracts key management out of SSH itself and into a separate process. When you run "ssh-add .ssh/id_rsa", that key is being loaded into the SSH agent. When SSH wants to use that key to authenticate to a remote system, it asks the SSH agent to perform the cryptographic signatures on its behalf. SSH also supports forwarding the SSH agent protocol over SSH itself, so if you SSH into a remote system then remote clients can also access your keys - this allows you to bounce through one remote system into another without having to copy your keys to those remote systems.
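
To make that delegation concrete, here is a minimal sketch of a Go program asking a running agent for its keys and a signature via golang.org/x/crypto/ssh/agent, assuming SSH_AUTH_SOCK points at the agent socket as usual; the private keys never leave the agent process.

// Minimal sketch: talk to a running ssh-agent over its UNIX socket.
package main

import (
    "fmt"
    "log"
    "net"
    "os"

    "golang.org/x/crypto/ssh/agent"
)

func main() {
    // The agent listens on the socket named by SSH_AUTH_SOCK.
    conn, err := net.Dial("unix", os.Getenv("SSH_AUTH_SOCK"))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := agent.NewClient(conn)

    // List the keys the agent currently holds.
    keys, err := client.List()
    if err != nil {
        log.Fatal(err)
    }
    for _, k := range keys {
        fmt.Println(k.Format, k.Comment)
    }

    // Ask the agent to sign some data with the first key; the signing
    // happens inside the agent (or on the hardware token behind it).
    if len(keys) > 0 {
        sig, err := client.Sign(keys[0], []byte("example data"))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("signature format:", sig.Format)
    }
}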

More recently, SSH gained the ability to store SSH keys on hardware tokens such as Yubikeys. If configured appropriately, this means that even if you forward your agent to a remote site, that site can't do anything with your keys unless you physically touch the token. But out of the box, this is only useful for SSH keys - you can't do anything else with this support.

Well, that's what I thought, at least. And then I looked at the code and realised that SSH is communicating with the security tokens using the same library that a browser would, except it ensures that any signature request starts with the string "ssh:" (which a genuine WebAuthn request never will). This constraint can actually be disabled by passing -O no-restrict-websafe to ssh-agent, except that was broken until this weekend. But let's assume there's a glorious future where that patch gets backported everywhere, and see what we can do with it.

First we need to load the key into the security token. For this I ended up hacking up the Go SSH agent support. Annoyingly it doesn't seem to be possible to make calls to the agent without going via one of the exported methods here, so I don't think this logic can be implemented without modifying the agent module itself. But this is basically as simple as adding another key message type that looks something like:
type ecdsaSkKeyMsg struct {
    Type        string `sshtype:"17|25"`
    Curve       string
    PubKeyBytes []byte
    RpId        string
    Flags       uint8
    KeyHandle   []byte
    Reserved    []byte
    Comments    string
    Constraints []byte `ssh:"rest"`
}

Where Type is ssh.KeyAlgoSKECDSA256, Curve is "nistp256", RpId is the identity of the relying party (eg, "webauthn.io"), Flags is 0x1 if you want the user to have to touch the key, KeyHandle is the hardware token's representation of the key (basically an opaque blob that's sufficient for the token to regenerate the keypair - this is generally stored by the remote site and handed back to you when it wants you to authenticate). The other fields can be ignored, other than PubKeyBytes, which is supposed to be the public half of the keypair.

This causes an obvious problem. We have an opaque blob that represents a keypair. We don't have the public key. And OpenSSH verifies that PubKeyBytes is a legitimate ecdsa public key before it'll load the key. Fortunately it only verifies that it's a legitimate ecdsa public key, and does nothing to verify that it's related to the private key in any way. So, just generate a new ECDSA key (ecdsa.GenerateKey(elliptic.P256(), rand.Reader)) and marshal it (elliptic.Marshal(ecKey.Curve, ecKey.X, ecKey.Y)) and we're good. Pass that struct to ssh.Marshal() and then make an agent call.
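
Putting that together, a sketch of this step might look like the following; it assumes the ecdsaSkKeyMsg struct above has been added to a locally modified agent module, and that keyHandle is the opaque blob the token or relying party handed you.

// Sketch only: ecdsaSkKeyMsg is the hypothetical message type defined
// above, living in a locally modified copy of the agent module.
package skagent

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"

    "golang.org/x/crypto/ssh"
)

func buildSkKeyMsg(rpId string, keyHandle []byte) ([]byte, error) {
    // Throwaway keypair: OpenSSH only checks that PubKeyBytes is a valid
    // ECDSA public key, not that it has anything to do with the key handle.
    ecKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return nil, err
    }
    pubKeyBytes := elliptic.Marshal(ecKey.Curve, ecKey.X, ecKey.Y)

    msg := ecdsaSkKeyMsg{
        Type:        ssh.KeyAlgoSKECDSA256,
        Curve:       "nistp256",
        PubKeyBytes: pubKeyBytes,
        RpId:        rpId,      // eg, "webauthn.io"
        Flags:       0x1,       // require a physical touch on the token
        KeyHandle:   keyHandle, // opaque blob from the relying party
    }
    // These bytes are what gets handed to the (modified) agent.
    return ssh.Marshal(msg), nil
}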

Now you can use the standard agent interfaces to trigger a signature event. You want to pass the raw challenge (not the hash of the challenge!) - the SSH code will do the hashing itself. If you're using agent forwarding this will be forwarded from the remote system to your local one, and your security token should start blinking - touch it and you'll get back an ssh.Signature blob. ssh.Unmarshal() the Blob member to a struct like
type ecSig struct {
    R *big.Int
    S *big.Int
}

and then ssh.Unmarshal the Rest member to

type authData struct {
    Flags    uint8
    SigCount uint32
}

The signature needs to be converted back to a DER-encoded ASN.1 structure (eg,

var b cryptobyte.Builder
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
    b.AddASN1BigInt(ecSig.R)
    b.AddASN1BigInt(ecSig.S)
})
signatureDER, _ := b.Bytes()

and then you need to construct the Authenticator Data structure. For this, take the RpId used earlier and generate the sha256. Append the one byte Flags variable, and then convert SigCount to big endian and append those 4 bytes. You should now have a 37 byte structure. This needs to be CBOR encoded (I used github.com/fxamacker/cbor and just called cbor.Marshal(data, cbor.EncOptions{})).
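
As a sketch of that assembly (using github.com/fxamacker/cbor/v2 here; the post used an earlier cbor API):

// Sketch: sha256(RpId) || Flags (1 byte) || SigCount (4 bytes, big endian),
// 37 bytes in total, then CBOR-encoded as described above.
package skagent

import (
    "crypto/sha256"
    "encoding/binary"

    "github.com/fxamacker/cbor/v2"
)

func buildAuthData(rpId string, flags uint8, sigCount uint32) ([]byte, error) {
    rpIdHash := sha256.Sum256([]byte(rpId))

    data := make([]byte, 0, 37)
    data = append(data, rpIdHash[:]...) // 32 bytes
    data = append(data, flags)          // 1 byte

    var sc [4]byte
    binary.BigEndian.PutUint32(sc[:], sigCount)
    data = append(data, sc[:]...) // 4 bytes

    return cbor.Marshal(data)
}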

Now base64 encode the sha256 of the challenge data, the DER-encoded signature and the CBOR-encoded authenticator data and you've got everything you need to provide to the remote site to satisfy the challenge.
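
For illustration, a sketch of that final packaging step; the map keys here are placeholders rather than a real WebAuthn wire format, and signatureDER and cborAuthData come from the previous steps.

// Sketch: base64-encode the three pieces so they can be handed back to
// the remote site's javascript. Field names are illustrative only.
package skagent

import (
    "crypto/sha256"
    "encoding/base64"
)

func packageAssertion(challenge, signatureDER, cborAuthData []byte) map[string]string {
    challengeHash := sha256.Sum256(challenge)
    return map[string]string{
        "challengeHash":     base64.StdEncoding.EncodeToString(challengeHash[:]),
        "signature":         base64.StdEncoding.EncodeToString(signatureDER),
        "authenticatorData": base64.StdEncoding.EncodeToString(cborAuthData),
    }
}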

There are alternative approaches - you can use USB/IP to forward the hardware token directly to the remote system. But that means you can't use it locally, so it's less than ideal. Or you could implement a proxy that communicates with the key locally and have that tunneled through to the remote host, but at that point you're just reinventing ssh-agent.

And you should bear in mind that the default behaviour of blocking this sort of request is for a good reason! If someone is able to compromise a remote system that you're SSHed into, they can potentially trick you into hitting the key to sign a request they've made on behalf of an arbitrary site. Obviously they could do the same without any of this if they've compromised your local system, but there is some additional risk to this. It would be nice to have sensible MAC policies that default-denied access to the SSH agent socket and only allowed trustworthy binaries to do so, or maybe have some sort of reasonable flatpak-style portal to gate access. For my threat model I think it's a worthwhile security tradeoff, but you should evaluate that carefully yourself.

Anyway. Now to figure out whether there's a reasonable way to get browsers to work with this.

More in Tux Machines

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install go1.19beta on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library; it is reliable, builds fast, and produces efficient software that scales well. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. Note that Go 1.19 is still in beta, and there is much work in progress, including documentation.

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project to help you test your ansible roles. I’m using molecule for automatically testing the ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn't know, MongoDB is a high-performance, highly scalable, document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, MongoDB structures data in a JSON-like format inside records referred to as documents. MongoDB's open-source nature makes it an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9; the same instructions also work for CentOS and Rocky Linux.

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. It started as an alternative to WordPress and went on to become an alternative to Substack by focusing on memberships and newsletters. The creators of Ghost offer managed Pro hosting, but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux Handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of an Nginx reverse proxy, you can also use Traefik with Docker. Traefik is a popular open-source cloud-native application proxy, API gateway, edge router, and more. I use Traefik to secure my websites with SSL certificates obtained from Let's Encrypt; once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.

Red Hat Hires a Blind Software Engineer to Improve Accessibility on Linux Desktop

Accessibility on the Linux desktop is not one of its strongest points. However, GNOME, one of the best desktop environments, has managed to do comparatively better (I think). In a blog post, Christian Fredrik Schaller (Director for Desktop/Graphics, Red Hat) mentions that they are making serious efforts to improve accessibility, starting with hiring Lukas Tyrychtr, a blind software engineer, to lead the effort to improve accessibility in Red Hat Enterprise Linux and Fedora Workstation. Read more
