Planet Debian - https://planet.debian.org/

David Bremner: Yet another buildinfo database.

Tuesday 23rd of July 2019 03:00:00 AM
What?

I previously posted about my extremely quick-and-dirty buildinfo database using buildinfo-sqlite. This year at DebConf, I re-implemented this using a PostgreSQL backend and added some new features.

There are already buildinfo and buildinfos. I was informed I need to think up a name that clearly distinguishes this one from those two. Thus I give you builtin-pho.

There's a README for how to set up a local database. You'll need 12GB of disk space for the buildinfo files and another 4GB for the database (pro tip: you might want to change the location of your PostgreSQL data_directory, depending on how roomy your /var is).

Demo 1: find things built against old / buggy Build-Depends

    select distinct p.source, p.version, d.version, b.path
      from binary_packages p, builds b, depends d
     where p.suite='sid'
       and b.source=p.source
       and b.arch_all and p.arch = 'all'
       and p.version = b.version
       and d.id=b.id
       and d.depend='dh-elpa'
       and d.version < debversion '1.16'

Demo 2: find packages in sid without buildinfo files

    select distinct p.source, p.version
      from binary_packages p
     where p.suite='sid'
    except
    select p.source, p.version
      from binary_packages p, builds b
     where b.source=p.source
       and p.version=b.version
       and ( (b.arch_all and p.arch='all') or
             (b.arch_amd64 and p.arch='amd64') )

Disclaimer

Work in progress by an SQL newbie.

Candy Tsai: Outreachy Week 6 – Week 7: Getting Code Merge

Monday 22nd of July 2019 10:20:46 AM

Already halfway through the internship! I have implemented some features and opened a merge request. So… what now? Let’s get those changes merged once and for all! Since I’m already at the mid-point, there’s also a video shared on what I’ve done so far in this project.

  • Breaking large merge request into smaller pieces
  • Thoughts on remote pair programming
  • Video sharing for the current progress with the project

Making that video was probably the most time-consuming part. Paying great respects to all YouTubers out there!

Breaking The Merge Request

When I looked back at my merge request, it actually started out quite small and precise. After discussions in the merge request, I started to fix things in the same merge request and it just got bigger and bigger, so we had to separate out the “mergeable parts” to make actual progress in this project.

Remote Pair Programming

You can’t overhear what others are doing or learn something about your colleagues through gossip over lunch break when working remotely. So after being stuck for quite a bit, terceiro suggested that we try pair programming.

After our first remote pair programming session, I think it is hardly different from pair programming in person. We shared the same terminal, looked at the same code and discussed just like people standing side by side.

Through our pair programming session, I found out that I had a bad habit. I didn’t run tests on my code that often, so when I had failing tests that didn’t fail before, I spent more time debugging than I should have. Pair programming gave me insight into how others work, and I think little improvements go a long way.

Week 6

And then I took almost a week off, so my week 7 was delayed.

Week 7

I found out that I can make small merge requests and list the merge requests they depend on. GitLab will automatically handle the rest for me once a request is merged.

  • finally finished breaking down my large merge request
  • added the history section

Daniel Lange: Security is hard, open source security unnecessarily harder

Monday 22nd of July 2019 01:15:16 AM

Now it is a commonplace that security is hard. It involves advanced mathematics and a single, tiny mistake or omission in implementation can spoil everything.

And the only sane IT security can be open source security. Because you need to assess the algorithms and their implementation and you need to be able to completely verify the implementation. You simply can't if you don't have the code and can't compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

Now, securely encrypting a mass communication media like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless1.
Still it makes wiretapping harder and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. A cleaner code for the chosen few. Duh.

I predict the re-implementations and gradual improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well documented test suites but products. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email. So the VC-powered ("next WhatsApp") mass-adoption markets but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code base, a new open source product. Sadly, the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people bug hunting.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decade old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack but we need to look into getting rid of gpg and OpenSSL and replacing it with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five year funding plan? $3m2 annually. ROCE 0. But a very good chance to get the OBE awarded.

Updates:

21.07.19: A current essay on "The PGP problem" is making the rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition correct. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another try to get GPG usable for more people than a small technical elite), he specifically created different identities for himself as a person and his two main email addresses. Two days later he had to invalidate his new gpg key and back off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably" 3 still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. Doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.
  1. Quakenet removed that article sometime in 2018, citing "near constant misrepresentation of the presented argument". The contents (not misrepresented) are still valid, so I have added an archive.org Wayback Machine link instead. 

  2. The estimate was $2m until end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about sidechannel attacks on big-LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Giovanni Mascellani: Bootstrappable Debian BoF

Monday 22nd of July 2019 12:30:00 AM

Greetings from DebConf 19 in Curitiba! Just a quick reminder that I will run a Bootstrappable Debian BoF on Tuesday 23rd, at 13.30 Brasilia time (which is 16.30 UTC, if I am not mistaken). If you are curious about bootstrappability in Debian, why we want it and where we are right now, you are welcome to come in person if you are at DebConf or to follow the streaming.

Vincent Bernat: A Makefile for your Go project (2019)

Sunday 21st of July 2019 07:20:39 PM

My most loathed feature of Go was the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. I was not alone and people devised tools or crafted their own Makefile to avoid organizing their code around GOPATH.

Fortunately, since Go 1.11, it is possible to use Go’s modules to manage dependencies without relying on GOPATH. First, you need to convert your project to a module:1

    $ go mod init hellogopher
    go: creating new go.mod: module hellogopher
    $ cat go.mod
    module hellogopher

Then, you can invoke the usual commands, like go build or go test. The go command resolves imports by using versions listed in go.mod. When it runs into an import of a package not present in go.mod, it automatically looks up the module containing that package using the latest version and adds it.

    $ go test ./...
    go: finding github.com/spf13/cobra v0.0.5
    go: downloading github.com/spf13/cobra v0.0.5
    ?   hellogopher         [no test files]
    ?   hellogopher/cmd     [no test files]
    ok  hellogopher/hello   0.001s
    $ cat go.mod
    module hellogopher

    require github.com/spf13/cobra v0.0.5

If you want a specific version, you can either edit go.mod or invoke go get:

    $ go get github.com/spf13/cobra@v0.0.4
    go: finding github.com/spf13/cobra v0.0.4
    go: downloading github.com/spf13/cobra v0.0.4
    $ cat go.mod
    module hellogopher

    require github.com/spf13/cobra v0.0.4

Add go.mod to your version control system. Optionally, you can also add go.sum as a safety net against overridden tags. If you really want to vendor the dependencies, you can invoke go mod vendor and add the vendor/ directory to your version control system.

Thanks to the modules, in my opinion, Go’s dependency management is now on a par with other languages, like Ruby. While it is possible to run day-to-day operations—building and testing—with only the go command, a Makefile can still be useful to organize common tasks, a bit like Python’s setup.py or Ruby’s Rakefile. Let me describe mine.

Using third-party tools

Most projects need some third-party tools for testing or building. We can either expect them to be already installed or compile them on the fly. For example, here is how code linting is done with Golint:

    BIN = $(CURDIR)/bin
    $(BIN):
        @mkdir -p $@
    $(BIN)/%: | $(BIN)
        @tmp=$$(mktemp -d); \
           env GO111MODULE=off GOPATH=$$tmp GOBIN=$(BIN) go get $(PACKAGE) \
            || ret=$$?; \
           rm -rf $$tmp ; exit $$ret

    $(BIN)/golint: PACKAGE=golang.org/x/lint/golint

    GOLINT = $(BIN)/golint
    lint: | $(GOLINT)
        $(GOLINT) -set_exit_status ./...

The first block defines how a third-party tool is built: go get is invoked with the package name matching the tool we want to install. We do not want to pollute our dependency management and therefore, we are working in an empty GOPATH. The generated binaries are put in bin/.

The second block extends the pattern rule defined in the first block by providing the package name for golint. Additional tools can be added by just adding another line like this.

The last block defines the recipe to lint the code. The default linting tool is the golint built using the first block, but it can be overridden with make GOLINT=/usr/bin/golint.

Tests

Here are some rules to help running tests:

    TIMEOUT  = 20
    PKGS     = $(or $(PKG),$(shell env GO111MODULE=on $(GO) list ./...))
    TESTPKGS = $(shell env GO111MODULE=on $(GO) list -f \
                '{{ if or .TestGoFiles .XTestGoFiles }}{{ .ImportPath }}{{ end }}' \
                $(PKGS))

    TEST_TARGETS := test-default test-bench test-short test-verbose test-race
    test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
    test-short:   ARGS=-short
    test-verbose: ARGS=-v
    test-race:    ARGS=-race
    $(TEST_TARGETS): test
    check test tests: fmt lint
        go test -timeout $(TIMEOUT)s $(ARGS) $(TESTPKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented, otherwise the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you need some additional tools.

    COVERAGE_MODE    = atomic
    COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
    COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
    COVERAGE_HTML    = $(COVERAGE_DIR)/index.html
    test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❶
    test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
    test-coverage: fmt lint test-coverage-tools
        @mkdir -p $(COVERAGE_DIR)/coverage
        @for pkg in $(TESTPKGS); do \ # ❷
            go test \
                -coverpkg=$$(go list -f '{{ join .Deps "\n" }}' $$pkg | \
                        grep '^$(MODULE)/' | \
                        tr '\n' ',')$$pkg \
                -covermode=$(COVERAGE_MODE) \
                -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
         done
        @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
        @go tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
        @$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)

First, we define some variables to let the user override them. In ❶, we require the following tools—built like golint previously:

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format, for Jenkins;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

In ❷, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get a list of dependencies for the tested package and keeping only our own.

Build

Another useful recipe is to build the program. While this could be done with just go build, it is not uncommon to have to specify build tags, additional flags, or to execute supplementary build steps. In the following example, the version is extracted from Git tags. It will replace the value of the Version variable in the hellogopher/cmd package.

    VERSION ?= $(shell git describe --tags --always --dirty --match=v* 2> /dev/null || \
                echo v0)
    all: fmt lint | $(BIN)
        go build \
            -tags release \
            -ldflags '-X hellogopher/cmd.Version=$(VERSION)' \
            -o $(BIN)/hellogopher main.go

The recipe also runs code formatting and linting.

The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks, including fancy output and integrated help!

  1. For an application not meant to be used as a library, I prefer to use a short name instead of a name derived from a URL, like github.com/vincentbernat/hellogopher. It makes it easier to read import sections:

    import (
        "fmt"
        "os"

        "hellogopher/cmd"

        "github.com/pkg/errors"
        "github.com/spf13/cobra"
    )

    ↩︎

Bits from Debian: DebConf19 starts today in Curitiba

Sunday 21st of July 2019 07:10:00 PM

DebConf19, the 20th annual Debian Conference, is taking place in Curitiba, Brazil from July 21 to 28, 2019.

Debian contributors from all over the world have come together at Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, to participate and work in a conference exclusively run by volunteers.

Today the main conference starts with over 350 attendees expected and 121 activities scheduled, including 45- and 20-minute talks and team meetings ("BoF"), workshops, a job fair as well as a variety of other events.

The full schedule at https://debconf19.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming available from the DebConf19 website of the events happening in the three talk rooms: Auditório (the main auditorium), Miniauditório and Sala de Videoconferencia. Or you can join the conversation about what is happening in the talk rooms: #debconf-auditorio, #debconf-miniauditorio and #debconf-videoconferencia (all those channels in the OFTC IRC network).

You can also follow the live coverage of news about DebConf19 on https://micronews.debian.org or the @debian profile in your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Anti-Harassment team) are available to help so that both on-site and remote participants get the best experience at the conference, and find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf19 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

Holger Levsen: 20190721-piuparts-was-not-down

Sunday 21st of July 2019 05:31:32 PM
piuparts.debian.org was not down for maintenance

I hadn't shut down piuparts.debian.org for maintenance, I just said so, to make you attend my talk, as my last call for help at DebConf17 was attended by 3 people only...

So please join the session about piuparts(d.o.) today at 14:30 localtime.

Please help help help!

Sylvain Beucler: Planet clean-up

Sunday 21st of July 2019 04:57:42 PM

I did some clean-up / resync on the planet.gnu.org setup

  • Fix issue with newer https websites (SNI)
  • Re-sync Debian base config, scripts and packaging, update documentation; the planet-venus package is still in bad shape though, it's not officially orphaned but the maintainer is unreachable AFAICS
  • Fetch all Savannah feeds using https
  • Update feeds with redirections, which seem to mess-up caching

Dirk Eddelbuettel: RPushbullet 0.3.2

Sunday 21st of July 2019 02:28:00 PM

A new release 0.3.2 of the RPushbullet package is now on CRAN. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts like the one to the left to your browser, phone, tablet, … – or all at once.

This is the first new release in almost 2 1/2 years, and it once again benefits greatly from contributed pull requests by Colin (twice!) and Chan-Yub – see below for details.

Changes in version 0.3.2 (2019-07-21)
  • The Travis setup was robustified with respect to the token needed to run tests (Dirk in #48)

  • The configuration file is now readable only by the user (Colin Gillespie in #50)

  • At startup initialization is now more consistent (Colin Gillespie in #53 fixing #52)

  • A new function to fetch prior posts was added (Chanyub Park in #54).

Courtesy of CRANberries, there is also a diffstat report for this release. More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jose M. Calhariz: New release of switchconf 0.0.16

Saturday 20th of July 2019 11:56:00 PM

I have not touched switchconf for a long time. Being at DebCamp19 was a good time to work on it.

I have moved the development of switchconf from a private svn repo to a git repo on salsa: https://salsa.debian.org/debian/switchconf. I created a virtual host called http://software.calhariz.com where I will publish the sources of the software that I take care of. I updated the Makefile for the git repo and released version 0.0.16.

You can download the latest version of switchconf from here: http://software.calhariz.com/switchconf

John Goerzen: Alas, Poor PGP

Saturday 20th of July 2019 11:15:11 PM

Over in The PGP Problem, there’s an extended critique of PGP (and also specifics of the GnuPG implementation) in a modern context. Robert J. Hansen, one of the core GnuPG developers, has an interesting response:

First, RFC4880bis06 (the latest version) does a pretty good job of bringing the crypto angle to a more modern level. There’s a massive installed base of clients that aren’t aware of bis06, and if you have to interoperate with them you’re kind of screwed: but there’s also absolutely nothing prohibiting you from saying “I’m going to only implement a subset of bis06, the good modern subset, and if you need older stuff then I’m just not going to comply.” Sequoia is more or less taking this route — more power to them.

Second, the author makes a couple of mistakes about the default ciphers. GnuPG has defaulted to AES for many years now: CAST5 is supported for legacy reasons (and I’d like to see it dropped entirely: see above, etc.).

Third, a couple of times the author conflates what the OpenPGP spec requires with what it permits, and with how GnuPG implements it. Cleaner delineation would’ve made the criticisms better, I think.

But all in all? It’s a good criticism.

The problem is, where does that leave us? I found the suggestions in the original author’s article (mainly around using IM apps such as Signal) to be unworkable in a number of situations.

The Problems With PGP

Before moving on, let’s tackle some of the problems identified.

The first is an assertion that email is inherently insecure and can’t be made secure. There are some fairly convincing arguments to be made on that score; as it currently stands, there is little ability to hide metadata from prying eyes. And any format that is capable of talking on the network — as HTML is — is just begging for vulnerabilities like EFAIL.

But PGP isn’t used just for this. In fact, one could argue that sending a binary PGP message as an attachment gets around a lot of that email clunkiness — and would be right, at the expense of potentially more clunkiness (and forgetfulness).

What about the web-of-trust issues? I’m in agreement. I have never really used the WoT to authenticate a key, only in rare instances trusting an introducer I know personally and from personal experience understand how stringent they are in signing keys. But this is hardly a problem for PGP alone. Every encryption tool mentioned has the problem of validating keys. The author suggests Signal. Signal has some very strong encryption, but you have to have a phone number and a smartphone to use it. Signal’s strength when setting up a remote contact is only as strong as SMS. Let that disheartening reality sink in for a bit. (A little social engineering could probably get many contacts to accept a hijacked SIM in Signal as well.)

How about forward secrecy? This is protection against a private key that gets compromised in the future, because an ephemeral session key (or more than one) is negotiated on each communication, and the secret key is never stored. This is a great plan, but it really requires synchronous communication (or something approaching it) between the sender and the recipient. It can’t be used if I want to, for instance, burn a backup onto a Blu-ray and give it to a friend for offsite storage without giving the friend access to its contents. There are many, many situations where synchronous key negotiation is impossible, so although forward secrecy is great and a nice enhancement, we should not assume it to be always applicable.

The saltpack folks have a more targeted list of PGP message format problems. Both they, and the article I link above, complain about the gpg implementation of PGP. There is no doubt some truth to these. Among them is a complaint that gpg can emit unverified data. Well sure, because it has a streaming mode. It exits with a proper error code and warnings if a verification fails at the end — just as gzcat does. This is a part of the API that the caller needs to be aware of. It sounds like some callers weren’t handling this properly, but it’s just a function of a streaming tool.
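To make that point concrete, here is a minimal sketch of my own (not from the original post) of how a caller might consume gpg's streaming output safely; the file name is hypothetical, and the key point is that nothing read from the pipe should be trusted until the exit status has been checked:

    import subprocess

    # Decrypt/verify a file in streaming mode and only trust the output
    # after gpg's exit status has been checked (non-zero means failure).
    proc = subprocess.Popen(
        ["gpg", "--batch", "--decrypt", "backup.tar.gpg"],  # hypothetical file
        stdout=subprocess.PIPE)

    chunks = []
    for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
        chunks.append(chunk)  # or stream to disk, a pipe, etc.

    proc.stdout.close()
    if proc.wait() != 0:
        # Verification or decryption failed: discard everything received.
        raise RuntimeError("gpg reported an error; do not trust the output")

    plaintext = b"".join(chunks)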

Suggested Solutions

The Signal suggestion is perfectly reasonable in a lot of cases. But the suggestion to use WhatsApp — a proprietary application from a corporation known to brazenly lie about privacy — is suspect. It may have great crypto, but if it uploads your address book to a suspicious company, is it a great app?

Magic Wormhole is a pretty neat program I hadn’t heard of before. But it should be noted it’s written in Python, so it’s probably unlikely to be using locked memory.

How about backup encryption? Backups are a lot more than just filesystem; maybe somebody has a 100GB MySQL or zfs send stream. How should this be encrypted?

My current estimate is that there’s no magic solution right now. The Sequoia PGP folks seem to have a good thing going, as does Saltpack. Both projects are early in development, so as a privacy-concerned person, should you trust them more than GPG with appropriate options? That’s really hard to say.

Additional Discussions

Gunnar Wolf: DebConf19 Key Signing Party: Your personalized map is ready!

Saturday 20th of July 2019 06:13:07 PM

When facing a large key signing party in a group, even a group you are already well socially connected in, you often lose track of whom you have cross-signed with already, and of who is farther away from you (in the interest of better weaving the Web of Trust)...

So, with Samuel having announced the DebConf19 KSP fingerprints list, I hacked a bit to improve the scripts I used in previous years, and... Behold!

The DC19 KSP personalized maps!

This time it's even color-coded! People you have not cross-signed with are in light grey. People whose keys have been signed by you are presented with blue text. People that have signed your key are presented with green background. Of course, people you have cross-signed with have blue text and green background :-]

The graph is up to date as of early today, pulling the data from keys.gnupg.net. Sorry for the huge size, but it's the only way I found to make it useful for seeing both the big picture and the detailed information. Of course — You can zoom in and out at will!

Steinar H. Gunderson: Nageru 1.9.0 released

Saturday 20th of July 2019 04:39:00 PM

I've just released version 1.9.0 of Nageru, my live video mixer. This contains some fairly significant changes to the way themes work, and I'd like to elaborate a bit about why:

Themes in Nageru govern what's put on screen at any given time (this includes the actual output, of course, but also the preview channels shown in the UI). They were always a compromise between flexibility and implementation cost; with limited resources, I just could not create a full-fledged animation studio like VizRT has.

Themes work by defining chains (now called scenes) at startup, which get optimized and compiled down to a set of OpenGL shaders. In the beginning, most chains were fairly pedestrian; take an input and put it on screen:

    local chain = EffectChain.new(16, 9)
    input = chain:add_live_input(false, false)
    chain:finalize(true)

You'd actually have to create each chain twice, since the live output and the previews need different output formats (Y'CbCr vs. RGB), but that wasn't worse than a little for loop and calling finalize() with true or false, respectively.

After a while, one would want to support e.g. 1080p inputs in a 720p stream. By default, those would be scaled directly by the GPU, which is acceptable but not the best one could do, so one would add a high-quality Lanczos3 resampler:

    local chain = EffectChain.new(16, 9)
    input = chain:add_live_input(false, false)
    local resample_effect = chain:add_effect(ResampleEffect.new())
    resample_effect:set_int("width", 1280)
    resample_effect:set_int("height", 720)
    chain:finalize(true)

But you wouldn't want to waste GPU resources on resampling if the input signal were already the right resolution, so you'd build chains with and without resampling and choose the right one ahead-of-time.

At some point, Nageru started supporting interlaced inputs, by means of deinterlacing them. This requires adding a deinterlacer to the chain and also keeping some history of previous frames; most of this is transparent, but it would need to be specified when building the chain. So now we're up to eight possibilities; all combinations of deinterlacing on/off, scaling on/off, and preview or live output.

And as I started doing sports, I wanted fades. This means you have two different inputs to deal with, and you're up to 32 different kinds. And as Nageru started to support multiple input types, such as images or HTML inputs (rendered via an embedded Chromium), there would be even more. And the for loops would grow, and be replaced by some fairly elaborate multidimensional Lua tables, and as I one day needed to add crop support for some inputs to alleviate letterboxing, I thought working with themes does not spark joy and it was time to do something.

So Nageru 1.9.0 moves a lot of this complexity to where you no longer need to think about it. You just do add_input(), and that can display any kind of input (be it progressive, deinterlaced, image, or HTML). And you can add optional effects, such as the resampling mentioned above, and turn them on and off as needed. You're still writing themes in Lua instead of drawing boxes in a neat GUI, and there's still combinatorial explosion behind the scenes (no pun intended), but it's much, much more manageable. Here's an example from the included theme:

    local scene = Scene.new(16, 9)
    local simple_scene = {
        scene = scene,
        input = scene:add_input(),
        resample_effect = scene:add_effect({ResampleEffect.new(), ResizeEffect.new(), IdentityEffect.new()}),
        wb_effect = scene:add_effect(WhiteBalanceEffect.new())
    }
    scene:finalize()

So that's an input (of any kind), a high-quality resize, low-quality resize or no resize, a white balance adjustment, and then finalization. This becomes 24 different sets of shaders internally, and you don't really need to know anything about it. You just do

    simple_scene.resample_effect:choose(ResampleEffect)
    simple_scene.resample_effect:set_int("width", width)
    simple_scene.resample_effect:set_int("height", height)

or

simple_scene.resample_effect:disable() -- No scaling.

There are also many other small tweaks to how themes work, I believe all of them strongly for the better. However, all old themes continue to work as before; I don't like breaking people's hard work for no reason. I do recommend you move to the newer interfaces as soon as possible, though!

As usual, Nageru 1.9.0 can be downloaded from https://nageru.sesse.net/. It is also uploaded to Debian experimental, but not to unstable yet—it depends on a newer version of bmusb clearing the NEW queue for a soname bump. The documentation is updated with the new theme interfaces, too.

Bits from Debian: DebConf19 invites you to Debian Open Day at the Federal University of Technology - Paraná (UTFPR), in Curitiba

Saturday 20th of July 2019 04:30:00 PM

DebConf, the annual conference for Debian contributors and users interested in improving the Debian operating system, will be held in Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28, 2019. The conference is preceded by DebCamp from July 14 to 19, and the DebConf19 Open Day on July 20.

The Open Day, Saturday, 20 July, is targeted at the general public. Events of interest to a wider audience will be offered, ranging from topics specific to Debian to the greater Free Software community and maker movement.

The event is a perfect opportunity for interested users to meet the Debian community, for Debian to broaden its community, and for the DebConf sponsors to increase their visibility.

Less purely technical than the main conference schedule, the events on Open Day will cover a large range of topics from social and cultural issues to workshops and introductions to Debian.

The detailed schedule of the Open Day's events includes events in English and Portuguese. Some of the talks are:

  • "The metaverse, gaming and the metabolism of cities" by Bernelle Verster
  • "O Projeto Debian quer você!" by Paulo Henrique de Lima Santana
  • "Protecting Your Web Privacy with Free Software" by Pedro Barcha
  • "Bastidores Debian - Entenda como a distribuição funciona" by Joao Eriberto Mota Filho
  • "Caninos Loucos: a plataforma nacional de Single Board Computers para IoT" by geonnave
  • "Debian na vida de uma Operadora de Telecom" by Marcelo Gondim
  • "Who's afraid of Spectre and Meltdown?" by Alexandre Oliva
  • "New to DebConf BoF" by Rhonda D'Vine

During the Open Day, there will also be a Job Fair with booths from several of our sponsors, a workshop about the Git version control system and a Debian installfest, for attendees who would like to get help installing Debian on their machines.

Everyone is welcome to attend. As with the rest of the conference, attendance is free of charge, but registration on the DebConf19 website is highly recommended.

The full schedule for the Open Day's events and the rest of the conference is at https://debconf19.debconf.org/schedule and the video streaming will be available at the DebConf19 website

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the numerous sponsors for their commitment to DebConf19, particularly its Platinum Sponsors: Infomaniak, Google and Lenovo.

Holger Levsen: 20190719-piuparts-down

Saturday 20th of July 2019 03:14:09 PM
piuparts.debian.org down for maintenance

So I've just shut down piuparts.debian.org for maintenance; the website is still up, but the slaves won't be running for the next week. I think this will block testing migration for a few packages, but that's probably how it is. (Edit 2019-7-21: this was a joke to make you attend the talk.)

If you want to know more, please join my session about piuparts(d.o.) tomorrow on the first day of DebConf19 at 14:30 localtime.

With a little help from some friends the service should soon be running nicely again for many more years!

Please help help help!

Vincent Bernat: Writing sustainable Python scripts

Saturday 20th of July 2019 03:04:10 PM

Python is a great language to write a standalone script. Getting to the result can be a matter of a dozen to a few hundred lines of code and, moments later, you can forget about it and focus on your next task.

Six months later, a co-worker asks you why the script fails and you don’t have a clue: no documentation, hard-coded parameters, nothing logged during the execution and no sensible tests to figure out what may go wrong.

Turning a “quick-and-dirty” Python script into a sustainable version, which will be easy to use, understand and support by your co-workers and your future self, only takes some moderate effort. As an illustration, let’s start from the following script solving the classic Fizz-Buzz test:

    import sys
    for n in range(int(sys.argv[1]), int(sys.argv[2])):
        if n % 3 == 0 and n % 5 == 0:
            print("fizzbuzz")
        elif n % 3 == 0:
            print("fizz")
        elif n % 5 == 0:
            print("buzz")
        else:
            print(n)

Documentation

I find it useful to write documentation before coding: it makes the design easier and it ensures I will not postpone this task indefinitely. The documentation can be embedded at the top of the script:

    #!/usr/bin/env python3

    """Simple fizzbuzz generator.

    This script prints out a sequence of numbers from a provided range
    with the following restrictions:

     - if the number is divisible by 3, then print out "fizz",
     - if the number is divisible by 5, then print out "buzz",
     - if the number is divisible by 3 and 5, then print out "fizzbuzz".
    """

The first line is a short summary of the script's purpose. The remaining paragraphs contain additional details on its action.

Command-line arguments

The second task is to turn hard-coded parameters into documented and configurable values through command-line arguments, using the argparse module. In our example, we ask the user to specify a range and allow them to modify the modulo values for “fizz” and “buzz”.

    import argparse
    import sys


    class CustomFormatter(argparse.RawDescriptionHelpFormatter,
                          argparse.ArgumentDefaultsHelpFormatter):
        pass


    def parse_args(args=sys.argv[1:]):
        """Parse arguments."""
        parser = argparse.ArgumentParser(
            description=sys.modules[__name__].__doc__,
            formatter_class=CustomFormatter)

        g = parser.add_argument_group("fizzbuzz settings")
        g.add_argument("--fizz", metavar="N", default=3, type=int,
                       help="Modulo value for fizz")
        g.add_argument("--buzz", metavar="N", default=5, type=int,
                       help="Modulo value for buzz")

        parser.add_argument("start", type=int, help="Start value")
        parser.add_argument("end", type=int, help="End value")

        return parser.parse_args(args)


    options = parse_args()
    for n in range(options.start, options.end + 1):
        # ...

The added value of this modification is tremendous: parameters are now properly documented and are discoverable through the --help flag. Moreover, the documentation we wrote in the previous section is also displayed:

    $ ./fizzbuzz.py --help
    usage: fizzbuzz.py [-h] [--fizz N] [--buzz N] start end

    Simple fizzbuzz generator.

    This script prints out a sequence of numbers from a provided range
    with the following restrictions:

     - if the number is divisible by 3, then print out "fizz",
     - if the number is divisible by 5, then print out "buzz",
     - if the number is divisible by 3 and 5, then print out "fizzbuzz".

    positional arguments:
      start       Start value
      end         End value

    optional arguments:
      -h, --help  show this help message and exit

    fizzbuzz settings:
      --fizz N    Modulo value for fizz (default: 3)
      --buzz N    Modulo value for buzz (default: 5)

The argparse module is quite powerful. If you are not familiar with it, skimming through the documentation is helpful. I like to use the ability to define sub-commands and argument groups.
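As an illustration of that last point, here is a small sketch of my own (not from the original article) combining a sub-command with an argument group; the "generate" and "check" sub-commands are made up for the example:

    import argparse

    parser = argparse.ArgumentParser(description="fizzbuzz with sub-commands")
    subparsers = parser.add_subparsers(dest="command")

    # "generate" sub-command, with its own argument group
    generate = subparsers.add_parser("generate", help="generate a sequence")
    g = generate.add_argument_group("fizzbuzz settings")
    g.add_argument("--fizz", metavar="N", default=3, type=int,
                   help="Modulo value for fizz")
    g.add_argument("--buzz", metavar="N", default=5, type=int,
                   help="Modulo value for buzz")
    generate.add_argument("start", type=int, help="Start value")
    generate.add_argument("end", type=int, help="End value")

    # "check" sub-command
    check = subparsers.add_parser("check", help="classify a single value")
    check.add_argument("value", type=int, help="Value to classify")

    options = parser.parse_args()
    print(options)

Each sub-command then gets its own --help output, which keeps the top-level help short.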

Logging

A nice addition to a script is to display information during its execution. The logging module is a good fit for this purpose. First, we define the logger:

    import logging
    import logging.handlers
    import os
    import sys

    logger = logging.getLogger(
        os.path.splitext(os.path.basename(sys.argv[0]))[0])

Then, we make its verbosity configurable: logger.debug() should output something only when a user runs our script with --debug and --silent should mute the logs unless an exceptional condition occurs. For this purpose, we add the following code in parse_args():

    # In parse_args()
    g = parser.add_mutually_exclusive_group()
    g.add_argument("--debug", "-d", action="store_true",
                   default=False,
                   help="enable debugging")
    g.add_argument("--silent", "-s", action="store_true",
                   default=False,
                   help="don't log to console")

We add this function to configure logging:

    def setup_logging(options):
        """Configure logging."""
        root = logging.getLogger("")
        root.setLevel(logging.WARNING)
        logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
        if not options.silent:
            ch = logging.StreamHandler()
            ch.setFormatter(logging.Formatter(
                "%(levelname)s[%(name)s] %(message)s"))
            root.addHandler(ch)

The main body of our script becomes this:

    if __name__ == "__main__":
        options = parse_args()
        setup_logging(options)
        try:
            logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                                 options.end))
            for n in range(options.start, options.end + 1):
                # ...
        except Exception as e:
            logger.exception("%s", e)
            sys.exit(1)
        sys.exit(0)

If the script may run unattended (e.g. from a crontab), we can make it log to syslog:

    def setup_logging(options):
        """Configure logging."""
        root = logging.getLogger("")
        root.setLevel(logging.WARNING)
        logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
        if not options.silent:
            if not sys.stderr.isatty():
                facility = logging.handlers.SysLogHandler.LOG_DAEMON
                sh = logging.handlers.SysLogHandler(address='/dev/log',
                                                    facility=facility)
                sh.setFormatter(logging.Formatter(
                    "{0}[{1}]: %(message)s".format(
                        logger.name,
                        os.getpid())))
                root.addHandler(sh)
            else:
                ch = logging.StreamHandler()
                ch.setFormatter(logging.Formatter(
                    "%(levelname)s[%(name)s] %(message)s"))
                root.addHandler(ch)

For this example, this is a lot of code just to use logger.debug() once, but in a real script, this will come in handy to help users understand how the task is completed.

    $ ./fizzbuzz.py --debug 1 3
    DEBUG[fizzbuzz] compute fizzbuzz from 1 to 3
    1
    2
    fizz

Tests

Unit tests are very useful to ensure an application behaves as intended. It is not common to use them in scripts, but writing a few of them greatly improves their reliability. Let’s turn the code in the inner “for” loop into a function and add some interactive examples of use to its documentation:

    def fizzbuzz(n, fizz, buzz):
        """Compute fizzbuzz nth item given modulo values for fizz and buzz.

        >>> fizzbuzz(5, fizz=3, buzz=5)
        'buzz'
        >>> fizzbuzz(3, fizz=3, buzz=5)
        'fizz'
        >>> fizzbuzz(15, fizz=3, buzz=5)
        'fizzbuzz'
        >>> fizzbuzz(4, fizz=3, buzz=5)
        4
        >>> fizzbuzz(4, fizz=4, buzz=6)
        'fizz'

        """
        if n % fizz == 0 and n % buzz == 0:
            return "fizzbuzz"
        if n % fizz == 0:
            return "fizz"
        if n % buzz == 0:
            return "buzz"
        return n

pytest can ensure the results are correct:1

    $ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
    ============================ test session starts =============================
    platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
    cachedir: .pytest_cache
    rootdir: /home/bernat/code/perso/python-script, inifile:
    plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
    collected 1 item

    fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [100%]

    ========================== 1 passed in 0.05 seconds ==========================

In case of an error, pytest displays a message describing the location and the nature of the failure:

    $ python3 -m pytest -v --doctest-modules ./fizzbuzz.py -k fizzbuzz.fizzbuzz
    ============================ test session starts =============================
    platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
    cachedir: .pytest_cache
    rootdir: /home/bernat/code/perso/python-script, inifile:
    plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
    collected 1 item

    fizzbuzz.py::fizzbuzz.fizzbuzz FAILED                                  [100%]

    ================================== FAILURES ==================================
    ________________________ [doctest] fizzbuzz.fizzbuzz _________________________
    100
    101     >>> fizzbuzz(5, fizz=3, buzz=5)
    102     'buzz'
    103     >>> fizzbuzz(3, fizz=3, buzz=5)
    104     'fizz'
    105     >>> fizzbuzz(15, fizz=3, buzz=5)
    106     'fizzbuzz'
    107     >>> fizzbuzz(4, fizz=3, buzz=5)
    108     4
    109     >>> fizzbuzz(4, fizz=4, buzz=6)
    Expected:
        fizz
    Got:
        4

    /home/bernat/code/perso/python-script/fizzbuzz.py:109: DocTestFailure
    ========================== 1 failed in 0.02 seconds ==========================

We can also write unit tests as code. Let’s suppose we want to test the following function:

    def main(options):
        """Compute a fizzbuzz set of strings and return them as an array."""
        logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                             options.end))
        return [str(fizzbuzz(i, options.fizz, options.buzz))
                for i in range(options.start, options.end+1)]

At the end of the script,2 we add the following unit tests, leveraging pytest’s parametrized test functions:

    # Unit tests
    import pytest                   # noqa: E402
    import shlex                    # noqa: E402


    @pytest.mark.parametrize("args, expected", [
        ("0 0", ["fizzbuzz"]),
        ("3 5", ["fizz", "4", "buzz"]),
        ("9 12", ["fizz", "buzz", "11", "fizz"]),
        ("14 17", ["14", "fizzbuzz", "16", "17"]),
        ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
        ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
    ])
    def test_main(args, expected):
        options = parse_args(shlex.split(args))
        options.debug = True
        options.silent = True
        setup_logging(options)
        assert main(options) == expected

The test function runs once for each of the provided parameters. The args part is used as input for the parse_args() function to get the appropriate options we need to pass to the main() function. The expected part is compared to the result of the main() function. When everything works as expected, pytest says:

    $ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
    ============================ test session starts =============================
    platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
    cachedir: .pytest_cache
    rootdir: /home/bernat/code/perso/python-script, inifile:
    plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
    collected 7 items

    fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [ 14%]
    fizzbuzz.py::test_main[0 0-expected0] PASSED                           [ 28%]
    fizzbuzz.py::test_main[3 5-expected1] PASSED                           [ 42%]
    fizzbuzz.py::test_main[9 12-expected2] PASSED                          [ 57%]
    fizzbuzz.py::test_main[14 17-expected3] PASSED                         [ 71%]
    fizzbuzz.py::test_main[14 17 --fizz=2-expected4] PASSED                [ 85%]
    fizzbuzz.py::test_main[17 20 --buzz=10-expected5] PASSED               [100%]

    ========================== 7 passed in 0.03 seconds ==========================

When an error occurs, pytest provides a useful assessment of the situation:

    $ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
    [...]
    ================================== FAILURES ==================================
    __________________________ test_main[0 0-expected0] __________________________

    args = '0 0', expected = ['0']

        @pytest.mark.parametrize("args, expected", [
            ("0 0", ["0"]),
            ("3 5", ["fizz", "4", "buzz"]),
            ("9 12", ["fizz", "buzz", "11", "fizz"]),
            ("14 17", ["14", "fizzbuzz", "16", "17"]),
            ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
            ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
        ])
        def test_main(args, expected):
            options = parse_args(shlex.split(args))
            options.debug = True
            options.silent = True
            setup_logging(options)
    >       assert main(options) == expected
    E       AssertionError: assert ['fizzbuzz'] == ['0']
    E         At index 0 diff: 'fizzbuzz' != '0'
    E         Full diff:
    E         - ['fizzbuzz']
    E         + ['0']

    fizzbuzz.py:160: AssertionError
    ----------------------------- Captured log call ------------------------------
    fizzbuzz.py                125 DEBUG    compute fizzbuzz from 0 to 0
    ===================== 1 failed, 6 passed in 0.05 seconds =====================

The call to logger.debug() is included in the output. This is another good reason to use the logging feature! If you want to know more about the wonderful features of pytest, have a look at “Testing network software with pytest and Linux namespaces.”

To sum up, enhancing a Python script to make it more sustainable can be done in four steps:

  1. add documentation at the top,
  2. use the argparse module to document the different parameters,
  3. use the logging module to log details about progress, and
  4. add some unit tests.

You can find the complete example on GitHub and use it as a template!

Update (2019.06)

There are some interesting threads about this article on Lobsters and Reddit. While the addition of documentation and command-line arguments seems to be well-received, logs and tests are sometimes reported as too verbose. Dan Connolly wrote “Practical production python scripts” as an answer to this post.

  1. This requires the script name to end with .py. I dislike appending an extension to a script name: the language is a technical detail that shouldn’t be exposed to the user. However, it seems to be the easiest way to let test runners, like pytest, discover the enclosed tests. ↩︎

  2. Because the script ends with a call to sys.exit(), when invoked normally, the additional code for tests will not be executed. This ensures pytest is not needed to run the script. ↩︎

Hideki Yamane: Debian 10 "buster" release party @Tokyo (7/7)

Saturday 20th of July 2019 03:01:11 PM

We ate a delicious cake to celebrate the Debian 10 "buster" release, at a party in Tokyo (my employer provided the venue, cake and wine. Thanks to SIOS Technology, Inc.! :)

Hope we'll do the same two years from now for "bullseye"


Sean Whitton: Debian Policy call for participation -- July 2019

Saturday 20th of July 2019 12:37:39 PM

Debian Policy started off the Debian 11 “bullseye” release cycle with the release of Debian Policy 4.4.0.0. Please consider helping us fix more bugs and prepare more releases (whether or not you’re at DebCamp19!).

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Ritesh Raj Sarraf: Cross Architecture Linux Containers

Friday 19th of July 2019 04:04:02 PM
Linux and ARM

With more ARM based devices on the market, and with them getting more powerful every day, it is more and more common to see ARM images for your favorite Linux distribution. Of them, Debian has become the default choice for individuals and companies to base their work on. It must have to do with Debian’s long history of trying to support many more architectures than the rest of the distributions. Debian also tends to have a much wider user/developer mindshare even though it does not have direct backing from any of the big Linux distribution companies.

Some of my work involves doing packaging and integration work which reflects on all architectures and image types; ARM included. So having the respective environment readily available is really important to get work done quicker.

I still recollect back in 2004, when I was much newer to enterprise Linux and working at a big computer hardware company, hearing about the Itanium 64 architecture. Back then, trying out anything other than x86 would mean you needed access to physical hardware. Or be a DD and have shell access to the Debian machines.

With Linux Virtualization, a lot seems to have improved over time.

Linux Virtualization

With a decently powered processor with virtualization support, you can emulate a lot of architectures that Linux supports.

Linux has many virtualization options but the main dominant ones are KVM/Qemu, Xen and VirtualBox. Qemu is the most feature-rich virtualization choice on Linux with a wide range of architectures that it can emulate.

In the case of ARM, things are still a bit tricky as the hardware definition is tightly coupled. Emulating a device type under a virtualized Qemu is not as straightforward as on the x86 architecture. Thankfully, libvirt provides a generic machine type called virt. For basic cross architecture tasks, this should be enough.

virt board profile under libvirt

But while virtualization is a nice progression, it is not always an optimal one. For one, it needs good device virtualization support, which can be tricky in the ARM context. Second, it can be (very) slow at times.

And unless you are doing low-level hardware specific work, you can look for an alternative in Linux containers.

Linux Containers

So this is nothing new now. There is lots and lots of buzz around containers already. There are many different implementations across platforms supporting a similar concept: from good old chroot (with limited functionality) and jails (on BSD), to well marketed products like Docker, LXC and systemd-nspawn. There are also some, like firejail, targeting specific use cases.

As long as you do not have a tight dependency on the hardware or a dependency on specific parts of the Linux kernel (like when I once explored the possibility of running open-iscsi in a containerized environment instead), containers are a quick way to get an equivalent environment. In particular, things like process, namespace and network separation are awesome, helping me concentrate on the work rather than putting the focus on Host <=> Guest issues.

Given how fast work can be accomplished with containers, I have been wanting to explore the possibility of building container images for ARM and other architectures that I care about.

The good thing is that architecture virtualization is offered through Qemu, and the same tool also provides the corresponding architecture emulation. So features and fixes have a higher chance of parity, as they are being served from the same tool.

systemd-nspawn

I haven’t explored all the container implementations that Linux has to offer. Initially, I used LXC for a while. But these days, for my work it is docker, while my personal preference lies with systemd-nspawn.

Part of the reason is simply because I have grown more familiar with systemd, given it is the housekeeper for my operating system now. And also, so far, I like most of what systemd offers.

Getting Cross Architecture Containers under Debian GNU/Linux
  • Use qemu-user-static for emulation
  • Generate cross architecture chroot images with qemu-debootstrap
  • Import those as sub-volumes under systemd-nspawn
  • Set a template for your containers and other misc stuff (a rough sketch of these steps follows below)
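A rough sketch of those steps on a Debian host follows; the suite, mirror and the assumption that /var/lib/machines sits on btrfs are illustrative and may need adapting (the container name matches one from the listing further below).

# Emulation, bootstrap and container tooling
sudo apt install qemu-user-static debootstrap systemd-container

# A btrfs subvolume lets machinectl clone the image cheaply later
sudo btrfs subvolume create /var/lib/machines/DebSidArm64

# Build an arm64 sid rootfs; qemu-debootstrap copies qemu-aarch64-static
# into the chroot so foreign binaries run through binfmt_misc
sudo qemu-debootstrap --arch=arm64 sid /var/lib/machines/DebSidArm64 http://deb.debian.org/debian

# Boot it as a container, or clone it first and treat the original as a template
sudo systemd-nspawn -D /var/lib/machines/DebSidArm64 -b
sudo machinectl clone DebSidArm64 DebSidArm64-scratch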
Glitches

Not everything works perfectly, but most of it does.

Here's my container list. Subvolume-backed containers do help de-duplicate data and save some space. They are also very quick when creating a clone of a container.

rrs@priyasi:~$ machinectl list-images
NAME                 TYPE      RO USAGE CREATED                     MODIFIED
2019                 subvolume no n/a   Mon 2019-06-10 09:18:26 IST n/a
SDK1812              subvolume no n/a   Mon 2018-10-15 12:45:39 IST n/a
BusterArm64          subvolume no n/a   Mon 2019-06-03 14:49:40 IST n/a
DebSidArm64          subvolume no n/a   Mon 2019-06-03 14:56:42 IST n/a
DebSidArmhf          subvolume no n/a   Mon 2018-07-23 21:18:42 IST n/a
DebSidMips           subvolume no n/a   Sat 2019-06-01 08:31:34 IST n/a
DebianJessieTemplate subvolume no n/a   Mon 2018-07-23 21:18:54 IST n/a
DebianSidTemplate    subvolume no n/a   Mon 2018-07-23 21:18:05 IST n/a
aptVerifySigsDebSid  subvolume no n/a   Mon 2018-07-23 21:19:04 IST n/a
jenkins-builder      subvolume no n/a   Tue 2018-11-27 20:11:42 IST n/a
jenkins-builder-new  subvolume no n/a   Tue 2019-04-16 10:13:43 IST n/a
opensuse             subvolume no n/a   Mon 2018-07-23 21:18:34 IST n/a

12 images listed.

Jonathan McDowell: Upgrading my home server

Friday 19th of July 2019 08:06:12 AM

At the end of last year I decided it was time to upgrade my home server. I built it back in 2013 as an all-in-one device to be my only always-on machine, with some attempt towards low power consumption. It was starting to creak a bit - the motherboard is limited to 16G RAM and the i3-3220T is somewhat ancient (though it has served me well). So it was time to think about something more up to date. Additionally since then my needs have changed; my internet connection is VDSL2 (BT Fibre-to-the-Cabinet) so I have a BT HomeHub 5 running OpenWRT to drive that and provide core routing/firewalling. My wifi is provided by a pair of UniFi APs at opposite ends of the house. I also decided I could use something low power to run Kodi and access my ripped DVD collection, rather than having the main machine in the living room. That meant what I wanted was much closer to just a standard server rather than having any special needs.

The first thing to consider was a case. My ADSL terminates in what I call the “comms room” - it has the electricity meter / distribution board and gas boiler, as well as being where one of the UniFi’s lives and where the downstairs ethernet terminates. In short it’s the right room for a server to live in. I don’t want a full rack, however, and ideally wanted something that could sit alongside the meter cabinet without protruding from the wall any further. A tower case would have worked, but only if turned sideways, which would have made it a bit awkward to access. I tried in vain to find a wall mount case with side access that was shallow enough, but failed. However in the process I discovered a 4U vertical wall mount. This was about the same depth as the meter cabinet, so an ideal choice. I paired it with a basic 2U case from X-Case, giving me a couple of spare U should I decide I want another rack-mount machine or two.

My old machine has 2 3.5” hotswap drive bays; this has been useful in the past when a drive failed even just to avoid having to take the machine apart. I still wanted to aim for low power consumption, so 2 drives is enough. I started with a pair of cheap 5.25” drive bay to dual 2.5” + 3.5” hotswap bay devices, but the rear SATA connectors ended up being very fragile and breaking off, so I bit the bullet and bought a SilverStone FS303. This takes up 2 5.25” bays and provides 3 x 3.5” hotswap bays. It’s well constructed and the extra bay has already turned out useful when a drive started to fail and I was able to put the replacement in and resync the RAID set before having to remove the old drive.

Now that I had the externals sorted, I needed to think about what to put inside. The only thing coming from the old machine were the hard disks (a 4T Seagate and a 6T WD RED, 4T of software RAID1 and 2T of unRAIDed backup space), so everything else was up for discussion. I toyed with an Intel i7-8700T - 6 cores in 35W. AMD have a stronger offering these days though, and the AMD Ryzen 2700E with 8 cores in 45W seemed like a good option for an extra 10W. On top of that, several of the recent speculative execution exploits don't seem to affect AMD chips (or more recent Intel CPUs, but they weren't out at the time in a low power format). Sadly the 2700E proved to be made of unobtanium; I sat with it on backorder for nearly 3 months before giving up and ordering an AMD Ryzen 2700 that was on offer. This is rated at up to 65W, but I figured I could try underclocking if necessary, or at least tweak the cpufreq settings.

Next up was a motherboard. The 2U case is short, but allows for MicroATX, an improvement over the MiniITX my last case needs. One of the things constraining me with the old machine was that it maxed out at 16G RAM, so I wanted something that would take more. It turns out there are a number of Socket AM4 MicroATX boards that will take 64G over 4 DIMMs. I chose an ASRock B450M Pro4, which had a couple of good reviews and seemed to have all the bits I wanted. It’s been decent so far - including having some interactions with ASRock support when I initially put an AMD 240GE (while waiting for the 2700E that was never coming) in it. I like to think of BIOS 3.10 as mine ;).

For RAM I went with a Corsair CMK32GX4M2A2400C14 Vengeance LPX 32GB (2 x 16GB) set. I’m sure I should care more about RAM but it was decently priced from a vendor I trust. At some point I’ll buy another set to bring the board up to the full 64GB, but for now this is twice what the old machine had.

Finally I decided to splash out on some SSD. The spinning rust is primarily for media (music + video shared out to Kodi etc) and backups, but I wanted to move my containers (home automation, UniFi controller, various others) over to SSD. I talked myself into a pair of Corsair MP510 960GB NVMe M.2 drives. One went on the motherboard slot and I had to buy a low profile PCIe adaptor for the other (of course they’re RAID1ed). They fly; initially I clocked them in at about 1.5GB/s until I realised the one in the add-in card was only using 2 PCIe lanes. Once I rejigged things so it had all 4 it can use I was up to 2.3GB/s. Impressive.

You'll note I haven't mentioned a graphics card here. I ended up with a cheap NVidia off eBay to get things going, but this is a server in a comms room and removing the graphics card saves me at least 10W of power (it was also the reason the NVMe drive only had 2 lanes). I couldn't find an AM4 motherboard that did serial console, but the B450M Pro4 is happy to boot without a graphics card present, and I have GRUB onward configured to do serial console just in case.
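The exact settings are not spelled out above, but a typical /etc/default/grub serial console setup looks something like the following; the serial unit and baud rate are assumptions, and update-grub needs to be run afterwards.

# /etc/default/grub (illustrative values)
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
# Hand the same console on to the kernel
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"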

And the power consumption? The previous machine idled at around 50W, getting to maybe 60-65W under load. I’ve cheated with the new machine; because the spinning rust is not generally in use it’s configured to spin down after 20 minutes idle. As a result the machine idles at around 36W. It hits 50W when the drives spin up, so for 8 cores compared to 2 we’re still sitting in the same ballpark. That’s good, because that’s the general case - idle here means Home Assistant operational, the UniFi controller going, the syslog container logging and so on. However the new server peaks considerably higher; if the drives are spun up and I compile a kernel I can hit 120W. However the compilation takes less than a quarter of the time - the machine is significantly faster than the old one, and even without taking advantage of the SSDs idles at roughly the same power level. I’d call that an overall win.
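The 20-minute spin-down mentioned above can be configured in several ways; one common approach, offered here only as an assumption (the post does not say which tool is used, and /dev/sda is a placeholder), is hdparm.

# -S 240 means 240 * 5 seconds = 1200 s, i.e. spin down after 20 minutes of idle
hdparm -S 240 /dev/sda
# On Debian this can be made persistent by setting "spindown_time = 240"
# in the drive's stanza in /etc/hdparm.conf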

More in Tux Machines

Late Coverage of Confidential Computing Consortium

  • Microsoft Partners With Google, Intel, And Others To Form Data Protection Consortium

    The software maker joined Google Cloud, Intel, IBM, Alibaba, Arm, Baidu, Red Hat, Swisscom, and Tencent to establish the Confidential Computing Consortium, a group committed to providing better private data protection, promoting the use of confidential computing, and advancing open source standards among members of the technology community.

  • #OSSUMMIT: Confidential Computing Consortium Takes Shape to Enable Secure Collaboration

    At the Open Source Summit in San Diego, California on August 21, the Linux Foundation announced the formation of the Confidential Computing Consortium. Confidential computing is an approach using encrypted data that enables organizations to share and collaborate, while still maintaining privacy. Among the initial backers of the effort are Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

    “The context of confidential computing is that we can actually use the data encrypted while programs are working on it,” John Gossman, distinguished engineer at Microsoft, said during a keynote presentation announcing the new effort.

    Initially there are three projects that are part of the Confidential Computing Consortium, with an expectation that more will be added over time. Microsoft has contributed its Open Enclave SDK, Red Hat is contributing the Enarx project for Trusted Execution Environments and Intel is contributing its Software Guard Extensions (SGX) software development kit. Lorie Wigle, general manager, platform security product management at Intel, explained that Intel has had a capability built into some of its processors called software guard which essentially provides a hardware-based capability for protecting an area of memory.

Graphics: Mesa Radeon Vulkan Driver and SPIR-V Support For OpenGL 4.6

  • Mesa Radeon Vulkan Driver Sees ~30% Performance Boost For APUs

    Mesa's RADV Radeon Vulkan driver just saw a big performance optimization land to benefit APUs like Raven Ridge and Picasso, that is, systems with no dedicated video memory. The change by Feral's Alex Smith puts the uncached GTT type at a higher index than the visible vRAM type for these configurations without dedicated vRAM, namely APUs.

  • Intel Iris Gallium3D Is Close With SPIR-V Support For OpenGL 4.6

    This week saw OpenGL 4.6 support finally merged for Intel's i965 Mesa driver; it will be part of the upcoming Mesa 19.2 release. Not landed yet but coming soon is the newer Intel "Iris" Gallium3D driver also seeing OpenGL 4.6 support. Iris Gallium3D has been at OpenGL 4.5 and is quite near OpenGL 4.6 as well, thanks to the shared NIR support and more with the rest of the Intel open-source graphics stack. It's looking less likely that OpenGL 4.6 support will be back-ported to Mesa 19.2 for Iris, but we'll see.

The GPD MicroPC in 3 Minutes [Video Review]

In it I tackle the GPD MicroPC with Ubuntu MATE 19.10. I touch on the same points made in my full text review, but with the added bonus of moving images to illustrate my points, rather than words.

Read more

Also: WiringPi - Deprecated

today's howtos