Planet Debian - https://planet.debian.org/

Kai-Chung Yan: My Open-Source Activities from November to December 2018

8 hours 7 min ago

Welcome, reader! This is an infrequently updated post series that logs my activities within open-source communities. I want my work to be as transparent as possible in order to promote open governance, a policy feared even by some “mighty” nations.

I do not work on open-source full-time, although I sincerely would love to. Therefore the posts may cover a ridiculously long period (even a whole year).

Debian

Debian is a general-purpose Linux distribution that is widely used on the planet. I am a Debian Developer who works on packages related to Android SDK and the Java ecosystem.

After a month of hard work, I finally finished the packaging of android-platform-art. The tricky part was that this is the first of our Android SDK packages that fails to build with GCC, which I only realized after patching an awful lot of code.

Other activities include:

Voidbuilder Release 0.3.0

Voidbuilder is a simple program that mimics pbuilder but uses Docker as the container engine. I have been using it privately and am quite satisfied.

I released 0.3.0 in December. A notable change is that it now prints the build result in detail, just like sbuild does.

Other Activities

Pushed a patch to AOSP that removes a SUN API usage for Base64. Now let’s see if it will get accepted in 10 years…

Russ Allbery: Review: New York 2140

Monday 21st of January 2019 03:09:00 AM

Review: New York 2140, by Kim Stanley Robinson

Publisher: Orbit
Copyright: March 2017
Printing: March 2018
ISBN: 0-316-26233-1
Format: Kindle
Pages: 624

About forty years in our future, world-wide sea levels suddenly rose ten feet over the course of a decade due to collapse of polar ice, creating one of the largest disasters in history. It was enough to get people to finally take greenhouse effects and the risks of fossil fuels seriously, but too late to prevent the Second Pulse: a collapse of Antarctic ice shelves that raised global ocean levels another forty feet. Now, about fifty years after the Second Pulse, New York is still standing but half-drowned. The northern half of Manhattan Island is covered with newly-constructed superscrapers. The skyscrapers in the southern half, anchored in bedrock, survive in a precarious new world of canals, underwater floors, commuter boats, high-tech sealants, and murky legal structures.

The Met Life Tower is one of those surviving buildings and is home to the cast of this novel: two quants (programmers and mathematicians who work on financial algorithms) living in temporary housing on the farm floor, the morose building super, the social worker who has headed the building co-op board for decades, a chief inspector for the NYPD, a derivatives trader who runs a housing index for the half-drowned intertidal areas, a streaming video star who takes on wildlife preservation projects in her dirigible Assisted Migration, and a couple of orphan street kids (in this world, water rats) endlessly looking for their next adventure. The characters start the book engrossed in their day-to-day lives, which have settled into a workable equilibrium. But they're each about to play a role in another great disruption in economic life.

This is my sixth try at a Kim Stanley Robinson novel, and I've yet to find a book I really liked. It may be time to give up.

I really want to like Robinson's writing. He's writing novels about an intersection of ecology and politics that I find inherently interesting, particularly since he emphasizes people's ability to adapt without understating the magnitude of future challenges. I think he's getting better at characterization (more on that in a moment). But this sort of book, particularly the way Robinson writes it, elevates the shape of the future world to the role of protagonist, which means it has to hold up to close scrutiny. And for me this didn't.

As is typical in Robinson novels, New York 2140 opens with an extended meander through the everyday lives of multiple protagonists. This is laying the groundwork for pieces of later plot, but only slowly. It's primarily a showcase for Robinson's future extrapolation, here made more obvious by a viewpoint "character" whose chapters are straight infodumps about future history. And that extrapolated world is odd and unconvincing in ways that kept throwing me out of the story. The details of environmental catastrophe and adaptation aren't the problem; I suspect those are the best-researched parts of the book, and they seemed at least plausible to me. It's politics and economics that get Robinson into trouble.

For example, racism is apparently not a thing that exists in 2140 New York on any systematic scale. We're at most fifty years past what would be the greatest refugee crisis in the history of humanity, one that would have caused vast internal dislocation in the United States let alone in the rest of the world. Migrant and refugee crises in Syria and Central America in the current day that are orders of magnitude less severe set off paroxysms of racist xenophobia. And yet, this plays no role whatsoever in the politics of this book.

It's not that the main characters wouldn't have noticed. One is a social worker who works specifically with refugees on housing, and whose other job is running a housing co-op. In our world, racism is very near the center of US housing policy. Another, the police inspector, is a tall black woman from a poor background, but the only interaction she has with racism in the whole book is a brief and odd mention of how she might appear to a private security mercenary that she faces down. It seriously tries my suspension of disbelief that racism would not be a constant irritant, or worse, through her entire career.

Racism doesn't need to be a central topic of every book, and sometimes there's a place for science fiction novels that intentionally write racism out as an optimistic statement or as momentary relief. But the rest of this book seems focused on a realistic forward projection, not on that sort of deep social divergence. Robinson does not provide even a hint of the sort of social change that would be required for racism to disappear in a country founded on a racial caste system, particularly given 100 years of disruptive emigration crises of the type that have, in every past era of US history, substantially increased systematic racism.

In a similar omission, the political organization of this world is decidedly strange. For most of the book, politics are hyperlocal, tightly focused on organizations and communities in a tiny portion of New York City. The federal government is passive, distant, ignored, and nearly powerless. This is something that could happen in some future worlds, but this sort of government passivity is an uneven fit with the kind of catastrophe that Robinson is projecting. Similar catastrophes in human history, particularly in the middle of a crisis of mass migration, are far more likely to strengthen aggressive nationalists who will give voice to fear and xenophobia and provide a rallying point.

Every future science fiction novel is, of course, really about the present or the past in some way. It becomes clear during New York 2140 that, despite the ecological frame, this book is primarily concerned with the 2008 financial crisis. That makes some sense of the federal government in this book: Robinson is importing the domestic economic policy of Bush and Obama to make a point about the crisis they bungled. Based on publication date, he probably also wrote this book before Trump's election. But given the past two years, not to mention world history, these apathetic libertarian politics seem weirdly mismatched with the future history Robinson postulates.

There are other problems, such as Robinson's narrative voice convincing me that he doesn't understand how sovereign debt works, and as a result I kept arguing with the book instead of being drawn into the plot. That's a shame, since this is some of the best character work Robinson has done. It's still painfully slow; about halfway through the book, I wasn't sure I liked anyone except Vlade, the building super, and I was quite certain I hated Franklin, the derivatives trader obsessed with seducing a woman. But Robinson pulls off a fairly impressive pivot by the end of the book. Charlotte, the social worker and co-op president who determinedly likes all of the characters, turns out to be a better judge of character than I was. I never exactly liked Franklin, but Robinson made me believe in his change, which takes some doing.

Amelia, the streaming video star, deserves a special mention due to some subtle but perceptive bits of characterization. She starts out as a stereotype whose popularity has a lot to do with her tendency to lose her clothes, and I wish Robinson hadn't reinforced that idea. (I suspect he was thinking of the (in)famous PETA commercials, but this stereotype is a serious problem for real-world female streamers.) But throughout the story Amelia is so determinedly herself that she transcends that unfortunate start. The moment I started really liking her was her advertisement for Charlotte, which is both perfectly in character and more sophisticated than it looks. And her character interactions and personal revelations at the very end of the book made me want to read more about her.

There were moments when I really liked this book. The plot finally kicks in about 70% of the way through, much too late but still with considerable effectiveness. This is about the time when I started to warm to more of the characters, and I thought I'd finally found a Robinson book I could recommend. But then Robinson undermined his own ending: he seemed so focused on telling the reader that life goes on and that any segment of history is partial and incomplete that he didn't give me the catharsis I wanted after a harrowing event and the clear villainy of some of the players. For a book that's largely about confronting the downsides of capitalism, it's weirdly non-confrontational. What triumph the characters do gain is mostly told, narrated away in yet another infodump, rather than shown. It left me feeling profoundly unsatisfied.

There's always enough meat to a Kim Stanley Robinson novel that I understand why people keep nominating them for awards, but I come away vaguely dissatisfied with the experience. I think some people will enjoy this, particularly if you don't get as snarled as I was in the gaps left in Robinson's political tale. He is clearly getting better on characterization, despite the exceptionally slow start. But the story still doesn't have enough power, or enough catharsis, or enough thoughtful accuracy for me to recommend it.

Rating: 6 out of 10

Kurt Kremitzki: Free Software Activities in December 2018

Monday 21st of January 2019 12:19:53 AM

Hello again for another of my monthly updates on my work on Debian Science and the FreeCAD ecosystem.

There are only a few items to announce since I was mostly enjoying my holidays, but several important things were accomplished this month. Also, since there's not much time left before the release of Debian 10, there's some thinking to be done about what I'll be working on in the next few months.

gmsh bugfix; no gmsh 4 yet

At the beginning of the month, I uploaded gmsh 3.0.6+dfsg1-4. This had a patch submitted by Joost van Zwieten (thanks!) to fix Debian bug #904946 which was preventing gmsh usage in FreeCAD, as well as adding an autopkgtest to make sure that behavior remains.

New Coin3D transition; Pivy uploaded!

Near the middle of the month, Leo Palomo-Avellaneda, Anton Gladky, and I finished the transition for the coin3 package, which is a scene graph library and high-level wrapper of OpenGL. The new version is a pre-release of coin3 4.0.0, which adds CMake support. It also fixes Debian bug #874727, which caused FreeCAD to segfault when importing an SVG. FreeCAD also uses pivy, a Python wrapper for coin, as a runtime dependency, and completing this transition has cleared the last blocker for a Python 3 FreeCAD package, so thanks to Leo and Anton!

New release for med-fichier (not by me)

Another FreeCAD dependency had a new release this month, med-fichier 4.0.0. This software is developed by Électricité de France and built on the HDF5 library and file format but is specialized for mesh data. It is also a dependency of gmsh which introduced some difficulty for the gmsh package.

OpenFOAM upstream switch; from openfoam.org to openfoam.com

The final noteworthy item for this month was an interesting bit of correspondence I received concerning the OpenFOAM package. As I mentioned in previous posts, the current OpenFOAM version in Debian is 4.x, and I had worked on updating the package to OpenFOAM 6.x. My packaging was working and complete last summer, but for some inexplicable reason it stopped building in late summer, started building again for about a week in September, and then stopped again. I would really like an up-to-date OpenFOAM in Debian 10, so when I received an email from the people at openfoam.com about packaging their version, I was very intrigued. You see, the version currently in Debian is from openfoam.org. Besides the TLD difference, there is a bit of history between the two versions, but for end users there should be no major difference. If you're interested in more background, you can consult this video by József Nagy.

As a result, it seems likely that the OpenFOAM package will be switching over to the openfoam.com version soon. I've already successfully built the latest OpenFOAM from this source, version 1812, and plan to submit it soon.

Thanks to my sponsors

This work is made possible in part by contributions from readers like you! You can send moral support my way via Twitter @thekurtwk. Financial support is also appreciated at any level and possible on several platforms: Patreon, Liberapay, and PayPal.

Carl Chenet: How I Switched Working From Office Full Time to Remote 3 Days A Week

Sunday 20th of January 2019 11:00:31 PM

Remote work is not for everyone; it depends a lot on your tastes. But it's definitely for me. Here is why and how I switched from working full time in an office to 3 days of remote work a week.

TL;DR: After working from home for a few months, I was convinced remote work was my thing. I had to look for a new freelance contract that included remote work, and I had to turn down a lot of good offers that ruled it out. At least in my country (France), finding remote work, even part time, is still difficult; it greatly depends on the company culture. I'm lucky enough that my current client promotes remote work.

Foreword

If you follow my blog on a regular basis (RSS feed if you like it), you know I've been working remotely for a while, starting with one day a week when I was working part time, in order to be more productive on my side projects.

But this article will explain why and how I decided to start working remotely and what kind of choices – professional and personal – I had to make in order to achieve this goal.

Why working remotely suits me

I’m a freelance since 2012 and usually work at the office of my clients.  I had 2 intense years some time ago and it was so intense I needed a break.That’s not optimal because, as you know, a freelance does not earn money when he does not work. No paid vacation. Moreover the freelancer can not count on any unemployment compensation (at least in France). I work on side projects since 2015 but I’m far from being self-sufficient. After my previous mission, I took a 6 month break and had important personal finance issue after that. You guess. So it’s obvious if I want to remain freelance, I need to work on a regular basis.

But.

Paris is quite a crowded city. Public transportation is overcrowded and some subway lines are too old. It generates a lot of stress for everyone, transit workers and riders alike. When you go to work, especially if you live in the Paris suburbs, a chaotic ride from home usually means you're already stressed out before you've even started working. You also waste between 45 minutes and 1.5 hours on each ride, which adds up to between 1.5 and 3 hours each day!

Given that I work on several side projects, helping communities grow and developing online services, I need time, even my lunch break. I'm not a workaholic: I love playing squash, watching movies, reading and playing poker, so I'm not going to work every day until 2 or 3am.

Playing Squash during lunch break on remote days (Paris, Charléty stadium)

Once my daily job is finished, I need my free time. And don’t talk to me about waking up at 5am. Tried that. Once. Never again.

My main job is system architecture. I mostly work on complex issues, like the scalability of high-traffic websites or migrating old platforms to the cloud. I need peace and long periods of time without interruptions in order to think.

Open-plan offices are a waste of both my time and resources: I can't manage pointless interruptions at the office as efficiently as when I work remotely (I just don't reply). Some jobs need a lot of interaction with others (managers), others need silence and peace; it's a fact. Meetings are sometimes useful, but I don't need to be at the office 5 days a week for them, and online tools are now quite efficient for short meetings.

How I switched

During my 6-month break, I worked on my side projects from home. I knew I was ready.

When I started to search for a new contract, I was looking for companies that allow remote work. In France it's not so common, and remote workers are sometimes seen as slackers. Given this reputation, I had to stand firm about it while applying.

Another issue comes from recruiters. Some of them are over-optimistic about remote work and tell candidates that it will be allowed soon at their company. That is often not the case, and even when it is, it can take months or years. Moreover, remote work is a culture; it's not so simple to set up. If the company culture is not ready, it's only a matter of time before it gets cancelled. Yahoo! famously did just that, banning work from home.

I finally chose the company that was the best fit for me. Given the price of office space in Paris and the fact that the company is still growing, they had to encourage working from home, which was a really good point for me. During the interview, my boss told me that members of the team were already working remotely on a regular basis, 1 or 2 days a week.

Of course I started slowly: for some months I worked full time at the office, then began with only one day a week at home. I was not bored to death working at home. I was still efficient, even more so on complex tasks: fewer interruptions, less noise, and I wasn't forced to wear headphones any more.

At home, drinking tea all day long

I soon started to work from home 2 days a week. Working remotely is written into the DNA of this company, and everybody is easily reachable. Being quite self-sufficient on my projects, I mostly go to the office to enjoy time with the team and for meetings. From a technical point of view, being at home or at the office is exactly the same thing: we use laptops and a VPN, and most of the company tools are Software as a Service (SaaS), reachable from anywhere in the world.

These days, depending on business or team meetings, I work up to 3 days a week from home, and I enjoy doing so.

To be continued

Working remotely is a great asset some positions can offer. It's definitely not for all kinds of jobs, but it addresses some real issues like commuting and allows better personal time management. I guess the taste for remote work differs from person to person, but in my case it suits my lifestyle, and I'll make it a requirement for my next jobs.

About The Author

Carl Chenet, Free Software Indie Hacker and founder of LinuxJobs.io, a job board dedicated to Free and Open Source Software jobs in the US (soon to be released).

Follow Me On Social Networks

 

Sune Vuorela: KookBook 0.2.0 available – now manage your cooking recipes better

Sunday 20th of January 2019 07:20:05 PM

I got a bit of traction on KookBook and decided to fix a few bugs, mostly in the touch client, and add some features.

Get it here: kookbook-0.2.0.tar.xz

KookBook is now also available in Debian, thanks to Stuart Prescott

KRecipe converter
Some people have a large recipe collection in KRecipe and would like to try out KookBook. I wrote a conversion Python script, which is now available. It works in the “current directory”: it reads the krecipe database from there and outputs KookBook markdown files in the same repository. It has so far been tried on 1 database.

Bug fixes

  • Fix install of touch client
  • Fixes in desktop files
  • Fixes in touch client’s file open dialog
  • Touch client now shows images in recipes
  • You could end up with no dock widgets and no toolbar and no way to recover in the desktop client. This is now fixed
  • Build fixes

Future
Some people have started talking about maybe translating the interface. I might look into that in the future.

And I wouldn’t be sad if some icon artists provided me with a icon slightly better than the knife I drew. Feel free to contact me if that’s the case.

Happy kooking!

Clint Adams: Using hkt findpaths in a more boring way

Saturday 19th of January 2019 03:13:40 PM

Did dkg certify his new key with something I've certified?

hkt findpaths --keyring ~/.gnupg/pubring.gpg '' \
    2100A32C46F895AF3A08783AF6D3495BB0AE9A02 \
    C4BC2DDB38CCE96485EBE9C2F20691179038E5C6 2>/dev/null
(3,[46,31,257])
(31,0EE5BE979282D80B9F7540F1CCD2ED94D21739E9)
(46,2100A32C46F895AF3A08783AF6D3495BB0AE9A02)
(257,C4BC2DDB38CCE96485EBE9C2F20691179038E5C6)

I (№ 46) have certified № 31 (0EE5BE979282D80B9F7540F1CCD2ED94D21739E9) which has certified № 257 (C4BC2DDB38CCE96485EBE9C2F20691179038E5C6).

Posted on 2019-01-19 Tags: quanks

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2019

Saturday 19th of January 2019 07:49:53 AM
Update

I've scrapped my first try at a new OpenPGP certificate for 2019 (the one i published yesterday). See the history discussion at the bottom of this post for details. This blogpost has been updated to reflect my revised attempt.

2019 OpenPGP transition (try 2)

My old OpenPGP certificate will be 12 years old later this year. I'm transitioning to a new OpenPGP certificate.

You might know my old OpenPGP certificate as:

pub   rsa4096 2007-06-02 [SC] [expires: 2019-06-29]
      0EE5BE979282D80B9F7540F1CCD2ED94D21739E9
uid   Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid   Daniel Kahn Gillmor <dkg@debian.org>

My new OpenPGP certificate is:

pub   ed25519 2019-01-19 [C] [expires: 2021-01-18]
      C4BC2DDB38CCE96485EBE9C2F20691179038E5C6
uid   Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid   Daniel Kahn Gillmor <dkg@debian.org>

If you've certified my old certificate, I'd appreciate your certifying my new one. Please do confirm by contacting me via whatever channels you think are most appropriate (including in-person if you want to share food or drink with me!) before you re-certify, of course.

I've published the new certificate to the SKS keyserver network, as well as to my personal website -- you can fetch it like this:

wget -O- https://dkg.fifthhorseman.net/dkg-openpgp.key | gpg --import

A copy of this transition statement signed by both the old and new certificates is available on my website, and you can also find further explanation about technical details, choices, and rationale on my blog.

Technical details

I've made a few decisions differently about this certificate:

Ed25519 and Curve25519 for Public Key Material

I've moved from 4096-bit RSA public keys to the Bernstein elliptic curve 25519 for all my public key material (EdDSA for signing, certification, and authentication, and Curve25519 for encryption). While 4096-bit RSA is likely to be marginally stronger cryptographically than curve 25519, 25519 still appears to be significantly stronger than any cryptanalytic attack known to the public.

Additionally, elliptic curve keys and the signatures associated with them are tiny compared to 4096-bit RSA. I certified my new cert with my old one, and well over half of the new certificate is just certifications from the old key because they are so large.

This size advantage makes it easier for me to ship the public key material (and signatures from it) in places that would be more awkward otherwise. See the discussion about Autocrypt below.

Split out ACLU identity

Note that my old certificate included some additional identities, including job-specific e-mail addresses. I've split out my job-specific cryptographic credentials to a different OpenPGP certificate entirely. If you want to mail me at dkg@aclu.org, you can use the certificate with fingerprint 888E6BEAC41959269EAA177F138F5AB68615C560 (which is also published on my work bio page).

This is in part because the folks who communicate with me at my ACLU address are more likely to have old or poorly-maintained e-mail systems than other people i communicate with, and they might not be able to handle curve 25519. So the ACLU keys use 3072-bit RSA, which is universally supported by any plausible OpenPGP implementation.

This way i can experiment with being more forward-looking in my free software and engineering community work, and shake out any bugs that i might find there, before cutting over the e-mails that come in from more legal- and policy-focused colleagues.

Isolated Subkey Capabilities

In my new certificate, the primary key is designated certification-only. There are three subkeys, one each for authentication, encryption, and signing. The primary key also has a longer expiration time (2 years as of this writing), while the subkeys have 1 year expiration dates.

Isolating this functionality helps a little bit with security (i can take the certification key entirely offline while still being able to sign non-identity data), and it also offers a pathway toward having a more robust subkey rotation schedule. As i build out my tooling for subkey rotation, i'll probably make a few more blog posts about that.
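
(Purely as an illustration, and emphatically not the exact commands i ran: a roughly equivalent layout, with a certification-only ed25519 primary key and one subkey per capability, can be sketched with stock GnuPG >= 2.1.17, using a placeholder identity and placeholder expiry values.)

# illustrative only: placeholder identity, not an actual workflow from this post
gpg --quick-generate-key 'Alice Example <alice@example.org>' ed25519 cert 2y
# grab the new primary key's fingerprint
FPR=$(gpg --with-colons --list-keys alice@example.org | awk -F: '/^fpr/ {print $10; exit}')
# one subkey per capability, each with a shorter expiry than the primary
gpg --quick-add-key "$FPR" ed25519 sign 1y
gpg --quick-add-key "$FPR" cv25519 encr 1y
gpg --quick-add-key "$FPR" ed25519 auth 1y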

Autocrypt-friendly

Finally, several of these changes are related to the Autocrypt project, a really great collaboration of a group of mail user agent developers, designers, UX experts, trainers, and users, who are providing guidance to make encrypted e-mail something that normal humans can use without having to think too much about it.

Autocrypt treats the OpenPGP certificate User IDs as merely decorative, but its recommended form of the User ID for an OpenPGP certificate is just the e-mail address wrapped in angle brackets. Unfortunately, i didn't manage to get that particular form of User ID into this certificate at this time (see discussion of split User IDs below).

Autocrypt is also moving toward 25519 elliptic curve keys, so this gives me a chance to exercise that choice.

I'm proud to be associated with the Autocrypt project, and have been helping to shepherd some of the Autocrypt functionality into different clients (my work on my own MUA of choice, notmuch is currently stalled, but i hope to pick it back up again soon). Having an OpenPGP certificate that works well with Autocrypt, and that i can stuff into messages even from clients that aren't fully-Autocrypt compliant yet is useful to me for getting things tested.

Documenting workflow vs. tooling

Some people may want to know "how did you make your OpenPGP cert like this?" For those folks, i'm sorry but this is not a step-by-step technical howto. I've read far too many "One True Way To Set Up Your OpenPGP Certificate" blog posts that haven't aged well, and i'm not confident enough to tell people to run the weird arbitrary commands that i ran to get things working this way.

Furthermore, i don't want people to have to run those commands.

If i think there are sensible ways to set up OpenPGP certificates, i want those patterns built into standard tooling for normal people to use, without a lot of command-line hackery.

So if i'm going to publish a "how to", it would be in the form of software that i think can be sensibly maintained and provides a sane user interface for normal humans. I haven't written that tooling yet, but i need to change certs first, so for now you just get this blog post in English. But feel free to tell me what you think i could do better!

History

This is my second attempt at an OpenPGP certificate transition in 2019. My earlier attempt uncovered a bunch of tooling issues with split-out User IDs. The original rationale for trying the split, and the problems i found are detailed below.

What were Separated User IDs?

My earlier attempt at a new OpenPGP certificate for 2019 tried to do an unusual thing with the certificate User IDs. Rather than two User IDs:

  • Daniel Kahn Gillmor <dkg@fifthhorseman.net>
  • Daniel Kahn Gillmor <dkg@debian.org>

the (now revoked) earlier certificate had the name separate from the e-mail addresses, making three User IDs:

  • Daniel Kahn Gillmor
  • dkg@fifthhorseman.net
  • dkg@debian.org

There are a couple reasons i tried this.

One reason is to simplify the certification process. Traditional OpenPGP User ID certification is an all-or-nothing process: the certifier is asserting that both the name and e-mail address belong to the identified party. But this can be tough to reason about. Maybe you know my name, but not my e-mail address. Or maybe you know me over e-mail, but aren't really sure what my "real" name is (i'll leave questions about what counts as a real name to a more philosophical blog post). You ought to be able to certify them independently. Now you can, since it's possible to certify one User ID independently of another.

Another reason is because i planned to use this certificate for e-mail, among other purposes. In e-mail systems, the human name is a confusing distraction, as the real underlying correspondent is the e-mail address. E-mail programs should definitely allow their users to couple a memorable name with an e-mail address, but it should be more like a petname. The bundling of a human "real" name with the e-mail address by the User ID itself just provides more points of confusion for the mail client.

If the user communicates with a certain person by e-mail address, the certificate should be bound to the e-mail protocol address on its own. Then the user themselves can decide what other monikers they want to use for the person; the User ID shouldn't force them to look at a "real" name just because it's been bundled together.

Alas, putting this attempt into public practice uncovered several gaps in the OpenPGP ecosystem.

User IDs without an e-mail address are often ignored, mishandled, or induce crashes.

And User IDs that are a raw e-mail address (without enclosing angle-brackets) tickle additional problems.

Finally, Monkeysphere's ssh user authentication mechanism typically works on a single User ID at a time. There's no way in Monkeysphere to say "authorize access to account foo by any OpenPGP certificate that has a valid User ID Alice Jones and a valid User ID <alice@example.org>". I'd like to keep the ~/.monkeysphere/authorized_user_ids that i already have in place working OK. I have enough technical debt to deal with for Monkeysphere (including that it only handles RSA currently) that i don't need the additional headache of reasoning about split/joint User IDs too.

Because of all of these issues, in particular the schleuder bugs, i'm not ready to use a split User ID OpenPGP certificate on today's Internet, alas. I have revoked the OpenPGP certificate that had split User IDs and started over with a new certificate with a more standard User ID layout, as described above. Better to rip off the band-aid quickly!

Rhonda D'Vine: Enigma

Friday 18th of January 2019 03:57:00 PM

Just the other day a colleague at work asked me what kind of music I listen to, especially when working. It's true, music helps me to focus better and concentrate more on my work. But it obviously depends on what kind of music it is. And there is one project I come back to every now and then. The name is Enigma. It's not disturbing, good for background, with soothing and non-intrusive vocals. Here are the songs:

  • Return To Innocence: This is quite likely the song you know from them, which is also what got me hooked originally.
  • Push The Limits: A powerful song. The album version is even a few minutes longer.
  • Voyageur: Love the rhythm and theme in this song.

Like always, enjoy.


Keith Packard: newt-duino

Friday 18th of January 2019 05:19:07 AM
Newt-Duino: Newt on an Arduino

Here's our target system. The venerable Arduino Duemilanove. Designed in 2009, this board comes with the Atmel ATmega328 system on chip, and not a lot else. This 8-bit microcontroller sports 32kB of flash, 2kB of RAM and another 1kB of EEPROM. Squeezing even a tiny version of Python onto this device took some doing.

How Small is Newt Anyways?

From my other postings about Newt, the amount of memory and set of dependencies for running Newt has shrunk over time. Right now, a complete Newt system fits in about 30kB of text on both Cortex-M and Atmel processors. That includes the parser and bytecode interpreter, plus garbage collector for memory management.

Bare Metal Arduino

The first choice to make is whether to take some existing OS and get that running, or to just wing it and run right on the metal. To save space, I decided (at least for now), to skip the OS and implement whatever hardware support I need myself.

Newt has limited dependencies; it doesn't use malloc, and the only OS interface the base language uses is getchar and putchar to the console. That means that a complete Newt system need not be much larger than the core Newt language and some simple I/O routines.

For the basic Arduino port, I included some simple serial I/O routines for the console to bring the language up. Once running, I've started adding some simple I/O functions to talk to the pins of the device.
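
For flavor, the transmit half of such a console layer on the ATmega328P can be as small as the sketch below. This is assumed avr-libc idiom rather than the exact code in newt, and USART initialization is left out.

#include <avr/io.h>
#include <stdio.h>

/* Sketch only: assumes UBRR0/UCSR0B/UCSR0C were configured elsewhere. */
static int uart_putchar(char c, FILE *stream)
{
    loop_until_bit_is_set(UCSR0A, UDRE0);  /* wait until the transmit buffer is free */
    UDR0 = c;                              /* send one byte */
    return 0;
}

static FILE uart_out = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);
/* pointing stdout at &uart_out in main() routes putchar()/printf() to the serial port */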

Pushing data out of .data

The ATmega328P, like most (all?) of the 8-bit Atmel processors, cannot directly access flash memory as data. Instead, you are required to use special library functions. Whoever came up with this scheme clearly loved the original 8051 design because it worked the same way.

Modern 8051 clones (like the CC1111 we used to use for Altus Metrum stuff) fix this bug by creating a flat 16-bit address space for both flash and ram so that 16-bit pointers can see all of memory. For older systems, SDCC actually creates magic pointers and has run-time code that performs the relevant magic to fetch data from anywhere in the system. Sadly, no-one has bothered to do this with avr-gcc.

This means that any data accesses done in the normal way can only touch RAM. To make this work, avr-gcc places all data, read-write and read-only in RAM. Newt has some pretty big read-only data bits, including parse tables and error messages. With only 2kB of RAM but 32kB of flash, it's pretty important to avoid filling that with read-only data.

avr-gcc has a whole 'progmem' mechanism which allows you to direct data into living only in flash by decorating the declaration with PROGMEM:

const char PROGMEM error_message[] = "This is in flash, not RAM";

This is pretty easy to manage; the only problem is that attempts to access this data from your program will fail unless you use the pgm_read_word and pgm_read_byte functions:

const char *m = error_message;
char c;

while ((c = (char) pgm_read_byte(m++)))
    putchar(c);

avr-libc includes some functions, often indicated with a '_P' suffix, which take pointers to flash instead of pointers to RAM. So, it's possible to place all read-only data in flash and not in RAM, it's just a pain, especially when using portable code.

So, I hacked up the newt code to add macros for the parse tables, one to add the necessary decoration and three others to fetch elements of the parse table by address.

#define PARSE_TABLE_DECLARATION(t)      PROGMEM t
#define PARSE_TABLE_FETCH_KEY(a)        ((parse_key_t) pgm_read_word(a))
#define PARSE_TABLE_FETCH_TOKEN(a)      ((token_t) pgm_read_byte(a))
#define PARSE_TABLE_FETCH_PRODUCTION(a) ((uint8_t) pgm_read_byte(a))

With suitable hacks in Newt to use these macros, I could finally build newt for the Arduino.

Automatically Getting Strings out of RAM

A lot of the strings in Newt are printf format strings passed directly to printf-ish functions. I created some wrapper macros to automatically move the format strings out of RAM and call functions expecting the strings to be in flash. Here's the wrapper I wrote for fprintf:

#define fprintf(file, fmt, args...) do {                \
        static const char PROGMEM __fmt__[] = (fmt);    \
        fprintf_P(file, __fmt__, ## args);              \
    } while(0)

This assumes that all calls to fprintf will take constant strings, which is true in Newt, but not true generally. I would love to automatically handle those cases using __builtin_constant_p, but gcc isn't as helpful as it could be; you can't declare a string to be initialized from a variable value, even if that code will never be executed:

#define fprintf(file, fmt, args...) do {                    \
        if (__builtin_constant_p(fmt)) {                    \
            static const char PROGMEM __fmt__[] = (fmt);    \
            fprintf_P(file, __fmt__, ## args);              \
        } else {                                            \
            fprintf(file, fmt, ## args);                    \
        }                                                   \
    } while(0)

This doesn't compile when 'fmt' isn't a constant string because the initialization of fmt, even though never executed, isn't legal. Suggestions on how to make this work would be most welcome. I only need this for sprintf, so for now, I've created a 'sprintf_const' macro which does the above trick that I use for all sprintf calls with a constant string format.
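
For reference, such a sprintf_const macro is just the same trick again, leaning on avr-libc's sprintf_P; this is a sketch, and the version in newt may differ in detail:

#define sprintf_const(buf, fmt, args...) do {                       \
        /* keep the format string in flash, print via sprintf_P */  \
        static const char PROGMEM __fmt__[] = (fmt);                \
        sprintf_P(buf, __fmt__, ## args);                           \
    } while(0)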

With this hack added, I saved hundreds of bytes of RAM, making enough space for a (wait for it) 900 byte heap within the interpreter. That's probably enough to do some simple robotics stuff, and with luck sufficient for my robotics students.

It Works!

> def fact(x):
>  r = 1
>  for y in range(2,x):
>   r *= y
>  return r
>
> fact(10)
362880
> fact(20) * fact(10)
4.41426e+22
>

This example was actually run on the Arduino pictured above.

Future Work

All I've got running at this point is the basic language and a couple of test primitives to control the LED on D13. Here are some things I'd like to add.

Using EEPROM for Program Storage

To make the system actually useful as a stand-alone robot, we need some place to store the application. Fortunately, the ATmega328P has 1kB of EEPROM memory. I plan on using this for program storage and will parse the contents at start up time.
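
A rough sketch of what that startup read could look like with avr-libc's EEPROM routines follows; the names here are hypothetical, not actual newt symbols:

#include <avr/eeprom.h>
#include <stdint.h>

#define PROGRAM_EEPROM_SIZE 1024   /* the ATmega328P has 1kB of EEPROM */

/* Feed the stored program to the parser one byte at a time so it never
   has to live in RAM; 0xff (the erased state) marks the end. */
static int program_getchar(void)
{
    static uint16_t offset;
    uint8_t c;

    if (offset >= PROGRAM_EEPROM_SIZE)
        return -1;
    c = eeprom_read_byte((const uint8_t *) offset++);
    return c == 0xff ? -1 : c;
}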

Simple I/O API

Having taught students using Lego Logo, I've learned that the fewer keystrokes needed to get the first light blinking the better the students will do. Seeking to replicate that experience, I'm thinking that the I/O API should avoid requiring any pin mode settings, and should have functions that directly manipulate any devices on the board. The Lego Logo API also avoids functions with more than one argument, preferring to just save state within the system. So, for now, I plan to replicate that API loosely:

def onfor(n):
    talkto(LED)
    on()
    time.sleep(n)
    off()

In the Arduino environment, a motor controller is managed with two separate bits, requiring two separate numbers and lots of initialization:

int motor_speed = 6;
int motor_dir = 5;

void setup() {
    pinMode(motor_dir, OUTPUT);
}

void motor(int speed, int dir) {
    digitalWrite(motor_dir, dir);
    analogWrite(motor_speed, speed);
}

By creating an API which knows how to talk to the motor controller as a unified device, we get:

def motor(speed, dir):
    talkto(MOTOR_1)
    setdir(dir)
    setpower(speed)

Plans

I'll see about getting the EEPROM bits working, then the I/O API running. At that point, you should be able to do anything with this that you can with C, aside from performance.

Then some kind of host-side interface, probably written in Java to be portable to whatever machine the user has, and I think the system should be ready to experiment with.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2019

Friday 18th of January 2019 04:43:11 AM

My old OpenPGP certificate will be 12 years old later this year. I'm transitioning to a new OpenPGP certificate.

You might know my old OpenPGP certificate as:

pub   rsa4096 2007-06-02 [SC] [expires: 2019-06-29]
      0EE5BE979282D80B9F7540F1CCD2ED94D21739E9
uid   Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid   Daniel Kahn Gillmor <dkg@debian.org>

My new OpenPGP certificate is:

pub   ed25519 2019-01-17 [C] [expires: 2021-01-16]
      723E343AC00331F03473E6837BE5A11FA37E8721
uid   Daniel Kahn Gillmor
uid   dkg@debian.org
uid   dkg@fifthhorseman.net

If you've certified my old certificate, I'd appreciate your certifying my new one. Please do confirm by contacting me via whatever channels you think are most appropriate (including in-person if you want to share food or drink with me!) before you re-certify, of course.

I've published the new certificate to the SKS keyserver network, as well as to my personal website -- you can fetch it like this:

wget -O- https://dkg.fifthhorseman.net/dkg-openpgp.key | gpg --import

A copy of this transition statement signed by both the old and new certificates is available on my website, and you can also find further explanation about technical details, choices, and rationale on my blog.

Technical details

I've made a few decisions differently about this certificate:

Ed25519 and Curve25519 for Public Key Material

I've moved from 4096-bit RSA public keys to the Bernstein elliptic curve 25519 for all my public key material (EdDSA for signing, certification, and authentication, and Curve25519 for encryption). While 4096-bit RSA is likely to be marginally stronger cryptographically than curve 25519, 25519 still appears to be significantly stronger than any cryptanalytic attack known to the public.

Additionally, elliptic curve keys and the signatures associated with them are tiny compared to 4096-bit RSA. I certified my new cert with my old one, and well over half of the new certificate is just certifications from the old key because they are so large.

This size advantage makes it easier for me to ship the public key material (and signatures from it) in places that would be more awkward otherwise. See the discussion below.

Separated User IDs

The other thing you're likely to notice if you're considering certifying my key is that my User IDs are now split out. Rather than two User IDs:

  • Daniel Kahn Gillmor <dkg@fifthhorseman.net>
  • Daniel Kahn Gillmor <dkg@debian.org>

I now have the name separate from the e-mail addresses, making three User IDs:

  • Daniel Kahn Gillmor
  • dkg@fifthhorseman.net
  • dkg@debian.org

There are a couple reasons i've done this.

One reason is to simplify the certification process. Traditional OpenPGP User ID certification is an all-or-nothing process: the certifier is asserting that both the name and e-mail address belong to the identified party. But this can be tough to reason about. Maybe you know my name, but not my e-mail address. Or maybe you know me over e-mail, but aren't really sure what my "real" name is (i'll leave questions about what counts as a real name to a more philosophical blog post). You ought to be able to certify them independently. Now you can, since it's possible to certify one User ID independently of another.

Another reason is because i plan to use this certificate for e-mail, among other purposes. In e-mail systems, the human name is a confusing distraction, as the real underlying correspondent is the e-mail address. E-mail programs should definitely allow their users to couple a memorable name with an e-mail address, but it should be more like a petname. The bundling of a human "real" name with the e-mail address by the User ID itself just provides more points of confusion for the mail client.

If the user communicates with a certain person by e-mail address, the key should be bound to the e-mail protocol address on its own. Then the user themselves can decide what other monikers they want to use for the person; the User ID shouldn't force them to look at a "real" name just because it's been bundled together.

Split out ACLU identity

Note that my old certificate included some additional identities, including job-specific e-mail addresses. I've split out my job-specific cryptographic credentials to a different OpenPGP key entirely. If you want to mail me at dkg@aclu.org, you can use key 888E6BEAC41959269EAA177F138F5AB68615C560 (which is also published on my work bio page).

This is in part because the folks who communicate with me at my ACLU address are more likely to have old or poorly-maintained e-mail systems than other people i communicate with, and they might not be able to handle curve 25519. So the ACLU key is using 3072-bit RSA, which is universally supported by any plausible OpenPGP implementation.

This way i can experiment with being more forward-looking in my free software and engineering community work, and shake out any bugs that i might find there, before cutting over the e-mails that come in from more legal- and policy-focused colleagues.

Isolated Subkey Capabilities

In my new certificate, the primary key is designated certification-only. There are three subkeys, one each for authentication, encryption, and signing. The primary key also has a longer expiration time (2 years as of this writing), while the subkeys have 1 year expiration dates.

Isolating this functionality helps a little bit with security (i can take the certification key entirely offline while still being able to sign non-identity data), and it also offers a pathway toward having a more robust subkey rotation schedule. As i build out my tooling for subkey rotation, i'll probably make a few more blog posts about that.

Autocrypt-friendly

Finally, several of these changes are related to the Autocrypt project, a really great collaboration of a group of mail user agent developers, designers, UX experts, trainers, and users, who are providing guidance to make encrypted e-mail something that normal humans can use without having to think too much about it.

Autocrypt treats the OpenPGP certificate User IDs as merely decorative, but its recommended form of the User ID for an OpenPGP certificate is just the bare e-mail address. With the User IDs split out as described above, i can produce a minimized OpenPGP certificate that's Autocrypt-friendly, including only the User ID that matches my sending e-mail address precisely, and leaving out the rest.

I'm proud to be associated with the Autocrypt project, and have been helping to shepherd some of the Autocrypt functionality into different clients (my work on my own MUA of choice, notmuch is currently stalled, but i hope to pick it back up again soon). Having an OpenPGP certificate that works well with Autocrypt, and that i can stuff into messages even from clients that aren't fully-Autocrypt compliant yet is useful to me for getting things tested.

Documenting workflow vs. tooling

Some people may want to know "how did you make your OpenPGP cert like this?" For those folks, i'm sorry but this is not a step-by-step technical howto. I've read far too many "One True Way To Set Up Your OpenPGP Certificate" blog posts that haven't aged well, and i'm not confident enough to tell people to run the weird arbitrary commands that i ran to get things working this way.

Furthermore, i don't want people to have to run those commands.

If i think there are sensible ways to set up OpenPGP certificates, i want those patterns built into standard tooling for normal people to use, without a lot of command-line hackery.

So if i'm going to publish a "how to", it would be in the form of software that i think can be sensibly maintained and provides a sane user interface for normal humans. I haven't written that tooling yet, but i need to change keys first, so for now you just get this blog post in English. But feel free to tell me what you think i could do better!

Dirk Eddelbuettel: RcppArmadillo 0.9.200.7.0

Friday 18th of January 2019 02:00:00 AM

A new RcppArmadillo bugfix release arrived at CRAN today. The version 0.9.200.7.0 is another minor bugfix release, and is based on the new Armadillo bugfix release 9.200.7 from earlier this week. I also just uploaded the Debian version, and Uwe’s systems have already created the CRAN Windows binary.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to a Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 559 other packages on CRAN.

This release just brings minor upstream bug fixes, see below for details (and we also include the updated entry for the November bugfix release).

Changes in RcppArmadillo version 0.9.200.7.0 (2019-01-17)
  • Upgraded to Armadillo release 9.200.7 (Carpe Noctem)

  • Fixes in 9.200.7 compared to 9.200.5:

    • handling complex compound expressions by trace()

    • handling .rows() and .cols() by the Cube class

Changes in RcppArmadillo version 0.9.200.5.0 (2018-11-09)
  • Upgraded to Armadillo release 9.200.5 (Carpe Noctem)

  • Changes in this release

    • linking issue when using fixed size matrices and vectors

    • faster handling of common cases by princomp()

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Anastasia Tsikoza: Week 5: Resolving the blocker

Thursday 17th of January 2019 11:42:56 PM
Photo by Pascal Debrunner on Unsplash

Hi, I am Anastasia, an Outreachy intern working on the project “Improve integration of Debian derivatives with Debian infrastructure and community”. Here are my other posts that haven’t appeared in the feed on time:

This post is about my work on the subscription feature for Debian derivatives - first of the two main issues to be resolved within my internship. And this week’s topic from the organizers is “Think About Your Audience”, especially newcomers to the community and future Outreachy applicants. So I’ll try to write about the feature keeping the most important details but taking into account that the readers might be unfamiliar with some terms and concepts.

What problem I was trying to solve

As I wrote here, around the middle of December the Debian derivatives census was re-enabled. It means that the census scripts started running on a scheduled basis. Every midnight they scanned through the derivatives’ wiki pages, retrieved all the info needed, processed it and formed files and directories for future use.

At each stage different kinds of errors could occur. Usually that means the maintainers should be contacted so that they update the wiki pages of their derivatives or fix something in their apt repositories. Checking error logs, identifying new issues, notifying maintainers about them and keeping track of who has already been notified and who has not - all this has always been a significant amount of work, which needed to be automated eventually (at least partially).

What exactly I needed to do

By design, the subscription system was to have nothing in common with daily spam mailings: notifications would only be sent when new issues appeared, which doesn’t happen too often.

It was decided to create several small logs for each derivative and then merge them into one. The resulting log would also have comments, noting which issues are new and which ones have been fixed since the last log was sent.

So my task could be divided into three parts:

  • create a way for folks to subscribe
  • write a script joining the small logs into a separate log for each particular derivative
  • write a script sending the log to subscribers (if it has to be sent)

The directory with the small logs was supposed to be kept until the next run to be compared with the new small logs. If some new issues appeared, the combined log would be sent to subscribers; otherwise it would simply be stored in the derivative’s directory.

Overcoming difficulties

I ran into my first difficulties almost immediately after my changes were deployed. I had forgotten about the parallel execution of make: in the project’s crontab file we use make --jobs=3, which means three parallel make jobs create files and directories concurrently. Everything can turn to chaos if you aren’t extra careful with the makefiles.

Every time I needed to fix something, Paul had to run a cron job and recreate most of the files from scratch. For nearly two days we pulled together, coordinating our actions through IRC. At the end of the first day the main problem seemed to be solved and I let myself rejoice… and then things went wrong. I was at my wits’ end and felt extremely tired, so I went off to bed. The next day I found the solution seconds after waking up!

Cool things

When everything finally went off without a hitch, it was a huge relief. I am happy that some folks have already subscribed to the error notifications, and I hope they find the feature useful.

Also I was truly pleased to know that Paul had first learned about makefiles’ order-only prerequisites from my code! That was quite a surprise :)
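
(For readers who haven't met them: an order-only prerequisite is listed after a | in a rule; it must exist before the target is built, but its timestamp never forces a rebuild, which is exactly what you want for directories that parallel jobs write into. A toy example, not taken from the census makefiles, with tab-indented recipe lines:)

# "logs" must exist before any log file is written, but touching the
# directory later never triggers a rebuild of the files inside it.
logs/%.log: %.txt | logs
	cp $< $@

logs:
	mkdir -p $@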

More details

If you are curious, a more detailed report will appear soon in the Projects section.

Jonathan Dowland: multi-coloured Fedoras

Thursday 17th of January 2019 04:57:20 PM

My blog post about my Red Hat shell prompt proved very popular, although some people tried my instructions and couldn't get it to work, so here's some further information.

The unicode code-point I am using is "

Julien Danjou: Serious Python released!

Thursday 17th of January 2019 04:56:09 PM

Today I'm glad to announce that my new book, Serious Python, has been released.

However, you wonder… what is Serious Python?

Well, Serious Python is the new name of The Hacker's Guide to Python — the first book I published. Serious Python is the 4th update of that book — but with a brand new name and a new publisher!

For more than a year, I've been working with the publisher No Starch Press to enhance this book and bring it to the next level! I'm very proud of what we achieved, and working with a whole team on this book has been a fantastic experience.

The content has been updated to be ready for 2019: pytest is now a de-facto standard for testing, so I had to write about it. On the other hand, Python 2 support is less of a focus, and I removed many mentions of Python 2 altogether. Some chapters have been reorganized or regrouped, and others have been enhanced with new content!

The good news: you can get this new edition of the book with a 15% discount for the next 24 hours using the coupon code SERIOUSPYTHONLAUNCH on the book page.

The book is also released as part of the No Starch collection. They are also in charge of distributing the paperback copy of the book. If you want a version of the book that you can touch and hold in your arms, look for it in the No Starch shop, on Amazon, or in your favorite book shop!

No Starch version of Serious Python cover

Kunal Mehta: Eliminating PHP polyfills

Thursday 17th of January 2019 07:50:13 AM

The Symfony project has recently created a set of pure-PHP polyfills for both PHP extensions and newer language features. It allows developers to add requirements upon those functions or language additions without increasing the system requirements upon end users. For the most part, I think this is a good thing, and valuable to have. We've done similar things inside MediaWiki as well for CDB support, Memcached, and internationalization, just to name a few.

But the downside is that on platforms where it is possible to install the missing PHP extensions or upgrade PHP itself, we're shipping empty code. MediaWiki requires both the ctype and mbstring PHP extensions, and our servers have those, so there's no use in deploying polyfills for those, because they'll never be used. In September, Reedy and I replaced the polyfills with "unpolyfills" that simply provide the correct package, so the polyfill is skipped by composer. That removed about 3,700 lines of code from what we're committing, reviewing, and deploying - a big win.
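
For comparison, the generic composer-level version of that trick (a sketch of the idea, not necessarily the exact shape of the unpolyfills mentioned above) is for the root composer.json to require the native extensions and declare that it replaces the polyfills, so dependency resolution treats them as already provided and never installs them:

{
    "require": {
        "ext-ctype": "*",
        "ext-mbstring": "*"
    },
    "replace": {
        "symfony/polyfill-ctype": "*",
        "symfony/polyfill-mbstring": "*"
    }
}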

Last month I came across the same problem in Debian: #911832. The php-symfony-polyfill package was failing tests on the new PHP 7.3 and was up for removal from the next stable release (Buster). On its own, the package isn't too important, but it is a dependency of other important packages. In Debian, the polyfills are even more useless, since instead of depending upon e.g. php-symfony-polyfill-mbstring, the package could simply depend upon the native PHP extension, php-mbstring. In fact, there was already a system designed to implement those kinds of overrides. After looking at the dependencies, I uploaded a fixed version of php-webmozart-assert, filed bugs for two other packages, and provided patches for symfony. I also made a patch to the default overrides in pkg-php-tools, so that any future package that depends upon a symfony polyfill should now automatically depend upon the native PHP extension if necessary.
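
A quick way to see whether a given polyfill would be dead weight on your own system is to check for the native extension; on Debian the real extension is only a package away (a small illustrative sketch using mbstring as the example):

# Is the native extension already loaded? If so, a polyfill is dead code.
php -m | grep -i mbstring
# On Debian, install (or depend upon) the real extension instead of a polyfill:
sudo apt install php-mbstring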

Ideally composer would support alternative requirements like ext-mbstring | php-symfony-polyfill-mbstring, but that's been declined by their developers. There's another issue that is somewhat related, but doesn't do much to reduce the installation of polyfills when unnecessary.

Reproducible builds folks: Reproducible Builds: Weekly report #194

Wednesday 16th of January 2019 04:39:41 PM

Here’s what happened in the Reproducible Builds effort between Sunday January 6 and Saturday January 12 2019:

Packages reviewed and fixed, and bugs filed

Website development

There were a number of updates to the reproducible-builds.org project website this week, including:

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:

  • Holger Levsen:
    • Arch Linux-specific changes:
      • Use Debian’s sed, untar and others with sudo as they are not available in the bootstrap.tar.gz file ([], [], [], [], etc.).
      • Fix incorrect sudoers(5) regex. []
      • Only move old schroot away if it exists. []
      • Add and drop debug code, cleanup cruft, exit on cleanup(), etc. ([], [])
      • cleanup() is only called on errors, thus exit 1. []
    • Debian-specific changes:
      • Revert “Support arbitrary package filters when generating deb822 output” ([]) and re-open the corresponding merge request
      • Show the total number of packages in a package set. []
    • Misc/generic changes:
      • Node maintenance. ([], [], [], etc.)
  • Mattia Rizzolo:
    • Fix the NODE_NAME value in case it’s not a full-qualified domain name. []
    • Node maintenance. ([], etc.)
  • Vagrant Cascadian:

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Iain R. Learmonth: A Solution for Authoritative DNS

Wednesday 16th of January 2019 04:30:00 PM

I’ve been thinking about improving my DNS setup. So many things will use e-mail verification as a backup authentication measure that it is starting to show as a real weak point. An Ars Technica article earlier this year talked about how “[f]ederal authorities and private researchers are alerting companies to a wave of domain hijacking attacks that use relatively novel techniques to compromise targets at an almost unprecedented scale.”

The two attacks that are mentioned in that article, changing the nameserver and changing records, are something that DNSSEC could protect against. Records wouldn’t even have to be changed on my chosen nameservers; a BGP hijack could simply send another server the queries for records on my domain, and that server could reply with whatever it chooses.

After thinking for a while, my requirements come down to:

  • Offline DNSSEC signing
  • Support for storing signing keys on an HSM (YubiKey)
  • Version control
  • No requirement to run any Internet-facing infrastructure myself

After some searching I discovered GooDNS, a “good” DNS hosting provider. They have an interesting setup that looks to fit all of my requirements. If you’re coming from a more traditional arrangement with either a self-hosted name server or a web panel then this might seem weird, but if you’ve done a little “infrastructure as code” then maybe it is not so weird.

The initial setup must be completed via the web interface. You’ll need to have a hardware security module (HSM) for providing a time-based one-time password (TOTP), an SSH key and optionally a GPG key as part of the registration. You will need the TOTP to make any changes via the web interface, the SSH key will be used to interact with the git service, and the GPG key will be used for any email correspondence, including recovery in the case that you lose your TOTP HSM or password.

You must validate your domain before it will be served from the GooDNS servers. There are two options for this, one for new domains and one “zero-downtime” option that is more complex but may be desirable if your domain is already live. For new domains you can simply update your nameservers at the registrar to validate your domain; for existing domains you can add a TXT record to the current DNS setup, which GooDNS will validate, allowing the domain to be configured fully before switching the nameservers. Once the domain is validated, you will not need to use the web interface again unless you are updating contact, security or billing details.

All the DNS configuration is managed in a single git repository. There are three branches in the repository: “master”, “staging” and “production”. These are just the defaults; you can create other branches if you like. The only two that GooDNS will use are the “staging” and “production” branches.

GooDNS provides a script that you can install at /usr/local/bin/git-dns (or elsewhere in your path) which provides some simple helper commands for working with the git repository. The script is extremely readable and so it’s easy enough to understand and write your own scripts if you find yourself needing something a little different.

When you clone your git repository you’ll find one text file on the master branch for each of your configured zones:

irl@computer$ git clone git@goodns.net:irl.git
Cloning into 'irl1'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), 22.55 KiB | 11.28 MiB/s, done.
Resolving deltas: 100% (1/1), done.
irl@computer$ ls
irl1.net  learmonth.me
irl@computer$ cat irl1.net
@ IN SOA ns1.irl1.net. hostmaster.irl1.net. (
        _SERIAL_ 28800 7200 864000 86400 )
@ IN NS ns1.goodns.net.
@ IN NS ns2.goodns.net.
@ IN NS ns3.goodns.net.

In the backend GooDNS is using OpenBSD 6.4 servers with nsd(8). This means that the zone files use the same syntax. If you don’t know what this means then that is fine as the documentation has loads of examples in it that should help you to configure all the record types you might need. If a record type is not yet supported by nsd(8), you can always specify the record manually and it will work just fine.

One thing you might note here is that the string _SERIAL_ appears instead of a serial number. The git-dns script will replace this with a serial number when you are ready to publish the zone file.
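
Adding ordinary records is then just a matter of editing the zone file on the master branch. As a sketch, using addresses reserved for documentation rather than anything from the real zones:

# Append a couple of records in standard master-file syntax
# (192.0.2.0/24 and 2001:db8::/32 are documentation-only ranges)
cat >> irl1.net <<'EOF'
www    IN    A       192.0.2.10
www    IN    AAAA    2001:db8::10
EOF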

I’ll assume that you already have your GPG key and SSH key set up; now let’s set up the DNSSEC signing key. For this, we will use one of the four slots of the YubiKey. You could use either 9a or 9e, but here I’ll use 9e, as 9a is already used for my SSH key.

To set up the token, we will need the yubico-piv-tool. Be extremely careful when following these steps, especially if you are using a production device. Try to understand the commands before pasting them into the terminal.

First, make sure the slot is empty. You should get output similar to the following:

irl@computer$ yubico-piv-tool -s 9e -a status
CHUID:  ...
CCC:    No data available
PIN tries left: 10

Now we will use git-dns to create our key signing key (KSK):

irl@computer$ git dns kskinit --yubikey-neo
Successfully generated a new private key.
Successfully generated a new self signed certificate.
Found YubiKey NEO.
Slots available:
  (1) 9a - Not empty
  (2) 9e - Empty
Which slot to use for DNSSEC signing key? 2
Successfully imported a new certificate.
CHUID:  ...
CCC:    No data available
Slot 9e:
  Algorithm:   ECCP256
  Subject DN:  CN=irl1.net
  Issuer DN:   CN=irl1.net
  Fingerprint: 97dda8a441a401102328ab6ed4483f08bc3b4e4c91abee8a6e144a6bb07a674c
  Not Before:  Feb 01 13:10:10 2019 GMT
  Not After:   Feb 01 13:10:10 2021 GMT
PIN tries left: 10

We can see the public key for this new KSK:

irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==

Next we will create a zone signing key (ZSK). These are stored in the keys/ folder of your git repository but are not version controlled. You can optionally encrypt these with GnuPG (which would then require the YubiKey in order to sign zones) but I’ve not done that here. Operations using slot 9e do not require the PIN, so leaving the YubiKey connected to the computer is pretty much the same as leaving the KSK on the disk. Maybe a future YubiKey will not have this restriction or will add more slots.

irl@computer$ git dns zskinit
Created ./keys/
Successfully generated a new private key.
irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==
irl1.net. DNSKEY 257 3 13 kS7DoH7fxDsuH8o1vkvNkRcMRfTbhLqAZdaT2SRdxjRwZSCThxxpZ3S750anoPHV048FFpDrS8Jof08D2Gqj9w==

Now we can go to our domain registrar and add DS records to the registry for our domain using the public keys. First though, we should actually sign the zone. To create a signed zone:

irl@computer$ git dns signall
Signing irl1.net...
Signing learmonth.me...
[production 51da0f0] Signed all zone files at 2019-02-01 13:28:02
 2 files changed, 6 insertions(+), 0 deletions(-)

You’ll notice that all the zones were signed although we only created one set of keys. Setups where you have one shared KSK and an individual ZSK per zone are possible, but they provide questionable additional security. Reducing the number of keys required for DNSSEC helps to keep them all under control.

To make these changes live, all that is needed is to push the production branch. To keep things tidy, and to keep a backup of your sources, you can push the master branch too. git-dns provides a helper function for this:

irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now edit a zone file on the master branch and want to try out the zone before making it live, all I need to do is:

irl@computer$ git dns signall --staging
Signing irl1.net...
Signing learmonth.me...
[staging 72ea1fc] Signed all zone files at 2019-02-01 13:30:12
 2 files changed, 8 insertions(+), 0 deletions(-)
irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now use the staging resolver or look up records at irl1.net.staging.goodns.net then I’ll see the zone live. The staging resolver is a really cool idea for development and testing. They give you a couple of unique IPv6 addresses just for you that will serve your staging zone files and act as a resolver for everything else. You just have to plug these into your staging environment and everything is ready to go. In the future they are planning to allow you to have more than one staging environment too.
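
To convince yourself that the signed records are really being served, an ordinary dig query against either environment would do; something along these lines (illustrative only, any DNSSEC-aware resolver or name server works):

# Check the staging view of the zone, asking for DNSSEC records explicitly
dig +dnssec +multiline SOA irl1.net.staging.goodns.net
# Or query the production name servers directly once the push has gone out
dig +dnssec DNSKEY irl1.net @ns1.goodns.net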

All that is left to do is ensure that your zone signatures stay fresh. This is easy to achieve with a cron job:

0 3 * * * /usr/local/bin/git-dns cron --repository=/srv/dns/irl1.net --quiet

I monitor the records independently and disable the mail output from this command but you might want to drop the --quiet if you’d like to get mails from cron on errors/warnings.

On the GooDNS blog they talk about adding an Onion service for the git server in the future so that they do not have logs that could show the location of your DNSSEC signing keys, which allows you to have even greater protection. They already support performing the git push via Tor but the addition of the Onion service would make it faster and more reliable.

Unfortunately, GooDNS is entirely fictional and you can’t actually manage your DNS in this way, but wouldn’t it be nice? This post has drawn inspiration from the following:

Daniel Silverstone: Plans for 2019

Wednesday 16th of January 2019 11:29:51 AM

At the end of last year I made eight statements about what I wanted to do throughout 2019. I tried to split them semi-evenly between being a better adult human and being a better software community contributor. I have had a few weeks now to settle my thoughts around what they mean and I'd like to take some time to go through the eight and discuss them a little more.

I've been told that doing this reduces the chance of me sticking to the points because simply announcing the points and receiving any kind of positive feedback may stunt my desire to actually achieve the goals. I'm not sure about that though, and I really want my wider friends community to help keep me honest about them all. I've set a reminder for April 7th to review the situation and hopefully be able to report back positively on my progress.

My list of goals was stated in a pair of tweets:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday
Weight and fitness

Some of you may be aware already, others may not, that I have been making an effort to shed some of my excess weight over the past six or seven months. I "started" in May of 2018 weighing approximately 141kg and I am, as of this morning, weighing approximately 101kg. Essentially that's a semi-steady rate of 5kg per month, though it has, obviously, been slowing down of late.

In theory, given my height of roughly 178cm I should aim for a weight of around 70kg. I am trying to improve my fitness and to build some muscle and as such I'm aiming long-term for roughly 75kg. My goal for this year is to continue my improvement and to reach and maintain 80kg or better. I think this will make a significant difference to my health and my general wellbeing. I'm already sleeping better on average, and I feel like I have more energy over all. I bought a Garmin Vivoactive 3 and have been using that to track my general health and activity. My resting heart rate has gone down a few BPM over the past six months, and I can see my general improvement in sleep etc over that time too. I bought a Garmin Index Scale to track my weight and body composition, and that is also showing me good values as well as encouraging me to weigh myself every day and to learn how to interpret the results.

I've been managing my weight loss partly by means of a 16:8 intermittent fasting protocol, combined with a steady calorie deficit of around 1000kcal/day. While this sounds pretty drastic, I was horrendously overweight and this was critical to getting my weight to shift quickly. I expect I'll reduce that deficit over the course of the year, hence I'm only aiming for a 20kg drop over a year rather than trying to maintain what could in theory be a drop of 30kg or more.

In addition to the IF/deficit, I have been more active. I bought an e-bike and slowly got going on that over the summer, along with learning to enjoy walks around my local parks and scrubland. Once the weather got bad enough that I didn't want to be out of doors, I joined a gym, where I have been going regularly since September. Since the end of October I have been doing a very basic strength training routine and my shoulders do seem to be improving for it. I can still barely do a pushup but it's less embarrassingly awful than it was.

Given my efforts toward my fitness, my intention this year is to extend that to include a Couch to 5k type effort. Amusingly, Garmin offer a self adjusting "coach" called Garmin Coach which I will likely use to guide me through the process. While I'm not committing to any, maybe I'll get involved in some parkruns this year too. I'm not committing to reach an ability to run 5k because, quite simply, my bad leg may not let me, but I am committing to give it my best. My promise to myself was to start some level of jogging once I hit 100kg, so that's looking likely by the end of this month. Maybe February is when I'll start the c25k stuff in earnest.

Adulting

I have put three items down in this category to get better at this year. One is a big thing for our house. I am, quite simply put, awful at tidying up. I leave all sorts of things lying around and I am messy and lazy. I need to fix this. My short-term goal in this respect is to pick one room of the house where the mess is mostly mine, and learn to keep it tidy before my checkpoint in April. I think I'm likely to choose the Study because it's where many of my other activities for this year will centre and it's definitely almost entirely my mess in there. I'm not yet certain how I'll learn to do this, but it has been a long time coming and I really do need to. It's not fair to my husband for me to be this awful all the time.

The second of these points is to explicitly save money for renovations. Last year we had a new bathroom installed and I've been seriously happy about that. We will need to pay that off this year (we have the money, we're just waiting as long as we can to earn the best interest on it first) and then I'll want to be saving up for another spot of renovations. I'd like to have the kitchen and dining room done - new floor, new units and sink in the kitchen, fix up the messy wall in the dining room, have them decorated, etc. I imagine this will take quite a bit of 2019 to save for, but hopefully this time next year I'll be saying that we managed that and it's time for the next part of the house.

Finally I want to take a proper holiday this year. It has been a couple of years since Rob and I went to Seoul for a month, and while that was excellent, it was partly "work from home" and so I'd like to take a holiday which isn't also a conference, or working from home, or anything other than relaxation and seeing of interesting things. This will also require saving for, so I imagine we won't get to do it until mid to late 2019, but I feel like this is part of a general effort I've been making to take care of myself more. The fitness stuff above being physical, but a proper holiday being part of taking better care of my mental health.

Software, Hardware, and all the squishy humans in between

2018 was not a great year for me in terms of getting projects done. I have failed to do almost anything with Gitano and I did not do well with Debian or other projects I am part of. As such, I'm committing to do better by my projects in 2019.

First, and foremost, I'm pledging to focus my efforts on finishing projects which I've already started. I am very good at thinking "Oh, that sounds fun" and starting something new, leaving old projects by the wayside and not getting them to any state of completion. While software is never entirely "done", I do feel like I should get in-progress projects to a point that others can use them and maybe contribute too.

As such, I'll be making an effort to sort out issues which others have raised in Gitano (though I doubt I'll do much more feature development for it) so that it can be used by NetSurf and so that it doesn't drop out of Debian. Since the next release of Debian is due soon, I will have to pull my finger out and get this done pretty soon.

I have been working, on and off, with Rob on a new point-of-sale for our local pub Ye Olde Vic and I am committing to get it done to a point that we can experiment with using it in the pub by the summer. Also I was working on a way to measure fluid flow through a pipe so that we can correlate the pulled beer with the sales and determine wastage etc. I expect I'll get back to the "beer'o'meter" once the point-of-sale work is in place and usable. I am not going to commit to getting it done this year, but I'd like to make a dent in the remaining work for it.

I have an on-again off-again relationship with some code I wrote quite a while ago when learning Rust. I am speaking of my Yarn implementation called (imaginatively) rsyarn. I'd like to have that project reworked into something which can be used with Cargo and associated tooling nicely so that running cargo test in a Rust project can result in running yarns as well.

There may be other projects which jump into this category over the year, but those listed above are the ones I'm committing to make a difference to my previous lackadaisical approach.

On a more community-minded note, one of my goals is to ensure that I'm always a net benefit to any project I join or work on in 2019. I am very aware that in a lot of cases, I provide short drive-by contributions to projects which can end up costing that project more than I gave them in benefit. I want to stop that behaviour and instead invest more effort into fewer projects so that I always end up a net benefit to the project in question. This may mean spending longer to ensure that an issue I file has enough in it that I may not need to interact with it again until verification of a correct fix is required. It may mean spending time fixing someone else's issues so that there is the engineering bandwidth for someone else to fix mine. I can't say for sure how this will manifest, beyond being up-front and requesting of any community I decide to take part in, that they tell me if I end up costing more than I'm bringing in benefit.

Rust and the Rust community

I've mentioned Rust above, and this is perhaps the most overlappy of my promises for 2019. I want to give back to the Rust community because, over the past few years as I've learned Rust and learned more and more about the community, I've seen how much of a positive effect they've had on my life. Not just because they made learning a new programming language so enjoyable, but because of the community's focus on programmers as human beings. The fantastic documentation ethic, and the wonderfully inclusive atmosphere in the community, meant that I managed to get going with Rust so much more effectively than with almost any other language I've ever tried to learn since Lua.

I have, since Christmas, been slowly involving myself in the Rust community more and more. I joined one of the various Discord servers, have been learning about how crates.io is managed, and have been contributing to rustup.rs, which is the initial software interface most Rust users encounter. It forms such an integral part of the experience of the ecosystem that I feel it's somewhere I can make a useful impact.

While I can't say a significant amount more right now, I hope I'll be able to blog more in the future on what I'm up to in the Rust community and how I hope that will benefit others already in, and interested in joining, the fun that is programming in Rust.

In summary, I hope at least some of you will help to keep me honest about my intentions for 2019, and if, in return, I can help you too, please feel free to let me know.

Daniel Lange: Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Wednesday 16th of January 2019 08:20:05 AM

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05), and, two tries and four years after the kernel, OpenSSL started using that functionality with release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block[1] if the kernel can't provide enough entropy, and that's frequently the case during boot, especially with VMs that have no input devices or I/O jitter to seed the pseudo-random number generator from.
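
If you want to watch a program hit that syscall yourself, strace makes it visible. A quick sketch, using a throwaway OpenSSH key generation as the guinea pig (any program that gathers randomness via getrandom() will do):

# Trace only the getrandom() syscall; on an entropy-starved system this is
# exactly where the process sits blocked.
strace -f -e trace=getrandom ssh-keygen -t ed25519 -N '' -f /tmp/throwaway-key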

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse; see issues #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times[2] when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 1...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very few external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes[3].

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
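
Before picking a solution it is worth knowing where you stand. The kernel exposes how much entropy it currently credits to the pool; a freshly booted, entropy-starved system will show low values here:

# Entropy currently credited to the pool (the maximum is 4096 bits)
cat /proc/sys/kernel/random/entropy_avail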

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
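
A minimal sketch of what that looks like on a Debian-style system booting via GRUB (assuming a kernel that already understands the parameter):

# /etc/default/grub: append the parameter to the kernel command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"
# then regenerate the boot configuration and reboot:
sudo update-grub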

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.
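
Put together, a sketch for a Debian-style system might look like this (module and parameter names as described above; pick the quality value that matches your level of trust):

# Load the TPM RNG driver now and on every subsequent boot
sudo modprobe tpm-rng
echo tpm-rng | sudo tee /etc/modules-load.d/tpm-rng.conf
# ...and add to the kernel command line (see the GRUB sketch above):
#   rng_core.default_quality=1000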

VirtIO

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019.

Updates 14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise", Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterated what has been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit the entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real-world use cases, albeit ones running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.
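
A crude sketch of that kind of delay on the host; whatever actually starts your guests goes where the comment is:

# Wait until the host kernel reports a fully initialised CRNG, then start guests
until dmesg | grep -q 'random: crng init done'; do sleep 1; done
# ... now launch the VMs (virsh start, kvm, etc.)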

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

Russ Allbery: Review: Aerial Magic Season 1

Wednesday 16th of January 2019 04:02:00 AM

Review: Aerial Magic Season 1, by walkingnorth

Series: Aerial Magic #1 Publisher: LINE WEBTOON Copyright: 2018 Format: Online graphic novel Pages: 156

Aerial Magic is a graphic novel published on the LINE WEBTOON platform by the same author as the wonderful Always Human, originally in weekly episodes. It is readable for free, starting with the prologue. I was going to wait until all seasons were complete and then review the entire work, like I did with Always Human, but apparently there are going to be five seasons and I don't want to wait that long. This is a review of the first season, which is now complete in 25 episodes plus a prologue.

As with Always Human, the pages metadata in the sidebar is a bit of a lie: a very rough guess on how many pages this would be if it were published as a traditional graphic novel (six times the number of episodes, since each episode seems a bit longer than in Always Human). A lot of the artwork is large panels, so it may be an underestimate. Consider it only a rough guide to how long it might take to read.

Wisteria Kemp is an apprentice witch. This is an unusual thing to be — not the witch part, which is very common in a society that appears to use magic in much the way that we use technology, but the apprentice part. Most people training for a career in magic go to university, but school doesn't agree with Wisteria. There are several reasons for that, but one is that she's textblind and relies on a familiar (a crow-like bird named Puppy) to read for her. Her dream is to be accredited to do aerial magic, but her high-school work was... not good, and she's very afraid she'll be sent home after her ten-day trial period.

Magister Cecily Moon owns a magical item repair shop in the large city of Vectum and agreed to take Wisteria on as an apprentice, something that most magisters no longer do. She's an outgoing woman with a rather suspicious seven-year-old, two other employees, and a warm heart. She doesn't seem to have the same pessimism Wisteria has about her future; she instead is more concerned with whether Wisteria will want to stay after her trial period. This doesn't reassure Wisteria, nor do her initial test exercises, all of which go poorly.

I found the beginning of this story a bit more painful than Always Human. Wisteria has such a deep crisis of self-confidence, and I found Cecily's lack of awareness of it quite frustrating. This is not unrealistic — Cecily is clearly as new to having an apprentice as Wisteria is to being one, and is struggling to calibrate her style — but it's somewhat hard reading since at least some of Wisteria's unhappiness is avoidable. I wish Cecily had shown a bit more awareness of how much harder she made things for Wisteria by not explaining more of what she was seeing. But it does set up a highly effective pivot in tone, and the last few episodes were truly lovely. Now I'm nearly as excited for more Aerial Magic as I would be for more Always Human.

walkingnorth's art style is much the same as that in Always Human, but with more large background panels showing the city of Vectum and the sky above it. Her faces are still exceptional: expressive, unique, and so very good at showing character emotion. She occasionally uses an exaggerated chibi style for some emotions, but I feel like she's leaning more on subtlety of expression in this series and doing a wonderful job with it. Wisteria's happy expressions are a delight to look at. The backgrounds are not generally that detailed, but I think they're better than Always Human. They feature a lot of beautiful sky, clouds, and sunrise and sunset moments, which are perfect for walkingnorth's pastel palette.

The magical system underlying this story doesn't appear in much detail, at least yet, but what is shown has an interesting animist feel and seems focused on the emotions and memories of objects. Spells appear to be standardized symbolism that is known to be effective, which makes magic something like cooking: most people use recipes that are known to work, but a recipe is not strictly required. I like the feel of it and the way that magic is woven into everyday life (personal broom transport is common), and am looking forward to learning more in future seasons.

As with Always Human, this is a world full of fundamentally good people. The conflict comes primarily from typical interpersonal conflicts and inner struggles rather than any true villain. Also as with Always Human, the world features a wide variety of unremarked family arrangements, although since it's not a romance the relationships aren't quite as central. It makes for relaxing and welcoming reading.

Also as in Always Human, each episode features its own soundtrack, composed by the author. I am again not reviewing those because I'm a poor music reviewer and because I tend to read online comics in places and at times where I don't want the audio, but if you like that sort of thing, the tracks I listened to were enjoyable, fit the emotions of the scene, and were unobtrusive to listen to while reading.

This is an online comic on a for-profit publishing platform, so you'll have to deal with some amount of JavaScript and modern web gunk. I at least (using up-to-date Chrome on Linux with UMatrix) had fewer technical problems with delayed and partly-loaded panels than I had with Always Human.

I didn't like this first season quite as well as Always Human, but that's a high bar, and it took some time for Always Human to build up to its emotional impact as well. What there is so far is a charming, gentle, and empathetic story, full of likable characters (even the ones who don't seem that likable at first) and a fascinating world background. This is an excellent start, and I will certainly be reading (and reviewing) later seasons as they're published.

walkingnorth has a Patreon, which, in addition to letting you support the artist directly, has various supporting material such as larger artwork and downloadable versions of the music.

Rating: 7 out of 10
