Planet Debian - https://planet.debian.org/

Holger Levsen: 20190716-wanna-work-on-lts

Tuesday 16th of July 2019 03:56:02 PM
Wanna work on Debian LTS (and get funded)?

If you are in Curitiba and are interested in working on Debian LTS (and getting paid for that work), please come and talk to me; Debian LTS is still looking for more contributors! Also, if you want a bigger challenge, extended LTS also needs more contributors, though I'd suggest you start with regular LTS.

On Thursday, July 25th, there will also be a talk titled "Debian LTS, the good, the bad and the better", where we plan to present what we think works nicely and what doesn't work so nicely yet, and where we also want to gather your wishes and requests.

If you cannot make it to Curitiba, there will be a video stream (and the possibility to ask questions via IRC), and you can always send me an email or ping me on IRC if you want to work on LTS.

Russ Allbery: DocKnot 3.01

Monday 15th of July 2019 04:15:00 AM

The last release of DocKnot failed a whole bunch of CPAN tests that didn't fail locally or on Travis-CI, so this release cleans that up and adds a few minor things to the dist command (following my conventions to run cppcheck and Valgrind tests). The test failures are moderately interesting corners of Perl module development that I hadn't thought about, so they seem worth blogging about.

First, the more prosaic one: as part of the tests of docknot dist, the test suite creates a new Git repository because the release process involves git archive and needs a repository to work from. I forgot to use git config to set user.email and user.name, so that broke on systems without Git global configuration. (This would have been caught by the Debian package testing, but sadly I forgot to add git to the build dependencies, so that test was being skipped.) I always get bitten by this each time I write a test suite that uses Git; someday I'll remember the first time.
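
As an illustration of the kind of fix involved (a generic sketch, not DocKnot's actual test code; the repository path and identity values are made up):

mkdir /tmp/test-repo && cd /tmp/test-repo
git init
# Set a repository-local identity so commits succeed even when ~/.gitconfig is absent.
git config user.name 'Test User'
git config user.email 'test@example.com'
git commit --allow-empty -m 'test commit'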

Second, the build system runs perl Build.PL to build a tiny test package using Module::Build, and it was using system Perl. Slaven Rezic pointed out that this fails if Module::Build isn't installed system-wide or if system Perl doesn't work for whatever reason. Using system Perl is correct for normal operation of docknot dist, but the test suite should use the same Perl version used to run the test suite. I added a new module constructor argument for this, and the test suite now passes in $^X for that argument.
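
For context, $^X is the standard Perl variable holding the name used to execute the running Perl interpreter (this is general Perl behaviour, not DocKnot-specific), so passing it along means the child build uses the same Perl as the test suite:

# Prints the path of the perl binary executing this one-liner, e.g. /usr/bin/perl.
perl -e 'print "$^X\n"'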

Finally, there was a more obscure problem on Windows: the contents of generated and expected test files didn't match because the generated file content was supposedly just the file name. I think I fixed this, although I don't have Windows on which to test. The root of the problem is another mistake I've made before with Perl: File::Temp->new() does not return a file name, but it returns an object that magically stringifies to the file name, so you can use it that way in many situations and it appears to magically work. However, on Windows, it was not working the way that it was on my Debian system. The solution was to explicitly call the filename method to get the actual file name and use it consistently everywhere; hopefully tests will now pass on Windows.
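
A quick way to see the difference between the stringified object and the explicit method (generic File::Temp behaviour, not the DocKnot test code):

# The object interpolates to the file name, but ->filename returns it explicitly.
perl -MFile::Temp -e 'my $t = File::Temp->new; print "object: $t\n"; print "filename: ", $t->filename, "\n"'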

You can get the latest version from CPAN or from the DocKnot distribution page. A Debian package is also available from my personal archive. I'll probably upload DocKnot to Debian proper during this release cycle, since it's gotten somewhat more mature, although I'd like to make some backward-incompatible changes and improve the documentation first.

François Marier: Installing Debian buster on a GnuBee PC 2

Sunday 14th of July 2019 10:30:00 PM

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick (a shell sketch of these steps follows).
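
A rough shell sketch of steps 1-4 (the download URL, USB device and mount point below are placeholders, not from the original instructions; substitute your own):

wget https://librecmc.example.org/path/librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_1.4.bin
mount /dev/sdb1 /mnt     # the vfat USB stick, assuming it shows up as /dev/sdb1
cp librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_1.4.bin /mnt/gnubee.bin
umount /mnt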

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.0 reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open the web-based admin panel at http://192.168.10.0.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the IPv4 address of the WAN port, since that will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port to your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device      Start    End        Sectors    Size    Type
/dev/sda1   2048     8390655    8388608    4G      Linux swap
/dev/sda2   8390656  234441614  226050959  107.8G  Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.
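
If you prefer to script that step rather than drive fdisk interactively, something along these lines should produce an equivalent GPT layout (a sketch only, assuming sfdisk is available on the device and that /dev/sda really is the disk you intend to wipe):

# WARNING: this replaces the existing partition table on /dev/sda.
sfdisk /dev/sda << 'EOF'
label: gpt
start=2048, size=8388608, type=S
type=L
EOF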

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, log in using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx    # type password set above
mkdir .ssh
vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor   # choose vim.basic
dpkg-reconfigure locales              # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Next steps

At this point, my GnuBee is running the latest version of Debian stable; however, there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel though this is being worked on by community members.

I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.

Benjamin Mako Hill: Hairdressers with Supposedly Funny Pun Names I’ve Visited Recently

Sunday 14th of July 2019 10:08:14 PM

Mika and I recently spent two weeks biking home to Seattle from our year in Palo Alto. The route was ~1400 kilometers and took us past 10 volcanoes and 4 hot springs.

Route of our bike trip from Davis, CA to Oregon City, OR. An elevation profile is also shown.

To my delight, the route also took us past at least 8 hairdressers with supposedly funny pun names! Plus two in Oakland on our way out.

As a result of this trip, I’ve now made 24 contributions to the Hairdressers with Supposedly Funny Pun Names Flickr group photo pool.

Daniel Silverstone: A quarter in review - Halfway to 2020

Sunday 14th of July 2019 03:54:00 PM
The 2019 plan - Second-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the second of those. The first can be found here.

1. Weight loss

So when I wrote in April, I was around 88.6kg and worried about how my body seemed to really like roughly 90kg. This is going to be a similar report. Despite managing to lose 10kg in the first quarter, the second quarter has been harder, and with me focussed on running rather than my full gym routine, loss has been less. I've recently started to push a bit lower though and I'm now around 83kg.

I could really shift my focus back to all-round gym exercise, but honestly I've been enjoying a lot of the spare time returned to me by switching back to my cycling and walking, plus now running a bit. I imagine as the weather returns to its more usual wet mess the gym will return to prominence for me, and with that maybe I'll shed a bit more of this weight.

I continue to give myself a solid "B" for this, though if I were generous, given everything else, I might consider a "B+".

2. Couch to 5k

Last time I wrote, I'd just managed a 5k run for the first time. Since then I completed the couch-to-5k programme and have now done eight parkruns. I missed one week due to awful awful weather, but otherwise I've managed to be consistent and attended one parkrun per week. They've all been at the same course apart from one which was in Southampton. This gives me a clean ability to compare runs.

My first parkrun was 30m32s, though I remain aware that the course at Platt Fields is a smidge under 5k really, and I was really pleased with that. However, as a colleague explained to me, "It never gets easier…" Each parkrun is just as hard, if not harder, than the previous one. However, to continue his quote, "…you just get faster", and I have. Since that first run, I have improved my personal record to 27m34s which is, to my mind at least, bloody brilliant. Even when this week I tried to force myself to go slower, aiming to pace out a 30m run, I ended up at 27m49s.

I am currently trying to convince myself that I can run a bit more slowly and thus increase my distance, but for now I think 5k is a stuck record for me. I'll continue to try and improve that time a little more.

I said last review that I'd be adjusting my goals in the light of how well I'd done with couch-2-5k at that point. Since I've now completed it, I'll be renaming this section the 'Fitness' section and hopefully next review I'll be able to report something other than running in it.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

I did a bunch more on NetSurf this quarter. We had an amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs. I'm very pleased with how I've done with that.

Rob and I failed to do much with the pub software, but Lars and I continue to work on the Fable project.

So over-all, this one doesn't get better than the "C" from last time - still satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before.

I remain doing "okay" here, and I want to be a little more positive than last review, so I'm upgrading to a "B".

5. Give back to the Rust community

My work with Rustup continues, though in the past month or so I've been pretty lax because I've had to travel a lot for work. I continue to be as heavily involved in Rust as I can be -- I've stepped up to the plate to lead the Rustup team, and that puts me into the Rust developer tools team proper. I attended a conference, in part to represent the Rust developer community, and I have some followup work on that which I still need to complete.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

Previously I gave myself an 'A' but thought I could manage an 'A+' if I tried harder. Since I've been a little lax recently I'm dropping myself to an 'A-'.

6. Be better at tidying up

Once again, I came out of the previous review fired up to tidy more. Once again, that energy ebbed after about a week. Every time I feel like I might have the mental space to begin building a cleaning habit, something comes along to knock the wind out of my sails. Sometimes that's a big work related thing, but sometimes it's something as small as "Our internet connection is broken, so suddenly instead of having time to clean, I have no time because it's broken and so I can't do anything, even things which don't need an internet connection."

This remains an "F" for fail, sadly.

7. Save up money for renovations

The savings process continues. I've not managed to put quite as much away in this quarter as I did the quarter before, but I have been doing as much as I can. I've finally consolidated most of my savings into one place which also makes them look a little healthier.

The renovations bills continue to loom, but we're doing well, so I think I get to keep the "A" here.

8. Go on a proper holiday

Well, I had that week "off" but ended up doing so much stuff that it doesn't count as much of a holiday. Rob is now in Japan, but I've not managed to take the time as a holiday because my main project at work needs me there since our project manager and his usual stand-in are both also away in Japan.

We have made a basic plan to take some time around the August Bank Holiday to perhaps visit family etc, so I'm going to upgrade us to "C+" since we're making inroads, even if we've not achieved a holiday yet.

Summary

Last quarter, my scores were B, A+, C, B-, A, F, A, C, which, if we ignore the F, is an average of A, though the F did ruin things a little.

This quarter I have a B+, A+, C, B, A-, F, A, C+, which, ignoring the F, is a little better, though still not great. I guess here's to another quarter.

Ben Hutchings: Talk: What goes into a Debian package?

Sunday 14th of July 2019 02:05:55 PM

Some months ago I gave a talk / live demo at work about how Debian source and binary packages are constructed.

Yesterday I repeated this talk (with minor updates) for the Chicago LUG. I had quite a small audience, but got some really good questions at the end. I have now put the notes up on my talks page.

No, I'm not in Chicago. This was a trial run of giving a talk remotely, which I'll also be doing for DebConf this year. I set up an RTMP server in the cloud (nginx) and ran OBS Studio on my laptop to capture and transmit video and audio. I'm generally very impressed with OBS Studio, although the X window capture source could do with improvement. I used the built-in camera and mic, but the mic picked up a fair amount of background noise (including fan noise, since the video encoding keeps the CPU fairly busy). I should probably switch to a wearable mic in future.
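
For anyone reproducing this kind of setup, one way to sanity-check the RTMP ingest separately from OBS is to push a local file at it with ffmpeg (the server URL and stream key here are made up, and the input is assumed to already be H.264/AAC so it can be passed through without re-encoding):

# Stream a test clip in real time to the RTMP endpoint without re-encoding.
ffmpeg -re -i test-clip.mp4 -c copy -f flv rtmp://stream.example.com/live/testkey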

Martin Pitt: Lightweight i3 developer desktop with OSTree and chroots

Sunday 14th of July 2019 12:00:00 AM
Introduction

I’ve always liked a clean, slim, lightweight, and robust OS on my laptop (which is my only PC) – I’ve been running the i3 window manager for years, with some custom configuration to enable the Fn keys and set up my preferred desktop session layout. Initially on Ubuntu, and for the last two and a half years under Fedora (since I moved to Red Hat). I started with a minimal server install and then had a post-install script that installed the packages that I need, restored my /etc files from git, and handled some other minor bits.

Jonathan Carter: My Debian 10 (buster) Report

Friday 12th of July 2019 04:58:38 PM

In the early hours of Sunday morning (my time), Debian 10 (buster) was released. It’s amazing to be a part of an organisation where so many people work so hard to pull together and make something like this happen. Creating and supporting a stable release can be tedious work, but it’s essential for any kind of large-scale or long-term deployments. I feel honored to have had a small part in this release.

Debian Live

My primary focus area for this release was to get Debian live images in a good shape. It’s not perfect yet, but I think we made some headway. The out-of-the-box experiences for the desktop environments on live images are better, and we added a new graphical installer that makes Debian easier to install for the average laptop/desktop user. For the bullseye release I intend to ramp up quality efforts and have a bunch of ideas to make that happen, but more on that another time.

Calamares installer on Cinnamon live image.

Other new stuff I’ve been working on in the Buster cycle

Gamemode

Gamemode is a library and tool that changes your computer’s settings for maximum performance when you launch a game. Some new games automatically invoke Gamemode when they’re launched, but for most games you have to do it manually; check their GitHub page for documentation.

Innocent de Marchi Packages

I was sad to learn about the passing of Innocent de Marchi, a math teacher who was also a Debian contributor and for whom I've sponsored a few packages before. I didn't know him personally but learned that he was really loved in his community. I'm continuing to maintain some of his packages that I also had an interest in:

  • calcoo – generic lightweight graphical calculator app that can be useful on desktop environments that don't have one
  • connectagram – a word unscrambling game that gets its words from wiktionary
  • fracplanet – fractal planet generator
  • fractalnow – fast, advanced fractal generator
  • gnubik – 3D Rubik’s cube game
  • tanglet – single player word finding game based on Boggle
  • tetzle – jigsaw puzzle game (was also Debian package of the Day #44)
  • xabacus – simulation of the ancient calculator

Powerline Goodies

I wrote a blog post on vim-airline and powerlevel9k shortly after packaging those: New powerline goodies in Debian.

Debian Desktop

I helped co-ordinate the artwork for the Buster release, although Laura Arjona did most of the heavy lifting on that. I updated some of the artwork in the desktop-base package and in debian-installer. Working on the artwork packages exposed me to some of their bugs, but not in time to fix them for buster, so that will be a goal for bullseye. I also packaged the font that’s widely used in the buster artwork, called Quicksand (Debian package: fonts-quicksand). This allows SVG versions of the artwork in the system to display with the correct font.

Bundlewrap

Bundlewrap is a configuration management system written in Python. If you’re familiar with bcfg2 and Ansible, the concepts in Bundlewrap will look very familiar to you. It’s not as featureful as either of those systems, but what it lacks in advanced features it more than makes up for in ease of use and how easy it is to learn. It’s immediately useful for the large amount of cases where you want to install some packages and manage some config files based on conditions with templates. For anything else you might need you can write small Python modules.

Catimg

catimg is a tool that converts jpeg, png, ico and gif files to terminal output. This was also Debian Package of the day #26.

Gnome Shell Extensions

  • gnome-shell-extension-dash-to-panel: dash-to-panel is an essential shell extension for me, and does more to make Gnome 3 feel like Gnome 2.x for me than the classic mode does. It’s the easiest way to get a nice single panel on the top of the screen that contains everything that’s useful.
  • gnome-shell-extension-hide-veth: If you use LXC or Docker (or similar), you’ll probably be somewhat annoyed at all the ‘veth’ interfaces you see in network manager. This extension will hide those from the GUI.
  • gnome-shell-extension-no-annoyance: No annoyance fixes something that should really be configurable in Gnome by default. It removes all those nasty “Window is ready” notifications that are intrusive and distracting.

Other

That’s a wrap for the new Debian packages I maintain in Buster. There’s a lot more that I’d like to talk about that happened during this cycle, like that crazy month when I ran for DPL! And also about DebConf stuff, but I’m all out of time, and on that note, I’m heading to DebCamp/DebConf in around 12 hours and look forward to seeing many of my Debian colleagues there :-)

Jonathan McDowell: Burn it all

Friday 12th of July 2019 11:17:06 AM

I am generally positive about my return to Northern Ireland, and decision to stay here. Things are much better than when I was growing up and there’s a lot more going on here these days. There’s an active tech scene and the quality of life is pretty decent. That said, this time of year is one that always dampens my optimism. TLDR: This post brings no joy. This is the darkest timeline.

First, we have the usual bonfire issues. I’m all for setting things on fire while having a drink, but when your bonfire is so big it leads to nearby flat residents being evacuated to a youth hostel for the night or you decide that adding 1800 tyres to your bonfire is a great idea, it’s time to question whether you’re celebrating your cultural identity while respecting those around you, or just a clampit (thanks, @Bolster). If you’re starting to displace people from their homes, or releasing lots of noxious fumes that are a risk to your health and that of your local community you need to take a hard look at the message you’re sending out.

Secondly, we have the House of Commons vote on Tuesday to amend the Northern Ireland (Executive Formation) Bill to require the government to bring forward legislation to legalise same-sex marriage and abortion in Northern Ireland. On the face of it this is a good thing; both are things the majority of the NI population want legalised and it’s an area of division between us and the rest of the UK (and, for that matter, Ireland). Dig deeper and it doesn’t tell a great story about the Northern Ireland Assembly. The bill is being brought in the first place because (at the time of writing) it’s been 907 days since Northern Ireland had a government. The current deadline for forming an executive is August 25th, or another election must be held. The bill extends this to October 21st, with an option to extend it further to January 13th. That’ll be 3 years since the assembly sat. That’s not what I voted for; I want my elected officials to actually do their jobs - I may not agree with all of their views, but it serves NI much more to have them turning up and making things happen than failing to do so. Especially during this time of uncertainty about borders and financial stability.

It’s also important to note that the amendments only kick in if an executive is not formed by October 21st - if there’s a functioning local government it’s expected to step in and enact the appropriate legislation to bring NI into compliance with its human rights obligations, as determined by the Supreme Court. It’s possible that this will provide some impetus to the DUP to re-form the assembly in NI. Equally it’s possible that it will make it less likely that Sinn Fein will rush to re-form it, as both amendments cover issues they have tried to resolve in the past.

Equally while I’m grateful to Stella Creasy and Conor McGinn for proposing these amendments, it’s a rare example of Westminster appearing to care about Northern Ireland at all. The ‘backstop’ has been bandied about as a political football, with more regard paid to how many points Tory leadership contenders can score off each other than what the real impact will be upon the people in Northern Ireland. It’s the most attention anyone has paid to us since the Good Friday Agreement, but it’s not exactly the right sort of attention.

I don’t know what the answer is. Since the GFA politics in Northern Ireland has mostly just got more polarised rather than us finding common ground. The most recent EU elections returned an Alliance MEP, Naomi Long, for the first time, which is perhaps some sign of a move to non-sectarian politics, but the real test would be what a new Assembly election would look like. I don’t hold out any hope that we’d get a different set of parties in power.

Still, I suppose at least it’s a public holiday today. Here’s hoping the pub is open for lunch.

Markus Koschany: My Free Software Activities in June 2019

Thursday 11th of July 2019 08:32:12 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

First of all I want to thank Debian’s Release Team. Whenever there was something to unblock for Buster, I always got feedback within hours and in almost all cases the package could just migrate to testing. Good communication and clear rules helped a lot to make the whole freeze a great experience.

Debian Games
  • I reviewed and sponsored a couple of packages again this month.
  • Reiner Herrmann provided a complete overhaul of xbill, so that we all can fight those Wingdows Viruses again.
  • He also prepared a new upstream release of Supertuxkart, which is currently sitting in experimental but will hopefully be uploaded to unstable within the next days.
  • Bernhard Übelacker fixed two annoying bugs in Freeorion (#930417) and Warzone2100 (#930942). Unfortunately it was too late to include the fixes in Debian 10, but I will prepare an update for the next point release.
  • Well, the freeze is over now (hooray) and I intend to upgrade a couple of games in the warm (if you live in the northern hemisphere) month of July again.
Debian Java
  • I prepared another security update for jackson-databind to fix CVE-2019-12814 and CVE-2019-12384 (#930750).
  • I worked on a security update for Tomcat 8 but have not finished it yet.
Debian LTS

This was my fortieth month as a paid contributor and I have been paid to work 17 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 10.06.2019 until 16.06.2019 and from 24.06.2019 until 30.06.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in wordpress, ansible, libqb, radare2, lemonldap-ng, irssi, libapache2-mod-auth-mellon and openjpeg2.
  • DLA-1827-1. Issued a security update for gvfs fixing 1 CVE.
  • DLA-1831-1. Issued a security update for jackson-databind fixing 2 CVE.
  • DLA-1822-1. Issued a security update for php-horde-form fixing 1 CVE.
  • DLA-1839-1. Issued a security update for expat fixing 1 CVE.
  • DLA-1845-1. Issued a security update for dosbox fixing 2 CVE.
  • DLA-1846-1. Issued a security update for unzip fixing 1 CVE.
  • DLA-1851-1. Issued a security update for openjpeg2 fixing 2 CVE.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my thirteenth month and I have been paid to work 22 hours on ELTS (15 hours were allocated + 7 hours from last month).

  • ELA-133-1. Issued a security update for linux fixing 9 CVE.
  • ELA-137-1. Issued a security update for libvirt fixing 1 CVE.
  • ELA-139-1. Issued a security update for bash fixing 1 CVE.
  • ELA-140-1. Issued a security update for glib2.0 fixing 3 CVE.
  • ELA-141-1. Issued a security update for unzip fixing 1 CVE.
  • ELA-142-1. Issued a security update for libxslt fixing 2 CVE.

Thanks for reading and see you next time.

Vincent Sanders: We can make it better than it was. Better...stronger...faster.

Thursday 11th of July 2019 05:15:09 PM
It is not a novel observation that computers have become so powerful that a reasonably recent system has a relatively long life before obsolescence. This is in stark contrast to the period between the nineties and the teens where it was not uncommon for users with even moderate needs from their computers to upgrade every few years.

This upgrade cycle was mainly driven by huge advances in processing power, memory capacity and ballooning data storage capability. Of course the software engineers used up more and more of the available resources and with each new release ensured users needed to update to have a reasonable experience.
And then sometime in the early teens this cycle slowed almost as quickly as it had begun as systems had become "good enough". I experienced this at a time I was relocating for a new job and had moved most of my computer use to my laptop which was just as powerful as my desktop but was far more flexible.

As a software engineer I used to have a pretty good computer for myself but I was never prepared to spend the money on "top of the range" equipment because it would always be obsolete and generally I had access to much more powerful servers if I needed more resources for a specific task.
To illustrate, the system specification of my desktop PC at the opening of the millennium was:
  • Single core Pentium 3 running at 500MHz
  • Socket 370 motherboard with 100MHz Front Side Bus
  • 128 Megabytes of memory
  • A 25 Gigabyte Deskstar hard drive
  • 150MHz TNT 2 graphics card
  • 10 Megabit network card
  • Unbranded 150W PSU
But by 2013 the specification had become:
  • Quad core i5-3330S Processor running at 2700MHz
  • FCLGA1155 motherboard running memory at 1333MHz
  • 8 Gigabytes of memory
  • Terabyte HGST hard drive
  • 1,050MHz Integrated graphics
  • Integrated Intel Gigabit network
  • OCZ 500W 80+ PSU
The performance change between these systems was more than tenfold in fourteen years with an upgrade roughly once every couple of years.

I recently started using that system again in my home office mainly for Computer Aided Design (CAD), Computer Aided Manufacture (CAM) and Electronic Design Automation (EDA). The one addition was to add a widescreen monitor as there was not enough physical space for my usual dual display setup.
To my surprise I increasingly returned to this setup for programming tasks. Firstly, being at my desk acts as an indicator to family members that I am concentrating, whereas the laptop no longer had that effect. Secondly, I really like the ultra-wide display for coding; it has become my preferred display, and I had been saving for a UWQHD monitor.
Alas, last month the system started freezing. Sometimes it would be stable for several days and then, without warning, the mouse pointer would stop, my music would cease and a power cycle was required. I tried several things to rectify the situation: replacing the thermal compound, replacing the CPU cooler and trying different memory, all to no avail.
As fixing the system cheaply appeared unlikely I began looking for a replacement and was immediately troubled by the size of the task. Somewhere in the last six years, while I was not paying attention, the world had moved on; after a great deal of research I managed to come to an answer.
AMD have recently staged something of a comeback with their Ryzen processors after almost a decade of very poor offerings when compared to Intel. The value for money when considering the processor and motherboard combination is currently very much weighted towards AMD.
My timing also seems fortuitous as the new Ryzen 2 processors have just been announced which has resulted in the current generation being available at a substantial discount. I was also encouraged to see that the new processors use the same AM4 socket and are supported by the current motherboards allowing for future upgrades if required.

I purchased a complete new system for under five hundred pounds, comprising:
  • Hex core Ryzen 5 2600X Processor 3600MHz
  • MSI B450 TOMAHAWK AMD Socket AM4 Motherboard
  • 32 Gigabytes of PC3200 DDR4 memory
  • Aero Cool Project 7 P650 80+ platinum 650W Modular PSU
  • Integrated RTL Gigabit networking
  • Lite-On iHAS124 DVD Writer Optical Drive
  • Corsair CC-9011077-WW Carbide Series 100R Silent Mid-Tower ATX Computer Case
to which I added some recycled parts:
  • 250 Gigabyte SSD from laptop upgrade
  • GeForce GT 640 from a friend
I installed a fresh copy of Debian and all my CAD/CAM applications and have been using the system for a couple of weeks with no obvious issues.

An example of the performance difference is compiling NetSurf: a clean build with an empty ccache used to take 36 seconds and now takes 16, which is a nice improvement. However, a clean build with the results cached has gone from 6 seconds to 3, which is far less noticeable, and during development a normal edit, build, debug cycle affecting only a small number of files has gone from 400 milliseconds to 200, which simply feels instant in both cases.
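
For reference, the comparison is the sort of thing that can be reproduced along these lines (a rough sketch only; the job count and use of plain make are guesses rather than the exact NetSurf invocation):

ccache -C               # empty the compiler cache for a genuinely cold build
time make -j$(nproc)    # clean build, empty ccache
make clean
time make -j$(nproc)    # clean build, warm ccache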

My conclusion is that the new system is completely stable but that I have gained very little in common usage. Objectively the system is over twice as fast as its predecessor but aside from compiling large programs or rendering huge CAD drawings this performance is not utilised. Given this I anticipate this system will remain unchanged until it starts failing and I can only hope that will be at least another six years away.

Arturo Borrero González: Netfilter workshop 2019 Malaga summary

Thursday 11th of July 2019 12:00:00 PM

This week we had the annual Netfilter Workshop. This time the venue was in Malaga (Spain). We had the hotel right in downtown Malaga and the meeting room was at the University ETSII Malaga. We had plenty of talks, sessions, discussions and debates, and I will try to summarize in this post what it was about.

Florian Westphal, Linux kernel hacker, Netfilter coreteam member and engineer from Red Hat, started with a talk related to some work being done in the core of the Netfilter code in the kernel to convert packet processing to lists. He shared an overview of current problems and challenges. Processing in a list rather than per packet seems to have several benefits: code can be smarter and faster, so this seems like a good improvement. On the other hand, Florian thinks some of the pain of refactoring all the code may not be worth it. Other approaches may be considered to introduce even more fast forwarding paths (apart from the flow table mechanism which is already available).

Florian also followed up with the next topic: testing. We are starting to have a lot of duplicated code to do testing. Pablo's suggestion is to introduce some dedicated tools to ease maintenance and testing itself. Special mentions to nfqueue and tproxy, 2 mechanisms that require quite a bit of code to be well tested (and could be hard to set up anyway).

Ahmed Abdelsalam, engineer from Cisco, gave a talk on SRv6 Network programming. This protocol makes it possible to simplify some interesting use cases from the network engineering point of view. For example, SRv6 aims to eliminate some tunneling and overlay protocols (VXLAN and friends), and increase native multi-tenancy support in IPv6 networks. Network Services Chaining is one of the main use cases, which is really interesting in cloud environments. He mentioned that some Kubernetes CNI mechanisms are going to implement SRv6 soon. This protocol looks interesting not only for cloud use cases, but also from the general network engineering point of view. By the way, Ahmed shared some really interesting numbers and graphs regarding global IPv6 adoption. Ahmed shared the work that has been done in Linux in general and in nftables in particular to support such setups. I had the opportunity to talk more personally with Ahmed during the workshop to learn more about this mechanism and the different use cases and applications it has.

Fernando, GSoC student, gave us an overview of the OSF functionality of nftables to identify different operating systems from inside the ruleset. He shared some of the improvements he has been working on, and some of them are great, like version matching and wildcards.

Brett, engineer from Untangle, shared some plans to use a new nftables expression (nft_dict) to arbitrarily match on metadata. The proposed design is interesting because it covers some use cases from new perspectives. This triggered a debate on different design approaches and solutions to the issues presented.

Next day, Pablo Neira, head of the Netfilter project, started by opening a debate about extra features for nftables, like the ones provided via xtables-addons for iptables. The first one we evaluated was GeoIP. I suggested having some generic infrastructure to be able to write/query external metadata from nftables, given we have more and more use cases looking for this (OSF, the dict expression, GeoIP). Other exotic extensions were discussed, like TARPIT, DELUDE, DHCPMAC, DNETMAP, ECHO, fuzzy, gradm, iface, ipp2p, etc.

A talk on connection tracking support for the linux bridge followed, led by Pablo. A status update on latest work was shared, and some debate happened regarding different use cases for ebtables & nftables.

Next topic was a complex one with no easy solutions: hosting of the Netfilter project infrastructure: git repositories, mailing lists, web pages, wiki deployments, bugzilla, etc. Right now the project has a couple of physical servers housed in a datacenter in Seville. But nobody has time to properly maintain them, upgrade them, and such. Also, part of our infra is getting old, for example the webpage. Some other stuff is mostly unmaintained, like project twitter accounts. Nobody actually has time to keep things updated, and this is probably the base problem. Many options were considered, including moving to github, gitlab, or other hosting providers.

After lunch, Pablo followed up with a status update on hardware flow offload capabilities for nftables. He started with an overview of the current status of ethtool_rx and tc offloads, capabilities and limitations. It should be possible for most commodity hardware to support some variable amount of offload capabilities, but apparently the code was not in very good shape. The new flow block API should improve this situation, while also giving support for nftables offload. There is a related article in LWN.

Next talk was by Phil, engineer at Red Hat. He commented on user-defined strings in nftables, which presents some challenges. Some debate happened, mostly to get to an agreement on how to proceed.

Next day, Phil was the one to continue with the workshop talks. This time the talk was about sharing his TODO list for iptables-nft, with a presentation and discussion of planned work. This triggered a discussion on how to handle certain bugs in Debian Buster, which have a lot of patch dependencies (so we cannot simply cherry-pick a single patch for stable). It turns out I maintain most of the Debian Netfilter packages, and Sebastian Delafond, who is also a Debian Developer, was attending the workshop too. We provided some Debian-side input on how to better proceed with fixes for specific bugs in Debian. Phil continued by pointing out several improvements that we require in nftables in order to support some rather exotic use cases in both iptables-nft and ebtables-nft.

Yi-Hung Wei, an engineer working on OpenVSwitch, shared some interesting features related to using the conntrack engine in certain scenarios. OVS is really useful in cloud environments. Specifically, the open discussion was around zone-based timeout policy support for other Netfilter use cases. It was pointed out by Pablo that nftables already supports this. By the way, the Wikimedia Cloud Services team plans to use OVS in the near future by means of Neutron (a VXLAN+OVS setup).

Phil gave another talk related to nftables undefined behaviour situations. He has been working lately on polishing the last gaps between the -legacy and -nft flavors of iptables and friends. Mostly what we have yet to solve are some corner cases, and also some weird ICMP situations. Thanks to Phil for taking care of this. Actually, Phil has been contributing a lot to the Netfilter project in the past few years.

Stephen, engineer from secunet, followed up after lunch to bring up a couple of topics about improvements to the kernel datapath using XDP. Also, he commented on partitioning the system into control and dataplane CPUs. The nftables flow table infra is doing exactly this, as pointed out by Florian.

Florian continued with some open-for-discussion topics for pending features in nftables. It looks like every day we have more requests for more different setups and use cases with nftables. We need to define use cases as well as possible, and also try to avoid reinventing the wheel for some stuff.

Laura, engineer from Zevenet, followed up with a really interesting talk on load balancing and clustering using nftables. The amount of features and improvements added to nftlb since last year is amazing: stateless DNAT topologies, L7 helpers support, more topologies for virtual services and backends, improvements for affinities, security policies, different clustering architectures, etc. We had an interesting conversation about how we integrate with etcd at the Wikimedia Foundation for sharing information between load balancers and for pooling/depooling backends. They are also spearheading a proposal to include support for nftables into Kubernetes kube-proxy.

Abdessamad El Abbassi, also an engineer from Zevenet, shared the project that this company is developing to create an nft-based L7 proxy capable of offloading. They showed some metrics in which this new L7 proxy outperforms HAProxy for some setups. Quite interesting. Also some debate happened around SSL termination and how to better handle that situation.

That very afternoon the core team of the Netfilter project had a meeting in which some internal topics were discussed. Among other things, we decided to invite Phil Sutter to join the Netfilter coreteam.

I really enjoyed this round of Netfilter workshop. Pretty much enjoyed the time with all the folks, old friends and new ones.

Wouter Verhelst: DebConf Video player

Thursday 11th of July 2019 10:14:46 AM

Last weekend, I sat down to learn a bit more about angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly more easy to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript that you want to target, it compiles to something that should work on almost every browser (that is, if it doesn't, in most cases the fix is to just tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I think it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on the last few years (or so). It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

Steve Kemp: Building a computer - part 1

Thursday 11th of July 2019 10:01:00 AM

I've been tinkering with hardware for a couple of years now, most of this is trivial stuff if I'm honest, for example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submit readings to an MQ bus.

Off-hand I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio-receiver to an ESP8266 device, then had to reverse engineer the protocol of the device I wanted to listen for. I recorded a radio-burst using an SDR dongle on my laptop, broke the transmission into 1s and 0s manually, worked out the payload and then ported that code to the ESP8266 device.

Anyway I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best: the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC, later I moved on to assembly language mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video-games so often these days is because I'm just not very good without cheating ;)

Anyway the Z80 is a reasonably simple processor, available in a 40PIN DIP format. There are the obvious connectors for power, ground, and a clock-source to make the thing tick. After that there are pins for the address-bus, and pins for the data-bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, because I have no ROM, because I have no keyboard I'm going to .. fake it.

The Z80 has 40 pins, of which I reckon we need to cable up over half. Only the arduino mega has enough pins for that, but I think if I use a Mega I can wire it to the Z80 then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The arduino will monitor the address-bus
    • When the Z80 makes a request to read the RAM at address 0x0000 it will return something from its memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff it will store it away in its memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short the Arduino will run a sketch with a 1024 byte array, which the Z80 will believe is its memory. Via the serial console I can read/write to that RAM, or have the contents hardcoded.

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway I've ordered a bunch of Z80 chips, and an Arduino mega (since I own only one Arduino, I moved on to ESP8266 devices pretty quickly), so once it arrives I'll document the process further.

Once it works I'll need to slowly remove the arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the arduino over time with standalone stuff.

The end result? I guess I have no illusions I can connect a full-sized keyboard to the chip, and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly, and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

Sven Hoexter: Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

Wednesday 10th of July 2019 02:31:27 PM

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution), which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds and the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm
$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!

Jonathan Dowland: Bose on-ear wireless headphones

Wednesday 10th of July 2019 09:27:47 AM

Azoychka modelling the headphones

Earlier this year, and after about five years, I've had to accept that my beloved AKG K451 fold-able headphones have finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that got jammed in the jack, most likely snapped from one of several headphone wires I'd gone through.

The K451's were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but the difference was close, and the portability aspect gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not a lot of on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews for their bigger brother (the 1000 series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, but the circumstances were very poor: a noisy store, the headphones tethered to a security frame on a very short leash, so I had to stoop to put them on; no ability to try my own music through the headset. The headset itself seemed poorly constructed, with ill-fitting hard plastic that made it rattle when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, and the only criticism I have of the Bose headphones is I can get some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances where I used wired headphones, is all the situations in which I can use them where I wouldn't have bothered before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

Eddy Petrișor: Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

Wednesday 10th of July 2019 12:03:45 AM
I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}

The initial skeleton did not contain the derive Debug attribute and the capacity field; I added them myself.

Now I am trying to understand what needs to happens behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}

I was really struggling to understand what the returned iterator type should be in my case, since, obviously, std::vec is out because a) I am trying to do a no_std implementation of b) something that should look a little like std::vec.

That was until I found this wonderful example of a custom type that does not rely on any already-implemented Iterator, but instead defines the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}

The part defining the separate PixelIntoIterator struct and its Iterator impl was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that once I had only one new thing to deal with - the generics (luckily, the lifetime part seemed to simply be part of the generic machinery) - everything was easier to navigate.
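
For comparison, here is one way the whole thing can be made to compile under no_std: require T: Clone and clone elements out of the backing slice, which sidesteps the &'a mut T question entirely. This is only a sketch - it drops the capacity field for brevity and is not necessarily what the class intends:

#![no_std]

/// The slice-backed vector from the top of this post (capacity omitted:
/// the backing slice's length already bounds the vector).
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

/// By-value iterator: consumes the StackVec and clones elements out of
/// the backing slice, front to back.
pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, T> {
    type Item = T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        if self.index < self.stackvec.len {
            let item = self.stackvec.storage[self.index].clone();
            self.index += 1;
            Some(item)
        } else {
            None
        }
    }
}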


Still, the fact that there are so many new things at once, one of them being lifetimes - which, as @oli_obk put it, cannot be taught, only experienced - makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing "Deref for StackVec" for the same reasons.

I think I am experiencing first-hand what Oliver Scherer was saying about how big info-dumps right at the beginning are not the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should be the signature of the impl? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}

Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it is, at least to me at this point, far from clear. Because of that I am also unsure what the implementation should even look like.
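
For what it's worth, one plausible answer - and the one std::Vec itself uses - is to deref to the slice of occupied elements, i.e. Target = [T]. A sketch, assuming the storage and len fields from the StackVec definition at the top of this post:

use core::ops::Deref;

impl<'a, T: 'a> Deref for StackVec<'a, T> {
    // Deref to the occupied prefix of the backing slice, so that
    // slice methods (len, iter, indexing, ...) also work on a StackVec.
    type Target = [T];

    fn deref(&self) -> &Self::Target {
        &self.storage[..self.len]
    }
}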

I don't know what I'm doing, but I hope things will become clear with more exercise.

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Tuesday 9th of July 2019 09:15:21 PM
Zoom had a vulnerability that allowed users on macOS to be connected to a video conference, with their webcam active, simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


Sean Whitton: Upload to Debian with just 'git tag' and 'git push'

Tuesday 9th of July 2019 08:49:41 PM

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring,[1] produces a .dsc and .changes, signs these, and uploads the result to ftp-master.[2]

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.[3]

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, only in Debian unstable at the time of writing. However, the .debs will install on stretch & buster. And I’ll put them in buster-backports once that opens.)

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options, and the brief sketch after this list.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.
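
To make step 2 above concrete, the two git debpush invocations look roughly like this. Treat --quilt=<mode> as a placeholder rather than a verified spelling: check git-debpush(1) for the exact option name and the supported modes.

    % git debpush                   # native package, or quilt mode already recorded in an earlier tag
    % git debpush --quilt=<mode>    # first upload of a non-native package; see git-debpush(1) for modes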

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.

  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

Daniel Lange: Cleaning a broken GNUpg (gpg) key

Tuesday 9th of July 2019 05:44:32 PM

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely usable 3835-byte file containing the cleaned public key. If you supply a keyring instead of --no-default-keyring, it will also keep the non-self signatures that are useful to you (as you apparently know the signing party).
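
For example, the same clean against your default keyring should simply be (identical edit-key subcommands, minus the keyring juggling):

$ gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit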

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to also clean the keyservers' filesystems and the "message board" keys that have been pushed over the PGP keyserver network, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Update

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted-together fixes:

* gpg: Ignore all key-signatures received from keyservers. This change is required to mitigate a DoS due to keys flooded with faked key-signatures. The old behaviour can be achieved by adding keyserver-options no-self-sigs-only,no-import-clean to your gpg.conf. [#4607]
* gpg: If an imported keyblocks is too large to be stored in the keybox (pubring.kbx) do not error out but fallback to an import using the options "self-sigs-only,import-clean". [#4591]
* gpg: New command --locate-external-key which can be used to refresh keys from the Web Key Directory or via other methods configured with --auto-key-locate.
* gpg: New import option "self-sigs-only".
* gpg: In --auto-key-retrieve prefer WKD over keyservers. [#4595]
* dirmngr: Support the "openpgpkey" subdomain feature from draft-koch-openpgp-webkey-service-07. [#4590].
* dirmngr: Add an exception for the "openpgpkey" subdomain to the CSRF protection. [#4603]
* dirmngr: Fix endless loop due to http errors 503 and 504. [#4600]
* dirmngr: Fix TLS bug during redirection of HKP requests. [#4566]
* gpgconf: Fix a race condition when killing components. [#4577]
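
For instance, restoring the old keyserver behaviour described in the first item above amounts to a single line in ~/.gnupg/gpg.conf:

keyserver-options no-self-sigs-only,no-import-clean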

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised by what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them, bypassing the SKS keyserver network. NB: I have not tested that functionality, so test before deploying.

More in Tux Machines

8 Top Ubuntu server Web GUI Management Panels

Ubuntu Server's command-line-only interface can feel a little strange to newcomers who have never used it before. If you are new to running an Ubuntu Linux server, whether on local hardware or cloud hosting, and are planning to install a Linux desktop graphical environment (GUI) on top of it, the recommendation is: don't, unless you have hardware that can comfortably support it. Instead, consider one of the free and open-source Ubuntu server web GUI management panels. A desktop environment might be defensible for a local server, but on a cloud-hosted server it rarely is: Ubuntu and other Linux server operating systems are built to run on modest hardware resources, so even old computer or server hardware can handle them easily, whereas a GUI demands more RAM and disk space. Read more

Ubuntu 18.10 Cosmic Cuttlefish reaches end of life on Thursday, upgrade now

Canonical announced earlier this month that Ubuntu 18.10 Cosmic Cuttlefish will be reaching end-of-life status this Thursday, making now the ideal time to upgrade to a later version. As with all non-Long Term Support (LTS) releases, 18.10 had nine months of support following its release last October. When distributions reach their end-of-life stage, they no longer receive security updates. While you may be relatively safe at first, the longer you keep running an unpatched system, the more likely it is that your system will become compromised, putting your data at risk. If you'd like to move on from Ubuntu 18.10, you've got two options: you can either perform a clean install of a more up-to-date version of Ubuntu or you can do an in-place upgrade. Read more