Planet Debian - https://planet.debian.org/

Elana Hashman: My favourite bash alias for git

Saturday 3rd of August 2019 04:00:00 AM

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
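For instance, a minimal sketch of fetching that pre-merged ref, assuming the same origin remote and pull request #123 used above:

$ git fetch origin pull/123/merge   # GitHub-specific ref: the PR already merged onto master
$ git checkout FETCH_HEAD           # inspect or run CI against the merged tree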

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
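In the spirit of the pull() helper above, here is a rough sketch of a GitLab equivalent (the mr name is my own choice; like pull, it leaves you on a detached HEAD instead of creating a branch):

mr() {
    # fetch merge request $2 from remote $1 and check out the result
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}

Running mr origin 17 would then check out merge request !17 from the origin remote.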

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

Sven Hoexter: From 30 to 230 docker containers per host

Friday 2nd of August 2019 02:44:25 PM

I could not find much information on the interwebs about how many containers you can run per host. So here are my numbers and the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled on HPE DL360 Gen10 servers with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports, at that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. The first occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were executed. But the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with PostgreSQL running on a host with THP (Transparent Huge Pages) enabled. So a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stuck with that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (available since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaims and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
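For illustration, a minimal sysfsutils snippet that should persist the setting across reboots (the file name is an arbitrary choice; the attribute path is relative to /sys):

# /etc/sysfs.d/thp-defrag.conf -- read by the sysfsutils service at boot
kernel/mm/transparent_hugepage/defrag = defer+madvise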

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.
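A minimal sketch of persisting that tunable with a sysctl drop-in (file name chosen arbitrarily); sysctl --system applies it without a reboot:

# /etc/sysctl.d/90-conntrack.conf -- loaded by procps/systemd-sysctl at boot
net.netfilter.nf_conntrack_max = 524288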

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue keeping track of ports in use, and we already see collisions appear regularly. Very unscientifically we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we crossed the 100 containers per host mark. The error message we saw was hard to understand, though:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source and understanding that it tries to read the stdout of some other process, which sounded roughly like something that would have to open a file at some point.
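For completeness, a sketch of where those directives could live as a systemd drop-in instead of a modified unit file (the drop-in path assumes the service is called nomad.service):

# /etc/systemd/system/nomad.service.d/limits.conf
[Service]
LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

After adding it, a systemctl daemon-reload followed by a restart of the service picks up the new limits.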

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and nothing bound to a session via a PAM module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in Go. So we have a lot of threads, which on Linux are represented as "lightweight processes" (LWP), and every LWP takes up a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768 we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual limit on 64-bit systems is 2^22 (4194304) according to man 5 proc.
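To get a feeling for how close a host is to that ceiling, and to persist the higher value, something along these lines works (the numbers are just the ones from above):

# count all tasks (processes plus threads/LWPs) and compare with the current limit
ps -eL --no-headers | wc -l
cat /proc/sys/kernel/pid_max

# raise the limit now and keep it across reboots
sysctl -w kernel.pid_max=500000
echo 'kernel.pid_max = 500000' > /etc/sysctl.d/91-pid-max.conf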

Vincent Bernat: Securing BGP on the host with origin validation

Friday 2nd of August 2019 09:16:31 AM

An increasingly popular design for a datacenter network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate.1 Take a look at “L3 routing to the hypervisor with BGP” for a usage example.

BGP on the host with a spine-leaf IP fabric. A BGP session is established over each link and each host advertises its own IP prefixes.

While routing on the host eliminates the security problems related to Ethernet networks, a server may announce any IP prefix. In the above picture, two of them are announcing 2001:db8:cc::/64. This could be a legit use of anycast or a prefix hijack. BGP offers several solutions to improve this aspect and one of them is to reuse the features around the RPKI.

Short introduction to the RPKI

On the Internet, BGP is mostly relying on trust. This contributes to various incidents due to operator errors, like the one that affected Cloudflare a few months ago, or to malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.

IP addresses are allocated by five Regional Internet Registries (RIR). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:2

$ bgpq3 -l v6-IMPORT-APPLE -6 -R 48 -m 48 -A -J -E AS-APPLE
policy-options {
 policy-statement v6-IMPORT-APPLE {
replace:
  from {
    route-filter 2403:300::/32 upto /48;
    route-filter 2620:0:1b00::/47 prefix-length-range /48-/48;
    route-filter 2620:0:1b02::/48 exact;
    route-filter 2620:0:1b04::/47 prefix-length-range /48-/48;
    route-filter 2620:149::/32 upto /48;
    route-filter 2a01:b740::/32 upto /48;
    route-filter 2a01:b747::/32 upto /48;
  }
 }
}

The RPKI (RFC 6480) adds public-key cryptography on top of it to sign the authorization for an AS to be the origin of an IP prefix. Such record is a Route Origination Authorization (ROA). You can browse the databases of these ROAs through the RIPE’s RPKI Validator instance:

RPKI validator shows one ROA for 85.190.88.0/21

BGP daemons do not have to download the databases or to check digital signatures to validate the received prefixes. Instead, they offload these tasks to a local RPKI validator implementing the “RPKI-to-Router Protocol” (RTR, RFC 6810).

For more details, have a look at “RPKI and BGP: our path to securing Internet Routing.”

Using origin validation in the datacenter

While it is possible to create our own RPKI for use inside the datacenter, we can take a shortcut and use a validator implementing RTR, like GoRTR, and accepting another source of truth. Let’s work on the following topology:

BGP on the host with prefix validation using RTR. Each server has its own AS number. The leaf routers establish RTR sessions to the validators.

We assume we have a place to maintain a mapping between the private AS numbers used by each host and the allowed prefixes:3

ASN       Allowed prefixes
AS 65005  2001:db8:aa::/64
AS 65006  2001:db8:bb::/64, 2001:db8:11::/64
AS 65007  2001:db8:cc::/64
AS 65008  2001:db8:dd::/64
AS 65009  2001:db8:ee::/64, 2001:db8:11::/64
AS 65010  2001:db8:ff::/64

From this table, we build a JSON file for GoRTR, assuming each host can announce the provided prefixes or longer ones (like 2001:db8:aa::42:d9ff:fefc:287a/128 for AS 65005):

{ "roas": [ { "prefix": "2001:db8:aa::/64", "maxLength": 128, "asn": "AS65005" }, { "…": "…" }, { "prefix": "2001:db8:ff::/64", "maxLength": 128, "asn": "AS65010" }, { "prefix": "2001:db8:11::/64", "maxLength": 128, "asn": "AS65006" }, { "prefix": "2001:db8:11::/64", "maxLength": 128, "asn": "AS65009" } ] }

This file is deployed to all validators and served by a web server. GoRTR is configured to fetch it and update it every 10 minutes:

$ gortr -refresh=600 \
        -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

The refresh time could be lowered but GoRTR can be notified of an update using the SIGHUP signal. Clients are immediately notified of the change.
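For example, after publishing a new rpki.json you can poke the daemon instead of waiting for the next refresh (assuming a single gortr process on the validator):

pkill -HUP gortr   # tell GoRTR to reload its cache and notify RTR clients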

The next step is to configure the leaf routers to validate the received prefixes using the farm of validators. Most vendors support RTR:

Platform       Over TCP?  Over SSH?
Juniper JunOS  ✔️          ❌
Cisco IOS XR   ✔️          ✔️
Cisco IOS XE   ✔️          ❌
Cisco IOS      ✔️          ❌
Arista EOS     ❌          ❌
BIRD           ✔️          ✔️
FRR            ✔️          ✔️
GoBGP          ✔️          ❌

Configuring JunOS

JunOS only supports plain-text TCP. First, let’s configure the connections to the validation servers:

routing-options {
  validation {
    group RPKI {
      session validator1 {
        hold-time 60;         # session is considered down after 1 minute
        record-lifetime 3600; # cache is kept for 1 hour
        refresh-time 30;      # cache is refreshed every 30 seconds
        port 8282;
      }
      session validator2 { /* OMITTED */ }
      session validator3 { /* OMITTED */ }
    }
  }
}

By default, at most two sessions are randomly established at the same time. This provides a good way to load-balance them among the validators while maintaining good availability. The second step is to define the policy for route validation:

policy-options {
  policy-statement ACCEPT-VALID {
    term valid {
      from {
        protocol bgp;
        validation-database valid;
      }
      then {
        validation-state valid;
        accept;
      }
    }
    term invalid {
      from {
        protocol bgp;
        validation-database invalid;
      }
      then {
        validation-state invalid;
        reject;
      }
    }
  }
  policy-statement REJECT-ALL {
    then reject;
  }
}

The policy statement ACCEPT-VALID turns the validation state of a prefix from unknown to valid if the ROA database says it is valid. It also accepts the route. If the prefix is invalid, the prefix is marked as such and rejected. We have also prepared a REJECT-ALL statement to reject everything else, notably unknown prefixes.

A ROA only certifies the origin of a prefix. A malicious actor can therefore prepend the expected AS number to the AS path to circumvent the validation. For example, AS 65007 could announce 2001:db8:dd::/64, a prefix allocated to AS 65008, by advertising it with the AS path 65007 65008. To avoid that, we define an additional policy statement to reject AS paths with more than one AS:

policy-options {
  as-path EXACTLY-ONE-ASN "^.$";
  policy-statement ONLY-DIRECTLY-CONNECTED {
    term exactly-one-asn {
      from {
        protocol bgp;
        as-path EXACTLY-ONE-ASN;
      }
      then next policy;
    }
    then reject;
  }
}

The last step is to configure the BGP sessions:

protocols {
  bgp {
    group HOSTS {
      local-as 65100;
      type external;
      # export [ … ];
      import [ ONLY-DIRECTLY-CONNECTED ACCEPT-VALID REJECT-ALL ];
      enforce-first-as;
      neighbor 2001:db8:42::a10 {
        peer-as 65005;
      }
      neighbor 2001:db8:42::a12 {
        peer-as 65006;
      }
      neighbor 2001:db8:42::a14 {
        peer-as 65007;
      }
    }
  }
}

The import policy rejects any AS path longer than one AS, accepts any validated prefix and rejects everything else. The enforce-first-as directive is also pretty important: it ensures the first (and, here, only) AS in the AS path matches the peer AS. Without it, a malicious neighbor could inject a prefix using an AS different than its own, defeating our purpose.4

Let’s check the state of the RTR sessions and the database:

> show validation session
Session                                  State   Flaps     Uptime #IPv4/IPv6 records
2001:db8:4242::10                        Up          0   00:16:09 0/9
2001:db8:4242::11                        Up          0   00:16:07 0/9
2001:db8:4242::12                        Connect     0            0/0

> show validation database
RV database for instance master

Prefix                 Origin-AS Session                          State   Mismatch
2001:db8:11::/64-128       65006 2001:db8:4242::10                valid
2001:db8:11::/64-128       65006 2001:db8:4242::11                valid
2001:db8:11::/64-128       65009 2001:db8:4242::10                valid
2001:db8:11::/64-128       65009 2001:db8:4242::11                valid
2001:db8:aa::/64-128       65005 2001:db8:4242::10                valid
2001:db8:aa::/64-128       65005 2001:db8:4242::11                valid
2001:db8:bb::/64-128       65006 2001:db8:4242::10                valid
2001:db8:bb::/64-128       65006 2001:db8:4242::11                valid
2001:db8:cc::/64-128       65007 2001:db8:4242::10                valid
2001:db8:cc::/64-128       65007 2001:db8:4242::11                valid
2001:db8:dd::/64-128       65008 2001:db8:4242::10                valid
2001:db8:dd::/64-128       65008 2001:db8:4242::11                valid
2001:db8:ee::/64-128       65009 2001:db8:4242::10                valid
2001:db8:ee::/64-128       65009 2001:db8:4242::11                valid
2001:db8:ff::/64-128       65010 2001:db8:4242::10                valid
2001:db8:ff::/64-128       65010 2001:db8:4242::11                valid
  IPv4 records: 0
  IPv6 records: 18

Here is an example of accepted route:

> show route protocol bgp table inet6 extensive all

inet6.0: 11 destinations, 11 routes (8 active, 0 holddown, 3 hidden)
2001:db8:bb::42/128 (1 entry, 0 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd050470
                Next-hop reference count: 4
                Source: 2001:db8:42::a12
                Next hop: 2001:db8:42::a12 via em1.0, selected
                Session Id: 0x0
                State: <Active NotInstall Ext>
                Local AS: 65006 Peer AS: 65000
                Age: 12:11
                Validation State: valid
                Task: BGP_65000.2001:db8:42::a12+179
                AS path: 65006 I
                Accepted
                Localpref: 100
                Router ID: 1.1.1.1

A rejected route would be similar with the reason “rejected by import policy” shown in the details and the validation state would be invalid.

Configuring BIRD

BIRD supports both plain-text TCP and SSH. Let’s configure it to use SSH. We need to generate keypairs for both the leaf router and the validators (they can all share the same keypair). We also have to create a known_hosts file for BIRD:

(validatorX)$ ssh-keygen -qN "" -t rsa -f /etc/gortr/ssh_key
(validatorX)$ echo -n "validatorX:8283 " ; \
              cat /etc/bird/ssh_key_rtr.pub
validatorX:8283 ssh-rsa AAAAB3[…]Rk5TW0=
(leaf1)$ ssh-keygen -qN "" -t rsa -f /etc/bird/ssh_key
(leaf1)$ echo 'validator1:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ echo 'validator2:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ cat /etc/bird/ssh_key.pub
ssh-rsa AAAAB3[…]byQ7s=
(validatorX)$ echo 'ssh-rsa AAAAB3[…]byQ7s=' >> /etc/gortr/authorized_keys

GoRTR needs additional flags to allow connections over SSH:

$ gortr -refresh=600 -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json \
        -ssh.bind=:8283 \
        -ssh.key=/etc/gortr/ssh_key \
        -ssh.method.key=true \
        -ssh.auth.user=rpki \
        -ssh.auth.key.file=/etc/gortr/authorized_keys
INFO[0000] Enabling ssh with the following authentications: password=false, key=true
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

Then, we can configure BIRD to use these RTR servers:

roa6 table ROA6;

template rpki VALIDATOR {
   roa6 { table ROA6; };
   transport ssh {
     user "rpki";
     remote public key "/etc/bird/known_hosts";
     bird private key "/etc/bird/ssh_key";
   };
   refresh keep 30;
   retry keep 30;
   expire keep 3600;
}

protocol rpki VALIDATOR1 from VALIDATOR {
   remote validator1 port 8283;
}
protocol rpki VALIDATOR2 from VALIDATOR {
   remote validator2 port 8283;
}

Unlike JunOS, BIRD doesn’t have a feature to only use a subset of validators. Therefore, we only configure two of them. As a safety measure, if both connections become unavailable, BIRD will keep the ROAs for one hour.

We can query the state of the RTR sessions and the database:

> show protocols all VALIDATOR1
Name       Proto      Table      State  Since         Info
VALIDATOR1 RPKI       ---        up     17:28:56.321  Established
  Cache server:     rpki@validator1:8283
  Status:           Established
  Transport:        SSHv2
  Protocol version: 1
  Session ID:       0
  Serial number:    1
  Last update:      before 25.212 s
  Refresh timer:    4.787/30
  Retry timer:      ---
  Expire timer:     3574.787/3600
  No roa4 channel
  Channel roa6
    State:          UP
    Table:          ROA6
    Preference:     100
    Input filter:   ACCEPT
    Output filter:  REJECT
    Routes:         9 imported, 0 exported, 9 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:              9          0          0          0          9
      Import withdraws:            0          0        ---          0          0
      Export updates:              0          0          0        ---          0
      Export withdraws:            0        ---        ---        ---          0

> show route table ROA6
Table ROA6:
    2001:db8:11::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:11::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:aa::/64-128 AS65005  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:bb::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:cc::/64-128 AS65007  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:dd::/64-128 AS65008  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ee::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ff::/64-128 AS65010  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)

As in the JunOS case, a malicious actor could try to work around the validation by building an AS path where the last AS number is the legitimate one. BIRD is flexible enough to let us use any AS to check the IP prefix. Instead of checking the origin AS, we ask it to check the peer AS with this function, without looking at the AS path:

function validated(int peeras) {
   if (roa_check(ROA6, net, peeras) != ROA_VALID) then {
      print "Ignore invalid ROA ", net, " for ASN ", peeras;
      reject;
   }
   accept;
}

The BGP instance is then configured to use the above function as the import policy:

protocol bgp PEER1 {
   local as 65100;
   neighbor 2001:db8:42::a10 as 65005;
   ipv6 {
      import keep filtered;
      import where validated(65005);
      # export …;
   };
}

You can view the rejected routes with show route filtered, but BIRD does not store information about the validation state in the routes. You can also watch the logs:

2019-07-31 17:29:08.491 <INFO> Ignore invalid ROA 2001:db8:bb::40:/126 for ASN 65005

Currently, BIRD does not reevaluate the prefixes when the ROAs are updated. There is work in progress to fix this. If this feature is important to you, have a look at FRR instead: it also supports the RTR protocol and triggers a soft reconfiguration of the BGP sessions when ROAs are updated.

  1. Notably, the data flow and the control plane are separated. A node can remove itself by notifying its peers without losing a single packet. ↩︎

  2. People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. ↩︎

  3. We are using 16-bit AS numbers for readability. Because we need to assign a different AS number for each host in the datacenter, in an actual deployment, we would use 32-bit AS numbers. ↩︎

  4. Cisco routers and FRR enforce the first AS by default. It is a tunable value to allow the use of route servers: they distribute prefixes on behalf of other routers. ↩︎

Junichi Uekawa: Started wanting to move stuff to docker.

Friday 2nd of August 2019 04:55:38 AM
Started wanting to move stuff to docker. Especially around build systems. If things are mutable they will go bad and fixing them is annoying.

Mike Gabriel: My Work on Debian LTS/ELTS (July 2019)

Thursday 1st of August 2019 06:24:18 PM

In July 2019, I have worked on the Debian LTS project for 15.75 hours (of 18.5 hours planned) and on the Debian ELTS project for another 12 hours (as planned) as a paid contributor.

LTS Work
  • Upload to jessie-security: libssh2 (DLA 1730-3) [1]
  • Upload to jessie-security: libssh2 (DLA 1730-4) [2]
  • Upload to jessie-security: glib2.0 (DLA 1866-1) [3]
  • Upload to jessie-security: wpa (DLA 1867-1) [4]

The Debian Security package archive only has arch-any buildds attached, so source packages that build at least one arch-all bin:pkg must include the arch-all DEBs from a local build. So, ideally, we upload source + arch-all builds and leave the arch-any builds to the buildds. However, this seems to be problematic when doing the builds using sbuild. So, I spent a little time on...

  • sbuild: Try to understand the mechanism of building arch-all + source package (i.e. omit arch-any uploads)... Unfortunately, there is no "-g" option (like in dpkg-buildpackage; see the sketch below). Neither does the parameter combination "--source --arch-all --no-arch-any" result in a source + arch-all build. More investigation / communication with the developers of sbuild required here. To be continued...
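For reference, the dpkg-buildpackage option referred to above does exist and, to my knowledge, produces exactly the source + arch-all combination:

# build the source package plus architecture-independent binaries only
dpkg-buildpackage -g    # equivalent to --build=source,all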
ELTS Work
  • Upload to wheezy-lts: freetype (ELA 149-1) [5]
  • Upload to wheezy-lts: libssh2 (ELA 99-3) [6]
References

Gunnar Wolf: Goodbye, pgp.gwolf.org

Thursday 1st of August 2019 03:25:08 PM

I started running an SKS keyserver a couple of years ago (don't really remember, but I think it was around 2014). I am, as you probably expect me to be given my lines of work, a believer of the Web-of-Trust model upon which the PGP network is built. I have published a couple of academic papers (Strengthening a Curated Web of Trust in a Geographically Distributed Project, with Gina Gallegos, Cryptologia 2016, and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project’s cryptographic keyring, with Victor González Quiroga, Journal of Internet Services and Applications, 2018) and presented several conferences regarding some aspects of it, mainly in relation to the Debian project.

Even in light of the recent flooding attacks (more info by dkg, Daniel Lange, Michael Altfield and others available; see also the GnuPG task tracker), I still believe in the model. But I have had enough of the implementation's brittleness. I don't know how much to blame SKS and how much to blame myself, but I cannot devote more time to fiddling around trying to get it to work as it should — I was providing an unstable service. Besides, this year I had to rebuild the database three times already due to it getting corrupted... And yesterday I just could not get past segfaults when importing.

So, I have taken the unhappy decision to shut down my service. I have contacted both the SKS mailing list and the servers I was peering with. Due to the narrow scope of a single SKS server, possibly this post is not needed... But it won't hurt, so here it goes.

Thomas Goirand: My work during DebCamp / DebConf

Thursday 1st of August 2019 11:34:02 AM

Lots of uploads

Grepping my IRC log for the BTS bot output shows that I uploaded roughly 244 times in Curitiba.

Removing Python 2 from OpenStack by uploading OpenStack Stein in Sid

Most of these uploads were moving OpenStack Stein from Experimental to Sid, with a record-breaking 96 uploads in a single day. As the work for Python 2 removal was done before the Buster release (uploads in Experimental), this effectively removed a lot of Python 2 support.

Removing Python 2 from Django packages

But once that was done, I started uploading some Django packages. Indeed, since Django 2.2 was uploaded to Sid with the removal of Python 2 support, a lot of dangling python-django-* packages needed to be fixed. Not only did Python 2 support need to be removed from them, but often patches were needed in order to fix at least the unit tests, since Django 2.2 removed a lot of things that had been deprecated a few versions earlier. I went through all of the Django packages we have in Debian, and I believe I fixed most of them. I uploaded Django packages 43 times, fixing 39 packages.

Removing Python 2 support from non-django or OpenStack packages

During the Python BoF at Curitiba, we collectively decided it was time to remove Python 2, and that we'll try to do as much of that work as possible before Bullseye. Details of this will come from our dear leader p1otr, so I'll let him write the document and won't comment (yet) on how we're going to proceed. Anyway, we already have a "python2-rm" release tracker. After the Python BoF, I also started removing Python 2 support from a few packages with more generic usage, hopefully touching only leaf packages without breaking things. I'm not sure of the total count of packages that I touched, probably a bit less than a dozen.

Horizon broken in Sid since the beginning of July

Unfortunately, Horizon, the OpenStack dashboard, is currently still broken in Debian Sid. Indeed, since Django 1.11, the login() function in views.py has been deprecated in favor of a LoginView class. And in Django 2.2, support for the function has been removed. As a consequence, since the 9th of July, when Django 2.2 was uploaded, Horizon's openstack_auth/views.py is broken. Upstream says they are targeting Django 2.2 support for next February. That's way too late. Hopefully, someone will be able to fix this situation with me (it's probably a bit too much for my Django skills). Once this is fixed, I'll be able to work on all the Horizon plugins which are still in Experimental. Note that I already fixed all of Horizon's reverse dependencies in Sid, but some of the patches need to be upstreamed.

Next work (from home): fixing piuparts

I’ve already written a first attempt at a patch for piuparts, so that it uses Python 3 and not Python 2 anymore. That patch is already as a merge request in Salsa, though I haven’t had the time to test it yet. What’s remaining to do is: actually test using Puiparts with this patch, and fix debian/control so that it switches to Python 2.

Steve Kemp: Building a computer - part 3

Thursday 1st of August 2019 10:01:00 AM

This is part three in my slow journey towards creating a home-brew Z80-based computer. My previous post demonstrated writing some simple code, and getting it running under an emulator. It also described my planned approach:

  • Hookup a Z80 processor to an Arduino Mega.
  • Run code on the Arduino to emulate RAM reads/writes and I/O.
  • Profit, via the learning process.

I expect I'll have to get my hands dirty with a breadboard and naked chips in the near future, but for the moment I decided to start with the least effort. Erturk Kocalar has a website where he sells "shields" (read: expansion boards) which contain a Z80 and which are designed to plug into an Arduino Mega with no fuss. This is a simple design; I've seen a bunch of people demonstrate how to wire it up by hand, for example in this post.

Anyway I figured I'd order one of those, and get started on the easy-part, the software. There was some sample code available from Erturk, but it wasn't ideal from my point of view because it mixed driving the Z80 with doing "other stuff". So I abstracted the core code required to interface with the Z80 and packaged it as a simple library.

The end result is that I have a z80 retroshield library which uses an Arduino mega to drive a Z80 with something as simple as this:

#include <z80retroshield.h>

//
// Our program, as hex.
//
unsigned char rom[32] =
{
    0x3e, 0x48, 0xd3, 0x01, 0x3e, 0x65, 0xd3, 0x01,
    0x3e, 0x6c, 0xd3, 0x01, 0xd3, 0x01, 0x3e, 0x6f,
    0xd3, 0x01, 0x3e, 0x0a, 0xd3, 0x01, 0xc3, 0x16,
    0x00
};

//
// Our helper-object
//
Z80RetroShield cpu;

//
// RAM I/O function handler.
//
char ram_read(int address)
{
    return (rom[address]);
}

// I/O function handler.
void io_write(int address, char byte)
{
    if (address == 1)
        Serial.write(byte);
}

// Setup routine: Called once.
void setup()
{
    Serial.begin(115200);

    //
    // Setup callbacks.
    //
    // We have to setup a RAM-read callback, otherwise the program
    // won't be fetched from RAM and executed.
    //
    cpu.set_ram_read(ram_read);

    //
    // Then we setup a callback to be executed every time an "out (x),y"
    // instruction is encountered.
    //
    cpu.set_io_write(io_write);

    //
    // Configured.
    //
    Serial.println("Z80 configured; launching program.");
}

//
// Loop function: Called forever.
//
void loop()
{
    // Step the CPU.
    cpu.Tick();
}

All the logic of the program is contained in the Arduino-sketch, and all the use of pins/ram/IO is hidden away. As a recap the Z80 will make requests for memory-contents, to fetch the instructions it wants to execute. For general purpose input/output there are two instructions that are used:

IN A, (1)   ; Read a character from STDIN, store in A-register.
OUT (1), A  ; Write the character in A-register to STDOUT

Here 1 is the I/O address, and this is an 8 bit number. At the moment I've just configured the callback such that any write to I/O address 1 is dumped to the serial console.

Anyway I put together a couple of examples of increasing complexity, allowing me to prove that RAM read/writes work, and that I/O reads and writes work.

I guess the next part is where I jump in complexity:

  • I need to wire a physical Z80 to a board.
  • I need to wire a PROM to it.
    • This will contain the program to be executed - hardcoded.
  • I need to provide power, and a clock to make the processor tick.

With a bunch of LEDs I'll have a Z80-system running, but it'll be isolated and hard to program. (Since I'll need to reflash the RAM/ROM-chip).

The next step would be getting it hooked up to a serial-console of some sort. And at that point I'll have a genuinely programmable standalone Z80 system.

Sylvain Beucler: Debian LTS - July 2019

Thursday 1st of August 2019 08:41:48 AM

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In July, the monthly sponsored hours were split evenly among contributors depending on their max availability - I declared max 30h and got 18.5h.

My time was mostly spent on Front-Desk duties, as well as improving our scripts & docs.

Current vulnerabilities triage:

  • CVE-2019-13117/libxslt CVE-2019-13118/libxslt: triage (affected, dla-needed)
  • CVE-2019-12781/python-django: triage (affected)
  • CVE-2019-12970/squirrelmail: triage (affected)
  • CVE-2019-13147/audiofile: triage (postponed)
  • CVE-2019-12493/poppler: jessie triage (postponed)
  • CVE-2019-13173/node-fstream: jessie triage (node-* not supported)
  • exiv2: jessie triage (5 CVEs, none to fix - CVE-2019-13108 CVE-2019-13109 CVE-2019-13110 CVE-2019-13112 CVE-2019-13114)
  • CVE-2019-13207/nsd: jessie triage (affected, postponed)
  • CVE-2019-11272/libspring-security-2.0-java: jessie triage (affected, dla-needed)
  • CVE-2019-13312/ffmpeg: (libav) jessie triage (not affected)
  • CVE-2019-13313/libosinfo: jessie triage (affected, postponed)
  • CVE-2019-13290/mupdf: jessie triage (not-affected)
  • CVE-2019-13351/jackd2: jessie triage (affected, postponed)
  • CVE-2019-13345/squid3: jessie triage (2 XSS: 1 unaffected, 1 reflected affected, dla-needed)
  • CVE-2019-11841/golang-go.crypto: jessie triage (affected, dla-needed)
  • Call for triagers for the upcoming weeks

Past undetermined issues triage:

  • libgig: contact maintainer about 17 pending undetermined CVEs
  • libsixel: contact maintainer about 6 pending undetermined CVEs
  • netpbm-free - actually an old Debian-specific fork: contact original reporter for PoCs and attach them to BTS; CVE-2017-2579 and CVE-2017-2580 not-affected, doubts about CVE-2017-2581

Documentation:

Tooling - bin/lts-cve-triage.py:

  • filter out 'undetermined' but explicitly 'ignored' packages (e.g. jasperreports)
  • fix formatting with no-colors output, hint that color output is available
  • display lts' nodsa sub-states
  • upgrade unsupported packages list to jessie

Kurt Kremitzki: Summer Update for FreeCAD & Debian Science Work

Thursday 1st of August 2019 04:47:23 AM

Hello, and welcome to my "summer update" on my free software work on FreeCAD and the Debian Science team. I call it a summer update because it was winter when I last wrote, and quite some time has elapsed since I fell out of the monthly update habit. This is a high-level summary of what I've been working on since March.

FreeCAD 0.18 Release & Debian 10 Full Freeze Timing


The official release date of FreeCAD 0.18 (release notes) is March 12, 2019, although the git tag for it wasn't pushed until March 14th. This timing was a bit unfortunate, as the full freeze for Debian 10 went into effect March 12th, with a de-facto freeze date of March 2nd due to the 10-day testing migration period. To compound things, since this was my first Debian release as a packaging contributor, I didn't do things quite right, so although I probably could have gotten FreeCAD 0.18 into Debian 10, I didn't. Instead, what's available is a pre-release version from about a month before the release, which is missing a few bugfixes and refinements.

On the positive side, this is an impetus for me to learn about Debian Backports, a way to provide non-bugfix updates to Debian Stable users. The 0.18 release line has already had several bugfix releases; I've currently got Debian Testing/Unstable as well as the Ubuntu Stable PPA up-to-date with version 0.18.3. As soon as I'm able, I'll get this version into Debian Backports, too.

FreeCAD PPA Improvements

Another nice improvement I've recently made is migrating the packaging for the Ubuntu Stable and Daily PPAs to Debian's GitLab instance at https://salsa.debian.org/science-team/freecad by creating the ppa/master and ppa/daily branches. Having all the Debian and Ubuntu packaging in one place means that propagating updates has become a matter of git merging and pushing. Once any changes are in place, I simply have to trigger an import and build on Launchpad for the stable releases. For the daily builds, changes are automatically synced and the debian directory from Salsa is combined with the latest synced upstream source from GitHub, so daily builds no longer have to be triggered manually. However, this has uncovered another problem in our process which is being worked on at the FreeCAD forums. (Thread: Finding a solution for the 'version.h' issue)

Science Team Package Updates


The main Science Team packages I've been working on recently have been OpenCASCADE, Netgen, Gmsh, and OpenFOAM.

For OpenCASCADE, I have uploaded the third bugfix release in the 7.3.0 series. Unfortunately, their versioning scheme is a bit unusual, so this version is tagged 7.3.0p3. This is unfortunate because dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 evaluates to false. As such, I've uploaded this package as 7.3.3, with plans to contact upstream to discuss their bugfix release versioning scheme. Currently, version 7.4.0 has an upstream target release date for the end of August, so there will be an opportunity to convince them to release 7.4.1 instead of 7.4.0p1. If you're interested in the changes contained in this upload, you can refer to the upstream git log for more information.
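To reproduce the comparison mentioned above, dpkg can be asked directly; it exits 0 when the relation holds and non-zero when it does not:

# is 7.3.0p3 considered newer than 7.3.0? (no: letters sort before non-letters in Debian versions)
dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 && echo "newer" || echo "not newer"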

In collaboration with Nico Schlömer and Anton Gladky, the newest Gmsh, version 4.4.1, has been uploaded to wait in the Debian NEW queue. See the upstream changelog for more information on what's new.

I've also prepared the package for the newest version of Netgen, 6.2.1905. Unfortunately, uploading this is blocked because 6.2.1810 is still in Debian NEW. However, I've tested compiling FreeCAD against Netgen, and I've been able to get the integration with it working again, so once I'm able to do this upload, I'll be able to upload a new and improved FreeCAD with the power of Netgen meshing.

I've also begun working on packaging the latest OpenFOAM release, 1906. I've gotten a little sidetracked, though, as a pecularity in the way upstream prepares their tarballs seems to be triggering a bug in GNU tar. I should have this one uploaded soon. For a preview in what'll be coming, see the release notes for version 1906.

GitLab CI Experimentation with salsa.debian.org

Some incredibly awesome Debian contributors have set up the ability to use GitLab CI to automate the testing of Debian packages (see documentation.)

I did a bit of experimentation with it. Unfortunately, both OpenCASCADE and FreeCAD exceeded the 2 hour time limit. There's a lot of promise in it for smaller packages, though!

Python 2 Removal in Debian Underway


Per pythonclock.org, Python 2 has less than 5 months until it's end-of-life, so the task of removing it for the next version of Debian has begun. For now, it's mainly limited to leaf packages with nothing depending on them. As such, I've uploaded Python 3-only packages for new upstream releases of python-fluids (a fluid dynamics engineering & design library) and python-ulmo (provides clean & simple access to public hydrology and climatology data).

Debian Developer Application

I've finally applied to become a full Debian Developer, which is an exciting prospect. I'll be more able to enact improvements without having to bug, well, mostly Anton, Andreas, and Tobias. (Thanks!) I'm also looking forward to having access to more resources to improve my packages on other architectures, particularly arm64 now that the Raspberry Pi 4 is out and potentially a serious candidate for a low-powered FreeCAD workstation.

The process is slow and calculating, as it should be, so it'll be some time before I'm officially in, but it sure will be cause for celebration.

Google Summer of Code Mentoring

CC-BY-SA-4.0, Aswinshenoy.


I'm mentoring a Google Summer of Code project for FreeCAD this year! (See forum thread.) My student is quite new to FreeCAD and Debian/Ubuntu, so the first half of the project has involved the relatively deep-end topics of using Debian packaging to distribute bugfixes for FreeCAD and learning by exploring related packages in its ecosystem. In particular, focus was given to OpenCAMLib, since there is a lot of user and developer interest in FreeCAD's potential for generating toolpaths for machining and manufacturing the models created in the program.

Now that he's officially swimming and not sinking, the next phase is working on making development and packaging-related improvements for FreeCAD on Windows, which is in even rougher shape than Debian/Ubuntu, but more his area of familiarity. Stay tuned for the final results!

Thanks to my sponsors

This work is made possible in part by contributions from readers like you! You can send moral support my way via Twitter @thekurtwk. Financial support is also appreciated at any level and possible on several platforms: Patreon, Liberapay, and PayPal.

Paul Wise: FLOSS Activities July 2019

Thursday 1st of August 2019 02:20:27 AM
Changes

Issues

Review

Administration
  • apt-xapian-index: migrated repo to Salsa, merged some branches and patches
  • Debian: redirect user support request, answer porterbox access query,
  • Debian wiki: ping team member, re-enable accounts, unblock IP addresses, whitelist domains, whitelist email addresses, send unsubscribe info, redirect support requests
  • Debian QA services: deploy changes
  • Debian PTS: deploy changes
  • Debian derivatives census: disable cron job due to design flaws
Communication

Sponsors

The File::LibMagic, purple-discord, librecaptcha & harmony work was sponsored by my employer. All other work was done on a volunteer basis.

Jonathan Carter: Free Software Activities (2019-07)

Wednesday 31st of July 2019 06:51:12 PM

Group photo above taken at DebConf19 by Agairs Mahinovs.

2019-07-03: Upload calamares-settings-debian (10.0.20-1) (CVE 2019-13179) to debian unstable.

2019-07-05: Upload calamares-settings-debian (10.0.25-1) to debian unstable.

2019-07-06: Debian Buster Live final ISO testing for release, also attended Cape Town buster release party.

2019-07-08: Sponsor package ddupdate (0.6.4-1) for debian unstable (mentors.debian.net request, RFS: #931582)

2019-07-08: Upload package btfs (2.19-1) to debian unstable.

2019-07-08: Upload package calamares (3.2.11-1) to debian unstable.

2019-07-08: Request update for util-linux (BTS: #931613).

2019-07-08: Upload package gnome-shell-extension-dashtodock (66-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-multi-monitors (18-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-system-monitor (38-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-tilix-dropdown (7-1) to debian unstable.

2019-07-08: Upload package python3-aniso8601 (7.0.0-1) to debian unstable.

2019-07-08: Upload package python3-flask-restful (0.3.7-2) to debian unstable.

2019-07-08: Upload package xfce4-screensaver (0.1.6) to debian unstable.

2019-07-09: Sponsor package wordplay (8.0-1) (mentors.debian.net request).

2019-07-09: Sponsor package blastem (0.6.3.2-1) (mentors.debian.net request) (Closes RFS: #931263).

2019-07-09: Upload gnome-shell-extension-workspaces-to-dock (50-1) to debian unstable.

2019-07-09: Upload bundlewrap (3.6.1-2) to debian unstable.

2019-07-09: Upload connectagram (1.2.9-6) to debian unstable.

2019-07-09: Upload fracplanet (0.5.1-5) to debian unstable.

2019-07-09: Upload fractalnow (0.8.2-4) to debian unstable.

2019-07-09: Upload gnome-shell-extension-dash-to-panel (19-2) to debian unstable.

2019-07-09: Upload powerlevel9k (0.6.7-2) to debian unstable.

2019-07-09: Upload speedtest-cli (2.1.1-2) to debian unstable.

2019-07-11: Upload tetzle (2.1.4+dfsg1-2) to debian unstable.

2019-07-11: Review mentors.debian.net package hipercontracer (1.4.1-1).

2019-07-15 – 2019-07-28: Attend DebCamp and DebConf!

My DebConf19 mini-report:

There is really too much to write about that happened at DebConf; I hope to get some time and write separate blog entries on those really soon.

  • Participated in Bursaries BoF, I was chief admin of DebConf bursaries in this cycle. Thanks to everyone who already stepped up to help with next year.
  • Gave a lightning talk titled “Can you install Debian within a lightning talk slot?” where I showed off Calamares on the latest official live media. Spoiler alert: it barely doesn’t fit in the allotted time, something to fix for bullseye!
  • Participated in a panel called “Surprise, you’re a manager!“.
  • Hosted “Debian Live BoF” – we made some improvements for the live images during the buster cycle, but there’s still a lot of work to do so we held a session to cut out our initial work for Debian 11.
  • Got the debbug and missed the day trip, I hope to return to this part of Brazil one day, so much to explore in just the surrounding cities.
  • The talk selection this year was good; there's a lot that I learned and caught up on that I probably wouldn't have if it wasn't for DebConf. Talks were recorded, so you can catch up (http archive, YouTube). PS: If you find something funny, please link it (with a time stamp) on the FunnyMoments wiki page (that page is way too bare right now).

Chris Lamb: Free software activities in July 2019

Wednesday 31st of July 2019 03:31:19 PM

Here is my monthly update covering what I have been doing in the free software world during July 2019 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws almost all software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.


This month:

I spent significant amount of time working on our website this month, including:

  • Split out our non-fiscal sponsors with a description [...] and make them non-display three-in-a-row [...].
  • Correct references to "1&1 IONOS" (née Profitbricks). [...]
  • Lets not promote yet more ambiguity in our environment names! [...]
  • Recreate the badge image, saving the .svg alongside it. [...]
  • Update our fiscal sponsors. [...][...][...]
  • Tidy the weekly reports section on the news page [...], fixup the typography on the documentation page [...] and make all headlines stand out a bit more [...].
  • Drop some old CSS files and fonts. [...]
  • Tidy news page a bit. [...]
  • Fixup a number of issues in the report template and previous reports. [...][...][...][...][...][...]

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [...].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) [...]
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers. [...] but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [...].
  • Cease ignoring test failures in stable-backports. [...]
  • Add missing textual DESCRIPTION headers for .zip and "Mozilla"-optimised .zip files. [...]
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. [...]
  • Update some reporting:
    • Re-add "return code" noun to "Command foo exited with X" error messages. [...]
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. [...]
    • Skip the extra newline in Output:\nfoo. [...]
  • Add some explicit return values to appease Pylint, etc. [...]
  • Also include the python3-tlsh in the Debian test dependencies. [...]
  • Released and uploaded versions 116, 117, 118, 119 & 120. [...][...][...][...][...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Support OpenJDK ".jmod" files. [...]
  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. [...]
  • Pass --no-sandbox if necessary to bypass seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don't just run the tests but build the Debian package instead using Salsa's centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [...][...]
  • Update tests:
    • Don't build release Git tags on salsa.debian.org. [...]
    • Merge the debian branch into the master branch to simplify testing and deployment [...] and update debian/gbp.conf to match [...].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. [...]
Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS (ELTS) project.


Uploads

I also made "sourceful" uploads to unstable to ensure migration to testing after recent changes that prevent maintainer-supplied packages entering bullseye for bfs (1.5-3), redis (5:5.0.5-2), lastpass-cli (1.3.3-2), python-daiquiri (1.5.0-3) and I finally performed a sponsored upload of elpy (1.29.1+40.gb929013-1).

FTP Team

As a Debian FTP assistant I ACCEPTed 19 packages: aiorwlock, bolt, caja-mediainfo, cflow, cwidget, dgit, fonts-smc-gayathri, gmt, gnuastro, guile-gcrypt, guile-sqlite3, guile-ssh, hepmc3, intel-gmmlib, iptables, mescc-tools, nyacc, python-pdal & scheme-bytestructures. I additionally filed a bug against scheme-bytestructures for having a seemingly-incomplete debian/copyright file. (#932466)

Michael Prokop: Some useful bits about Linux hardware support and patched Kernel packages

Wednesday 31st of July 2019 07:00:42 AM

Disclaimer: I started writing this blog post in May 2018, when Debian/stretch was the current stable release of Debian, but published this article in August 2019, so please keep the version information (Debian releases + kernels not being up2date) in mind.

The kernel version in Debian/stretch (4.9.0) didn't yet support the RAID controller present in Lenovo ThinkSystem SN550 blade servers. The RAID controller was known to be supported with Ubuntu 18.10 using kernel v4.15, as well as with Grml ISOs using kernel v4.15 and newer. Using a more recent Debian kernel version wasn't really an option for my customer, as there was no LTS kernel version that could be relied on. Using the kernel version from stretch-backports could have been an option, though it would be our last resort only, since the customer this applies to controls the Debian repositories in use and we'd have to track security issues more closely, test new versions of the kernel on different kinds of hardware more often,… whereas the kernel version from Debian/stable is known to be working fine and is less in flux than the ones from backports. Alright, so it doesn't support this new hardware model yet, but how do we identify the relevant changes in the kernel to have a chance of getting it supported in the stable Debian kernel?

Some bits about PCI IDs and related kernel drivers

We start by identifying the relevant hardware:

root@grml ~ # lspci | grep 'LSI.*RAID'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
root@grml ~ # lspci -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)

Which driver gets used for this device?

root@grml ~ # lspci -k -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
        Subsystem: Lenovo ThinkSystem RAID 530-4i Flex Adapter
        Kernel driver in use: megaraid_sas
        Kernel modules: megaraid_sas

So it’s the megaraid_sas driver, let’s check some version information:

root@grml ~ # modinfo megaraid_sas | grep version
version:        07.703.05.00-rc1
srcversion:     442923A12415C892220D5F0
vermagic:       4.15.0-1-grml-amd64 SMP mod_unload modversions

But how does the kernel know which driver should be used for this device? We start by listing further details about the hardware device:

root@grml ~ # lspci -n -s 0000:08:00.0
08:00.0 0104: 1000:001c (rev 01)

The 08:00.0 describes the hardware slot information ([domain:]bus:device.function), the 0104 describes the class (with 0104 being of type RAID bus controller; also see /usr/share/misc/pci.ids by searching for 'C 01' -> '04'), and the (rev 01) obviously describes the revision number. We’re interested in the 1000:001c though. The 1000 identifies the vendor:

% grep '^1000' /usr/share/misc/pci.ids
1000  LSI Logic / Symbios Logic

The `001c` finally identifies the actual model. Having this information available, we can check the mapping of the megaraid_sas driver, using the `modules.alias` file of the kernel:

root@grml ~ # grep -i '1000.*001c' /lib/modules/$(uname -r)/modules.alias
alias pci:v00001000d0000001Csv*sd*bc*sc*i* megaraid_sas
root@grml ~ # modinfo megaraid_sas | grep -i 001c
alias:          pci:v00001000d0000001Csv*sd*bc*sc*i*

Bingo! Now we can check this against the Debian/stretch kernel, which doesn’t support this device yet:

root@stretch:~# modinfo megaraid_sas | grep version
version:        06.811.02.00-rc1
srcversion:     64B34706678212A7A9CC1B1
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
root@stretch:~# modinfo megaraid_sas | grep -i 001c
root@stretch:~#

No match here – bingo²! Now we know for sure that the ID 001c is relevant for us. How do we identify the corresponding change in the Linux kernel though?
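
As a quick aside before digging into the kernel history: the modules.alias lookup we just did by hand is easy to script. Here is a minimal Python sketch of that idea (my own illustration, not something from the original walkthrough; the kernel path and the 1000:001c example are assumptions):

#!/usr/bin/env python3
# Minimal sketch: report which kernel modules claim a given PCI vendor:device
# ID according to a modules.alias file. The default path and the 1000:001c
# example are assumptions for illustration only.
import fnmatch
import sys

def modules_for(vendor, device, alias_file):
    # Build a modalias string with the subvendor/subdevice/class fields
    # wildcarded, since we only know the vendor and device IDs here.
    modalias = "pci:v{:08X}d{:08X}sv*sd*bc*sc*i*".format(vendor, device)
    found = set()
    with open(alias_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3 and parts[0] == "alias" and parts[1].startswith("pci:"):
                # modules.alias patterns use shell-style wildcards, so fnmatch works
                if fnmatch.fnmatch(modalias, parts[1]):
                    found.add(parts[2])
    return found

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/lib/modules/4.9.0-6-amd64/modules.alias"
    print(modules_for(0x1000, 0x001C, path) or "no module claims 1000:001c")

Run against the grml kernel's modules.alias this should print megaraid_sas, while against the stretch 4.9 kernel it should come up empty, matching the manual checks above.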

The file drivers/scsi/megaraid/megaraid_sas.h in the kernel source lists the PCI device IDs supported by the megaraid_sas driver. Since we know that kernel v4.9 doesn’t support it yet, while it’s supported with v4.15, we can run "git log v4.9..v4.15 drivers/scsi/megaraid/megaraid_sas.h" in the git repository of the kernel to go through the relevant changes. It’s easier to run "git blame drivers/scsi/megaraid/megaraid_sas.h" though – then we’ll stumble upon our ID from before – `0x001C` – right at the top:

[...]
45f4f2eb3da3c (Sasikumar Chandrasekaran 2017-01-10 18:20:43 -0500  59) #define PCI_DEVICE_ID_LSI_VENTURA              0x0014
754f1bae0f1e3 (Shivasharan S            2017-10-19 02:48:49 -0700  60) #define PCI_DEVICE_ID_LSI_CRUSADER             0x0015
45f4f2eb3da3c (Sasikumar Chandrasekaran 2017-01-10 18:20:43 -0500  61) #define PCI_DEVICE_ID_LSI_HARPOON              0x0016
45f4f2eb3da3c (Sasikumar Chandrasekaran 2017-01-10 18:20:43 -0500  62) #define PCI_DEVICE_ID_LSI_TOMCAT               0x0017
45f4f2eb3da3c (Sasikumar Chandrasekaran 2017-01-10 18:20:43 -0500  63) #define PCI_DEVICE_ID_LSI_VENTURA_4PORT        0x001B
45f4f2eb3da3c (Sasikumar Chandrasekaran 2017-01-10 18:20:43 -0500  64) #define PCI_DEVICE_ID_LSI_CRUSADER_4PORT       0x001C
[...]

Alright, the relevant change was commit 45f4f2eb3da3c:

commit 45f4f2eb3da3cbff02c3d77c784c81320c733056
Author: Sasikumar Chandrasekaran […]
Date:   Tue Jan 10 18:20:43 2017 -0500

    scsi: megaraid_sas: Add new pci device Ids for SAS3.5 Generic Megaraid Controllers

    This patch contains new pci device ids for SAS3.5 Generic Megaraid Controllers

    Signed-off-by: Sasikumar Chandrasekaran […]
    Reviewed-by: Tomas Henzl […]
    Signed-off-by: Martin K. Petersen […]

diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index fdd519c1dd57..cb82195a8be1 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -56,6 +56,11 @@
 #define PCI_DEVICE_ID_LSI_INTRUDER_24          0x00cf
 #define PCI_DEVICE_ID_LSI_CUTLASS_52           0x0052
 #define PCI_DEVICE_ID_LSI_CUTLASS_53           0x0053
+#define PCI_DEVICE_ID_LSI_VENTURA              0x0014
+#define PCI_DEVICE_ID_LSI_HARPOON              0x0016
+#define PCI_DEVICE_ID_LSI_TOMCAT               0x0017
+#define PCI_DEVICE_ID_LSI_VENTURA_4PORT        0x001B
+#define PCI_DEVICE_ID_LSI_CRUSADER_4PORT       0x001C
[...]

Custom Debian kernel packages for testing

Now that we have identified the relevant change, what’s the easiest way to test it? Thanks to Ben Hutchings, there’s an easy way to build a custom Debian package based on the official Debian kernel but including further patch(es). Make sure to have a Debian system available (I was running this inside an amd64 system, building for amd64), with matching deb-src entries in your apt sources.list and enough free disk space, then run:

% sudo apt install dpkg-dev build-essential devscripts fakeroot
% apt-get source -t stretch linux
% cd linux-*
% sudo apt-get build-dep linux
% bash debian/bin/test-patches -f amd64 -s none 0001-scsi-megaraid_sas-Add-new-pci-device-Ids-for-SAS3.5-.patch

This generates something like a linux-image-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb for you (next to further Debian packages like linux-headers-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb + linux-image-4.9.0-6-amd64-dbg_4.9.88-1+deb9u1a~test_amd64.deb), ready for installing and testing on the affected system. The Kernel Handbook documents this procedure as well; I just hadn’t been aware of this handy `debian/bin/test-patches` script before.

JFTR: sadly the patch with the additional PCI_DEVICE_ID* definitions was not enough (also see #900349); we seem to need further patches from the changes between v4.9 and v4.15. This turned out to be no longer relevant for my customer though, and it’s also working with Debian/buster nowadays.

Candy Tsai: Outreachy Week 8 – Week 9: Remote or In-Office Working

Monday 29th of July 2019 09:54:48 AM

The Week 9 blog prompt recommended by Outreachy was to write about my career goals. To be honest, this is a really hard topic for me. As long as a career path involves some form of coding, creating and learning new things, I’m willing to take it on. The best situation would be one where it also does something good for society. This might be because that “something I am really passionate about” doesn’t yet exist in my life. For now, I wish I’d still be coding 5 years from now. It’s just that simple. The only thing I would like to see improve is the gender balance in this industry.

As for working environment, I would like to share some thoughts after having experienced both extremes of totally remote work and complete in-office work. There are a lot of articles out there comparing the pros and cons. Here are just my opinions on the time spent not working:

  • Dozing off
  • Socializing
Dozing off

Our concentration time is limited, and there will definitely be times when we doze off a bit. Here is just a list of things I have done before in both places. I think I’m being too honest here.

Russ Allbery: Review: All the Birds in the Sky

Monday 29th of July 2019 03:47:00 AM

Review: All the Birds in the Sky, by Charlie Jane Anders

Publisher: Tor
Copyright: January 2016
ISBN: 1-4668-7112-1
Format: Kindle
Pages: 315

When Patricia was six years old, she rescued a wounded bird, protected it from her sister, discovered that she could talk to animals, and found her way to the Parliament Tree. There, she was asked the Endless Question, which she didn't know how to answer, and was dumped back into her everyday life. Her magic apparently disappeared again, except not quite entirely.

Laurence liked video games and building things. From schematics he found on the Internet, he built a wrist-watch time machine that could send him two seconds forward into the future. That was his badge of welcome, the thing that marked him as part of the group of cool scientists and engineers, when he managed to sneak away to visit a rocket launch.

Patricia and Laurence meet in junior high school, where both of them are bullied and awkward and otherwise friendless. They strike up an unlikely friendship based on actually listening to each other, Patricia getting Laurence out of endless outdoor adventures arranged by his parents, and the supercomputer Laurence is building in his closet. But it's not clear whether that friendship can survive endless abuse, the attention of an assassin, and their eventual recruitment into a battle between magic and technology of which they're barely aware.

So, first, the world-building in All the Birds in the Sky is subtly brilliant. I had been avoiding this book because I'd gotten the impression it was surreal and weird, which often doesn't work for me. But it's not, and that's due to careful and deft authorial control. This is a book in which two kids are sitting in a shopping mall watching people's feet go by on an escalator and guessing at their profession, and this happens:

The man in black slippers and worn gray socks was an assassin, said Patricia, a member of a secret society of trained killers who stalked their prey, looking for the perfect moment to strike and kill them undetected.

"It's amazing how much you can tell about people from their feet," said Patricia. "Shoes tell the whole story."

"Except us," said Laurence. "Our shoes are totally boring. You can't tell anything about us."

"That's because our parents pick out our shoes," said Patricia. "Just wait until we're grown up. Our shoes will be insane."

In fact, Patricia had been correct about the man in the gray socks and black shoes. His name was Theodolphus Rose, and he was a member of the Nameless Order of Assassins. He had learned 873 ways to murder someone without leaving even a whisper of evidence, and he'd had to kill 419 people to reach the number nine spot in the NOA hierarchy. He would have been very annoyed to learn that his shoes had given him away, because he prided himself on blending with his surroundings.

Anders maintains that tone throughout the book: dry, a little wry, matter-of-fact with a quirked smile, and utterly certain. The oddity of this world is laid out on the page without apologies, clear and comprehensible and orderly even when it's wildly strange. It's very easy as a reader to just start nodding along with magical academies and trans-dimensional experiments because Anders gives you the structure, pacing, and description that you need to build a coherent image.

The background work is worthy of this book's Nebula award. I just wish I'd liked the story better.

The core of my dislike is the characters, although for two very different reasons. Laurence is straight out of YA science fiction: geeky, curious, bullied, desperate to belong to something, loyal, and somewhere between stubborn and indecisive. But below that set of common traits, I never connected with him. He was just... there, doing predictable Laurence things and never surprising me or seeming to grow very much.

Laurence eventually goes to work for the Ten Percent Project, which is trying to send 10% of the population into space because clearly the planet is doomed. The blindness of that goal, and the degree to which the founder of that project resembled Elon Musk, was a bit too real to be funny. I kept waiting for Anders to either make a sharper satirical point or to let Laurence develop his own character outside of the depressing reality of techno-utopianism, but the story stayed finely balanced on that knife edge until it stopped being funny and started being awful.

Patricia, on the other hand, I liked from the very beginning. She's independent, determined, angry, empathetic, principled, and thoughtful, and immediately became the character I was cheering for. And every other major character in this novel is absolutely horrific to her.

The sheer amount of abusive gaslighting Patricia is subjected to in this book made me ill. Everyone from her family to her friends to her fellow magicians demean her, squash her, ignore her, trivialize her, shove her into boxes, try to get her to stop believing in things that happened to her, and twist every bit of natural ambition she has into new forms of prison. Even Laurence participates in this; although he's too clueless to be a major source of it, he's set up as her one port in the storm and then basically abandons her. I started the book feeling sorry for her; by the end of the book, I wanted Patricia to burn her life down with fire and start over with a completely new batch of humans. There's no way that she could do worse.

I want to be clear: I think this is an intentional authorial choice. I think Anders is entirely aware of how awful people are being, and the story of Laurence and Patricia barely managing to keep their heads above water despite them is the story she chose to write. A lot of other people loved it; this is more of a taste mismatch with the book than a structural flaw. But there are only so many paternalistic, abusive assholes passing themselves off as authority figures I can take in one book, and this book flew past my threshold and just kept going. Patricia and Laurence are mostly helpless against these people and have to let their worlds be shaped by them even when they know it's wrong, which makes it so, so much harder to bear.

The place where I think Anders did lose control of the plot, at least a little, is the ending. I can't fairly say that it came out of nowhere, since Anders was dropping hints throughout the book, but I did feel like it robbed the characters of agency in a way that I found emotionally unsatisfying as a reader, particularly since everyone in the book had been trying to take away Patricia's agency from nearly the first page. To have the ending then do the same thing added insult to injury in a way that I couldn't stomach. I can see the levels of symbolism knit together by this choice of endings, but, at least in my opinion, it would have been so much more satisfying, and somewhat redeeming of all the shit that Patricia had to go through, if she had been in firm control of how the symbolism came together.

This one's going to be a matter of taste, I think, and the world-building is truly excellent and much better than I had been expecting. But it's firmly in the "not for me" pile.

Rating: 5 out of 10

Keith Packard: snekboard-0.2

Sunday 28th of July 2019 08:20:36 PM
Snekboard v0.2 Update

I've built six prototypes of snekboard version 0.2. They're working great and I'm happy with the design.

New Motor Driver

Having discovered that the TI DRV8838 wasn't up to driving the Lego Power Functions Medium motor (8883) because of its start-up current draw, I went back and reworked the snekboard circuit to use the TI DRV8800 instead. That controller can provide up to 2.8A and doesn't have any trouble with this motor.

The DRV8800 is larger than the DRV8838, so it took a bit of re-wiring to fit them on the circuit board.

New Power Source Selector

In version 0.1, I was using two DFLS130L Schottky diodes to automatically select between the on-board lithium polymer battery and USB to power the board. That "worked", except that there was enough leakage back through them that when the USB connector was unplugged, the battery charge indicator LEDs both lit up, which left me with the choice of disabling those indicators or draining the battery.

To fix that, I found an automatic power selector (with current limit!) part, the TPS2121. This should avoid frying the board when you short the motor controller outputs, although those also have current limiting circuits. Defense in depth!

One issue I found was that this circuit draws current even when the output is disconnected, so I changed the power switch from an SPST to a DPST and now control USB and battery power separately.

CircuitPython

I included a W25Q16 2MB NOR flash chip on the board so that it could also run CircuitPython. Before finalizing the design, I thought it might be a good idea to actually get that running.

I've submitted a pull request with the necessary changes. I hope to see that merged at some point, which will allow users to select between CircuitPython and snek.

Smoothing Speed Changes

While the 9V supply on snekboard is designed to supply plenty of current for the motors, if you ask it to suddenly change how much it is producing, it places a huge load on the battery. When this happens, the battery voltage drops below the brown-out value for the SoC and the board resets.

I experimented with how to resolve this by ramping the power up and down in the snek application. That worked great; the motors could easily switch from full speed in one direction to full speed in the other direction.

Instead of having users add code to every snek application, I decided to move this functionality down into the snek implementation. I did this by modifying the PWM and direction pins values in a function called from the timer interrupt. This lets the application continue to run at full speed, while the motor controller slowly adjusts its output. No more resets when switching from full forward to full reverse.
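
For illustration, the application-level ramping might look roughly like the following Python-style sketch (set_motor_power() is a hypothetical stand-in for whatever call actually drives the PWM and direction pins on snekboard; the step size and delay are made up):

import time

def set_motor_power(power):
    # Hypothetical stand-in: set the PWM duty cycle and direction pin,
    # with positive values meaning forward and negative meaning reverse.
    pass

def ramp_to(current, target, step=0.05, interval=0.02):
    # Walk the output towards the target in small increments instead of
    # jumping straight there, so the 9V supply never sees a sudden load change.
    while abs(target - current) > step:
        current += step if target > current else -step
        set_motor_power(current)
        time.sleep(interval)
    set_motor_power(target)
    return target

# e.g. go from full forward to full reverse without browning out the SoC
power = ramp_to(1.0, -1.0)

Moving the same stepping logic into the timer interrupt, as described above, gives every application this behaviour without any extra code.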

Future Plans

I've got the six v0.2 prototypes that I'll be able to use for the upcoming class year, but I'm unsure whether there would be enough interest in the broader community to have more of them made. Let me know if you'd be interested in purchasing snekboards; if I get enough responses, I'll look at running them through Crowd Supply or similar.

Dirk Eddelbuettel: anytime 0.3.5

Sunday 28th of July 2019 03:37:00 PM

A new release of the anytime package is arriving on CRAN. This is the sixteenth release, and comes a good month after the 0.3.4 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release brings a reworked fallback mechanism enabled via the useR=TRUE option. Because Windows remains a challenging platform which, among other more important ailments, also does not provide timezone information, we no longer rely on the RApiDatetime package which exposes parts of the R API. This works everywhere where timezone information is available, but less so on Windows. Instead, we now use Rcpp::Function to call directly back into R. This received a considerable amount of testing, and the package should now work even better when either a timezone is set, or the Windows fallback is used, or both. My thanks to Christoph Sax for patiently testing and helping to debug this, as well as for his two pull requests contributing to this release (even if one of these is now redundant as we no longer use RApiDatetime).

The full list of changes follows.

Changes in anytime version 0.3.5 (2019-07-28)
  • Fix use of Rcpp::Function-accessed Sys.setenv(), name all arguments in call to C++ (Christoph Sax in #95).

  • Relax constraint on Windows testing in several test files (Christoph Sax in #97).

  • Fix an issue related to TZ environment variable setting (Dirk in #101).

  • Change useR=TRUE behaviour by directly calling R via Rcpp (Dirk in #103 fixing #96).

  • Several updates to unit testing files aiming for more robust behaviour across platforms.

  • Updated documentation in manual pages, README and vignette.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Joachim Breitner: Custom firmware for the YQ8003 bicycle light

Sunday 28th of July 2019 09:30:27 AM

This blog post is about 18 months late, but better late than never...

The YQ8003

1½ years ago, when I was still a daredevil biking in Philly, I got interested in these fancy strips of LED lights that you put into your bike wheel and that, when you ride fast enough, form a stable image, both because of the additional visibility and safety, and because they seem to be fun gadgets.

There are brands like Monkey Lights, but they are pretty expensive, and there are cheaper similar no-name products available, such as the YQ8003, which you can either order from China or hope to find on eBay for around $30 per piece.

The YQ8003 bike light

Sucky software

The hardware is nice: waterproof, easy to install, bright, long-lasting battery. But the software, oh my!

You need Windows to load your own pictures onto the device, and the application is really unpleasant to use: you can’t easily save your edits and sequences of images, and so on.

But also the software on the device itself (which sports a microcontroller) was unsatisfying: The transformation it applies to the image assumes that the bar of LEDs goes through the center of the wheel. Obviously that is wrong, as there is the hub. With a small hub the difference is not so bad, but I have rather large hubs (a generator in the front hub, and internal gears in the rear hub), and this makes the image unstable: it jumps back and forth a bit.

Time to DIY!

So obviously I had to do something about it. At first I planned to just find out how to load my own pictures onto the hardware, using the existing software on the device. So I needed to find out the protocol.

I was running their program on Windows in VirtualBox, and quickly noticed that the USB connection that you use to load your data onto the YQ8003 is actually a serial-over-USB port. I found a sniffer for serial communication and used that to dump what the Windows app sent to the device. That was all pretty hairy, and I only did it once (and deleted the Windows setup soon), but luckily one dump was sufficient.

I did not find out where in the data sent to the light the image was encoded. But I did find that the protocol used to talk to the device is a standard protocol for talking to microcontrollers, something called “STC ISP”. With that information, I could find out that the microcontroller is an STC12LE5A60S2 running at 22MHz with 60KB of RAM, and that it is “8051 compatible”, whatever that means.

So this is how I, for the first and so far only time, ventured into microcontroller territory. It was pretty straight-forward to get a toolchain to compile programs for this microcontroller (using sdcc) and to upload code to it (using stcgal), and I could talk to my code over the serial port. This is promising!

Reverse engineering

I also quickly found out how the magnet (which the device uses to notice when the wheel has done one rotation) is accessed: It triggers interrupt 0.

But finding out how to actually access the LEDs and make them light up was very tricky. This kind of information is not specific to the microcontroller (STC12LE5A60S2), for which I could find documentation, but really depends on how it is wired up.

From the serial port communication dump mentioned earlier, I was able to extract the firmware in a form that I could send back to the microcontroller. So I could always go back to a working state. Moreover, I could disassemble that code and try to make sense of it. But I could not make sense of it; I simply could not understand what the code was doing.

So if thinking does not help, maybe brute force does? I wrote a program that would take the working firmware and zero out parts of it. Then I would try that firmware and note whether it still worked. This way, my program would zero out ever more of the firmware, until only a few instructions were left that would still make the LEDs light up.

In the end I had, I think, 13 instructions left that made the LEDs light up lightly. Success! Or so I thought … the resulting program was pretty nonsensical. It essentially increments a value and writes another value to the address stored in the first value. So it just spews data all over the address range, wrapping around at the end. No surprise it triggers the LEDs somewhere along the way…

(Still, I published the program to minimize binary data under the name bisect-binary – maybe you’ll find it useful for something.)
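
For the curious, the zero-out-and-test loop described above boils down to something like this rough Python sketch (my own illustration, not the actual bisect-binary code, which is a proper Haskell tool; flash_and_ask() stands in for uploading the candidate image, e.g. with stcgal, and checking the LEDs by eye):

def flash_and_ask(image):
    # Hypothetical test step: write the candidate image, upload it to the
    # microcontroller by hand (or script), then report whether it still works.
    with open("candidate.bin", "wb") as f:
        f.write(image)
    return input("Do the LEDs still light up? [y/n] ").strip().lower() == "y"

def minimize(firmware, chunk=256):
    # Repeatedly try to zero out chunks of the firmware; keep a chunk zeroed
    # only if the LEDs still light up afterwards, then shrink the chunk size.
    image = bytearray(firmware)
    while chunk >= 1:
        for start in range(0, len(image), chunk):
            saved = image[start:start + chunk]
            if not any(saved):
                continue  # already all zeroes
            image[start:start + chunk] = bytes(len(saved))
            if not flash_and_ask(bytes(image)):
                image[start:start + chunk] = saved  # this chunk is needed
        chunk //= 2
    return bytes(image)

if __name__ == "__main__":
    with open("working-firmware.bin", "rb") as f:
        minimal = minimize(f.read())
    with open("minimal-firmware.bin", "wb") as f:
        f.write(minimal)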

I actually don’t remember how I eventually figured out what to do, and which bytes and bits to toggle in which order. Probably more reading, and some advice from people who know more about LEDs.

bSpokeLight

With that knowledge I could finally write my own firmware and user application. The part that goes onto the device is written in C and compiled with sdcc. And the part that runs on your computer is a command line application written in Haskell, that takes the pictures and animations you want, applies the necessary transformations (now taking the width of your hub into account!) and embeds that into the compiled C code to produce a firmware file that you can load onto your device using stcgal.

It supports images in all common formats, produces 8 colors and can store up to 8 images on the device, which then cycle according to the timing you specify. I dubbed the software bSpokeLight.
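
The hub correction mentioned above amounts to sampling the source image at each LED's true distance from the axle, rather than pretending the strip starts at the centre of the wheel. A rough Python sketch of that mapping (my own illustration with made-up parameter names, not bSpokeLight's actual code):

import math

def sample_source_pixel(img_w, img_h, angle, led_index,
                        hub_radius_mm, led_spacing_mm, num_leds):
    # Each LED sits at hub_radius + index * spacing from the axle, so we
    # normalise against the outermost LED and sample the image at that radius.
    outer_radius = hub_radius_mm + (num_leds - 1) * led_spacing_mm
    r = (hub_radius_mm + led_index * led_spacing_mm) / outer_radius  # 0..1
    cx, cy = img_w / 2.0, img_h / 2.0
    x = cx + r * cx * math.cos(angle)
    y = cy + r * cy * math.sin(angle)
    return int(min(max(x, 0), img_w - 1)), int(min(max(y, 0), img_h - 1))

With a naive mapping (as if hub_radius_mm were zero), a large hub shifts every LED outwards relative to where the image expects it, which is exactly the wobble described earlier.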

The light in action with more lights at the GPN19 (the camera's short shutter speed defeats the persistence-of-vision effect that would otherwise let you see the images well here)

It actually supports reading GIF animations, but I found that they are much harder to recognize later, unless you rotate the wheel very fast and know what to look for. I am not sure if this is a limitation of the hardware (and our eyes), a problem with my code, or a problem with the particular animations I have tried. I will need to experiment more.

Can you see the swing dancing couple?

As always, I am sharing the code in the hope that others find it useful as well. Thanks to Haskell, Nix and the iohk-nix project I can easily provide pre-compiled binaries for Windows and Linux, statically compiled for the latter for distribution-independence. Let me know if you try to use it and how that went.

Holger Levsen: 20190728-minidebcamp-fosdem

Sunday 28th of July 2019 03:23:23 AM
Mini DebCamp Fosdem 2020?

So someone from Belgium just brought up the excellent idea of having a Mini DebCamp before and/or after FOSDEM 2020. I like it! What do you think?

On the Monday after FOSDEM there will again be the Copyleft event from the SFC, so maybe 3 days of hacking before FOSDEM would be better. But still, whatever; for planning these details there's now #debconf-fosdem on OFTC.

It's just an idea, but seriously, we'd only need to rent/find a room for 23-42 hackers nearby, and we'd be set. Debian people are good at self-organizing if they have network and a roof.

Also, there might be beer in Belgium, someone from Belgium just confirmed.
