Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: RQuantLib 0.4.8: Small updates

Sunday 17th of March 2019 07:12:00 PM

A new version 0.4.8 of RQuantLib reached CRAN and Debian. This release was triggered by a CRAN request for an update to the configure.ac script which was easy enough (and which, as it happens, did not result in changes in the configure script produced). I also belatedly updated the internals of RQuantLib to follow suit to an upstream change in QuantLib. We now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

In other news, we finally have a macOS binary package on CRAN. After several rather frustrating months of inaction on the pull request put together to enable this, it finally happened last week. Yay. So CRAN currently has a 0.4.7 macOS binary and should get one based on this release shortly. With Windows restored with the 0.4.7 release, we are in the best shape we have been in for years. Yay and three cheers for Open Source and open collaboration models!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.8 (2019-03-17)
  • Changes in RQuantLib code:

    • Source code supports Boost shared_ptr and C++11 shared_ptr via the QuantLib::ext namespace, like upstream.
  • Changes in RQuantLib build system:

    • The configure.ac file no longer upsets R CMD check; the change does not actually change configure.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: Rcpp 1.0.1: Updates

Sunday 17th of March 2019 01:49:00 PM

Following up on the 10th anniversary and the 1.0.0 release, we are excited to share the news of the first update release 1.0.1 of Rcpp. It arrived at CRAN overnight, Windows binaries have already been built, and I will follow up shortly with the Debian binary.

We had four years of regular bi-monthly releases leading up to 1.0.0, and have now taken four months since the big 1.0.0 release. Maybe three (or even just two) releases a year will establish itself as a natural cadence. Time will tell.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1598 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 152 in BioConductor release 3.8. Per the (partial) logs of CRAN downloads, we currently average 921,000 downloads a month.

This release features a number of different pull requests, as detailed below.

Changes in Rcpp version 1.0.1 (2019-03-17)
  • Changes in Rcpp API:

    • Subsetting is no longer limited by an integer range (William Nolan in #920 fixing #919).

    • Error messages from subsetting are now more informative (Qiang and Dirk).

    • Shelter increases count only on non-null objects (Dirk in #940 as suggested by Stepan Sindelar in #935).

    • AttributeProxy::set() and a few related setters get Shield<> to ensure rchk is happy (Romain in #947 fixing #946).

  • Changes in Rcpp Attributes:

    • A new plugin was added for C++20 (Dirk in #927).

    • Fixed an issue where 'stale' symbols could become registered in RcppExports.cpp, leading to linker errors and other related issues (Kevin in #939 fixing #733 and #934).

    • The wrapper macro gets an UNPROTECT to ensure rchk is happy (Romain in #949 fixing #948).

  • Changes in Rcpp Documentation:

    • Three small corrections were added in the 'Rcpp Quickref' vignette (Zhuoer Dong in #933 fixing #932).

    • The Rcpp-modules vignette now has documentation for .factory (Ralf Stubner in #938 fixing #937).

  • Changes in Rcpp Deployment:

    • Travis CI again reports to CodeCov.io (Dirk and Ralf Stubner in #942 fixing #941).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andy Simpkins: Race for science: A Prostate Cancer Research Centre charity challenge

Saturday 16th of March 2019 04:18:51 PM

This post may feel like an advert – to an extent it is. I won’t plug events very often; however, this is a charity event in aid of prostate cancer research and I genuinely think it will be great fun for anybody able to take part, hence the unashamed plug. The Race for Science will happen on 30th March in and around Cambridge. There is still time to enter, have fun and raise some money for a deserving charity at the same time.

A few weeks ago friends of ours, Jo (Randombird) and Josh, were celebrating their birthdays. We split into a couple of groups and had a go at a couple of escape room challenges at Lock House Games. Four of us – Josh, Isy, Jane and myself – had a go at the Egyptian Tomb. Whilst this was the 2nd time Isy had tried an escape room, the rest of us were n00bs. I won’t describe the puzzles inside because that would spoil the enjoyment of anyone who later goes on to play the game. We did make it out of the room inside our allotted time with only a couple of hints. It was great fun and is suitable for all ages: our group ranged from 12 to 46, and it would work well for all-adult teams as well as more mature children.

We escaped the tomb…

Anyway, having had a good time, and whilst we waited for the other group to finish their challenge, I spotted a request for people to beta test some puzzles, a couple of days later, for the Prostate Cancer Research Centre‘s 2019 Race For Science.

I volunteered my time and re-arranged my Monday so that I could spend the afternoon trialling the escape room elements of the challenge. Some great people are involved, and the puzzles were very good indeed. I visited 3 different locations in Cambridge, each with very different puzzles to solve. One challenge stood head and shoulders above the others; not that the others were poor – they are great, at least as good as a commercial escape room. The reason this challenge was so outstanding wasn’t the challenge itself: it was because it had been set specifically for the venue that will host it, using the venue as part of the puzzle (the Race For Science is a scavenger hunt and contains several challenges at different locations within the city).

Beta testing one of the puzzles

Last Thursday I went back into town in the afternoon to beta test yet another location – once again this was outstanding, and the challenge differed from those that I had trialled the previous week. This was another fantastic puzzle to solve and took full advantage of its location, both for its ‘back story’ and for the puzzle itself. After this we moved on to testing the scavenger hunt part of the event. This is played on the streets of the city on foot; following clues will take you in and around some of the city’s museums and landmarks – and will unlock access to the escape room challenges I had been testing earlier. My only concern is that it is played using a browser on a mobile device (i.e. a phone). I had to move around a bit in some locations to ensure that I had adequate signal. You may want to make sure that you have a fully charged battery!

The event is open to teams of up to 6 people and will take the form of an “immersive scavenger hunt adventure”. Unfortunately I cannot take part as I have already played the game, but there is still time for you to register and take part. Anyway, if you are able to get to Cambridge at the end of the month, please enter the Race for Science.

Dirk Eddelbuettel: littler 0.3.7: Small tweaks

Saturday 16th of March 2019 12:15:00 AM

The eighth release of littler as a CRAN package is now available, following in the thirteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript converted to only rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!).

A few examples are highlighted at the GitHub repo, as well as in the examples vignette.
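
For instance, two trivial sketches of the piping and shebang styles mentioned above (the expression and file name here are merely examples):

echo 'cat(2 + 2, "\n")' | r    # pipe an R expression into littler's r binary

#!/usr/bin/env r
## hello.r -- a minimal shebang-style littler script
cat("Hello from littler\n")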

This release brings a small update (thanks to Gergely) to the scripts install2.r and installGithub.r, allowing more flexible setting of repositories, and fixes a minor nag from CRAN concerning autoconf programming style.

The NEWS file entry is below.

Changes in littler version 0.3.7 (2019-03-15)
  • Changes in examples

    • The scripts installGithub.r and install2.r get a new option -r | --repos (Gergely Daroczi in #67)
  • Changes in build system

    • The AC_DEFINE macro use rewritten to please R CMD check.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Arnaud Rebillout: Building your Pelican website with schroot

Saturday 16th of March 2019 12:00:00 AM

Lately I moved my blog to Pelican. I really like how simple and flexible it is. So in this post I'd like to highlight one particular aspect of my Pelican workflow: how to set up a Debian-based environment to build your Pelican website, and how to leverage Pelican's Makefile to transparently use this build environment. Overall, this post has more to do with the Debian tooling, and little to do with Pelican.

Introduction

First things first: why would you set up a build environment for your project?

Imagine that you run Debian stable on your machine, then you want to build your website with a fancy theme, that requires the latest bleeding edge features from Pelican. But hey, in Debian stable you don't have these shiny new things, and the version of Pelican you need is only available in Debian unstable. How do you handle that? Will you start messing around with apt configuration and pinning, and try to install an unstable package in your stable system? Wrong, please stop.

Another scenario, the opposite: you run Debian unstable on your system. You have all the new shiny things, but sometimes an update of your system might break things. What if you update, and then can't build your website because of this or that? Will you wait a few days until another update comes and fixes everything? How many days before you can build your blog again? Or will you dive in the issues, and debug, which is nice and fun, but can also keep you busy all night, and is not exactly what you wanted to do in the first place, right?

So, for both of these issues, there's one simple answer: setup a build environment for your project. The most simple way is to use a chroot, which is roughly another filesystem hierarchy that you create and install somewhere, and in which you will run your build process. A more elaborate build environment is a container, which brings more isolation from the host system, and many more features, but for something as simple as building your website on your own machine, it's kind of overkill.

So that's what I want to detail here, I'll show you the way to setup and use a chroot. There are many tools for the job, and for example Pelican's official documentation recommends virtualenv, which is kind of the standard Python solution for that. However, I'm not too much of a Python developer, and I'm more familiar with the Debian tools, so I'll show you the Debian way instead.

Version-wise, it's 2019, we're talking about Pelican 4.x, if ever it matters.

Create the chroot

To create a basic, minimal Debian system, the usual command is debootstrap. Then in order to actually use this new system, we'll use schroot. So be sure to have these two packages installed on your machine.

sudo apt install debootstrap schroot

It seems that the standard location for chroots is /srv/chroot, so let's create our chroot there. It also seems that the traditional naming scheme for these chroots is something like SUITE-ARCH-APPLICATION, at least that's what other tools like sbuild do. While you're free to do whatever you want, in this tutorial we'll try to stick to the conventions.

Let's go and create a buster chroot:

SYSROOT=/srv/chroot/buster-amd64-pelican
sudo mkdir -p ${SYSROOT:?}
sudo debootstrap --variant=minbase buster ${SYSROOT:?}

And there we are, we just installed a minimal Debian system in $SYSROOT, how easy and neat is that! Just run a quick ls there, and see by yourself:

ls ${SYSROOT:?}

Now let's setup schroot to be able to use it. schroot will require a bit of a configuration file that tells it how to use this chroot. This is where things might get a bit complicated and cryptic for the newcomer.

So for now, stick with me, and create the schroot config file as follow:

cat << EOF | sudo tee /etc/schroot/chroot.d/buster-amd64-pelican.conf
[buster-amd64-pelican]
users=$LOGNAME
root-users=$LOGNAME
source-users=$LOGNAME
source-root-users=$LOGNAME
type=directory
union-type=overlay
directory=/srv/chroot/buster-amd64-pelican
EOF

Here, we tell schroot who can use this chroot ($LOGNAME, it's you), as normal user and root user. We also say where is the chroot directory located, and that we want an overlay, which means that the chroot will actually be read-only, and during operation a writable overlay will be stacked up on top of it, so that modifications are possible, but are not saved when you exit the chroot.

In our case, it makes sense because we have no intention to modify the build environment. The basic idea with a build environment is that it's identical for every build, we don't want anything to change, we hate surprises. So we make it read-only, but we also need a writable overlay on top of it, in case some process might want to write things in /var, for example. We don't care about these changes, so we're fine discarding this data after each build, when we leave the chroot.

And now, for the last step, let's install Pelican in our chroot:

schroot -c source:buster-amd64-pelican -u root -- \
    bash -c "apt update && apt install --yes make pelican && apt clean"

In this command, we log into the source chroot as root, and we install the two packages make and pelican. We also clean up after ourselves, to save a bit of space on the disk.

At this point, our chroot is ready to be used. If you're new to all of this, then read the next section, I'll try to explain a bit more how it works.

A quick introduction to schroot

In this part, let me try to explain a bit more how schroot works. If you're already acquainted, you can skip this part.

So now that the chroot is ready, let's experiment a bit. For example, you might want to start by listing the chroots available:

$ schroot -l
chroot:buster-amd64-pelican
source:buster-amd64-pelican

Interestingly, there are two of them... So, this is due to the overlay thing that I mentioned just above. Using the regular chroot (chroot:) gives you the read-only version, for daily use, while the source chroot (source:) allows you to make persistent modifications to the filesystem, for install and maintenance basically. In effect, the source chroot has no overlay mounted on top of it, and is writable.

So you can experiment some more. For example, to have a shell into your regular chroot, run:

$ schroot -c chroot:buster-amd64-pelican

Notice that the namespace (eg. chroot: or source:) is optional, if you omit it, schroot will be smart and choose the right namespace. So the command above is equivalent to:

$ schroot -c buster-amd64-pelican

Let's try to see the overlay thing in action. For example, once inside the chroot, you could create a file in some writable place of the filesystem.

(chroot)$ touch /var/tmp/this-is-an-empty-file
(chroot)$ ls /var/tmp
this-is-an-empty-file

Then log out with <Ctrl-D>, and log in again. Have a look in /var/tmp: the file is gone. The overlay in action.

Now, there's a bit more to that. If you look into the current directory, you will see that you're not within any isolated environment, you can still see all your files, for example:

(chroot)$ pwd
/home/arno/my-pelican-blog
(chroot)$ ls
content  Makefile  pelicanconf.py  ...

Not only are all your files available in the chroot, you can also create new files, delete existing ones, and so on. It doesn't even matter if you're inside or outside the chroot, and the reason is simple: by default, schroot will mount the /home directory inside the chroot, so that you can access all your files transparently. For more details, just type mount inside the chroot, and see what's listed.

So, this default of schroot is actually what makes it super convenient to use, because you don't have to bother about bind-mounting every directory you care about inside the chroot, which is actually quite annoying. Having /home directly available saves time, because what you want to isolate are the tools you need for the job (so basically /usr), but what you need is the data you work with (which is supposedly in /home). And schroot gives you just that, out of the box, without having to fiddle too much with the configuration.
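
For the record, this default comes from the fstab of the profile schroot uses. A sketch of what /etc/schroot/default/fstab typically contains is below; exact entries may vary between schroot versions, so treat this as illustrative:

# /etc/schroot/default/fstab (excerpt)
# <filesystem>  <mount point>   <type>  <options>       <dump>  <pass>
/proc           /proc           none    rw,bind         0       0
/sys            /sys            none    rw,bind         0       0
/dev/pts        /dev/pts        none    rw,bind         0       0
/home           /home           none    rw,bind         0       0
/tmp            /tmp            none    rw,bind         0       0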

If you're not familiar with chroots, containers, VMs, or more generally bind mounts, maybe it's still very confusing. But you'd better get used to it, as virtual environments are very standard in software development nowadays.

But anyway, let's get back to the topic. How to make use of this chroot to build our Pelican website?

Chroot usage with Pelican

Pelican provides two helpers to build and manage your project: one is a Makefile, and the other is a Python script called fabfile.py. As I said before, I'm not really a seasoned Pythonista, but it happens that I'm quite a fan of make, hence I will focus on the Makefile for this part.

So, here's what your daily blogging workflow might look like, now that everything is in place.

Open a first terminal, and edit your blog posts with your favorite editor:

$ nano content/bla-bla-bla.md

Then open a second terminal, enter the chroot, build your blog and serve it:

$ schroot -c buster-amd64-pelican
(chroot)$ make html
(chroot)$ make serve

And finally, open your web browser at http://localhost:8000 and enjoy yourself.

This is easy and neat, but guess what, we can even do better. Open the Makefile and have a look at the very first lines:

PY?=python3
PELICAN?=pelican

It turns out that the Pelican developers know how to write Makefiles, and they were kind enough to allow their users to easily override the default commands. In our case, it means that we can just replace these two lines with these ones:

PY?=schroot -c buster-amd64-pelican -- python3
PELICAN?=schroot -c buster-amd64-pelican -- pelican

And after these changes, we can now completely forget about the chroot, and simply type make html and make serve. The chroot invocation is now handled automatically in the Makefile. How neat!

Maintenance

So you might want to update your chroot from time to time, and you do that with apt, like for any Debian system. Remember the distinction between regular chroot and source chroot due to the overlay? If you want to actually modify your chroot, what you want is the source chroot. And here's the one-liner:

schroot -c source:$PROJECT -u root -- \
    bash -c "apt update && apt --yes dist-upgrade && apt clean"

If one day you stop using it, just delete the chroot directory, and the schroot configuration file:

sudo rm /etc/schroot/chroot.d/buster-amd64-pelican.conf
sudo rm -fr /srv/chroot/buster-amd64-pelican

And that's about it.

Last words

The general idea of keeping your build environments separated from your host environment is a very important one if you're a software developer, and especially if you're doing consulting and working on several projects at the same time. Installing all the build tools and dependencies directly on your system can work at the beginning, but it won't get you very far.

schroot is only one of the many tools that exist to address this, and I think it's Debian specific. Maybe you have never heard of it, as chroots in general are far from the container hype, even though they have some common use-cases. schroot has been around for a while, it works great, it's simple, flexible, what else? Just give it a try!

It's also well integrated with other Debian tools, for example you might use it through sbuild to build Debian packages (another daily task that is better done in a dedicated build environment), so I think it's a tool worth knowing if you're doing some Debian work.

That's about it, in the end it was mostly a post about schroot, hope you liked it.

Romain Perier: Hello planet !

Friday 15th of March 2019 07:50:10 PM
Introducing myself
My name is Romain. I have recently been nominated to the status of Debian Maintainer. I have been part of the debian-kernel team (still a padawan) for a few months and, as a DM, I will co-maintain the package raspi3-firmware with Gunnar Wolf.

Current contributions
As a kernel and Linux engineer, I focus on embedded stuff and kernel development. This is a summary of what I have done in the previous months.
Kernel team
As a contributor, I work on various things; I try to work where help is needed the most. I have written a Python script for generating the Debian changelog in firmware-nonfree, and I have bumped the package for new releases. I bump the Linux kernel for new upstream releases, help to close and resolve bugs, backport new features when it makes sense to do so, enable new hardware, and recently I added a new flavour for the RPI 1 and RPI Zero in armel! (spoiler)
Raspi3-firmware
I have recently added a new mode in the configuration file of the package that lets you decide what you would like to boot from the firmware. You can either boot a Linux kernel directly, passing the address of the initramfs to use, a baremetal application, or a second-level bootloader like U-Boot or barebox (personally I prefer U-Boot). From U-Boot you can then use extlinux and get a nice menu generated by u-boot-menu. I have also added support for using the devicetree blob of the RPI 1 and the RPI Zero W when the firmware boots the kernel directly. I am also participating in reducing lintian warnings, preparing new upstream releases, and making improvements in general.
U-Boot
I have recently sent an MR enabling support for the RPI Zero W in U-Boot for armel, and it was accepted (thanks to Vagrant). As I use U-Boot every day on my boards, I will probably send other MRs ;)

Raspberry Pi Zero
As described above, I have added a flavour enabling support for the RPI 1 and RPI Zero in armel for Linux 4.19.x. Like the Raspberry Pi 3, there are no official images for this, but you can use debos or vmdb2 to build a buster image for your Pi Zero. I have personally tried it at home: I was able to run an LXDE session with llvmpipe (I am still investigating whether vc4 in Gallium works for this SoC; while it works perfectly fine for the Pi 3, it falls back to llvmpipe on the Pi Zero).

Raspberry Pi 3
As posted on planet recently by Gunnar, you can find an unofficial image for the Pi 3 if you want to try it. On buster you will be able to run a 4.19.x LTS kernel with excellent DRM/KMS support and Gallium support in Mesa. I was able to run an LXDE session with VC4 Gallium here!
Future work
I will try my best to get excellent support for all Raspberry Pis in Debian (with unofficial images at the beginning), including kernel support, kernel bug fixes and improvements, debos and/or vmdb2 recipes for generating buster images easily, and even graphical stack hacks :) . I will continue my work in the kernel team, because there are tons of things to do, and of course, as co-maintainer, maintain raspi3-firmware (which will probably be renamed to something more generic, *spoiler*).

John Goerzen: The Rightward, Establishment Bias of Lazy Journalism

Friday 15th of March 2019 03:21:37 PM

Note: I also posted this post on medium.

I remember clearly the moment I’d had enough of NPR for the day. It was early morning January 25 of this year, still pretty dark outside. An NPR anchor was interviewing an NPR reporter — they seem to do that a lot these days — and asked the following simple but important question:

“So if we know that Roger Stone was in communications with WikiLeaks and we know U.S. intelligence agencies have said WikiLeaks was operating at the behest of Russia, does that mean that Roger Stone has been now connected directly to Russia’s efforts to interfere in the U.S. election?”

The factual answer, based on both data and logic, would have been “yes”. NPR, in fact, had spent much airtime covering this; for instance, a June 2018 story goes into detail about Stone’s interactions with WikiLeaks, and less than a week before Stone’s arrest, NPR referred to “internal emails stolen by Russian hackers and posted to Wikileaks.” In November of 2018, The Atlantic wrote, “Russia used WikiLeaks as a conduit — witting or unwitting — and WikiLeaks, in turn, appears to have been in touch with Trump allies.”

Why, then, did the NPR reporter begin her answer with “well,” proceed to hedge, repeat denials from Stone and WikiLeaks, and then wind up saying “authorities seem to have some evidence” without directly answering the question? And what does this mean for bias in the media?

Let us begin with a simple principle: facts do not have a political bias. Telling me that “the sky is blue” no more reflects a Democratic bias than saying “3+3=6” reflects a Republican bias. In an ideal world, politics would shape themselves around facts; ideas most in agreement with the data would win. There are not two equally-legitimate sides to questions of fact. There is no credible argument against “the earth is round”, “climate change is real,” or “Donald Trump is an unindicted co-conspirator in crimes for which jail sentences have been given.” These are factual, not political, statements. If you feel, as I do, a bit of a quickening pulse and escalating tension as you read through these examples, then you too have felt the forces that wish you to be uncomfortable with unarguable reality.

That we perceive some factual questions as political is a sign of a deep dysfunction in our society. It’s a sign that our policies are not always guided by fact, but that a sustained effort exists to cause our facts to be guided by policy.

Facts do not have a political bias. There are not two equally-legitimate sides to questions of fact. “Climate change is real” is a factual, not a political, statement. Our policies are not always guided by fact; a sustained effort exists to cause our facts to be guided by policy.

Why did I say right-wing bias, then? Because at this particular moment in time, it is the political right that is more engaged in the effort to shape facts to policy. Whether it is denying the obvious lies of the President, the clear consensus on climate change, or the contours of various investigations, it is clear that they desire to confuse and mislead in order to shape facts to their whim.

It’s not always so consequential when the media gets it wrong. When CNN breathlessly highlights its developing story — that an airplane “will struggle to maintain altitude once the fuel tanks are empty” —it gives us room to critique the utility of 24/7 media, but not necessarily a political angle.

Hey thanks CNN for making sure we all knew the crucial fact that planes cannot fly on empty fuel tanks pic.twitter.com/P6uXuTCUgI

— Katie Pavlich (@KatiePavlich) March 31, 2014

But ask yourself: who benefits when the media is afraid to report a simple fact about an investigation with political connotations? The obvious answer, in the NPR example I gave, is that Republicans benefit. They want the President to appear innocent, so every hedge on known facts about illegal activities of those in Trump’s orbit is a gift to the right. Every time a reporter gives equal time to climate change deniers is a gift to the right and a blow to informed discussion in a democracy.

Not only is there a rightward bias, but there is also an establishment bias that goes hand-in-hand. Consider this CNN report about Facebook’s “pivot to privacy”, in which CEO Zuckerberg is credited with “changing his tune somewhat”. To the extent to which that article highlights “problems” with this, they take Zuckerberg at face-value and start to wonder if it will be harder to clamp down on fake news in the news feed if there’s more privacy. That is a total misunderstanding of what was being proposed; a more careful reading of the situation was done by numerous outlets, resulting in headlines such as this one in The Intercept: “Mark Zuckerberg Is Trying to Play You — Again.” They correctly point out the only change actually mentioned pertained only to instant messages, not to the news feed that CNN was talking about, and even that had a vague promise to happen “over the next few years.” Who benefited from CNN’s failure to read a press release closely? The established powers — Facebook.

Pay attention to the media and you’ll notice that journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots. Much has been written about the “media narrative,” often critical, with good reason. Back in November of 2018, an excellent article on “The Unbearable Rightness of Seth Abramson” covered one particular case in delightful detail.

Journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots.

Seth Abramson himself wrote, “Trump-Russia is too complex to report. We need a new kind of journalism.” He argues the culprit is not laziness, but rather that “archive of prior relevant reporting that any reporter could review before they publish their own research is now so large and far-flung that more and more articles are frustratingly incomplete or even accidentally erroneous than was the case when there were fewer media outlets, a smaller and more readily navigable archive of past reporting for reporters to sift through, and a less internationalized media landscape.” Whether laziness or not, the effect is the same: a failure to properly contextualize facts leading to misrepresented or outright wrong outcomes that, at present, have a distinct bias towards right-wing and establishment interests.

Yes, the many scandals in Trumpland are extraordinarily complex, and in this age of shrinking newsroom budgets, it’s no wonder that reporters have trouble keeping up. Highly-paid executives like Zuckerberg and politicians in Congress have years of practice with obfuscation, and it takes skill to find the truth (if there even is any) behind a corporate press release or political talking point. One would hope, though, that reporters would be less quick to opine if they lack those skills or the necessary time to dig in.

There’s not just laziness; there’s also, no doubt, a confusion about what it means to be a balanced journalist. It is clear that there are two sides to a debate over, say, whether to give a state’s lottery money to the elementary schools or the universities. When there is the appearance of a political debate over facts, shouldn’t that also receive equal time for each side? I argue no. In fact, politicians making claims that contradict established fact should be exposed by journalists, not covered by them.

And some of it is, no doubt, fear. Fear that if they come out and say “yes, this implicates Stone with Russian hacking” that the Fox News crowd will attack them as biased. Of course this will happen, but that attack will be wrong. The right has done an excellent job of convincing both reporters and the public that there’s a big left-leaning bias that needs to be corrected, by yelling about it every time a fact is mentioned that they don’t like. The unfortunate result is that the fact-leaning bias in the media is being whittled away.

Politicians making claims that contradict established fact should be exposed by journalists, not covered by them. The fact-leaning bias in the media is being whittled away.

Regardless of the cause, media organizations and their reporters need to be cognizant of the biases actors of all stripes wish them to display, and refuse to play along. They need to be cognizant of the demands they put on their own reporters, and give them space to understand the context of a story before explaining it. They need to stand up to those that try to diminish facts, to those that would like them to be uninformed.

A world in which reporters know the context of their stories and boldly state facts as facts, come what may, is a world in which reporters strengthen the earth’s democracies. And, by extension, its people.

Ritesh Raj Sarraf: Linux Desktop Usage 2019

Friday 15th of March 2019 02:35:06 PM

If I look back now, it must be more than 20 years since I got fascinated with GNU/Linux ecosystem and started using it.

Back then, it was more the curiosity of a young teenager and the excitement to learn something. There’s one thing that I have always admired/respected about Free Software’s values: access for everyone to learn. This is something I never forget and still try to do my bit towards.

It was perfect timing and I was lucky to be part of it. Free Software was (and still is) a great platform to learn upon, if you have the willingness and desire for it.

Over the years, a lot lot lot has changed, evolved and improved. From the days of writing down the XF86Config configuration file to get the X server running, to a new world where now everything is almost dynamic, is a great milestone that we have achieved.

All through these years, I always used GNU/Linux platform as my primary computing platform. The CLI, Shell and Tools, have all been a great source of learning. Most of the stuff was (and to an extent, still is) standardized and focus was usually on a single project.

There was less competition on that front, rather there was more collaboration. For example, standard tools like: sed, awk, grep etc were single tools. Like you didn’t have 2 variants of it. So, enhancements to these tools was timely and consistent and learning these tools was an incremental task.

On the other hand, the Desktop side of things started out, and stood for a very long time, doing things its own way. But eventually, quite a lot of those things have standardized, thankfully.

For the larger part of my desktop usage, I have mostly been a KDE user. I have used other environments like IceWM and Enlightenment briefly but always felt the need to fall back to KDE, as it provided a full and uniform solution. For quite some time, I was more of a user preferring to only use the K* tools, as in: if it wasn’t written with kdelibs, I’d try to avoid it. But in the last 5 years, I took a detour and tried to unlearn and re-learn the other major desktop environment, GNOME.

GNOME is an equally beautiful and elegant desktop environment with a minimalistic user interface (but which at many times ends up plaguing its application’s feature set too, making it “minimalistic feature set applications”). I realized that quite a lot of time and money is invested into the GNOME project, especially by the leading Linux Distribution Vendors.

But the fact is that GNU/Linux is still not a major player on the Desktop market. Some believe that the Desktop Market itself has faded and been replaced by the Mobile market. I think Desktop Computing still has a critical place in the near foreseeable future and the Mobile Platform is more of an extension shell to it. For example, for quickies, the Mobile platform is perfect. But for a substantial amount of work to be done, we still fallback to using our workstations. Mobile platform is good for a quick chat or email, but if you need to write a review report or a blog post or prepare a presentation or update an excel sheet, you’d still prefer to use your workstation.

So… after using the GNOME platform for a couple of years, I realized that there’s a lot of work and thought put into this platform too, just like the KDE platform. BUT to really be able to dream about the “Year of the dominance of the GNU/Linux desktop platform”, all these projects need to work together and synergise their efforts.

Pain points:

  • Multiple tools, multiple efforts wasted. Could be synergised.
  • Data accessibility
  • Integration and uniformity
Multiple tools

Kmail used to be an awesome email client. Evolution today is an awesome email client. Thunderbird was an awesome email client which, from what I last remember, Mozilla lacked the funds to continue maintaining. And then there’s the never-ending stream of new/old projects that come and go. Thankfully, email is pretty standardized in its data format; otherwise, it would be a nightmare to switch between these clients. But still, GNU/Linux platforms have the potential to provide a strong and viable offering if they could synergise their work. Today, a lot of resource is just wasted and nobody wins. Definitely not the GNU/Linux platform. Who wins are GMail, Hotmail etc.

If you even look at the browser side of things, Google realized the potential of the Web platform for its business. So they do have a Web client for GNU/Linux. But you’ll never see an equivalent for Email/PIM. Not because it is obsolete. But more because it would hurt their business instead.

Data accessibility

My biggest gripe is data accessibility. Thankfully, for most of the stuff that we rely upon (email, documents etc), things are standardized. But there still are annoyances. For example, when the KDE 4.x debacle occurred, kwallet could not export its password database to the newer one. When I moved to GNOME, I had another very, very hard time extracting passwords from kwallet and feeding them to Seahorse. Then, when I recently switched back to KDE, I had to similarly struggle exporting my data back from Seahorse (no, not back to KWallet). Over the years, I realized that critical data should be kept in its simplest format, and the front-ends can do all the bling they want on top. I realized this more with email: Maildir is a good clean format to store my email in, irrespective of how I access it. Whether it is dovecot, Evolution, Akonadi, Kmail etc, I still have my bare data intact.

I had burnt myself on the password front quite a bit, so on this migration back to KDE, I wanted an email-like solution. There’s pass, a password store, which fits the bill just like the email use case. It would make a lot more sense for all desktop password managers to instead just be a frontend interface to pass and let it keep the crucial data in a bare minimal format, accessible at all times, irrespective of the overhauling that the desktop projects tend to do every couple of years or so.
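
To illustrate (a minimal sketch; the entry names are just examples), the day-to-day interface of pass stays deliberately close to the filesystem, with one GPG-encrypted file per secret under ~/.password-store:

pass init "My-GPG-Key-ID"        # one-time setup, encrypt the store to your GPG key
pass insert email/example.org    # add a password under a plain directory hierarchy
pass show email/example.org      # print it back; any frontend could reuse the same files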

Data is critical. Without retaining its compatibility (both backwards and forward), no battle can you win.

I honestly feel the Linux Desktop Architects from the different projects should sit together and agree on a set of interfaces/tools (yes yes there is fd.o) and stick to it. Too much time and energy is wasted otherwise.

Integration and Uniformity

This is something I have always desired, and I was quite impressed (and delighted) to see some progress on the KDE desktop in the UI department. On GNOME, I developed a liking for the Evolution email client. In fact, it is currently my client for email, NNTP and other PIM. And I still get to use it nicely in a KDE environment. Thank you.

Enrico Zini: gitpython: list all files in a git commit

Friday 15th of March 2019 10:41:40 AM

A little gitpython recipe to list the paths of all files in a commit:

#!/usr/bin/python3
import git
from pathlib import Path
import sys

def list_paths(root_tree, path=Path(".")):
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)

repo = git.Repo(".", search_parent_directories=True)
commit = repo.commit(sys.argv[1])
for path in list_paths(commit.tree):
    print(path)

It can be a good base, for example, for writing a script that, given two git branches, shows which django migrations are in one and not in the other, without doing any git checkout of the code.
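
A minimal sketch of that idea (the branch names and the <app>/migrations/*.py path convention are assumptions for illustration; it reuses the list_paths() helper from the recipe above):

#!/usr/bin/python3
# Sketch: list django migrations present in one branch but not in another,
# without checking either branch out.
import git
from pathlib import Path

def list_paths(root_tree, path=Path(".")):
    # same helper as in the recipe above
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)

def migrations(repo, refname):
    # keep only paths that look like <app>/migrations/<name>.py
    return {
        p for p in list_paths(repo.commit(refname).tree)
        if "migrations" in p.parts and p.suffix == ".py" and p.name != "__init__.py"
    }

repo = git.Repo(".", search_parent_directories=True)
only_in_first = migrations(repo, "master") - migrations(repo, "feature-branch")
for path in sorted(only_in_first):
    print(path)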

Dirk Eddelbuettel: #20: Dependencies. Now with badges!

Friday 15th of March 2019 03:41:00 AM

Welcome to post number twenty in the randomly redundant R rant series of posts, or R4 for short. It has been a little quiet since the previous post last June as we’ve been busy with other things but a few posts (or ideas at least) are queued.

Dependencies. We wrote about this a good year ago in post #17 which was (in part) tickled by the experience of installing one package … and getting a boatload of others pulled in. The topic and question of dependencies has seen a few posts over the year, and I won’t be able to do them all justice. Josh and I have added a few links to the tinyverse.org page. The (currently) last one by Russ Cox titled Our Software Dependency Problem is particularly trenchant.

And just this week the topic came up in two different, and unrelated, posts. First, in What I don’t like in your repo, Oleg Kovalov lists a brief but decent number of items by which a repository can be evaluated. And one is about [b]loated dependencies where he nails it with a quick When I see dozens of deps in the lock file, the first question which comes to my mind is: so, am I ready to fix any failures inside any of them? This is pretty close to what we have been saying around the tinyverse.

Second, in Beware the data science pin factory, Eric Colson brings an equation. Quoting from footnote 2: […] the number of relationships (r) grows as a function of the number of members (n) per this equation: r = (n^2-n) / 2. Granted, this was about human coordination and ideal team size. But let’s just run with it: For n=10, we get r=45 which is not so bad. For n=20, it is r=190. And for n=30 we are at r=435. You get the idea. “Big-Oh-N-squared”.

More dependencies means more edges between more nodes. Which eventually means more breakage.

Which gets us to announcement embedded in this post. A few months ago, in what still seems like a genuinely extra-clever weekend hack in an initial 100 or so lines, Edwin de Jonge put together a remarkable repo on GitLab. It combines Docker / Rocker via hourly cron jobs with deployment at netlify … giving us badges which visualize the direct as well as recursive dependencies of a package. All in about 100 lines, fully automated, autonomously running and deployed via CDN. Amazing work, for which we really need to praise him! So a big thanks to Edwin.

With these CRAN Dependency Badges being available, I have been adding them to my repos at GitHub over the last few months. As two quick examples you can see

  • Rcpp
  • RcppArmadillo

to get the idea. RcppArmadillo (or RcppEigen or many other packages) will always have one: Rcpp. But many widely-used packages such as data.table also get by with a count of zero. It is worth showing this – and the badge does just that! And I even sent a PR to the badger package: if you’re into this, you can have a badge made for yours via badger::badge_dependencies(pkgname).

Otherwise, more details at Edwin’s repo and of course his actual tinyverse.netlify.com site hosting the badges. It’s as easy as all other badges: reference the CRAN package, get a badge.
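
For example, embedding such a badge in a README might look roughly like this (the badge/PACKAGE URL pattern is assumed from the site above; double-check the canonical form at Edwin’s repo):

[![Dependencies](https://tinyverse.netlify.com/badge/Rcpp)](https://cran.r-project.org/package=Rcpp)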

So if you buy into the idea that lightweight is the right weight then join us and show it via the dependency badges!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Hideki Yamane: pbuilder hack with new debootstrap option

Friday 15th of March 2019 02:17:59 AM
Suddenly I noticed that maybe I could use the --cache-dir option that I added to debootstrap some time ago for pbuilder, too. Then I hacked it.

> original
real    3m34.811s
user    1m6.676s
sys     0m33.051s

> use aptcache for debootstrap
real    2m52.397s
user    0m59.660s
sys     0m28.631s
It cuts 40s off creating base.tgz. Nice, isn't it? :) I hope the pbuilder team will accept this merge request and push it to buster, since it's worthwhile for the stable release, IMHO.
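
The actual merge request may wire things up differently, but as a rough sketch of the idea using pbuilder's existing DEBOOTSTRAPOPTS hook (the cache path is just an example):

# ~/.pbuilderrc -- pass the debootstrap option through when base.tgz is created
DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--cache-dir=/var/cache/pbuilder/aptcache")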

Ben Hutchings: Debian LTS work, February 2019

Thursday 14th of March 2019 11:46:50 PM

I was assigned 19.5 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from January. I worked only 4 hours and so will carry over 16.5 hours.

I backported various security fixes to Linux 3.16, but did not upload a new release yet.

Craig Small: WordPress 5.1.1

Thursday 14th of March 2019 11:09:07 AM

The Debian packages for WordPress version 5.1.1 are being updated as I write this. This is a security fix for WordPress that stops comments causing a cross-site scripting bug. It’s an important one to update.

The backports should happen soon so even if you are using Debian stable you’ll be covered.

Shirish Agarwal: The road to hell is paved with good intentions

Wednesday 13th of March 2019 11:41:25 PM

First of all I would like to share a video which I should have talked about or added in the ‘Celebrating National Science Day at GMRT‘ blog post. It’s a documentary called ‘The Most Unknown‘. It’s a great documentary as it gives you a glimpse of how much there is yet to discover. The reason I shared this is that I have seen a lot of money being removed from Government research and put god knows where. Just a fair warning: this will be a somewhat long conversation.

Almost all of the IITs are in bad shape; in fact IIT Mumbai, which I know and have often had the privilege to associate myself with, has been going through tough times. This is at a time when institutes such as IIT Mumbai, NCRA, GMRT, FTII and all such institutions have made loads of contributions in creating awareness and have given the public the ability to question rather than just ‘believe’. For any innovation to happen, you have to question, investigate, prove, and share the findings and the way you have done things so they can be reproduced.

Even the Social Sciences, as shared in the documentary, and my brief learnings and takeaways from TISS, tell the same story. The reason is that even they are in somewhat dire straits. I was having a conversation a few days back with another friend who is into higher education: IISER Pune, where the recent WordCamp happened, had to commercialize and open its doors to events in order to sustain itself. While I, and perhaps all WordCampers, will forever be grateful to them for sharing with us such a great place as well as a studious vibe which also influenced how the WordCamp was held, I did feel sad that we intruded into their study areas, which should be meant for IISER students only.

Before I get too carried away, I should point people to some of Ian Cheney’s older documentaries as well (he is the one who just did The Most Unknown); I have found his previous work compelling too. The City Dark is a beautiful masterpiece and shares a lot of insights about light pollution, which India could use well both to improve our lighting and to reduce light pollution in the atmosphere.

Meeting with Bhakts and ‘Good intentions’

The reason I shared the above was also keeping in mind the conversations I have whenever I meet bhakts. The term bhakt comes from bhakti in Sanskrit, which at one time meant spirituality and purity, although now in politics it means one who chooses to believe in the leader and the party absolutely. Whenever a bhakt starts losing a logical argument, one of the arguments often meted out is that whatever you say, you cannot doubt Mr. Narendra Modi’s intentions, which is why I took the oft-used proverb that makes the same point as the heading of this blog post. The problem with the whole ‘good intentions’ part is that it’s pretty much a strawman argument. The problem with intentions is that everybody can either state or mask their intentions. Even ISIS says that they want to bring back the golden phase of Islam; we have seen their actions, should we believe what they say? Or even Hitler, who said ‘One people, one empire, one leader’ and claimed that the Aryans were superior to the Jewish people, while history has gone on to show the exact opposite. Israel today is the eighth-biggest arms supplier in the world, our military is the second-biggest buyer of arms from them, and it is far more prosperous than us and many other countries. Their work on drip irrigation and water retention, agricultural techniques – there is much we could learn from them. Same thing about manning borders and such. While I could give many such examples, the easiest example to share in the context of good intentions gone wrong is demonetisation in India, which deserves its own paragraph.

Demonetisation

Demonetisation was heralded by Mr. Modi with great fanfare. It was supposed to take out black money, though we learned later that black money didn’t get wiped out but has become even more in-your-face. This, we learned later, had been debunked by the former R.B.I. Governor Raghuram Rajan, and now by the R.B.I. itself – and this was before Mr. Narendra Modi announced demonetisation. Sharing below an excerpt from the Freakonomics Radio show which has Mr. Rajan’s interview. It makes for interesting reading, or listening as the case may be.

DUBNER: Now, shortly after your departure as governor of the R.B.I., Prime Minister Modi executed a sudden, controversial plan to abolish 500- and 1,000-rupee banknotes, hoping to crack down on the shadow economy and tax evasion. I understand you had not been in favor of that idea, correct?

RAJAN: Absolutely. It didn’t make sense. I was asked for my opinion, and I said, “Look, it is taking away money that people use in transactions. It’s going to create enormous disruption unless we replace it overnight with freshly printed money.” And it’s very important that we have all that in place, difficult to maintain secrecy, and then the fundamental sort of objective of this, which was to get people to bring out the money that they hoarded in their basements and pay taxes on them — I said, “That’s probably not going to work out, because they’re going to find ways to infuse the money back into the system without paying those taxes.”

DUBNER: It’s been roughly two years now. What have been the effects of this demonetization?

RAJAN: Well, I think more than the numbers suggest, because India was growing at that time. And we had numbers which were in the 7.5 percent growth range at that point, at a time, in 2016, when the world was actually growing quite slowly. When growth picked up in 2017, instead of going along with the world, which we typically do and we exceed world growth significantly, we went down. That suggests it had a tremendous effect on growth, but that, the numbers don’t capture it all, because what actually got killed was the informal sector — the people who were doing work with notes rather than with checks, who didn’t have formal bank accounts. And when you look at the job numbers that some private-sector people estimate 10, 12 million jobs were lost in that episode. And, of course, we haven’t recovered them yet. It was one of those places where more economic thinking would have helped.

DUBNER: Was it a coincidence that Prime Minister Modi went ahead with the plan only after you’d left?

RAJAN: Well, I can’t speak on that. I can only say that I made my objections very, very clear.

Freakonomics Podcast: Stephen J. Dubner interviewing Mr. Raghuram Rajan, R.B.I. Governor from 5th September 2013 to 5th September 2017. Aired on 6th February 2019.

I would urge people to listen to Freakonomics Radio as there are lots of pearls of wisdom in there. There is also the ‘Good ideas are not enough’ episode, which is very relevant to the topic at hand, but I have digressed about Freakonomics Radio enough for now.

The interesting questions to ask, from the details known from the R.B.I., are –


a. Why did Mr. Narendra Modi feel the need to have the permission of the R.B.I. after 38 days?


b. If Mr. Modi was confident of the end result, then shouldn’t he, instead of asking for permission, have had the PMO take all the responsibility?


In any case, as was seen from the R.B.I. counting, only 0.3% of the money did not come back, even though many people’s valid claims were thrown out, and the expense of the whole exercise was much more than the shortfall: the R.B.I. didn’t get back INR 10k crore while it spent INR 13k crore on the new currency. Does anybody see any saving here?

The bhakts’ counter-argument is that the bankers were bad; if everybody had done their work, then it would have all worked out as Mr. Modi wanted. The statement itself implies that they didn’t know the reality. Even if we take the statement at face value, that all the bankers were cheaters (which I don’t agree with at all), didn’t they know it when they were in the opposition? Where was the party’s economic intelligence; didn’t it tell them anything in all those years they were in opposition? This is what an opposition should be doing: knowing about the state of the economy and its workings, to say the least.

There is also this https://www.scribd.com/document/401570379/Minutes-of-RBI-s-board-meeting-on-demonetisation

These are minutes obtained by Venkatesh Nayak under the RTI tool.

To rub salt into the wounds, the IIP is now at a low of 1.7 percent as well.

Jo Shields: Too many cores

Wednesday 13th of March 2019 02:48:52 PM
Arming yourself

ARM is important for us. It’s important for IOT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which begged the question “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
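
As a rough sketch of that chroot creation (the suite, target directory and keyring path here are assumptions to adapt to your setup):

sudo debootstrap --arch=armhf \
    --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
    stretch /srv/chroot/raspbian-stretch-armhf http://archive.raspbian.org/raspbian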

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
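For reference, turning the emulation on looks roughly like this (a sketch; the value 1 selects emulation on the kernels I am aware of, so check the arm64 legacy_instructions kernel documentation for the exact semantics on yours):

# 0 = undefined (SIGILL), 1 = emulate in the kernel (assumed semantics;
# see Documentation/arm64/legacy_instructions in your kernel tree)
sudo sysctl abi.cp15_barrier=1
sudo sysctl abi.setend=1
sudo sysctl abi.swp=1

# Persist across reboots
printf 'abi.cp15_barrier=1\nabi.setend=1\nabi.swp=1\n' | \
    sudo tee /etc/sysctl.d/99-armv7-compat.conf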

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.
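For the command-line inclined, a roughly equivalent single-core guest can be created with virt-install; this is only a sketch, and the guest name, memory, disk size and ISO filename are placeholders:

# One vCPU, serial console, UEFI firmware provided by qemu-efi-aarch64
virt-install \
    --name raspbian-builder \
    --arch aarch64 --machine virt \
    --vcpus 1 --memory 4096 \
    --disk size=40 \
    --cdrom ubuntu-18.04-server-arm64.iso \
    --graphics none --console pty,target_type=serial \
    --os-variant ubuntu18.04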

Because I'm an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our "real" environment as possible, meaning using pbuilder, this step simply wasn't needed: the OS inside the VM didn't need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven't worked out yet. Raspbian workloads are probably at the pivot point between "I should find some amazing way to automate this" and "automation is a waste of time, it's quicker to set it up by hand".

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

Reproducible builds folks: Reproducible Builds: Weekly report #202

Wednesday 13th of March 2019 02:24:20 PM

Here’s what happened in the Reproducible Builds effort between Sunday March 3 and Saturday March 9 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

Chris Lamb uploaded version 113 to Debian unstable fixing a long list of issues. It included contributions already covered in previous weeks as well as new ones by Chris, including:

  • Provide explicit help when the libarchive system package is missing or “incomplete”. (#50)
  • Explicitly mention when the guestfs module is missing at runtime and we are falling back to a binary diff. (#45)

Vagrant Cascadian made the corresponding update to GNU Guix. []

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made the following improvements:

  • Analyse node maintenance job runs to determine whether to mark nodes offline. []
  • Detect hanging health check runs, not just failed ones. []
  • Allow members of the jenkins UNIX group to sudo(8) to the jenkins user [] and simplify adding users to said group [].
  • Improve the “SHA1 checker” script to deal with packages with more than one version [] and to re-download buildinfo.debian.net’s files if they are older than two weeks. []
  • Node maintenance. [][][][]
  • In the version checker, correctly deal with a rare situation when several, say, diffoscope versions are available in one Debian suite at the same time. []

In addition, Alexander “lynxis” Couzens, made a number of changes to our OpenWrt support, including:

  • Add OpenWrt support to our database. []
  • Adding a reproducible_openwrt_package_parser.py script. []
  • Strip unreproducible certificates from images. []
Outreachy

Don't forget that Reproducible Builds is part of the May/August 2019 round of Outreachy. Outreachy provides internships to work on free software. Internships are open to applicants around the world; interns work remotely and are not required to relocate. Interns are paid a stipend of $5,500 for the three-month internship and receive an additional $500 travel stipend to attend conferences/events.

So far, we received more than ten initial requests from candidates. The closing date for applicants is April 2nd. More information is available on the application page.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Kees Cook: security things in Linux v5.0

Tuesday 12th of March 2019 11:04:16 PM

Previously: v4.20.

Linux kernel v5.0 was released last week! Looking through the changes, here are some security-related things I found interesting:

read-only linear mapping, arm64
While x86 has had a read-only linear mapping (or “Low Kernel Mapping” as shown in /sys/kernel/debug/page_tables/kernel under CONFIG_X86_PTDUMP=y) for a while, Ard Biesheuvel has added them to arm64 now. This means that ranges in the linear mapping that contain executable code (e.g. modules, JIT, etc), are not directly writable any more by attackers. On arm64, this is visible as “Linear mapping” in /sys/kernel/debug/kernel_page_tables under CONFIG_ARM64_PTDUMP=y, where you can now see the page-level granularity:

---[ Linear mapping ]---
...
0xffffb07cfc402000-0xffffb07cfc403000    4K PTE ro NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc403000-0xffffb07cfc4d0000  820K PTE RW NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc4d0000-0xffffb07cfc4d1000    4K PTE ro NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc4d1000-0xffffb07cfc79d000 2864K PTE RW NX SHD AF NG UXN MEM/NORMAL
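To peek at this on your own machine, something like the following works (a sketch; it needs a kernel built with the relevant PTDUMP option, and the file location has moved around between kernel versions):

# arm64: CONFIG_ARM64_PTDUMP=y, x86: CONFIG_X86_PTDUMP=y
sudo mount -t debugfs debugfs /sys/kernel/debug 2>/dev/null || true
sudo sed -n '/Linear mapping/,+8p' /sys/kernel/debug/kernel_page_tables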

per-task stack canary, arm
ARM has supported stack buffer overflow protection for a long time (currently via the compiler’s -fstack-protector-strong option). However, on ARM, the compiler uses a global variable for comparing the canary value, __stack_chk_guard. This meant that everywhere in the kernel needed to use the same canary value. If an attacker could expose a canary value in one task, it could be spoofed during a buffer overflow in another task. On x86, the canary is in Thread Local Storage (TLS, defined as %gs:20 on 32-bit and %gs:40 on 64-bit), which means it’s possible to have a different canary for every task since the %gs segment points to per-task structures. To solve this for ARM, Ard Biesheuvel built a GCC plugin to replace the global canary checking code with a per-task relative reference to a new canary in struct thread_info. As he describes in his blog post, the plugin results in replacing:

8010fad8: e30c4488   movw  r4, #50312        ; 0xc488
8010fadc: e34840d0   movt  r4, #32976        ; 0x80d0
...
8010fb1c: e51b2030   ldr   r2, [fp, #-48]    ; 0xffffffd0
8010fb20: e5943000   ldr   r3, [r4]
8010fb24: e1520003   cmp   r2, r3
8010fb28: 1a000020   bne   8010fbb0
...
8010fbb0: eb006738   bl    80129898 <__stack_chk_fail>

with:

8010fc18: e1a0300d   mov   r3, sp
8010fc1c: e3c34d7f   bic   r4, r3, #8128     ; 0x1fc0
...
8010fc60: e51b2030   ldr   r2, [fp, #-48]    ; 0xffffffd0
8010fc64: e5943018   ldr   r3, [r4, #24]
8010fc68: e1520003   cmp   r2, r3
8010fc6c: 1a000020   bne   8010fcf4
...
8010fcf4: eb006757   bl    80129a58 <__stack_chk_fail>

r2 holds the canary saved on the stack and r3 the known-good canary to check against. In the former, r3 is loaded through r4 at a fixed address (0x80d0c488, which "readelf -s vmlinux" confirms is the global __stack_chk_guard). In the latter, it's coming from offset 24 (0x18) in struct thread_info (which "pahole -C thread_info vmlinux" confirms is the "stack_canary" field).
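Both checks are easy to repeat against your own build (a sketch; adjust the path to vmlinux):

# Old scheme: the canary lives at a fixed global address
readelf -s vmlinux | grep __stack_chk_guard

# New scheme: the canary is a field inside struct thread_info
pahole -C thread_info vmlinux | grep stack_canary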

per-task stack canary, arm64
The lack of per-task canary existed on arm64 too. Ard Biesheuvel solved this differently by coordinating with GCC developer Ramana Radhakrishnan to add support for a register-based offset option (specifically “-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=...“). With this feature, the canary can be found relative to sp_el0, since that register holds the pointer to the struct task_struct, which contains the canary. I’m hoping there will be a workable Clang solution soon too (for this and 32-bit ARM). (And it’s also worth noting that, unfortunately, this support isn’t yet in a released version of GCC. It’s expected for 9.0, likely this coming May.)

top-byte-ignore, arm64
Andrey Konovalov has been laying the groundwork with his Top Byte Ignore (TBI) series which will also help support ARMv8.3's Pointer Authentication (PAC) and ARMv8.5's Memory Tagging (MTE). While TBI technically conflicts with PAC, both rely on using "non-VA-space" (Virtual Address) bits in memory addresses, and on getting the kernel ready to deal with ignoring non-VA bits. PAC stores signatures for checking things like return addresses on the stack or stored function pointers on the heap, both to stop overwrites of control flow information. MTE stores a "tag" (or, depending on your dialect, a "color" or "version") to mark separate memory allocation regions to stop use-after-free and linear overflows. For either of these to work, the CPU has to be put into some form of the TBI addressing mode (though for MTE, it'll be a "check the tag" mode), otherwise the addresses would resolve into totally the wrong place in memory. Even without PAC and MTE, this byte can be used to store bits that can be checked by software (which is what the rest of Andrey's series does: adding this logic to speed up KASan).

ongoing: implicit fall-through removal
An area of active work in the kernel is the removal of all implicit fall-through in switch statements. While the C language has a statement to indicate the end of a switch case (“break“), it doesn’t have a statement to indicate that execution should fall through to the next case statement (just the lack of a “break” is used to indicate it should fall through — but this is not always the case), and such “implicit fall-through” may lead to bugs. Gustavo Silva has been the driving force behind fixing these since at least v4.14, with well over 300 patches on the topic alone (and over 20 missing break statements found and fixed as a result of the work). The goal is to be able to add -Wimplicit-fallthrough to the build so that the kernel will stay entirely free of this class of bug going forward. From roughly 2300 warnings, the kernel is now down to about 200. It’s also worth noting that with Stephen Rothwell’s help, this bug has been kept out of linux-next by him sending warning emails to any tree maintainers where a new instance is introduced (for example, here’s a bug introduced on Feb 20th and fixed on Feb 21st).
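If you want to see how close your own tree is, the flag can be bolted onto a build via KCFLAGS; this is only a sketch, since the exact warning level and GCC version requirements vary:

# -Wimplicit-fallthrough needs GCC 7 or newer
make -j"$(nproc)" KCFLAGS=-Wimplicit-fallthrough 2>&1 | tee build.log
grep -c 'Wimplicit-fallthrough' build.log   # rough count of remaining warnings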

ongoing: refcount_t conversions
There also continues to be work converting reference counters from atomic_t to refcount_t so they can gain overflow protections. There have been 18 more conversions since v4.15 from Elena Reshetova, Trond Myklebust, Kirill Tkhai, Eric Biggers, and Björn Töpel. While there are more complex cases, the minimum goal is to reduce the Coccinelle warnings from scripts/coccinelle/api/atomic_as_refcounter.cocci to zero. As of v5.0, there are 131 warnings, with the bulk of the remaining areas in fs/ (49), drivers/ (41), and kernel/ (21).
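The Coccinelle script can be run directly against a kernel tree with make coccicheck (a sketch; it assumes the coccinelle package is installed):

# Report-only pass of the atomic_t-as-refcounter semantic patch
make coccicheck MODE=report \
    COCCI=scripts/coccinelle/api/atomic_as_refcounter.cocci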

userspace PAC, arm64
Mark Rutland and Kristina Martsenko enabled kernel support for ARMv8.3 PAC in userspace. As mentioned earlier about PAC, this will give userspace the ability to block a wide variety of function pointer overwrites by “signing” function pointers before storing them to memory. The kernel manages the keys (i.e. selects random keys and sets them up), but it’s up to userspace to detect and use the new CPU instructions. The “paca” and “pacg” flags will be visible in /proc/cpuinfo for CPUs that support it.
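Checking whether your hardware and kernel expose it is a one-liner (a sketch):

# Prints "paca" and/or "pacg" on PAC-capable CPUs running v5.0+
grep -o -w -E 'paca|pacg' /proc/cpuinfo | sort -u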

platform keyring
Nayna Jain introduced the trusted platform keyring, which cannot be updated by userspace. This can be used to verify platform or boot-time things like firmware, initramfs, or kexec kernel signatures, etc.

Edit: added userspace PAC and platform keyring, suggested by Alexander Popov
Edit: tried to clarify TBI vs PAC vs MTE

That’s it for now; please let me know if I missed anything. The v5.1 merge window is open, so off we go! :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Lange: Wiping harddisks in 2019

Tuesday 12th of March 2019 06:53:51 PM

Wiping hard disks is part of my company's policy when returning servers. No exceptions.

Good providers will wipe what they have received back from a customer, but we don't trust that as the hosting / cloud business is under constant budget-pressure and cutting corners (wipefs) is a likely consequence.

With modern SSDs there is "security erase" (man hdparm or see the - as always well maintained - Arch wiki), which is useful if the device encrypts by default. Such devices basically "forget" the encryption key, but relying on this also means trusting the device's implementation of that security, which doesn't seem warranted. Still, after wiping and trimming, a secure erase can't be a bad idea.
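For reference, the hdparm dance usually looks something like this; a sketch only, so double-check the device name, make sure the drive is not in the "frozen" state, and remember the erase is irreversible:

# 1. Verify security erase support and the frozen state
sudo hdparm -I /dev/sdX | grep -A8 '^Security'

# 2. Set a temporary user password (required before an erase can be issued)
sudo hdparm --user-master u --security-set-pass Eins /dev/sdX

# 3. Issue the security erase (use --security-erase-enhanced if supported)
sudo hdparm --user-master u --security-erase Eins /dev/sdX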

Still, there are four things to be aware of when wiping modern hard disks:

  1. Don't forget to add bs=4096 (block size) to dd, as it still defaults to 512 bytes, which makes writing even plain zeros run at less than half the maximum possible speed. SSDs may benefit from larger block sizes matched to their flash page structure; these are usually 128kB, 256kB, 512kB, 1MB, 2MB and 4MB these days [1].
  2. All disks can usually be written to in parallel. screen is your friend (see the sketch after this list).
  3. The write speed varies greatly by disk region, so use 2 hours per TB and per wipe pass as a conservative estimate. This is better than extrapolating from what you see initially in the fastest region of a spinning disk.
  4. The disks have become huge (we run 12TB disks in production now) but the write speed is still somewhere between 100 MB/s and 300 MB/s. So wiping servers on the last day before returning them is no longer possible with disks larger than 4 TB each (and three passes), or 12 TB and one pass (where e.g. fully encrypted content allows you to just do a final zero-wipe).
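A minimal sketch of such a parallel wipe (device names and block size are placeholders; screen works just as well as backgrounded jobs):

# Zero several disks in parallel; each dd stops with "No space left on device"
# when it reaches the end of its disk, which is expected.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    dd if=/dev/zero of="$dev" bs=4M oflag=direct status=progress &
done
wait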

hard disk size   one pass   three passes
 1 TB             2 h         6 h
 2 TB             4 h        12 h
 3 TB             6 h        18 h
 4 TB             8 h        24 h (one day)
 5 TB            10 h        30 h
 6 TB            12 h        36 h
 8 TB            16 h        48 h (two days)
10 TB            20 h        60 h
12 TB            24 h        72 h (three days)
14 TB            28 h        84 h
16 TB            32 h        96 h (four days)
18 TB            36 h       108 h
20 TB            40 h       120 h (five days)

  [1] As Douglas pointed out correctly in the comment below, these are IT Kilobytes and Megabytes, so 2^10 Bytes and 2^20 Bytes. So Kibibytes and Mebibytes for those firmly in SI territory.

Bits from Debian: New Debian Developers and Maintainers (January and February 2019)

Tuesday 12th of March 2019 12:00:00 PM

The following contributors got their Debian Developer accounts in the last two months:

  • Paulo Henrique de Lima Santana (phls)
  • Unit 193 (unit193)
  • Marcio de Souza Oliveira (marciosouza)
  • Ross Vandegrift (rvandegrift)

The following contributors were added as Debian Maintainers in the last two months:

  • Romain Perier
  • Felix Yan

Congratulations!

John Goerzen: Goodbye to a 15-year-old Debian server

Tuesday 12th of March 2019 09:18:43 AM

It was October of 2003 that the server I’ve called “glockenspiel” was born. It was the early days of Linux-based VM hosting, using a VPS provider called memset, running under, of all things, User Mode Linux. Over the years, it has been migrated around, sometimes running on the metal and sometimes in a VM. The operating system has been upgraded in-place using standard Debian upgrades over the years, and is now happily current on stretch (albeit with a 32-bit userland). But it has never been reinstalled. When I’d migrate hosting providers, I’d use tar or rsync to stream glockenspiel across the Internet to its new home.

A lot of people reinstall an OS when a new version comes out. I’ve been doing Debian upgrades with apt for ages, and this one is a case in point. It lingers.

Root’s .profile was last modified in November 2004, and its .bashrc was last modified in December 2004. My own home directory still has a .pinerc, .gopherrc, and .arch-params file. I last edited my .vimrc in 2003 and my .emacs dates back to 2002 (having been copied over from a pre-glockenspiel FreeBSD server).

drwxr-xr-x 3 jgoerzen jgoerzen  4096 Dec  3  2003 irclogs
-rw-r--r-- 1 jgoerzen jgoerzen   373 Dec  3  2003 .vimrc
-rw-r--r-- 1 jgoerzen jgoerzen   651 Nov 27  2003 .reportbugrc
drwx------ 3 jgoerzen jgoerzen  4096 Sep  2  2003 .arch-params
-rw-r--r-- 1 jgoerzen jgoerzen  1115 Aug 23  2003 .gopherrc
drwxr-xr-x 3 jgoerzen jgoerzen  4096 Jul 18  2003 .subversion
-rw-r--r-- 1 jgoerzen jgoerzen 15317 Jun 21  2003 .pinerc

Poking around /etc on glockenspiel is like a trip back in time. Various apache sites still have configuration files around, but have long since been disabled. Over the years, glockenspiel has hosted source code repositories using Subversion, arch, tla, darcs, mercurial and git. It’s hosted websites using Drupal, WordPress, Serendipity, and so forth. It’s hosted gopher sites, websites or mailing lists for various Free Software projects (such as Freeciv), and any number of local charitable organizations. Remnants of an FTP configuration still exist, when people used web design software to build websites for those organizations on their PCs and then upload them to glockenspiel.

-rw-r--r-- 1 root root  268 Dec 25  2005 libnet.cfg
-rw-r----- 1 root root 1305 Nov 11  2004 mrtg.cfg
-rw-r--r-- 1 root root  552 Jul 31  2004 pam.conf

All this has been replaced by a set of Docker containers running my docker-debian-base software. They’re all in git, I can rebuild one of the containers in a few seconds or a few minutes by typing “make”, and there is no cruft from 2002. There are a lot of benefits to this.

And yet, there is a part of me that feels it’s all so… cold. Servers having “personalities” was always a distinctly dubious thing, but these days as we work through more and more layers of virtualization and indirection and become more distant from the hardware, we lose an appreciation for what we have and the many shoulders of giants upon which we stand.

And, so with that, the final farewell to this server that’s been running since 2003:

glockenspiel:/etc# shutdown -P now
Shared connection to glockenspiel.complete.org closed.

More in Tux Machines

Forbes Says The Raspberry Pi Is Big Business

Not that it's something the average Hackaday reader is unaware of, but the Raspberry Pi is a rather popular device. While we don't have hard numbers to back it up (extra credit for anyone who wishes to crunch the numbers), it certainly seems a day doesn't go by that there isn't a Raspberry Pi story on the front page. But given that a small, cheap, relatively powerful, Linux computer was something the hacking community had dreamed of for years, it's hardly surprising.

[...]

So where has the Pi been seen punching a clock? At Sony, for a start. The consumer electronics giant has been installing Pis in several of their factories to monitor various pieces of equipment. They record everything from temperature to vibration and send that to a centralized server using an in-house developed protocol. Some of the Pis are even equipped with cameras which feed into computer vision systems to keep an eye out for anything unusual. [Parmy] also describes how the Raspberry Pi is being used in Africa to monitor the level of trash inside of garbage bins and automatically dispatch a truck to come pick it up for collection. In Europe, they're being used to monitor the health of fueling stations for hydrogen powered vehicles. All over the world, businesses are realizing they can build their own monitoring systems for as little as 1/10th the cost of turn-key systems; with managers occasionally paying for the diminutive Linux computers out of their own pocket.

Read more

Graphics: NVIDIA, Nouveau and Vulkan

  • NVIDIA 418.49.04 Linux Driver Brings Host Query Reset & YCbCr Image Arrays
    NVIDIA has issued new Vulkan beta drivers leading up to the Game Developers Conference 2019 as well as this next week there being NVIDIA's GPU Technology Conference (GTC) nearby in California. The only publicly mentioned changes to this weekend's NVIDIA 418.49.04 Linux driver update (and 419.62 on the Windows side) is support for the VK_EXT_host_query_reset and VK_EXT_ycbcr_image_arrays extensions.
  • Nouveau NIR Support Lands In Mesa 19.1 Git
    It shouldn't come as any surprise, but landing today in Mesa 19.1 Git is the initial support for the Nouveau Gallium3D code to make use of the NIR intermediate representation as an alternative to Gallium's TGSI. The Nouveau NIR support is part of the lengthy effort by Red Hat developers on supporting this IR as part of their SPIR-V and compute upbringing. The NIR support is also a stepping stone towards a potential NVIDIA Vulkan driver in the future.
  • Vulkan 1.1.104 Brings Native HDR, Exclusive Fullscreen Extensions
    With the annual Game Developers' Conference (GDC) kicking off tomorrow in San Francisco, Khronos' Vulkan working group today released Vulkan 1.1.104 that comes with several noteworthy extensions. Vulkan 1.1.104 is the big update for GDC 2019 rather than say Vulkan 1.2, but it's quite a nice update as part of the working group's weekly/bi-weekly release regimen. In particular, Vulkan 1.1.104 is exciting for an AMD native HDR extension and also a full-screen exclusive extension.
  • Interested In FreeSync With The RADV Vulkan Driver? Testing Help Is Needed
    Since the long-awaited introduction of FreeSync support with the Linux 5.0 kernel, one of the missing elements has been this variable rate refresh support within the RADV Vulkan driver. When the FreeSync/VRR bits were merged into Linux 5.0, the RadeonSI Gallium3D support was quick to land for OpenGL games but RADV Vulkan support was not to be found. Of course, RADV is the unofficial Radeon open-source Vulkan driver not officially backed by AMD but is the more popular driver compared to their official AMDVLK driver or the official but closed driver in their Radeon Software PRO driver package (well, it's built from the same sources as AMDVLK but currently with their closed-source shader compiler rather than LLVM). So RADV support for FreeSync has been one of the features users have been quite curious about and eager to see.

New Screencasts: Xubuntu 18.04.2, Ubuntu MATE, and Rosa Fresh 11

9 Admirable Graphical File Managers

Being able to navigate your local filesystem is an important function of personal computing. File managers have come a long way since early directory editors like DIRED. While they aren’t cutting-edge technology, they are essential software to manage any computer. File management consists of creating, opening, renaming, moving / copying, deleting and searching for files. But file managers also frequently offer other functionality. In the field of desktop environments, there are two desktops that dominate the open source landscape: KDE and GNOME. They are smart, stable, and generally stay out of the way. These use the widget toolkits Qt and GTK respectively. And there are many excellent Qt and GTK file managers available. We covered the finest in our Qt File Managers Roundup and GTK File Managers Roundup. But with Linux, you’re never short of alternatives. There are many graphical non-Qt and non-Gtk file managers available. This article examines 9 such file managers. The quality is remarkably good. Read more