Planet Debian - https://planet.debian.org/

Reproducible Builds: Second Reproducible Builds IRC meeting


After the success of our previous IRC meeting, we are having our second IRC meeting today, Monday 26th October, at 18:00 UTC:

  • 11:00am San Francisco
  • 2:00pm New York
  • 6:00pm London
  • 7:00pm Paris/Berlin
  • 11:30pm Delhi
  • 2:00am Beijing (+1 day)

Please join us on the #reproducible-builds channel on irc.oftc.net — an agenda is available. As mentioned in our previous meeting announcement, due to the unprecedented events in 2020, there will be no in-person Reproducible Builds event this year, but we plan to run these IRC meetings every fortnight.

Marco d'Itri: RPKI validation with FORT Validator

Monday 26th of October 2020 12:25:29 AM

This article documents how to install FORT Validator (an RPKI relying party software which also implements the RPKI to Router protocol in a single daemon) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings.

The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END

cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release n=bullseye
Pin-Priority: 100

Package: fort-validator rpki-trust-anchors
Pin: release n=bullseye
Pin-Priority: 990
END

apt update
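As a quick check (my addition, not part of the original instructions), apt-cache policy can confirm that the pinning works as intended: the two pinned packages should show candidates from the bullseye entry, while everything else keeps the low-priority pin.

apt-cache policy fort-validator rpki-trust-anchors
# the candidate versions should come from the bullseye repository (priority 990);
# running "apt-cache policy" with no arguments shows the 100/990 pin priorities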

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to install it automatically. If you skip this step you will be asked about it at installation time; either way is fine.

echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
  | debconf-set-selections

Install the package as usual:

apt install fort-validator
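Once the daemon is running, a quick sanity check (again my addition, not from the article) is to confirm that the RPKI-to-Router server is listening; port 323 is assumed here and should be adjusted to match your FORT configuration:

# the RTR service should show up as a listening TCP socket
ss -ltn '( sport = :323 )'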

You may also install rpki-client and gortr on Debian 10, or maybe cfrpki and gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the good packaging practices of Linux distributions.

Marco d'Itri: RPKI validation with OpenBSD's rpki-client and Cloudflare's gortr

Monday 26th of October 2020 12:22:48 AM

This article documents how to install rpki-client (an RPKI relying party software, the actual validator) and gortr (which implements the RPKI to Router protocol) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings.

The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END

cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release n=bullseye
Pin-Priority: 100

Package: gortr rpki-client rpki-trust-anchors
Pin: release n=bullseye
Pin-Priority: 990
END

apt update

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to install it automatically. If you skip this step you will be asked about it at installation time; either way is fine.

echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
  | debconf-set-selections

Install the packages as usual:

apt install rpki-client gortr

And then configure rpki-client to generate its output in the JSON format needed by gortr:

echo 'OPTIONS=-j' > /etc/default/rpki-client

You may manually start the service unit to immediately generate the data instead of waiting for the next timer run:

systemctl start rpki-client &
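To see when the next scheduled run would have happened anyway, you can list the relevant systemd timers (the exact timer name is an assumption based on the package name and the mention of a timer above):

systemctl list-timers 'rpki-client*'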

gortr too needs to be configured to use the JSON data generated by rpki-client:

echo 'GORTR_ARGS=-bind :323 -verify=false -checktime=false -cache /var/lib/rpki-client/json' > /etc/default/gortr

And then it needs to be restarted to use the new configuration:

systemctl restart gortr
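Two quick checks at this point (my additions, with the port and cache path taken from the GORTR_ARGS line above, and assuming rpki-client's JSON export uses the usual top-level "roas" array):

# gortr should be listening on the RTR port configured above
ss -ltn '( sport = :323 )'

# count the VRPs that rpki-client exported for gortr to serve
jq '.roas | length' /var/lib/rpki-client/json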

You may also install FORT Validator on Debian 10, or maybe cfrpki with gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the packaging practices of Linux distributions.

Dirk Eddelbuettel: digest 0.6.27: Build fix

Saturday 24th of October 2020 10:36:00 PM

Exactly one week after the previous release 0.6.26 of digest, a minor cleanup release 0.6.27 just arrived on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at one million monthly downloads, 282 direct reverse dependencies and 8068 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

Release 0.6.26 brought support for the (nice, even cryptographic) blake3 hash algorithm. In the interest of broader buildability we had already (with a sad face) disabled a few very hardware-specific implementation aspects using intrinsic ops. But to our chagrin, we left one #error define that raised its head on everybody’s favourite CRAN build platform. Darn. So 0.6.27 cleans that up and also removes the check and #error as … all the actual code was already commented out. If you read this and tears start running down your cheeks, then by all means come and help us bring blake3 to its full (hardware-accelerated) potential. This (probably) only needs a little bit of patient work with the build options and configurations. You know where to find us…

My CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sandro Tosi: Multiple git configurations depending on the repository path

Saturday 24th of October 2020 06:22:32 PM

For my work on Debian, I want to use my debian.org email address, while for my personal projects I want to use my gmail.com address.

One way to change the user.email git config value is to run git config --local in every repo, but that's tedious, error-prone, and doesn't scale well with many repositories (and the chance of forgetting to set the right one on a new repo is ~100%).

The solution is to use the git-config ability to include extra configuration files, based on the repo path, by using includeIf:

Content of ~/.gitconfig:

[user]
name = Sandro Tosi
email = <personal.address>@gmail.com

[includeIf "gitdir:~/deb/"]
path = ~/.gitconfig-deb

Whenever the repository path is under ~/deb/ (which is where I keep all my Debian repos), the file ~/.gitconfig-deb will be included; its content:

[user]
email = morph@debian.org

That results in my personal address being used in all repos outside ~/deb/, and my Debian email address in the Debian ones. This approach can be extended to any other git configuration value.
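To double-check which file a value comes from in a given repository, git config --show-origin is handy (the repository paths below are just placeholders):

cd ~/deb/some-package                  # a repo under ~/deb/ (hypothetical name)
git config --show-origin user.email    # reported from ~/.gitconfig-deb

cd ~/src/personal-project              # any repo outside ~/deb/ (hypothetical name)
git config --show-origin user.email    # reported from ~/.gitconfig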

Jelmer Vernooij: Debian Janitor: Hosters used by Debian packages

Saturday 24th of October 2020 12:30:15 AM

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The Janitor knows how to talk to different hosting platforms. For each hosting platform, it needs to support the platform-specific API for creating and managing merge proposals. For each hoster it also needs to have credentials.

At the moment, it supports the GitHub API, Launchpad API and GitLab API. Both GitHub and Launchpad have only a single instance; the GitLab instances it supports are gitlab.com and salsa.debian.org.

This provides coverage for the vast majority of Debian packages that can be accessed using Git. More than 75% of all packages are available on salsa - although in some cases, the Vcs-Git header has not yet been updated.

Of the other 25%, the majority either do not declare where they are hosted using a Vcs-* header (10.5%), or have not yet migrated from alioth to another hosting platform (9.7%). A further 2.3% are hosted somewhere on GitHub (2%), Launchpad (0.18%) or GitLab.com (0.15%), in many cases in the same repository as the upstream code.

The remaining 1.6% are hosted on many other hosts, primarily people’s personal servers (which usually don’t have an API for creating pull requests).

Outdated Vcs-* headers

It is possible that the 20% of packages that do not have a Vcs-* header, or whose Vcs-* header says they are on alioth, are actually hosted elsewhere. However, it is hard to know where they are until a version with an updated Vcs-Git header is uploaded.

The Janitor primarily relies on vcswatch to find the correct locations of repositories. vcswatch looks at Vcs-* headers but has its own heuristics as well. For about 2,000 packages (6%) that still have Vcs-* headers that point to alioth, vcswatch successfully finds their new home on salsa.

Merge Proposals by Hoster

These proportions are also visible in the number of pull requests created by the Janitor on various hosters. The vast majority so far has been created on Salsa.

Hoster               Open    Merged & Applied    Closed
github.com             92                 168         5
gitlab.com             12                   3         0
code.launchpad.net     24                  51         1
salsa.debian.org    1,360               5,657       126

In this table, “Open” means that the pull request has been created but likely nobody has looked at it yet. “Merged” means that the pull request has been marked as merged on the hoster, and “applied” means that the changes have ended up in the packaging branch but via a different route (e.g. cherry-picked or manually applied). “Closed” means that the pull request was closed without the changes being incorporated.

Note that this excludes ~5,600 direct pushes, all of which were to salsa-hosted repositories.

See also:

For more information about the Janitor's lintian-fixes efforts, see the landing page.

Dirk Eddelbuettel: RcppSpdlog 0.0.3: New features and much more docs

Friday 23rd of October 2020 11:56:00 PM

A good month after the initial two releases, we are thrilled to announce release 0.0.3 of RcppSpdlog. This brings us release 1.8.1 of spdlog as well as a few local changes (more below).

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich.

This version of RcppSpdlog brings a new top-level function setLogLevel to control what events get logged, updates the main example to show this and to also make the R-aware logger the default logger, and adds both an extended vignette showing several key features and a new (external) package documentation site.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.3 (2020-10-23)
  • New function setLogLevel with R accessor in exampleRsink example

  • Updated exampleRsink to use default logger instance

  • Upgraded to upstream release 1.8.1 which contains finalised upstream use to switch to REprintf() if R compilation detected

  • Added new vignette with extensive usage examples, added compile-time logging switch example

  • A package documentation website was added

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Birger Schacht: An Analysis of 5 Million OpenPGP Keys

Friday 23rd of October 2020 05:54:51 PM

In July I finished my Bachelor’s degree in IT Security at the University of Applied Sciences in St. Poelten. During my studies I took some elective courses, one of which was about Data Analysis using Python, Pandas and Jupyter Notebooks. I found it very interesting to do calculations on different data sets and to visualize them. Towards the end of the Bachelor I had to find a topic for my thesis, and as a long-time user of OpenPGP I thought it would be interesting to do an analysis of the collection of OpenPGP keys that are available on the keyservers of the SKS keyserver network.

So in June 2019 I fetched a copy of the key dump of one of the keyservers (some keyservers publish these copies of their key database so that people who want to join the SKS keyserver network can do an initial import). At that time the copy of the key database contained 5,499,675 keys and was around 12GB. Using the hockeypuck keyserver software I imported the keys into a PostgreSQL database. Hockeypuck uses a table called keys to store the keys, and in there the column doc stores the OpenPGP keys in JSON format (always with a data field containing the original unparsed data).

For the thesis I split the analysis in three parts, first looking at the Public Key packets, then analysing the User ID packets and finally studying the Signature Packets. To analyse the respective packets I used SQL to export the data to CSV files and then used the pandas read_csv method to create a dataframe of the values. In a couple of cases I did some parsing before converting to a DataFrame to make the analysis step faster. The parsing was done using the pgpdump python library.
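As an illustration of that export step (a sketch only; apart from the keys table and its doc column mentioned above, the JSON field names are assumptions for illustration), a COPY query like the following could dump per-key fields to CSV for pandas:

psql -d hockeypuck -c "
  COPY (
    SELECT doc->>'fingerprint' AS fingerprint,   -- assumed JSON field names
           doc->>'creation'    AS creation
    FROM keys
  ) TO STDOUT WITH CSV HEADER
" > pubkeys.csv

From there, pandas.read_csv('pubkeys.csv') gives the kind of DataFrame described above.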

Together with my advisor I decided to submit the thesis to a journal, so we revised and compressed the whole paper, and it has now been published in the Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA).

I think the work gives some valuable insight into the development of the use of OpenPGP over the last 30 years. Looking at the public key packets we were able to compare the different public key algorithms and, for example, visualize how DSA was the most used algorithm until around 2010, when it was replaced by RSA. When looking at the less used algorithms, a trend towards ECC-based cryptography is visible.

What we also noticed was an increase of RSA keys with algorithm ID 3 (RSA Sign-Only), which is deprecated. When we took a deeper look at those keys we realized that most of them used a specific User ID string in the User ID packets, which allowed us to attribute them to two software projects, both using the Bouncy Castle Java Cryptographic API (resp. the Spongy Castle version for Android). We also stumbled over a tutorial on how to create RSA keys with Bouncy Castle whose example code produces RSA Sign-Only keys. In one of those projects, this was then fixed.

By looking at the User ID packets we gathered some statistics about the email providers most used by OpenPGP users. One domain stood out, because it is not the domain of an email provider: tellfinder.com is a domain used in around 45,000 keys. TellFinder is Big Data analysis software and the UID of all but two of those keys is TellFinder Page Archiver- Signing Key <support@tellfinder.com>.

We also looked at the comments used in OpenPGP User ID fields. In 2013 Daniel Kahn Gillmor published a blog post titled OpenPGP User ID Comments considered harmful, in which he pointed out that most of the comments in the User ID field of OpenPGP keys duplicate information that is already present somewhere in the User ID or the key itself. In our dataset, 3,133 comments were exactly the same as the name, 3,346 were the same as the domain, and 18,246 comments were similar to the local part of the email address.

Last but not least we looked at the signature subpackets and the development of some of the preferences (Preferred Symmetric Algorithm, Preferred Hash Algorithm) that are being published using signature packets.

Analysing this huge dataset of cryptographic keys from the last 20 to 30 years was very interesting, and I learned a lot about the history of PGP and OpenPGP and the evolution of cryptography overall. I think it would be interesting to look at even more properties of OpenPGP keys, and I also think it would be valuable for the OpenPGP ecosystem if this kind of analysis could be done regularly. An approach like Tor Metrics could lead to interesting findings and could also help to back decisions regarding future developments of the OpenPGP standard.

Enrico Zini: Hetzner build machine

Friday 23rd of October 2020 04:59:33 PM

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

Building Qt5 takes a long time. The build server I was using had enough CPUs and RAM, but was very slow on I/O. I was very frustrated by that, and I started evaluating alternatives. I ended up setting up scripts to automatically provision a throwaway cloud server at Hetzner.

Initial setup

I got an API key from my customer's Hetzner account.

I installed hcloud-cli, currently only in testing and unstable:

apt install hcloud-cli

Then I configured hcloud with the API key:

hcloud context create

Spin up

I wrote a quick and dirty script to spin up a new machine, which grew a bit with little tweaks:

#!/bin/sh

# Create the server
hcloud server create --name buildqt --ssh-key … --start-after-create \
    --type cpx51 --image debian-10 --datacenter …

# Query server IP
IP="$(hcloud server describe buildqt -o json | jq -r .public_net.ipv4.ip)"

# Update ansible host file
echo "buildqt ansible_user=root ansible_host=$IP" > hosts

# Remove old host key
ssh-keygen -f ~/.ssh/known_hosts -R "$IP"

# Update login script
echo "#!/bin/sh" > login
echo "ssh root@$IP" >> login
chmod 0755 login

I picked a datacenter in the same location as where we have other servers, to get quicker data transfers.

I like that CLI tools have JSON output that I can cleanly pick at with jq. Sadly, my ISP doesn't do IPv6 yet.
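For reference (my addition, not from the original post), hcloud can also list the available server types and datacenters before you settle on --type and --datacenter, and its JSON output combines nicely with jq:

# what is available to choose from
hcloud server-type list
hcloud datacenter list

# e.g. print the IPv4 address of every server in the project
hcloud server list -o json | jq -r '.[].public_net.ipv4.ip'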

Since the server just got regenerated, I remove a possibly cached host key.

Provisioning the machine

One git server I need is behind HTTP authentication. Here's a quick hack to pass the relevant .netrc credentials to ansible before provisioning:

#!/usr/bin/python3
import subprocess
import netrc
import tempfile
import json

login, account, password = netrc.netrc().authenticators("…")

with tempfile.NamedTemporaryFile(mode="wt", suffix=".json") as fd:
    json.dump({
        "repo_user": login,
        "repo_password": password,
    }, fd)
    fd.flush()
    subprocess.run([
        "ansible-playbook",
        "-i", "hosts",
        "-l", "buildqt",
        "--extra-vars", f"@{fd.name}",
        "provision.yml",
    ], check=True)

And here's the ansible playbook:

#!/usr/bin/env ansible-playbook
- name: Install and configure buildqt
  hosts: all
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 86400
    - name: Create build user
      user:
        name: build
        comment: QT5 Build User
        shell: /bin/bash
    - name: Create sources directory
      become: yes
      become_user: build
      file:
        path: ~/sources
        state: directory
        mode: 0755
    - name: Download sources
      become: yes
      become_user: build
      get_url:
        url: "https://…/{{item}}"
        dest: "~/sources/{{item}}"
        mode: 0644
      with_items:
        - "qt-everywhere-src-5.15.1.tar.xz"
        - "qt-creator-enterprise-src-4.13.2.tar.gz"
    - name: Populate home directory
      become: yes
      become_user: build
      copy:
        src: build
        dest: ~/
        mode: preserve
    - name: Write .netrc
      become: yes
      become_user: build
      copy:
        dest: ~/.netrc
        mode: 0600
        content: |
          machine …
          login {{repo_user}}
          password {{repo_password}}
    - name: Write .screenrc
      become: yes
      become_user: build
      copy:
        dest: ~/.screenrc
        mode: 0644
        content: |
          hardstatus alwayslastline
          hardstatus string '%{= cw}%-Lw%{= KW}%50>%n%f* %t%{= cw}%+Lw%< %{= kK}%-=%D %Y-%m-%d %c%{-}'
          startup_message off
          defutf8 on
          defscrollback 10240
    - name: Install base packages
      apt:
        name: git,mc,ncdu,neovim,eatmydata,devscripts,equivs,screen
        state: present
    - name: Clone git repo
      become: yes
      become_user: build
      git:
        repo: https://…@…/….git
        dest: ~/…
    - name: Copy Qt license
      become: yes
      become_user: build
      copy:
        src: qt-license.txt
        dest: ~/.qt-license
        mode: 0600

Now everything is ready for a 16-core, 32GB RAM build on SSD storage.

Tear down

When done:

#!/bin/sh
hcloud server delete buildqt

The whole spin up plus provisioning takes around a minute, so I can do it when I start a work day, and take it down at the end. The build machine wasn't that expensive to begin with, and this way it will even be billed by the hour.

A first try on a CPX51 machine has just built the full Qt5 Everywhere Enterprise including QtWebEngine and all its frills, for amd64, in under 1 hour and 40 minutes.

Molly de Blanc: Endorsements

Friday 23rd of October 2020 12:22:30 PM

Transparency is essential to trusting a technology. Through transparency we can understand what we’re using and build trust. When we know what is actually going on, what processes are occurring and how it is made, we are able to decide whether interacting with it is something we actually want, and we’re able to trust it and use it with confidence.

This transparency could mean many things, though it most frequently refers to the technology itself: the code or, in the case of hardware, the designs. We could also apply it to the overall architecture of a system. We could think about the decision making, practices, and policies of whomever is designing and/or making the technology. These are all valuable in some of the same ways, including that they allow us to make a conscious choice about what we are supporting.

When we choose to use a piece of technology, we are supporting those who produce it. This could be because we are directly paying for it; however, our support is not limited to direct financial contributions. In some cases this is because of things hidden within a technology: tracking mechanisms or backdoors that could allow companies or governments access to what we’re doing. When creating different types of files on a computer, these files can contain metadata that says what software was used to make them. This is an implicit endorsement, and you can also explicitly endorse a technology by talking about it or about how you use it. In this, you have a right (not just a duty) to be aware of what you’re supporting. This includes, for example, organizational practices and whether a given company relies on abusive labor policies, indentured servitude, or slave labor.

Endorsements inspire others to choose a piece of technology. Most of my technology is something I investigate purely for functionality, and the pieces I investigate are based on what the people I know use. The people I trust in these cases are more inclined than most to do this kind of research, to perform technical interrogations, and to be aware of what producers of technology are up to.

This is how technology spreads and becomes common or the standard choice. In one sense, we all have the responsibility (one I am shirking) to investigate our technologies before we choose them. However, we must acknowledge that not everyone has the resources for this – the time, the skills, the knowledge – and that is where endorsements become even more important to recognize.

Those producing a technology have the responsibility of making all of these angles something one could investigate. Understanding cannot only be the realm of experts. It should not require an extensive background in research and investigative journalism to find out whether a company punishes employees who try to unionize or pays non-living wages. Instead, these must be easy activities to carry out. It should be standard for a company (or other technology producer) to be open and share with people using their technology what makes them function. It should be considered shameful and shady to not do so. Not only does this empower those making choices about what technologies to use, but it empowers others down the line, who rely on those choices. It also respects the people involved in the processes of making these technologies. By acknowledging their role in bringing our tools to life, we are respecting their labor. By holding companies accountable for their practices and policies, we are respecting their lives.

Bastian Blank: Salsa updated to GitLab 13.5

Thursday 22nd of October 2020 06:00:00 PM

Today, GitLab released the version 13.5 with several new features. Also Salsa got some changes applied to it.

GitLab 13.5

GitLab 13.5 includes several new features. See the upstream release post for a full list.

Shared runner builds on larger instances

It's been way over two years since we started to use Google Compute Engine (GCE) for Salsa. Since then, all the jobs running on the shared runners run within a n1-standard-1 instance, providing a fresh set of one vCPU and 3.75GB of RAM for each and every build.

GCE supports several newer instance types, featuring better and faster CPUs, including current AMD EPYCs. However, as it turns out, GCE does not offer any single-vCPU instances of those types. So jobs will, for the time being, use n2d-standard-2, providing two vCPUs and 8GB of RAM.

Builds run with IPv6 enabled

All builds run with IPv6 enabled in the Docker environment. This means the lo network device got the IPv6 loopback address ::1 assigned. So tests that need minimal IPv6 support can succeed. It however does not include any external IPv6 connectivity.
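A minimal way to check this from inside a job (my sketch, not an official Salsa example):

# the IPv6 loopback address should be assigned to lo ...
ip -6 addr show dev lo

# ... so purely local IPv6 tests, such as pinging ::1, can succeed
ping -6 -c 1 ::1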

Vincent Fourmond: QSoas tips and tricks: generating smooth curves from a fit

Thursday 22nd of October 2020 11:57:13 AM
Often, one would want to generate smooth data from a fit over a small number of data points. For an example, take the data in the following file. It contains (fake) experimental data points that obey Michaelis-Menten kinetics: $$v = \frac{v_m}{1 + K_m/s}$$ in which \(v\) is the measured rate (the y values of the data), \(s\) the concentration of substrate (the x values of the data), \(v_m\) the maximal rate and \(K_m\) the Michaelis constant. To fit this equation to the data, just use the fit-arb fit:

QSoas> l michaelis.dat
QSoas> fit-arb vm/(1+km/x)

After running the fit, the window should look like this:

Now, with the fit, we have reasonable values for \(v_m\) (vm) and \(K_m\) (km). But, for publication, one would want to generate a "smooth" curve going through the points... Saving the curve from "Data.../Save all" doesn't help, since the data has as many points as the original data and looks very "jaggy" (like on the screenshot above)... So one needs a curve with more data points.

Maybe the most natural solution is simply to use generate-buffer together with apply-formula, using the formula and the values of km and vm obtained from the fit, like:

QSoas> generate-buffer 0 20
QSoas> apply-formula y=3.51742/(1+3.69767/x)

By default, generate-buffer generates 1000 evenly spaced x values, but you can change their number using the /samples option. The two commands above can be combined into a single call to generate-buffer:

QSoas> generate-buffer 0 20 3.51742/(1+3.69767/x)

This works, but it is quite cumbersome and it is not going to work well for complex formulas or for the results of differential equations or kinetic systems...

This is why to each fit- command corresponds a sim- command that computes the result of the fit using a "saved parameters" file (here, michaelis.params, but you can also save it yourself) and buffers as "models" for X values:

QSoas> generate-buffer 0 20
QSoas> sim-arb vm/(1+km/x) michaelis.params 0

This strategy works with every single fit! As an added benefit, you even get the fit parameters as meta-data, which are displayed by the show command:

QSoas> show 0
Dataset generated_fit_arb.dat: 2 cols, 1000 rows, 1 segments, #0
Flags:
Meta-data:
  commands = sim-arb vm/(1+km/x) michaelis.params 0
  fit = arb (formula: vm/(1+km/x))
  km = 3.69767
  vm = 3.5174

They also get saved as comments if you save the data.

Important note: the sim-arb command will be available only in the 3.0 release, although you can already enjoy it if you use the github version.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

Steinar H. Gunderson: plocate in testing

Thursday 22nd of October 2020 09:00:25 AM

plocate hit testing today, so it's officially on its way to bullseye :-) I'd love to add a backport to stable, but bpo policy says only to backport packages with a “notable userbase”, and I guess 19 installations in popcon isn't that :-) It's also hit Arch Linux, obviously Ubuntu universe, and seemingly also other distributions like Manjaro. No Fedora yet, but hopefully, some Fedora maintainer will pick it up. :-)

Also, pabs pointed out another possible use case, although this is just a proof-of-concept:

pannekake:~/dev/plocate/obj> time apt-file search bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
apt-file search bin/updatedb  1,19s user 0,58s system 163% cpu 1,083 total

pannekake:~/dev/plocate/obj> time ./plocate -d apt-file.plocate.db bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
./plocate -d apt-file.plocate.db bin/updatedb  0,00s user 0,01s system 79% cpu 0,012 total

Things will probably be quieting down now; there's just not that many more logical features to add.

Christian Kastner: RStudio is a refreshingly intuitive IDE

Wednesday 21st of October 2020 04:33:56 PM

I currently need to dabble with R for a smallish thing. I have previously dabbled with R only once, for an afternoon, and that was about a decade ago, so I had no prior experience to speak of regarding the language and its surrounding ecosystem.

Somebody recommended that I try out RStudio, a popular IDE for R. I was happy to see that an open-source community edition exists, in the form of a .deb package no less, so I installed it and gave it a try.

It's remarkable how intuitive this IDE is. My first guess at doing something has so far been correct every. single. time. I didn't have to open the help, or search the web, for any solutions, either -- they just seem to offer themselves up.

And it's not just my inputs; it's the output, too. The RStudio window has multiple tiles, and each tile has multiple tabs. I found this quite confusing and intimidating on first impression, but once I started doing some work, I was surprised to see that whenever I did something that produced output in one or more of the tabs, it was (again) always presented in an intuitive manner. There's a fine line between informing with relevant context and distracting with irrelevant context, but RStudio seems to have placed itself on the right side of it.

This, and many other features that pop up here and there, like the live-rendering of LaTeX equations, contributed to what has to be one of the most positive experiences with an IDE that I've had so far.

Dirk Eddelbuettel: RcppZiggurat 0.1.6

Wednesday 21st of October 2020 01:14:00 PM

A new release of RcppZiggurat, now at version 0.1.6, is on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release brings a corrected seed setter and getter which now correctly take care of all four state variables, and not just one. It also corrects a few typos in the vignette. Both were fixed quite a while back, but we somehow managed to not ship this to CRAN for two years.

The NEWS file entry below lists all changes.

Changes in version 0.1.6 (2020-10-18)
  • Several typos were corrected in the vignette (Blagoje Ivanovic in #9).

  • New getters and setters for internal state were added to resume simulations (Dirk in #11 fixing #10).

  • Minor updates to cleanup script and Travis CI setup (Dirk).

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppZiggurat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: Debian donation for Peertube development

Wednesday 21st of October 2020 10:30:00 AM

The Debian project is happy to announce a donation of 10,000 € to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming.

This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while it was a resounding success, it made clear to the project our need for a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us.

We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future.

Debian thanks the commitment of numerous Debian donors and DebConf sponsors, particularly all those that contributed to DebConf20 online's success (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform.

The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen.

This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone.

The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia.

Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform

Wednesday 21st of October 2020 12:00:00 AM

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do.

This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.


Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical?

A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source ‘base layer’ of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society’s lifelines, and CIP aims to contribute to and support these important pillars of modern society.


Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today?

A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.


Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals?

A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.


Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific ‘success stories’ that the CIP is particularly proud of?

A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of the necessity to establish an open source framework and software foundation that delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems.

In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.


Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency?

A: Although the members have different products, they share the requirements and issues that come up when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts or efforts that focus on narrower domains or markets.


Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims?

A: Collaboration with other projects is an essential part of how CIP operates — we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an ‘upstream first’ policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them.


Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context?

A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.


Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look?

A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.


For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

Dirk Eddelbuettel: RcppArmadillo 0.10.1.0.0

Tuesday 20th of October 2020 10:36:00 PM

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 786 other packages on CRAN.

A little while ago, Conrad released version 10.1.0 of Armadillo, a new major release. As before, given his initial heads-up we ran two full reverse-depends checks, and as a consequence contacted four package authors (two by email, two via PR) about a minuscule required change (as Armadillo now defaults to C++11, an old existing setting of avoiding C++11 led to an error). Our thanks to those who promptly updated their packages—truly appreciated. As it turns out, Conrad also softened the error by the time the release came around.

But despite our best efforts, the release was delayed considerably by CRAN. We had made several Windows test builds, but luck had it that on the uploaded package CRAN got itself a (completely spurious) segfault—which can happen on a busy machine building many things at once. Sadly it took three or four days for CRAN to reply to our email. After which it took another number of days for them to ponder the behaviour of a few new ‘deprecated’ messages tickled by at most ten or so (out of 786) packages. Oh well. So here we are, eleven days after I emailed the rcpp-devel list about the new package being on CRAN but possibly delayed (due to that segfault). But during all that time the package was of course available via the Rcpp drat.

The changes in this release are summarized below as usual and are mostly upstream along with an improved Travis CI setup due to the aforementioned use of the bspm package for binaries at Travis.

Changes in RcppArmadillo version 0.10.1.0.0 (2020-10-09)
  • Upgraded to Armadillo release 10.1.0 (Orchid Ambush)

    • C++11 is now the minimum required C++ standard

    • faster handling of compound expressions by trimatu() and trimatl()

    • faster sparse matrix addition, subtraction and element-wise multiplication

    • expanded sparse submatrix views to handle the non-contiguous form of X.cols(vector_of_column_indices)

    • expanded eigs_sym() and eigs_gen() with optional fine-grained parameters (subspace dimension, number of iterations, eigenvalues closest to specified value)

    • deprecated form of reshape() removed from Cube and SpMat classes

    • ignore and warn on use of the ARMA_DONT_USE_CXX11 macro

  • Switch Travis CI testing to focal and BSPM

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Buster based Bokmål edition of Debian Administrator's Handbook

Tuesday 20th of October 2020 04:35:00 PM

I am happy to report that we finally made it! Norwegian Bokmål became the first translation published on paper of the new Buster based edition of "The Debian Administrator's Handbook". The print proof reading copy arrived some days ago, and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can also be read online.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux-related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge of Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Steve Kemp: Offsite-monitoring, from my desktop.

Tuesday 20th of October 2020 07:15:23 AM

For the past few years I've had a bunch of virtual machines hosting websites, services, and servers. Of course I want them to be available - especially since I charge people money to access some of them (for example my dns-hosting service) - and that means I want to know when they're not.

The way I've gone about this is to have a bunch of machines running stuff, and then dedicate an entirely separate machine solely for monitoring and alerting. Sure you can run local monitoring, testing that services are available, the root-disk isn't full, and that kind of thing. But only by testing externally can you see if the machine is actually available to end-users, customers, or friends.

A local-agent might decide "I'm fine", but if the hosting-company goes dark due to a fibre cut you're screwed.

I've been hosting my services with Hetzner (cloud) recently, and their service is generally pretty good. Unfortunately I've started to see an increasing number of false alarms. I'd have a server in Germany, with the monitoring machine in Helsinki (coincidentally where I live!). For the past month I've been getting pinged with a failure every three or four days on average, "service down - dns failed", or "service down - timeout". When the notice would wake me up I'd go check and everything would be fine; it was a very transient failure.

To be honest, the reason for this is that my monitoring is just too damn aggressive; I like to be alerted immediately in case something is wrong. That means I get an alert whenever a single test fails, rather than only after something more reasonable like three or more consecutive failures.

I'm experimenting with monitoring in a less aggressive fashion, from my home desktop. Since my monitoring tool is a single self-contained golang binary, and it is already packaged as a docker-based container, deployment was trivial. I did a little work writing an agent to receive failure-notices, and ping me via telegram - instead of the previous approach where I had an online status-page which I could view via my mobile, and alerts via pushover.

So far it looks good. I've tweaked the monitoring to setup a timeout of 15 seconds, instead of 5, and I've configured it to only alert me if there is an outage which lasts for >= 2 consecutive failures. I guess the TLDR is I now do offsite monitoring .. from my house, rather than from a different region.

The only real reason to write this post was mostly to say that the process of writing a trivial "notify me" gateway to interface with telegram was nice and straightforward, and to remind myself that transient failures are way more common than we expect.
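As an illustration of how little such a gateway needs (this is my sketch, not the author's actual agent, and the token and chat ID variables are placeholders), a failure notice can be forwarded to Telegram with a single HTTP call to the Bot API:

#!/bin/sh
# notify-telegram: send the first argument as a Telegram message
# TELEGRAM_TOKEN and TELEGRAM_CHAT_ID must be set in the environment
curl -s "https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage" \
    -d chat_id="${TELEGRAM_CHAT_ID}" \
    --data-urlencode text="$1" > /dev/null

A monitoring check that fails can then simply run something like notify-telegram "service down - dns failed".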

I'll leave things alone for a moment, but it was a fun experiment. I'll keep the two systems in parallel for a while, but I guess I can already predict the outcome:

  • The desktop monitoring will report transient outages now and again, because home broadband isn't 100% available.
  • The Hetzner-based monitoring, in a different region, will report transient problems, because even hosting companies are not 100% available.
    • Especially at the cheap prices I'm paying.
  • The way to avoid being woken up by transient outages/errors is to be less aggressive.
    • I think my paying users will be OK if I find out a service is offline after 5 minutes, rather than after 30 seconds.
    • If they're not we'll have to talk about budgets ..
