Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: RcppSimdJson 0.0.6: New Upstream, New Features!

Thursday 25th of June 2020 10:33:00 PM

A very exciting RcppSimdJson release with the updated upstream simdjson release 0.4.0 as well as a first set of new JSON parsing functions just hit CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it can parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The very recent 0.4.0 release further improves the already impressive speed.

And this release brings a first set of actually user-facing functions thanks to Brendan, who put in a series of PRs! The full NEWS entry follows.

Changes in version 0.0.6 (2020-06-25)
  • Created C++ integer-handling utilities for safe downcasting and integer return (Brendan in #16 closing #13).

  • New JSON functions .deserialize_json and .load_json (Brendan in #16, #17, #20, #21).

  • Upgrade Travis CI to 'bionic', extract package and version from DESCRIPTION (Dirk in #23).

  • Upgraded to simdjson 0.4.0 (Dirk in #25 closing #24).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: How Will the Pandemic Change Things?

Thursday 25th of June 2020 03:27:28 AM

The Bulwark has an interesting article on why they can’t “Reopen America” [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2], Covid-19 so far hasn’t had a high death toll when compared to other pandemics of the last 100 years. People’s reactions to this vary from doing nothing to significant isolation; the question is which changes in attitudes will be significant enough to change society.

Transport

One thing that has been happening recently is a transition in transport. It’s obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19, peak hour public transport in my city was ridiculously overcrowded; having people unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere.

Can we have mass public transport that doesn’t give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones.

Uber and similar companies have been taking over from taxi companies; one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease: the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow, and if set up to take only EFTPOS and credit card payments they could avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber.

Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don’t handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40km/h (the maximum collision speed at which a pedestrian is unlikely to die); in central city areas and inner suburbs you aren’t likely to drive much faster than that anyway.

Car share services have been becoming popular; for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren’t obvious, people may focus on the disease risk and keep buying cars.

Passenger jets are ridiculously cheap. But this relies on the airline companies being able to consistently fill the planes. If they were to add measures to reduce cross-contamination between passengers, which slightly reduces the capacity of planes, then they need to increase ticket prices accordingly, which in turn reduces demand. If passengers are simply scared of flying in close proximity and planes can’t be filled, then airlines will have to increase prices, which again reduces demand and could lead to a death spiral. If in the long term there aren’t enough passengers to sustain the current number of planes in service then airline companies will have significant financial problems: planes are expensive assets that are expected to last for a long time, and if the airlines can’t use them all and can’t sell them then they will go bankrupt.

It’s not reasonable to expect that the same number of people will be travelling internationally for years (if ever). Due to relying on economies of scale to provide low prices I don’t think it’s possible to keep prices the same no matter what they do. A new economic balance of flights costing 2-3 times more than we are used to while having significantly less passengers seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.

Entertainment

The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds; the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small-stakes poker games among friends would be much less socially damaging than casinos; will this be good for society?

The article also mentions cinemas which have been on the way out since the video rental stores all closed down. There’s lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by “Netflix and chill”? What will happen to all the prime real estate used by cinemas?

Professional sporting matches have been played for a TV-only audience during the pandemic. There’s no reason that they couldn’t make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums, can this change the typical behaviour of groups?

Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant that’s enough to make a significant impact on profitability for many small businesses.

When I was young the standard thing was for people to have dinner at friends’ homes; meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term; it’s one of the few industry segments which might do better after the pandemic than before.

Work

Many companies are discovering the benefits of teleworking; getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine, the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that obviously limits the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer in allowing working from home. It’s obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue to some degree.

One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it’s not worth converting the city cinemas into useful space?

There’s been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn’t getting much notice is remote access technologies for desktop support. If the IT people can’t visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc?

Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren’t broke (i.e. a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much: people will spend less due to economic uncertainty, so while they may shift some purchasing to e-commerce, apart from home delivery of groceries e-commerce probably won’t go up overall. Telecoms generally won’t gain anything from teleworking either; the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.

Money

I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now.

I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7]. My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.

Business

There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies.

The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren’t particularly high having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also city hotels are very well equipped to run office space, they have the staff and the procedures for running such a business, most hotels also make significant profits from conventions and conferences.

The way the economy has been working in first world countries has been about being as competitive as possible: just-in-time delivery to avoid using storage space, machines to package things in exactly the way that customers need, and no more machines than needed for regular capacity. This means that there’s no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down, and then car manufacturing stopped due to a lack of bolts – this could have been a wake-up call but was ignored. Now we have had problems with toilet paper shortages because it was packaged in wholesale quantities for offices and schools, not retail quantities for home use. Food was destroyed because it was packaged for restaurants and couldn’t be repackaged for home use in a reasonable amount of time.

Farmer’s markets alleviate some of the problems with packaging food etc. But they aren’t a good option when there’s a pandemic as disease risk makes them less appealing to customers and therefore less profitable for vendors.

Religion

Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.


Ian Jackson: Renaming the primary git branch to "trunk"

Wednesday 24th of June 2020 05:38:58 PM
I have been convinced by the arguments that it's not nice to keep using the word master for the default git branch. Regardless of the etymology (which is unclear), some people say they have negative associations with this word. Changing this upstream in git is complicated on a technical level and, sadly, contested.

But git is flexible enough that I can make this change in my own repositories. Doing so is not even so difficult.

So:

Announcement

I intend to rename master to trunk in all repositories owned by my personal hat. To avoid making things very complicated for myself I will just delete refs/heads/master when I make this change. So there may be a little disruption to downstreams.

I intend to make this change everywhere eventually. But rather than front-loading the effort, I'm going to do this to repositories as I come across them anyway. That will allow me to update all the docs references, any automation, etc., at a point when I have those things in mind anyway. Also, doing it this way will allow me to focus my effort on the most active projects, and avoids me committing to a sudden large pile of fiddly clerical work. But: if you have an interest in any repository in particular that you want updated, please let me know so I can prioritise it.
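For anyone wanting to do the same to a repository of their own, a minimal sketch of the git commands involved could look like this. This is not Ian's exact procedure; the remote name origin and access to the server-side bare repository are assumptions.

# in a local clone: rename the branch and push it under the new name
git branch -m master trunk
git push origin trunk
# in the bare repository on the server: make trunk the default branch
git symbolic-ref HEAD refs/heads/trunk
# finally drop refs/heads/master, as described above
git push origin --delete master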

Bikeshed

Why "trunk"? "Main" has been suggested elswewhere, and it is often a good replacement for "master" (for example, we can talk very sensibly about a disk's Main Boot Record, MBR). But "main" isn't quite right for the VCS case; for example a "main" branch ought to have better quality than is typical for the primary development branch.

Conversely, there is much precedent for "trunk". "Trunk" was used to refer to this concept by at least SVN, CVS, RCS and CSSC (and therefore probably SCCS) - at least in the documentation, although in some of these cases the command line API didn't have a name for it.

So "trunk" it is.

Aside: two other words - passlist, blocklist

People are (finally!) starting to replace "blacklist" and "whitelist". Seriously, why has it taken everyone this long?

I have been using "blocklist" and "passlist" for these concepts for some time. They are drop-in replacements. I have also heard "allowlist" and "denylist" suggested, but they are cumbersome and cacophonous.

Also "allow" and "deny" seem to more strongly imply an access control function than merely "pass" and "block", and the usefulness of passlists and blocklists extends well beyond access control: protocol compatibility and ABI filtering are a couple of other use cases.




François Marier: Automated MythTV-related maintenance tasks

Wednesday 24th of June 2020 04:45:00 PM

Here is the daily/weekly cronjob I put together over the years to perform MythTV-related maintenance tasks on my backend server.

The first part performs a database backup:

5 1 * * * mythtv /usr/share/mythtv/mythconverg_backup.pl

which I previously configured by putting the following in /home/mythtv/.mythtv/backuprc:

DBBackupDirectory=/var/backups/mythtv

and creating a new directory for it:

mkdir /var/backups/mythtv
chown mythtv:mythtv /var/backups/mythtv

The second part of /etc/cron.d/mythtv-maintenance runs a contrib script to optimize the database tables:

10 1 * * * mythtv /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl

once a day. It requires the libmythtv-perl and libxml-simple-perl packages to be installed on Debian-based systems.

It is quickly followed by a check of the recordings and automatic repair of the seektable (when possible):

20 1 * * * mythtv /usr/bin/chronic /usr/bin/mythutil --checkrecordings --fixseektable

Next, I force a scan of the music and video databases to pick up anything new that may have been added externally via NFS mounts:

30 1 * * * mythtv /usr/bin/mythutil --quiet --scanvideos
31 1 * * * mythtv /usr/bin/mythutil --quiet --scanmusic

Finally, I defragment the XFS partition for two hours every day except Friday:

45 1 * * 1-4,6-7 root /usr/sbin/xfs_fsr

and resync the RAID-1 arrays once a week to ensure that they stay consistent and error-free:

15 3 * * 2 root /usr/local/sbin/raid_parity_check md0
15 3 * * 4 root /usr/local/sbin/raid_parity_check md2

using a trivial script.
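The parity check script itself isn't included in the post. Purely as an illustration, a minimal sketch of what such a script could look like for Linux md RAID (hypothetical, using the /sys interface; not the author's actual script):

#!/bin/sh
# hypothetical raid_parity_check: start a check of the given md array
# and wait until the array reports being idle again
dev="$1"
echo check > "/sys/block/${dev}/md/sync_action"
while [ "$(cat /sys/block/${dev}/md/sync_action)" != "idle" ]; do
    sleep 60
done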

In addition to that cronjob, I also have smartmontools run daily short and weekly long SMART tests via this blurb in /etc/smartd.conf:

/dev/sda -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
/dev/sdb -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)

If there are any other automated maintenance tasks you do on your MythTV server, please leave a comment!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2020

Wednesday 24th of June 2020 01:03:31 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work on LTS in May and has now reported 5h of work done in April (out of 17.25h assigned plus 46h from April), thus carrying over 58.25h to June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).
Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC. Logs and minutes are available online. We also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we would like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we would like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or missed acting on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


Matthew Garrett: Making my doorbell work

Wednesday 24th of June 2020 05:09:47 AM
I recently moved house, and the new building has a Doorbird to act as a doorbell and open the entrance gate for people. There's a documented local control API (no cloud dependency!) and a Home Assistant integration, so this seemed pretty straightforward.

Unfortunately not. The Doorbird is on a separate network that's shared across the building, provided by Monkeybrains. We're also a Monkeybrains customer, so our network connection is plugged into the same router and antenna as the Doorbird one. And, as is common, there's port isolation between the networks in order to avoid leakage of information between customers. Rather perversely, we are the only people with an internet connection who are unable to ping my doorbell.

I spent most of the past few weeks digging myself out from under a pile of boxes, but we'd finally reached the point where spending some time figuring out a solution to this seemed reasonable. I spent a while playing with port forwarding, but that wasn't ideal - the only server I run is in the UK, and having packets round trip almost 11,000 miles so I could speak to something a few metres away seemed like a bad plan. Then I tried tethering an old Android device with a data-only SIM, which worked fine but only in one direction (I could see what the doorbell could see, but I couldn't get notifications that someone had pushed a button, which was kind of the point here).

So I went with the obvious solution - I added a wifi access point to the doorbell network, and my home automation machine now exists on two networks simultaneously (nmcli device modify wlan0 ipv4.never-default true is the magic for "ignore the gateway that the DHCP server gives you" if you want to avoid this), and I could now do link local service discovery to find the doorbell if it changed addresses after a power cut or anything. And then, like magic, everything worked - I got notifications from the doorbell when someone hit our button.
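Pulled together, the NetworkManager side of that setup amounts to something like the following sketch (the interface name wlan0 and the SSID are assumptions, not details from the post):

# join the doorbell network on a second (wifi) interface
nmcli device wifi connect doorbird-net ifname wlan0
# but never use that network's DHCP-provided gateway as the default route
nmcli device modify wlan0 ipv4.never-default true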

But knowing that an event occurred without actually doing something in response seems fairly unhelpful. I have a bunch of Chromecast targets around the house (a mixture of Google Home devices and Chromecast Audios), so just pushing a message to them seemed like the easiest approach. Home Assistant has a text to speech integration that can call out to various services to turn some text into a sample, and then push that to a media player on the local network. You can group multiple Chromecast audio sinks into a group that then presents as a separate device on the network, so I could then write an automation to push audio to the speaker group in response to the button being pressed.

That's nice, but it'd also be nice to do something in response. The Doorbird exposes API control of the gate latch, and Home Assistant exposes that as a switch. I'm using Home Assistant's Google Assistant integration to expose devices Home Assistant knows about to voice control. Which means when I get a house-wide notification that someone's at the door I can just ask Google to open the door for them.

So. Someone pushes the doorbell. That sends a signal to a machine that's bridged onto that network via an access point. That machine then sends a protobuf command to speakers on a separate network, asking them to stream a sample it's providing. Those speakers call back to that machine, grab the sample and play it. At this point, multiple speakers in the house say "Someone is at the door". I then say "Hey Google, activate the front gate" - the device I'm closest to picks this up and sends it to Google, where something turns my speech back into text. It then looks at my home structure data and realises that the "Front Gate" device is associated with my Home Assistant integration. It then calls out to the home automation machine that received the notification in the first place, asking it to trigger the front gate relay. That device calls out to the Doorbird and asks it to open the gate. And now I have functionality equivalent to a doorbell that completes a circuit and rings a bell inside my home, and a button inside my home that completes a circuit and opens the gate, except it involves two networks inside my building, callouts to the cloud, at least 7 devices inside my home that are running Linux and I really don't want to know how many computational cycles.

The future is wonderful.

(I work for Google. I do not work on any of the products described in this post. Please god do not ask me how to integrate your IoT into any of this)


Russell Coker: Squirrelmail vs Roundcube

Tuesday 23rd of June 2020 02:52:19 AM

For some years I’ve had SquirrelMail running on one of my servers for the people who like such things. It seems that the upstream support for SquirrelMail has ended (according to the SquirrelMail Wikipedia page there will be no new releases, just Subversion updates to fix bugs). One problem with SquirrelMail that seems unlikely to get fixed is the lack of support for base64 encoded From and Subject fields, which are becoming increasingly popular nowadays as people whose names don’t fit US-ASCII are encoding them in their preferred manner.

I’ve recently installed Roundcube to provide an alternative. Of course one of the few important users of webmail didn’t like it (apparently it doesn’t display well on a recent Samsung Galaxy Note), so now I have to support two webmail systems.

Below is a little Perl script to convert a SquirrelMail abook file into the csv format used for importing a RoundCube contact list.

#!/usr/bin/perl
print "First Name,Last Name,Display Name,E-mail Address\n";
while(<STDIN>)
{
  chomp;
  my @fields = split(/\|/, $_);
  printf("%s,%s,%s %s,%s\n", $fields[1], $fields[2], $fields[0], $fields[4], $fields[3]);
}
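A hedged usage example (the script name and the abook path are assumptions; SquirrelMail keeps per-user .abook files in its data directory, whose location depends on the installation):

perl abook2csv.pl < /var/lib/squirrelmail/data/user.abook > user-contacts.csv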


Dirk Eddelbuettel: #27: R and CRAN Binaries for Ubuntu

Monday 22nd of June 2020 11:44:00 PM

Welcome to the 27th post in the rationally regularized R revelations series, or R4 for short. This is an edited/updated version of yesterday’s T^4 post #7, as it fits the R^4 series just as well as the T^4 series.

A new video in both our T^4 series of video lightning talks with tips, tricks, tools, and toys is also a video in the R^4 series as it revisits a topic previously covered in the latter: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

Thanks to Iñaki Ucar who followed up on twitter with a Fedora version. We exchanged some more messages, and concluded that a complete comparison (going from an empty Ubuntu or Fedora container to a full system) takes about the same time on either system.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andrew Cater: And a nice quick plug - Apple WWDC keynote

Monday 22nd of June 2020 07:40:48 PM
Demonstrating virtualisation on the new Apple hardware to be powered by Apple silicon and ARM. Parallels virtualisation: Linux gets mentioned - two clear shots, one of Debian 9 and one of Debian 10. That's good publicity, any which way you cut it, with a worldwide tech audience hanging on every word from the presenters.

Now - developer hardware is due out this week - Apple A12 chip inside Mac mini format. Can we run Debian natively there?


Evgeni Golov: mass-migrating modules inside an Ansible Collection

Monday 22nd of June 2020 07:31:05 PM

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, 2 years before Ansible had the concept of collections at all.

For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us.
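For illustration, such an ansible.cfg stanza would have looked roughly like this (the paths are hypothetical, not copied from the project; the option names are the ones mentioned above):

[defaults]
library              = /path/to/foreman-ansible-modules/plugins/modules
module_utils         = /path/to/foreman-ansible-modules/plugins/module_utils
doc_fragment_plugins = /path/to/foreman-ansible-modules/plugins/doc_fragments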

When Ansible started introducing Collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind.

However, collections are only usable in Ansible 2.8, or really 2.9: 2.8 can consume them, but the tooling around building and installing them is lacking. Because of that we've been keeping our modules usable outside of a collection.

Until recently, when we decided it's time to move on, drop that compatibility (which cost us a few headaches over time) and release a shiny 1.0.0.

One of the changes we wanted for 1.0.0 is renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something>, and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable.

But how? There is no tooling that would rename all files accordingly, adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That perfectly fits!

First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix", for the others you can have some if/elif/fi:

prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi

That defined, we need to actually have a ${old_name}. Well, that's a for loop over the modules, right?

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  …
done

While we're looping over files, let's rename them and all the files that are associated with the module:

# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py

# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
done

Now comes the really tricky part: search and replace. Let's see where we need to replace first:

  1. in the module file
    1. module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: …)
  2. in all test playbooks (e.g. foreman_example: …)
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

You've probably noticed I used ${BASE} and ${TESTS} and never defined them… Lazy me.

But here is the full script, defining the variables and looping over all the modules.

#!/bin/bash

BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml

echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)

  prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')

  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi

  echo "renaming ${old_name} to ${new_name}"

  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
  done

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
  sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}

  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done

As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones.
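For a couple of the renames above, the generated meta/runtime.yml then contains entries along these lines (the exact indentation is a choice made here for readability; the content follows directly from the echo lines in the script):

plugin_routing:
  modules:
    foreman_ptable:
      redirect: partition_table
    katello_sync:
      redirect: repository_sync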

Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job nicely done and I don't intend to need it again anyways.

Junichi Uekawa: The situation is getting better in some ways because we are opening up in Tokyo.

Monday 22nd of June 2020 08:01:04 AM
The situation is getting better in some ways because we are opening up in Tokyo. However the fear remains the same: the fear of going out vs. business as usual. I'm spending more time on my home network than ever before and I am learning more about IPv6 than before. I can probably explain MAP-E or DS-Lite to you like I never could before.

Enrico Zini: Forced italianisation of Südtirol

Sunday 21st of June 2020 10:00:00 PM
Italianization - Wikipedia (fascism, history, italy, archive.org, 2020-06-22)
Italianization (Italian: Italianizzazione; Croatian: talijanizacija; Slovene: poitaljančevanje; German: Italianisierung; Greek: Ιταλοποίηση) is the spread of Italian culture and language, either by integration or assimilation.[1][2]

Italianization of South Tyrol - Wikipedia (fascism, history, italy, archive.org, 2020-06-22)
In 1919, at the time of its annexation, the middle part of the County of Tyrol which is today called South Tyrol (in Italian Alto Adige) was inhabited by almost 90% German speakers.[1] Under the 1939 South Tyrol Option Agreement, Adolf Hitler and Benito Mussolini determined the status of the German and Ladin (Rhaeto-Romanic) ethnic groups living in the region. They could emigrate to Germany, or stay in Italy and accept their complete Italianization. As a consequence of this, the society of South Tyrol was deeply riven. Those who wanted to stay, the so-called Dableiber, were condemned as traitors while those who left (Optanten) were defamed as Nazis. Because of the outbreak of World War II, this agreement was never fully implemented. Illegal Katakombenschulen ("Catacomb schools") were set up to teach children the German language.

Prontuario dei nomi locali dell'Alto Adige - Wikipedia (fascism, history, italy, archive.org, 2020-06-22)
The Prontuario dei nomi locali dell'Alto Adige (Italian for Reference Work of Place Names of Alto Adige) is a list of Italianized toponyms for mostly German place names in South Tyrol (Alto Adige in Italian) which was published in 1916 by the Royal Italian Geographic Society (Reale Società Geografica Italiana). The list was called the Prontuario in short and later formed an important part of the Italianization campaign initiated by the fascist regime, as it became the basis for the official place and district names in the Italian-annexed southern part of the County of Tyrol.

Ettore Tolomei - Wikipedia (fascism, history, italy, people, archive.org, 2020-06-22)
Ettore Tolomei (16 August 1865, in Rovereto – 25 May 1952, in Rome) was an Italian nationalist and fascist. He was designated a Member of the Italian Senate in 1923, and ennobled as Conte della Vetta in 1937.

South Tyrol Option Agreement - Wikipedia (fascism, history, italy, archive.org, 2020-06-22)
The South Tyrol Option Agreement (German: Option in Südtirol; Italian: Opzioni in Alto Adige) was an agreement in effect between 1939 and 1943, when the native German speaking people in South Tyrol and three communes in the province of Belluno were given the option of either emigrating to neighboring Nazi Germany (of which Austria was a part after the 1938 Anschluss) or remaining in Fascist Italy and being forcibly integrated into the mainstream Italian culture, losing their language and cultural heritage. Over 80% opted to move to Germany.

Dirk Eddelbuettel: T^4 #7 and R^4 #5: R and CRAN Binaries for Ubuntu

Sunday 21st of June 2020 09:48:00 PM

A new video in both our T^4 series of video lightning talks with tips, tricks, tools, and toys is also a video in the R^4 series as it revisits a topic previously covered in the latter: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Daniel Lange: Upgrading Limesurvey with (near) zero downtime

Sunday 21st of June 2020 07:38:00 PM

Limesurvey is an online survey tool. It is very powerful and commonly used in academic environments because it is Free Software (GPLv2+) and allows for local installations, protecting the data of participants and making it possible to comply with data protection regulations. This also means there are typically no load-balanced multi-server scenarios with HA databases, but simple VMs where Limesurvey runs and needs upgrading in place.

There's an LTS branch (currently 3.x) and a stable branch (currently 4.x). There's also a 2.06 LTS branch that is restricted to paying customers. The main developers behind Limesurvey offer many services from template design to custom development to support to hosting ("Cloud", "Limesurvey Pro"). Unfortunately they also charge for easy updates called "ComfortUpdate" (currently 39€ for three months) and the manual process is made a bit cumbersome to make the "ComfortUpdate" offer more attractive.

Due to Limesurvey being an old code base and UI elements not being clearly separated, most serious use cases will end up patching files and symlinking logos around template directories. That conflicts a bit with the opaque "ComfortUpdate" process where you push a button and then magic happens. Or you have downtime and a recovery case while surveys are running.

If you do not intend to use the "ComfortUpdate" offering, you can prevent Limesurvey from connecting to http://comfortupdate.limesurvey.org daily by adding the updatable stanza as in line 14 to limesurvey/application/config/config.php:

  1. return array(
  2.  [...]
  3.          // Use the following config variable to set modified optional settings copied from config-defaults.php
  4.         'config'=>array(
  5.         // debug: Set this to 1 if you are looking for errors. If you still get no errors after enabling this
  6.         // then please check your error-logs - either in your hosting provider admin panel or in some /logs directory
  7.         // on your webspace.
  8.         // LimeSurvey developers: Set this to 2 to additionally display STRICT PHP error messages and get full access to standard templates
  9.                 'debug'=>0,
  10.                 'debugsql'=>0, // Set this to 1 to enanble sql logging, only active when debug = 2
  11.                 // Mysql database engine (INNODB|MYISAM):
  12.                  'mysqlEngine' => 'MYISAM'
  13. ,               // Update default LimeSurvey config here
  14.                 'updatable' => false,
  15.         )
  16. );

The comma on line 13 is placed like that in the current default Limesurvey config.php, so don't let yourself get confused. Every item in a PHP array must end with a comma; it can be on the next line.

The basic principle of low-risk, near-zero-downtime, in-place upgrades is the following (a rough shell sketch of these steps follows the list):

  1. Create a diff between the current release and the target release
  2. Inspect the diff
  3. Make backups of the application webroot
  4. Patch a copy of the application in-place
  5. (optional) stop the web server
  6. Make a backup of the production database
  7. Move the patched application to the production webroot
  8. (if 5) Start the webserver
  9. Upgrade the database (if needed)
  10. Check the application
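As a sketch only: the paths, the release layout, and the use of MySQL/mysqldump below are assumptions for illustration, not details from the article.

# sketch of steps 1-4, 6 and 7; assumes the current and target releases are
# unpacked in ./limesurvey-current and ./limesurvey-target, and the production
# webroot is /var/www/limesurvey
set -e

# 1+2: create and inspect the diff between the two releases
diff -ruN limesurvey-current limesurvey-target > upgrade.diff
less upgrade.diff

# 3+4: back up the webroot, then patch a copy of it
cp -a /var/www/limesurvey /var/backups/limesurvey-$(date +%F)
cp -a /var/www/limesurvey /var/www/limesurvey-new
patch -d /var/www/limesurvey-new -p1 < upgrade.diff

# 6: back up the production database (MySQL/MariaDB assumed)
mysqldump limesurvey > /var/backups/limesurvey-db-$(date +%F).sql

# 7: swap the patched copy into place
mv /var/www/limesurvey /var/www/limesurvey-old
mv /var/www/limesurvey-new /var/www/limesurvey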

So, in detail:

Continue reading "Upgrading Limesurvey with (near) zero downtime"

Molly de Blanc: Fire

Sunday 21st of June 2020 04:32:54 PM

The world is on fire.

I know many of you are either my parents’ friends or here for the free software thoughts, but rather than listen to me, I want you to listen to Black voices in these fields.

If you’re not Black, it’s your job to educate yourself on issues that affect Black people all over the world and the way systems are designed to benefit White Supremacy. It is our job to acknowledge that racism is a problem — whether it appears as White Supremacy, Colonialism, or something as seemingly banal as pay gaps.

We must make space for Black voices. We must make space for Black Women. We must make space for Black trans lives. We must do this in technology. We must build equity. We must listen.

I know I have a platform. It’s one I value highly because I’ve worked so hard for the past twelve years to build it against sexist attitudes in my field (and the world). However, it is time for me to use that platform for the voices of others.

Please pay attention to Black voices in tech and FOSS. Do not just expect them to explain diversity and inclusion, but go to them for their expertise. Pay them respectful salaries. Mentor Black youth and Black people who want to be involved. Volunteer or donate to groups like Black Girls Code, The Last Mile, and Resilient Coders.

If you’re looking for mentorship, especially around things like writing, speaking, community organizing, or getting your career going in open source, don’t hesitate to reach out to me. Mentorship can be a lasting relationship, or a brief exchange around a specific issue or event. If I can’t help you, I’ll try to connect you to someone who can.

We cannot build the techno-utopia unless everyone is involved.

Dirk Eddelbuettel: RcppGSL 0.3.8: More fixes and polish

Sunday 21st of June 2020 02:49:00 PM

Release 0.3.8 of RcppGSL is now getting onto CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Peter Carbonetto let us know in issue #25 that the included example now showed linker errors on (everybody’s favourite CRAN platform) Slowlaris. Kidding aside, the added compiler variety really has benefits because we were indeed missing a good handful or two of inline statements in the headers—which our good friends g++ and clang++ apparently let us get away with. This has been fixed, and a little bit of the usual package polish and cleanup has been added; see the list of detailed changes below.

Changes in version 0.3.8 (2020-06-21)
  • A few missing inline statements were added to the headers fixing a (genuine) error that was seen only on Solaris (Dirk).

  • The nice colNorm example is now in a file by itself, the previous versions are off in a new file colNorm_old.cpp (Dirk).

  • The README.me now sports two new badges (Dirk).

  • Travis CI was updated to 'bionic' and R 4.0 (Dirk).

Special thanks also to CRAN for a super-smooth and fully automated processing of a package with both compiled code and two handfuls of reverse dependencies.

Courtesy of CRANberries, a summary of changes to the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dima Kogan: OpenCV C API transition. A rant.

Saturday 20th of June 2020 07:42:00 AM

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore.

OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while.

At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures; like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years.

So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with extern "C" to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:

#include <opencv2/calib3d.hpp>
#include <stdio.h>

int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx, 0, cx,
          0, fy, cy,
          0, 0, 1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);

    double pp[3] = {1., 2., 10.};
    double qq[2] = {444, 555};
    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);

    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);

    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);

    fprintf(stderr,
            "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr,
            "opencv says: %f %f\n", qq[0], qq[1]);

    return 0;
}

This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:

$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst

manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000

Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:

double qq[2] = {444, 555};
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);

This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, not at build time, or run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither.

Stepping through the code revealed the problem. cv::projectPoints() looks like this:

void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....

I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Issues:

  • The OutputArray object knows it already has a buffer, and they could just use it instead of allocating a new one
  • If for some reason my buffer smells bad, they could complain to tell me they're ignoring it to save me the trouble of debugging, and then bitching about it on the internet
  • I think dynamic memory allocation smells bad
  • Doing it this way means the new function isn't a drop-in replacement for the old function

Well that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go.

So I go poking back into their code to grab what I need, and here's what I see:

static void cvProjectPoints2Internal( const CvMat* objectPoints,
                                      const CvMat* r_vec,
                                      const CvMat* t_vec,
                                      const CvMat* A,
                                      const CvMat* distCoeffs,
                                      CvMat* imagePoints,
                                      CvMat* dpdr CV_DEFAULT(NULL),
                                      CvMat* dpdt CV_DEFAULT(NULL),
                                      CvMat* dpdf CV_DEFAULT(NULL),
                                      CvMat* dpdc CV_DEFAULT(NULL),
                                      CvMat* dpdk CV_DEFAULT(NULL),
                                      CvMat* dpdo CV_DEFAULT(NULL),
                                      double aspectRatio CV_DEFAULT(0) )
{
    ...
}

Looks familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words.

Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment)

// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2 = x*x + y*y;
        double r4 = r2*r2;
        double r6 = r4*r2;
        double a1 = 2*x*y;
        double a2 = r2 + 2*x*x;
        double a3 = r2 + 2*y*y;
        double cdist = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;

        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                 k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                 k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;
            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;
                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8]  = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8]  = fy*0;  //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9]  = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9]  = fy*0;  //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;  //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;  //s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}

This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them.
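For completeness, a minimal sketch (not part of the function above; the helper name is mine) of why those gradients are trivial: since q.x = xd*fx + cx and q.y = yd*fy + cy, the 2x4 Jacobian for each point depends only on the distorted normalized coordinates:

// Gradients of the projection with respect to the core intrinsics
// (fx,fy,cx,cy), filled from the distorted normalized coordinates (xd,yd)
static void dq_dintrinsics_core(double* dq_dcore, // output: 2x4 row-major Jacobian
                                double xd, double yd)
{
    // row 0: dq.x / d(fx,fy,cx,cy)
    dq_dcore[0] = xd;  dq_dcore[1] = 0.;  dq_dcore[2] = 1.;  dq_dcore[3] = 0.;
    // row 1: dq.y / d(fx,fy,cx,cy)
    dq_dcore[4] = 0.;  dq_dcore[5] = yd;  dq_dcore[6] = 0.;  dq_dcore[7] = 1.;
}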

So now I don't compile or link against OpenCV, my code builds and runs on Debian/sid, and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening.

Alright. Rant over.

Ingo Juergensmann: Jitsi Meet and ejabberd

Friday 19th of June 2020 05:26:21 PM

Since the more or less global lockdown caused by Covid-19 there has been a lot of talk about video conferencing solutions that can be used, for example, by people trying to coordinate with coworkers while working from home. One of these solutions is Jitsi Meet, which is NOT packaged in Debian. But there are Debian packages provided by Jitsi itself.

Jitsi relies on an XMPP server. You can see the network overview in the docs. Jitsi itself uses Prosody as its XMPP server and its docs only cover that one, but it basically doesn't matter which XMPP server you want to use. The only catch is that you can't follow the official Jitsi documentation when you are not using Prosody but, for example, ejabberd instead. As always, it's sometimes difficult to find the correct/best unofficial documentation or how-to, so I'll try to describe what helped me in configuring Jitsi Meet with ejabberd as the XMPP server and my own coturn STUN/TURN server...

This is not a step-by-step description, but anyway... here we go with some links:

One of the first issues I stumbled across was that my Java was too old, but this can be quickly solved with update-alternatives:
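Presumably via the standard selector command (the exact invocation is an assumption here), whose output follows:

sudo update-alternatives --config java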

There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1081       manual mode
  3            /usr/lib/jvm/jre-7-oracle-x64/bin/java           316        manual mode

It was set to jre-7, but I guess this was from years ago when I ran OpenFire as XMPP server.

If something is not working with Jitsi Meet, it helps not only to watch the log files but also to open the Debug Console in your web browser. That way I caught some XMPP failures and saw that it was trying to connect to some components for which the DNS records were missing:

meet IN A yourIP
chat.meet IN A yourIP
focus.meet IN A yourIP
pubsub.meet IN A yourIP
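A quick sanity check that these records resolve (a sketch, assuming the zone is domain.net, so the full names are meet.domain.net and so on):

dig +short A meet.domain.net
dig +short A chat.meet.domain.net
dig +short A focus.meet.domain.net
dig +short A pubsub.meet.domain.net

Each of these should print yourIP.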

Of course you also need to add some config to your ejabberd.yml:

host_config:
  domain.net:
    auth_password_format: scram
  meet.domain.net:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_caps: {}
      mod_carboncopy: {}
      #mod_disco: {}
      mod_stun_disco:
        secret: "YOURSECRETTURNCREDENTIALS"
        services:
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: udp
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: tcp
          -
            host: yourIP # Your coturn's public address.
            type: turn
            transport: udp
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
        host: "chat.@HOST@"
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local
        db_type: sql
        host: "pubsub.@HOST@"
        ignore_pep_from_offline: false
        last_item_cache: true
        max_items_node: 5000 # For Jappix this must be set to 1000000
        plugins:
          - "flat"
          - "pep" # requires mod_caps
        force_node_config:
          "eu.siacs.conversations.axolotl.*":
            access_model: open
          ## Avoid buggy clients to make their bookmarks public
          "storage:bookmarks":
            access_model: whitelist

There is more config that needs to be done, but one of the XMPP failures I spotted in the debug console in Firefox was that it tried to connect to conference.domain.net, while I prefer to use chat.domain.net for its brevity. If you prefer conference instead of chat, then you shouldn't be affected by this. Just make sure that your config is consistent with what Jitsi Meet wants to connect to. Maybe not all of the above lines are necessary, but this works for me.
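After changing ejabberd.yml the new settings need to be loaded; a minimal sketch, assuming the stock ejabberdctl tool:

ejabberdctl reload_config   # re-read ejabberd.yml without a full restart
ejabberdctl status          # confirm the node is still up afterwards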

Jitsi config is like this for me with short domains (/etc/jitsi/meet/meet.domain.net-config.js):

var config = {

    hosts: {
        domain: 'domain.net',
        anonymousdomain: 'meet.domain.net',
        authdomain: 'meet.domain.net',
        focus: 'focus.meet.domain.net',
        muc: 'chat.meet.domain.net'
    },

    bosh: '//meet.domain.net:5280/http-bind',
    //websocket: 'wss://meet.domain.net/ws',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@domain.net',

    useStunTurn: true,

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            //{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' },
            //{ urls: 'stun:stun.l.google.com:19302' },
            //{ urls: 'stun:stun1.l.google.com:19302' },
            //{ urls: 'stun:stun2.l.google.com:19302' },
            { urls: 'stun:turn.domain.net:5349' },
            { urls: 'stun:turn.domain.net:3478' }
        ],

....

In the above config snippet, one of the issues I solved was adding the port to the bosh setting. You can instead have your webserver forward the BOSH requests as a reverse proxy, but putting the port in the config is a quick way to test whether or not your config is working. It's easier to solve one issue after the other and make one config change at a time instead of making changes in several places at once.
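For reference, a minimal sketch of such a reverse proxy, assuming nginx in front of ejabberd's BOSH listener on port 5280 (path and port taken from the config above; adjust to your setup):

location /http-bind {
    proxy_pass       http://127.0.0.1:5280/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

With that in place the bosh setting could drop the explicit port and become '//meet.domain.net/http-bind'.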

/etc/jitsi/jicofo/sip-communicator.properties:

org.jitsi.jicofo.auth.URL=XMPP:meet.domain.net
org.jitsi.jicofo.BRIDGE_MUC=jvbbrewery@chat.meet.domain.net

 /etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.STATISTICS_INTERVAL=5000

org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=domain.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=SECRET
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@chat.meet.domain.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=videobridge1

org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=yourIP
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=yourIP
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=turn.domain.net:3478
org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0
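After editing these files the services have to be restarted to pick up the changes; a sketch, assuming the service names shipped by the Jitsi Debian packages:

systemctl restart jicofo jitsi-videobridge2
systemctl restart ejabberd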

Sometimes there might be stupid errors like (in my case) wrong hostnames like "chat.meet..domain.net" (a double dot in the domain), but you can spot those easily in the debug console of your browser.

There are tons of config options where you can easily make mistakes, but watching your logs and your debug console should really help you in sorting out these kinds of errors. The other URLs above are helpful as well and more elaborate than my few lines here. Especially Mike Kuketz has some advanced configuration tips, like disabling third-party requests with "disableThirdPartyRequests: true" or limiting the number of video streams and such.
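A hedged sketch of what such hardening can look like in the same config.js (disableThirdPartyRequests is the option quoted above; channelLastN is one common way to limit video streams, and the value 4 is just an illustration):

    // don't load avatars, analytics etc. from third-party servers
    disableThirdPartyRequests: true,

    // only forward video from the N most recently active speakers
    channelLastN: 4,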

Hope this helps...


Russell Coker: Storage Trends

Friday 19th of June 2020 11:53:49 AM

In considering storage trends for the consumer side I’m looking at the current prices from MSY (where I usually buy computer parts). I know that other stores will have slightly different prices but they should be very similar as they all have low margins and wholesale prices are the main factor.

Small Hard Drives Aren’t Viable

The cheapest hard drive that MSY sells is $68 for 500G of storage. The cheapest SSD is $49 for 120G and the second cheapest is $59 for 240G. SSD is cheaper at the low end and significantly faster. If someone needed about 500G of storage there’s a 480G SSD for $97 which costs $29 more than a hard drive. With a modern PC if you have no hard drives you will notice that it’s quieter. For anyone who’s buying a new PC spending an extra $29 is definitely worthwhile for the performance, low power use, and silence.

The cheapest 1TB disk is $69 and the cheapest 1TB SSD is $159. Saving $90 on the cost of a new PC probably isn’t worthwhile.

For 2TB of storage the cheapest options are Samsung NVMe for $339, Crucial SSD for $335, or a hard drive for $95. Some people would choose to save $244 by getting a hard drive instead of NVMe, but if you are getting a whole system then allocating $244 to NVMe instead of a faster CPU would probably give more benefits overall.

Computer stores typically have small margins and computer parts tend to quickly either become cheaper or be obsoleted by better parts. So stores don’t want to stock parts unless they will sell quickly. Disks smaller than 2TB probably aren’t going to be profitable for stores for very long. The trend of SSD and NVMe becoming cheaper is going to make 2TB disks non-viable in the near future.

NVMe vs SSD

M.2 NVMe devices are at comparable prices to SATA SSDs. For some combinations of quality and capacity NVMe is about 50% more expensive and for some it’s slightly cheaper (EG Intel 1TB NVMe being cheaper than Samsung EVO 1TB SSD). Last time I checked about half the motherboards on sale had a single M.2 socket so for a new workstation that doesn’t need more than 2TB of storage (the largest NVMe that MSY sells) it wouldn’t make sense to use anything other than NVMe.

The benefit of NVMe is NOT throughput (even though NVMe devices can often sustain over 4GB/s), it’s low latency. Workstations can’t properly take advantage of this because RAM is so cheap ($198 for 32G of DDR4) that compiles etc mostly come from cache and because most filesystem writes on workstations aren’t synchronous. For servers a large portion of writes are synchronous, for example a mail server can’t acknowledge receiving mail until it knows that it’s really on disk, so there are a lot of small writes that block server processes and the low latency of NVMe really improves performance. If you are doing a big compile on a workstation (the most common workstation task that uses a lot of disk IO) then the writes aren’t synchronised to disk and if the system crashes you will just do all the compilation again. While NVMe doesn’t give a lot of benefit over SSD for workstation use (I’ve used laptops with SSD and NVMe and not noticed a great difference) of course I still want better performance. ;)
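To make the synchronous-write point concrete, here is a minimal sketch (not taken from any particular mail server) of the pattern: the message can't be acknowledged until fsync() returns, so every message pays the full write latency of the storage device:

#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>

// Persist a received message before acknowledging it to the sending server.
// Returns 0 on success, -1 on error.
int store_message(const char* path, const char* msg, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0) return -1;
    if (write(fd, msg, len) != (ssize_t)len) { close(fd); return -1; }
    // This blocks until the data is on stable storage; the device's write
    // latency directly limits how fast mail can be accepted.
    if (fsync(fd) != 0)                      { close(fd); return -1; }
    return close(fd);
}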

Last time I checked I couldn’t easily buy a PCIe card that supported 2*NVMe cards, I’m sure they are available somewhere but it would take longer to get and probably cost significantly more than twice as much. That means a RAID-1 of NVMe takes 2 PCIe slots if you don’t have an M.2 socket on the motherboard. This was OK when I installed 2*NVMe devices on a server that had 18 disks and lots of spare PCIe slots. But for some systems PCIe slots are an issue.

My home server has all PCIe slots used by a video card and Ethernet cards and the BIOS probably won’t support booting from NVMe. It’s a Dell server so I can’t just replace the motherboard with one that has more PCIe slots and M.2 on the motherboard. As it’s running nicely and doesn’t need replacing any time soon I won’t be using NVMe for home server stuff.

Small Servers

Most servers that I am responsible for have less than 2TB of storage. For my clients I now only recommend SSD storage for small servers and am recommending SSD for replacing any failed disks.

My home server has 2*500G SSDs in a BTRFS RAID-1 for the root filesystem, and 3*4TB disks in a BTRFS RAID-1 for storing big files. I bought the SSDs when 500G SSDs were about $250 each and bought 2*4TB disks when they were about $350 each. Currently that server has about 3.3TB of space used and I could probably get it down to about 2.5TB if I deleted things I don’t really need. If I was getting storage for that server now I’d use 2*2TB SSDs and 3*1TB hard drives for the stuff that doesn’t fit on SSDs (I have some spare 1TB disks that came with servers). If I didn’t have spare hard drives I’d get 3*2TB SSDs for that sort of server which would give 3TB of BTRFS RAID-1 storage.
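For reference, a sketch of creating that sort of layout with BTRFS (device names are placeholders; with 3 devices BTRFS RAID-1 keeps 2 copies of every block spread across the set):

# 2 SSDs in RAID-1 for the root filesystem
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
# 3 disks in RAID-1 for the bulk storage
mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd /dev/sde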

Last time I checked Dell servers had a card for supporting M.2 as an optional extra so Dells probably won’t boot from NVMe without extra expense.

Ars Technica has an informative article about WD selling SMR disks as “NAS” disks [1]. The Shingled Magnetic Recording technology allows greater storage density on a platter which leads to either larger capacity or cheaper disks but at the cost of lower write performance and apparently extremely bad latency in some situations. NAS disks are supposed to be low latency as the expectation is that they will be used in a RAID array and kicked out of the array if they have problems. There are reports of ZFS kicking SMR disks from RAID sets. I think this will end the use of hard drives for small servers. For a server you don’t want to deal with this sort of thing, by definition when a server goes down multiple people will stop work (small server implies no clustering). Spending extra to get SSDs just to avoid the risk of unexpected SMR would be a good plan.

Medium Servers

The largest SSD and NVMe devices that are readily available are 2TB but 10TB disks are commodity items, there are reports of 20TB hard drives being available but I can’t find anyone in Australia selling them.

If you need to store dozens or hundreds of terabytes then hard drives have to be part of the mix at this time. There’s no technical reason why SSDs larger than 10TB can’t be made (the 2.5″ SATA form factor has more than 5* the volume of a 2TB M.2 card) and it’s likely that someone sells them outside the channels I buy from, but probably at a price higher than what my clients are willing to pay. If you want 100TB of affordable storage then a mid range server like the Dell PowerEdge T640 which can have up to 18*3.5″ disks is good. One of my clients has a PowerEdge T630 with 18*3.5″ disks in the 8TB-10TB range (we replace failed disks with the largest new commodity disks available, it used to have 6TB disks). ZFS version 0.8 introduced a “Special VDEV Class” which stores metadata and possibly small data blocks on faster media. So you could have some RAID-Z groups on hard drives for large storage and the metadata on a RAID-1 on NVMe for fast performance. For medium size arrays on hard drives having a “find /” operation take hours is not uncommon, for large arrays having it take days isn’t that uncommon. So far it seems that ZFS is the only filesystem to have taken the obvious step of storing metadata on SSD/NVMe while bulk data is on cheap large disks.
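A hedged sketch of what that looks like with OpenZFS 0.8+ (pool and device names are placeholders):

# bulk data on a RAID-Z2 of large hard drives
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# metadata on a mirrored pair of NVMe devices (the "special" VDEV class)
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
# optionally also send small data blocks to the special VDEV
zfs set special_small_blocks=32K tank

The special_small_blocks step is optional; storing metadata alone on the fast devices is what speeds up operations like “find /”.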

One problem with large arrays is that the vibration of disks can affect the performance and reliability of nearby disks. The ZFS server I run with 18 disks was originally setup with disks from smaller servers that never had ZFS checksum errors, but when disks from 2 small servers were put in one medium size server they started getting checksum errors presumably due to vibration. This alone is a sufficient reason for paying a premium for SSD storage.

Currently the cost of 2TB of SSD or NVMe is between the prices of 6TB and 8TB hard drives, and the ratio of price/capacity for SSD and NVMe is improving dramatically while the increase in hard drive capacity is slow. 4TB SSDs are available for $895 compared to a 10TB hard drive for $549, so it’s 4* more expensive on a price per TB. This is probably good for Windows systems, but for Linux systems where ZFS and “special VDEVs” is an option it’s probably not worth considering. Most Linux user cases where 4TB SSDs would work well would be better served by smaller NVMe and 10TB disks running ZFS. I don’t think that 4TB SSDs are at all popular at the moment (MSY doesn’t stock them), but prices will come down and they will become common soon enough. Probably by the end of the year SSDs will halve in price and no hard drives less than 4TB will be viable.

For rack mounted servers 2.5″ disks have been popular for a long time. It’s common for vendors to offer 2 versions of a rack mount server for 2.5″ and 3.5″ disks where the 2.5″ version takes twice as many disks. If the issue is total storage in a server 4TB SSDs can give the same capacity as 8TB HDDs.

SMR vs Regular Hard Drives

Rumour has it that you can buy 20TB SMR disks, I haven’t been able to find a reference to anyone who’s selling them in Australia (please comment if you know who sells them and especially if you know the price). I expect that the ZFS developers will soon develop a work-around to solve the problems with SMR disks. Then arrays of 20TB SMR disks with NVMe for “special VDEVs” will be an interesting possibility for storage. I expect that SMR disks will be the majority of the hard drive market by 2023 – if hard drives are still on the market. SSDs will be large enough and cheap enough that only SMR disks will offer enough capacity to be worth using.

I think that it is a possibility that hard drives won’t be manufactured in a few years. The volume of a 3.5″ disk is significantly greater than that of 10 M.2 devices so current technology obviously allows 20TB of NVMe or SSD storage in the space of a 3.5″ disk. If the price of 16TB NVMe and SSD devices comes down enough (to perhaps 3* the price of a 20TB hard drive) almost no-one would want the hard drive and it wouldn’t be viable to manufacture them.

It’s not impossible that in a few years time 3D XPoint and similar fast NVM technologies occupy the first level of storage (the ZFS “special VDEV”, OS swap device, log device for database servers, etc) and NVMe occupies the level for bulk storage with no space left in the market for spinning media.

Computer Cases

For servers I expect that models supporting 3.5″ storage devices will disappear. A 1RU server with 8*2.5″ storage devices or a 2RU server with 16*2.5″ storage devices will probably be of use to more people than a 1RU server with 4*3.5″ or a 2RU server with 8*3.5″.

My first IBM PC compatible system had a 5.25″ hard drive, a 5.25″ floppy drive, and a 3.5″ floppy drive in 1988. My current PC is almost the same size and has a DVD drive (that I almost never use), 5 other 5.25″ drive bays that have never been used, and 5*3.5″ drive bays that I have never used (I have only used 2.5″ SSDs). It would make more sense to have PC cases designed around 2.5″ and maybe 3.5″ drives, with no more than one 5.25″ drive bay.

The Intel NUC SFF PCs are going in the right direction. Many of them only have a single storage device but some of them have 2*M.2 sockets allowing RAID-1 of NVMe and some of them support ECC RAM so they could be used as small servers.

A USB DVD drive costs $36, it doesn’t make sense to have every PC designed around the size of an internal DVD drive that will probably only be used to install the OS when a $36 USB DVD drive can be used for every PC you own.

The only reason I don’t have a NUC for my personal workstation is that I get my workstations from e-waste. If I was going to pay for a PC then a NUC is the sort of thing I’d pay to have on my desk.

Related posts:

  1. Finding Storage Performance Problems Here are some basic things to do when debugging storage...
  2. New Storage Developments Eweek has an article on a new 1TB Seagate drive....
  3. Cheap Bulk Storage The Problem Some of my clients need systems that store...

Reproducible Builds (diffoscope): diffoscope 148 released

Friday 19th of June 2020 12:00:00 AM

The diffoscope maintainers are pleased to announce the release of diffoscope version 148. This version includes the following changes:

[ Daniel Fullmer ]
* Fix a regression in the CBFS comparator due to changes in our_check_output.

[ Chris Lamb ]
* Add a remark in the deb822 handling re. potential security issue in the
  .changes, .dsc, .buildinfo comparator.

You can find out more by visiting the project homepage.
