Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: AsioHeaders 1.16.1-1 on CRAN

Tuesday 7th of July 2020 10:28:00 PM

An updated version of the AsioHeaders package arrived on CRAN today (after a few days of “rest” in the incoming directory of CRAN). Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).
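For illustration, here is a minimal sketch of how the headers get used (the LinkingTo mechanism is standard for header-only packages; the package name in the comment is just an example):

# Install the headers once from CRAN
Rscript -e 'install.packages("AsioHeaders")'
# A client package then only declares, in its DESCRIPTION file,
#   LinkingTo: AsioHeaders
# and no linking step is needed at install time.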

This release brings a new upstream version. Its changes required a corresponding change in one of the (only) three reverse dependencies, which delayed the CRAN admission by a few days.

Changes in version 1.16.1-1 (2020-06-28)
  • Upgraded to Asio 1.16.1 (Dirk in #5).

  • Updated README.md with standard set of badges

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gunnar Wolf: Raspberry Pi 4, now running your favorite distribution!

Tuesday 7th of July 2020 07:00:00 AM

Great news, great news! New images available! Grab them while they are hot!

With lots of help (say, all of the heavy lifting) from the Debian Raspberry Pi Maintainer Team, we have finally managed to provide support for auto-building and serving bootable minimal Debian images for the Raspberry Pi 4 family of single-board, cheap, small, hacker-friendly computers!

The Raspberry Pi 4 was released close to a year ago, and is a very major bump in the Raspberry lineup; it took us this long because we needed to wait until all of the relevant bits entered Debian (mostly the kernel bits). The images are shipping a kernel from our Unstable branch (currently, 5.7.0-2), and are less tested and more likely to break than our regular, clean-Stable images. Nevertheless, we do expect them to be useful for many hackers –and even end-users– throughout the world.

The images we are generating are very minimal: they carry basically a minimal Debian install. Once downloaded, of course, you can install whatever your heart desires (because… Face it, if your heart desires it, it must be free and of high quality. It must already be in Debian!)
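For reference, writing a downloaded image to an SD card is a one-liner; a sketch, assuming an xz-compressed image (the file name and target device are placeholders, and dd overwrites the target, so double-check it):

# Placeholders: adjust the image name and target device for your system
xzcat raspi_4.img.xz | sudo dd of=/dev/mmcblk0 bs=64k oflag=dsync status=progress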

Oh — And very important: If you get the 8GB model (currently the top-of-the-line RPi4), it will still not have USB support, due to a change in its memory layout (that means, no local keyboard/mouse ☹). We are working on getting it ironed out!

Dirk Eddelbuettel: Rcpp 1.0.5: Several Updates

Monday 6th of July 2020 03:43:00 PM

Right on the heels of the news of 2000 CRAN packages using Rcpp (and also hitting 12.5% of CRAN packages, or one in eight), we are happy to announce release 1.0.5 of Rcpp. Since the ten-year anniversary and the 1.0.0 release in November 2018, we have been sticking to a four-month release cycle. The last release has, however, left us with a particularly bad taste due to some rather peculiar interactions with a very small (but ever so vocal) portion of the user base. So going forward, we will change two things. First off, we reiterate that we already make rolling releases. Each minor snapshot of the main git branch gets a point release. Between release 1.0.4 and this 1.0.5 release, there were in fact twelve of those. Each and every one of these was made available via the drat repo, and we will continue to do so going forward. Releases to CRAN, however, are real work. If they then end up with as much nonsense as the last release 1.0.4, we think it is appropriate to slow things down some more, so we intend to switch to a six-month cycle. As mentioned, interim releases are always just one install.packages() call with a properly set repos argument away.
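Such an interim install looks roughly like this (a sketch; the drat URL is an assumption here, and the Rcpp README documents the canonical repos value):

# Hypothetical drat URL; verify against the Rcpp README before use
Rscript -e 'install.packages("Rcpp", repos = c("https://eddelbuettel.github.io/drat", "https://cloud.r-project.org"))'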

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2002 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 203 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at around one million downloads per month.

This release again features a number of pull requests by different contributors covering the full range of API improvements, attributes enhancements, changes to Sugar and helper functions, extended documentation as well as continuous integration deployment. See the list below for details.

Changes in Rcpp patch release version 1.0.5 (2020-07-01)
  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

    • One call to abs is now properly namespaced with std:: (Uwe Korn in #1069).

    • String object memory preservation was corrected/simplified (Kevin in #1082).

  • Changes in Rcpp Attributes:

    • Empty strings are no longer passed to R CMD SHLIB, which was seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).

    • The short_file_name() helper function is safer with respect to temporaries (Kevin in #1067 fixing #1066, and #1071 fixing #1070).

  • Changes in Rcpp Sugar:

    • Two sample() objects are now standard vectors and not R_alloc created (Dirk in #1075 fixing #1074).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() adjusts for a (documented) change in R 4.0.0 (Dirk in #1088 fixing #1087).
  • Changes in Rcpp Documentation:

    • The pdf file of the earlier introduction is again typeset with bibliographic information (Dirk).

    • A new vignette describing how to package C++ libraries has been added (Dirk in #1078 fixing #1077).

  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

    • The exceptions test is now partially skipped on Solaris as it already is on Windows (Dirk in #1065).

    • The default CI runner was upgraded to R 4.0.0 (Dirk).

    • The CI matrix spans R 3.5, 3.6, r-release and r-devel (Dirk).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2455 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: Review: Roku Express

Monday 6th of July 2020 09:55:37 AM

I don't generally write consumer reviews, here or elsewhere; but I have been so impressed by this one I wanted to mention it.

For Holly's birthday this year, taking place under Lockdown, we decided to buy a year's subscription to "Disney+". Our current TV receiver (a Humax Freesat box) doesn't support it, so I needed to find some other way to get it onto the TV.

After a short bit of research, I bought the "Roku Express" streaming media player. This is the most basic streamer that Roku make, bottom of their range. For a little bit more money you can get a model which supports 4K (although my TV obviously doesn't: it, and the basic Roku, top out at 1080p) and a bit more gets you a "stick" form-factor and a Bluetooth remote (rather than line-of-sight IR).

I paid £20 for the most basic model and it Just Works. The receiver is very small but sits comfortably next to my satellite receiver-box. I don't have any issues with line-of-sight for the IR remote (and I rely on a regular IR remote for the TV itself of course). It supports Disney+, but also all the other big-name services, some of which we already use (Netflix, YouTube, BBC iPlayer) and some of which we didn't, since it was too awkward to access them (Google Play, Amazon Prime Video). It has now largely displaced the FreeSat box for accessing streaming content because it works so well and everything is in one place.

There's a phone App that remote-controls the box and works even better than the physical remote: it can offer a full phone-keyboard at times when you need to input text, and can mute the TV audio and put it out through headphones attached to the phone if you want.

My aging Plasma TV suffers from burn-in from static pictures. If left paused for a duration, the Roku goes to a screensaver that keeps the whole frame moving. The FreeSat box doesn't do this. My Blu-ray player does, but (I think) it retains some static elements.

Reproducible Builds: Reproducible Builds in June 2020

Monday 6th of July 2020 08:11:05 AM

Welcome to the June 2020 report from the Reproducible Builds project. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.
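One way to see the idea concretely: build the same source twice and compare checksums. A minimal sketch for a Debian package (paths illustrative):

# First build from pristine source
dpkg-buildpackage -b -us -uc
sha256sum ../*.deb > first-run.sha256
# Rebuild from another pristine copy of the same source, then compare:
sha256sum -c first-run.sha256   # any mismatch means the build is not reproducible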

News

The GitHub Security Lab published a long article on the discovery of a piece of malware designed to backdoor open source projects that used the build process and its resulting artifacts to spread itself. In the course of their analysis and investigation, the GitHub team uncovered 26 open source projects that were backdoored by this malware and were actively serving malicious code. (Full article)

Carl Dong from Chaincode Labs uploaded a presentation on Bitcoin Build System Security and reproducible builds to YouTube:

The app intended to trace infection chains of Covid-19 in Switzerland published information on how to perform a reproducible build.

The Reproducible Builds project has received funding in the past from the Open Technology Fund (OTF) to reach specific technical goals, as well as to enable the project to meet in-person at our summits. The OTF has also assisted countless other organisations that promote transparency and civil society, as well as those that provide tools to circumvent censorship and repressive surveillance. However, the OTF has now been threatened with closure. (More info)

It was noticed that Reproducible Builds was mentioned in the book End-user Computer Security by Mark Fernandes (published by WikiBooks) in the section titled Detection of malware in software.

Lastly, reproducible builds and other ideas around the software supply chain were mentioned in a recent episode of the Ubuntu Podcast in a wider discussion about Snap and application stores (at approx 16:00).


Distribution work

In the ArchLinux distribution, a goal to remove .doctrees from installed files was created via Arch’s ‘TODO list’ mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default.
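For reference, Sphinx keeps those caches out of the packaged output when the doctree directory is pointed elsewhere; a sketch (paths illustrative):

# -d stores the .doctree cache outside the HTML tree that gets installed/packaged
sphinx-build -b html -d /tmp/doctrees docs/ build/html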

Dimitry Andric was able to determine why the reproducibility status of FreeBSD’s base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler []. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [].

For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan “janneke” Nieuwenhuizen posted that they had further reduced their “bootstrap seed” to 25% which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:

Debian

Holger Levsen filed three bugs (#961857, #961858 & #961859) against the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [][][].
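For those wanting to try it, a minimal run looks like this (a sketch, assuming reproducible-check is available via the devscripts package, as on recent Debian releases):

apt install devscripts   # provides reproducible-check (assumption: recent Debian)
reproducible-check       # reports the reproducibility status of installed packages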

Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of hundreds of packages that use the CMake build system, which led to a number of tests and next steps. []

Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system’s approach to this []. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds.

47 reviews of Debian packages were added, 37 were updated and 69 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [] and reclassified the paths_vary_due_to_usrmerge issue as deterministic, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues. [][][]

Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).

Development

On our website this month, Arnout Engelen added a link to our Mastodon account [] and moved the SOURCE_DATE_EPOCH git log example to another section []. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [].

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [] and adjusted a related dependency to avoid a potential circular dependency [].
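As an example of what it does, here is a sketch of normalising timestamps in a single archive (the file name is illustrative, and the option name should be checked against the manual page):

# Clamp embedded timestamps to a fixed point in time (seconds since the epoch)
strip-nondeterminism --timestamp 1577836800 example.jar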

Upstream work

The Reproducible Builds project attempts to fix unreproducible packages and we try to send all of our patches upstream. This month, we wrote a large number of such patches, including:

Bernhard M. Wiedemann also filed reports for frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).

diffoscope

diffoscope is our in-depth ‘diff-on-steroids’ utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility, but rather provides helpful and human-readable guidance for packages that are not reproducible, instead of relying on essentially-useless binary diffs.
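As an illustration, a typical invocation compares two builds of the same artifact (the file names here are placeholders):

# Produces a text report on stdout; --html writes an HTML report instead
diffoscope --html report.html build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb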

This month, Chris Lamb uploaded versions 147, 148 and 149 to Debian and made the following changes:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (i.e. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. []
    • Correct detection of JSON files due to a missing call to File.recognizes that checks candidates against file(1). []
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an ‘info’ level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. []
    • Don’t mask an existing test. []
  • Codebase improvements:

    • Replace obscure references to WF with “Wagner-Fischer” for clarity. []
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of ‘missing’ files. []
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. []
    • Drop a large number of unused imports. [][][][][]
    • Make many code sections more Pythonic. [][][][]
    • Prevent some variable aliasing issues. [][][]
    • Use some tactical f-strings to tidy up code [][] and remove explicit u"unicode" strings [].
    • Refactor a large number of routines for clarity. [][][][]

trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb also corrected the location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called which had caused an accidental resource exhaustion. (#12)

In addition Jean-Romain Garnier made the following changes:

  • Fix the --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
  • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
  • Make child pages open in new window in the --html-dir presenter format. []
  • Improve the diffs in the --html-dir format. [][]

Lastly, Daniel Fullmer fixed the Coreboot filesystem comparator [] and Mattia Rizzolo prevented warnings from the tlsh fuzzy-matching library during tests [] and tweaked the build system to remove an unwanted .build directory []. For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [] and later 148 [].

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Prevent bogus failure emails from rsync2buildinfos.debian.net every night. []
    • Merge a fix from David Bremner’s database of .buildinfo files to include a fix regarding comparing source vs. binary package versions. []
    • Only run the Debian package rebuilder job twice per day. []
    • Increase bullseye scheduling. []
  • System health status page:

    • Add a note displaying whether a node needs to be rebooted for a kernel upgrade. []
    • Fix sorting order of failed jobs. []
    • Expand footer to link to the related Jenkins job. []
    • Add archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to ‘known broken’ jobs. []
    • Add HTML <meta> header to refresh the page every 5 minutes. []
    • Count the number of ignored jobs [], ignore permanently ‘known broken’ jobs [] and jobs on ‘known offline’ nodes [].
    • Only consider the ‘known offline’ status from Git. []
    • Various output improvements. [][]
  • Tools:

    • Switch URLs for the Grml Live Linux and PureOS package sets. [][]
    • Don’t try to build a disorderfs Debian source package. [][][]
    • Stop building diffoscope as we are moving this to Salsa. [][]
    • Merge several “is diffoscope up-to-date on every platform?” test jobs into one [] and fail less noisily if the version in Debian cannot be determined [].

In addition: Marcus Hoffmann was added as a maintainer of the F-Droid reproducible checking components [], Jelle van der Waa updated the “is diffoscope up-to-date in every platform” check for Arch Linux and diffoscope [], Mattia Rizzolo backed up a copy of a “remove script” run on the Codethink-hosted ‘jump server‘ [] and Vagrant Cascadian temporarily disabled the fixfilepath on bullseye, to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue.

Lastly, the usual build node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][][][][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Enrico Zini: COVID-19 and Capitalism

Sunday 5th of July 2020 10:00:00 PM
  • Astroturfing: How To Spot A Fake Movement (capitalism, covid19, news, politics; archive.org 2020-07-06): Crowds on Demand - Protests, Rallies and Advocacy. "If the Reopen America protests seem a little off to you, that's because they are. In this video we're going to talk about astroturfing and how insidious it i..."
  • Volunteers 3D-Print Unobtainable $11,000 Valve For $1 To Keep Covid-19 Patients Alive; Original Manufacturer Threatens To Sue (capitalism, covid19, health, news; archive.org 2020-07-06): Volunteers produce 3D-printed valves for life-saving coronavirus treatments. "Techdirt has just written about the extraordinary legal action taken against a company producing Covid-19 tests. Sadly, it's not the only example of some individuals putting profits before people. Here's a story from Italy, which is..."
  • Germany tries to stop US from luring away firm seeking coronavirus vaccine (capitalism, covid19, health, news; archive.org 2020-07-06): "Berlin is trying to stop Washington from persuading a German company seeking a coronavirus vaccine to move its research to the United States."
  • He Has 17,700 Bottles of Hand Sanitizer and Nowhere to Sell Them (capitalism, covid19, news; archive.org 2020-07-06): "Amazon cracked down on coronavirus price gouging. Now, while the rest of the world searches, some sellers are holding stockpiles of sanitizer and masks."
  • Theranos vampire lives on: Owner of failed blood-testing biz's patents sues maker of actual COVID-19-testing kit (capitalism, covid19, news; archive.org 2020-07-06): "And 3D-printed valve for breathing machine sparks legal threat"
  • How an Austrian ski paradise became a COVID-19 hotspot (capitalism, covid19, news; archive.org 2020-07-06): "Ischgl, an Austrian ski resort, has achieved tragic international fame: hundreds of tourists are believed to have contracted the coronavirus there and taken it home with them. The Tyrolean state government is now facing serious criticism. EURACTIV Germany reports."
  • Hospitals Need to Repair Ventilators. Manufacturers Are Making That Impossible (capitalism, covid19, health, news; archive.org 2020-07-06): "We are seeing how the monopolistic repair and lobbying practices of medical device companies are making our response to the coronavirus pandemic harder."
  • Homeless people in Las Vegas sleep 6 feet apart in parking lot as thousands of hotel rooms sit empty (capitalism, covid19, news, privilege; archive.org 2020-07-06): "Las Vegas, Nevada has come under criticism after reportedly setting up a temporary homeless shelter in a parking lot complete with social distancing barriers."

Thorsten Alteholz: My Debian Activities in June 2020

Sunday 5th of July 2020 02:13:06 PM

FTP master

This month I accepted 377 packages and rejected 30. The overall number of packages that got accepted was 411.

Debian LTS

This was the seventy-second month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads of:

  • [DLA 2255-1] libtasn1-6 security update for one CVE
  • [DLA 2256-1] libtirpc security update for one CVE
  • [DLA 2257-1] pngquant security update for one CVE
  • [DLA 2258-1] zziplib security update for eight CVEs
  • [DLA 2259-1] picocom security update for one CVE
  • [DLA 2260-1] mcabber security update for one CVE
  • [DLA 2261-1] php5 security update for one CVE

I started to work on curl as well but did not upload a fixed version, so this has to go to ELTS now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-fourth ELTS month.

Unfortunately, in this last month of Wheezy ELTS I did not find any package for which to fix a CVE, so during my small allocated time I didn’t upload anything.

But at least I did some days of frontdesk duties and updated my working environment for the new ELTS Jessie.

Other stuff

I uploaded a new upstream version of …

Russell Coker: Debian S390X Emulation

Sunday 5th of July 2020 02:58:53 AM

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X – the latest 64bit version of the IBM mainframe. Here’s how to do it. I tested on a host running Debian/Unstable, but Buster should work in the same way.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp

Then visit the Debian Netinst page [1] to download the S390X net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The package qemu-system-misc has the program for emulating an S390X system (among many others); the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment), which you need in order to run debootstrap. The following commands should be most of what you need.

# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap
# List the support for different binary formats
update-binfmts --display
# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1
# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage
# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for minimal install do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf
# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade
# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd
# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x
# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END
# clean up
umount /mnt/tmp
umount /mnt/tmp2
# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Some of the above can be considered more as pseudo-code in shell script rather than an exact way of doing things. While you can copy and paste all the above into a command line and have a reasonable chance of having it work, I think it would be better to look at each command and decide whether it’s right for you and whether you need to alter it slightly for your system.

To run qemu as non-root you need to have a helper program with extra capabilities to setup bridged networking. I’ve included that in the explanation because I think it’s important to have all security options enabled.

The “-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0” part is to give entropy to the VM from the host, otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the “ccw” is the difference).

I’m not sure if “noresume” on the kernel command line is required, but it doesn’t do any harm. The “net.ifnames=0” stops systemd from renaming Ethernet devices. For the virtual networking the “ccw” again is a difference from other architectures.

Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses based text display, you can then login as root and should be able to run “dhclient eth0” and other similar commands to setup networking and allow ssh logins.

qemu-system-s390x \
 -drive format=raw,file=/vmstore/s390x,if=virtio \
 -object rng-random,filename=/dev/urandom,id=rng0 \
 -device virtio-rng-ccw,rng=rng0 \
 -nographic -m 1500 -smp 2 \
 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x \
 -initrd /boot/s390x/initrd.img-4.19.0-9-s390x \
 -curses \
 -append "net.ifnames=0 noresume root=/dev/vda ro" \
 -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 \
 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the “lockdown=confidentiality” kernel security option even though it’s not supported in 4.19 kernels, it doesn’t do any harm and when I upgrade systems to newer kernels I won’t have to remember to add it.

qemu-system-s390x \
 -drive format=raw,file=/vmstore/s390x,if=virtio \
 -drive format=raw,file=/vmswap/s390x,if=virtio \
 -object rng-random,filename=/dev/urandom,id=rng0 \
 -device virtio-rng-ccw,rng=rng0 \
 -nographic -m 1500 -smp 2 \
 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x \
 -initrd /boot/s390x/initrd.img-4.19.0-9-s390x \
 -curses \
 -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" \
 -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 \
 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Try It Out

I’ve got a S390X system online for a while, “ssh root@s390x.coker.com.au” with password “SELINUX” to try it out.

PPC64

I’ve tried running a PPC64 virtual machine. I did the same things to set it up and then tried launching it with the following result:

qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"

Above is the minimal qemu command that I’m using. Below is the result; it stops after the “4.” from “4.19.0-9”. Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimum needed to demonstrate the problem.

Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
This program and the accompanying materials are made available under the terms of the BSD License available at http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
Linux ppc64le
#1 SMP Debian 4.

The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package.

Any suggestions on what I should try next would be appreciated.


Dirk Eddelbuettel: Rcpp now used by 2000 CRAN packages–and one in eight!

Saturday 4th of July 2020 10:20:00 PM

As of yesterday, Rcpp stands at exactly 2000 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages in early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018 and then 1750 packages last August. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We have now passed 12.5 percent—so one in every eight CRAN packages depends on Rcpp. Stunning. There is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks, but not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat, outright and even more so as a proportion of CRAN – as one would expect of a growth curve.

To mark the occasion, I sent out two tweets yesterday: first a shorter one with “just the numbers”, followed by a second one also containing the few calculation steps. The screenshot from the second one is below.

2000 user packages is pretty mind-boggling. We can use the progression of CRAN itself compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself had only 1000 packages, then 5000, 10000, and here we are at just over 16000 with Rcpp at 12.5% and still growing (though maybe more slowly). Amazeballs.

The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Working on updated Norwegian Bokmål edition of Debian Administrator's Handbook

Saturday 4th of July 2020 09:55:00 PM

Three years ago, the first Norwegian Bokmål edition of "The Debian Administrator's Handbook" was published. It was based on Debian Jessie. Now a new and updated version based on Buster is getting ready. Work on the updated Norwegian Bokmål edition has been going on for a few months now, and yesterday we reached the first milestone, with 100% of the texts translated. A lot of proofreading remains, of course, but a major step towards a new edition has been taken.

The book is translated by volunteers, and we would love to get some help with the proofreading. The translation uses the hosted Weblate service, and we welcome everyone to have a look and submit improvements and suggestions. There is also a proofreaders' PDF available on request; get in touch if you want to help out that way.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Russ Allbery: Review: The Light Brigade

Saturday 4th of July 2020 03:49:00 AM

Review: The Light Brigade, by Kameron Hurley

Publisher: Saga
Copyright: 2019
ISBN: 1-4814-4798-X
Format: Kindle
Pages: 355

In the wake of the Blink, which left a giant crater where São Paulo was, Dietz signed up for the military. To be a hero. To satisfy an oath of vengeance. To kill aliens.

Corporations have consumed the governments that used to run Earth and have divided the world between them. Dietz's family, before the Blink, were ghouls in Tene-Silva territory, non-citizens who scavenged a precarious life on the margins. Citizenship is a reward for loyalty and a mechanism of control. The only people who don't fit into the corporate framework are the Martians, former colonists who went dark for ten years and re-emerged as a splinter group offering to use their superior technology to repair environmental damage to the northern hemisphere caused by corporate wars. When the Blink happens, apparently done with technology far beyond what the corporations have, corporate war with the Martians is the unsurprising result.

Long-time SF readers will immediately recognize The Light Brigade as a response to Starship Troopers with far more cynical world-building. For the first few chapters, the parallelism is very strong, down to the destruction of a large South American city (São Paulo instead of Buenos Aires), a naive military volunteer, and horrific basic training. But, rather than dropships, the soldiers in Dietz's world are sent into battle via, essentially, Star Trek transporters. These still very experimental transporters send Dietz to a different mission than the one in the briefing.

Advance warning that I'm going to talk about what's happening with Dietz's drops below. It's a spoiler, but you would find out not far into the book and I don't think it ruins anything important. (On the contrary, it may give you an incentive to stick through the slow and unappealing first few chapters.)

I had so many suspension of disbelief problems with this book. So many.

This starts with the technology. The core piece of world-building is Star Trek transporters, so fine, we're not talking about hard physics. Every SF story gets one or two free bits of impossible technology, and Hurley does a good job showing the transporters through a jaundiced military eye. But, late in the book, this technology devolves into one of my least-favorite bits of SF hand-waving that, for me, destroyed that gritty edge.

Technology problems go beyond the transporters. One of the bits of horror in basic training is, essentially, torture simulators, whose goal is apparently to teach soldiers to dissociate (not that the book calls it that). One problem is that I never understood why a military would want to teach dissociation to so many people, but a deeper problem is that the mechanics of this simulation made no sense. Dietz's training in this simulator is a significant ongoing plot point, and it kept feeling like it was cribbed from The Matrix rather than something translatable into how computers work.

Technology was the more minor suspension of disbelief problem, though. The larger problem was the political and social world-building.

Hurley constructs a grim, totalitarian future, which is a fine world-building choice although I think it robs some nuance from the story she is telling about how militaries lie to soldiers. But the totalitarian model she uses is one of near-total information control. People believe what the corporations tell them to believe, or at least are indifferent to it. Huge world events (with major plot significance) are distorted or outright lies, and those lies are apparently believed by everyone. The skepticism that exists is limited to grumbling about leadership competence and cynicism about motives, not disagreement with the provided history. This is critical to the story; it's a driver behind Dietz's character growth and is required to set up the story's conclusion.

This is a model of totalitarianism that's familiar from Orwell's Nineteen Eighty-Four. The problem: The Internet broke this model. You now need North Korean levels of isolation to pull off total message control, which is incompatible with the social structure or technology level that Hurley shows.

You may be objecting that the modern world is full of people who believe outrageous propaganda against all evidence. But the world-building problem is not that some people believe the corporate propaganda. It's that everyone does. Modern totalitarians have stopped trying to achieve uniformity (because it stopped working) and instead make the disagreement part of the appeal. You no longer get half a country to believe a lie by ensuring they never hear the truth. Instead, you equate belief in the lie with loyalty to a social or political group, and belief in the truth with affiliation with some enemy. This goes hand in hand with "flooding the zone" with disinformation and fakes and wild stories until people's belief in the accessibility of objective truth is worn down and all facts become ideological statements. This does work, all too well, but it relies on more information, not less. (See Zeynep Tufekci's excellent Twitter and Tear Gas if you're unfamiliar with this analysis.) In that world, Dietz would have heard the official history, the true history, and all sorts of wild alternative histories, making correct belief a matter of political loyalty. There is no sign of that.

Hurley does gesture towards some technology to try to explain this surprising corporate effectiveness. All the soldiers have implants, and military censors can supposedly listen in at any time. But, in the story, this censorship is primarily aimed at grumbling and local disloyalty. There is no sign that it's being used to keep knowledge of significant facts from spreading, nor is there any sign of the same control among the general population. It's stated in the story that the censors can't even keep up with soldiers; one would have to get unlucky to be caught. And yet the corporation maintains preternatural information control.

The place this bugged me the most is around knowledge of the current date. For reasons that will be obvious in a moment, Dietz has reasons to badly want to know what month and year it is and is unable to find this information anywhere. This appears to be intentional; Tene-Silva has a good (albeit not that urgent) reason to keep soldiers from knowing the date. But I don't think Hurley realizes just how hard that is.

Take a look around the computer you're using to read this and think about how many places the date shows up. Apart from the ubiquitous clock and calendar app, there are dates on every file, dates on every news story, dates on search results, dates in instant messages, dates on email messages and voice mail... they're everywhere. And it's not just the computer. The soldiers can easily smuggle prohibited outside goods into the base; knowledge of the date would be much easier. And even if Dietz doesn't want to ask anyone, there are opportunities to go off base during missions. Somehow every newspaper and every news bulletin has its dates suppressed? It's not credible, and it threw me straight out of the story.

These world-building problems are unfortunate, since at the heart of The Light Brigade is a (spoiler alert) well-constructed time travel story that I would have otherwise enjoyed. Dietz is being tossed around in time with each jump. And, unlike some of these stories, Hurley does not take the escape hatch of alternate worlds or possible futures. There is a single coherent timeline that Dietz and the reader experience in one order and the rest of the world experiences in a different order.

The construction of this timeline is incredibly well-done. Time can only disconnect at jump and return points, and Hurley maintains tight control over the number of unresolved connections. At every point in the story, I could list all of the unresolved discontinuities and enjoy their complexity and implications without feeling overwhelmed by them. Dietz gains some foreknowledge, but in a way that's wildly erratic and hard to piece together fast enough for a single soldier to do anything about the plot. The world spins out of control with foreshadowing of grimmer and grimmer events, and then Hurley pulls it back together in a thoroughly satisfying interweaving of long-anticipated scenes and major surprises.

I'm not usually a fan of time travel stories, but this is one of the best I've read. It also has a satisfying emotional conclusion (albeit marred for me by some unbelievable mystical technobabble), which is impressive given how awful and nasty Hurley makes this world. Dietz is a great first-person narrator, believably naive and cynical by turns, and piecing together the story structure alongside the protagonist built my emotional attachment to Dietz's character arc. Hurley writes the emotional dynamics of soldiers thoughtfully and well: shit-talking, fights, sudden moments of connection, shared cynicism over degenerating conditions, and the underlying growth of squad loyalty that takes over other motivations and becomes the reason to keep on fighting.

Hurley also pulled off a neat homage to (and improvement on) Starship Troopers that caught me entirely by surprise and that I've hopefully not spoiled.

This is a solid science fiction novel if you can handle the world-building. I couldn't, but I understand why it was nominated for the Hugo and Clarke awards. Recommended if you're less picky about technological and social believability than I am, although content warning for a lot of bloody violence and death (including against children) and a horrifically depressing world.

Rating: 6 out of 10

Michael Prokop: Grml 2020.06 – Codename Ausgehfuahangl

Friday 3rd of July 2020 02:32:25 PM

We did it again™: at the end of June we released Grml 2020.06, codename Ausgehfuahangl. This Grml release (a Linux live system for system administrators) is based on Debian/testing (AKA bullseye) and provides current software packages as of June, incorporates up-to-date hardware support and fixes known issues from previous Grml releases.

I am especially fond of our cloud-init and qemu-guest-agent integration, which makes usage and automation in virtual environments like Proxmox VE much more comfortable.

Once the Qemu Guest Agent setting is enabled in the VM options (also see the Proxmox wiki), you’ll see IP address information in the VM summary:

Using a cloud-init drive allows using an SSH key for login as user "grml", and you can control network settings as well:

It was fun to focus and work on this new Grml release together with Darsha, and we hope you enjoy the new Grml release as much as we do!

Norbert Preining: KDE/Plasma Status Update 2020-07-04

Friday 3rd of July 2020 02:06:09 PM

Great timing for the 4th of July: here is another status update of KDE/Plasma for Debian. Short summary: everything is now available for Debian sid and testing, for both i386 and amd64 architectures!

(Update 2020-07-07: Plasma 5.19.3 is included!)

(Update 2020-07-15: Frameworks 5.72 and KDE Apps 20.04.3 are included!)

With Qt 5.14 arriving in Debian/testing, and some tweaks here and there, we finally have all the packages (2 additional deps, 82 frameworks, 47 Plasma, 216 Apps, 3 other apps) built on both Debian unstable and Debian testing, for both amd64 and i386 architectures. Again, big thanks to OBS!

Repositories:
For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Testing/ ./

As usual, don’t forget that you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
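That step looks roughly like this (the download URL below is a placeholder for wherever obs-npreining.asc is published):

# Placeholder URL; fetch obs-npreining.asc from its published location
sudo wget -O /etc/apt/trusted.gpg.d/obs-npreining.asc https://example.org/obs-npreining.asc
sudo apt update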

Enjoy.

Dirk Eddelbuettel: #28: Welcome RSPM and test-drive with Bionic and Focal

Friday 3rd of July 2020 01:05:00 PM

Welcome to the 28th post in the relatively random R recommendations series, or R4 for short. Our last post was a “double entry” in this R4 series and the newer T4 video series and covered a topic touched upon in this R4 series multiple times: easy binary install, especially on Ubuntu.

That post already previewed the newest kid on the block: RStudio’s RSPM, now formally announced. In the post we were only able to show Ubuntu 18.04 aka bionic. With the formal release of RSPM, support has been added for Ubuntu 20.04 aka focal—and we are happy to announce that of course we added a corresponding Rocker r-rspm container. So you can now take full advantage of RSPM either via docker pull rocker/r-rspm:18.04 or via docker pull rocker/r-rspm:20.04 covering the two most recent LTS releases.

RSPM is a nice accomplishment. Covering multiple Linux distributions is an excellent achievement. Allowing users to reason in terms of the CRAN packages (i.e. installing xml2, not r-cran-xml2) eases use. Doing it via the standard R command install.packages() (or wrappers around it like our install.r from the littler package) is very good too and an excellent technical achievement.
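Concretely, that means pointing repos at an RSPM binary endpoint; a sketch (the URL follows RSPM's documented pattern for focal, but treat it as illustrative):

# Illustrative RSPM binary repository URL for Ubuntu 20.04 (focal)
Rscript -e 'install.packages("xml2", repos = "https://packagemanager.rstudio.com/all/__linux__/focal/latest")'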

There is, as best as I can tell, only one shortcoming, along with one small bit of false advertising. The shortcoming is technical. By bringing the package installation into the user application domain, it is separated from the system and lacks integration with system libraries. What do I mean here? If you were to add R to a plain Ubuntu container, say 18.04 or 20.04, then added the few lines to support RSPM and install xml2, it would install. And fail. Why? Because the system library libxml2 does not get installed with the RSPM package—whereas the .deb from the distribution or PPAs does. So to help with some popular packages I added libxml2, libunits and a few more for geospatial work to the rocker/r-rspm containers. Being already present ensures packages xml2 and units can run immediately. Please file issue tickets at the Rocker repo if you come across other missing libraries we could preload. (A related minor nag is incomplete coverage. At least one of my CRAN packages does not (yet?) come as an RSPM binary. Then again, CRAN has 16k packages, and the RSPM coverage is much wider than the PPA one. But completeness would be neat. The final nag is lack of Debian support which seems, well, odd.)

So what about the small bit of false advertising? Well, it is claimed that RSPM makes installation “so much faster on Linux”. True, faster than the slowest possible installation from source. Also easier. But we have had numerous posts on this blog showing other speed gains: using ccache, and, of course, using binaries. And as the initial video mentioned above showed, installing from the PPAs is also faster than via RSPM. That is easy to replicate, as shown below. Just set up the rocker/r-ubuntu:20.04 (or 18.04) container alongside the rocker/r-rspm:20.04 (or also 18.04) container. And then time install.r rstan (or install.r tinyverse) in the RSPM one against apt -y update; apt install -y r-cran-rstan (or ... r-cran-tinyverse). In every case I tried, the installation using binaries from the PPA was still faster by a few seconds. Not that it matters greatly: both are very, very quick compared to source installation (as e.g. shown here in 2017 (!!)) but the standard Ubuntu .deb installation is simply faster than using RSPM. (Likely due to better CDN usage, so this may change over time. Neither method appears to do downloads in parallel, so there is scope for both to do better.)
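A sketch of that comparison (image tags as above; exact timings will vary with network and CDN state):

# RSPM binaries via install.r (littler) inside the r-rspm container
docker run --rm rocker/r-rspm:20.04 bash -c 'time install.r rstan'
# .deb binaries from the PPA inside the r-ubuntu container
docker run --rm rocker/r-ubuntu:20.04 bash -c 'apt -y update && time apt install -y r-cran-rstan'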

So in sum: Welcome to RSPM, and nice new tool—and feel free to “drive” it using rocker/r-rspm:18.04 or rocker/r-rspm:20.04.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible Builds (diffoscope): diffoscope 150 released

Friday 3rd of July 2020 12:00:00 AM

The diffoscope maintainers are pleased to announce the release of diffoscope version 150. This version includes the following changes:

[ Chris Lamb ]
* Don't crash when listing entries in archives if they don't have a listed size (such as hardlinks in .ISO files). (Closes: reproducible-builds/diffoscope#188)
* Dump PE32+ executables (including EFI applications) using objdump. (Closes: reproducible-builds/diffoscope#181)
* Tidy detection of JSON files due to missing call to File.recognizes that checks against the output of file(1) which was also causing us to attempt to parse almost every file using json.loads. (Whoops.)
* Drop accidentally-duplicated copy of the new --diff-mask tests.
* Logging improvements:
  - Split out formatting of class names into a common method.
  - Clarify that we are generating presenter formats in the opening logs.
[ Jean-Romain Garnier ]
* Remove objdump(1) offsets before instructions to reduce diff noise. (Closes: reproducible-builds/diffoscope!57)

You can find out more by visiting the project homepage.

Ben Hutchings: Debian LTS work, June 2020

Thursday 2nd of July 2020 07:25:52 PM

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and worked all 20 hours this month.

I sent a final request for testing for the next update to Linux 3.16 in jessie. I also prepared an update to Linux 4.9, included in both jessie and stretch. I completed backporting of kernel changes related to CVE-2020-0543, which was still under embargo, to Linux 3.16.

Finally I uploaded the updates for Linux 3.16 and 4.9, and issued DLA-2241 and DLA-2242.

The end of June marked the end of long-term support for Debian 8 "jessie" and for Linux 3.16. I am no longer maintaining any stable kernel branches, but will continue contributing to them as part of my work on Debian 9 "stretch" LTS and other Debian releases.

Mike Gabriel: My Work on Debian LTS (June 2020)

Thursday 2nd of July 2020 02:20:01 PM

In June 2020, I have worked on the Debian LTS project for 8 hours (of 8 hours planned).

LTS Work
  • frontdesk: CVE bug triaging for Debian jessie LTS: mailman, alpine, python3.4, redis, pound, pcre3, ngircd, mutt, lynis, libvncserver, cinder, bison, batik.
  • upload to jessie-security: libvncserver (DLA-2264-1 [1], 9 CVEs)
  • upload to jessie-security: mailman (DLA-2265-1 [2], 1 CVE)
  • upload to jessie-security: mutt (DLA-2268-1 [3] and DLA-2268-2 [4], 2 CVEs)
Other security related work for Debian
  • make sure all security fixes for php-horde-* are also in Debian unstable
  • upload freerdp2 2.1.2+dfsg-1 to unstable (9 CVEs)
References

Russell Coker: Desklab Portable USB-C Monitor

Thursday 2nd of July 2020 10:42:07 AM

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adaptor and for my first test ran it from my laptop; it was seen as a 1920*1080 DisplayPort monitor. The adaptor is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers. It can be a real pain getting a monitor and power cable to a rack-mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version: 4W at a nominal 5V works out to 0.8A, which is right in the middle of the range I measured. Otherwise the stated current requirements don’t come close to matching what I’ve measured.

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast, as expected). When I have a mains-powered USB charger (for a laptop, rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones: whether a phone will drive an external display depends on its configuration, and some phones may support an external display but not the touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly, the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro, so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (I’m not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape-layout monitor, which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface, so I had to find the icons for the programs I wanted to run in an unsorted list with no usable search (the search interface of the app list brings up the keyboard, which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout, which would make the keyboard take something like 25% of the screen, but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full-screen mode for an app which is really annoying to get out of (you have to use a drop-down from the top of the screen); full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky; imagine Windows 3.1 with a start button and task bar.

One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows on the Huawei GUI doesn’t seem easy (although I might be missing some things), and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale: 4K resolution for $399US and FullHD for $299US. I got the 4K version, which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable a mode where the phone screen acts as a touchpad with a mouse cursor on screen. I don’t know if all Android devices support that; it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even if it is in the laptop format; a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen; I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. The monitor is otherwise really good for watching video as it has the speakers in good locations for stereo sound, so it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you should search for reports on how it worked for others.


Emmanuel Kasper: Test a webcam from the command line on Linux with VLC

Thursday 2nd of July 2020 07:30:46 AM
Since this info was too well hidden on the internet, here it is: cvlc v4l2:///dev/video0 and there you go.
If you have multiple cameras connected, you can try /dev/video0 up to /dev/video5.
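If you want to script the discovery, a small sketch like the following (my own illustration, not from the original tip) lists the V4L2 device nodes and prints the matching cvlc invocation for each:

#!/usr/bin/env python3
# Illustrative helper: enumerate V4L2 device nodes and print the
# cvlc command line to test each one.
import glob

for dev in sorted(glob.glob('/dev/video*')):
    print('cvlc v4l2://{}'.format(dev))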

Evgeni Golov: Automatically renaming the default git branch to "devel"

Thursday 2nd of July 2020 07:12:21 AM

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one-line configuration change, while still working together with ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)
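For completeness, here is a Python sketch of that boring approach; it assumes direct shell access to a directory of bare repositories, and the /srv/git path is purely hypothetical:

#!/usr/bin/env python3
# Hypothetical sketch for the shell-access case: repoint HEAD of every
# bare repository under REPO_ROOT at refs/heads/devel.
import pathlib
import subprocess

REPO_ROOT = pathlib.Path('/srv/git')  # hypothetical location of the bare repos

for repo in sorted(REPO_ROOT.glob('*.git')):
    subprocess.run(['git', '-C', str(repo), 'symbolic-ref',
                    'HEAD', 'refs/heads/devel'], check=True)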

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody got time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3

import requests

BASE = 'https://api.github.com'
USER = 'evgeni'
TOKEN = 'abcdef'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    # fetch the SHA that the current master points at
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    # create a new devel ref pointing at the same commit
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    # make devel the default branch of the repository
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    # and delete the old master ref
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people could still pull and wouldn't notice that there are no changes anymore, as the default branch has moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3

import requests

BASE = 'https://api.github.com'
USER = 'evgeni'
TOKEN = 'abcd'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

for repo in repos_to_change:
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

More in Tux Machines

Programming Leftovers

  • Engineer Your Own Electronics With PCB Design Software

    A lot of self-styled geeks out there tend to like to customize their own programs, devices, and electronics. And for the true purists, that can mean building from the ground up (you know, like Superman actor Henry Cavill building a gaming PC to the delight of the entire internet). Building electronics from the ground up can mean a lot of different things: acquiring parts, sometimes from strange sources; a bit of elbow grease on the mechanical side of things; and today, even taking advantage of the 3D printing revolution that’s finally enabling people to manufacture customized objects in their home. Beyond all of these things though, engineering your own devices can also mean designing the underlying electronics, beginning with printed circuit boards, also known as PCBs.

    [...]

    On the other hand, there are also plenty of just-for-fun options to consider. For example, consider our past buyer’s guide to the best Linux laptop, in which we noted that you can always further customize your hardware. With knowledge of PCB design, that ability to customize even a great computer or computer setup is further enhanced. You might, for instance, learn how to craft PCBs and devices amounting to your own mouse, gaming keyboard, or homemade speakers, all of which can make your hardware more uniquely your own.

    All in all, PCB design is a very handy skill to have in 2020. It’s not typically necessary, in that there’s usually a device or some light customization that can give you whatever you want or need out of your electronics. But for “geeks” and tech enthusiasts, knowledge of PCB design adds another layer to the potential to customize hardware.

  • Programming pioneer Fran Allen dies aged 88 after a career of immense contributions to compilers

    Frances Allen, one of the leading computer scientists of her generation and a pioneer of women in tech, died last Tuesday, her 88th birthday. Allen is best known for her work on compiler organisation and optimisation algorithms. Together with renowned computer scientist John Cocke, she published a series of landmark papers in the late '60s and '70s that helped to lay the groundwork for modern programming. In recognition of her efforts, in 2006 Allen became the first woman to be awarded the ACM A.M. Turing Award, often called the Nobel Prize of computing.

  • Excellent Free Tutorials to Learn ECMAScript

    ECMAScript is an object‑oriented programming language for performing computations and manipulating computational objects within a host environment. The language was originally designed as a scripting language, but is now often used as a general purpose programming language. ECMAScript is best known as the language embedded in web browsers but has also been widely adopted for server and embedded applications.

  • Alexander Larsson: Compatibility in a sandboxed world

    Compatibility has always been a complex problem in the Linux world. With the advent of containers/sandboxing it has become even more complicated. Containers help solve compatibility problems, but there are still remaining issues, especially on the Linux desktop where things are highly interconnected. In fact, containers even create some problems that we didn’t use to have. Today I’ll take a look at the issues in more detail and give some ideas on how to best think of compatibility in this post-container world, focusing on desktop use with technologies like flatpak and snap.

    [...]

    Another type of compatibility is that of communication protocols. Two programs that talk to each other using a networking API (which could be on two different machines, or locally on the same machine) need to use a protocol to understand each other. Changes to this protocol need to be carefully considered to ensure they are compatible. In the remote case this is pretty obvious, as it is very hard to control what software two different machines use. However, even for local communication between processes care has to be taken. For example, a local service could be using a protocol that has several implementations and they all need to stay compatible. Sometimes local services are split into a service and a library and the compatibility guarantees are defined by the library rather than the service. Then we can achieve some level of compatibility by ensuring the library and the service are updated in lock-step. For example a distribution could ship them in the same package.

  • GXml-0.20 Released

    GXml is an Object Oriented implementation of DOM version 4, using GObject classes and written in Vala. It has a fast and robust serialization implementation from GObject to XML and back, with a high degree of control. After serialization, it provides a set of collections where you can get access to child nodes, using lists or hash tables. The new 0.20 release is the first step toward 1.0. It provides a cleaner API and removes old unmaintained implementations. GXml is the base of other projects depending on DOM4, like GSVG, an engine to read SVG documents based on its specification 1.0. GXml uses a method to set properties and fill declared containers for child nodes, accessing GObject internals directly, making it fast. A libxml-2.0 engine is used to read each node sequentially, but it is prepared for new engines to be implemented in the future.

  • Let Mom Help You With Object-Oriented Programming

    Mom is a shortcut for creating Moo classes (and roles). It allows you to define a Moo class with the brevity of Class::Tiny. (In fact, Mom is even briefer.) A simple example: Mom allows you to use Moo features beyond simply declaring Class::Tiny-like attributes though. You can choose whether attributes are read-only, read-write, or read-write-private, whether they're required or optional, specify type constraints, defaults, etc.

  • Perl Weekly Challenge 73: Min Sliding Window and Smallest Neighbor

    These are some answers to Week 73 of the Perl Weekly Challenge organized by Mohammad S. Anwar. Spoiler Alert: This weekly challenge deadline is due in a few days from now (on Aug. 16, 2020); this blog post offers some solutions to this challenge, so please don’t read on if you intend to complete the challenge on your own. (A language-agnostic sketch of the first task follows after this list.)

  • [rakulang] 2020.32 Survey, Please

    The TPF Marketing Committee wants to learn more about how you perceive “The Perl Foundation” itself, and asks you to fill in this survey (/r/rakulang, /r/perl comments). Thank you!
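As promised above, a hint for the sliding-window task, to be skipped if you want to avoid spoilers: assuming the first task asks for the minimum of each fixed-size window over an array of integers, the classic monotonic-deque approach looks roughly like this sketch in Python.

# Sketch of the standard monotonic-deque approach to a sliding-window
# minimum; assumes the task wants the minimum of each window of size k.
from collections import deque

def min_sliding_window(nums, k):
    dq = deque()   # indices of candidate minima, values non-decreasing
    result = []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] >= x:
            dq.pop()           # drop candidates dominated by x
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()       # drop indices that fell out of the window
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

print(min_sliding_window([1, 5, 0, 2, 9, 3], 3))  # [0, 0, 0, 2]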

Hardware With Linux Support: NUVIA and AMD Wraith Prism

  • Performance Delivered a New Way

    The server CPU has evolved at an incredible pace over the last two decades. Gone are the days of discrete CPUs, northbridges, southbridges, memory controllers, other external I/O and security chips. In today’s modern data center, the SoC (System On A Chip) does it all. It is the central point of coordination for virtually all workloads and the main hub where all the fixed-function accelerators connect, such as AI accelerators, GPUs, network interface controllers, storage devices, etc.

  • NUVIA Published New Details On Their Phoenix CPU, Talks Up Big Performance/Perf-Per-Watt

    Since leaving stealth last year and hiring some prominent Linux/open-source veterans to complement their ARM processor design experts, we have been quite eager to hear more about this latest start-up aiming to deliver compelling ARM server products. Today they shared some early details on their initial "Phoenix" processor that is coming within their "Orion" SoC. The first-generation Phoenix CPU is said to have a "complete overhaul" of the CPU pipeline and is a custom core based on the ARM architecture. They believe that Phoenix+Orion will be able to take on Intel/AMD x86_64 CPUs not only in raw performance but also in performance-per-Watt.

  • Take control of your AMD Wraith Prism RGB on Linux with Wraith Master

    Where the official vendor doesn't bother with supporting Linux properly, once again the community steps in to provide. If you want to tweak your AMD Wraith Prism lighting on Linux, check out Wraith Master. It's a similar project to CM-RGB that we previously highlighted. With the Wraith Master project, they provide a "feature-complete" UI and command-line app for controlling the fancy LED system on AMD's Wraith Prism cooler with eventual plans to support more.

The Massive Privacy Loopholes in School Laptops

It’s back-to-school time, and with so many school districts participating in distance learning, many if not most are relying on computers and technology more than ever before. Wealthier school districts are providing their students with laptops or tablets, but not all schools can afford to provide each student with a computer, which means that this summer parents are scrambling to find a device for their child to use for school.

Geoffrey Fowler wrote a guide in the Washington Post recently to aid parents in sourcing a computer or tablet for school. Given how rough kids can be with their things, many people are unlikely to give their child an expensive, premium laptop. The guide mostly focuses on incredibly low-cost, almost-disposable computers, so you won’t find a computer in the list that has what I consider a critical feature for privacy in the age of video conferencing: hardware kill switches. Often a guide like this would center on Chromebooks, as Google has invested a lot of resources to get low-cost Chromebooks into schools, yet I found Mr. Fowler’s guide particularly interesting because of his opinion on Chromebooks in education...

Also: Enabling Dark Mode on a Chromebook (Do not try this at home)

Christopher Arnold: The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor

Working at Mozilla has been a very educational experience over the past eight years. I have had the chance to work side-by-side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem. How did I go from being in front of a computer screen in 1995 to being behind the workings of the web now? Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy. It's amazing to think that a product I used 25 years ago ended up altering the course of my life so dramatically thereafter. But the world and the web were much different back then. And it was the course of thousands of people with similar stories, coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990’s with great fascination. My father was an engineer at International Business Machines when the Personal Computer movement was just getting started. His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well trodden path. He suggested I study many things, not just the things I felt most comfortable pursuing. He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop." He never made an effort to steer me in the direction of engineering. In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend. Networked documents is becoming an important new innovation." This was long before the World Wide Web became popular in the societal zeitgeist. His advice was prophetic for me.

[...]

The Mozilla Project grew inside AOL for a long while beside the AOL browser and Netscape browsers. But at some point the executive team believed that this needed to be streamlined. Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor came up with a suggestion that the Mozilla Project should be spun out of AOL. Doing this would allow all of the enterprises who had interest in working on open source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive. First, it would need to have all the patents that were in the Netscape portfolio to avoid hostile legal challenges from outside. Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations. Third, it would need protection from take-over bids that might come from AOL competitors. To achieve this, they decided Mozilla should be a non-profit foundation with the patent grants and trademark grants from AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser specifically for the developer ecosystem could transfer to working for Mozilla.

Mozilla left Netscape's crowdsourced web index (called DMOZ or open directory) with AOL. DMOZ went on to be the seed for the PageRank index of Google when Google decided to split out from powering the Yahoo search engine and seek its own independent course.
It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is well documented in the book The Search by John Battelle.

Once the Mozilla Foundation was established (along with a $2 million grant from AOL), they sought donations from other corporations who were to become dependent on the project. The team split out Netscape Communicator's email component as the Thunderbird email application, a stand-alone open source product, and the Phoenix browser was released to the public as "Firefox" because of a trademark issue with another US company over usage of the term "Phoenix" in association with software.

Google had by this time broken off from its dependence on Yahoo as a source of web traffic for its nascent advertising business. They offered to pay the Mozilla Foundation for search traffic that Mozilla could route to Google preferentially over Yahoo or the other search engines of the day. Taking "revenue share" from advertising was not something that the non-profit Mozilla Foundation was particularly well set up to do. So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with other public companies. The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit.

[...]

Working in the open was part of the original strategy AOL had when they open sourced Netscape. If they could get other companies to build together with them, the collaborative work of contributors outside the AOL payroll could contribute to the direct benefit of the browser team inside AOL.

Bugzilla was structured as a hierarchy of nodes, where a node owner could prioritize external contributions to the code base and commit them to be included in the derivative build, which would be scheduled to be released as a new update package every few months. Module Owners, as they were called, would evaluate candidate fixes or new features against their own list of items to triage in terms of product feature requests or complaints from their own team.

The main team that shipped each version was called Release Engineering. They cared less about the individual features being worked on than the overall function of the broader software package. So they would bundle up a version of the then-current software that they would call a Nightly build, as there were builds being assembled each day as new bugs were upleveled and committed to the software tree. Release Engineering would watch for conflicts between software patches and annotate them in Bugzilla so that the various module owners could look for conflicts that their code commits were causing in other portions of the code base.