Planet Debian - https://planet.debian.org/

Reproducible Builds (diffoscope): diffoscope 153 released

Friday 24th of July 2020 12:00:00 AM

The diffoscope maintainers are pleased to announce the release of diffoscope version 153. This version includes the following changes:

[ Chris Lamb ]
  • Drop some legacy argument styles: --exclude-directory-metadata and --no-exclude-directory-metadata have been replaced with --exclude-directory-metadata={yes,no}.
  • Code improvements:
    - Make it easier to navigate the main.py entry point.
    - Use a relative import for get_temporary_directory in diffoscope.diff.
    - Rename bail_if_non_existing to exit_if_paths_do_not_exist.
    - Rewrite exit_if_paths_do_not_exist to not check files multiple times.
  • Documentation improvements:
    - CONTRIBUTING.md:
      - Add a quick note about adding/suggesting new options.
      - Update and expand the release process documentation.
      - Add a reminder to regenerate debian/tests/control.
    - README.rst:
      - Correct URL to build job on Jenkins.
      - Clarify and correct contributing info to point to salsa.debian.org.

You can find out more by visiting the project homepage.

Sean Whitton: keyboarding updates

Thursday 23rd of July 2020 06:55:24 PM
Marks and mark rings in GNU Emacs

I recently attempted to answer the question of whether experienced Emacs users should consider partially or fully disabling Transient Mark mode, which is (and should be) the default in modern GNU Emacs.

That blog post was meant to be as information-dense as I could make it, but now I’d like to describe the experience I have been having after switching to my custom pseudo-Transient Mark mode, which is labelled “mitigation #2” in my older post.

In summary: I feel like I’ve uncovered a whole editing paradigm lying just beneath the surface of the editor I’ve already been using for years. That is cool and enjoyable in itself, but I think it’s also helped me understand other design decisions about the basics of the Emacs UI better than before – in particular, the ideas behind how Emacs chooses where to display buffers, which were very frustrating to me in the past. I am now regularly using relatively obscure commands like C-x 4 C-o. I see it! It all makes sense now!

I would encourage everyone who has never used Emacs without Transient Mark mode to try turning it off for a while, either fully or partially, just to see what you can learn. It’s fascinating how it can come to seem more convenient and natural to pop the mark just to go back to the end of the current line after fixing up something earlier in the line, even though doing so requires pressing two modified keys instead of just C-e.

Eshell

I was amused to learn some years ago that someone was trying to make Emacs work as an X11 window manager. I was amazed and impressed to learn, more recently, that the project is still going and a fair number of people are using it. Kudos! I suspect that the basic motivation for such projects is that Emacs is a virtual Lisp machine, and it has a certain way of managing visible windows, and people would like to be able to bring both of those to their X11 window management.

However, I am beginning to suspect that the intrinsic properties of Emacs buffers are tightly connected to the ways in which Emacs manages visible windows, and that those properties are at least as fundamental as its status as a virtual Lisp machine. Thus I am not convinced by the idea of trying to use Emacs’ ways of handling visible windows to handle windows which do not contain Emacs buffers (but it’s certainly nice to learn it’s working out for others).

The more general point is this. Emacs buffers are as fundamental to Emacs as anything else is, so it seems unlikely to be particularly fruitful to move something typically done outside of Emacs into Emacs, unless that activity fits naturally into an Emacs buffer or buffers. Being suited to run on a virtual Lisp machine is not enough.

What could be more suited to an Emacs buffer, however, than a typical Unix command shell session? By this I mean things like running commands which produce text output, and piping this output between commands and into and out of files. Typically the commands one enters are sort of like tiny programs in themselves, even if there are no pipes involved, because you have to spend time determining just what options to pass to achieve what you want. It is great to have all your input and output available as ordinary buffer text, navigable just like all your other Emacs buffers.

Full screen text user interfaces, like top(1), are not the sort of thing I have in mind here. These are suited to terminal emulators, and an Emacs buffer makes a poor terminal emulator – what you end up with is a sort of terminal emulator emulator. Emacs buffers and terminal emulators are just different things.

These sorts of thoughts lead one to Eshell, the Emacs Shell. Quoting from its documentation:

The shell’s role is to make [system] functionality accessible to the user in an unformed state. Very roughly, it associates kernel functionality with textual commands, allowing the user to interact with the operating system via linguistic constructs. Process invocation is perhaps the most significant form this takes, using the kernel’s `fork' and `exec' functions.

Emacs is … a user application, but it does make the functionality of the kernel accessible through an interpreted language – namely, Lisp. For that reason, there is little preventing Emacs from serving the same role as a modern shell. It too can manipulate the kernel in an unpredetermined way to cause system changes. All it’s missing is the shell-ish linguistic model.

Eshell has been working very well for me for the past month or so, for, at least, Debian packaging work, which is very command shell-oriented (think tools like dch(1)).

The other respects in which Eshell is tightly integrated with the rest of Emacs are icing on the cake. In particular, Eshell can transparently operate on remote hosts, using TRAMP. So when I need to execute commands on Debian’s ftp-master server to process package removal requests, I just cd /ssh:fasolo: in Eshell. Emacs takes care of disconnecting and connecting to the server when needed – there is no need to maintain a fragile SSH connection and a shell process (or anything else) running on the remote end.

Or I can cd /ssh:athena\|sudo:root@athena: to run commands as root on the webserver hosting this blog, and, again, the text of the session survives on my laptop, and may be continued at my leisure, no matter whether athena reboots, or I shut my laptop and open it up again the next morning. And of course you can easily edit files on the remote host.

Sean Whitton: Kinesis Advantage 2 for heavy Emacs users

Thursday 23rd of July 2020 04:44:27 PM

A little under two months ago I invested in an expensive ergonomic keyboard, a Kinesis Advantage 2, and set about figuring out how to use it most effectively with Emacs. The default layout for the keyboard is great for strong typists who control their computer mostly with their mouse, but less good for Emacs users, who are strong typists that control their computer mostly with their keyboard.

It took me several tries to figure out where to put the ctrl, alt, backspace, delete, return and spacebar keys, and aside from one forum post I ran into, I haven’t found anyone online who came up with anything much like what I’ve come up with, so I thought I should probably write up a blog post.

The mappings
  • The pairs of arrow keys under the first two fingers of each hand become ctrl and alt/meta keys. This way there is a ctrl and alt/meta key for each hand, to reduce the need for one-handed chording.

    I bought the keyboard expecting to have all modifier keys on my thumbs. However, (i) only the two large thumb keys can be pressed without lifting your hand away from the home row, or stretching in a way that’s not healthy; and (ii) only the outermost large thumb key can be comfortably held down as a modifier.

    It takes a little work to get used to using the third and fifth fingers of one hand to hold down both alt/meta and shift, for typing core Emacs commands like M-^ and M-@, but it does become natural to do so.

  • The arrow keys are moved to the four ctrl/alt/super keys which run along the top of the thumb key areas.

  • The outermost large thumb key of each hand becomes a spacebar. This means it is easy to type C-u C-SPC with the right hand while the left hand holds down control, and sequences like C-x C-SPC and C-a C-SPC C-e with the left hand with the right hand holding down control.

    It took me a while to realise that it is not wasteful to have two spacebars.

  • The inner large thumb keys become backspace and return.

  • The international key becomes delete.

Rarely needed by Emacs users, as we have C-d, so initially I just had no delete key, but I soon came to regret this when trying to edit text in web forms.

  • Caps Lock becomes Super, but remains caps lock on the keypad layer.

    See my rebindings for ordinary keyboards for some discussion of having just a single Super key.

Sequences of two modified keys on different halves of the keyboard

It is desirable to input sequences like C-x C-o without switching which hand is holding the control key. This requires one-handed chording, but this is treacherous when the modifier keys are not under the thumbs, because you might need to press the modified key with the same finger that’s holding the modifier!

Fortunately, most or all sequences of two keys modified by ctrl or alt/meta, where each of the two modifier keys is typed by a different hand, begin with C-c, C-x or M-g, and the left hand can handle each of these on its own. This leaves the right hand completely free to hit the second modified key while the left hand continues to hold down the modifier.

My rebindings for ordinary keyboards

I have some rebindings to make Emacs usage more ergonomic on an ordinary keyboard. So far, my Kinesis Advantage setup is close enough to that setup that I’m not having difficulty switching back and forth from my laptop keyboard.

The main difference is for sequences of two modified keys on different halves of the keyboard – which of the two modified keys is easiest to type as a one-handed chord is different on the Kinesis Advantage than on my laptop keyboard. At this point, I’m executing these sequences without any special thought, and they’re rare enough that I don’t think I need to try to determine what would be the most ergonomic way to handle them.

Dima Kogan: Finding long runs of "notable" data in a log

Thursday 23rd of July 2020 02:19:00 PM

Here's yet another instance where the data processing I needed done could be accomplished entirely in the shell, with vnlog tools.

I have some time-series data in a text table. Via some join and filter operations, I have boiled down this table to a sequence of time indices where something interesting happened. For instance let's say it looks like this:

t.vnl:

# time
1976
1977
1978
1979
1980
1986
1987
1988
1989
2011
2012
2013
2014
2015
4679
4680
4681
4682
4683
4684
4685
4686
4687
7281
7282
7283
7291
7292
7293

I'd like to find the longest contiguous chunk of time where the interesting thing kept happening. How? Like this!

$ < t.vnl vnl-filter -p 'time,d=diff(time)' |
    vnl-uniq -c -f -1                       |
    vnl-filter 'd==1' -p 'count=count+1,time=time-1' |
    vnl-sort -nrk count                     |
    vnl-align

# count time
9 4679
5 2011
5 1976
4 1986
3 7291
3 7281

Bam! So the longest run was 9-frames-long, starting at time = 4679.
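For comparison, the same answer can be pulled out of the time column with plain awk; this is just a sketch to make the logic explicit, not the vnlog way:

# Track the current run of consecutive times and remember the longest one.
grep -v '^#' t.vnl | awk '
  $1 != prev + 1 { run = 0; start = $1 }   # run broken (or first line): restart here
  { run++; prev = $1
    if (run > best) { best = run; beststart = start } }
  END { print best, beststart }'           # prints: 9 4679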

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2020

Thursday 23rd of July 2020 02:10:32 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 202.00 work hours have been dispatched among 12 paid contributors. Their reports are available.

Evolution of the situation

June was the last month of Jessie LTS which ended on 2020-06-20. If you still need to run Jessie somewhere, please read the post about keeping Debian 8 Jessie alive for longer than 5 years.
So, as (Jessie) LTS is dead, long live the new LTS, Stretch LTS! Stretch has received its last point release, so regular LTS operations can now continue.
Accompanying this, for the first time, we have prepared a small survey about our users and contributors, who they are and why they are using LTS. Filling out the survey should take less than 10 minutes. We would really appreciate it if you could participate in the survey online! On July 27th 2020 we will close the survey, so please don’t hesitate to participate now! After that, there will be a follow-up with the results.

The security tracker for Stretch LTS currently lists 29 packages with a known CVE and the dla-needed.txt file has 44 packages needing an update in Stretch LTS.

Thanks to our sponsors

New sponsors are in bold.

We welcome CoreFiling this month!


Enrico Zini: Build Qt5 cross-builder with raspbian sysroot: compiling with the sysroot (continued)

Thursday 23rd of July 2020 12:51:11 PM

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The previous rounds of attempts ended in one issue too many to investigate in the allocated hourly budget.

Andreas Gruber wrote:

Long story short, a fast solution for the issue with EGLSetBlobFuncANDROID is to remove libraspberrypi-dev from your sysroot and do a full rebuild. There will be some changes to the configure results, so please review them - if they are relevant for you - before proceeding with your work.

That got me unstuck! dpkg --purge libraspberrypi-dev in the sysroot, and we're back in the game.

While Qt5's build has proven extremely fragile, I was surprised that Raspberry Pi's customizations hadn't yet broken something. In the end, they didn't disappoint.

More i386 issues

The run now stops with a new 32bit issue related to v8 snapshots:

qt-everywhere-src-5.15.0/qtwebengine/src/core/release$ /usr/bin/g++ -pie -Wl,--fatal-warnings -Wl,--build-id=sha1 -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,-z,defs -Wl,--as-needed -m32 -pie -Wl,--disable-new-dtags -Wl,-O2 -Wl,--gc-sections -o "v8_snapshot/mksnapshot" -Wl,--start-group @"v8_snapshot/mksnapshot.rsp" -Wl,--end-group -ldl -lpthread -lrt -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.so when searching for -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.a when searching for -lz
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status

Attempted solution: apt install zlib1g-dev:i386.

Alternative solution (untried): configure Qt5 with -no-webengine-v8-snapshot.

It builds!

Installation paths

Now it tries to install files into debian/tmp/home/build/sysroot/opt/qt5custom-armhf/.

I realise that I now need to package the sysroot itself, both as a build-dependency of the Qt5 cross-compiler, and as a runtime dependency of the built cross-builder.

Conclusion

The current work in progress, patches, and all, is at https://github.com/Truelite/qt5custom/tree/master/debian-cross-qtwebengine

It blows my mind how ridiculously broken the Qt5 cross-compiler build is, for a use case that, looking at how many people are trying, seems to be one of the main ones for the cross-builder.

Junichi Uekawa: Joys of sshfs slave mode.

Wednesday 22nd of July 2020 11:59:27 PM
Joys of sshfs slave mode. When I want to have parts of my source tree on a remote machine, I use sshfs slave mode; combined with Emacs TRAMP, things look very much integrated. The sshfs interface only has the obnoxious -o slave option, which makes it talk over stdin/stdout, and those need to be connected to sftp-server on the local host. Using dpipe from vde2 seems to be a popular method to run the tool. Something like:

dpipe /usr/lib/openssh/sftp-server = ssh hostname sshfs :/directory/to/be/shared ~/mnt/src -o slave

I wish I could limit the visibility from sftp-server, but maybe that's okay.

Bits from Debian: Let's celebrate DebianDay 2020 around the world

Wednesday 22nd of July 2020 01:30:00 PM

We encourage our community to celebrate around the world the 27th Debian anniversary with organized DebianDay events. This year due to the COVID-19 pandemic we cannot organize in-person events, so we ask instead that contributors, developers, teams, groups, maintainers, and users promote The Debian Project and Debian activities online on August 16th (and/or 15th).

Communities can organize a full schedule of online activities throughout the day. These activities can include talks, workshops, active participation with contributions such as translations assistance or editing, debates, BoFs, and all of this in your local language using tools such as Jitsi for capturing audio and video from presenters for later streaming to YouTube.

If you are not aware of any local community organizing a full event or you don't want to join one, you can solo design your own activity using OBS and stream it to YouTube. You can watch an OBS tutorial here.

Don't forget to record your activity; it would be a nice idea to upload it to PeerTube later.

Please add your event/activity on the DebianDay wiki page and let us know about it so we can advertise it on Debian micronews. To share it, you have several options:

  • Follow the steps listed here for Debian Developers.
  • Contact us using IRC in channel #debian-publicity on the OFTC network, and ask us there.
  • Send a mail to debian-publicity@lists.debian.org and ask for your item to be included in micronews. This is a publicly archived list.

PS: DebConf20 online is coming! It will be held from August 23rd to 29th, 2020. Registration is already open.

Enrico Zini: nc | sudo

Wednesday 22nd of July 2020 09:29:41 AM

Question: what does this command do?

# Don't do this
nc localhost 12345 | sudo tar xf -

Answer: it sends the password typed into sudo to the other endpoint of netcat.

I can reproduce this with both nc.traditional and nc.openbsd.

One might be tempted to just put sudo in front of everything, but it'll mean that only nc will run as root:

# This is probably not what you want
sudo nc localhost 12345 | tar xf -

The fix that I will never remember, thanks to twb on IRC, is to close nc's stdin:

<&- nc localhost 12345 | sudo tar xf -

Or flip the table and just use sudo -s:

$ sudo -s
# nc localhost 12345 | tar xf -

Updates

Harald Koenig suggested two alternative spellings that might be easier to remember:

nc localhost 12345 < /dev/null | sudo tar xf -

< /dev/null nc localhost 12345 | sudo tar xf -

And thinking along those lines, there could also be the disappointed face variant:

:| nc localhost 12345 | sudo tar xf -

Matthias Urlichs suggested the approach of precaching sudo's credentials, making the rest of the command lines more straightforward (and TIL: sudo id):

sudo id
nc localhost 12345 | sudo tar xf -

Or even better:

sudo id && nc localhost 12345 | sudo tar xf -

Shortcomings of nc | tar

Tomas Janousek commented:

There's one more problem with a plain tar | nc | tar without redirection or extra parameters: it doesn't know when to stop. So the best way to use it, I believe, is:

tar c | nc -N

nc -d | tar x

The -N option terminates the sending end of the connection, and the -d option tells the receiving netcat to never read any input. These two parameters, I hope, should also fix your sudo/password problem.

Hope it helps!
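Putting that comment together with the original example, the pair of commands would look something like this with the OpenBSD netcat (untested here; "somedir" stands in for whatever you are archiving):

# receiving end: -d means nc never reads stdin, so sudo's password prompt is safe
nc -d -l 12345 | sudo tar xf -

# sending end: -N shuts the connection down once tar's output ends
tar c somedir | nc -N localhost 12345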

Andrew Cater: How to use jigdo to download media images

Tuesday 21st of July 2020 09:42:26 PM
I worked with the CD release team over the weekend for the final release of Debian Stretch. One problem: we have media images which we cannot test because the team does not have the hardware. I asked questions on the debian-cd mailing list about the future of these and various other .iso images.

Could we replace some DVDs and larger files with smaller jigdo files so that people can download files to build the DVD locally?

People asked me:
  • How do you actually use jigdo to produce a usable media image? 
  • What advantages does jigdo bring over just downloading a large .iso image?
Why jigdo?
  • Downloading large files on a slow or lossy link is difficult.
  • Downloading large (several GB) files via http can be unreliable.
  • Jigdo can be quicker than trying to download one large file and failing.
  • There are few CD mirrors worldwide: jigdo can use a local Debian mirror.
  • The transport mechanism is http - no need for a particular port to be opened.
Using jigdo

Jigdo uses information from a template file to reconstruct an .iso file by downloading Debian packages from a mirror. The image is checksummed and verified at the end of the complete download. If the download is interrupted, you can import the previously downloaded part of the file.

It's a command line application - the GUI never really happened - but it is fairly easy to use. Run apt install jigdo-file, then find the .jigdo and .template files that you need for the image from a CD mirror: https://cdimage.debian.org/debian-cd/current/amd64/jigdo-cd/

To build the netinst CD for AMD64, for example, you need the .jigdo file as a minimum: debian-10.4.0-amd64-netinst.jigdo
If you only have this file, jigdo-lite will download the template later, but you can save the template in the same directory and save time. The .jigdo file is only 25k or so and the template is 4.6M, rather than 336M for the full image. I copied them into my home directory to build there. The process does not need root permissions.

Run the command jigdo-lite. This prompts you for a .jigdo file to use. By default, this uses http to fetch the file from a distant webserver.
(If the files are local, you can use the file:/// syntax. For example: file:///home/amacater/debian-10.4.0-amd64-netinst.jigdo)

jigdo-lite then reads the .jigdo file and outputs some information about the .iso. It offers the chance to reload any failed download, then prompts for a mirror name. The download pulls in small numbers of files at a time, saves them to a temporary directory and will checksum the eventual .iso file.
This will work for any larger file, including the 16GB .iso distributed only as a .jigdo.
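Putting the download steps together, a minimal session looks roughly like this; the .template URL is an assumption based on it sitting next to the .jigdo file on the CD mirror:

apt install jigdo-file    # as root, once
wget https://cdimage.debian.org/debian-cd/current/amd64/jigdo-cd/debian-10.4.0-amd64-netinst.jigdo
wget https://cdimage.debian.org/debian-cd/current/amd64/jigdo-cd/debian-10.4.0-amd64-netinst.template
jigdo-lite debian-10.4.0-amd64-netinst.jigdo    # answer the prompts, point it at a nearby Debian mirror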
For i386 and AMD64, the images are bootable when copied to a USB stick. Use dd to write them and verify the copy.
  • Plug in a USB that can be overwritten.
  • Use dmesg as root to work out which device this is.
  • Change to the directory in which you have your .iso image.
  • Write the image to the stick in 4M blocks and display progress with the syntax of the command below (all one line if wrapped).

dd if=debian-10.4.0-amd64-netinst.iso of=/dev/sdX obs=4M oflag=sync status=progress
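The verification step is left to the reader above; one way to do it, assuming the same /dev/sdX device, is to compare the stick against the image, reading back only as many bytes as the image contains:

# Compare the first <image size> bytes of the stick with the .iso
sudo cmp -n "$(stat -c %s debian-10.4.0-amd64-netinst.iso)" debian-10.4.0-amd64-netinst.iso /dev/sdX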




Jonathan Dowland: FlashFloppy OLED display

Tuesday 21st of July 2020 02:34:35 PM

This is the tenth part in a series of blog posts. The previous post was Amiga floppy recovery project: what next?. The whole series is available here: Amiga.

Rotary encoder, OLED display and mount

I haven't made any substantive progress on my Amiga floppy recovery project for a while, but I felt like some retail therapy a few days ago so I bought a rotary encoder and OLED display for the Gotek floppy disk emulator along with a 3D-printed mount for them. I'm pleased with the results! The rather undescriptive "DSKA0001" in the picture is a result of my floppy image naming scheme: the display is capable of much more useful labels such as "Lemmings", "Deluxe Paint IV", etc.

The Gotek and all the new bits can now be moved inside the Amiga A500's chassis.

Bits from Debian: New Debian Developers and Maintainers (May and June 2020)

Tuesday 21st of July 2020 02:00:00 PM

The following contributors got their Debian Developer accounts in the last two months:

  • Richard Laager (rlaager)
  • Thiago Andrade Marques (andrade)
  • Vincent Prat (vivi)
  • Michael Robin Crusoe (crusoe)
  • Jordan Justen (jljusten)
  • Anuradha Weeraman (anuradha)
  • Bernelle Verster (indiebio)
  • Gabriel F. T. Gomes (gabriel)
  • Kurt Kremitzki (kkremitzki)
  • Nicolas Mora (babelouest)
  • Birger Schacht (birger)
  • Sudip Mukherjee (sudip)

The following contributors were added as Debian Maintainers in the last two months:

  • Marco Trevisan
  • Dennis Braun
  • Stephane Neveu
  • Seunghun Han
  • Alexander Johan Georg Kjäll
  • Friedrich Beckmann
  • Diego M. Rodriguez
  • Nilesh Patra
  • Hiroshi Yokota

Congratulations!

Enrico Zini: Screen ghosts

Tuesday 21st of July 2020 09:39:03 AM

I noticed an odd effect, that reminds me of screen ghosting on old CRT monitors, when my laptop screen is locked:

Those faint white vertical lines one can see are actually window borders, and the lock screen is leaking the contents of my unlocked desktop! Here is the same screen, unlocked:

However, moving the windows around is not reflected in the ghost image on top of the lock screen: reshuffling windows and then locking always produces the same ghost image. The white border reflects where the window has been at some time in the past.

The lock screen does not seem to be responsible, either: dragging a solid colored window on the laptop screen has the same effect:

But taking a screenshot of it does not show the time traveling ghost windows:

This is happening on two different laptops, an HP EliteBook x360 G1 and a Lenovo ThinkPad X240, one that I've been using for 3 years and one that I've been using for a week; the only things they have in common are a 1920x1080 IPS screen and an Intel GPU.

I have no idea where to start debugging this. Please reach out to me at enrico@debian.org if any of this makes sense to you.

Update: Jim Paris pointed me to https://en.wikipedia.org/wiki/Image_persistence which looks pretty much like what is happening here.

Jim Paris also pointed out that a black background doesn't show the ghosting.

I changed the lock screen background by editing /etc/lightdm/lightdm-gtk-greeter.conf and adding background=#000000 to the [greeter] section, to limit information leakage through ghosting in the lock screen.

Evgeni Golov: Building and publishing documentation for Ansible Collections

Monday 20th of July 2020 07:17:16 PM

I had a draft of this article for about two months, but never really managed to polish and finalize it, partially due to some nasty hacks needed down the road. Thankfully, one of my wishes was heard and I now had the chance to revisit the post and try a few things out. Sadly, my wish was granted only partially and the result is still not beautiful, but read for yourself ;-)

UPDATE: I've published a follow up post on building documentation for Ansible Collections using antsibull, as my wish was now fully granted.

As part of my day job, I am maintaining the Foreman Ansible Modules - a collection of modules to interact with Foreman and its plugins (most notably Katello). We've been maintaining this collection (as in set of modules) since 2017, so much longer than collections (as in Ansible Collections) existed, but the introduction of Ansible Collections allowed us to provide a much easier and supported way to distribute the modules to our users.

Now users usually want two things: features and documentation. Features are easy, we already have plenty of them. But documentation was a bit cumbersome: we had documentation inside the modules, so you could read it via ansible-doc on the command line if you had the collection installed, but we wanted to provide online readable and versioned documentation too - something the users are used to from the official Ansible documentation.

Building HTML from Ansible modules

Ansible modules contain documentation in the form of YAML blocks documenting the parameters, examples and return values of the module. The Ansible documentation site is built using Sphinx from reStructuredText. As the modules don't contain reStructuredText, Ansible has had a tool to generate it from the documentation YAML: build-ansible.py document-plugins. The tool and the accompanying libraries are not part of the Ansible distribution - they just live in the hacking directory. To run them we need a git checkout of Ansible and to source hacking/env-setup to set PYTHONPATH and a few other variables correctly for Ansible to run directly from that checkout.

It would be nice if that were a feature of ansible-doc, but while it isn't, we need to have a full Ansible git checkout to be able to continue. The tool has recently been split out into its own repository/distribution: antsibull. However, it was also redesigned a bit to be easier to use (good!), and my hack to abuse it to build documentation for out-of-tree modules doesn't work anymore (bad!). There is an issue open for collections support, so I hope to be able to switch to antsibull soon.

Anyways, back to the original hack.

As we're using documentation fragments, we need to tell the tool to look for these, because otherwise we'd get errors about not found fragments. We're passing ANSIBLE_COLLECTIONS_PATHS so that the tool can find the correct, namespaced documentation fragments there. We also need to provide --module-dir pointing at the actual modules we want to build documentation for.

ANSIBLEGIT=/path/to/ansible.git
source ${ANSIBLEGIT}/hacking/env-setup
ANSIBLE_COLLECTIONS_PATHS=../build/collections \
  python3 ${ANSIBLEGIT}/hacking/build-ansible.py document-plugins \
  --module-dir ../plugins/modules \
  --template-dir ./_templates \
  --template-dir ${ANSIBLEGIT}/docs/templates \
  --type rst \
  --output-dir ./modules/

Ideally, when antsibull supports collections, this will become antsibull-docs collection … without any need to have an Ansible checkout, sourcing env-setup or pass tons of paths.

Until then we have a Makefile that clones Ansible, runs the above command and then calls Sphinx (which provides a nice Makefile for building) to generate HTML from the reStructuredText.
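For reference, the Sphinx step at the end is the standard one; assuming the Sphinx sources live in docs/ (the output matching the docs/_build/html path the workflow below expects, though the exact paths in our Makefile may differ), it boils down to something like:

make -C docs html
# or, equivalently, without the generated Makefile:
sphinx-build -b html docs docs/_build/html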

You can find our slightly modified templates and themes in our git repository in the docs directory.

Publishing HTML documentation for Ansible Modules

Now that we have a way to build the documentation, let's also automate publishing, because nothing is worse than out-of-date documentation!

We're using GitHub and GitHub Actions for that, but you can achieve the same with GitLab, TravisCI or Jenkins.

First, we need a trigger. As we want always up-to-date documentation for the main branch where all the development happens and also documentation for all stable releases that are tagged (we use vX.Y.Z for the tags), we can do something like this:

on:
  push:
    tags:
      - v[0-9]+.[0-9]+.[0-9]+
    branches:
      - master

Now that we have a trigger, we define the job steps that get executed:

steps:
  - name: Check out the code
    uses: actions/checkout@v2
  - name: Set up Python
    uses: actions/setup-python@v2
    with:
      python-version: "3.7"
  - name: Install dependencies
    run: make doc-setup
  - name: Build docs
    run: make doc

At this point we will have the docs built by make doc in the docs/_build/html directory, but not published anywhere yet.

As we're using GitHub anyways, we can also use GitHub Pages to host the result.

- uses: actions/checkout@v2
- name: configure git
  run: |
    git config user.name "${GITHUB_ACTOR}"
    git config user.email "${GITHUB_ACTOR}@bots.github.com"
    git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*
- name: Set up Python
  uses: actions/setup-python@v2
  with:
    python-version: "3.7"
- name: Install dependencies
  run: make doc-setup
- name: Build docs
  run: make doc
- name: commit docs
  run: |
    git checkout gh-pages
    rm -rf $(basename ${GITHUB_REF})
    mv docs/_build/html $(basename ${GITHUB_REF})
    dirname */index.html | sort --version-sort | xargs -I@@ -n1 echo '<div><a href="@@/"><p>@@</p></a></div>' >> index.html
    git add $(basename ${GITHUB_REF}) index.html
    git commit -m "update docs for $(basename ${GITHUB_REF})" || true
- name: push docs
  run: git push origin gh-pages

As this is not exactly self explanatory:

  1. Configure git to have a proper author name and email, as otherwise you get ugly history and maybe even failing commits
  2. Fetch all branch names, as the checkout action by default doesn't do this.
  3. Setup Python, Sphinx, Ansible etc.
  4. Build the documentation as described above.
  5. Switch to the gh-pages branch from the commit that triggered the workflow.
  6. Remove any existing documentation for this tag/branch ($GITHUB_REF contains the name which triggered the workflow) if it exists already.
  7. Move the previously built documentation from the Sphinx output directory to a directory named after the current target.
  8. Generate a simple index of all available documentation versions.
  9. Commit all changes, but don't fail if there is nothing to commit.
  10. Push to the gh-pages branch which will trigger a GitHub Pages deployment.

Pretty sure this won't win any beauty contest for scripting and automation, but it gets the job done and nobody on the team has to remember to update the documentation anymore.

You can see the results on theforeman.org or directly on GitHub.

Steinar H. Gunderson: Reverse-engineering the FIRST marathon program

Monday 20th of July 2020 07:05:00 PM

Last year, I ran my first marathon ever (at 3:07:52, in the fairly hilly Oslo course), using the FIRST marathon program (which, despite the name, is not necessarily meant for beginners). This year, as the Covid-19 lockdowns started, I decided to go for it again using the same program, but there was one annoyance: I wanted to change target times as it became obvious my initial target was too easy, but there's no way to calculate the paces electronically.

FIRST comes in the form of a book; you can find an older version of the 10K and marathon programs if you search a bit online, but fundamentally, the way it works is that you declare a 5K personal best (whether true or not), look up a bunch of tempos in a table in the book from that, and then use that to get three runs every week. (You also do cross-training and strength training, or at least that's the idea.) For instance, the book might say that this week's track intervals are 6x 800m, so you go look up your 800m interval times in the table. If you have a 5K PB of 19:30, the book might say that 800m interval times are 2:52 (3:35/km), so off you go running.

The tables are never explained, and they don't match up with the formulas that were published in the earlier versions. There's at least one running calculator that can derive FIRST paces, but it defaults to miles and has a different calculation for marathon pace (which sometimes creates absurd situations like “long tempo” being slower than “marathon pace”), so I really wanted to just get the formulas to input into my own spreadsheets.

Enter regression. I just typed in a bunch of the tables, graphed them, saw that everything was on a dead straight line (R=1.00 for linear regression) and got the constants from there. So without further ado:

If you can run 5K at x seconds per kilometer, the Holy Gospel of FIRST declares that you can run 42.195K at 1.15313x seconds per kilometer. (I am sure there are more sophisticated models, but perhaps this is good enough?) Incidentally or not, this means an 18:30 5K becomes nearly exactly three hours for a marathon (only two seconds away). (I didn't bother with the 10K and half-marathon estimation paces; there are so many numbers to input.)

The tempo run paces are even simpler. Take your 5K pace, and add 10 sec/km, and that's short tempo (ST). Medium tempo (MT) is 5K + 20 sec/km. Long tempo (LT) is 5K + 29 sec/km.

That leaves only the track repeats. For this, first take the 5K pace and multiply by 1.00579, leaving what I will call the “reference pace” (RP). I don't know if this constant carries any particular meaning, and obviously, it's nowhere in the book; it's just the slope of the regression. 400m time is 400m at RP, minus 10 seconds. (That is 10 seconds absolute time, not 10 seconds/km. So if you have an 18:30 5K PB, you'll have an 18:36 5K at RP, which is 1:29 400m at RP, which then gives a 1:19 400m.)

Similarly: 600m is -13 seconds, 800m is -16 seconds, 1000m is -18 seconds, 1200m is also -18 seconds, 1600m is -16 seconds, and 2000m (which is specified, but seemingly never used in any of the programs) is -15 seconds. You can see two effects going against each other here; longer intervals mean more seconds to shave off for a given pace, but they also give lower pace, and thus the U-like shape.
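As a sanity check, here is a small shell helper that just applies the constants above; the function name and output format are mine, not from the book:

# Derive FIRST-style paces from a 5K personal best (MM:SS), using the constants above.
first_paces() {
  awk -v pb="$1" 'BEGIN {
    split(pb, t, ":"); pace = (t[1] * 60 + t[2]) / 5   # 5K pace in s/km
    mp = pace * 1.15313; total = mp * 42.195           # marathon pace and finish time
    printf "marathon:  %.0f s/km (finish ~%d:%02d:%02d)\n", mp, int(total/3600), int(total/60)%60, int(total)%60
    printf "tempo:     ST %.0f  MT %.0f  LT %.0f s/km\n", pace+10, pace+20, pace+29
    rp = pace * 1.00579                                # "reference pace" in s/km
    printf "intervals: 400m %.0fs  800m %.0fs  1200m %.0fs  1600m %.0fs\n",
           rp*0.4-10, rp*0.8-16, rp*1.2-18, rp*1.6-16
  }'
}

first_paces 18:30   # marathon comes out at roughly three hours, 400m at about 1:19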

And that's all there is to it. Happy running, and may there be a good race close to you!

Dominique Dumont: Security gotcha with log collection on Azure Kubernetes cluster.

Monday 20th of July 2020 04:26:22 PM

Azure Kubernetes Service provides a nice way to set up a Kubernetes cluster in the cloud. It’s quite practical, as AKS is set up by default with a rich monitoring and reporting environment. By default, all container logs are collected, and CPU and disk data are gathered.

Shirish Agarwal: Hearing loss, pandemic, lockdown

Monday 20th of July 2020 02:26:52 AM

Sorry for not being on the blog for some time; the last few months have been brutal. While I am externally ok, during the lockdown I sensed major hearing loss. At first I thought it might be a hallucination or something, but as it persisted for days, I got myself checked and found out that I have 80% hearing loss in my right ear. How and why, I don’t know. Whether this is NIHL or some other kind of hearing loss is yet to be ascertained. I do live on what is, and used to be, one of the busiest roads in the city, though for the last few months not so much. On top of it, you have various other noises.

Tinnitus

I also experienced tinnitus, which again I perceived to be a hallucination but found it’s not. I have no clue if my epilepsy has anything to do with the hearing loss or whether the two are unrelated. I did discover that while today we know that something like tinnitus exists, just 10-15 years back people might have mistaken it for madness. In a way it is madness, because you are constantly hearing sound, music etc. 24×7; that is enough to drive anybody mad.

During this brief period, I did learn what an otoscope is. I did get audiometry tests done but need to get at least a second, or if possible also a third, opinion, but those will have to wait as the audio clinics are about 8-10 km away. The open-close-open-close environment just makes it impossible to figure out a time and date and get it done. After that is done I will probably get a hearing device, probably a Siemens Signia hearing aid. Hearing aids are damn expensive, almost 50k per piece, and they probably have a lifetime of about 5-6 years, so it’s a bit of an expensive proposition. I also need a second and/or third opinion on the audiometry profile so I know things are correct. All of these things are gonna take time.

Pandemic Situation in India and Testing

Coincidentally, I was talking to a couple of people about this. It is sad to see that we have the third highest number of covid cases while doing 1/10th the tests vis-a-vis the U.S.A. According to the statistics site ourworldindata, we seem to be testing 0.22 per thousand people compared to 2.28 per thousand done by the United States. Sadly it doesn’t give the breakup of the tests; from what I read the PCR tests are better than the antibody tests, and a primer shares the difference between the two. IIRC, the antibody tests are far cheaper than the swab tests, but swab tests are far more accurate as they look for the virus’s genetic material (RNA). Anyway, coming to the numbers, the U.S. has a population of roughly 35 crore, taking a little bit of liberty with the numbers given at popclock. India meanwhile has 135 crore, or almost four times the population of the U.S., and the amount of testing done is 1/10th, as shared above. That just goes to show where the GOI’s priorities lie. We are running out of beds, ventilators and whatever else there is. Whatever resources are there are being used for covid patients, and they are being charged a bomb. I have a couple of hospitals near my place, and the cost of a bed in an isolation ward is upward of INR 100k; if you need a ventilator then add another 50k. And in a moment of rarity, the differences between the charges of private and public hospitals are zero. Meaning there is immense profiteering happening, it seems, in the medical world. Heck, even the Govt. is in on the act, charging 18% GST on sanitizers. If this is not looting then I dunno what is.

Example of Medical Bills people have to pay.

China, Nepal & Diplomacy

While everybody today knows how China has intruded and captured quite a part of Ladakh, this wasn’t the case when they started in April. At that time Ajai Shukla had shared this with the top defence personnel, but nothing came of it. Then on May 30th he broke/shared the news with the rest of the world and was immediately branded anti-national, a person on the Chinese payroll and what not. This despite the fact that he and Pravin Sawnhey of Force Magazine had both been warning of the same since last year itself. Pravin has a YouTube channel and had been warning India about Chinese intentions from 2015 and even before that. He had warned repeatedly that our obsession with the Pakistan border meant that we were taking our eyes off the border with China, which spans almost 2300-odd km, going all the way to Arunachal Pradesh. A good map which shows the conflict can be found at dw.com, which I am sharing/reproducing below –

India-China Border Areas – Copyright DW.com 2020

Note: I am sharing a neutral party’s rendering of the border disputes, somebody who doesn’t have as much at stake as the two countries do, so that things can be looked at a little objectively.

The Prime Minister, on the other hand, made the comment which ended up galvanising a made-up word into a verb. It means to go without coming in. In fact, several news sites shared the statement made by the Prime Minister and the majority of people were shocked. There had also been reports that he gave the current CDS, General Rawat, a person of his own choosing, a piece of his mind. But what led to this confrontation in the first place? I think many pieces are part of that puzzle; one of them is surely the cutting of the defence budget for the last 6 years. Even this year, look at the budget slashes done in the earlier part of the year, when he shared how HAL had to raise loans from the market to pay the salaries of its own people. Later he shared how the Govt. was planning to slash the defence budget. Interestingly, he had also shared some of the reasons which reaffirm that it is only the Govt. which can solve some of these issues/conundrums –

“First, it must recognize that our firms competing for global orders are up against rivals that are being supported by their home governments with tax and export incentives and infrastructure that almost invariably surpasses India’s. Our government must provide its aerospace firms with a level playing field, if not a competitive advantage. The greatest deterrent to growth our companies face is the high cost of capital and lack of access to funds. In several cases, Indian MSMEs have had to turn down offers to build components and assemblies for global OEM supply chains simply because the cost of capital to create the shop floor and train the personnel was too high. This resulted in a loss of business and a missed opportunity for creating jobs and skills. To overcome this, the government could create a sector specific “A&D Fund” to provide low cost capital quickly to enable our MSMEs to grab fleeting business opportunities. ” – Ajai Shukla, blogpost – 13th March 2020 .

And then, reporting on 11th May 2020 itself, CDS Gen. Rawat commented on saving the budget; the comments were in poor taste, but still he shared what he thought about it. So that, at the end of it, is one part of the story. The other part probably lies in India’s relations with its neighbours and the lack of numbers in diplomats and diplomacy. Let me cover both one by one.

Diplomats, lack of numbers and hence the hands we are dealt

When Mr. Modi started his first term, he used the phrase ‘Maximum Governance, Minimum Government’ but sadly cut those places where India indeed needs more people, one of which is diplomacy. A slightly dated 2012 article/opinion writes that India needs to engage with the rest of the world and do so with higher numbers. Cut to 2020 and the numbers remain more or less the same. What Mr. Modi tried to do, instead of using diplomats, was to use his charm and hugopolicy, for lack of a better term. Six years later, here we are. After 200 trips abroad, there is not a single trade agreement to show for it. I could go on, but both time and energy are not on my side, hence now switching to Nepal.

Nepal, once a friend, now an enemy?

Nepal had been a friend of India for 70-odd years; what changed in the last few years that turned it from friend to enemy? There have been two incidents in recent memory that changed the status quo. The first is the 2015 Nepal blockade. Now, one could argue it either way, but the truth is that Nepal understood that it is heavily dependent on India, and hence, as any sovereign country would do in its own interest, it also started courting China for imports so there is some balance.

The second one, though, is one of our own making. On December 16, 2014, RBI allowed Nepali citizens to hold cash up to INR 25,000/-. Then in 2016, when demonetization was announced, they said that people could exchange only up to INR 4,500/-, which was far below the limit shared above. And btw, before people start blaming just RBI for the decision, FEMA decisions are taken jointly by the finance ministry (FE) as well as the ministry of external affairs (MEA), so the decision could not have been announced without them knowing. The lowering of the limit during demonetization is what made Nepal move more into Chinese hands, and this has been shared by a number of people in numerous articles on different websites. The Wire interview with the vice-chairman of Niti Ayog is pretty interesting. The argument that Nepal should give an estimate of how much old money is there falls flat when, in demonetization itself, it was thought that around 30-40% was black money and would not be returned, but by RBI’s own admission 99.3% of the money was returned. Perhaps they should have consulted Prof. Arun Kumar of JNU, who has extensively written on and studied the topic, before taking that foolhardy step. It is the reason that an economy which was soaring at 9% has been contracting ever since; I could give a dozen articles stating that, but for the moment just one will suffice. The slowing economy and the sharp divisions between people based on outlook, religion or whatever also encouraged China to attack us. This year is not good for India. The only thing I hope Indians and people everywhere do is just maintain physical distance, wear masks and somehow survive till the middle of next year without getting infected, when probably most of the vaccine candidates will have been trialed, results will be in and we will have a ready vaccine. I do hope that, at least for once, ICMR shares data even after the vaccine is approved, whichever vaccine it is. Till later.

Enrico Zini: More notable people

Sunday 19th of July 2020 10:00:00 PM
René Carmille - Wikipedia (history, people, archive.org, 2020-07-20)
René Carmille (8 January 1886 – 25 January 1945) was a French humanitarian, civil servant, and member of the French Resistance. During World War II, Carmille saved tens of thousands of Jews in Nazi-occupied France. In his capacity at the government's Demographics Department, Carmille sabotaged the Nazi census of France, saving tens of thousands of Jewish people from death camps.

Gino Strada - Wikipedia (health, people, archive.org, 2020-07-20)
Gino Strada (born Luigi Strada; 21 April 1948) is an Italian war surgeon and founder of Emergency, a UN-recognized international non-governmental organization.

Morbo di K - Wikipedia (history, people, archive.org, 2020-07-20)
Morbo di K ("Syndrome K") is a disease invented in 1943, during the Second World War, by Adriano Ossicini together with Doctor Giovanni Borromeo, to save some Italians of Jewish faith from the Nazi-Fascist persecutions in Rome.

Gino Bartali - Wikipedia (history, people, archive.org, 2020-07-20)
Stage races

Russ Allbery: PGP::Sign 1.01

Sunday 19th of July 2020 03:09:00 AM

This is mostly a test-suite fix for my Perl module to automate creation and verification of detached signatures.

The 1.00 release of PGP::Sign added support for GnuPG v2 and changed the default to assume that gpg is GnuPG v2, but this isn't the case on some older operating systems (particularly non-Linux ones). That in turn caused chaos for automated testing.

This release fixes the test suite to detect when gpg is GnuPG v1 and run the test suite accordingly, trying to exercise as much of the module as possible in that case. It also fixes a few issues found by Perl::Critic::Freenode.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Chris Lamb: The comedy is over

Saturday 18th of July 2020 10:59:19 PM

By now everyone must have seen the versions of comedy shows with the laugh track edited out. The removal of the laughter doesn't just reveal the artificial nature of television and how it conscripts the viewer into laughing along; by subverting key conversational conventions, it reveals some of the myriad and subtle ways humans communicate with one another:

Although the show's conversation is ostensibly between two people, the viewer serves as a silent third actor through which they — and therefore we — are meant to laugh along with. Then, when this third character is forcibly muted, viewers not only have to endure the stilted gaps, they also sense an uncanny loss of familiarity by losing their 'own' part in the script.

A similar phenomenon can be seen in other art forms. In Garfield Minus Garfield, the forced negative spaces that these pauses introduce are discomfiting, almost to the level of performance art:

But when the technique is applied to other TV shows such as The Big Bang Theory, it is unsettling in entirely different ways, exposing the dysfunctional relationships and the adorkable misogyny at the heart of the show:

Once you start to look for it, the ur-elements of the audience, response and timing in the way we communicate are everywhere, from the gaps we leave so that others instinctively know when you have finished speaking, to the myriad of ways you can edit a film. These components are always present, it is only when one of them is taken away that they become more apparent. Today, the small delays added by videoconferencing adds an uncanny awkwardness to many of our everyday interactions too. It is said that "comedy is tragedy plus timing", so it is unsurprising that Zoom's undermining of timing leads, by this simple calculus of human interactions, to feelings of... tragedy.


§


Leaving aside the usual comments about Pavlovian conditioning and the shows that are the exceptions, complaints against canned laughter are the domain of the pub bore. I will therefore only add two brief remarks. First, rather than being cynically added to artificially inflate the lack of 'real' comedy, laugh tracks were initially added to replicate the live audience of existing shows. In other words, without a laugh track, these new shows might have ironically appeared almost as eerie as the fan edits cited above are today.

Secondly, although laugh tracks are described as "false", this is not entirely correct. After all, someone did actually laugh, even if it was for an entirely different joke. In his Simulacra and Simulation, cultural theorist Jean Baudrillard might have poetically identified canned laughter as a "reflection of a profound reality", rather than an outright falsehood. One day, when this laughter becomes entirely algorithmically generated, Baudrillard would describe it as "an order of sorcery", placing it metaphysically on the same level as the entirely pumpkin-free Pumpkin Spiced Latte.


§


For a variety of reasons I recently decided to try interacting with various social media platforms in new ways. One way of loosening my addiction to this pornography of the amygdala was to hide the number of replies, 'likes' and related numbers:

The effect of installing this extension was immediate. I caught my eyes darting to where the numbers had been and realised I had been subconsciously looking for the input — and perhaps even the outright validation — of the masses. To be sure, these numbers can be relevant and sometimes useful, but they do implicitly involve delegating part of your responsibility of thinking for yourself to the vox populi, or the Greek chorus of the 21st century.

Like many of you reading this, I am sure I told myself that the number of 'likes' has no bearing on whether I should agree with something, but hiding the numbers reveals much of this might have been a convenient fiction; as an entire century of discoveries in behavioural economics has demonstrated, all the pleasingly-satisfying arguments for rational free-market economics stand no chance against our inherent buggy mammalian brains.


§


Tying a few things together, when attempting to doomscroll through social media without these numbers, I realised that social media without the scorecard of engagement is almost exactly like watching these shows without the laugh track.

Without the number of 'retweets', the lazy prompts to remind you exactly when, how and for how much to respond are removed, and replaced with the same stilted silences of those edited scenes from Friends. At times, the existential loneliness of Garfield Minus Garfield creeps in too, and there is more than enough of the dysfunctional, validation-seeking and parasocial 'conversations' of The Big Bang Theory. Most of all, the whole exercise permits a certain level of detached, critical analysis, allowing one to observe that the platforms often feel like a pre-written script with your 'friends' cast as actors, all perpetuated on the heady fumes of rows INSERT-ed into a database on the other side of the world.

I'm not quite sure how this will affect my usage of the platforms, and any time spent away from these sites may mean fewer online connections at a time when we all need them the most. But as Karal Marling, professor at the University of Minnesota, wrote about artificial audiences: "Let me be the laugh track."

More in Tux Machines

Programming Leftovers

  • Engineer Your Own Electronics With PCB Design Software

    A lot of self-styled geeks out there tend to like to customize their own programs, devices, and electronics. And for the true purists, that can mean building from the ground up (you know, like Superman actor Henry Cavill building a gaming PC to the delight of the entire internet). Building electronics from the ground up can mean a lot of different things: acquiring parts, sometimes from strange sources; a bit of elbow grease on the mechanical side of things; and today, even taking advantage of the 3D printing revolution that’s finally enabling people to manufacture customized objects in their home. Beyond all of these things though, engineering your own devices can also mean designing the underlying electronics — beginning with printed circuit boards, also known as PCBs. [...] On the other hand, there are also plenty of just-for-fun options to consider. For example, consider our past buyer’s guide to the best Linux laptop, in which we noted that you can always further customize your hardware. With knowledge of PCB design, that ability to customize even a great computer or computer setup is further enhanced. You might, for instance, learn how to craft PCBs and devices amounting to your own mouse, gaming keyboard, or homemade speakers — all of which can make your hardware more uniquely your own. All in all, PCB design is a very handy skill to have in 2020. It’s not typically necessary, in that there’s usually a device or some light customization that can give you whatever you want or need out of your electronics. But for “geeks” and tech enthusiasts, knowledge of PCB design adds another layer to the potential to customize hardware.

  • Programming pioneer Fran Allen dies aged 88 after a career of immense contributions to compilers

Frances Allen, one of the leading computer scientists of her generation and a pioneer of women in tech, died last Tuesday, her 88th birthday. Allen is best known for her work on compiler organisation and optimisation algorithms. Together with renowned computer scientist John Cocke, she published a series of landmark papers in the late '60s and '70s that helped to lay the groundwork for modern programming. In recognition of her efforts, in 2006 Allen became the first woman to be awarded the A.M. Turing Award, often called the Nobel Prize of computing.

  • Excellent Free Tutorials to Learn ECMAScript

    ECMAScript is an object-oriented programming language for performing computations and manipulating computational objects within a host environment. The language was originally designed as a scripting language, but is now often used as a general-purpose programming language. ECMAScript is best known as the language embedded in web browsers, but it has also been widely adopted for server and embedded applications.
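
    To make the "scripting language that grew into a general-purpose language" point concrete, here is a minimal sketch in modern ECMAScript syntax (written as TypeScript; the Counter class and its names are invented for this illustration and are not taken from the tutorials above):

      // A small class: object-oriented features in a language that began
      // as a browser scripting language.
      class Counter {
        private count = 0;

        increment(by: number = 1): number {
          this.count += by;
          return this.count;
        }
      }

      // The same code runs in a browser or in a server-side host
      // environment (e.g. under Node.js or Deno).
      const c = new Counter();
      console.log(c.increment());    // 1
      console.log(c.increment(41));  // 42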

  • Alexander Larsson: Compatibility in a sandboxed world

    Compatibility has always been a complex problem in the Linux world. With the advent of containers/sandboxing it has become even more complicated. Containers help solve compatibility problems, but issues remain, especially on the Linux desktop, where things are highly interconnected. In fact, containers even create some problems that we didn't use to have. Today I'll take a look at the issues in more detail and give some ideas on how best to think about compatibility in this post-container world, focusing on desktop use with technologies like flatpak and snap. [...] Another type of compatibility is that of communication protocols. Two programs that talk to each other using a networking API (which could be on two different machines, or locally on the same machine) need to use a protocol to understand each other. Changes to this protocol need to be carefully considered to ensure they are compatible. In the remote case this is pretty obvious, as it is very hard to control what software two different machines use. However, even for local communication between processes care has to be taken. For example, a local service could be using a protocol that has several implementations, and they all need to stay compatible. Sometimes local services are split into a service and a library, and the compatibility guarantees are defined by the library rather than the service. Then we can achieve some level of compatibility by ensuring the library and the service are updated in lock-step. For example, a distribution could ship them in the same package.
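
    As a rough sketch of the "compatibility guarantee lives in the library" idea above, the snippet below shows a client library refusing to talk to a service that speaks a newer, incompatible protocol. It is written in TypeScript, and every name in it (PROTOCOL_VERSION, ProtocolMessage, accept) is hypothetical, not taken from flatpak, snap or any real desktop service:

      // Hypothetical wire format shared by a local service and its client
      // library. Applications link against the library, so the compatibility
      // promise effectively lives here.
      const PROTOCOL_VERSION = 2;

      interface ProtocolMessage {
        version: number;          // bumped only for incompatible changes
        kind: "hello" | "request" | "reply";
        payload?: unknown;        // new optional fields keep old peers working
      }

      // Refuse to talk to a peer speaking a newer protocol instead of
      // silently misinterpreting its messages.
      function accept(msg: ProtocolMessage): boolean {
        return msg.version <= PROTOCOL_VERSION;
      }

      console.log(accept({ version: 2, kind: "hello" })); // true
      console.log(accept({ version: 3, kind: "hello" })); // false: peer too new

    If the library and the service ship in the same distribution package and are updated in lock-step, a check like this should never fail in practice, which is exactly the guarantee described above.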

  • GXml-0.20 Released

    GXml is an object-oriented implementation of DOM version 4, using GObject classes and written in Vala. It has a fast and robust serialization implementation from GObject to XML and back, with a high degree of control. After serialization, it provides a set of collections giving access to child nodes, using lists or hash tables. The new 0.20 release is the first step toward 1.0; it provides a cleaner API and removes old, unmaintained implementations. GXml is the base of other projects depending on DOM4, like GSVG, an engine to read SVG documents based on the SVG 1.0 specification. GXml uses a method to set properties and fill declared containers for child nodes, accessing GObject internals directly, which makes it fast. A libxml-2.0 engine is used to read each node sequentially, but the design allows new engines to be implemented in the future.

  • Let Mom Help You With Object-Oriented Programming

    Mom is a shortcut for creating Moo classes (and roles). It allows you to define a Moo class with the brevity of Class::Tiny. (In fact, Mom is even briefer.) Beyond simply declaring Class::Tiny-like attributes, Mom also lets you use other Moo features: you can choose whether attributes are read-only, read-write, or read-write-private, whether they're required or optional, and you can specify type constraints, defaults, etc.

  • Perl Weekly Challenge 73: Min Sliding Window and Smallest Neighbor

    These are some answers to Week 73 of the Perl Weekly Challenge, organized by Mohammad S. Anwar. Spoiler alert: the deadline for this weekly challenge is a few days from now (Aug. 16, 2020). This blog post offers some solutions to the challenge, so please don't read on if you intend to complete it on your own.
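
    For readers who have not seen the first task before: a sliding-window minimum reports the smallest element of every fixed-size window as it slides across a list. The following is a generic monotonic-queue sketch in TypeScript, not the author's Perl or Raku solution, and the function name is invented for this illustration:

      // Minimum of every window of size k, keeping a deque of candidate
      // indices whose values are in increasing order (a "monotonic queue").
      function minSlidingWindow(nums: number[], k: number): number[] {
        const result: number[] = [];
        const deque: number[] = []; // indices into nums

        for (let i = 0; i < nums.length; i++) {
          // Drop the index that has slid out of the current window.
          if (deque.length > 0 && deque[0] <= i - k) {
            deque.shift();
          }
          // Drop candidates that can never be the minimum again.
          while (deque.length > 0 && nums[deque[deque.length - 1]] >= nums[i]) {
            deque.pop();
          }
          deque.push(i);
          if (i >= k - 1) {
            result.push(nums[deque[0]]);
          }
        }
        return result;
      }

      console.log(minSlidingWindow([1, 5, 0, 2, 9, 3], 3)); // [ 0, 0, 0, 2 ]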

  • [rakulang] 2020.32 Survey, Please

    The TPF Marketing Committee wants to learn more about how you perceive “The Perl Foundation” itself, and asks you to fill in this survey (/r/rakulang, /r/perl comments). Thank you!

Hardware With Linux Support: NUVIA and AMD Wraith Prism

  • Performance Delivered a New Way

    The server CPU has evolved at an incredible pace over the last two decades. Gone are the days of discrete CPUs, northbridges, southbridges, memory controllers, other external I/O and security chips. In today’s modern data center, the SoC (System On A Chip) does it all. It is the central point of coordination for virtually all workloads and the main hub where all the fixed-function accelerators connect, such as AI accelerators, GPUs, network interface controllers, storage devices, etc.

  • NUVIA Published New Details On Their Phoenix CPU, Talks Up Big Performance/Perf-Per-Watt

    Since NUVIA left stealth last year and hired some prominent Linux/open-source veterans to complement its ARM processor design experts, we have been quite eager to hear more about this latest start-up aiming to deliver compelling ARM server products. Today the company shared some early details on its initial "Phoenix" processor, which is coming within its "Orion" SoC. The first-generation Phoenix CPU is said to feature a "complete overhaul" of the CPU pipeline and is a custom core based on the ARM architecture. NUVIA believes that Phoenix+Orion will be able to take on Intel/AMD x86_64 CPUs not only in raw performance but also in performance-per-watt.

  • Take control of your AMD Wraith Prism RGB on Linux with Wraith Master

    Where the official vendor doesn't bother with supporting Linux properly, once again the community steps in to provide. If you want to tweak your AMD Wraith Prism lighting on Linux, check out Wraith Master. It's a similar project to CM-RGB that we previously highlighted. With the Wraith Master project, they provide a "feature-complete" UI and command-line app for controlling the fancy LED system on AMD's Wraith Prism cooler with eventual plans to support more.

The Massive Privacy Loopholes in School Laptops

It's back-to-school time and, with so many school districts participating in distance learning, many if not most are relying on computers and technology more than ever before. Wealthier school districts are providing their students with laptops or tablets, but not all schools can afford to provide each student with a computer, which means that this summer parents are scrambling to find a device for their child to use for school. Geoffrey Fowler wrote a guide in the Washington Post recently to aid parents in sourcing a computer or tablet for school. Given how rough kids can be with their things, many people are unlikely to give their child an expensive, premium laptop. The guide mostly focuses on incredibly low-cost, almost-disposable computers, so you won't find a computer in the list that has what I consider a critical feature for privacy in the age of video conferencing: hardware kill switches. Often a guide like this would center on Chromebooks, as Google has invested a lot of resources to get low-cost Chromebooks into schools, yet I found Mr. Fowler's guide particularly interesting because of his opinion on Chromebooks in education... Read more

Also: Enabling Dark Mode on a Chromebook (Do not try this at home)

Christopher Arnold: The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor

Working at Mozilla has been a very educational experience over the past eight years. I have had the chance to work side-by-side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem. How did I go from being in front of a computer screen in 1995 to being behind the workings of the web now? Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy. It's amazing to think that a product I used 25 years ago ended up altering the course of my life so dramatically thereafter. But the world and the web were much different back then. And thousands of people with similar stories followed a similar course, coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990s with great fascination. My father was an engineer at International Business Machines when the Personal Computer movement was just getting started. His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well-trodden path. He suggested I study many things, not just the things I felt most comfortable pursuing. He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop." He never made an effort to steer me in the direction of engineering. In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend. Networked documents is becoming an important new innovation." This was long before the World Wide Web became popular in the societal zeitgeist. His advice was prophetic for me. [...]

The Mozilla Project grew inside AOL for a long while beside the AOL and Netscape browsers. But at some point the executive team believed that this needed to be streamlined. Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor came up with a suggestion that the Mozilla Project should be spun out of AOL. Doing this would allow all of the enterprises interested in working on open-source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive. First, it would need to have all the patents that were in the Netscape portfolio to avoid hostile legal challenges from outside. Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations. Third, it would need protection from take-over bids that might come from AOL competitors. To achieve this, they decided Mozilla should be a non-profit foundation with the patent grants and trademark grants from AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser, specifically for the developer ecosystem, could transfer to working for Mozilla. Mozilla left Netscape's crowdsourced web index (called DMOZ, or the Open Directory) with AOL. DMOZ went on to be the seed for the PageRank index of Google when Google decided to split out from powering the Yahoo search engine and seek its own independent course.
It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is well documented in the book The Search by John Battelle. Once the Mozilla Foundation was established (along with a $2 million grant from AOL), it sought donations from other corporations who were to become dependent on the project. The team split out Netscape Communicator's email component as the stand-alone open-source Thunderbird email application, and the Phoenix browser was released to the public as "Firefox" because of a trademark issue with another US company over use of the term "Phoenix" in association with software.

Google had by this time broken off from its dependence on Yahoo as a source of web traffic for its nascent advertising business. It offered to pay the Mozilla Foundation for search traffic that could be routed to Google's search engine preferentially over Yahoo or the other search engines of the day. Taking "revenue share" from advertising was not something that the non-profit Mozilla Foundation was particularly well set up to do. So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with other public companies. The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit. [...]

Working in the open was part of the original strategy AOL had when they open sourced Netscape. If they could get other companies to build together with them, the collaborative work of contributors outside the AOL payroll could contribute to the direct benefit of the browser team inside AOL. Bugzilla was structured as a hierarchy of nodes, where a node owner could prioritize external contributions to the code base and commit them to be included in the derivative build, which would be scheduled to be released as a new update package every few months. Module Owners, as they were called, would evaluate candidate fixes or new features against their own list of items to triage in terms of product feature requests or complaints from their own team.

The main team that shipped each version was called Release Engineering. They cared less about the individual features being worked on than the overall function of the broader software package. So they would bundle up a version of the then-current software that they would call a Nightly build, as there were builds being assembled each day as new bugs were upleveled and committed to the software tree. Release Engineering would watch for conflicts between software patches and annotate them in Bugzilla so that the various module owners could look for conflicts that their code commits were causing in other portions of the code base.

Read more