Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

PowerShell on Linux? A primer on Object-Shells

Friday 24th of September 2021 08:00:00 AM

In the previous post, Install PowerShell on Fedora Linux, we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells.

Differences at first glance — Usability

One of the very first differences to take note of when using PowerShell for the first time is semantic clarity.

Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often simply have to be memorized.

Commands like awk, ps, top or even ls do not communicate what they do through their names. The names only start to make sense once one already knows what the commands do: once I know that ls lists files, the abbreviation makes sense.

In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention.

Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun.

One example: To get all files or child-items in a directory I tell PowerShell like this:

PS > Get-ChildItem

    Directory: /home/Ozymandias42

Mode   LastWriteTime      Length  Name
----   -------------      ------  ----
d----  14/04/2021 08:11           Folder1
d----  13/04/2021 11:55           Folder2

An Aside:
The cmdlet name is Get-ChildItem, not Get-ChildItems. This is in acknowledgement of set theory. Each of the standard cmdlets returns a list or a set of results. The number of items in a set —mathematicians call this the set’s cardinality— can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result or many results. The reason for this, and why I stress it here, is that the standard cmdlets also implicitly implement a ForEach loop for any results they return. More about this later.
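To see this implicit iteration at work, here is a minimal sketch (the folder names are the ones from the listing above); Select-Object receives the whole set through the pipe and processes each item:

PS > Get-ChildItem | Select-Object Name

Name
----
Folder1
Folder2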

Speed and efficiency

Aliases

You might have noticed that standard cmdlet names are long and can therefore be time-consuming to type when writing scripts. However, many cmdlets are aliased, and cmdlet names are not case-sensitive, which mitigates this problem.

Let’s write a script with unaliased cmdlets as an example:

PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan}

This lists the names of running processes in cyan. As you can see, many characters are in upper case and the cmdlet names are relatively long. Let’s shorten them and write everything in lower case to make the script easier to type:

PS > gps | foreach {write-host $_.name -foregroundcolor cyan}

This is the same script but with greatly simplified input.

To see the full list of aliased cmdlets, type Get-Alias.

Custom aliases

Just like any other shell, PowerShell also lets you set your own aliases by using the Set-Alias cmdlet. Let’s alias Write-Host to something simpler so we can make the same script even easier to type:

PS > Set-Alias -Name wh -Value Write-Host

Here, we aliased wh to Write-Host to make it quicker to type. When setting aliases, -Name indicates what you want the alias to be and -Value indicates what the alias points to.

Let’s see how it looks now:

PS > gps | foreach {wh $_.name -foregroundcolor cyan}

You can see that we already made the script easier to type. If we wanted, we could also alias ForEach-Object to fe, but you get the gist.

If you want to see the properties of an alias, you can type Get-Alias. Let’s check the properties of the alias wh using the Get-Alias cmdlet:

PS > Get-Alias wh

CommandType     Name                  Version    Source
-----------     ----                  -------    ------
Alias           wh -> Write-Host

Autocompletion and suggestions

By default, PowerShell suggests cmdlets or flags when you press the Tab key twice. If there is nothing further to suggest, PowerShell completes the cmdlet automatically.

Differences from POSIX shells — Char-stream vs. Object-stream

Anyone who scripts will eventually string commands together via the pipe | and soon notice a few key differences.

In bash, what is moved from one command to the next through a pipe is just a string of characters. In PowerShell, however, this is not the case.

In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this:

{ firstAuthor=Ozy, secondAuthor=Skelly }

This data is kept as-is even if a command, used alone, would have presented this data as follows:

AuthorNr.  AuthorName
1          Ozy
2          Skelly

In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like awk or cut first, to be usable with a different command.

PowerShell does not require this parsing since the underlying structure is sent when using a pipe rather than the formatted output shown without. So the command authorObject | doThingsWithSingleAuthor firstAuthor is possible.
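To make this concrete, here is a minimal sketch. The cmdlet doThingsWithSingleAuthor from above is hypothetical, but constructing such an object and addressing its properties is standard PowerShell:

# Build the structure from above as a real object:
PS > $authorObject = [PSCustomObject]@{ firstAuthor = "Ozy"; secondAuthor = "Skelly" }
# Downstream commands can address each property by name; no string parsing required:
PS > $authorObject | ForEach-Object { $_.firstAuthor }
Ozy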

The following examples shall further illustrate this.

Beware: This will get fairly technical and verbose. Skip if satisfied already.

A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to:

  • filter for something
  • format output
  • sort output

When implementing these in bash, there are a few things that recur time and time again.
The following sections will exemplify these constructs and their variants in bash and contrast them with their PowerShell equivalents.

To filter for something

Let’s say you want to see all processes matching the name ssh-agent.
In human thinking terms you know what you want.

  1. Get all processes
  2. Filter for all processes that match our criteria
  3. Print those processes

To apply this in bash we could do it in two ways.

The first one, which most people who are comfortable with bash might use, is this:

$ ps -p $(pgrep ssh-agent)

At first glance this is straightforward. ps gets all processes and the -p flag tells it to filter for a given list of pids.
What the veteran bash user might forget here, however, is that while the command reads this way, it is not actually run as such. There’s a tiny but important little thing called the order of evaluation.

$() is a subshell. A subshell is run, or evaluated, first. This means the list of pids to filter against is generated first, and the result is then returned in place of the subshell for the waiting outer command ps to use.

This means it is written as:

  1. Print processes
  2. Filter Processes

but evaluated the other way around. It also implicitly combines the original steps 2. and 3.

A less often used variant that more closely matches the human thought pattern and evaluation order is:

$ pgrep ssh-agent | xargs ps

The second variant still combines two steps, steps 1 and 2, but follows the evaluation logic a human would think of.

The reason this variant is used less often is that ominous xargs command. What it basically does is append all lines of output from the previous command as a single long list of arguments to the command that follows it. In this case ps.

This is necessary because pgrep produces output like this:

$ pgrep bash
14514
15308

When used in conjunction with a subshell, ps might not care about this, but when using pipes to approximate the human evaluation order this becomes a problem.

What xargs does, is to reduce the following construct to a single command:

$ for i in $(pgrep ssh-agent); do ps $i ; done

Okay. Now we have talked a LOT about evaluation order and about the different ways to implement the three basic steps we outlined in bash, each with a different evaluation order.

So with this much preparation, how does PowerShell handle it?

PS > Get-Process | Where-Object Name -Match ssh-agent

Completely self-descriptive, and it follows the evaluation order of the steps we outlined perfectly. Also take note of the absence of xargs or any explicit for-loop.

As mentioned in our aside a few hundred words back, the standard cmdlets all implement ForEach internally and apply it implicitly when piped input in list form.

Output formatting

This is where PowerShell really shines. Consider a simple example to see how it’s done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest, presented as a table with filename, size and creation date. Also, let’s say we have some files with long filenames in there and want to make sure we get the full filename no matter how wide our terminal is.

Field separators, column-counting and sorting

Now the first obvious step is to run ls with the -l flag to get a list with not just the filenames but the creation date and the file sizes we need to sort against too.

We will get a more verbose output than we need. Like this one:

$ ls -l
total 148692
-rwxr-xr-x 1 root root  51984 May 16  2020 [
-rwxr-xr-x 1 root root 283728 May  7 18:13 appdata2solv
lrwxrwxrwx 1 root root      6 May 16  2020 apropos -> whatis
-rwxr-xr-x 1 root root  35608 May 16  2020 arch
-rwxr-xr-x 1 root root  14784 May 16  2020 asn1Coding
-rwxr-xr-x 1 root root  18928 May 16  2020 asn1Decoding
[not needed]       [not needed]

What is apparent is that, to get the kind of output we want, we have to get rid of the fields marked [not needed] in the example above. But that’s not the only thing needing work. We also need to sort the output so that the biggest file is the first in the list, meaning a reverse sort…

This, of course, can be done in multiple ways, but it shows once again how convoluted bash scripts can get.

We can either sort with the ls tool directly, by using the -r flag for reverse sort and the --sort=size flag for sort by size, or we can pipe the whole thing to sort and supply that with the -n flag for numeric sort and the -k 5 flag to sort by the fifth column.

Wait! Fifth? Yes. Because this, too, we would have to know. sort, by default, uses spaces as field separators, meaning that in the tabular output of ls -l the number representing the size is the fifth field.
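Put together, the two sorting variants just described look like this (a sketch based on the flags above; note that ls --sort=size already lists the biggest file first on its own, so the explicit reverse flag is mainly needed in the sort variant, where numeric sort is ascending by default):

$ ls -l --sort=size
$ ls -l | sort -rn -k 5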

Getting rid of fields and formatting a nice table

To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and the one most likely to be known, is probably cut. This is one of the few UNIX commands that is self-descriptive, even if it’s just because of the natural brevity of its associated verb. So we pipe our results so far into cut and tell it to only output the columns we want and how they are separated from each other.

cut -f5- -d" " will output from the fifth field to the end. This gets rid of the first four columns.

283728 May  7 18:13 appdata2solv
 51984 May 16  2020 [
 35608 May 16  2020 arch
 14784 May 16  2020 asn1Coding
     6 May 16  2020 apropos -> whatis

This is still far from how we wanted it. First of all, the filename is in the last column, and the filesize is in the human-unfriendly format of plain bytes instead of KB, MB, GB and so on. Of course we could fix that too, in various ways at various points in our already long pipeline.

All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline.

How it’s done in PowerShell

PS > Get-ChildItem | Sort-Object Length -Descending |
     Format-Table -AutoSize Name,
       @{Name="Size"; Expression={ [math]::Round($_.Length/1MB,2).toString()+" MB" }},
       CreationTime
#Reformatted over multiple lines for better readability.

The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. This is also one of the few real weaknesses of PowerShell: it lacks a simple mechanism for producing human-readable filesizes.

That part aside, it is clear that Format-Table allows you to simply list the wanted columns by their names, in the order you want them.

This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters.

Remote Administration with PowerShell — PowerShell-Sessions on Linux!?

Background

Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol.

With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client.

Using the SSH Server alone on Windows provides the user a CMD prompt unless the default system Shell is changed via a registry key.

A more elegant option is to make use of the Subsystem facility in sshd_config. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell.

By default there is usually one already there: the sftp subsystem.

To make PowerShell available as a subsystem, one simply needs to add it to sshd_config like so:

Subsystem powershell /usr/bin/pwsh -sshs --noprofile --nologo

This works —with the correct paths of course— on all operating systems PowerShell Core is available for. That means Windows, Linux, and macOS.
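For the new subsystem to be picked up, sshd has to re-read its configuration. On Fedora Linux that would typically be:

$ sudo systemctl restart sshd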

What this is good for

It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this:

PS > Enter-PSSession -HostName <target-HostName-or-IP> -User <targetUser> -IdentityFilePath <path-to-id_rsa-file> ... <-SSHTransport>

What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem.

One such example is the Invoke-Command cmdlet. This becomes especially useful, given that Invoke-Command has the -AsJob flag.

What this enables is running local scripts as batch jobs on multiple remote servers, while using the local job manager to get feedback about when the jobs have finished on the remote machines.

While it is possible to run local scripts via ssh on remote hosts, it is not as straightforward to view their progress, and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity’s sake.

With PowerShell, however, this can be as easy as this:

$listOfRemoteHosts | ForEach-Object { Invoke-Command -HostName $_ -FilePath /home/Ozymandias42/Script2Run-Remotely.ps1 -AsJob }

Overview of the running tasks is available by doing this:

PS > Get-Job

Id   Name   PSJobTypeName   State     HasMoreData   Location    Command
--   ----   -------------   -----     -----------   --------    -------
1    Job1   BackgroundJob   Running   True          localhost   Microsoft.PowerShe…

Jobs can then be attached to again, should they require manual intervention, by doing Receive-Job <JobName-or-JobNumber>.
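For example, using the job name from the Get-Job output above:

PS > Receive-Job Job1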

Conclusion

In conclusion, PowerShell applies a fundamentally different philosophy in its syntax compared to standard POSIX shells like bash. For bash, of course, this is historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and outputs, which means better understandability for humans and hence makes it easier to use and learn. PowerShell also provides aliases for when the full cmdlet names are too long. The main difference is that PowerShell is object-oriented, which eliminates most input-output parsing. This allows PowerShell scripts to be more concise.

Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!

Thursday 23rd of September 2021 08:00:00 AM

In the Fedora Project community, we look at open source as not only code that can change how we interact with computers, but also as a way for us to positively influence and shape the future. The more hands that help shape a project, the more ideas, viewpoints and experiences the project represents — that’s truly what the spirit of open source is built from.

But it’s not just the global contributors to the Fedora Project who feel this way. August 2021 saw Fedora Linux recognized as a digital public good by the Digital Public Goods Alliance (DPGA), a significant achievement and a testament to the openness and inclusivity of the project.

We know that digital technologies can save lives, improve the well-being of billions, and contribute to a more sustainable future. We also know that in tackling those challenges, Open Source is uniquely positioned in the world of digital solutions by inherently welcoming different ideas and perspectives critical to lasting success.

But, we also know that many regions and countries around the world do not have access to those technologies. Open Source technologies can be the difference between achieving the Sustainable Development Goals (SDGs) by 2030 or missing the targets. Projects like Fedora Linux, which represent much more than code itself, are the game-changers we need. Already, individuals, organizations, governments, and Open Source communities, including the Fedora Project’s own, are working to make sure the potential of Open Source is realized and equipped to take on the monumental challenges being faced.

The Digital Public Goods Alliance is a multi-stakeholder initiative, endorsed by the United Nations Secretary-General. It works to accelerate the attainment of the SDGs in low- and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods (DPGs). DPGs are Open Source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, and do no harm. This definition, drawn from the UN Secretary-General’s 2020 Roadmap for Digital Cooperation, serves as the foundation of the DPG Registry, an online repository for DPGs. 

The DPG Registry was created to help increase the likelihood of discovery, and therefore use of, DPGs. Today, we are excited to share that Fedora Linux was added to the DPG Registry! Recognition as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges. To become a digital public good, all projects are required to meet the DPG Standard to ensure they truly encapsulate Open Source principles. 

As an Open Source leader, Fedora Linux can make achieving the SDGs a reality through its role as a convener of many Open Source “upstream” communities. In addition to providing a fully-featured desktop, server, cloud, and container operating system, it also acts as a platform where different Open Source software and work come together. Fedora Linux by default only ships its releases with purely Open Source software packages and components. While third-party repositories are available for use with proprietary packages or closed components, Fedora Linux is a complete offering with some of the greatest innovations that Open Source has to offer. Collectively this means Fedora Linux can act as a gateway, empowering the creation of more and better solutions to better tackle the challenges they are trying to address.

The DPG designation also aligns with Fedora’s fundamental foundations:

  • Freedom: Fedora Linux was built as Free and Open Source Software from the beginning. Fedora Linux only ships and distributes Free Software from its default repositories. Fedora Linux already uses widely-accepted Open Source licenses.
  • Friends: Fedora has an international community of hundreds spread across six continents. The Fedora Community is strong and well-positioned to scale as the upstream distribution of the world’s most-widely used enterprise flavor of Linux.
  • Features: Fedora consistently delivers on innovation and features in Open Source. Fedora Linux 34 was a record-breaking release, with 63 newly approved Changes.
  • First: Fedora leverages its unique position and resources in the Free Software world to deliver on innovation. New ideas and features are tried out in the Fedora Community to discover what works, and what doesn’t. We have many stories of both.

For us, recognition as a digital public good brings honor and is a great moment for us, as a community, to reaffirm our commitment to contribute and grow the Open Source ecosystem.

This is a proud moment for each Fedora Community member because we are making a difference. Our work matters and has value in creating an equitable world; this is a fantastic and important feeling.

If you have an interest in learning more about the Digital Public Goods Alliance please reach out to hello@digitalpublicgoods.net.

Install PowerShell on Fedora Linux

Wednesday 22nd of September 2021 08:00:00 AM

PowerShell (also written pwsh) is a powerful open source command-line and object-oriented shell developed and maintained by Microsoft. It is syntactically verbose and intuitive for the user. This article is a guide on how to install PowerShell on the host and inside a Podman or Toolbox container.

Why use PowerShell

PowerShell, as the name suggests, is powerful. The syntax is verbose and semantically clear to the end user. For those that don’t want to write long commands all the time, most commands are aliased. The aliases can be viewed with Get-Alias or here.

The most important difference between PowerShell and traditional shells, however, is its output pipeline. While normal shells output strings or character streams, PowerShell outputs objects. This has far reaching implications for how command pipelines work and comes with quite a few advantages.

Demonstration

The following examples illustrate the verbosity and simplicity. Lines that start with the pound symbol (#) are comments. Lines that start with PS > are commands, PS > being the prompt:

# Return all files greater than 50MB in the current directory.
## Longest form
PS > Get-Childitem | Where-Object Length -gt 50MB
## Shortest form (with use of aliases)
PS > gci | ? Length -gt 40MB
## Output looks like this

    Directory: /home/Ozymandias42/Downloads

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-----          20/08/2020    13:55     2000683008 40MB-file.img

# In order: get VMs, get snapshots, only select the last 3 and remove selected list:
PS > Get-VM VM-1 | Get-Snapshot | Select-Object -Last 3 | Remove-Snapshot

What this shows quite well is that input-output reformatting with tools like cut, sed, awk or similar, which Bash scripts often need, is usually not necessary in PowerShell. The reason for this is that PowerShell works fundamentally differently than traditional POSIX shells such as Bash, Zsh, or other shells like Fish. The output of commands in traditional shells consists of strings, whereas in PowerShell it consists of objects.

Comparison between Bash and PowerShell

The following example illustrates the advantages of the object-output in PowerShell in contrast to the traditional string-output in Bash. Suppose you want a script that outputs all processes that occupy 200MB or more in RAM. With Bash, this might look something like this:

$ ps -eO rss | awk -F' ' \
    '{ if($2 >= (1024*200)) { \
        printf("%s\t%s\t%s\n",$1,$2,$6);} \
    }'
PID   RSS   COMMAND
A     B     C
[...]

The first obvious difference is readability or, more specifically, semantic clarity. Neither ps nor awk is self-descriptive. ps shows the process status, and awk is a text-processing tool and language whose letters are the initials of its developers’ last names: Aho, Weinberger, Kernighan (see Wikipedia). Before contrasting it with PowerShell, however, examine the script:

  • ps -e outputs all running processes;
  • -O rss outputs the default output of ps plus the amount of kilobytes each process uses, the rss field; this output looks like this:
    PID   RSS S TTY        TIME COMMAND
      1 13776 S ?      00:00:01 /usr/lib/systemd/systemd
  • | the pipe operator uses the output of the command on the left side as input for the command on the right side.
  • awk -F' ' declares “space” as the input field separator. So, going with the above example, PID is the first field, RSS the second and so on.
  • '{ if($2 >= (1024*200)) { is the beginning of the actual AWK script. It checks whether field 2 (RSS) contains a number larger than or equal to 1024*200KB (204800KB, or 200MB);
  • printf("%s\t%s\t%s\n",$1,$2,$6);} }' continues the script. If the previous part evaluates to true, this outputs the first, second and sixth fields (the PID, RSS and COMMAND fields respectively).

With this in mind, step back and look at what was required for this script to be written and for it to work:

  • The input command ps had to have the field we wanted to filter against in its output. This was not the case by default and required us to use the -O flag with the rss field as argument.
  • We had to treat the output of ps as a list of input fields, requiring us to know their order and structure. In other words, we had to at least know that RSS would be the second field, meaning we had to know how the output of ps would look beforehand.
  • We then had to know what unit the data we were filtering against was in, as well as what unit the processing tool would work in. Meaning, we had to know that the RSS field uses kilobytes and that awk does too. Otherwise we would not have been able to write the expression ($2 >= 1024*200)

Now, contrast the above with the PowerShell equivalent:

# Longest form
PS > Get-Process | Where-Object WorkingSet -ge 200MB
# Shortest form (with use of aliases)
PS > gps | ? ws -ge 200MB

NPM(K)   PM(M)   WS(M)   CPU(s)   Id   SI   ProcessName
------   -----   -----   ------   --   --   -----------
A        B       C       D        E    F    G
[...]

The first thing to notice is that we have perfect semantic clarity. The commands are perfectly self-descriptive in what they do.

Furthermore there is no requirement for input-output reformatting, nor is there concern about the unit used by the input command. The reason for this is that PowerShell does not output strings, but objects.

To understand this, think about the following. In Bash, the output of a command is equal to what it prints out in the terminal. In PowerShell, what is printed on the terminal is not equal to the information that is actually available. This is because the output-printing system in PowerShell also works with objects. Every command in PowerShell marks some of the properties of its output objects as printable and others not, but it always includes all properties, whereas Bash only includes what it actually prints. One can think of it like JSON objects: where output in Bash would be separated into “fields” by a delimiter such as a space or tab, it becomes an easily addressable object property in PowerShell, with the only requirement being that one has to know its name, like WorkingSet in the above example.
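A quick way to convince yourself of this is to pick a property that the default table view of Get-Process does not print. Path is one such property (a minimal sketch; the output will vary on your system):

PS > (Get-Process -Id $PID).Path
/usr/bin/pwsh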

To see all available properties of a command’s output objects and their types, one can simply do something like:

PS > Get-Process | Get-Member

Install PowerShell

PowerShell is available in several package formats, including RPM used by Fedora Linux. This article shows how to install PowerShell on Fedora Linux using various methods.

I recommend installing it natively. But I will also show how to do it in a container, using both the official Microsoft PowerShell container and a Fedora Linux 30 toolbox container. The advantages of the container method are that it is guaranteed to work, since all dependencies are bundled in it, and that it is isolated from the host. Regardless, I recommend doing it natively, despite the official docs only explicitly stating Fedora Linux releases 28 to 30 as being supported.

Note: Supported means guaranteed to work. It does not necessarily mean incompatible with other releases. This means that, while not guaranteed, releases higher than 30 should still work. They did in fact work in our tests.

It is more difficult to set up PowerShell and run it in a container than to run it directly on a host. It takes more time to install and you will not be able to run host commands directly.

Install PowerShell on a host using the package manager

Method 1: Microsoft repositories

Installation is as straightforward as can be, and the procedure doesn’t differ from that of any other software installed through third-party repositories.

It can be split into four general steps:

  1. Adding the new repository’s GPG key
  2. Adding the repository to the DNF repository list
  3. Refreshing the DNF cache to include the available packages from the new repository
  4. Installing the new packages

PowerShell is then launched with the command pwsh.

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
$ sudo dnf makecache
$ sudo dnf install powershell
$ pwsh

To remove the repository and packages, run the following.

$ sudo rm /etc/yum.repos.d/microsoft.repo
$ sudo dnf remove powershell

Method 2: RPM file

This method is not meaningfully different from the first method. In fact, it adds the GPG key and the repository implicitly when installing the RPM file, because the RPM file contains the links to both in its metadata.

First, get the .rpm file for the version you want from the PowerShell Core GitHub repository. See the readme.md “Get Powershell” table for links.

Second, enter the following:

$ sudo dnf install powershell-<version>.rhel.7.<architecture>.rpm

Substitute <version> and <architecture> with the version and architecture you want to use respectively, for example powershell-7.1.3-1.rhel.7.x86_64.rpm.

Alternatively you could even run it with the link instead, skipping the need to download it first.

$ sudo dnf install https://github.com/PowerShell/PowerShell/releases/download/v<version>/powershell-<version>.rhel.7.<architecture>.rpm

To remove PowerShell, run the following.

$ sudo dnf remove powershell

Install via container

Method 1: Podman container

Podman is an Open Container Initiative (OCI) compliant drop-in replacement for Docker.

Microsoft provides a PowerShell Docker container. The following example will use that container with Podman.

For more information about Podman, visit Podman.io. Fedora Magazine has a tag dedicated to Podman.

To use PowerShell in Podman, run the following script:

$ podman run \
    -it \
    --privileged \
    --rm \
    --name powershell \
    --env-host \
    --net=host --pid=host --ipc=host \
    --volume $HOME:$HOME \
    --volume /:/var/host \
    mcr.microsoft.com/powershell \
    /usr/bin/pwsh -WorkingDirectory $(pwd)

This script creates a Podman container for PowerShell and immediately attaches to it. It also mounts the user’s home directory and the host’s root directory into the container so they’re available there. Note that the host’s root directory is available at /var/host.

Unfortunately, you can only indirectly run host commands while inside the container. As a workaround, run chroot /var/host to chroot to the root and then run host commands.

To break the command down, everything is mandatory unless specified:

  • -it creates a persistent environment that does not kick you out when you enter it;
  • --privileged gives extended privileges to the container (optional);
  • --rm removes the container when you exit;
  • --name powershell sets the name of the container to powershell;
  • --env-host sets all host environment variables to the container’s variables (optional);
  • --volume $HOME:$HOME mounts the user directory;
  • --volume /:/var/host mounts the root directory to /var/host (optional);
  • --net=host --pid=host --ipc=host runs the process in the host’s namespaces instead of a separate set of namespaces for the contained process;
  • mcr.microsoft.com/powershell specifies the container image to enter;
  • /usr/bin/pwsh -WorkingDirectory $(pwd) enters the container in the current directory (optional).

Optional but very convenient: alias pwsh with the script to easily access the Podman container by typing pwsh.
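Since the full podman invocation is too long for a plain alias, a small wrapper function in ~/.bashrc is one way to do this. A minimal sketch, trimmed to the flags marked mandatory in the breakdown above:

pwsh() {
    # Interactive, disposable container running in the host's namespaces:
    podman run -it --rm --name powershell \
        --net=host --pid=host --ipc=host \
        --volume "$HOME:$HOME" \
        mcr.microsoft.com/powershell \
        /usr/bin/pwsh -WorkingDirectory "$(pwd)"
}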

To remove the PowerShell image, run the following.

$ podman rmi mcr.microsoft.com/powershell

Method 2: Fedora Linux Toolbox container

Toolbox is an elegant solution for setting up persistent environments without affecting the host system as a whole. It acts as a wrapper around Podman and takes care of supplying many of the flags demonstrated in the previous method. For this reason, Toolbox is a lot easier to use than Podman. It was designed to work for development and debugging. With Toolbox, you can run any command the same as you would directly on the Fedora Workstation host (including dnf).

The installation procedure is similar to the host installation methods, the only difference being that the steps are done inside a container. Make sure you have the toolbox package installed.

Preparing and entering the Fedora 34 Toolbox container is a two step process:

  1. Creating the Fedora 34 Toolbox container
  2. Running the Fedora 34 Toolbox container
$ toolbox create --image registry.fedoraproject.org/f34/fedora-toolbox
$ toolbox enter --container fedora-toolbox

Then, follow the instructions at Method 1: Microsoft repositories.

Optional but very convenient: alias pwsh with toolbox run --container fedora-toolbox pwsh to easily access the Toolbox container by typing pwsh.
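In ~/.bashrc, that alias could look like this:

alias pwsh="toolbox run --container fedora-toolbox pwsh"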

To remove the Toolbox container, make certain you have stopped the Toolbox session by entering exit and then run the following:

$ podman kill fedora-toolbox
$ toolbox rm fedora-toolbox

Open source game achievements

Friday 17th of September 2021 08:00:00 AM

Learn how Gamerzilla brings an achievement system to open source games and enables all developers to implement achievements separate from the game platform.

Some open source games rival the quality of commercial games. While it is hard to match the quality of triple-A games, open source games compete effectively against indie games. But gamer expectations change over time. Early games included a high score. Achievements expanded over time to promote replay. For example, you may have completed a level but didn’t find all the secrets or collect all the coins. The Xbox 360 introduced the first multi-game online achievement system. Since that introduction, many game platforms have added an achievement system.

Open source games are largely left out of the achievement systems. You can publish an open source game on Steam, but it costs money, and they focus on working with companies, not the free software community. Additionally, this locks players into a non-free platform.

Commercial game developers are not well served either, since some players enjoy achievements and refuse to purchase from other stores due to the inability to share their accomplishments. This lock-in gives power to the platform holder. Each platform has a different system, forcing the developer to implement support and testing multiple times. Smaller platforms are likely to be skipped entirely. Furthermore, the platform holder has access to the achievement data of all companies using their system, which could be used for competitive advantage.

Architecture of Gamerzilla

Gamerzilla is an open source game achievement system which attempts to correct this situation. The design considered both open source and commercial games. You can run your own Gamerzilla server, use one provided by a game store, or use one run by a distribution or another group. Where you buy the game doesn’t matter. The achievement data uploads to your Gamerzilla server.

Game achievements require two things: a game, and a Gamerzilla server. As game collections grow, however, that setup has a disadvantage. Each game needs to have credentials to upload to the Gamerzilla server. Many gamers turn to game launchers due to their large number of games and the ability to synchronize with one or more stores. By adding Gamerzilla support to the launcher, the individual games no longer need to know your credentials. Session results are relayed from the game launcher to the Gamerzilla server.

At one time, freegamedev.net provided the Hubzilla social networking system. We created an addon allowing us to jump start Gamerzilla development. Unfortunately server upgrades broke the service so freegamedev.net stopped offering it.

For Gamerzilla servers, two implementations exist. Maintaining Hubzilla is a complex task, so we developed a standalone Gamerzilla service using .Net and React. The API used by games remains the same so it doesn’t matter which implementation you connect to.

Game launcher development and support often lag. To facilitate adding support, we created libgamerzilla. The library handles all the interaction between the game launcher, games, and the Gamerzilla server. Right now only GameHub has an implementation with Gamerzilla support, and merging into the project is pending. On Fedora Linux, the libgamerzilla-server package serves as a temporary solution. It does not launch games but listens for achievements and relays them to your server.

Game support continues growing. As with game launchers, developers use libgamerzilla to handle the Gamerzilla integration. The library, written in C, is in use in a variety of languages like Python and Nim. Games which already have an achievement system typically take only a few days to add support. For other games, collecting all the information to award the achievements occupies the bulk of the implementation time.

Setting up a server

The easiest server to set up is the Hubzilla addon. That, however, requires a working Hubzilla site, which is not the simplest thing to set up. The new .Net and React server can be set up relatively easily on Fedora Linux, although there are a lot of steps. The readme details all the steps. The long set of steps is, in part, due to the lack of a built release. This means you need to build the .Net and the React code. Once built, the React code is served up directly by Apache. A new service runs the .Net piece. Apache proxies all requests to the Gamerzilla API for the new service.

With the setup steps done, Gamerzilla runs, but there are no users. There needs to be an easy way to create an administrator and register new users. Unfortunately, this piece does not exist yet. At this time, users must be entered directly using the sqlite3 command line tool. The instructions are in the readme. Users can be publicly visible or not. The approval flag allows new users to not use the system immediately, but web registration still needs to be implemented. The user piece is designed with replacement in mind. It would not be hard to replace backend/Service/UserService.cs to integrate with an existing site. Gaming web sites could use this to offer Gamerzilla achievements to their users.

Currently the backend uses a sqlite database. No performance testing has been done. We expect that larger installations may need to modify the system to use a more robust database system.

Testing the system

There is no game launcher easily available at the moment. If you install libgamerzilla-server, you will have the command gamerzillaserver available from the command line. The first time you run it, you enter your url and login information. Subsequent executions will simply read the information from the configuration file. There is currently no way to correct a mistake except deleting the file at .local/share/gamerzillaserver/server.cfg and running gamerzillaserver again.
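In other words, to start over with a fresh configuration (using the path given above, relative to your home directory):

$ rm ~/.local/share/gamerzillaserver/server.cfg
$ gamerzillaserver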

Most games have no built releases with Gamerzilla support. Pinball Disc Room on itch.io does have support built into the Linux version. The web version has no achievements. There are only two achievements in the game: one for surviving for ten seconds and the other for unlocking and using the tunnel. With a little practice you can get an achievement. You need to check your Gamerzilla server, as the game provides no visual notification of the achievement.

Currently no game packaged in Fedora Linux supports Gamerzilla. SuperTuxKart merged support but is still awaiting a new release. Seahorse adventures and Shippy 1984 added achievements, but new releases are not packaged yet. Some games with support we maintain independently, as the developers ignore pull requests or other attempts to contact them.

Future work

Gamerzilla needs more games. A variety of games currently support the system. An addition occurs nearly every month. If you have a game you like, ask the developer to support Gamerzilla. If you are making a game and need help adding support, please let us know.

Server development proceeds at a slow pace and we hope to have a functional registration system soon. After that we may set up a permanent hosting site. Right now you can see our test server. Some people expressed concern with the .Net backend. The API is not very complex and could be rewritten in Python fairly easily.

The largest unknown remains game launchers. GameHub wants a generic achievement interface. We could try to work with them to get that implemented. Adding support to the itch.io app could increase interest in the system. Another possibility is to do away with the game launcher entirely. Perhaps adding something like the gamerzillaserver to GNOME might be possible. You would then configure your url and login information on a settings page. Any game launched could then record achievements.

How to check for update info and changelogs with rpm-ostree db

Wednesday 15th of September 2021 08:00:00 AM

This article will teach you how to check for updates, check the changed packages, and read the changelogs with rpm-ostree db and its subcommands.

The commands will be demoed on a Fedora Silverblue installation and should work on any OS that uses rpm-ostree.

Introduction

Let’s say you are interested in immutable systems. Using a base system that is read-only while you build your use cases on top of container technology sounds very attractive, and it persuades you to select a distro that uses rpm-ostree.

You now find yourself on Fedora Silverblue (or another similar distro) and you want to check for updates. But you hit a problem. While you can find the updated packages on Fedora Silverblue with GNOME Software, you can’t actually read their changelogs. You also can’t use dnf updateinfo to read them on the command line, since there’s no DNF on the host system.

So, what should you do? Well, rpm-ostree has subcommands that can help in this situation.

Checking for updates

The first step is to check for updates. Simply run rpm-ostree upgrade --check:

$ rpm-ostree upgrade --check
...
AvailableUpdate:
Version: 34.20210905.0 (2021-09-05T20:59:47Z)
Commit: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
SecAdvisories: 1 moderate
Diff: 4 upgraded

Notice that while it doesn’t list the updated packages in the output, it does show the Commit for the update as d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4. This will be useful later.

Next thing you need to do is find the Commit for the current deployment you are running. Run rpm-ostree status to get the BaseCommit of the current deployment:

$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/34/x86_64/silverblue
    Version: 34.20210904.0 (2021-09-04T19:16:37Z)
    BaseCommit: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
    GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
    RemovedBasePackages: ...
    LayeredPackages: ...
...

For this example BaseCommit is e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e.

Now you can find the diff of the two commits with rpm-ostree db diff [commit1] [commit2]. In this command commit1 will be the BaseCommit from the current deployment and commit2 will be the Commit from the upgrade checking command.

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34 -> 2.1.2-1.fc34

The diff output shows that soundtouch was updated and indicates the version numbers. View the changelogs by adding --changelogs to the previous command:

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4 --changelogs
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34.x86_64 -> 2.1.2-1.fc34.x86_64
    * dom ago 29 2021 Uwe Klotz <uwe.klotz@gmail.com> - 2.1.2-1
    - Update to new upstream version 2.1.2
      Bump version to 2.1.2 to correct incorrect version info in configure.ac

    * sex jul 23 2021 Fedora Release Engineering <releng@fedoraproject.org> - 2.1.1-7
    - Rebuilt for https://fedoraproject.org/wiki/Fedora_35_Mass_Rebuild

This output shows the commit notes as well as the version numbers.

Conclusion

Using rpm-ostree db you are now able to have the functionality equivalent to dnf check-update and dnf updateinfo.

This will come in handy if you want to inspect detailed info about the updates you install.

Stuart D Gathman: How do you Fedora?

Monday 13th of September 2021 08:00:00 AM

We recently interviewed Fedora user Stuart D. Gathman on how he uses Fedora Linux. This is a part of a series on the Fedora Magazine where we profile users and how they use Fedora Linux to get things done. If you are interested in being interviewed for a further installment in this series you can contact us on the feedback form.

Who are you and what do you do?

For 35 years, Stuart worked as a system programmer for a small company, where his projects included database servers, device drivers, protocol stacks, expert systems, accounting systems, aged AR/AP reports, and EDI. Currently, he is doing hourly consulting work for small businesses.

Stuart’s childhood heroes were his Dad and George Müller. His favorite movies are “The Gods must be crazy” and “The mission”. He grew up in a pacifist denomination, so he feels “The mission” is very relevant to him. He loves oven-roasted vegetables.

Composing and performing music, Mesh networking, and refurbishing discarded computers to run Fedora Linux are some of his spare time interests as well as history, especially ancient Western and 19th century English/American.

“Love/charity, Hope, Faith, Virtue, and knowledge” are the five qualities someone should possess, according to Stuart.

Fedora Community

Stuart’s first Linux was Red Hat Linux 3 in 1996. “At first, I wasn’t sure how free it was going to work out but I took the plunge with Fedora 8 and was very pleased with the quality and stability. It was so nice to have more recent applications already packaged that required a lot of effort on my part to package for Red Hat Linux. One doesn’t need to be a programmer to contribute. There are other skills that go into making a great Linux distro. The most important skill that can be learned is how to submit a useful bug report. I honestly love the way Fedora Linux is organized, as is, and I keep hoping my Pinebook (ARM) onboard wifi and mic will start working with the next kernel release”, says Stuart.

The one person who influenced him to contribute to Fedora Linux was Caleb James DeLisle. Stuart had been building local RHL, RHEL, and Fedora Linux packages for personal and work repositories since 1996, but hadn’t taken the plunge to become an official Fedora packager. In 2015, he began researching how to decentralize network connections, and on Mar 22, 2016, his Fedora Cjdns package was officially approved! After that, he added more packages supporting decentralized communication, like OpenAS2, and more are on the way.

Some of the skills Stuart uses in his work are building software from source, RPM packaging (similar to programming), understanding and following Fedora Linux packaging guidelines, and learning unfamiliar programming languages sufficiently to build their applications from the source.

Stuart’s suggestions to newbies who want to become involved in the Fedora project are: “First, learn to file bug reports then, extend Fedora for your own use, write cheat sheets, create a diagram to help you and others understand an application or utility, add a cool photo to your desktop backgrounds, make your own sound effects, develop “best practices” for using applications and utilities for a particular job or project. Write about it for Fedora Magazine. Outline how you would explain the benefits of Fedora to various audiences”. His biggest concern is that Fedora Linux might fall victim to politics unrelated to the project goals.

What hardware do you use?

Stuart is currently running a second-hand Dell Optiplex 790 desktop, plus a Raspberry Pi 3 and a Pinebook (ARM) small form factor notebook with an all-day battery, both bought new. He also runs a Dell Poweredge T310 VM host running Fedora Linux in virtual machines, a Dell Poweredge SC440, a Dell Inspiron 1440, and a Thinkpad T61 (all of which were being discarded), as well as a refurbished Dell Latitude 3570 laptop. The Inspiron 1440 is his favorite as it is so comfortable to use, but its Core 2 Duo does not support hardware virtualization.

What software do you use?

Stuart is currently running Fedora Linux 33 and Fedora Linux 34. He wrote an article for Fedora Magazine about using LVM for system upgrades, which can allow easy booting between versions for testing new releases.

Stuart relies on a suite of applications in his work:

  • VYM for quickly building outlines
  • Nheko for messaging
  • Glabels for business cards
  • Gkrellm

MAKE MORE with Inkscape – Ink/Stitch

Friday 10th of September 2021 08:00:00 AM

Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Also, Inkscape can do a lot more than just graphics. The first article of this series showed how to produce GCode with Inkscape. This article will examine another Inkscape extension – Ink/Stitch. Ink/Stitch is an extension for designing embroidery with Inkscape.

DIY Embroidery

In the last few years the do-it-yourself or maker scene has experienced a boom. You could say it all began with the inexpensive option of 3D printing, followed by similarly inexpensive CNC machines and laser cutters/engravers. The prices for more traditional machines, such as embroidery machines, have also fallen during recent years. Home embroidery machines are now available for 500 US dollars.

If you don’t want to or can’t buy one yourself, the nearest MakerSpace often has one. Even the prices for commercial single-head embroidery machines are down to 5,000 US dollars. They are an investment that can pay off quickly.

Software for Embroidery Design

Some of the home machines include their own software for designing embroidery. But most, if not all, of these applications are Windows-only. Also, the most used manufacturer-independent software of this area – Embird – is only available for Windows. But you could run it in Wine.

Another solution for Linux – Embroidermodder – is not really developed anymore, and that is after having had a fundraising campaign.

Today, only one solution is left: Ink/Stitch.

The logo of the Ink/Stitch project

Open Source and Embroidery Design

Ink/Stitch started out using libembroidery. Today, pyembroidery is used. The manufacturers can’t be blamed, given the prices of these machines and the number of Linux users: it is hardly worthwhile for them to develop applications for Linux.

The Embroidery File Format Problem


There is a problem with the proliferation of file formats for embroidery machines, especially among manufacturers that cook up their own file formats for their machines. In some cases, even a single manufacturer may use several different file formats.

  • .10o – Toyota embroidery machines
  • .100 – Toyota embroidery machines
  • .CSD – Poem, Huskygram, and Singer EU embroidery home sewing machines.
  • .DSB – Baruda embroidery machines
  • .JEF – MemoryCraft 10000 machines.
  • .SEW – MemoryCraft 5700, 8000, and 9000 machines.
  • .PES – Brother and Babylock embroidery home sewing machines.
  • .PEC – Brother and Babylock embroidery home sewing machines.
  • .HUS – Husqvarna/Viking embroidery home sewing machines.
  • .PCS – Pfaff embroidery home sewing machines.
  • .VIP – old Pfaff format also used by Husqvarna machines.
  • .VP3 – newer Pfaff embroidery home sewing machines.
  • .DST – Tajima commercial embroidery sewing machines.
  • .EXP – Melco commercial embroidery sewing machines.
  • .XXX – Compucon, Singer embroidery home sewing machines.
  • .ZSK – ZSK machines on the American market

This is just a small selection of the file formats that are available for embroidery. You can find a more complete list here. If you are interested in deeper knowledge about these file formats, see here for more information.

File Formats of Ink/Stitch

Ink/Stitch can currently read the following file formats: 100, 10o, BRO, DAT, DSB, DST, DSZ, EMD, EXP, EXY, FXY, GT, INB, JEF, JPX, KSM, MAX, MIT, NEW, PCD, PCM, PCQ, PCS, PEC, PES, PHB, PHC, SEW, SHV, STC, STX, TAP, TBF, U01, VP3, XXX, ZXY and also GCode as TXT file.

For the more important task of writing/saving your work, Ink/Stitch supports far fewer formats: DST, EXP, JEF, PEC, PES, U01, VP3 and, of course, SVG, CSV and GCode as TXT.

Besides the problem of all these file formats, there are other problems that a potential stitch program has to overcome.

Working with the different kinds of stitches is one difficulty. The integration of tools for drawing and lettering is another. But why invent such a thing from scratch? Why not take an existing vector program and just add the functions for embroidery to it? That was the idea behind the Ink/Stitch project over three years ago.

Install Ink/Stitch

Ink/Stitch is an extension for Inkscape. Inkscape’s new functionality for downloading and installing extensions is still experimental, and you will not find Ink/Stitch among the extensions offered there. You must download the extension manually. After it is downloaded, unzip the package into your directory for Inkscape extensions. The default location is ~/.config/inkscape/extensions (or /usr/share/inkscape/extensions for system-wide availability). If you have changed the defaults, you may need to check Inkscape’s settings to find the location of the extensions directory.
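On Fedora Linux, the manual installation could look like this (a sketch; the exact archive name depends on the release you downloaded):

$ mkdir -p ~/.config/inkscape/extensions
$ unzip ~/Downloads/inkstitch-*.zip -d ~/.config/inkscape/extensions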

Customization – Install Add-ons for Ink/Stitch

The Ink/Stitch extension provides a function called Install Add-Ons for Inkscape, which you should run first.

The execution of this function – Extensions > Ink/Stitch > Thread Color Management > Install thread color palettes for Inkscape – will take a while.

Do not become nervous, as there is no progress bar or similar indicator to watch.

This function will install 70 color palettes of various yarn manufacturers and a symbol library for Ink/Stitch.

Inkscape with the swatches dialogue open, which shows the Madeira Rayon color palette

If you use the download from Github version 2.0.0, the ZIP-file contains the color palette files. You only need to unpack them into the right directory (~/.config/inkscape/palettes/). If you need a hoop template, you can download one and save it to ~/.config/inkscape/templates.

The next time you start Inkscape, you will find it under File > New From Template.

Lettering with Ink/Stitch

By far the easiest and most widely used way is to get an embroidery design using the Lettering function of Ink/Stitch. It is located under Extensions > Ink/Stitch > Lettering. Lettering for embroidery is not simple. What you expect are so-called satin-stitched letters. For this, special font settings are needed.

Inkscape with a “Chopin” glyph for satin stitching defined for the Lettering function

You can convert paths to satin stitching. But this is more work intensive than using the Lettering function. Thanks to the work of an active community, the May 2021 release of Ink/Stitch 2.0 brought more predefined fonts for this. An English tutorial on how to create such fonts can be found here.

Version 2.0 also brings functions (Extensions > Ink/Stitch > Font Management) to make managing these kinds of fonts easier. There are also functions for creating these kinds of fonts. But you will need knowledge about font design with Inkscape to do so. First, you create an entire SVG font. It is then fed through a JSON script which converts the SVG font into the type of files that Ink/Stitch’s font management function works with.

On the left side the Lettering dialogue, and on the right the preview of these settings

The function will open a dialogue window where you just have to put in your text, choose the size and font, and then it will render a preview.

Embroider Areas/Path-Objects

The easiest thing to do with Ink/Stitch is to embroider areas or paths. Just draw your path. If you use shapes, you have to convert them first and then run Extensions > Ink/Stitch > Fill Tools > Break Apart Fill Objects…

This breaks apart the path into its different parts. You have to use this function. The Path > Break apart function of Inkscape won’t work for this.

Next, you can run Ink/Stitch’s built-in simulator: Extensions > Ink/Stitch > Visualise and Export > Simulator/Realistic Preview.

The new Fedora logo as Stitch Plan Preview

Be careful with the simulator. It takes a lot of system resources and it will take a while to start. You may find it easier to use the function Extensions > Ink/Stitch > Visualise and Export > Stitch Plan Preview. The latter renders the threading of the embroidery outside of the document.

Nicubunu’s Fedora hat icon as embroidery. The angles for the stitches of the head part and the brim are different so that it looks more realistic. The outline is done in Satin stitching.

Simple Satin and Satin Embroidery

Ink/Stitch will convert each stroke with a continuous line (no dashes) to what it calls Zig-Zag or Simple Satin. Stitches are created along the path using the stroke width you have specified. This will work as long as there aren’t too many curves on the path.

Parameter setting dialogue and on the right the Fedora logo shape embroidered as Zig-Zag line

This is simple, but it is by far not the best way. It is better to use the Satin Tools for this. The functions for Satin embroidery can be found under Extensions > Ink/Stitch > Satin Tools. The most important is the conversion function, which converts paths to satin strokes.

Fedora logo shape as Satin Line embroidery

You can also reverse the stitch direction using Extensions > Ink/Stitch > Satin Tools > Flip Satin Column Rails. This underlines the 3D effect satin embroidery gets, especially when you make puff embroidery. For machines that have this capability, you can also set the markings for the trims of jump stitches. To visualize these trims, Ink/Stitch uses the symbols that were installed from its own symbol library.

The Ink/Stitch Stitch Library

What is called the stitch library is simply the kind of stitches that Ink/Stitch can create. The Fill Stitch and Zig-Zag/Satin Stitch have already been introduced. But there are more.

  • Running Stitches: These are used for doing outline designs. The running stitch produces a series of small stitches following a line or curve. Each dashed line will be converted into a Running Stitch. The size of the dashes does not matter.
A running stitch – each dashed line will be converted into one
  • Bean Stitches: These can also be used for outline designs or add details to a design. The bean stitch describes a repetition of running stitches back and forth. This results in thicker threading.
Bean Stitches – creating a thicker line
  • Manual Stitch: In this mode, Ink/Stitch will use each node of a path as a needle penetration point; exactly as they are placed.
In manual mode – each node will be the needle penetration point
  • E-Stitch: The main use for e-stitch is a simple but strong cover stitch for appliqué items. It is often used for baby clothes because their skin tends to be more sensitive.
E-Stitch – mostly used for appliqué on baby clothes; a soft but strong connection

Embroidery Thread List

Some embroidery machines (especially those designed for commercial use) allow different threads to be fitted in advance according to what will be needed for the design. These machines will automatically switch to the right thread when needed. Some file formats for embroidery support this feature. But some do not. Ink/Stitch can apply custom thread lists to an embroidery design.

If you want to work on an existing design, you can import a thread list: Extensions > Ink/Stitch > Import Threadlist. Thread lists can also be exported, by saving the design in one of the *.zip file formats offered under Save As. You can also print them: Extensions > Ink/Stitch > Visualise and Export > Print PDF.

Conclusion

Writing software for embroidery design is not easy. Many functions are needed and diverse (sometimes closed-source) file formats make the task difficult. Ink/Stitch has managed to create a useful tool with many functions. It enables the user to get started with basic embroidery design. Some things could be done a little better. But it is definitely a good tool as-is and I expect that it will become better over time. Machine embroidery can be an interesting hobby and with Ink/Stitch the Fedora Linux user can begin designing breathtaking things.

Apps for daily needs part 5: video editors

Wednesday 8th of September 2021 08:00:00 AM

Video editing has become a popular activity. People need video editors for various reasons, such as work, education, or just a hobby. There are also now many platforms for sharing video on the internet. Almost all social media and chat messengers provide features for sharing videos. This article will introduce some of the open source video editors that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the video editors category.

Kdenlive

When anyone asks about an open source video editor on Linux, the answer that often comes up is Kdenlive. It is a very popular video editor among open source users. This is because its features are complete for general purposes and easy to use, even for someone who is not a professional.

Kdenlive supports multi-track, so you can combine audio, video, images, and text from multiple sources. This application also supports various video and audio formats without having to convert them first. In addition, Kdenlive provides a wide variety of effects and transitions to support your creativity in producing cool videos. Some of the features that Kdenlive provides are titler for creating 2D titles, audio and video scopes, proxy editing, timeline preview, keyframeable effects, and many more.

More information is available at this link: https://kdenlive.org/en/

Shotcut

Shotcut has more or less the same features as Kdenlive. This application is a general-purpose video editor. It has a fairly simple interface, but with complete features to meet the various needs of your video editing work.

Shotcut has a complete set of features for a video editor, ranging from simple editing to high-level capabilities. It also supports various video, audio, and image formats. You don’t need to worry about your work history, because this application has unlimited undo and redo. Shotcut also provides a variety of video and audio effects features, so you have freedom to be creative in producing your video works. Some of the features offered are audio filters, audio mixing, cross fade audio and video dissolve transition, tone generator, speed change, video compositing, 3 way color wheels, track compositing/blending mode, video filters, etc.

More information is available at this link: https://shotcut.org/

Pitivi

Pitivi will be the right choice if you want a video editor that has an intuitive and clean user interface. You will feel comfortable with how it looks and will have no trouble finding the features you need. This application is classified as very easy to learn, especially if you need an application for simple editing needs. However, Pitivi still offers a variety of features, like trimming & cutting, sound mixing, keyframeable audio effects, audio waveforms, volume keyframe curves, video transitions, etc.

More information is available at this link: https://www.pitivi.org/

Cinelerra

Cinelerra is a video editor that has been in development for a long time. There are tons of features for your video work, such as a built-in frame renderer, various video effects, unlimited layers, 8K support, multi-camera support, video/audio sync, render farm, motion graphics, live preview, etc. This application may not be suitable for those who are just learning. I think it will take you a while to get used to the interface, especially if you are already familiar with other popular video editor applications. But Cinelerra will still be an interesting choice as your video editor.

More information is available at this link: http://cinelerra.org/

Conclusion

This article presented four video editor apps for your daily needs that are available on Fedora Linux. Actually there are many other video editors that you can use in Fedora Linux. You can also use Olive (Fedora Linux repo), OpenShot (rpmfusion-free), Flowblade (rpmfusion-free) and many more. Each video editor has its own advantages. Some are better at correcting color, while others are better at a variety of transitions and effects. Some are better when it comes to how easy it is to add text. Choose the application that suits your needs. Hopefully this article can help you to choose the right video editor. If you have experience in using these applications, please share your experience in the comments.

Contribute at Fedora Linux 35 Audio, i18n, GNOME 41, and Kernel test days

Sunday 5th of September 2021 08:00:00 AM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

There are three upcoming test events in the coming weeks.

  • Tuesday, September 07 to September 13, is the i18n test week.
  • Thursday, September 09 to September 16, is the GNOME 41 test week.
  • Sunday, September 12 to September 19, is the Kernel 5.14 test week.
  • Wednesday, September 15, is the Fedora Linux 35 Audio test day.

Come and test with us to make Fedora Linux 35 even better. Read more below on how to do it.

i18n test week

A lot of our users use Fedora Linux in their preferred languages, and it is important that we test internationalization (i18n) changes. The wiki contains more details about how to participate. The test week runs Sept 07 through Sept 13.

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library will continue to work as before, as will applications shipped as Flatpak. The test day is to verify that everything works as expected. This will occur on Wednesday, Sept 15.

Kernel test week

The kernel team is working on the final integration for kernel 5.14. This version was just recently released and will arrive soon in Fedora Linux. As a result, the Fedora kernel and QA teams have organized a test week for Sunday, Sept 12 through Sunday, Sept 19. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps. The test image goes live 24hrs before the test week starts.

GNOME test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora Linux users. As a part of the planned change, the GNOME 41 megaupdate will land on Fedora and will then be shipped with Fedora Linux 35. To ensure that everything works fine, the Workstation WG and QA team will hold their test week from Thursday, Sept 09 through Sept 16. Refer to the wiki page for links and resources to test GNOME during the test week.

How do test days work?

A test day or week is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud

Friday 3rd of September 2021 08:00:00 AM

If you need a reliable office suite for online editing and collaboration within your sync & share platform, you can try ONLYOFFICE Docs. In this tutorial, we learn how to install it on your Fedora Linux with Podman and discover the ONLYOFFICE-Nextcloud integration.

What is ONLYOFFICE Docs

ONLYOFFICE Docs (Document Server) is an open-source office suite distributed under GNU AGPL v3.0. It is comprised of web-based viewers and collaborative editors for text documents, spreadsheets, and presentations. The suite is highly compatible with OOXML formats (docx, xlsx, pptx).

A brief features overview includes:

  • Full set of editing and styling tools, operations with fonts and styles, paragraph and text formatting.
  • Inserting and customizing all kinds of objects: shapes, charts, text art, text boxes, etc.
  • Academic formatting and navigation: endnotes, footnotes, table of contents, bookmarks.
  • Content Controls for creating digital forms and templates.
  • Extending functionality with plugins, building your own plugins using API.
  • Collaborative features: real-time and paragraph-locking co-editing modes, review and track changes, comments and mentions, integrated chat, version history.
  • Flexible access permissions: edit, view, comment, fill forms, review, restriction on copying, downloading, and printing, custom filter for spreadsheets.

You can integrate ONLYOFFICE Docs with various cloud services such as Nextcloud, ownCloud, Seafile, Alfresco, Plone, etc. What’s more, developers can embed the editors into their own solutions. 

You can also use the suite together with ONLYOFFICE Groups, a free open-source collaboration platform distributed under Apache 2.0. The complete solution is available as ONLYOFFICE Workspace.

What is Podman

Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Users can run containers either as root or in rootless mode. 

It is available by default on Fedora Workstation. If that is not the case, install Podman with the command:

sudo dnf install podman

What you need for ONLYOFFICE Docs installation
  • CPU: single core 2 GHz or better
  • RAM: 2 GB or more
  • HDD: at least 40 GB of free space
  • At least 4 GB of swap
Install and run ONLYOFFICE Docs

Start with the following commands for the root-privileged deployment. This creates directories for mounting from the container to the host system:

$ sudo mkdir -p /app/onlyoffice/DocumentServer/logs \
    /app/onlyoffice/DocumentServer/data \
    /app/onlyoffice/DocumentServer/lib \
    /app/onlyoffice/DocumentServer/db

Now mount these directories via podman. When prompted, select the image from docker.io:

$ sudo podman run -i -t -d -p 80:80 -p 443:443 --restart=always \
    -v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice:Z \
    -v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data:Z \
    -v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice:Z \
    -v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql:Z \
    -u root onlyoffice/documentserver:latest

Please note that rootless deployment is NOT recommended for ONLYOFFICE Docs.

To check that ONLYOFFICE is working correctly, run:

$ sudo podman exec $(sudo podman ps -q) sudo supervisorctl start ds:example

Then, open http://localhost/welcome and click the word “here” in the line “Once started the example will be available here”. Or look for the orange button that says “GO TO TEST EXAMPLE”. This opens the test example where you can create a document.
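If you prefer checking from the terminal, a quick request to the welcome page works too:

$ curl -I http://localhost/welcome

An HTTP/1.1 200 OK in the response means the Document Server is answering.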

Alternatively, to install ONLYOFFICE Docs, you can build an image in podman:

$ git clone https://github.com/ONLYOFFICE/Docker-DocumentServer.git
$ cd Docker-DocumentServer/
$ sudo podman build --tag oods6.2.0:my -f ./Dockerfile

Or build an image from the Docker file in buildah (you need root access):

$ buildah bud --tag oods6.2.0buildah:mybuildah -f ./Dockerfile

Activate HTTPS

To secure the application via SSL, you basically need two things:

  • Private key (.key)
  • SSL certificate (.crt)

So you need to create and install the following files:

/app/onlyoffice/DocumentServer/data/certs/onlyoffice.key
/app/onlyoffice/DocumentServer/data/certs/onlyoffice.crt

You can get certificates in several ways depending on your requirements: buy from certification centers, request from Let’s Encrypt, or create a self-signed certificate through OpenSSL (note that self-signed certificates are not recommended for production use).
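For a quick test setup, a self-signed pair can be generated with OpenSSL in one command. This is a sketch for testing only – replace the CN with your host name:

$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout onlyoffice.key -out onlyoffice.crt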

Secure ONLYOFFICE Docs by switching to the HTTPS protocol:

$ sudo mkdir /app/onlyoffice/DocumentServer/data/certs
$ sudo cp onlyoffice.crt /app/onlyoffice/DocumentServer/data/certs/
$ sudo cp onlyoffice.key /app/onlyoffice/DocumentServer/data/certs/
$ sudo chown -R 100108:100111 /app/onlyoffice/DocumentServer/data/certs/
# find the podman container id
$ sudo podman ps -a
# restart the container to use the new certificate
$ sudo podman restart {container_id}

Now you can integrate ONLYOFFICE Docs with the platform you already use and start working with your documents.

ONLYOFFICE-Nextcloud integration example

To connect ONLYOFFICE Docs and Nextcloud (or any other DMS), you need a connector. This is an integration app that functions like a bridge between two services.  

In case you’re new to Nextcloud, you can install it with Podman following this tutorial.   

If you already have Nextcloud installed, you just need to install and activate the connector. Do this with the following steps:

  1. launch your Nextcloud as an admin,
  2. click your user icon in the upper right corner,
  3. switch to + Apps,
  4. find ONLYOFFICE in the list of available applications in the section “Office & text”,
  5. click the Download and enable button. 

ONLYOFFICE now appears in the Active apps section and you can go ahead with the configuration. 

Select your user icon again in the upper right corner -> Settings -> Administration -> ONLYOFFICE. On the settings page, you can configure:

  • The address of the machine with ONLYOFFICE installed
  • Secret key (JWT that protects docs from unauthorized access)
  • ONLYOFFICE and Nextcloud addresses for internal requests

You can also adjust additional settings which are not mandatory but will make your user experience more comfortable:

  • Restrict access to the editors to user groups
  • Enable/disable the Open file in the same tab option
  • Select file formats that will be opened by default with ONLYOFFICE
  • Customize editor interface
  • Enable watermarking
Conclusion

Installing ONLYOFFICE Docs on Fedora Linux with Podman is quite easy. It will give you a powerful office suite for integration into any Document Management System.

Getting ready for Fedora Linux

Wednesday 1st of September 2021 08:00:00 AM
Introduction

Why does Linux remain vastly invisible to ordinary folks who make general use of computers? This article steps through the process to move to Fedora Linux Workstation for non-Linux users. It also describes features of the GUI (Graphic User Interface) and CLI (Command Line Interface) for the newcomer. This is a quick introduction, not an in-depth course.

Installation and configuration are straightforward

Creating a bootable USB drive is supposedly the most baffling part of getting started with Linux for a beginner. In all fairness, installation with Fedora Media Writer and Anaconda is intuitive.

Step-by-step installation process
  1. Make a Fedora USB stick: 5 to 7 minutes depending on USB speed
  2. Understand disk partitions and Linux file systems
  3. Boot from a USB device
  4. Install with the Fedora installer, Anaconda: 15 to 20 minutes
  5. Software updates: 5 minutes

Following this procedure, it is easy to help family and friends install Fedora Linux.

Package management and configuration

Instead of configuring the OS manually, adding tools and applications you need, you may choose a functional bundle from Fedora Labs for a specific use case. Design Suite, Scientific, Python Classroom, and more, are available. Plus, all processes are complete without the command line.

Connecting devices and services
  • Add a USB printer: Fedora Linux detects most printers in a few seconds. Some may require additional drivers.
  • Configure a USB keyboard: Refer to this simple work-around for a mechanical keyboard.
  • Sync with Google Drive: Add an account either after installation, or at any time afterward.
Desktop customization is easy

The default GNOME desktop is decent and free from distractions.

A shortlist to highlight desktop benefits:

  • Simplicity: Clean design, fluid and elegant application grid.
  • Reduced user effort: No alerts for paid services or long list of user consent.
  • Accommodating software: GNOME requires little specialist knowledge or technical ability.
  • Neat layout of system Settings: Larger icons and a better layout.

The image below shows the applications and desktops currently available. Get here by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. There you will find LibreOffice for your document, spreadsheet, and presentation creation. Also available is Firefox for your web browsing. More applications are added using the Software icon (second from right at the bottom of the screen).

GNOME desktop

Enable touchpad click (tapping)

A change for touchpad settings is required for laptop users.

  1. Go to Activities > Show Applications > Settings > Mouse & Touchpad > Touchpad
  2. Change the default behavior of touchpad settings (double click) to tap-to-click (single tap) using the built-in touchpad
  3. Select ‘Tap to Click’
Add user accounts using the users settings tool

During installation, you set up your first login account. For training or demo purposes, it is common to create a new user account.

  1. Add users: Go to Settings > Users > Unlock > Authentication > Add user
  2. Click at the top of the screen at the far right, navigate to Power Off / Log out, and select Switch User to log in as the new user.
Fedora Linux is beginner-friendly

Yes, Fedora Linux caters to a broader selection of users. Since that is the case, why not dip into the shallow end of the Fedora community?

  • Fedora Docs: Clarity of self-help content is outstanding.
  • Ask Fedora: Get help for anything about Fedora Linux.
  • Magazine: Useful tips and user stories are engaging. You can also suggest topics to write about.
  • Nest with Fedora: Warm welcome virtually from Fedora Linux community.
  • Release parties.
Command line interface is powerful

The command line is a way of giving instructions to a computer (shell) using a terminal. To be fair, the real power behind Fedora Linux is the Bash shell that empowers users to be problem solvers. The good news is that text-based commands are universally compatible across different versions of Linux. The Bash shell comes with Fedora Linux, so there is no need to install it.

The following will give you a feeling for the command line. However, you can accomplish many if not all day-to-day tasks without using the command line.

How to use commands?

Access the command line by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. Select Terminal.

Understand the shell prompt

The standard shell prompt looks like this:

[hank@fedora_test ~]$

The shell prompt waits for a command.

It shows the name of the user (hank), the computer being used (fedora_test), and the current working directory within the filesystem (~, meaning the user’s home directory). The last character of the prompt, $, indicates that this is a normal user’s prompt.
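The prompt follows you around the filesystem. For example, after changing into a subdirectory:

[hank@fedora_test ~]$ cd Documents
[hank@fedora_test Documents]$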

Enter commands

What common tasks should a beginner try out with command lines?

  • Command line information is available from the Fedora Magazine and other sites.
  • Use ls and cd to list and navigate your file system (a short sample session follows this list).
  • Make new directories (folders) with mkdir.
  • Delete files with rm.
  • Use lsblk command to display partition details.
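To tie these together, here is a short sample session (the directory name is just an example):

[hank@fedora_test ~]$ mkdir projects           # create a new directory
[hank@fedora_test ~]$ cd projects              # move into it
[hank@fedora_test projects]$ ls                # list its (empty) contents
[hank@fedora_test projects]$ cd ..             # go back up
[hank@fedora_test ~]$ rm -r projects           # remove the directory again (-r removes directories)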
How to deal with the error messages
  • Be attentive to error messages in the terminal. Common errors are missing arguments or a typo in a file name.
  • Pause to think about why that happened.
  • Figure out the correct syntax using the man command. For example:
    man ls
    displays the manual page for the ls command.
Perform administration tasks using sudo

When a user executes commands for installation, removal, or change of software, the sudo command allows users to gain administrative or root access. The actions that require the sudo command are often called administrative tasks. Sudo stands for “superuser do”. The syntax for the sudo command is as follows:

sudo [COMMAND]
  1. Replace COMMAND with the command to run as the root user.
  2. Enter password

What are the most used sudo commands to start with?

  • List privileges
sudo -l
  • Install a package
sudo dnf install [package name]
  • Update a package
sudo dnf update [package name]
  • List package groups
sudo dnf grouplist
  • Manage disk partitions
sudo fdisk -l

Built-in text editor is light and efficient

Nano is the default command-line text editor for Fedora Linux. vi is another one often used on Fedora Linux. Both are light and fast. Which one to use is really a personal choice. Nano and vi remain essential tools for editing config files and writing scripts. Generally, Nano is much simpler to work with than vi, but vi can be more powerful once you get used to it.

How does a beginner benefit from a text editor?
  • Learn fundamentals of computing

Linux offers a vast range of customization options and monitoring. Shell scripts make it possible to add new functionality and the editor is used to create the scripts.

  • Build cool things for home automation

Raspberry Pi is a testing ground to build awesome projects for homes. Fedora Linux can be installed on a Raspberry Pi. Schools use the tiny microcomputer for IT training and experiments. Instead of a visual editor, it is easier to use the light and simple Nano editor to write files.

  • Test proof of concept with the public cloud services

Most of the public cloud suppliers offer a free sandbox account to spin up a virtual machine or configure the network. Cloud servers run a Linux OS, so editing configuration files requires a text editor. Without installing additional software, it is easy to invoke Nano on a remote server.

How to use Nano text editor

Type nano and the file name after the shell prompt $ and press Enter.

[hank@fedora_test ~]$ nano [filename]

Note that many of the most used commands are displayed at the bottom of the nano screen. The symbol ^ in Nano means to press the Ctrl key. A small worked example follows the list below.

  • Use the arrow keys on the keyboard to move up and down, left and right.
  • Edit file.
  • Get built-in help by pressing ^G
  • Exit by entering ^X and Y to save your file and return to the shell prompt.
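To put this into practice, create a script (the file name is just an example) with nano hello.sh and type these two lines:

#!/bin/bash
echo "Hello from Fedora Linux"

Exit with ^X, confirm saving with Y, then run the script:

[hank@fedora_test ~]$ bash hello.sh
Hello from Fedora Linux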
Examples of file extensions used for configuration or shell scripts
  • .cfg: User-configurable files in the /etc directory.
  • .yaml: A popular type of configuration file with cross-language data portability.
  • .json: JSON is a lightweight & open standard format for storing and transporting data.
  • .sh: A shell script used universally for Unix/Linux systems.

Of course, this is not a comprehensive guide on Nano or vi. Yet, adventurous learners should be aware of text editors for their next step in becoming accomplished in Fedora Linux.

Conclusion

Does Fedora Workstation simplify the user experience of a beginner with Linux? Yes, absolutely. It is entirely possible to create a desktop quickly and get the job done without installing additional software or extensions.

Taking it to the next level, how do you get more people into Fedora Linux?

  • Make a Fedora Linux device available at home. A repurposed computer with the above guide is a starting point.
  • Demonstrate cool things with Fedora Linux.
  • Share power user tips with shell scripts.
  • Get involved with Open Source Software community such as the Fedora project.

How to install only security and bugfixes updates with DNF

Monday 30th of August 2021 08:00:00 AM

This article will explore how to filter the updates available to your Fedora Linux system by type. This way you can choose to, for example, only install security or bug fix updates. This article will demo running the dnf commands inside a toolbox instead of using a real Fedora Linux install.

You might also want to read Use dnf updateinfo to read update changelogs before reading this article.

Introduction

If you have been managing system updates for Fedora Linux or any other GNU/Linux distro, you might have noticed how, when you run a system update (with dnf update, in the case of Fedora Workstation), you usually are not installing only security updates.

Due to how package management in a GNU/Linux distro works, generally (with the exception of software running in a container, under Flatpak, or similar technologies) you are updating every single package regardless of whether it’s a “system” software or an “app”.

DNF divides updates into three types: “security”, “bugfix” and “enhancement”. And, as you will see, DNF allows filtering which types you want to operate on.
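Before filtering anything, you can get a quick overview of how many notices of each type are pending by running dnf updateinfo without further arguments. The output looks roughly like this (the counts here are just illustrative):

$ dnf updateinfo
Updates Information Summary: available
    4 Security notice(s)
    8 Bugfix notice(s)
    3 Enhancement notice(s)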

But, why would you want to update only a subset of packages?

Well, this might depend on how you personally choose to deal with system updates. If you are not comfortable at the moment with updating everything, then restricting the current update to only security updates might be a good choice. You could install bug fix updates as well and leave enhancements and other types of updates for a future opportunity.

How to filter security and bug fix updates

Start by creating a Fedora Linux 34 toolbox:

toolbox create --distro fedora --release f34 updatefilter-demo

Then enter that toolbox:

toolbox enter updatefilter-demo

From now on, the commands run inside the toolbox, but they work exactly the same on a real Fedora Linux install.

First, run dnf check-update to see the unfiltered list of packages:

$ dnf check-update
audit-libs.x86_64              3.0.5-1.fc34           updates
avahi.x86_64                   0.8-14.fc34            updates
avahi-libs.x86_64              0.8-14.fc34            updates
...
vim-minimal.x86_64             2:8.2.3318-1.fc34      updates
xkeyboard-config.noarch        2.33-1.fc34            updates
yum.noarch                     4.8.0-1.fc34           updates

DNF supports passing the types of updates to operate on as parameters: --security for security updates, --bugfix for bug fix updates and --enhancement for enhancement updates. These work on commands such as dnf check-update, dnf update and dnf updateinfo.

For example, this is how you filter the list of available updates by security updates only:

$ dnf check-update --security
avahi.x86_64                   0.8-14.fc34            updates
avahi-libs.x86_64              0.8-14.fc34            updates
curl.x86_64                    7.76.1-7.fc34          updates
...
libgcrypt.x86_64               1.9.3-3.fc34           updates
nettle.x86_64                  3.7.3-1.fc34           updates
perl-Encode.x86_64             4:3.12-460.fc34        updates

And now same thing but by bug fix updates only:

$ dnf check-update --bugfix
audit-libs.x86_64              3.0.5-1.fc34           updates
ca-certificates.noarch         2021.2.50-1.0.fc34     updates
coreutils.x86_64               8.32-30.fc34           updates
...
systemd-pam.x86_64             248.7-1.fc34           updates
systemd-rpm-macros.noarch      248.7-1.fc34           updates
yum.noarch                     4.8.0-1.fc34           updates

They can even be combined, so you can use two or more of them at the same time. For example, you can filter the list to show both security and bug fix updates:

$ dnf check-update --security --bugfix
audit-libs.x86_64              3.0.5-1.fc34           updates
avahi.x86_64                   0.8-14.fc34            updates
avahi-libs.x86_64              0.8-14.fc34            updates
...
systemd-pam.x86_64             248.7-1.fc34           updates
systemd-rpm-macros.noarch      248.7-1.fc34           updates
yum.noarch                     4.8.0-1.fc34           updates

As mentioned, dnf updateinfo also works with this filtering, so you can filter dnf updateinfo, dnf updateinfo list and dnf updateinfo info. For example, for the list of security updates and their IDs:

$ dnf updateinfo list --security
FEDORA-2021-74ebf2f06f Moderate/Sec.  avahi-0.8-14.fc34.x86_64
FEDORA-2021-74ebf2f06f Moderate/Sec.  avahi-libs-0.8-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec.  curl-7.76.1-7.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-common-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-minimal-langpack-2.33-20.fc34.x86_64
FEDORA-2021-8b25e4642f Low/Sec.       krb5-libs-1.19.1-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec.  libcurl-7.76.1-7.fc34.x86_64
FEDORA-2021-31fdc84207 Moderate/Sec.  libgcrypt-1.9.3-3.fc34.x86_64
FEDORA-2021-d1fc0b9d32 Moderate/Sec.  nettle-3.7.3-1.fc34.x86_64
FEDORA-2021-92e07de1dd Important/Sec. perl-Encode-4:3.12-460.fc34.x86_64

If desired, you can install only security updates:

# dnf update --security
================================================================================
 Package                  Arch     Version           Repository          Size
================================================================================
Upgrading:
 avahi                    x86_64   0.8-14.fc34       updates            289 k
 avahi-libs               x86_64   0.8-14.fc34       updates             68 k
 curl                     x86_64   7.76.1-7.fc34     updates            297 k
...
 perl-Encode              x86_64   4:3.12-460.fc34   updates            1.7 M
Installing weak dependencies:
 glibc-langpack-en        x86_64   2.33-20.fc34      updates            563 k

Transaction Summary
================================================================================
Install   1 Package
Upgrade  11 Packages

Total download size: 9.7 M
Is this ok [y/N]:

Or even to install both security and bug fix updates while ignoring enhancement updates:

# dnf update --security --bugfix
================================================================================
 Package                     Arch     Version           Repo           Size
================================================================================
Upgrading:
 audit-libs                  x86_64   3.0.5-1.fc34      updates       116 k
 avahi                       x86_64   0.8-14.fc34       updates       289 k
 avahi-libs                  x86_64   0.8-14.fc34       updates        68 k
...
 rpm-plugin-systemd-inhibit  x86_64   4.16.1.3-1.fc34   fedora         23 k
 shared-mime-info            x86_64   2.1-2.fc34        fedora        374 k
 sqlite                      x86_64   3.34.1-2.fc34     fedora        755 k

Transaction Summary
================================================================================
Install  11 Packages
Upgrade  45 Packages

Total download size: 32 M
Is this ok [y/N]:

Install only specific updates

You may also choose to only install the updates with a specific ID, such as FEDORA-2021-74ebf2f06f for avahi, by using --advisory and specifying the ID:

# dnf update --advisory=FEDORA-2021-74ebf2f06f
================================================================================
 Package        Architecture   Version        Repository      Size
================================================================================
Upgrading:
 avahi          x86_64         0.8-14.fc34    updates        289 k
 avahi-libs     x86_64         0.8-14.fc34    updates         68 k

Transaction Summary
================================================================================
Upgrade  2 Packages

Total download size: 356 k
Is this ok [y/N]:

Or even multiple updates, with --advisories:

# dnf update --advisories=FEDORA-2021-74ebf2f06f,FEDORA-2021-83fdddca0f
================================================================================
 Package        Architecture   Version          Repository      Size
================================================================================
Upgrading:
 avahi          x86_64         0.8-14.fc34      updates        289 k
 avahi-libs     x86_64         0.8-14.fc34      updates         68 k
 curl           x86_64         7.76.1-7.fc34    updates        297 k
 libcurl        x86_64         7.76.1-7.fc34    updates        284 k

Transaction Summary
================================================================================
Upgrade  4 Packages

Total download size: 937 k
Is this ok [y/N]:

Conclusion

In the end it all comes down to how you personally prefer to manage your updates. But if you need, for whichever reason, to only install security updates, then these filters will surely come in handy!

Automatically Light Up a Sign When Your Webcam is in Use

Friday 27th of August 2021 08:00:00 AM

At the beginning of the COVID lockdown, with multiple people working from home, it was obvious there was a need to let others know when I was in a meeting or on a live webcam. So naturally it took me one year to finally do something about it. Now I’m here to share what I learned along the way. You too can have your very own “do not disturb” sign automatically light up outside your door to tell people not to walk in half-dressed on laundry day.

At first I was surprised Zoom doesn’t have this kind of feature built in. But then again I might use Teams, Meet, Hangouts, WebEx, Bluejeans, or any number of future video collaboration apps. Wouldn’t it make sense to just use a system-wide watch for active webcams or microphones? Like most problems in life, this one can be helped with the Linux kernel. A simple check of the uvcvideo module will show if a video device is in use. Without using events all that is left is to poll it for changes. I chose to build a taskbar icon for this. I would normally do this with my trusty C++. But I decided to step out of my usual comfort zone and use Python in case someone wanted to port it to other platforms. I also wanted to renew my lesser Python-fu and face my inner white space demons. I came up with the following ~90 lines of practical and simple but insecure Python:

https://github.com/jboero/livewebcam/blob/main/livewebcam

Aside from the icon bits, a daemon thread performs the following basic check every second, calling the scripts when the state changes:

def run(self):
    while True:
        val = subprocess.check_output(['lsmod | grep \'^uvcvideo\' | awk \'{print $3}\''],
                                      shell=True, text=True).strip()
        if val != self.status:
            self.status = val
            if val == '0':
                val = subprocess.check_output(['~/bin/webcam_deactivated.sh'])
            else:
                val = subprocess.check_output(['~/bin/webcam_activated.sh'])
        time.sleep(1)

Rather than implement the parsing of modules, just using a hard-coded shell command got the job done. Now whatever scripts you choose to put in ~/bin/ will be used when at least one webcam activates or deactivates. I recently had a futile go at the kernel maintainers regarding a bug in usb_core triggered by uvcvideo. I would just as soon not go a step further and attempt an events patch to uvcvideo. Also, this leaves room for Mac or Windows users to port their own simple checks.
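You can run the underlying check yourself in a terminal; the third column of the lsmod output is the module’s use count, and 0 means no camera is active (size and count will vary on your system):

$ lsmod | grep '^uvcvideo'
uvcvideo              114688  1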

Now that I had a happy icon sitting in my KDE system tray, I could implement scripts for on and off. This is where things got complicated. At first I was going to stick a magnetic Bluetooth LED badge on my door to flash “LIVE” whenever I was in a call. These badges are ubiquitous on the internet and cost about $10 for what is basically an embedded ARM Cortex-M0 with an LED screen, Bluetooth, and a battery. They are essentially a full Raspberry Pi Pico kit, but soldered onto the board.

These Bluetooth LED badges with 48Mhz ARM Cortex-M0 chips have a lot of potential, but they need custom firmware to be of any use.

Unfortunately these badges use a fixed firmware that is either listening to Bluetooth transmissions or showing your message – it doesn’t do both, which is silly. Many people have posted feedback that they should be so much more. Sure enough, someone has already tinkered with custom firmware. Unfortunately the firmware was for older USB variants, and I’m not about to de-solder or buy an ISP programmer to flash EEPROM just for this. That would be a super interesting project for later and would be a great RPi alternative, but all I want right now is a remote controlled light outside my door. I looked at everything, from WiFi smart bulbs to replace my recessed lighting bulbs to BTLE candles, which are an interesting option. Along the way I learned a lot about Bluetooth Low Energy, including how a kernel update can waste four hours of a weekend with Bluetooth stack crashes. BTLE is really interesting and makes a lot more sense after reading up on it. Sure enough, there is Python that can set the display message on your LED badge across the room, but once it is set, Bluetooth will stop listening for you to change it or shut it off. Darn. I guess I should just make do with USB, which actually has a standard command to control power to ports. Let’s see if something exists for this already.

A programmable Bluetooth LED sign costs £10 or for £30 you can have a single LED up to 59 inches away.

It looked like there were options out there, even if they’re not ideal. Then suddenly I found it: a neon “ON AIR” sign for £15, and it’s as dumb as they come – just using 5 V from USB power. Perfect.

Bingo – now all I needed to do was control the power to it.

The command to control USB power is uhubctl, which is in the Fedora repos. Unfortunately most USB hubs don’t support this command. In fact very few have supported it going back 20 years, which seems silly. Hubs will happily report that power has been disconnected even though no such disconnection has been made. I assume it’s just a few cents extra to build in this feature, but I’m not a USB hub manufacturer. Therefore I needed to source a pre-owned one. In the end I found a BYTECC BT-UH340 from the US. This was all I needed to finalize it. After adding udev rules to allow the wheel group to control USB power, I can now perform a simple uhubctl -a off -l 1-1 -p 1 to turn anything off.
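With that working, the two hook scripts polled by the tray icon can stay tiny. A minimal sketch, assuming the hub location 1-1 and port 1 from above (check the output of sudo uhubctl for your values, and remember to make both files executable):

#!/bin/bash
# ~/bin/webcam_activated.sh – power up the "ON AIR" sign
uhubctl -a on -l 1-1 -p 1

#!/bin/bash
# ~/bin/webcam_deactivated.sh – cut power to the sign again
uhubctl -a off -l 1-1 -p 1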

The BYTECC BT-UH340 is one of few hubs I could actually find to support uhubctl power.

Now with a spare USB extension cable lead to my door I finally have a complete solution. There is an “ON AIR” sign on the outside of my door that lights up automatically whenever any of my webcams are in use. I would love to see a Mac port or improvements in pull requests. I’m sure it can all be better. Even further I would love to hone my IoT skills and sort out flashing those Bluetooth badges. If anybody wants to replicate this please be my guest, and suggestions are always welcome.

Auto-updating podman containers with systemd

Wednesday 25th of August 2021 08:00:00 AM

Auto-Updating containers can be very useful in some cases. Podman provides mechanisms to take care of container updates automatically. This article demonstrates how to use Podman Auto-Updates for your setups.

Podman

Podman is a daemonless Docker replacement that can handle rootful and rootless containers. It is fully aware of SELinux and Firewalld. Furthermore, it comes pre-installed with Fedora Linux so you can start using it right away.

If Podman is not installed on your machine, use one of the following commands to install it. Select the appropriate command for your environment.

# Fedora Workstation / Server / Spins
$ sudo dnf install -y podman

# Fedora Silverblue, IoT, CoreOS
$ rpm-ostree install podman

Podman is also available for many other Linux distributions like CentOS, Debian or Ubuntu. Please have a look at the Podman Install Instructions.

Auto-Updating Containers

Updating the Operating System on a regular basis is somewhat mandatory to get the newest features, bug fixes, and security updates. But what about containers? These are not part of the Operating System.

Why Auto-Updating?

If you want to update your Operating System, it can be as easy as:

$ sudo dnf update

This will not take care of the deployed containers. But why should you take care of these? If you check the content of containers, you will find the application (for example MariaDB in the docker.io/library/mariadb container) and some dependencies, including basic utilities.

Running updates for containers can be tedious and time-consuming, since you have to:

  1. pull the new image
  2. stop and remove the running container
  3. start the container with the new image

This procedure must be done for every container. Updating 10 containers can easily end up taking 30-40 commands that must be run.
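Done by hand, one such update cycle looks roughly like this (the container and image names are just examples):

$ sudo podman pull docker.io/library/httpd:2.4      # 1. pull the new image
$ sudo podman container stop web                    # 2. stop and remove the running container
$ sudo podman container rm web
$ sudo podman container run -d -t -p 80:80 --name web \
    docker.io/library/httpd:2.4                     # 3. start the container with the new image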

Automating these steps will save time and ensure that everything is up-to-date.

Podman and systemd

Podman has built-in support for systemd. This means you can start/stop/restart containers via systemd without the need for a separate daemon. The Podman Auto-Update feature requires you to have containers running via systemd. This is the only way to automatically ensure that all desired containers are running properly. Some articles, like these for Bitwarden and Matrix Server, have already looked at this feature. For this article, I will use an even simpler Apache httpd container.

First, start the container with the desired settings.

# Run httpd container with some custom settings
$ sudo podman container run -d -t -p 80:80 --name web -v web-volume:/usr/local/apache2/htdocs/:Z docker.io/library/httpd:2.4

# Just a quick check of the container
$ sudo podman container ls
CONTAINER ID  IMAGE                        COMMAND           CREATED        STATUS            PORTS               NAMES
58e5b07febdf  docker.io/library/httpd:2.4  httpd-foreground  4 seconds ago  Up 5 seconds ago  0.0.0.0:80->80/tcp  web

# Also check the named volume
$ sudo podman volume ls
DRIVER  VOLUME NAME
local   web-volume

Now, set up systemd to handle the deployment. Podman will generate the necessary file.

# Generate systemd service file
$ sudo podman generate systemd --new --name --files web
/home/USER/container-web.service

This will generate the file container-web.service in your current directory. Review and edit the file to your liking. Here is the file contents with added newlines and formatting to improve readability.

# container-web.service
[Unit]
Description=Podman container-web.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/container-web.pid %t/container-web.ctr-id
ExecStart=/usr/bin/podman container run \
    --conmon-pidfile %t/container-web.pid \
    --cidfile %t/container-web.ctr-id \
    --cgroups=no-conmon \
    --replace \
    -d \
    -t \
    -p 80:80 \
    --name web \
    -v web-volume:/usr/local/apache2/htdocs/ \
    docker.io/library/httpd:2.4
ExecStop=/usr/bin/podman container stop \
    --ignore \
    --cidfile %t/container-web.ctr-id \
    -t 10
ExecStopPost=/usr/bin/podman container rm \
    --ignore \
    -f \
    --cidfile %t/container-web.ctr-id
PIDFile=%t/container-web.pid
Type=forking

[Install]
WantedBy=multi-user.target default.target

Now, remove the current container, copy the file to the proper systemd directory, and start/enable the service.

# Remove the temporary container
$ sudo podman container rm -f web

# Copy the service file
$ sudo cp container-web.service /etc/systemd/system/container-web.service

# Reload systemd
$ sudo systemctl daemon-reload

# Enable and start the service
$ sudo systemctl enable --now container-web

# Another quick check
$ sudo podman container ls
$ sudo systemctl status container-web

Please be aware that the container can now only be managed via systemd. Starting and stopping the container with the podman command may interfere with systemd.

Now that the general setup is out of the way, have a look at auto-updating this container.

Manual Auto-Updates

The first thing to look at is manual auto-updates. Sounds weird? This feature allows you to avoid the three steps per container while keeping full control over the update time and date. This is very useful if you only want to update containers in a maintenance window or on the weekend.

Edit the /etc/systemd/system/container-web.service file and add the label shown below to it.

--label "io.containers.autoupdate=registry"

The changed file will have a section appearing like this:

...snip...
ExecStart=/usr/bin/podman container run \
    --conmon-pidfile %t/container-web.pid \
    --cidfile %t/container-web.ctr-id \
    --cgroups=no-conmon \
    --replace \
    -d \
    -t \
    -p 80:80 \
    --name web \
    -v web-volume:/usr/local/apache2/htdocs/ \
    --label "io.containers.autoupdate=registry" \
    docker.io/library/httpd:2.4
...snip...

Now reload systemd and restart the container service to apply the changes.

# Reload systemd
$ sudo systemctl daemon-reload

# Restart container-web service
$ sudo systemctl restart container-web

After this setup you can run a simple command to update a running instance to the latest available image for the used tag. In this example case, if a new 2.4 image is available in the registry, Podman will download the image and restart the container automatically with a single command.

# Update containers
$ sudo podman auto-update

Scheduled Auto-Updates

Podman also provides a systemd timer unit that enables container updates on a schedule. This can be very useful if you don’t want to handle the updates on your own. If you are running a small home server, this might be the right thing for you, so you are getting the latest updates every week or so.

Enable the systemd timer for podman as follows:

# Enable podman auto update timer unit
$ sudo systemctl enable --now podman-auto-update.timer
Created symlink /etc/systemd/system/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/system/podman-auto-update.timer.

Optionally, you can edit the schedule of the timer. By default, the update will run every Monday morning, which is ok for me. Edit the timer module using this command:

$ sudo systemctl edit podman-auto-update.timer

This will bring up your default editor. Changing the schedule is beyond the scope of this article but the link to systemd.timer below will help. The Demo section of Systemd Timers for Scheduling Tasks contains details as well.
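Still, as a sketch, a drop-in that moves the run to Saturday night could look like this (the first, empty OnCalendar= clears the default schedule):

[Timer]
OnCalendar=
OnCalendar=Sat *-*-* 23:00:00

Afterwards, systemctl list-timers podman-auto-update.timer shows the next scheduled run.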

That’s it. Nothing more to do. Podman will now take care of image updates and also prune old images on a schedule.

Hints & Tips

Auto-Updating seems like the perfect solution for container updates, but you should consider some things, before doing so.

  • avoid using the “latest” tag, since it can include major updates
  • consider using tags like “2” or “2.4”, if the image provider has them
  • test auto-updates beforehand (does the container support updates without additional steps?)
  • consider having backups of your Podman volumes, in case something goes sideways
  • auto-updates might not be very useful for highly productive setups, where you need full control over the image version in use
  • updating a container also restarts the container and prunes the old image
  • occasionally check if the updates are being applied

If you take care of the above hints, you should be good to go.

Docs & Links

If you want to learn more about this topic, please check out the links below. There is a lot of useful information in the official documentation and some blogs.

Conclusion

As you can see, without the use of additional tools, you can easily run auto-updates on Podman containers manually or on a schedule. Scheduling allows unattended updates overnight, and you will get all the latest security updates, features, and bug fixes. Some setups I have tested successfully are: MariaDB, Ghost Blog, WordPress, Gitea, Redis, and PostgreSQL.

Apps for daily needs part 4: audio editors

Monday 23rd of August 2021 08:00:00 AM

Audio editor applications or digital audio workstations (DAWs) were in the past used only by professionals, such as record producers, sound engineers, and musicians. But nowadays many people who are not professionals also need them. These tools are used for narration on presentations, video blogs, and even just as a hobby. This is especially true now since there are so many online platforms that facilitate everyone sharing audio works, such as music, songs, podcasts, etc. This article will introduce some of the open source audio editors or DAWs that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the audio editors or DAW category.

Audacity

I’m sure many already know Audacity. It is a popular multi-track audio editor and recorder that can be used for post-processing all types of audio. Most people use Audacity to record their voices, then do editing to make the results better. The results can be used as a podcast or a narration for a video blog. In addition, people also use Audacity to create music and songs. You can record live audio through a microphone or mixer. It also supports 32-bit sound quality.

Audacity has a lot of features that can support your audio works. It has support for plugins, and you can even write your own plugin. Audacity provides many built-in effects, such as noise reduction, amplification, compression, reverb, echo, limiter, and many more. You can try these effects while listening to the audio directly with the real-time preview feature. The built-in plugin manager lets you manage frequently used plugins and effects.

More information is available at this link: https://www.audacityteam.org/

LMMS

LMMS, or Linux MultiMedia Studio, is a comprehensive music creation application. You can use LMMS to produce your music from scratch with your computer. You can create melodies and beats according to your creativity, and make them better with a selection of sound instruments and various effects. There are several built-in features related to musical instruments and effects, such as 16 built-in synthesizers, an embedded ZynAddSubFX, drop-in VST effect plug-in support, a bundled graphic and parametric equalizer, a built-in analyzer, and many more. LMMS also supports MIDI keyboards and other audio peripherals.

More information is available at this link: https://lmms.io/

Ardour

Ardour has capabilities similar to LMMS as a comprehensive music creation application. It says on its website that Ardour is a DAW application that is the result of collaboration between musicians, programmers, and professional recording engineers from around the world. Ardour has various functions that are needed by audio engineers, musicians, soundtrack editors, and composers.

Ardour provides complete features for recording, editing, mixing, and exporting. It has unlimited multichannel tracks, non-linear editor with unlimited undo/redo, a full featured mixer, built-in plugins, and much more. Ardour also comes with video playback tools, so it is also very helpful in the process of creating and editing soundtracks for video projects.

More information is available at this link: https://ardour.org/

TuxGuitar

TuxGuitar is a tablature and score editor. It comes with a tablature editor, score viewer, multitrack display, time signature management, and tempo management. It includes various effects, such as bend, slide, vibrato, etc. While TuxGuitar focuses on the guitar, it allows you to write scores for other instruments. It can also serve as a basic MIDI editor. You need to have an understanding of tablature and music scoring to be able to use it.

More information is available at this link: http://www.tuxguitar.com.ar/

Conclusion

This article presented four audio editors as apps for your daily needs and use on Fedora Linux. Actually there are many other audio editors, or DAW, that you can use on Fedora Linux. You can also use Mixxx, Rosegarden, Kwave, Qtractor, MuseScore, musE, and many more. Hopefully this article can help you investigate and choose the right audio editor or DAW. If you have experience using these applications, please share your experiences in the comments.

MAKE MORE with Inkscape – G-Code Tools

Friday 20th of August 2021 08:00:00 AM

Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Inkscape can also do a lot more than just graphics. This series will show you some things you can do besides graphics with Inkscape. This first article of the series will show how Inkscape’s G-Code Tools extension can be used to produce G-Code. G-Code, in turn, is useful for programming machines such as plotters and laser engravers.

What is G-Code and what is it used for

The construction of machines for the hobby sector is booming. The publication of the source code for RepRap 3D printers for self-construction and the availability of electronic components, such as the Arduino or Raspberry Pi, are probably some of the causes of this boom. Mechanical engineering as a hobby is finding more and more adopters. This trend hasn’t stopped with 3D printers. There are also CNC milling machines, plotters, laser engravers, cutters, and even machines that you can build yourself.

You don’t have to design or build these machines yourself. You can purchase such machines relatively cheaply as a kit or already assembled. All these machines have one thing in common – they are computer-controlled. Computer Aided Manufacturing (CAM), which has been widespread in the manufacturing industry, is now also taking place at home.

G-Code or G programming language

The most widespread language for programming CAM machines is G-Code. G-Code is also known as the G programming language. This language was developed at MIT in the 1950s. Since then, various organizations have developed versions of this programming language. Keep this in mind when you work with it. Different countries have different standards for this language. The name comes from the fact that many instructions in this code begin with the letter G. This letter is used to transmit travel or path commands to the machine.

The commands go, in the truest sense of the word, from A (absolute or incremental position around the X-axis; rotation around X) to Z (absolute or incremental position along the Z-axis). Commands prefixed with M (miscellaneous) transmit other instructions to the machine. Switching coolant on/off is an example of an M command. If you want a more complete list of G-Code commands there is a table on Wikipedia.

%
G00 X0 Y0 F70 (rapid move to the origin at feed rate 70)
G01 Z-1 F50 (plunge the tool 1 unit into the material)
G01 X0 Y20 F50 (straight cut along the Y-axis to X0 Y20)
G02 X20 Y0 J-20 (clockwise arc to X20 Y0; center offset J-20 puts the arc center at the origin)
G01 X0 Y0 (straight cut back to the origin)
G00 Z0 F70 (retract the tool to Z0)
M30 (end of program)
%

This small example would mill a quarter-circle sector: two straight edges joined by an arc. You could write this G-Code in any editor of your choice. But when it comes to more complex things, you typically won’t do this sort of low-level coding by hand. For 3D printing, the slicer writes the G-Code for you. But what about when you want to use a plotter or a laser engraver?

Other Software for writing G-Code

So you will need a program to do this job for you. Sure, some CAD programs can write G-Code, but not all open source CAD programs can do this. There are also several other open source solutions dedicated to this job.

As you can see, there is no problem finding a tool for doing this. What I dislike is the use of raster graphics. I use a CNC machine because it works more precisely than I could by hand. Tracing a raster graphic to make a path for G-Code throws that precision away. Vector graphics, which consist of paths anyway, are much more precise.

Inkscape and G-Code Tools installation

When it comes to vector graphics, there is no way around Inkscape; at least not if you use Linux. There are a few other programs. But they do not have anywhere near the capability that Inkscape has. Or they are designed for other purposes. So the question is, “Can Inkscape be used for creating G-Code?” And the answer is, “Yes!” Since version 0.91, Inkscape has been packaged with an extension called GCode Tools. This extension does exactly what we want – it converts paths to G-Code.

So all you have to do, if you have not already done it, is install Inkscape:

$ sudo dnf install inkscape

One thing to note from the start (where there is light, there is also shadow): the GCode Tools extension has a lot of functionality that is not well documented. The developer thinks a forum is a good place for documentation. Also, basic knowledge about G-Code and CAM is necessary to understand the functions.

Another point to be aware of is that the development isn’t as vibrant as it was at the time the GCode Tools were packaged with Inkscape.

Getting started with Inkscape’s G-Code Tools extension

The first step is the same as when you make anything else in Inkscape: adjust your document properties. Open the document settings with Shift + Ctrl + D or by clicking on the icon on the command bar, and set the document properties to the size of your work piece.

Next, set the orientation points by going to Extensions > Gcodetools > Orientation points. You can use the default settings. The default settings will probably give you something similar to what is shown below.

Inkscape with document setup and the orientation points
The Tool library

The next step is to edit the tool library (Extensions > Gcodetools > Tools library). This will open the dialog window for the tool settings. There you choose the tool you will use. The default tool is fine. After you have chosen the tool and hit Apply, a rectangle will appear on the canvas with the settings for the tool. These settings can be edited with the text tool (T). But this is a bit tricky.

Inkscape with the default tool library settings added into the document

The G-Code Tools extension will use these settings later. These tool settings are grouped together with an identifiable name. If you de-group these settings, this name will be lost.

There are two ways to avoid losing the identifier if you ungroup the tool settings. You can de-group with four clicks with the selection tool active. Or you can de-group using Shift + Ctrl + G and then give the group a name again later using the XML editor.

In the first case, make sure the group is restored before you draw anything new. Otherwise the newly drawn object will be added to that group.

Now you can draw the paths you want to convert to G-Code later. Objects like rectangles, circles, stars, and polygons, as well as text, must be converted to paths (Path > Object to Path or Shift + Ctrl + C).

Keep in mind that this function often does not produce clean paths. You will have to check the result and clean it up afterwards. You can find an older article here that describes the process.
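If you prefer to script the text conversion, Inkscape’s command line can do it too. This is a sketch assuming Inkscape 1.x; the file names are placeholders and the option names may differ in other versions.

# Convert text objects in drawing.svg to paths and write a plain SVG copy:
inkscape drawing.svg --export-text-to-path --export-plain-svg --export-filename=drawing-paths.svg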

Hershey Fonts or Stroke Fonts

Regarding fonts, keep in mind that TTF and OTF are so-called outline fonts. This means the contour of each character is defined, and it will be engraved or cut as such. If you do not want this and want to use, for example, a script font, then you have to use stroke fonts instead. Inkscape ships with a small collection of them by default (see Extensions > Text > Hershey text).

The stroke fonts of the Hershey Text extension

Another article about how to make your own stroke fonts will follow. They are not only useful for engraving, but also for embroidery.

The Area Fill Functions

In some cases it might be necessary to fill paths with a pattern. The G-Code Tools extension has a function which offers two ways to fill objects with patterns: zig zag and spiral. There is another function which currently is not working (Inkscape changed some parts of the extension interface with the release of version 1.0). The latter function would fill the object with the help of Inkscape’s offset functions. These functions are under Extensions > Gcodetools > Area.

The Fill Area function of the G-Code Tools extension: on the left the pattern fill, on the right the offset filling (currently not working). The extension will execute the active tab!

The area fillings of the G-Code Tools extension: zig zag on top, spiral on the bottom. Note the results will look different if you apply this function letter-by-letter instead of on the whole path.


For other or more complex area fills you will often have to draw the paths by hand (about 90% of the time). The EggBot extension has a function for filling regions with hatches. You can also use the classical hatch patterns, but you will have to convert the fill pattern back to an object; otherwise the G-Code Tools extension cannot convert it. Besides these, Evilmadscientist has a good wiki page describing fill methods.

Converting paths to G-Code

To convert drawn paths to G-Code, use the function Extensions > Gcodetools > Paths to G-Code. This function will be run on the selected objects. If no object is selected, then all paths in the document will be converted.

There is currently no functionality to save G-Code using the file menu. This must be done from within the G-Code Tools extension dialog box when you convert the paths to G-Code. On the Preferences tab, you have to specify the path and the name for the output file.

On the canvas, different colored lines and arrows will be rendered. Blue and green lines show curves (G02 and G03). Red lines show straight lines (G01). When you see this styling, you know that you are working with G-Code.

Fedora’s logo converted to G-Code with the Inkscape G-Code Tools

Conclusion

Opinions differ as to whether Inkscape is the right tool for creating G-Code. If you keep in mind that Inkscape works only in two dimensions and don’t expect too much, you can create G-Code with it. For simple jobs like plotting some lettering or logos, it is definitely enough. The main disadvantage of the G-Code Tools extension is its lacking documentation, which makes getting started difficult. Another disadvantage is that there is currently not much active development of G-Code Tools. There are other Inkscape extensions that also target G-Code, but they are either defunct or likewise no longer actively developed. The Makerbot Unicorn GCode Output extension and the GCode Plot extension are examples of the latter case. The need for an easy way to export G-Code directly definitely exists.

below: a time traveling resource monitor

Wednesday 18th of August 2021 08:00:00 AM

In this article, we introduce below: an Apache 2.0 licensed resource monitor for modern Linux systems. below allows you to replay previously recorded data.

Background

One of the kernel’s primary responsibilities is mediating access to resources. Sometimes this might mean parceling out physical memory such that multiple processes can share the same host. Other times it might mean ensuring equitable distribution of CPU time. In all these contexts, the kernel provides the mechanism and leaves the policy to “someone else”. In more recent times, this “someone else” is usually a runtime like systemd or dockerd. The runtime takes input from a scheduler or end user (something along the lines of what to run and how to run it) and turns the right knobs and pulls the right levers on the kernel so that the workload can, well, get to work.
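To make this concrete, here is a minimal sketch of a runtime turning those knobs: systemd-run starts a transient service and translates resource properties into cgroup settings. The unit name, limits, and command below are arbitrary illustrations, not something taken from below itself.

# Run a command as a transient systemd service with cgroup resource limits;
# systemd writes these limits into the unit's cgroup on our behalf.
sudo systemd-run --unit=demo-workload --property=CPUQuota=50% --property=MemoryMax=512M sleep 60

# Inspect the resulting cgroup setting (a typical cgroup v2 path):
cat /sys/fs/cgroup/system.slice/demo-workload.service/memory.max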

In a perfect world this would be the end of the story. However, the reality is that resource management is a complex and rather opaque amalgam of technologies that has evolved over decades of computing. Despite some of this technology having various warts and dead ends, the end result — a container — works relatively well. While the user does not usually need to concern themselves with the details, it is crucial for infrastructure operators to have visibility into their stack. Visibility and debuggability are essential for detecting and investigating misconfigurations, bugs, and systemic issues.

To make matters more complicated, resource outages are often difficult to reproduce. It is not unusual to spend weeks waiting for an issue to reoccur so that the root cause can be investigated. Scale further compounds this issue: one cannot run a custom script on every host in the hopes of logging bits of crucial state if the bug happens again. Therefore, more sophisticated tooling is required. Enter below.

Motivation

Historically Facebook has been a heavy user of atop [0]. atop is a performance monitor for Linux that is capable of reporting the activity of all processes as well as various pieces of system level activity. One of the most compelling features atop has over tools like htop is the ability to record historical data as a daemon. This sounds like a simple feature, but in practice this has enabled debugging countless production issues. With long enough data retention, it is possible to go backwards in time and look at the host state before, during, and after the issue or outage.

Unfortunately, it became clear over the years that atop had certain deficiencies. First, cgroups [1] have emerged as the de facto way to control and monitor resources on a Linux machine. atop still lacks support for this fundamental building block. Second, atop stores data on disk with custom delta compression. This works fine under normal circumstances, but under heavy resource pressure the host is likely to lose data points. Since delta compression is in use, huge swaths of data can be lost for periods of time where the data is most important. Third, the user experience has a steep learning curve. We frequently heard from atop power users that they love the dense layout and numerous keybindings. However, this is a double-edged sword. When someone new to the space wants to debug a production issue, they are now solving two problems at once: the issue at hand and how to use atop.

below was designed and developed by and for the resource control team at Facebook with input from production atop users. The resource control team is responsible for, as the name suggests, resource management at scale. The team comprises kernel developers, container runtime developers, and hardware folks. Recognizing the opportunity for a next-generation system monitor, we designed below with the following in mind:

  • Ease of use: below must be both intuitive for new users and powerful for daily users
  • Opinionated statistics: below displays accurate and useful statistics. We try to avoid collecting and dumping stats just because we can.
  • Flexibility: when the default settings are not enough, we allow the user to customize their experience. Examples include configurable keybindings, configurable default view, and a scripting interface (the default being a terminal user interface).
Install

To install the package:

# dnf install -y below

To turn on the recording daemon:

# systemctl enable --now below

Quick tour

below’s most commonly used mode is replay mode. As the name implies, replay mode replays previously recorded data. Assuming you’ve already started the recording daemon, start a session by running:

$ below replay --time "5 minutes ago"

You will then see the cgroup view:

If you get stuck or forget a keybinding, press ? to access the help menu.

The very top of the screen is the status bar. The status bar displays information about the current sample. You can move forwards and backwards through samples by pressing t and T, respectively. The middle section is the system overview. The system overview contains statistics about the system as a whole that are generally always useful to see. The third and lowest section is the multipurpose view. The image above shows the cgroup view. Additionally, there are process and system views, accessible by pressing p and s, respectively.

Press <Up> and <Down> to move the list selection. Press <Enter> to collapse and expand cgroups. Suppose you’ve found an interesting cgroup and you want to see what processes are running inside it. To zoom into the process view, select the cgroup and press z:

Press z again to return to the cgroup view. The cgroup view can be somewhat long at times. If you have a vague idea of what you’re looking for, you can filter by cgroup name by pressing / and entering a filter:

At this point, you may have noticed a tab system we haven’t explored yet. To cycle forwards and backwards through tabs, press <Tab> and <Shift> + <Tab> respectively. We’ll leave this as an exercise to the reader.

Other features

Under the hood, below has a powerful design and architecture. Facebook is constantly upgrading to newer kernels, so below never assumes a data source is available. This design enables total backwards and forwards compatibility between kernels and below versions. Furthermore, each data point is zstd compressed and stored in full. This solves the issues with delta compression we’ve seen atop have at scale. Based on our tests, our per-sample compression can achieve on average a 5x compression ratio.

below also uses eBPF [2] to collect information about short-lived processes (processes that live for shorter than the data collection interval). In contrast, atop implements this feature with BSD process accounting, a known slow and priority-inversion-prone kernel interface.

For the user, below also supports live-mode and a dump interface. Live mode combines the recording daemon and the TUI session into one process. This is convenient for browsing system state without committing to a long running daemon or disk space for data storage. The dump interface is a scriptable interface to all the data below stores. Dump is both powerful and flexible — detailed data is available in CSV, JSON, and human readable format.
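As a quick illustration, live mode is a single command. The dump invocation below is only a sketch: the exact subcommand and flag names are assumptions and may vary between below versions, so check below dump --help on your system.

# Browse current system state without committing to a recording daemon:
sudo below live

# Export recorded process data for scripting (flag names are assumptions):
below dump process --begin "10 minutes ago" --end "now" --output-format csv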

Conclusion

below is an Apache 2.0 licensed open source project that we (the below developers) think offers compelling advantages over existing tools in the resource monitoring space. We’ve spent a great deal of effort preparing below for open source use so we hope that readers and the community get a chance to try below out and report back with bugs and feature requests.

[0]: https://www.atoptool.nl/
[1]: https://en.wikipedia.org/wiki/Cgroups
[2]: https://ebpf.io/

Barrier: an introduction

Monday 16th of August 2021 08:00:00 AM
What is barrier?

To reduce the number of keyboards and mice you can get a physical KVM switch, but the downside is that it requires you to select a device each time you want to swap. barrier is a virtual KVM switch that allows one keyboard and mouse to control anywhere from 2 to 15 computers, and you can even have a mix of Linux, Windows, and macOS machines.

Don’t confuse Keyboard, Video and Mouse (KVM) with Kernel Virtual Machine (KVM); they are very different, and this article covers the former. If the Kernel Virtual Machine topic is of interest to you, read through this Red Hat article https://www.redhat.com/en/topics/virtualization/what-is-KVM that provides an overview of the latter type of KVM.

Installing barrier on Fedora Linux (KDE Plasma)

Press Alt+Ctrl+T to display the terminal screen, then enter the following to install barrier from the Fedora system repository.

$ sudo dnf install barrier

Installing barrier on Windows and Mac

If you are looking to install on alternate operating systems you can find the Windows and Mac downloads here: https://github.com/debauchee/barrier/releases.

Nuances of version 2.3.3
  • barrier does not support Wayland, the default display protocol used in both GNOME and KDE Plasma, so you will need to switch your desktop session to the X11 protocol to use barrier.
  • If you are unable to move your mouse from the host to a client computer, make sure you do not have scroll lock enabled. If scroll lock is enabled it will prevent the pointer from moving to a client.
  • When using more than one Linux machine, verify you are using the same version of barrier on each one (thanks to @ilikelinux for pointing this requirement out). If you need to check your version, enter the following at the terminal.
$ dnf list barrier

To use X11 in KDE:
  1. Select the Fedora icon in the bottom left
  2. Select Leave on the bottom right of the menu
  3. Select Log Out
  4. Select OK
  5. Select Desktop Session on the bottom left side of the screen and select X11
  6. Log back in
Set up your barrier host

At the command line type barrier and the main screen will display.

$ barrier
  1. Select the check box next to Server (share this computer’s mouse and keyboard)
  2. Click the Configure Server… button
Barrier

You should now be on the Screens and links tab. Here you will see a recycle icon on the top left and a blue monitor icon on the top right.

To add a client, drag the blue monitor icon to the location you want your monitor to be when you move the mouse from your host to client device. Think of this as how you would want a multi-monitor setup to appear.

If you want to remove a client, drag its blue monitor icon to the recycle bin.

Barrier Server Configuration

After you have set up a client in the location grid, double-click the same icon to open the Screen Settings dialog box.

Barrier Screen Settings
  1. Fill in the Screen name field with whatever name you would like.
  2. Under Aliases type a different name and select Add.

At this point, your host is ready to go and you can click the Start button on the bottom right of the screen.

Barrier

Note: Depending on your current firewall configuration, you might need to add an exception for the synergy service so that network connections to that port (24800/tcp) can get through to your barrier server. You probably want to restrict this access to only a select few source IP addresses (barrier clients).
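For example, with firewalld this can be done along the following lines (a sketch; 192.168.1.23 is a placeholder for your client’s address, and the zone may need adjusting for your network):

# Allow barrier connections on 24800/tcp from a single client IP only:
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.23" port port="24800" protocol="tcp" accept'
sudo firewall-cmd --reload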

Set up your barrier client

The barrier client side setup is very simple.

  1. Start barrier on the client machine.
  2. Select the check box next to Client (use another computer’s mouse and keyboard)
  3. From the host computer you can look at the IP addresses field and copy its value to the Server IP field on the client.
  4. Click Start on the bottom right.
Client on Microsoft Windows 10

Conclusion

From this point on, you can move your mouse between each computer you added and even copy and paste text back and forth just as if they were on the same computer. barrier has numerous options you can use to tweak the program under the Hotkeys and Advanced server settings tabs on the host. Now that you are up and running, go ahead and spend some time playing with the different options to see what suits you best.

Note: barrier requires that each host has a physical display.

Use dnf updateinfo to read update changelogs

Friday 13th of August 2021 08:00:00 AM

This article will explore how to check the changelogs for the Fedora Linux operating system using the command line and dnf updateinfo. Instead of showing the commands running on a real Fedora Linux install, this article will demo running the dnf commands in toolbox.

Introduction

If you have used any type of computer recently (be it a desktop, laptop or even a smartphone), you most likely have had to deal with software updates. You might have an opinion about them. They might be a “necessary evil”, something that always breaks your setup and makes you waste hours fixing the new problems that appeared, or you might even like them.

No matter your opinion, there are reasons to update your software: mainly bug fixes, especially security-related bug fixes. After all, you most likely don’t want someone getting your private data by exploiting a bug that happens because of an interaction between the code of your web browser and the code that renders text on your screen.

If you manage your software updates in a manual or semi-manual fashion (in comparison to letting the operating system auto-update your software), one feature you should be aware of is “changelogs”.

A changelog is, as the name hints, a big list of changes between two releases of the same software. The changelog content can vary a lot. It may depend on the team, the type of software, its importance, and the number of changes. It can range from a very simple “several small bugs were fixed in this release”-type message, to a list of links to the bugs fixed on an issue tracker with a small description, to a big and detailed list of changes or elaborate blog posts.

Now, how do you check the changelogs for the updates?

If you use Fedora Workstation, the easy way to see the changelog with a GUI is with GNOME Software. Select the name of the package or the name of the software on the updates page and the changelog is displayed. You could also try your favorite GUI package manager, which will most likely show it to you as well. But how does one do the same thing via the CLI?

How to use dnf updateinfo

Start by creating a Fedora 34 toolbox called updateinfo-demo:

toolbox create --distro fedora --release f34 updateinfo-demo

Now, enter the toolbox:

toolbox enter updateinfo-demo

The commands from here on can also be used on a normal Fedora install.

First, check the updates available:

$ dnf check-update
audit-libs.x86_64                   3.0.3-1.fc34              updates
ca-certificates.noarch              2021.2.50-1.0.fc34        updates
coreutils.x86_64                    8.32-30.fc34              updates
coreutils-common.x86_64             8.32-30.fc34              updates
curl.x86_64                         7.76.1-7.fc34             updates
dnf.noarch                          4.8.0-1.fc34              updates
dnf-data.noarch                     4.8.0-1.fc34              updates
expat.x86_64                        2.4.1-1.fc34              updates
file-libs.x86_64                    5.39-6.fc34               updates
glibc.x86_64                        2.33-20.fc34              updates
glibc-common.x86_64                 2.33-20.fc34              updates
glibc-minimal-langpack.x86_64       2.33-20.fc34              updates
krb5-libs.x86_64                    1.19.1-14.fc34            updates
libcomps.x86_64                     0.1.17-1.fc34             updates
libcurl.x86_64                      7.76.1-7.fc34             updates
libdnf.x86_64                       0.63.1-1.fc34             updates
libeconf.x86_64                     0.4.0-1.fc34              updates
libedit.x86_64                      3.1-38.20210714cvs.fc34   updates
libgcrypt.x86_64                    1.9.3-3.fc34              updates
libidn2.x86_64                      2.3.2-1.fc34              updates
libmodulemd.x86_64                  2.13.0-1.fc34             updates
librepo.x86_64                      1.14.1-1.fc34             updates
libsss_idmap.x86_64                 2.5.2-1.fc34              updates
libsss_nss_idmap.x86_64             2.5.2-1.fc34              updates
libuser.x86_64                      0.63-4.fc34               updates
libxcrypt.x86_64                    4.4.23-1.fc34             updates
nano.x86_64                         5.8-3.fc34                updates
nano-default-editor.noarch          5.8-3.fc34                updates
nettle.x86_64                       3.7.3-1.fc34              updates
openldap.x86_64                     2.4.57-5.fc34             updates
pam.x86_64                          1.5.1-6.fc34              updates
python-setuptools-wheel.noarch      53.0.0-2.fc34             updates
python-unversioned-command.noarch   3.9.6-2.fc34              updates
python3.x86_64                      3.9.6-2.fc34              updates
python3-dnf.noarch                  4.8.0-1.fc34              updates
python3-hawkey.x86_64               0.63.1-1.fc34             updates
python3-libcomps.x86_64             0.1.17-1.fc34             updates
python3-libdnf.x86_64               0.63.1-1.fc34             updates
python3-libs.x86_64                 3.9.6-2.fc34              updates
python3-setuptools.noarch           53.0.0-2.fc34             updates
sssd-client.x86_64                  2.5.2-1.fc34              updates
systemd.x86_64                      248.6-1.fc34              updates
systemd-libs.x86_64                 248.6-1.fc34              updates
systemd-networkd.x86_64             248.6-1.fc34              updates
systemd-pam.x86_64                  248.6-1.fc34              updates
systemd-rpm-macros.noarch           248.6-1.fc34              updates
vim-minimal.x86_64                  2:8.2.3182-1.fc34         updates
xkeyboard-config.noarch             2.33-1.fc34               updates
yum.noarch                          4.8.0-1.fc34              updates

OK, so run your first dnf updateinfo command:

$ dnf updateinfo
Updates Information Summary: available
    5 Security notice(s)
        4 Moderate Security notice(s)
        1 Low Security notice(s)
    11 Bugfix notice(s)
    8 Enhancement notice(s)
    3 other notice(s)

This is the summary of updates. As you can see there are security updates, bugfix updates, enhancement updates and some which are not specified.

Look at the list of updates and which types they belong to:

$ dnf updateinfo list
FEDORA-2021-e4866762d8 enhancement   audit-libs-3.0.3-1.fc34.x86_64
FEDORA-2021-1f32e18471 bugfix        ca-certificates-2021.2.50-1.0.fc34.noarch
FEDORA-2021-b09e010a46 bugfix        coreutils-8.32-30.fc34.x86_64
FEDORA-2021-b09e010a46 bugfix        coreutils-common-8.32-30.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        dnf-4.8.0-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix        dnf-data-4.8.0-1.fc34.noarch
FEDORA-2021-523ee0a81e enhancement   expat-2.4.1-1.fc34.x86_64
FEDORA-2021-07625b9c81 unknown       file-libs-5.39-6.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-common-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-minimal-langpack-2.33-20.fc34.x86_64
FEDORA-2021-8b25e4642f Low/Sec.      krb5-libs-1.19.1-14.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        libcomps-0.1.17-1.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. libcurl-7.76.1-7.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        libdnf-0.63.1-1.fc34.x86_64
FEDORA-2021-ca22b882a5 enhancement   libeconf-0.4.0-1.fc34.x86_64
FEDORA-2021-f9c139edd8 bugfix        libedit-3.1-38.20210714cvs.fc34.x86_64
FEDORA-2021-31fdc84207 Moderate/Sec. libgcrypt-1.9.3-3.fc34.x86_64
FEDORA-2021-bc56cf7c1f enhancement   libidn2-2.3.2-1.fc34.x86_64
FEDORA-2021-da2ec14d7f bugfix        libmodulemd-2.13.0-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        librepo-1.14.1-1.fc34.x86_64
FEDORA-2021-1db6330a22 unknown       libsss_idmap-2.5.2-1.fc34.x86_64
FEDORA-2021-1db6330a22 unknown       libsss_nss_idmap-2.5.2-1.fc34.x86_64
FEDORA-2021-8226c82fe9 bugfix        libuser-0.63-4.fc34.x86_64
FEDORA-2021-e6916d6758 bugfix        libxcrypt-4.4.22-2.fc34.x86_64
FEDORA-2021-fed4036fd9 bugfix        libxcrypt-4.4.23-1.fc34.x86_64
FEDORA-2021-3122d2b8d2 unknown       nano-5.8-3.fc34.x86_64
FEDORA-2021-3122d2b8d2 unknown       nano-default-editor-5.8-3.fc34.noarch
FEDORA-2021-d1fc0b9d32 Moderate/Sec. nettle-3.7.3-1.fc34.x86_64
FEDORA-2021-97949d7a4e bugfix        openldap-2.4.57-5.fc34.x86_64
FEDORA-2021-e6916d6758 bugfix        pam-1.5.1-6.fc34.x86_64
FEDORA-2021-07931f7f08 bugfix        python-setuptools-wheel-53.0.0-2.fc34.noarch
FEDORA-2021-2056ce89d9 enhancement   python-unversioned-command-3.9.6-1.fc34.noarch
FEDORA-2021-d613e00b72 enhancement   python-unversioned-command-3.9.6-2.fc34.noarch
FEDORA-2021-2056ce89d9 enhancement   python3-3.9.6-1.fc34.x86_64
FEDORA-2021-d613e00b72 enhancement   python3-3.9.6-2.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        python3-dnf-4.8.0-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix        python3-hawkey-0.63.1-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        python3-libcomps-0.1.17-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix        python3-libdnf-0.63.1-1.fc34.x86_64
FEDORA-2021-2056ce89d9 enhancement   python3-libs-3.9.6-1.fc34.x86_64
FEDORA-2021-d613e00b72 enhancement   python3-libs-3.9.6-2.fc34.x86_64
FEDORA-2021-07931f7f08 bugfix        python3-setuptools-53.0.0-2.fc34.noarch
FEDORA-2021-1db6330a22 unknown       sssd-client-2.5.2-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix        systemd-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix        systemd-libs-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix        systemd-networkd-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix        systemd-pam-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix        systemd-rpm-macros-248.6-1.fc34.noarch
FEDORA-2021-b8b1f6e54f enhancement   vim-minimal-2:8.2.3182-1.fc34.x86_64
FEDORA-2021-67645ae09f enhancement   xkeyboard-config-2.33-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix        yum-4.8.0-1.fc34.noarch

The output is in three columns. These show the ID for an update, the type of the update, and the package to which it refers.

If you want to see the Bodhi page for a specific update, just add the id to the end of this URL:
https://bodhi.fedoraproject.org/updates/.

For example, https://bodhi.fedoraproject.org/updates/FEDORA-2021-3141f0eff1 for systemd-248.6-1.fc34.x86_64 or https://bodhi.fedoraproject.org/updates/FEDORA-2021-b09e010a46 for coreutils-8.32-30.fc34.x86_64.
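If you want to script this, here is a minimal sketch that builds the Bodhi URL for every pending security advisory. It assumes the three-column output shown above, where the advisory ID is the first column:

$ dnf -q updateinfo list --security | awk '{ print "https://bodhi.fedoraproject.org/updates/" $1 }' | sort -u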

The next command will list the actual changelog.

dnf updateinfo info

The output from this command is quite long. So only a few interesting excerpts are provided below.

Start with a small one:

===============================================================================
  ca-certificates-2021.2.50-1.0.fc34
===============================================================================
  Update ID: FEDORA-2021-1f32e18471
       Type: bugfix
    Updated: 2021-06-18 22:08:02
Description: Update the ca-certificates list to the lastest upstream list.
   Severity: Low

Notice how this info has the update ID, type, updated time, description and severity. Very simple and easy to understand.

Now look at the systemd update which, in addition to the previous items, has some bugs associated with it in Red Hat Bugzilla, a more elaborate description, and a different severity.

===============================================================================
  systemd-248.6-1.fc34
===============================================================================
  Update ID: FEDORA-2021-3141f0eff1
       Type: bugfix
    Updated: 2021-07-24 22:00:30
       Bugs: 1963428 - if keyfile >= 1024*4096-1 service "systemd-cryptsetup@<partition name>" can't start
           : 1965815 - 50-udev-default.rules references group "sgx" which does not exist
           : 1975564 - systemd-cryptenroll SIGABRT when adding recovery key - buffer overflow
           : 1984651 - systemd[1]: Assertion 'a <= b' failed at src/libsystemd/sd-event/sd-event.c:2903, function sleep_between(). Aborting.
Description: - Create 'sgx' group (and also use soft-static uids for input and render, see https://pagure.io/setup/c/df3194a7295c2ca3cfa923981b046f4bd2754825 and https://pagure.io/packaging-committee/issue/1078 (#1965815)
           : - Various bugfixes (#1963428, #1975564)
           : - Fix for a regression introduced in the previous release with sd-event abort (#1984651)
           :
           : No need to log out or reboot.
   Severity: Moderate

Next look at a curl update. This has a security update with several CVEs associated with it. Each CVE has its respective Red Hat Bugzilla bug.

===============================================================================
  curl-7.76.1-7.fc34
===============================================================================
  Update ID: FEDORA-2021-83fdddca0f
       Type: security
    Updated: 2021-07-22 22:03:07
       Bugs: 1984325 - CVE-2021-22922 curl: wrong content via metalink is not being discarded [fedora-all]
           : 1984326 - CVE-2021-22923 curl: Metalink download sends credentials [fedora-all]
           : 1984327 - CVE-2021-22924 curl: bad connection reuse due to flawed path name checks [fedora-all]
           : 1984328 - CVE-2021-22925 curl: Incorrect fix for CVE-2021-22898 TELNET stack contents disclosure [fedora-all]
Description: - fix TELNET stack contents disclosure again (CVE-2021-22925)
           : - fix bad connection reuse due to flawed path name checks (CVE-2021-22924)
           : - disable metalink support to fix the following vulnerabilities
           :   CVE-2021-22923 - metalink download sends credentials
           :   CVE-2021-22922 - wrong content via metalink not discarded
   Severity: Moderate

This item shows a simple enhancement update.

===============================================================================
  python3-docs-3.9.6-1.fc34 python3.9-3.9.6-1.fc34
===============================================================================
  Update ID: FEDORA-2021-2056ce89d9
       Type: enhancement
    Updated: 2021-07-08 22:00:53
Description: Update of Python 3.9 and python3-docs to latest release 3.9.6
   Severity: None

Finally an “unknown” type update.

===============================================================================
  file-5.39-6.fc34
===============================================================================
  Update ID: FEDORA-2021-07625b9c81
       Type: unknown
    Updated: 2021-06-11 22:16:57
       Bugs: 1963895 - Wrong detection of python bytecode mimetypes
Description: do not classify python bytecode files as text (#1963895)
   Severity: None

Conclusion

So, in what situation does dnf updateinfo become handy?

Well, you could use it if you prefer managing updates fully via the CLI, or if you are unable to successfully use the GUI tools at a specific moment.

In which case is checking the changelog useful?

Say you manage the updates yourself. Sometimes you might not consider it ideal to stop what you are doing to update your system. Instead of simply installing the updates, you check the changelogs first. This allows you to figure out whether you should prioritize your updates (maybe there’s an important security fix?) or whether to postpone them a bit longer (no important fix, “I will do it later when I’m not doing anything important”).
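And if the changelogs convince you that only the security fixes matter right now, dnf can apply just those. A small sketch:

# Apply only the updates that carry a security advisory:
$ sudo dnf upgrade --security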

Build your own Fedora IoT Remix

Wednesday 11th of August 2021 08:00:00 AM

Fedora IoT Edition is aimed at the Internet of Things. It was introduced in the article How to turn on an LED with Fedora IoT in 2018. It is based on RPM-OSTree as a core technology to gain some nifty properties and features which will be covered in a moment.

RPM-OSTree is a high-level tool built on libostree which is a set of tools establishing a “git-like” model for committing and exchanging filesystem trees, deployment of said trees, bootloader configuration and layered RPM package management. Such a system benefits from the following properties:

  • Transactional upgrade and rollback
  • Read-only filesystem areas
  • Potentially small updates through deltas
  • Branching, including rebase and multiple deployments
  • Reproducible filesystem
  • Specification of filesystem through version-controlled code

Exchange of filesystem trees and corresponding commits is done through OSTree repositories or remotes. When using one of the Fedora Editions based on RPM-OSTree there are remotes from which the system downloads commits and applies them, rather than downloading and installing separate RPMs.
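To get a feel for these properties on a running RPM-OSTree system (Fedora IoT, for example), two everyday commands are worth knowing:

# Show the deployments (filesystem commits) currently on disk, including
# the remote and ref each one was pulled from:
rpm-ostree status

# Transactionally switch back to the previous deployment:
sudo rpm-ostree rollback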

A Remix in the Fedora ecosystem is an altered, opinionated version of the OS. It covers the needs of a specific niche. This article will dive into the world of building your own filesystem commits based on Fedora IoT Edition. You will become acquainted with the tools, terminology, design, and processes of such a system. If you follow the directions in this guide you will end up with your own Fedora IoT Remix.

Preparations

You will need some packages to get started. On non-ostree systems install the packages ostree and rpm-ostree. Both are available in the Fedora Linux package repositories. Additionally install git to access the Fedora IoT ostree spec sources.

sudo dnf install ostree rpm-ostree git

Assuming you have a spare, empty folder lying around to work with, start there by creating some files and folders that will be needed along the way.

mkdir .cache .build-repo .deploy-repo .tmp custom

The .cache directory is used by all build commands around rpm-ostree. The folders .build-repo and .deploy-repo store separate repositories to keep the build environment separate from the actual remix. The .tmp directory is used to combine the git-managed upstream sources (from Fedora IoT, for example) with the modifications kept in the custom directory.

As you build your own OSTree as derivative from Fedora IoT you will need the sources. Clone them into the folder .fedora-iot-spec. They contain several configuration files specifying how the ostree filesystem for Fedora IoT is built, what packages to include, etc.

git clone -b "f34" https://pagure.io/fedora-iot/ostree.git .fedora-iot-spec

OSTree repositories

Create repositories to build and store an OSTree filesystem and its contents: a place to store commits and manage their metadata. Wait, what? What is an OSTree commit anyway? Glad you asked! With rpm-ostree you build so-called libostree commits. The terminology is roughly based on git, and they work in similar ways. Those commits store diffs from one state of the filesystem to the next. If you change a binary blob inside the tree, the commit contains this change. You can deploy this specific version of the filesystem at any time.

Use the ostree init command to create two ostree repositories.

ostree --repo=".build-repo" init --mode=bare-user
ostree --repo=".deploy-repo" init --mode=archive

The main difference between the repositories is their mode. Create the build repository in “bare-user” mode and the “production” repository in “archive” mode. The bare* modes are well suited for build environments. The “user” portion additionally allows non-root operation and storing extended attributes. The archive mode stores objects compressed, making them easy to move around. If all that doesn’t mean a thing to you, don’t worry. The specifics don’t matter for your primary goal here – to build your own Remix.

Let me share just a little anecdote on this: When I was working on building ostree-based systems on GitLab CI/CD pipelines and we had to move the repositories around different jobs, we once tried to move them uncompressed in bare-user mode via caches. We learned that, while this works with archive repos, it does not with bare* repos. Important filesystem attributes will get corrupted on the way.

Custom flavor

What’s a Remix without any customization? Not much! Create some configuration files as adjustments for your own OS. Assuming you want to deploy the Remix on a system with a hardware watchdog (a Raspberry Pi, for example), start with a watchdog configuration file:

./custom/watchdog.conf

watchdog-device = /dev/watchdog
max-load-1 = 24
max-load-15 = 9
realtime = yes
priority = 1
watchdog-timeout = 15 # Broadcom BCM2835 limitation

The postprocess-script is an arbitrary shell script executed inside the target filesystem tree as part of the build process. It allows for last-minute customization of the filesystem in a restricted and (by default) network-less environment. It’s a good place to ensure the correct file permissions are set for the custom watchdog configuration file.

./custom/treecompose-post.sh

#!/bin/sh
set -e

# Prepare watchdog
chown root:root /etc/watchdog.conf
chmod 0644 /etc/watchdog.conf

Plant a Treefile

Fedora IoT is pretty minimal and keeps its main focus on security and best practices. The rest is up to you and your use case. As a consequence, the watchdog package is not provided from the get-go. In RPM-OSTree the spec file is called a Treefile and is encoded in JSON. In the Treefile you specify what packages to install, files and folders to exclude from packages, configuration files to add to the filesystem tree, and systemd units to enable by default.

./custom/treefile.json

{
    "ref": "OSTreeBeard/stable/x86_64",
    "ex-jigdo-spec": "fedora-iot.spec",
    "include": "fedora-iot-base.json",
    "boot-location": "modules",
    "packages": [
        "watchdog"
    ],
    "remove-files": [
        "etc/watchdog.conf"
    ],
    "add-files": [
        ["watchdog.conf", "/etc/watchdog.conf"]
    ],
    "units": [
        "watchdog.service"
    ],
    "postprocess-script": "treecompose-post.merged.sh"
}

The ref is basically the branch name within the repository. Use it to refer to this specific spec in rpm-ostree operations. With ex-jigdo-spec and include you link this Treefile to the configuration of the Fedora IoT sources. Additionally specify the Fedora Updates repo in the repos section. It is not part of the sources so you will have to add that yourself. More on that in a moment.

With packages you instruct rpm-ostree to install the watchdog package. Exclude the watchdog.conf file and replace it with the one from the custom directory by using remove-files and add-files. Now just enable the watchdog.service and you are good to go.

All available treefile options are available in the official RPM-OSTree documentation.

Add another RPM repository

In its initial configuration the OSTree only uses the initial Fedora 34 package repository. Add the Fedora 34 Updates repository as well. To do so, add the following file to your custom directory.

./custom/fedora-34-updates.repo

[fedora-34-updates]
name=Fedora 34 - $basearch - Updates
#baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f34&arch=$basearch
enabled=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
#metadata_expire=7d
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-34-$basearch
skip_if_unavailable=False

Now tell rpm-ostree in the spec for your Remix to include this repository. Use the treefile‘s repos section.

./custom/treefile.json

{
    ...
    "repos": [
        "fedora-34",
        "fedora-34-updates"
    ],
    ...
}

Build your own Fedora IoT Remix

You have all that you need to build your first ostree-based filesystem. By now you have set up a project structure, downloaded the Fedora IoT upstream specs, added some customization, and initialized the ostree repositories. All you need to do now is throw everything together and create a nicely flavored Fedora IoT Remix salsa.

cp ./.fedora-iot-spec/* .tmp/
cp ./custom/* .tmp/

Combine the postprocessing-scripts of the Fedora IoT upstream sources and your custom directory.

cat "./.fedora-iot-spec/treecompose-post.sh" "./custom/treecompose-post.sh" > ".tmp/treecompose-post.merged.sh" chmod +x ".tmp/treecompose-post.merged.sh"

Remember that you specified treecompose-post.merged.sh as your post-processing script earlier in treefile.json? That’s where this file comes from.

Note that all the files – systemd units, scripts, configurations – mentioned in ostree.json are now available in .tmp. This folder is the build context that all the references are relative to.

You are only one command away from kicking off your first build of a customized Fedora IoT. Kick off the build with the rpm-ostree compose tree command, then grab a cup of coffee and wait for it to finish. That may take between 5 and 10 minutes depending on your host hardware. See you later!

sudo rpm-ostree compose tree --unified-core --cachedir=".cache" --repo=".build-repo" --write-commitid-to="$COMMIT_FILE" ".tmp/treefile.json"

Prepare for deployment

Oh, erm, you are back already? Ehem. Good! – The .build-repo now stores a complete filesystem tree of around 700 to 800 MB of compressed data. The last thing to do before you consider putting this on the network and deploying it on your device(s) (at least for now) is to add a commit with an arbitrary commit subject and metadata and to pull the result over to the deploy-repo.

sudo ostree --repo=".deploy-repo" pull-local ".build-repo" "OSTreeBeard/stable/x86_64"

The deploy-repo can now be placed on any file-serving webserver and then used as a new ostree remote … theoretically. I won’t go through the topic of security for ostree remotes just yet. As initial advice though: always sign OSTree commits with GPG to ensure the authenticity of your updates. Apart from that, it’s only a matter of adding the remote configuration on your target and using rpm-ostree rebase to switch over to this Remix.
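A minimal sketch of that last step on the target device; the remote name and URL are placeholders, and you should drop --no-gpg-verify once you sign your commits:

# Add the Remix repository as a new ostree remote:
sudo ostree remote add --no-gpg-verify ostreebeard http://example.com/repo

# Rebase the running system onto the Remix ref:
sudo rpm-ostree rebase ostreebeard:OSTreeBeard/stable/x86_64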

As a final thing before you leave to do outside stuff (like with fresh air, sun, ice-cream or whatever), take a look around the newly built filesystem to ensure that everything is in place.

Explore the filesystem

Use ostree refs to list available refs in the repo or on your system.

$ ostree --repo=".deploy-repo" refs OSTreeBeard/stable/x86_64

Take a look at the commits of a ref with ostree log.

$ ostree --repo=".deploy-repo" log OSTreeBeard/stable/x86_64
commit 849c0648969c8c2e793e5d0a2f7393e92be69216e026975f437bdc2466c599e9
ContentChecksum:  bcaa54cc9d8ffd5ddfc86ed915212784afd3c71582c892da873147333e441b26
Date:  2021-07-27 06:45:36 +0000
Version: 34
(no subject)

List the ostree filesystem contents with ostree ls.

$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64
d00755 0 0      0 /
l00777 0 0      0 /bin -> usr/bin
l00777 0 0      0 /home -> var/home
l00777 0 0      0 /lib -> usr/lib
l00777 0 0      0 /lib64 -> usr/lib64
l00777 0 0      0 /media -> run/media
l00777 0 0      0 /mnt -> var/mnt
l00777 0 0      0 /opt -> var/opt
l00777 0 0      0 /ostree -> sysroot/ostree
l00777 0 0      0 /root -> var/roothome
l00777 0 0      0 /sbin -> usr/sbin
l00777 0 0      0 /srv -> var/srv
l00777 0 0      0 /tmp -> sysroot/tmp
d00755 0 0      0 /boot
d00755 0 0      0 /dev
d00755 0 0      0 /proc
d00755 0 0      0 /run
d00755 0 0      0 /sys
d00755 0 0      0 /sysroot
d00755 0 0      0 /usr
d00755 0 0      0 /var

$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64 /usr/etc/watchdog.conf
-00644 0 0    208 /usr/etc/watchdog.conf

Take note that the watchdog.conf file is located under /usr/etc/watchdog.conf. On a booted deployment it is located at /etc/watchdog.conf as usual.

Where to go from here?

You took a brave step in building a customized Fedora IoT on your local machine. First I introduced you to the concepts and vocabulary so you could understand where you were and where you wanted to go. You then ensured all the tools were in place. You looked at the ostree repository modes and mechanics before analyzing a typical ostree configuration. To spice it up and make it a bit more interesting you made an additional service and configuration ready to roll out on your device(s). To do that you added the Fedora Updates RPM repository and then kicked off the build process. Last but not least, you packaged the result up in a format ready to be placed somewhere on the network.

There are a lot more topics to cover. I could explain how to configure NGINX to serve ostree remotes effectively, or how to ensure the security and authenticity of the filesystem and updates through GPG signatures. There is also the question of how one manually alters the filesystem and what tooling is available for building it, and more to be explained about how to test the Remix and how to build flashable images and installation media.

Let me know in the comments what you think and what you care about. Tell me what you’d like to read next. If you already built Fedora IoT, I’m happy to read your stories too.


More in Tux Machines

today's howtos

  • How to use wall command in linux - Unixcop

    wall (an abbreviation of “write to all”) is a Unix command-line utility that displays the contents of a file or standard input on the terminals of all logged-in users. It is typically used by root to send a shutdown warning to all users just before a poweroff. The messages can be either typed on the terminal or read from a file. System administrators commonly use it to announce maintenance and ask users to log out and close all open programs. The messages are shown to all logged-in users with an open terminal (see the wall sketch after this list).

  • Any Port in a Storm: Ports and Security, Part 1

    When IT and Security professionals talk about port numbers, we’re referring to the TCP and UDP ports a service listens on, waiting to accept connections. But what exactly is a port? (See the ss sketch after this list.)

  • Book Review: Data Science at the Command Line By Jeroen Janssens

    Data Science at the Command Line: Obtain, Scrub, Explore, and Model Data with Unix Power Tools written by Jeroen Janssens is the second edition of the series “Data Science at the Command Line”. This book demonstrates how the flexibility of the command line can help you become a more efficient and productive data scientist. You will learn how to combine small yet powerful command-line tools to quickly obtain, scrub, explore, and model your data. To get you started, author Jeroen Janssens provides a Docker image packed with over 80 tools, useful whether you work with Windows, macOS, or Linux.

  • How to Take a Typing Test on Linux With tt

    In the modern era of technology, typing has become one of the most common activities for a lot of professions. Learning to type faster with accuracy can help you get more things done in the same amount of time. However, touch typing is not a skill that you can master overnight. It takes regular practice and testing to improve your speed and accuracy gradually. While there are a lot of websites that help you achieve this, all you essentially need on Linux is a terminal. Let's see how.

  • FIX: Google Chrome doesn’t work on Kali linux
  • How to install OpenToonz on a Chromebook

    Today we are looking at how to install OpenToonz on a Chromebook. Please follow the video/audio guide as a tutorial where we explain the process step by step and use the commands below. If you have any questions, please contact us via a YouTube comment and we would be happy to assist you!
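Two small sketches for the first items above. For wall, the message text is arbitrary:

# Broadcast a warning to every logged-in user's terminal (run as root):
echo "System going down for maintenance in 10 minutes. Please save your work." | wall

And for ports, ss lists the sockets currently waiting to accept connections:

# Show listening TCP and UDP sockets along with the owning process:
ss -tulnp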


Linux 5.15-rc3

So after a somewhat rocky merge window and second rc, things are now
actually looking pretty normal for rc3. Knock wood.

There are fixes all over, and the statistics look fairly regular, with
drivers dominating as they should (since they are most of the tree).
And outside of drivers, we have a fairly usual mix of changes -
architecture fixes, networking, filesystems, and tooling (the latter
being mostly kvm selftests).

Shortlog appended, it's not too long and easy to scan through to get a
flavor for the details if you happen to care.

Please do give it a whirl,

             Linus

Also: Linux 5.15-rc3 Released - Looking "Pretty Normal" Plus Performance Fix - Phoronix

Huawei launches OS openEuler, aims to construct 'ecological base of national digital infrastructure'

Chinese tech giant Huawei launched the openEuler operating system (OS) on Saturday, another self-developed OS after HarmonyOS, as it tries to "solve the domestic stranglehold problem of lacking its homegrown OS in basic technology" and build a full-scenario ecosystem to prepare for more US bans. The openEuler OS can be widely deployed in various forms of equipment such as servers, cloud computing, and edge computing. Its application scenarios cover Information Technology, Communication Technology, and Operational Technology to achieve a unified operating system with multi-device support, according to the company's introduction. In the ICT field, Huawei provides products and solutions such as servers, storage, cloud services, edge computing, base stations, routers, and industrial control, all of which need to be equipped with an OS. Huawei has therefore been building capabilities to achieve a unified OS architecture and meet the demands of different application scenarios, the firm said on Saturday. The openEuler program was initially announced back in 2019 as an open source operating system; today's launch is an updated one.