
Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Using Ansible to configure Podman containers

16 hours 30 min ago

In complex IT infrastructure there are many repetitive tasks, and running them all successfully is not easy: human error always presents a chance of failure. With the help of Ansible, you can perform all of those tasks from a remote host using playbooks, and those playbooks can be reused as many times as you need. In this article you will learn how to install and configure Ansible on Fedora Linux, and how to use it to manage and configure Podman containers.

Ansible

Ansible is an open source infrastructure automation tool sponsored by Red Hat. It can handle the problems that come with large infrastructure, such as installing and updating packages, taking backups, and ensuring specific services are always running. You do this with playbooks written in YAML. Ansible playbooks can be used again and again, making the system administrator’s job less complex. Playbooks also eliminate repetitive tasks and can be easily modified. But with so many automation tools available, why use Ansible? Unlike some other configuration management tools, Ansible is agentless: you don’t have to install anything on the managed nodes. For more information about Ansible, see the Ansible tag in Fedora Magazine.

Podman

Podman is an open source container engine used for developing, managing, and running container images. But what is a container? Whenever you create a new application and deploy it on physical servers, cloud servers, or virtual machines, the most common problems you face are portability and compatibility. This is where containers come into the picture. Containers virtualize at the OS level, so they contain only the required libraries and app services. The benefits of containers include:

  • portability
  • isolation
  • scaling
  • lightweight
  • fast boot-up
  • smaller disk and memory requirements

In a nutshell: when you build a container image for any application, all of the required dependencies are packed into the container. You can now run that container on any host OS without any portability and compatibility issues.

The key highlight of Podman is that it is daemonless, and it can run containers without root privileges. You can build container images with the help of a Dockerfile, or pull images from Docker Hub, fedoraproject.org, or Quay. For more information about Podman, see the Podman tag in Fedora Magazine.
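
As an illustration of that build workflow, the Containerfile below packages httpd and a static page on a Fedora base image. It is a sketch, not from the article: the base image tag, file names, and resulting image name are all illustrative.

```dockerfile
# Illustrative Containerfile: bundle httpd and one static page into an image.
FROM registry.fedoraproject.org/fedora:34

# Install the web server inside the image; all dependencies live in the image.
RUN dnf -y install httpd && dnf clean all

# Copy the application content (here, a single static page).
COPY index.html /var/www/html/index.html

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

You could then build and run it with podman build -t my-httpd . followed by podman run -p 8080:80 my-httpd.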

Why configure Podman with Ansible?

Ansible provides a way to easily run repetitive tasks many times. It also has tons of modules for cloud providers like AWS, GCP, and Azure, for container management tools like Docker and Podman, and also for database management. Ansible also has a community (Ansible Galaxy) where you can find tons of Ansible roles created by contributors from all over the world. All of this makes Ansible a great tool for DevOps engineers and system administrators.

With DevOps, the development of applications is fast-paced, and developing applications that can run on any operating system is essential. This is where Podman comes into the picture.

Installing Ansible

First, install Ansible:

$ sudo dnf install ansible -y

Configuring Ansible

Ansible uses SSH to reach managed nodes, so first generate a key pair.

$ ssh-keygen

Once the key is generated, copy the public key to the managed node (for example, with the ssh-copy-id command).

Enter yes and enter the password of the managed node when prompted. Now your managed host can be accessed remotely.

For Ansible to access managed nodes, you need to store all hostnames or IP addresses in an inventory file. By default, this is /etc/ansible/hosts.

This is what the inventory file looks like. Square brackets are used to assign group names to sets of nodes.

[group1]
green.example.com
blue.example.com
[group2]
192.168.100.11
192.168.100.10

Check that all managed nodes can be reached.

$ ansible all -m ping

You should see output like this:

[mahesh@fedora new] $ ansible all -m ping
fedora.example.com | SUCCESS => {
    "ansible_facts": {
       "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[mahesh@fedora new] $

Now create your first playbook, which will install Podman on the managed nodes. First create a file, with any name, that has a .yml extension.

$ vim name_of_playbook.yml

The playbook should look something like the example below. The name field gives the playbook a name. The hosts field selects a hostname or group name from the inventory. become: yes escalates privileges, and tasks contains all the tasks to execute. Under tasks, name specifies the task name, yum is the module used to install packages, the name field beneath it specifies the package, and state controls whether the package is installed or removed.


---
- name: First playbook
  hosts: fedora.example.com
  become: yes
  tasks:
    - name: Installing podman.
      yum:
        name: podman
        state: present

Check for any syntax errors in the file:

$ ansible-playbook filename --syntax-check

Now run the playbook:

$ ansible-playbook filename

You should get output like this:

[mahesh@fedora new] $ ansible-playbook podman_installation.yml
PLAY [First playbook] *************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
ok: [fedora.example.com]

TASK [Installing podman] ************************************************************************************************
changed: [fedora.example.com]

PLAY RECAP *************************************************************************************************
fedora.example.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
[mahesh@fedora new] $

Now create a new playbook which pulls an image from Docker Hub. You’ll use the podman_image module to pull version 2-alpine of the httpd image from Docker Hub.

---
- name: Playbook for podman.
  hosts: fedora.example.com
  tasks:
    - name: Pull httpd:2-alpine image from dockerhub.
      podman_image:
        name: docker.io/httpd
        tag: 2-alpine

Now check the pulled image.

[mahesh@fedora new] $ podman images
REPOSITORY                TAG        IMAGE ID       CREATED       SIZE
docker.io/library/httpd   2-alpine   fa848876521a   11 days ago   57 MB
[mahesh@fedora new] $

Create a new playbook to run the httpd image. See the podman_container module documentation for more information.

---
- name: Playbook for podman.
  hosts: fedora.example.com
  tasks:
    - name: Running httpd image.
      containers.podman.podman_container:
        name: my-first-container
        image: docker.io/httpd:2-alpine
        state: started

Check that the container is running.

[mahesh@fedora new] $ podman ps
CONTAINER ID   IMAGE                              COMMAND            CREATED          STATUS          PORTS   NAMES
45d966e0e207   docker.io/library/httpd:2-alpine   httpd-foreground   13 seconds ago   Up 13 seconds           my-first-container
[mahesh@fedora new] $

Now, to stop the running container, change the state value from started to absent.

    - name: Stopping httpd container.
      containers.podman.podman_container:
        name: my-first-container
        image: docker.io/httpd:2-alpine
        state: absent

When you run the podman ps command, you won’t see any containers running.

[mahesh@fedora new] $ podman ps
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

[mahesh@fedora new] $

There are many other things you can do with podman_container, such as recreating containers, restarting containers, and checking whether a container is running. See the documentation for information on performing these actions.
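
For example, a task along these lines restarts the container created above. This is a sketch based on the podman_container module documentation; the restart option forces a restart of a running container:

```yaml
    - name: Restarting httpd container.
      containers.podman.podman_container:
        name: my-first-container
        image: docker.io/httpd:2-alpine
        state: started
        restart: yes
```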

Getting better at counting rpm-ostree based systems

Monday 10th of May 2021 08:00:00 AM

This article describes the extension of the Fedora 32 user count mechanism to rpm-ostree based systems. It also provides tips for opting out, if necessary.

How Fedora counts users

Since the release of Fedora 32, a new mechanism has been in place to better count the number of Fedora users while respecting their privacy. This system is explicitly designed to make sure that no personally identifiable information is sent from counted systems. It also ensures that the Fedora infrastructure does not collect any personal data. The nickname for this new counting mechanism is “Count Me”, from the option name. Details are available in DNF Better Counting change request for Fedora 32. In short, the Count Me mechanism works by telling Fedora servers how old your system is (with a very large approximation). This occurs randomly during a metadata refresh request performed by DNF.
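
On DNF-based systems the option can be seen in the repository definitions under /etc/yum.repos.d/. The trimmed excerpt below is illustrative of what such an entry looks like, not a verbatim copy of any one file:

```ini
# Excerpt of a repository definition (illustrative, fields abbreviated)
[updates]
name=Fedora $releasever - $basearch - Updates
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
countme=1
```

Setting countme=0 (or removing the line) disables the mechanism for that repository.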

Adding support for rpm-ostree based systems

The current mechanism works great for classic editions of Fedora (Workstation, Server, Spins, etc.). However, rpm-ostree based systems (such as Fedora Silverblue, Fedora IoT and Fedora CoreOS) do not fetch any repository metadata in the default case. This means they cannot take advantage of this mechanism. We thus decided to implement a stand-alone method, based on the same logic, in rpm-ostree. The new implementation has the same privacy preserving properties as the original DNF implementation.

Timeline

Our new Count Me mechanism will be enabled by default in the upcoming Fedora 34 release for Fedora IoT and Fedora Silverblue. This will occur for both upgraded machines and for new installs. For instructions on opting out, see below.

Since Fedora CoreOS is an automatically updating operating system, existing machines will adopt the Count Me logic without user intervention. However, counting will be enabled approximately three months after publication of this article. This delay is to ensure that users have time to opt out if they prefer to do so. Thus, default counting will be enabled starting with the testing and next Fedora CoreOS releases that will be published at the beginning of August 2021 and in the stable release that will go out two weeks after.

More information is available in the tracking issue for Fedora CoreOS.

Opting out of counting

Full instructions on disabling this functionality are available in the rpm-ostree documentation. We are reproducing them here for convenience.

Disable the process

You can disable counting by stopping the rpm-ostree-countme.timer and masking the corresponding unit, as a precaution:

$ systemctl mask --now rpm-ostree-countme.timer

Execute that command in advance to disable the default counting when you update to Fedora 34.

Modify your Butane configuration

Fedora CoreOS users can use the same systemctl command to manually mask the unit. You may also use the following snippet as part of your Butane config to disable counting on first boot via Ignition:

variant: fcos
version: 1.3.0
systemd:
  units:
    - name: rpm-ostree-countme.timer
      enabled: false
      mask: true

Fedora CoreOS documentation contains details about using the Butane config snippet and how Fedora CoreOS is provisioned.

Contribute to Fedora Kernel 5.12 Test Week

Friday 7th of May 2021 08:00:00 AM

The kernel team is working on final integration for kernel 5.12. This version was recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, May 09, 2021 through Sunday, May 16, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in written form.

Happy testing, and we hope to see you on test day.

Introducing the Fedora i3 Spin

Wednesday 5th of May 2021 08:00:00 AM

Fedora 34 features the brand new i3 Spin created by the Fedora i3 S.I.G. This new spin features the popular i3wm tiling window manager, which will appeal to both novices and advanced users who prefer not to use a mouse, touchpad, or other pointing device to interact with their environment. The Fedora i3 Spin offers a complete experience with a minimalistic user interface and a lightweight environment. It is intended for the power user, as well as others.

The i3 Experience

The i3 window manager is designed and developed with power users and developers in mind. Its keyboard shortcuts, however, also appeal to users who prefer not to use a mouse, touchpad, or other pointing device to interact with their environment. Our intention is to bring this experience to all users, giving them a lightweight environment, usable and extendable, where people can just work.
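
As a small taste of that keyboard-driven experience, here are a few bindings in the spirit of the stock i3 defaults; the exact configuration shipped by the spin may differ:

```
# ~/.config/i3/config (excerpt in the style of the i3 defaults)
set $mod Mod4

# start a terminal
bindsym $mod+Return exec i3-sensible-terminal

# change focus with the arrow keys
bindsym $mod+Left focus left
bindsym $mod+Right focus right

# switch to workspace 2
bindsym $mod+2 workspace number 2
```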

Design Goals

The Fedora i3 S.I.G.’s work is based on the design goals for the project. These goals determine what we decide to include and how we tune or customize the contents of the Fedora i3 Spin.

The following is a list of the packages included. Others may be added from the Fedora Linux repository as required. Keep in mind that this is a minimalist spin.

Thunar

Thunar file manager is a modern file manager for the Xfce Desktop Environment. It is designed from the ground up to be fast, easy-to-use, lightweight, and full featured.

Mousepad

Mousepad text editor aims to be an easy-to-use and fast editor. While not a complete development environment, it is powerful enough to read code and highlight syntax.

Azote

Azote is a GTK+3-based picture browser and background setter. The user interface is designed with multi-display setups in mind. Azote includes several color management tools.

network-manager-applet

nm-applet is a small GTK+3-based front-end for NetworkManager. It allows you to control, configure, and use your network. It covers everything from wired to wireless connections, including VPN management.

Firefox

Firefox is the default web browser chosen by the Fedora Project to be included in the different projects we ship. While not the lightest weight browser, it is the standard for Fedora Linux.

Get Fedora i3 Spin

Fedora 34 with i3wm is available for download here. For support, you can visit the #fedora-i3 channel on Freenode IRC or use the Users Mailing List.

Configure WireGuard VPNs with NetworkManager

Monday 3rd of May 2021 08:00:00 AM

Virtual Private Networks (VPNs) are used extensively. Nowadays there are many solutions available that give users access to any kind of resource while maintaining their confidentiality and privacy.

Lately, one of the most commonly used VPN protocols is WireGuard, because of its simplicity, speed, and the security it offers. WireGuard’s implementation started in the Linux kernel, but it is now available on other platforms such as iOS and Android, among others.

WireGuard uses UDP as its transport protocol, and it bases communication between peers on Cryptokey Routing (CKR). Each peer, either server or client, has a pair of keys (public and private), and each public key is linked to a list of allowed IPs it may communicate with. For further information about WireGuard, please visit its page.

This article describes how to set up WireGuard between two peers: PeerA and PeerB. Both nodes are running Fedora Linux and both are using NetworkManager for a persistent configuration.

WireGuard set up and networking configuration

You are only three steps away from having a persistent VPN connection between PeerA and PeerB:

  1. Install the required packages.
  2. Generate key pairs.
  3. Configure the WireGuard interfaces.
Installation

Install the wireguard-tools package on both peers (PeerA and PeerB):

$ sudo -i
# dnf -y install wireguard-tools

This package is available in the Fedora Linux updates repository. It creates a configuration directory at /etc/wireguard/. This is where you will create the keys and the interface configuration file.

Generate the key pairs

Next, use the wg utility to generate both public and private keys on each node:

# cd /etc/wireguard
# wg genkey | tee privatekey | wg pubkey > publickey

Configure the WireGuard interface on PeerA

WireGuard interfaces use the names: wg0, wg1 and so on. Create the configuration for the WireGuard interface. For this, you need the following items:

  • The IP address and MASK you want to configure in the PeerA node.
  • The UDP port where this peer listens.
  • PeerA’s private key.
# cat << EOF > /etc/wireguard/wg0.conf
[Interface]
Address = 172.16.1.254/24
SaveConfig = true
ListenPort = 60001
PrivateKey = mAoO2RxlqRvCZZoHhUDiW3+zAazcZoELrYbgl+TpPEc=

[Peer]
PublicKey = IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
AllowedIPs = 172.16.1.2/32
EOF

Allow UDP traffic through the port on which this peer will listen:

# firewall-cmd --add-port=60001/udp --permanent --zone=public
# firewall-cmd --reload
success

Finally, import the interface profile into NetworkManager. As a result, the WireGuard interface will persist after reboots.

# nmcli con import type wireguard file /etc/wireguard/wg0.conf
Connection 'wg0' (21d939af-9e55-4df2-bacf-a13a4a488377) successfully added.

Verify the status of device wg0:

# wg
interface: wg0
  public key: FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
  private key: (hidden)
  listening port: 60001

peer: IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
  allowed ips: 172.16.1.2/32

# nmcli -p device show wg0
===============================================================================
                             Device details (wg0)
===============================================================================
GENERAL.DEVICE:                         wg0
-------------------------------------------------------------------------------
GENERAL.TYPE:                           wireguard
-------------------------------------------------------------------------------
GENERAL.HWADDR:                         (unknown)
-------------------------------------------------------------------------------
GENERAL.MTU:                            1420
-------------------------------------------------------------------------------
GENERAL.STATE:                          100 (connected)
-------------------------------------------------------------------------------
GENERAL.CONNECTION:                     wg0
-------------------------------------------------------------------------------
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveC>
-------------------------------------------------------------------------------
IP4.ADDRESS[1]:                         172.16.1.254/24
IP4.GATEWAY:                            --
IP4.ROUTE[1]:                           dst = 172.16.1.0/24, nh = 0.0.0.0, mt =>
-------------------------------------------------------------------------------
IP6.GATEWAY:                            --
-------------------------------------------------------------------------------

The above output shows that interface wg0 is connected. It is now able to communicate with one peer whose VPN IP address is 172.16.1.2.

Configure the WireGuard interface in PeerB

It is time to create the configuration file for the wg0 interface on the second peer. Make sure you have the following:

  • The IP address and MASK to set on PeerB.
  • PeerB’s private key.
  • PeerA’s public key.
  • PeerA’s IP address or hostname, and the UDP port on which it is listening for WireGuard traffic.
# cat << EOF > /etc/wireguard/wg0.conf
[Interface]
Address = 172.16.1.2
SaveConfig = true
PrivateKey = UBiF85o7937fBK84c2qLFQwEr6eDhLSJsb5SAq1lF3c=

[Peer]
PublicKey = FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
AllowedIPs = 172.16.1.254/32
Endpoint = peera.example.com:60001
EOF

The last step is importing the interface profile into NetworkManager. As mentioned before, this gives the WireGuard interface a persistent configuration across reboots.

# nmcli con import type wireguard file /etc/wireguard/wg0.conf
Connection 'wg0' (39bdaba7-8d91-4334-bc8f-85fa978777d8) successfully added.

Verify the status of device wg0:

# wg
interface: wg0
  public key: IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
  private key: (hidden)
  listening port: 47749

peer: FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
  endpoint: 192.168.124.230:60001
  allowed ips: 172.16.1.254/32

# nmcli -p device show wg0
===============================================================================
                             Device details (wg0)
===============================================================================
GENERAL.DEVICE:                         wg0
-------------------------------------------------------------------------------
GENERAL.TYPE:                           wireguard
-------------------------------------------------------------------------------
GENERAL.HWADDR:                         (unknown)
-------------------------------------------------------------------------------
GENERAL.MTU:                            1420
-------------------------------------------------------------------------------
GENERAL.STATE:                          100 (connected)
-------------------------------------------------------------------------------
GENERAL.CONNECTION:                     wg0
-------------------------------------------------------------------------------
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveC>
-------------------------------------------------------------------------------
IP4.ADDRESS[1]:                         172.16.1.2/32
IP4.GATEWAY:                            --
-------------------------------------------------------------------------------
IP6.GATEWAY:                            --
-------------------------------------------------------------------------------

The above output shows that interface wg0 is connected. It is now able to communicate with one peer whose VPN IP address is 172.16.1.254.

Verify connectivity between peers

After completing the procedure described above, both peers can communicate with each other through the VPN connection, as demonstrated in the following ICMP test:

[root@peerb ~]# ping 172.16.1.254 -c 4
PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from 172.16.1.254: icmp_seq=2 ttl=64 time=1.33 ms
64 bytes from 172.16.1.254: icmp_seq=3 ttl=64 time=1.67 ms
64 bytes from 172.16.1.254: icmp_seq=4 ttl=64 time=1.47 ms

In this scenario, if you capture UDP traffic on port 60001 on PeerA, you will see the communication relying on the WireGuard protocol and the encrypted data:

Capture of UDP traffic between peers relying on the WireGuard protocol

Conclusion

Virtual Private Networks (VPNs) are very common. Among a wide variety of protocols and tools for deploying a VPN, WireGuard is a simple, lightweight, and secure choice. It allows secure point-to-point connections between peers based on Cryptokey Routing, and the procedure is very straightforward. In addition, NetworkManager supports WireGuard interfaces, allowing persistent configurations after reboots.

Access freenode using Matrix clients

Friday 30th of April 2021 08:00:53 AM

Matrix (also written [matrix]) is an open source project and a communication protocol. The protocol standard is open and it is free to use or implement. Matrix is being recognized as a modern successor to the older Internet Relay Chat (IRC) protocol. Mozilla, KDE, FOSDEM and GNOME are among several large projects that have started using chat clients and servers that operate over the Matrix protocol. Members of the Fedora project have discussed whether or not the community should switch to using the Matrix protocol.

The Matrix project has implemented an IRC bridge to enable communication between IRC networks (for example, freenode) and Matrix homeservers. This article is a guide on how to register, identify and join freenode channels from a Matrix client via the Matrix IRC bridge.

Check out Beginner’s guide to IRC for more information about IRC.

Preparation

You need to set everything up before you register a nick (a username).

Install a client

Before you use the IRC bridge, you need to install a Matrix client. This guide will use Element. Other Matrix clients are available.

First, install the Matrix client Element from Flathub on your PC. Alternatively, browse to element.io to run the Element client directly in your browser.

Next, click Create Account to register a new account on matrix.org (a homeserver hosted by the Matrix project).

Create rooms

For the IRC bridge, you need to create rooms with the required users.

First, click the (plus) button next to People on the left side in Element and type @appservice-irc:matrix.org in the field to create a new room with the user.

Second, create another new room with @freenode_NickServ:matrix.org.

Register a nick at freenode

If you have already registered a nick at freenode, skip the remainder of this section.

Registering a nickname is optional, but strongly recommended. Many freenode channels require a registered nickname to join.

First, open the room with appservice-irc and enter the following:

!nick <your_nick>

Substitute <your_nick> with the username you want to use. If the nick is already taken, NickServ will send you the following message:

This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify <password>.

If you receive the above message, use another nick.

Second, open the room with NickServ and enter the following:

REGISTER <your_password> <your_email@example.com>

You will receive a verification email from freenode. The email will contain a verification command similar to the following:

/msg NickServ VERIFY REGISTER <your_nick> <verification_code>

Ignore /msg NickServ at the start of the command. Enter the remainder of the command in the room with NickServ. Be quick! You will have 24 hours to verify before the code expires.

Identify your nick at freenode

If you just registered a new nick using the procedure in the previous section, then you should already be identified. If you are already identified, skip the remainder of this section.

First, open the room with @appservice-irc:matrix.org and enter the following:

!nick <your_nick>

Next, open the room with @freenode_NickServ:matrix.org and enter the following:

IDENTIFY <your_nick> <your_password>

Join a freenode channel

To join a freenode channel, press the (plus) button next to Rooms on the left side in Element and type #freenode_#<your_channel>:matrix.org. Substitute <your_channel> with the freenode channel you want to join. For example, to join the #fedora channel, use #freenode_#fedora:matrix.org. For a list of Fedora Project IRC channels, see Communicating_and_getting_help — IRC_for_interactive_community_support.
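
The alias scheme above is mechanical, so it can be captured in a tiny helper. The function name below is my own for illustration, not part of any Matrix API:

```python
def freenode_room_alias(channel: str) -> str:
    """Build the Matrix room alias the IRC bridge uses for a freenode channel.

    The channel may be given with or without its leading '#'.
    """
    name = channel.lstrip("#")
    return f"#freenode_#{name}:matrix.org"

# For the #fedora channel:
print(freenode_room_alias("#fedora"))  # #freenode_#fedora:matrix.org
```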

How to rebase to Fedora 34 on Silverblue

Wednesday 28th of April 2021 08:00:00 AM

Silverblue is an operating system for your desktop built on Fedora. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update to Fedora 34 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.

Prior to actually doing the rebase to Fedora 34, you should apply any pending updates. Enter the following in the terminal:

$ rpm-ostree update

or install updates through GNOME Software and reboot.

Rebasing using GNOME Software

GNOME Software shows you that there is a new version of Fedora Silverblue available on the Updates screen.

Fedora 34 is available

The first thing you need to do is download the new image, so click on the Download button. This will take some time. After it’s done, you will see that the update is ready to install.

Fedora 34 is ready for installation

Click on the Install button. This step will take only a few moments and the computer will be restarted at the end. After the restart you will end up in the new and shiny release of Fedora 34. Easy, isn’t it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this next guide is for you.

Rebasing to Fedora 34 using the terminal is easy. First, check if the 34 branch is available:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/34/x86_64/silverblue

Next, rebase your system to the Fedora 34 branch.

$ rpm-ostree rebase fedora:fedora/34/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora 34.

How to roll back

If anything bad happens—for instance, if you can’t boot to Fedora 34 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot (if you don’t see it, try to press ESC during boot), and your system will start in its previous state before switching to Fedora 34. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase Silverblue to Fedora 34 and roll back. So why not do it today?

What’s new in Fedora Workstation 34

Tuesday 27th of April 2021 05:29:25 PM

Fedora Workstation 34 is the latest version of our leading-edge operating system, and this time there are major improvements heading your way. Best of all, you can download it from the official website. What’s new, I hear you ask? Well, let’s get to it.

GNOME 40

GNOME 40 is a major update to the GNOME desktop, which Fedora community members played a key role in designing and implementing, so you can be sure that the needs of Fedora users were taken into account.

The first thing you notice as you log into the GNOME 40 desktop is that you are now taken directly to a redesigned overview screen. You will notice that the dash bar has moved to the bottom of the screen. Another major change in GNOME 40 is that the virtual workspaces are now horizontal, which brings GNOME more in line with most other desktops out there and should thus make getting used to GNOME and Fedora easier for new users.

Work has also been done to improve gesture support in the desktop with 3-finger horizontal swipes for switching workspaces, and 3-finger vertical swipes for bringing up the overview.

The updated overview design brings a collection of other improvements, including:

  • The dash now separates favorite and non-favorite running apps. This makes it clear which apps have been favorited and which haven’t.
  • Window thumbnails have been improved, and now have an app icon over each one, to help identification.
  • When workspaces are set to be on all displays, the workspace switcher is now shown on all displays rather than just the primary one.
  • App launcher drag and drop has been improved, to make it easier to customize the arrangement of the app grid.

The changes in GNOME 40 underwent a good deal of user testing, and have had a very positive reaction so far, so we’re excited to be introducing them to the Fedora community. For more information, see forty.gnome.org or the GNOME 40 release notes.

App Improvements

GNOME Weather has been redesigned for this release with two views, one for the hourly forecast for the next 48 hours, and one for the daily forecast for the next 10 days.

The new version now shows more information, and is more mobile-friendly, as it supports narrower sizes.

Other apps which have been improved include Files, Maps, Software and Settings. See the GNOME 40 release notes for more details.

PipeWire

PipeWire is the new audio and video server, created by Wim Taymans, who also co-created the GStreamer multimedia framework. Until now, it has only been used for video capture, but in Fedora Workstation 34 we are making the jump to also use it for audio, replacing PulseAudio.

PipeWire is designed to be compatible with both PulseAudio and Jack, so applications should generally work as before. We have also worked with Firefox and Chrome to ensure that they work well with PipeWire. PipeWire support is also coming soon in OBS Studio, so if you are a podcaster, we’ve got you covered.

PipeWire has had a very positive reception from the pro-audio community. It is prudent to say that there may be pro-audio applications that will not work 100% from day one, but we are receiving a constant stream of test reports and patches, which we will be using to continue improving the pro-audio PipeWire experience during the Fedora Workstation 34 lifecycle.

Improved Wayland support

Support for running Wayland on top of the proprietary NVIDIA driver is expected to be resolved within the Fedora Workstation 34 lifetime. Support for running a pure Wayland client on the NVIDIA driver already exists. However, this currently lacks support for the Xwayland compatibility layer, which is used by many applications. This is why Fedora still defaults to X.Org when you install the NVIDIA driver.

We are working upstream with NVIDIA to ensure Xwayland works in Fedora with NVIDIA hardware acceleration.

QtGNOME platform and Adwaita-Qt

Jan Grulich has continued his great work on the QtGNOME platform and Adwaita-Qt themes, ensuring that Qt applications integrate well with Fedora Workstation. The Adwaita theme that we use in Fedora has evolved over the years, but with the updates to QtGNOME platform and Adwaita-Qt in Fedora 34, Qt applications will more closely match the current GTK style in Fedora Workstation 34.

As part of this work, the appearance and styling of Fedora Media Writer has also been improved.

Toolbox

Toolbox is our great tool for creating development environments that are isolated from your host system, and it has seen lots of improvements for Fedora 34. For instance, we have put a lot of work into improving the CI system integration for Toolbox, to avoid breakages in our stack causing Toolbox to stop working.

A lot of work has been put into the RHEL integration in Toolbox, which means that you can easily set up a containerized RHEL environment on a Fedora system, and thus conveniently do development for RHEL servers and cloud instances. Creating a RHEL environment on Fedora is now as easy as running: toolbox create --distro rhel --release 8.4.

This gives you the advantage of an up to date desktop which supports the latest hardware, while being able to do RHEL-targeted development in a way that feels completely native.

Btrfs

Fedora Workstation has been using Btrfs as its default file system since Fedora 33. Btrfs is a modern filesystem that is developed by many companies and projects. Workstation’s adoption of Btrfs came about through fantastic collaboration between Facebook and the Fedora community. Based on user feedback so far, people feel that Btrfs provides a snappier and more responsive experience, compared with the old ext4 filesystem.

With Fedora 34, new workstation installs now use Btrfs transparent compression by default. This saves significant disk space compared with uncompressed Btrfs, often in the range of 20-40%. It also increases the lifespan of SSDs and other flash media.
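For reference, the transparent compression is controlled by a mount option on the Btrfs subvolumes. A new Fedora 34 install mounts its subvolumes with zstd compression at level 1; a corresponding /etc/fstab entry looks roughly like the sketch below (the UUID is a placeholder, and the subvolume names follow the default layout):

```ini
# Sketch of /etc/fstab entries with Btrfs transparent compression enabled.
# UUID is a placeholder; check your own system with: findmnt -t btrfs
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /     btrfs subvol=root,compress=zstd:1 0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home btrfs subvol=home,compress=zstd:1 0 0
```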

Fedora Linux 34 is officially here!

Tuesday 27th of April 2021 02:00:00 PM

Today, I’m excited to share the results of the hard work of thousands of contributors to the Fedora Project: our latest release, Fedora Linux 34, is here! I know a lot of you have been waiting… I’ve seen more “is it out yet???” anticipation on social media and forums than I can remember for any previous release. So, if you want, wait no longer — upgrade now or go to Get Fedora to download an install image. Or, if you’d like to learn more first, read on. 

The first thing you might notice is our beautiful new logo. Developed by the Fedora Design Team with input from the wider community, this new logo solves a lot of the technical problems with our old logo while keeping its Fedoraness. Stay tuned for new Fedora swag featuring the new design!

A Fedora Linux for every use case

Fedora Editions are targeted outputs geared toward specific “showcase” uses on the desktop, in server & cloud environments, and the Internet of Things.

Fedora Workstation focuses on the desktop, and in particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features GNOME 40, the next step in focused, distraction-free computing. GNOME 40 brings improvements to navigation whether you use a trackpad, a keyboard, or a mouse. The app grid and settings have been redesigned to make interaction more intuitive. You can read more about what changed and why in a Fedora Magazine article from March.

Fedora CoreOS is an emerging Fedora Edition. It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several update streams that can be followed for automatic updates that occur roughly every two weeks. Currently the next stream is based on Fedora Linux 34, with the testing and stable streams to follow. You can find information about released artifacts that follow the next stream from the download page and information about how to use those artifacts in the Fedora CoreOS Documentation.

Fedora IoT provides a strong foundation for IoT ecosystems and edge computing use cases. With this release, we’ve improved support for popular ARM devices like Pine64, RockPro64, and Jetson Xavier NX. Some i.MX8 system on a chip devices like the 96boards Thor96 and Solid Run HummingBoard-M have improved hardware support. In addition, Fedora IoT 34 improves support for hardware watchdogs for automated system recovery.

Of course, we produce more than just the Editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora Jam, which allows you to unleash your inner musician, and desktop environments like the new Fedora i3 Spin, which provides a tiling window manager. And, don’t forget our alternate architectures: ARM AArch64, Power, and S390x.

General improvements

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “First” foundation, we’ve updated key programming language and system library packages, including Ruby 3.0 and Golang 1.16. In Fedora KDE Plasma, we’ve switched from X11 to Wayland as the default.

Following the introduction of Btrfs as the default filesystem on desktop variants in Fedora Linux 33, we’ve introduced transparent compression on Btrfs filesystems.

We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running Fedora Linux, follow the easy upgrade instructions. For more information on the new features in Fedora Linux 34, see the release notes.

In the unlikely event of a problem…

If you run into a problem, check out the Fedora 34 Common Bugs page, and if you have questions, visit our Ask Fedora user-support platform.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other. Be sure to join us on April 30 and May 1 for a virtual release party!

Exploring the world of declarative programming

Monday 26th of April 2021 08:00:00 AM
Introduction

Most of us use imperative programming languages like C, Python, or Java at home. But the universe of programming languages is endless, and there are languages where no imperative command has gone before. What may sound impossible at first glance is feasible with Prolog and other so-called declarative languages. This article will demonstrate how to split a programming task between Python and Prolog.

In this article I do not want to teach Prolog. There are resources available for that. We will demonstrate how simple it is to solve a puzzle solely by describing the solution. After that it is up to the reader how far this idea will take them.

To proceed, you should have a basic understanding of Python. Installation of Prolog and the Python-Prolog bridge is accomplished using this command:

$ sudo dnf install pl python3-pyswip

Our exploration uses SWI-Prolog, an actively developed Prolog which has the Fedora package name “pl”. The Python/SWI-Prolog bridge is pyswip.

If you are a bold adventurer you are welcome to follow me exploring the world of declarative programming.

Puzzle

The example problem for our exploration will be a puzzle similar to what you may have seen before.

How many triangles are there?

Getting started

Get started by opening a fresh text file with your favorite text editor. Copy all three text blocks in the sections below (Input, Process and Output) together into one file.

Input

This section sets up access to the Prolog interface and defines data for the problem. This is a simple case so it is fastest to write the data lines by hand. In larger problems you may get your input data from a file or from a database.

#!/usr/bin/python
from pyswip import Prolog

prolog = Prolog()
prolog.assertz("line([a, e, k])")
prolog.assertz("line([a, d, f, j])")
prolog.assertz("line([a, c, g, i])")
prolog.assertz("line([a, b, h])")
prolog.assertz("line([b, c, d, e])")
prolog.assertz("line([e, f, g, h])")
prolog.assertz("line([h, i, j, k])")
  • The first line is the UNIX way to tell that this text file is a Python program.
    Don’t forget to make your file executable by using chmod +x yourfile.py.
  • The second line imports a Python module which provides the Python/Prolog bridge.
  • The third line makes a Prolog instance available inside Python.
  • The next lines are puzzle-related. They describe the picture you see above.
    Single small letters stand for concrete points.
    [a,e,k] is the Prolog way to describe a list of three points.
    line() declares that it is true that the list inside the parentheses is a line.

The idea is to let Python do the work and to feed Prolog.

“Process”

This section title is quoted because nothing is actually processed here. This is simply the description (declaration) of the solution.

There is no single variable which gets a new value. Technically the processing is done in the section titled Output below where you find the command prolog.query().

prolog.assertz("""
triangle(A, B, C) :-
    line(L1),
    line(L2),
    line(L3),
    L1 \= L2,
    member(A, L1),
    member(B, L1),
    member(A, L2),
    member(C, L2),
    member(B, L3),
    member(C, L3),
    A @< B, B @< C""")

First of all: All capital letters and strings starting with a capital letter are Prolog variables!

The statements here are the description of what a triangle is and you can read this like:

  • If all the goals after “:-“ are true, then triangle(A, B, C) is true and describes a triangle.
  • There must exist three lines (L1 to L3).
  • Two lines must be different. “\=” means not equal in Prolog. We do not want to count a triangle where all three points are on the same line! So we check if at least two different lines are used.
  • member() is a Prolog predicate which is true if the first argument is inside the second argument which must be a list. In sum these six lines express that the three points must be pairwise on different lines.
  • The last two lines are only true if the three points are in alphabetical order. (“@<” compares terms in Prolog.) This is necessary, otherwise [a, h, k] and [a, k, h] would count as two triangles. Also, the case where a triangle contains the same point two or even three times is excluded by these final two lines.

As you can see, it is often not that obvious what defines a triangle. But for a computational approach you must be rather strict and rigorous.
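The declarative definition above can also be cross-checked imperatively. The following plain-Python sketch is not part of the original program; it copies the line data from the Prolog facts and counts triangles by requiring each pair of points to share a line while no single line contains all three points:

```python
from itertools import combinations

# The same seven lines as in the Prolog facts.
lines = ["aek", "adfj", "acgi", "abh", "bcde", "efgh", "hijk"]
points = sorted(set("".join(lines)))

def share_line(p, q):
    """True if both points lie on a common line."""
    return any(p in l and q in l for l in lines)

triangles = [
    (a, b, c)
    for a, b, c in combinations(points, 3)  # already in alphabetical order
    if share_line(a, b) and share_line(a, c) and share_line(b, c)
    # exclude degenerate "triangles" whose three points are collinear
    and not any(a in l and b in l and c in l for l in lines)
]
print("There are", len(triangles), "triangles.")
```

Running it reports the same 24 triangles as the Prolog query, which is a nice sanity check that the declaration captures the intended geometry.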

Output

After the hard work in the process chapter the rest is easy. Just have Python ask Prolog to search for triangles and count them all.

total = 0
for result in prolog.query("triangle(A, B, C)"):
    print(result)
    total += 1
print("There are", total, "triangles.")

Run the program using this command in the directory containing yourfile.py:

./yourfile.py

The output shows the listing of each triangle found and the final count.

{'A': 'a', 'B': 'e', 'C': 'f'}
{'A': 'a', 'B': 'e', 'C': 'g'}
{'A': 'a', 'B': 'e', 'C': 'h'}
{'A': 'a', 'B': 'd', 'C': 'e'}
{'A': 'a', 'B': 'j', 'C': 'k'}
{'A': 'a', 'B': 'f', 'C': 'g'}
{'A': 'a', 'B': 'f', 'C': 'h'}
{'A': 'a', 'B': 'c', 'C': 'e'}
{'A': 'a', 'B': 'i', 'C': 'k'}
{'A': 'a', 'B': 'c', 'C': 'd'}
{'A': 'a', 'B': 'i', 'C': 'j'}
{'A': 'a', 'B': 'g', 'C': 'h'}
{'A': 'a', 'B': 'b', 'C': 'e'}
{'A': 'a', 'B': 'h', 'C': 'k'}
{'A': 'a', 'B': 'b', 'C': 'd'}
{'A': 'a', 'B': 'h', 'C': 'j'}
{'A': 'a', 'B': 'b', 'C': 'c'}
{'A': 'a', 'B': 'h', 'C': 'i'}
{'A': 'd', 'B': 'e', 'C': 'f'}
{'A': 'c', 'B': 'e', 'C': 'g'}
{'A': 'b', 'B': 'e', 'C': 'h'}
{'A': 'e', 'B': 'h', 'C': 'k'}
{'A': 'f', 'B': 'h', 'C': 'j'}
{'A': 'g', 'B': 'h', 'C': 'i'}
There are 24 triangles.

There are certainly more elegant ways to display this output but the point is:
Python should do the output handling for Prolog.

If you are a star programmer you can make the output look like this:

***************************
* There are 24 triangles. *
***************************

Conclusion

Splitting a programming task between Python and Prolog makes it easy to keep the Prolog part pure and monotonic, which is good for logic reasoning. It is also easy to handle the input and output with Python.

Be aware that Prolog is a bit more complicated and can do much more than what I explained here. You can find a really good and modern introduction here: The Power of Prolog.

Contribute at the Fedora 34 CoreOS Test Day

Friday 23rd of April 2021 08:00:00 AM

The Fedora CoreOS team released the first Fedora CoreOS next stream based on Fedora 34. They expect to promote this to the testing stream in two weeks, on the usual schedule. As a result, the Fedora CoreOS and QA teams have organized a test week. It begins Monday, April 26, 2021 and runs through the end of the week. Refer to the wiki page for links to the test cases and materials you’ll need to participate. Read below for details.

How does a test day work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Build smaller containers

Wednesday 21st of April 2021 08:00:00 AM

Working with containers is a daily task for many users and developers. Container developers often need to (re)build container images frequently. If you develop containers, have you ever thought about reducing the image size? Smaller images have several benefits. They require less bandwidth to download and they save costs when run in cloud environments. Also, using smaller container images on Fedora CoreOS, IoT and Silverblue improves overall system performance because those operating systems rely heavily on container workflows. This article will provide a few tips for reducing the size of container images.

The tools

The host operating system in the following examples is Fedora Linux 33. The examples use Podman 3.1.0 and Buildah 1.2.0. Podman and Buildah are pre-installed in most Fedora Linux variants. If you don’t have Podman or Buildah installed, run the following command to install them.

$ sudo dnf install -y podman buildah

The task

Begin with a basic example. Build a web container meeting the following requirements.

  • The container must be based on Fedora Linux
  • Use the Apache httpd web server
  • Include a custom website
  • The container should be relatively small

The following steps will also work on more complex images.

The setup

First, create a project directory. This directory will include your website and container file.

$ mkdir smallerContainer
$ cd smallerContainer
$ mkdir files
$ touch files/index.html

Make a simple landing page. For this demonstration, you may copy the below HTML into the index.html file.

<!doctype html>
<html lang="de">
<head>
  <title>Container Page</title>
</head>
<body>
  <header>
    <h1>Container Page</h1>
  </header>
  <main>
    <h2>Fedora</h2>
    <ul>
      <li><a href="https://getfedora.org">Fedora Project</a></li>
      <li><a href="https://docs.fedoraproject.org/">Fedora Documentation</a></li>
      <li><a href="https://fedoramagazine.org">Fedora Magazine</a></li>
      <li><a href="https://communityblog.fedoraproject.org/">Fedora Community Blog</a></li>
    </ul>
    <h2>Podman</h2>
    <ul>
      <li><a href="https://podman.io">Podman</a></li>
      <li><a href="https://docs.podman.io/">Podman Documentation</a></li>
      <li><a href="https://github.com/containers/podman">Podman Code</a></li>
      <li><a href="https://podman.io/blogs/">Podman Blog</a></li>
    </ul>
    <h2>Buildah</h2>
    <ul>
      <li><a href="https://buildah.io">Buildah</a></li>
      <li><a href="https://github.com/containers/buildah">Buildah Code</a></li>
      <li><a href="https://buildah.io/blogs/">Buildah Blog</a></li>
    </ul>
    <h2>Skopeo</h2>
    <ul>
      <li><a href="https://github.com/containers/skopeo">skopeo Code</a></li>
    </ul>
    <h2>CRI-O</h2>
    <ul>
      <li><a href="https://cri-o.io/">CRI-O</a></li>
      <li><a href="https://github.com/cri-o/cri-o">CRI-O Code</a></li>
      <li><a href="https://medium.com/cri-o">CRI-O Blog</a></li>
    </ul>
  </main>
</body>
</html>

Optionally, test the above index.html file in your browser.

$ firefox files/index.html

Finally, create a container file. The file can be named either Dockerfile or Containerfile.

$ touch Containerfile

You should now have a project directory with a file system layout similar to what is shown in the below diagram.

smallerContainer/
|- files/
|  |- index.html
|
|- Containerfile

The build

Now make the image. Each of the below stages will add a layer of improvements to help reduce the size of the image. You will end up with a series of images, but only one Containerfile.

Stage 0: a baseline container image

Your new image will be very simple and it will only include the mandatory steps. Place the following text in Containerfile.

# Use Fedora 33 as base image
FROM registry.fedoraproject.org/fedora:33

# Install httpd
RUN dnf install -y httpd

# Copy the website
COPY files/* /var/www/html/

# Expose Port 80/tcp
EXPOSE 80

# Start httpd
CMD ["httpd", "-DFOREGROUND"]

In the above file there are some comments to indicate what is being done. More verbosely, the steps are:

  1. Create a build container with the base FROM registry.fedoraproject.org/fedora:33
  2. RUN the command: dnf install -y httpd
  3. COPY files relative to the Containerfile to the container
  4. Set EXPOSE 80 to indicate which port is auto-publishable
  5. Set a CMD to indicate what should be run if one creates a container from this image

Run the below command to create a new image from the project directory.

$ podman image build -f Containerfile -t localhost/web-base

Use the following command to examine your image’s attributes. Note in particular the size of your image (467 MB).

$ podman image ls
REPOSITORY                         TAG     IMAGE ID      CREATED        SIZE
localhost/web-base                 latest  ac8c5ed73bb5  5 minutes ago  467 MB
registry.fedoraproject.org/fedora  33      9f2a56037643  3 months ago   182 MB

The example image shown above is currently occupying 467 MB of storage. The remaining stages should reduce the size of the image significantly. But first, verify that the image works as intended.

Enter the following command to start the container.

$ podman container run -d --name web-base -P localhost/web-base

Enter the following command to list your containers.

$ podman container ls
CONTAINER ID  IMAGE               COMMAND               CREATED        STATUS            PORTS                  NAMES
d24063487f9f  localhost/web-base  httpd -DFOREGROUN...  2 seconds ago  Up 3 seconds ago  0.0.0.0:46191->80/tcp  web-base

The container shown above is running and it is listening on port 46191. Going to localhost:46191 from a web browser running on the host operating system should render your web page.

$ firefox localhost:46191

Stage 1: clear caches and remove other leftovers from the container

The first step one should always perform to optimize the size of their container image is “clean up”. This will ensure that leftovers from installations and packaging are removed. What exactly this process entails will vary depending on your container. For the above example you can just edit Containerfile to include the following lines.

[...]
# Install httpd
RUN dnf install -y httpd && \
    dnf clean all -y
[...]

Build the modified Containerfile to reduce the size of the image significantly (237 MB in this example).

$ podman image build -f Containerfile -t localhost/web-clean
$ podman image ls
REPOSITORY           TAG     IMAGE ID      CREATED        SIZE
localhost/web-clean  latest  f0f62aece028  6 seconds ago  237 MB

Stage 2: remove documentation and unneeded package dependencies

Many packages will pull in recommendations, weak dependencies and documentation when they are installed. These are often not needed in a container and can be excluded. The dnf command has options to indicate that it should not include weak dependencies or documentation.

Edit Containerfile again and add the options to exclude documentation and weak dependencies on the dnf install line:

[...]
# Install httpd
RUN dnf install -y httpd --nodocs --setopt install_weak_deps=False && \
    dnf clean all -y
[...]

Build Containerfile with the above modifications to achieve an even smaller image (231 MB).

$ podman image build -f Containerfile -t localhost/web-docs
$ podman image ls
REPOSITORY          TAG     IMAGE ID      CREATED        SIZE
localhost/web-docs  latest  8a76820cec2f  8 seconds ago  231 MB

Stage 3: use a smaller container base image

The prior stages, in combination, have reduced the size of the example image by half. But there is still one more thing that can be done to reduce the size of the image. The base image registry.fedoraproject.org/fedora:33 is meant for general purpose use. It provides a collection of packages that many people expect to be pre-installed in their Fedora Linux containers. The collection of packages provided in the general purpose Fedora Linux base image is often more extensive than needed, however. The Fedora Project also provides a fedora-minimal base image for those who wish to start with only the essential packages and then add only what they need to achieve a smaller total image size.

Use podman image search to search for the fedora-minimal image as shown below.

$ podman image search fedora-minimal
INDEX              NAME                                        DESCRIPTION  STARS  OFFICIAL  AUTOMATED
fedoraproject.org  registry.fedoraproject.org/fedora-minimal                0

The fedora-minimal base image excludes DNF in favor of the smaller microDNF which does not require Python. When registry.fedoraproject.org/fedora:33 is replaced with registry.fedoraproject.org/fedora-minimal:33, dnf needs to be replaced with microdnf.

# Use Fedora minimal 33 as base image
FROM registry.fedoraproject.org/fedora-minimal:33

# Install httpd
RUN microdnf install -y httpd --nodocs --setopt install_weak_deps=0 && \
    microdnf clean all -y
[...]

Rebuild the image to see how much storage space has been recovered by using fedora-minimal (169 MB).

$ podman image build -f Containerfile -t localhost/web-minimal
$ podman image ls
REPOSITORY             TAG     IMAGE ID      CREATED        SIZE
localhost/web-minimal  latest  e1603bbb1097  7 minutes ago  169 MB

The initial image size was 467 MB. Combining the methods detailed in each of the above stages has resulted in a final image size of 169 MB. The final total image size is smaller than the original base image size of 182 MB!
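To put those numbers in perspective, here is a quick back-of-the-envelope calculation of the savings at each stage, using the sizes reported by podman image ls above (the snippet is purely illustrative):

```python
# Image sizes in MB, as reported by "podman image ls" in each stage.
sizes = {
    "web-base": 467,     # Stage 0: baseline
    "web-clean": 237,    # Stage 1: caches cleaned
    "web-docs": 231,     # Stage 2: no docs / weak deps
    "web-minimal": 169,  # Stage 3: fedora-minimal base
}

base = sizes["web-base"]
for name, size in sizes.items():
    saved = 100 * (base - size) / base
    print(f"{name}: {size} MB ({saved:.0f}% smaller than web-base)")
```

The final fedora-minimal image comes out roughly 64% smaller than the baseline.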

Building containers from scratch

The previous section used a container file and Podman to build a new image. There is one last thing to demonstrate — building a container from scratch using Buildah. Podman uses the same libraries to build containers as Buildah. But Buildah is considered a pure build tool. Podman is designed to work as a replacement for Docker.

When building from scratch using Buildah, the container is empty — there is nothing in it. Everything needed must be installed or copied from outside the container. Fortunately, this is quite easy with Buildah. Below, a small Bash script is provided which will build the image from scratch. Instead of running the script, you can run each of the commands from the script individually in a terminal to better understand what is being done.

#!/usr/bin/env bash
set -o errexit

# Create a container
CONTAINER=$(buildah from scratch)

# Mount the container filesystem
MOUNTPOINT=$(buildah mount $CONTAINER)

# Install a basic filesystem and minimal set of packages, and httpd
dnf install -y --installroot $MOUNTPOINT --releasever 33 glibc-minimal-langpack httpd --nodocs --setopt install_weak_deps=False
dnf clean all -y --installroot $MOUNTPOINT --releasever 33

# Cleanup
buildah unmount $CONTAINER

# Copy the website
buildah copy $CONTAINER 'files/*' '/var/www/html/'

# Expose Port 80/tcp
buildah config --port 80 $CONTAINER

# Start httpd
buildah config --cmd "httpd -DFOREGROUND" $CONTAINER

# Save the container to an image
buildah commit --squash $CONTAINER web-scratch

Alternatively, the image can be built by passing the above script to Buildah. Notice that root privileges are not required.

$ buildah unshare bash web-scratch.sh
$ podman image ls
REPOSITORY             TAG     IMAGE ID      CREATED        SIZE
localhost/web-scratch  latest  acca45fc9118  9 seconds ago  155 MB

The final image is only 155 MB! Also, the attack surface has been reduced. Not even DNF (or microDNF) is installed in the final image.

Conclusion

Building smaller container images has many advantages. Reducing the needed bandwidth, the disk footprint and attack surface will lead to better images overall. It is easy to reduce the footprint with just a few small changes. Many of the changes can be done without altering the functionality of the resulting image.

It is also possible to build very small images from scratch which will only hold the needed binaries and configuration files.

Something bugging you in Fedora Linux? Let’s get it fixed!

Monday 19th of April 2021 08:00:00 AM

Software has bugs. Any complicated system is guaranteed to have at least some bits that don’t work as planned. Fedora Linux is a very complicated system. It contains thousands of packages created by countless independent upstream projects around the world. There are also hundreds of updates every week. So, it’s inevitable that problems creep in. This article addresses the bug fixing process and how some bugs may be prioritized.

The release development process

As a Linux distribution project, we want to deliver a polished, “everything just works” experience to our users. Our release process starts with “Rawhide”. This is our development area where we integrate new versions of all that updated free and open source software. We’re constantly improving our ongoing testing and continuous integration processes to make even Rawhide safe to use for the adventurous. By its nature, however, Rawhide will always be a little bit rough.

Twice a year we take that rough operating system and branch it for a beta release, and then a final release. As we do that, we make a concerted effort to find problems. We run Test Days to check on specific areas and features. “Candidate builds” are made which are checked against our release validation test plan. We then enter a “freeze” state where only approved changes go into the candidates. This isolates the candidate from the constant development (which still goes into Rawhide!) so new problems are not introduced.

Many bugs, big and small, are squashed as part of the release process. When all goes according to plan, we have a shiny new on-schedule Fedora Linux release for all of our users. (We’ve done this reliably and repeatedly for the last few years — thanks, everyone who works so hard to make it so!) If something is really wrong, we can mark it as a “release blocker”. That means we won’t ship until it’s fixed. This is often appropriate for big issues, and definitely turns up the heat and attention that bug gets.

Sometimes, we have issues that are persistent. Perhaps something that’s been going on for a release or two, or where we don’t have an agreed solution. Some issues are really annoying and frustrating to many users, but individually don’t rise to the level we’d normally block a release for. We can mark these things as blockers. But that is a really big sledgehammer. A blocker may cause the bug to get finally smashed, but it can also cause disruption all around. If the schedule slips, all the other bug fixes and improvements, as well as features people have been working on, don’t get to users.

The Prioritized Bugs process

So, we have another way to address annoying bugs! The Prioritized Bugs process is a different way to highlight issues that result in unpleasantness for a large number of users. There’s no hammer here, but something more like a spotlight. Unlike the release blocker process, the Prioritized Bugs process does not have a strictly-defined set of criteria. Each bug is evaluated based on the breadth and severity of impact.

A team of interested contributors helps curate a short list of issues that need attention. We then work to connect those issues to people who can fix them. This helps take pressure off of the release process, by not tying the issues to any specific deadlines. Ideally, we find and fix things before we even get to the beta stage. We try to keep the list short, no more than a handful, so there truly is a focus. This helps the teams and individuals addressing problems because they know we’re respectful of their often-stretched-thin time and energy.

Through this process, Fedora has resolved dozens of serious and annoying problems. This includes everything from keyboard input glitches to SELinux errors to that thing where gigabytes of old, obsolete package updates would gradually fill up your disk. But we can do a lot more — we actually aren’t getting as many nominations as we can handle. So, if there’s something you know that’s causing long-term frustration or affecting a lot of people and yet which seems to not be reaching a resolution, follow the Prioritized Bugs process and let us know.

You can help

All Fedora contributors are invited to participate in the Prioritized Bugs process. Evaluation meetings occur every two weeks on IRC. Anyone is welcome to join and help us evaluate the nominated bugs. See the calendar for meeting time and location. The Fedora Program Manager sends an agenda to the triage and devel mailing lists the day before meetings.

Bug reports welcome

Big or small, when you find a bug, we really appreciate it if you report it. In many cases, the best place to do that is with the project that creates the software. For example, let’s say there is a problem with the way the Darktable photography software renders images from your digital camera. It’s best to take that to the Darktable developers. For another example, say there’s a problem with the GNOME or KDE desktop environments or with the software that is part of them. Taking these issues to those projects will usually get you the best results.

However, if it’s a Fedora-specific problem, like something with our build or configuration of the software, or a problem with how it’s integrated, don’t hesitate to file a bug with us. This is also true when there is a problem which you know has a fix that we just haven’t included yet.

I know this is kind of complex… it’d be nice to have a one-stop place to handle all of the bugs. But remember that Fedora packagers — the people who do the work of taking upstream software and configuring it to build in our system — are largely volunteers. They are not always the deepest experts in the code for the software they’re working with. When in doubt, you can always file a Fedora bug. The folks in Fedora responsible for the corresponding package can help with their connections to the upstream software project.

Remember, when you find a bug that’s gone through diagnosis and doesn’t yet have a good fix, when you see something that affects a lot of people, or when there’s a long-standing problem that just isn’t getting attention, please nominate it as a Prioritized Bug. We’ll take a look and see what can be done!

PS: The famous image in the header is, of course, from the logbook of the Mark II computer at Harvard where Rear Admiral Grace Murray Hopper worked. But contrary to popular belief about the story, this isn’t the first use of the term “bug” for a systems problem — it was already common in engineering, which is why it was funny to find a literal bug as the cause of an issue. #nowyouknow #jokeexplainer

Use the DNF local plugin to speed up your home lab

Friday 16th of April 2021 08:00:00 AM
Introduction

If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux then you might benefit from the DNF local plugin. An example of someone who would benefit from the DNF local plugin would be an enthusiast who is running a cluster of Raspberry Pis. Another example would be someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions. It accomplishes this by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines will see a significant performance improvement when running dnf with the DNF local plugin enabled.

I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in a 2018 fedoraforum.org post. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux and also on several container-based services, I winced with every DNF update on each Pi or each container that downloaded a duplicate set of rpms across my expensive internet connection. In order to improve this situation, I searched for a solution that would cache rpms for local reuse. I wanted something that would not require any changes to repository configuration files on every machine. I also wanted it to continue to use the network of Fedora Linux mirrors. I didn’t want to use a single mirror for all updates.

Prior art

An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set – create a private Fedora Linux mirror or set up a caching proxy.

Fedora provides guidance on setting up a private mirror. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes.

The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit the repository definitions stored in /etc/yum.repos.d on each virtual and physical machine or container to use the same mirror. Second, I would need to use http rather than https connections, which would introduce a security risk.

After reading Glenn’s 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin.

About the DNF local plugin

The online documentation provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use dnf as before. dnf will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source – an official Fedora Linux repository or a third-party RPM repository – and make them available for the next run of dnf.

Install and configure the DNF local plugin

Install the plugin using dnf. The createrepo_c package will be installed as a dependency. It is used, when needed, to create the local repository.

sudo dnf install python3-dnf-plugin-local

The plugin configuration file is stored at /etc/dnf/plugins/local.conf. An example copy of the file is provided below. The only change required is to set the repodir option. The repodir option defines where on the local filesystem the plugin will keep the RPM repository.

[main]
enabled = true
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local

# Createrepo options. See man createrepo_c
[createrepo]
# This option lets you disable createrepo command. This could be useful
# for large repositories where metadata is periodically generated by cron
# for example. This also has the side effect of only copying the packages
# to the local repo directory.
enabled = true

# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false

Change repodir to the filesystem directory where you want the RPM repository stored. For example, change repodir to /srv/repodir as shown below.

...
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
repodir = /srv/repodir
...

Finally, create the directory if it does not already exist. If this directory does not exist, dnf will display some errors when it first attempts to access the directory. The plugin will create the directory, if necessary, despite the initial errors.

sudo mkdir -p /srv/repodir

Repeat this process on any virtual machine or container that you want to share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below.

How to use the DNF local plugin

After you have installed the plugin, you do not need to change how you use dnf. The plugin will cause a few additional steps to run transparently behind the scenes whenever dnf is called. After dnf determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After dnf has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repository’s metadata. The downloaded rpms will then be available in the local repository for the next dnf client.
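As a quick sanity check (not required for normal operation), you can look inside the repository directory after a dnf transaction on any client. The sketch below assumes the /srv/repodir location used throughout this article:

```shell
# Inspect the shared local repository: a repodata directory shows that
# the plugin has run createrepo_c, and the rpm count grows as clients
# download packages. Assumes the /srv/repodir path used in this article.
REPODIR=/srv/repodir

if [ -d "$REPODIR/repodata" ]; then
    rpm_count=$(find "$REPODIR" -name '*.rpm' | wc -l)
    echo "local repository holds $rpm_count rpm(s)"
else
    echo "local repository not initialized yet"
fi
```

If the repodata directory never appears after a few dnf runs, double-check the repodir setting in /etc/dnf/plugins/local.conf.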

There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is to multiple dnf clients updating the repository metadata concurrently. I therefore run dnf from multiple machines serially rather than in parallel. This may not be a real concern but I want to be cautious.

The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the Code button at https://github.com/buckaroogeek/dnf-local-plugin-examples to clone the repository or to download a zip file.

Use case 1: networked physical machines

The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from. Which file system you will use will probably be influenced by the existing devices on your network.

For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up an NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named nfs-dnf and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an informative post that covers both client and server configurations on Red Hat Enterprise Linux. They translate well to Fedora Linux.

I configured the NFS client on each of my Fedora Linux machines using the steps shown below. In the below example, quga.lan is the hostname of my NAS device.

Install the NFS client on each Fedora Linux machine.

$ sudo dnf install nfs-utils

Get the list of exports from the NFS server:

$ showmount -e quga.lan
Export list for quga.lan:
/volume1/nfs-dnf pi*.lan

Create a local directory to be used as a mount point on the Fedora Linux client:

$ sudo mkdir -p /srv/repodir

Mount the remote file system on the local directory. See man mount for more information and options.

$ sudo mount -t nfs -o vers=4 quga.lan:/volume1/nfs-dnf /srv/repodir

The DNF local plugin will now work as long as the client remains up. If you want the NFS export to be mounted automatically when the client reboots, you must edit /etc/fstab as demonstrated below. I recommend making a backup of /etc/fstab before editing it. You can substitute vi with nano or another editor of your choice if you prefer.

$ sudo vi /etc/fstab

Append the following line at the bottom of /etc/fstab, then save and exit.

quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0

Finally, notify systemd that it should rescan /etc/fstab by issuing the following command.

$ sudo systemctl daemon-reload

NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use firewall-cmd to allow NFS-related network traffic through each Fedora Linux machine’s firewall.

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --reload

As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem.

In the rpi-example directory of the github repository I’ve included an example Ansible playbook (configure.yaml) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (update.yaml) that runs a DNF update across all devices. See this recent post in Fedora Magazine for more information about Ansible.

To use the provided Ansible examples, first update the inventory file (inventory) to include the list of Fedora Linux machines on your network that you want to manage. Next, install two Ansible roles in the roles subdirectory (or another suitable location).

$ ansible-galaxy install --roles-path ./roles -r requirements.yaml

Run the configure.yaml playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via /etc/fstab but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information.

$ ansible-playbook -i inventory configure.yaml

Finally, Ansible can be configured to execute dnf update on all the systems serially by using the update.yaml playbook.

$ ansible-playbook -i inventory update.yaml

Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics.
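For one-off runs, an Ansible ad-hoc command can do much the same job as the update.yaml playbook. This is a sketch, not part of the example repository; it assumes the same inventory file:

```shell
# Update every inventory host one at a time. --forks 1 keeps the dnf
# clients serial, which avoids concurrent writes to the shared local
# repository (see the caution earlier in this article). Guarded so the
# snippet is a no-op on machines without Ansible installed.
if command -v ansible >/dev/null 2>&1; then
    ansible all -i inventory --forks 1 --become \
        -m dnf -a "name='*' state=latest"
fi
```

The playbook remains the better choice for anything you run regularly, since it is versionable and self-documenting.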

Use case 2: virtual machines running on the same host

Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides Fedora Cloud base images for use as virtual machines. Vagrant is a tool for managing virtual machines. Fedora Magazine has instructions on how to set up and configure Vagrant. Add the following line in your .bashrc (or other comparable shell configuration file) to inform Vagrant to use libvirt automatically on your workstation instead of the default VirtualBox.

export VAGRANT_DEFAULT_PROVIDER=libvirt

In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available):

$ vagrant init fedora/33-cloud-base

This creates a Vagrant file in the project directory. Edit the Vagrant file to look like the example below. DNF will likely fail with the default memory settings for libvirt. So the example Vagrant file below provides additional memory to the virtual machine. The example below also shares the host /srv/repodir with the virtual machine. The shared directory will have the same path in the virtual machine – /srv/repodir. The Vagrant file can be downloaded from github.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# define repo directory; same name on host and vm
REPO_DIR = "/srv/repodir"

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/33-cloud-base"
  config.vm.provider :libvirt do |v|
    v.memory = 2048
    # v.cpus = 2
  end

  # share the local repository with the vm at the same location
  config.vm.synced_folder REPO_DIR, REPO_DIR

  # ansible provisioner - commented out by default
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
  # config.vm.provision "ansible" do |ansible|
  #   ansible.verbose = "v"
  #   ansible.playbook = "ansible/playbook.yaml"
  #   ansible.extra_vars = {
  #     repo_dir: REPO_DIR,
  #     dnf_update: false
  #   }
  #   ansible.galaxy_role_file = "ansible/requirements.yaml"
  #   ansible.galaxy_roles_path = "ansible/roles"
  # end
end

Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine:

$ vagrant ssh

When you are at a command prompt in the virtual machine, repeat the steps from the Install and configure the DNF local plugin section above. The Vagrant configuration file should have already made /srv/repodir from the host available in the virtual machine at the same path.

If you are working with several virtual machines or repeatedly re-initiating a new virtual machine then some simple automation becomes useful. As with the network example above, I use Ansible to automate this process.

In the vagrant-example directory on github, you will see an ansible subdirectory. Edit the Vagrant file and remove the comment marks under the ansible provisioner section. Make sure the ansible directory and its contents (playbook.yaml, requirements.yaml) are in the project directory.

After you’ve uncommented the lines, the ansible provisioner section in the Vagrant file should look similar to the following:

....
  # ansible provisioner
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "ansible/playbook.yaml"
    ansible.extra_vars = {
      repo_dir: REPO_DIR,
      dnf_update: false
    }
    ansible.galaxy_role_file = "ansible/requirements.yaml"
    ansible.galaxy_roles_path = "ansible/roles"
  end
....

Ansible must be installed (sudo dnf install ansible). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use sudo dnf install ansible-base ansible-collections*).

If you run Vagrant now (or reprovision: vagrant provision), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can vagrant ssh into the virtual machine to verify that the plugin is installed and to verify that rpms are coming from the DNF local repository instead of a mirror.

Use case 3: container builds

Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use dnf to update the container during the development/build process. Application development is iterative and can result in repeated executions of dnf pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers podman and buildah for managing the container build, run and test life cycle. See the Fedora Magazine post How to build Fedora container images for more about managing containers on Fedora Linux.

Note that the fedora_minimal container uses microdnf by default which does not support plugins. The fedora container, however, uses dnf.

A script that uses buildah and podman to create a custom Fedora Linux image named myFedora is provided below. The script creates a mount point for the local repository at /srv/repodir. The below script is also available in the container-example directory of the github repository. It is named base-image-build.sh.

#!/bin/bash
set -x
# bash script that creates a 'myfedora' image from fedora:latest.
# Adds dnf-local-plugin, points plugin to /srv/repodir for local
# repository and creates an external mount point for /srv/repodir
# that can be used with a -v switch in podman/docker

# custom image name
custom_name=myfedora

# scratch conf file name
tmp_name=local.conf

# location of plugin config file
configuration_name=/etc/dnf/plugins/local.conf

# location of repodir on container
container_repodir=/srv/repodir

# create scratch plugin conf file for container
# using repodir location as set in container_repodir
cat <<EOF > "$tmp_name"
[main]
enabled = true
repodir = $container_repodir
[createrepo]
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
EOF

# pull registry.fedoraproject.org/fedora:latest
podman pull registry.fedoraproject.org/fedora:latest

# start the build
mkdev=$(buildah from fedora:latest)

# tag author
buildah config --author "$USER" "$mkdev"

# install dnf-local-plugin, clean
# do not run update as local repo is not operational
buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c
buildah run "$mkdev" -- dnf -y clean all

# create the repo dir
buildah run "$mkdev" -- mkdir -p "$container_repodir"

# copy the scratch plugin conf file from host
buildah copy "$mkdev" "$tmp_name" "$configuration_name"

# mark container repodir as a mount point for host volume
buildah config --volume "$container_repodir" "$mkdev"

# create myfedora image
buildah commit "$mkdev" "localhost/$custom_name:latest"

# clean up working image
buildah rm "$mkdev"

# remove scratch file
rm $tmp_name

Given normal security controls for containers, you will usually run this script with sudo, and also use sudo when you use the myFedora image in your development process.

$ sudo ./base-image-build.sh

To list the images stored locally and see both fedora:latest and myfedora:latest run:

$ sudo podman images

To run the myFedora image as a container and get a bash prompt in the container run:

$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

Podman also allows you to run containers rootless (as an unprivileged user). Run the script without sudo to create the myfedora image and store it in the unprivileged user’s image repository:

$ ./base-image-build.sh

In order to run the myfedora image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to /srv/repodir on the host.

$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on them will be faster.

Bonus Points – for even better dnf performance, Dan Walsh describes how to share dnf metadata between a host and container using a file overlay (see https://www.redhat.com/sysadmin/speeding-container-buildah). This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The dnf metadata cache includes metadata for the local repository under the name _dnf_local.

I have created a container file that uses buildah to do a dnf update on a fedora:latest image. I’ve also created a container file to repeat the process using a myfedora image. There are 53 MB and 111 rpms in the dnf update. The only difference between the images is that myfedora has the DNF local plugin installed. Using the local repository cut the elapsed time by more than half in this example and saved 53 MB of internet bandwidth.

With the fedora:latest image the command and elapsed time is:

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -f Containerfile.3 .
Elapsed Time: 0:48.06

With the myfedora image the command and elapsed time is less than half of the base run. The :Z on the -v volume below is required when running the container on a SELinux-enabled host.

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 .
Elapsed Time: 0:19.75

Repository management

The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The dnf repomanage command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the dnf repomanage command for Fedora Magazine.
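For readers who want a starting point, a cleanup pass might look like the hypothetical sketch below. It assumes the /srv/repodir location and dnf repomanage's documented --old and --keep options; review the list printed by the dry run before deleting anything.

```shell
# Hypothetical cleanup of the local repository at /srv/repodir: keep the
# two newest versions of each package, remove the rest, then rebuild the
# repository metadata. Guarded so it is a no-op if the directory is absent.
REPODIR=/srv/repodir

if [ -d "$REPODIR" ]; then
    # dry run: list the outdated rpms that would be removed
    dnf repomanage --old --keep 2 "$REPODIR"
    # delete them and refresh the metadata
    dnf repomanage --old --keep 2 "$REPODIR" | xargs --no-run-if-empty sudo rm -f
    sudo createrepo_c --update "$REPODIR"
fi
```

Running something like this from a periodic cron job or systemd timer would keep the repository from growing without bound.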

Finally, I keep the x86_64 rpms for my workstation, virtual machines and containers in a local repository that is separate from the aarch64 local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures.

An important note about Fedora Linux system upgrades

Glenn Johnson has more than four years experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends that the enabled attribute in the plugin configuration file /etc/dnf/plugins/local.conf be set to false before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, a NFS server might export /volume1/dnf-repo/33 for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org – an independent online resource for Fedora Linux users.
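A sketch of how that toggle might be scripted is shown below, assuming the stock config path. Note that a plain sed also flips the enabled line in the [createrepo] section, which is harmless while the plugin is off; back up the file first if unsure.

```shell
# Disable the DNF local plugin before a release upgrade and re-enable it
# afterwards, by editing its config in place. Guarded so the snippet is
# a no-op on machines where the plugin is not installed.
CONF=/etc/dnf/plugins/local.conf

# before the release upgrade: turn the plugin off
if [ -f "$CONF" ]; then
    sudo sed -i 's/^enabled = true/enabled = false/' "$CONF"
fi

# ... perform the dnf system-upgrade here ...

# after the upgrade: turn the plugin back on
if [ -f "$CONF" ]; then
    sudo sed -i 's/^enabled = false/enabled = true/' "$CONF"
fi
```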

Summary

The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of rpms multiple times which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation!

Thanks to Glenn Johnson for his post on the DNF local plugin which started this journey, and for his helpful reviews of this post.

Fedora Workstation 34 feature focus: Btrfs transparent compression

Wednesday 14th of April 2021 08:00:00 AM

The release of Fedora 34 grows ever closer, and with that, some fun new features! A previous feature focus talked about some changes coming to GNOME version 40. This article is going to go a little further under the hood and talk about data compression and transparent compression in btrfs. A term like that may sound scary at first, but less technical users need not be wary. This change is simple to grasp, and will help many Workstation users in several key areas.

What is transparent compression exactly?

Transparent compression is complex, but at its core it is simple to understand: it makes files take up less space. It is somewhat like a compressed tar file or ZIP file. Transparent compression will dynamically optimize your file system’s bits and bytes into a smaller, reversible format. This has many benefits that will be discussed in more depth later on, however, at its core, it makes files smaller. This may leave most computer users with a question: “I can’t just read ZIP files. You need to decompress them. Am I going to need to constantly decompress things when I access them?”. That is where the “transparent” part of this whole concept comes in.

Transparent compression makes a file smaller, but the final version is indistinguishable from the original by the human viewer. If you have ever worked with audio, video, or photography you have probably heard of the terms “lossless” and “lossy”. Think of transparent compression like a lossless compressed PNG file. You want the image to look exactly like the original: small enough to be streamed over the web but still readable by a human. Transparent compression works similarly. Your file system will look and behave the same way as before (no ZIP files everywhere, no major speed reductions). Everything will look, feel, and behave the same. However, in the background it is taking up much less disk space. This is because Btrfs will dynamically compress and decompress your files for you. It’s “transparent” because even with all this going on, you won’t notice the difference.

You can learn more about transparent compression at https://btrfs.wiki.kernel.org/index.php/Compression

Transparent compression sounds cool, but also too good to be true…

I would be lying if I said transparent compression doesn’t slow some things down. It adds extra CPU cycles to pretty much any I/O operation, and can affect performance in certain scenarios. However, Fedora is using the extremely efficient zstd:1 algorithm. Several tests show that relative to the other benefits, the downsides are negligible (as I mentioned in my explanation before). Better disk space usage is the greatest benefit. You may also receive reduction of write amplification (can increase the lifespan of SSDs), and enhanced read/write performance.

Btrfs transparent compression is extremely performant, and chances are you won’t even notice a difference when it’s there.

I’m convinced! How do I get this working?

In fresh installations of Fedora 34 and its corresponding beta, it should be enabled by default. However, it is also straightforward to enable before and after an upgrade from Fedora 33. You can even enable it in Fedora 33, if you aren’t ready to upgrade just yet.

  1. (Optional) Backup any important data. The process itself is completely safe, but human error isn’t.
  2. To truly begin you will be editing your fstab. This file tells your computer what file systems exist where, and how they should be handled. You need to be cautious here, but only a few small changes will be made so don’t be intimidated. On an installation of Fedora 33 with the default Btrfs layout the /etc/fstab file will probably look something like this:
$ $EDITOR /etc/fstab
UUID=1234 /                       btrfs   subvol=root     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home     0 0

NOTE: While this guide builds around the standard partition layout, you may be an advanced enough user to partition things yourself. If so, you are probably also advanced enough to extrapolate the info given here onto your existing system. However, comments on this article are always open for any questions.

Disregard the /boot and /boot/efi directories as they aren’t (currently) compressed. You will be adding the argument compress=zstd:1. This tells the computer that it should transparently compress any newly written files if they benefit from it. Add this option in the fourth column, which currently only contains the subvol option for both /home and /:

UUID=1234 /                       btrfs   subvol=root,compress=zstd:1     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home,compress=zstd:1     0 0

Once complete, simply save and exit (in the default nano editor this is Ctrl-X, then Y, then Enter).

3. Now that fstab has been edited, tell the computer to read it again. After this, it will make all the changes required:

$ sudo mount -o remount /
$ sudo mount -o remount /home/

Once you’ve done this, you officially have transparent compression enabled for all newly written files!

Recommended: Retroactively compress old files

Chances are you already have many files on your computer. While the previous configuration will compress all newly written files, those old files will not benefit. I recommend taking this next (but optional) step to receive the full benefits of transparent compression.

  1. (Optional) Clean out any data you don’t need (empty trash etc.). This will speed things up. However, it’s not required.
  2. Time to compress your data. One simple command can do this, but its form is dependent on your system. Fedora Workstation (and any other desktop spins using the DNF package manager) should use:
$ sudo btrfs filesystem defrag -czstd -rv / /home/

Fedora Silverblue users should use:

$ sudo btrfs filesystem defrag -czstd -rv / /var/home/

Silverblue users may take note of the immutability of some parts of the file system as described here as well as this Bugzilla entry.

NOTE: You may receive several warnings similar to “Cannot compress: permission denied.” These come from a small subset of files, especially on Silverblue systems, that the user cannot easily modify. They will most likely be compressed on their own, in time, as the system upgrades.

Compression can take anywhere from a few minutes to an hour depending on how much data you have. Luckily, since all new writes are compressed, you can continue working while this process completes. Just remember it may partially slow down your work at hand and/or the process itself depending on your hardware.

Once this command completes you are officially fully compressed!

How much file space is used, how big are my files

Due to the nature of transparent compression, utilities like du only report the exact, uncompressed size of files. This is not the actual space they take up on the disk. The compsize utility is the best way to see how much space your files are actually taking up on disk. An example of a compsize command is:

$ sudo compsize -x / /home/

This example provides exact information on how the two locations, / and /home/, are currently transparently compressed. If it is not installed, this utility is available in the Fedora Linux repository.
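To see the difference between logical and on-disk size concretely, here is a sketch with a throwaway, highly compressible file (the file path is an example). du with --apparent-size reports the logical size; on a compressed Btrfs mount, compsize would show a much smaller on-disk figure:

```shell
# Create a 1 MiB file of repeated text: highly compressible.
yes "this line compresses very well" | head -c 1M > /tmp/compress-demo.txt

# du reports the logical (uncompressed) size, regardless of any
# transparent compression applied on disk.
du -h --apparent-size /tmp/compress-demo.txt

rm /tmp/compress-demo.txt
```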

Conclusion:

Transparent compression is a small but powerful change. It should benefit everyone from developers to sysadmins, from writers to artists, from hobbyists to gamers. It is one of many changes in Fedora 34. These changes will allow us to take further advantage of our hardware, and of the powerful Fedora Linux operating system. I have only just touched the surface here. I encourage those of you with interest to begin at the Fedora Project Wiki and Btrfs Wiki to learn more!

Scheduling tasks with cron

Monday 12th of April 2021 08:48:00 AM

Cron is a scheduling daemon that executes tasks at specified intervals. These tasks are called cron jobs and are mostly used to automate system maintenance or administration tasks. For example, you could set a cron job to automate repetitive tasks such as backing up database or data, updating the system with the latest security patches, checking the disk space usage, sending emails, and so on. The cron jobs can be scheduled to run by the minute, hour, day of the month, month, day of the week, or any combination of these.

Some advantages of cron

These are a few of the advantages of using cron jobs:

  • You have much more control over when your job runs i.e. you can control the minute, the hour, the day, etc. when it will execute.
  • It eliminates the need to write the code for the looping and logic of the task and you can shut it off when you no longer need to execute the job.
  • Jobs do not occupy your memory when not executing so you are able to save the memory allocation.
  • If a job fails to execute and exits for some reason it will run again when the proper time comes.
Installing the cron daemon

Luckily Fedora Linux is pre-configured to run important system tasks to keep the system updated. There are several utilities that can run tasks such as cron, anacron, at and batch. This article will focus on the installation of the cron utility only. Cron is installed with the cronie package that also provides the cron services.

To determine if the package is already present or not, use the rpm command:

$ rpm -q cronie
cronie-1.5.2-4.el8.x86_64

If the cronie package is installed it will return the full name of the cronie package. If you do not have the package present in your system it will say the package is not installed.
To install type this:

$ dnf install cronie

Running the cron daemon

A cron job is executed by the crond service based on information from a configuration file. Before adding a job to the configuration file, however, it is necessary to start the crond service, or in some cases install it. What is crond? Crond is simply short for “cron daemon”. To determine whether the crond service is running, type in the following command:

$ systemctl status crond.service
● crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor pre>
   Active: active (running) since Sat 2021-03-20 14:12:35 PDT; 1 day 21h ago
 Main PID: 1110 (crond)

If you do not see something similar including the line “Active: active (running) since…”, you will have to start the crond daemon. To run the crond service in the current session, enter the following command:

$ systemctl start crond.service

To configure the service to start automatically at boot time, type the following:

$ systemctl enable crond.service

If, for some reason, you wish to stop the crond service from running, use the stop command as follows:

$ systemctl stop crond.service

To restart it, simply use the restart command:

$ systemctl restart crond.service

Defining a cron job

The cron configuration

Here is an example of the configuration details for a cron job. This defines a simple cron job to pull the latest changes of a git master branch into a cloned repository:

*/59 * * * * username cd /home/username/project/design && git pull origin master

There are two main parts:

  • The first part is “*/59 * * * *”. This sets the timer: a step value of 59 in the minute field matches every minute evenly divisible by 59, so the job runs at minute 0 and minute 59 of each hour (not strictly every 59 minutes).
  • The rest of the line is the command as it would run from the command line.
    The command itself in this example has three parts:
    • The job will run as the user “username”
    • It will change to the directory /home/username/project/design
    • The git command runs to pull the latest changes in the master branch.
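The */59 step above is easy to misread. A small sketch that enumerates which minutes the step expression matches shows the job fires at minutes 0 and 59 of each hour, rather than on a strict 59-minute cadence:

```shell
# Enumerate the minutes matched by a "*/59" step:
# a step value N in the minute field matches every minute divisible by N.
for m in $(seq 0 59); do
    if [ $((m % 59)) -eq 0 ]; then
        echo "minute $m"
    fi
done
# → minute 0
# → minute 59
```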
Timing syntax

The timing information is the first part of the cron job string, as mentioned above. This determines how often and when the cron job is going to run. It consists of 5 parts in this order:

  • minute
  • hour
  • day of the month
  • month
  • day of the week

A more graphic way to explain the syntax:

.---------------- minute (0 - 59)
|  .------------- hour (0 - 23)
|  |  .---------- day of month (1 - 31)
|  |  |  .------- month (1 - 12) OR jan,feb,mar,apr …
|  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7)
|  |  |  |  |     OR sun,mon,tue,wed,thu,fri,sat
|  |  |  |  |
*  *  *  *  *  user-name  command-to-be-executed

Use of the asterisk

An asterisk (*) may be used in place of a number to represent all possible values for that position. For example, an asterisk in the minute position would make it run every minute. The following examples may help to better understand the syntax.

This cron job will run every minute, all the time:

* * * * * [command]

A slash (/) introduces a step value. The following example will run 12 times per hour, i.e., every 5 minutes:

*/5 * * * * [command]

The next example will run once a month, on the second day of the month at midnight (e.g. January 2nd 12:00am, February 2nd 12:00am, etc.):

0 0 2 * * [command]

Using crontab to create a cron job

The crond daemon runs in the background and constantly checks the /etc/crontab file, and the /etc/cron.*/ and /var/spool/cron/ directories. Each user has a unique crontab file in /var/spool/cron/.

These cron files are not supposed to be edited directly. The crontab command is the method you use to create, edit, install, uninstall, and list cron jobs.

The same crontab command is used for creating and editing cron jobs. And what’s even cooler is that you don’t need to restart cron after creating new files or editing existing ones.

$ crontab -e

This opens your existing crontab file or creates one if necessary. The vi editor opens by default when calling crontab -e. Note: To edit the crontab file using Nano editor, you can optionally set the EDITOR=nano environment variable.

List all your cron jobs using the option -l and specify a user using the -u option, if desired.

$ crontab -l
$ crontab -u username -l

Remove or erase all your cron jobs using the following command:

$ crontab -r

To remove jobs for a specific user you must run the following command as the root user:

$ crontab -r -u username

Thank you for reading. Cron jobs may seem like a tool just for system admins, but they are actually relevant to many kinds of web applications and user tasks.

Reference

Fedora Linux documentation for Automated Tasks

Using network bound disk encryption with Stratis

Wednesday 7th of April 2021 08:00:00 AM

In an environment with many encrypted disks, unlocking them all is a difficult task. Network bound disk encryption (NBDE) helps automate the process of unlocking Stratis volumes. This is a critical requirement in large environments. Stratis version 2.1 added support for encryption, which was introduced in the article “Getting started with Stratis encryption.” Stratis version 2.3 recently introduced support for Network Bound Disk Encryption (NBDE) when using encrypted Stratis pools, which is the topic of this article.

The Stratis website describes Stratis as an “easy to use local storage management for Linux.” The  short video “Managing Storage With Stratis” gives a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system, however, the concepts shown in the video also apply to Stratis in Fedora Linux.

Prerequisites

This article assumes you are familiar with Stratis, and also Stratis pool encryption. If you aren’t familiar with these topics, refer to this article and the Stratis overview video previously mentioned.

NBDE requires Stratis 2.3 or later. The examples in this article use a pre-release version of Fedora Linux 34. The Fedora Linux 34 final release will include Stratis 2.3.

Overview of network bound disk encryption (NBDE)

One of the main challenges of encrypting storage is having a secure method to unlock the storage again after a system reboot. In large environments, typing in the encryption passphrase manually doesn’t scale well. NBDE addresses this and allows for encrypted storage to be unlocked in an automated manner.

At a high level, NBDE requires a Tang server in the environment. Client systems (using Clevis Pin) can automatically decrypt storage as long as they can establish a network connection to the Tang server. If there is no network connectivity to the Tang server, the storage would have to be decrypted manually.

The idea behind this is that the Tang server would only be available on an internal network, thus if the encrypted device is lost or stolen, it would no longer have access to the internal network to connect to the Tang server, therefore would not be automatically decrypted.

For more information on Tang and Clevis, see the man pages (man tang, man clevis) , the Tang GitHub page, and the Clevis GitHub page.

Setting up the Tang server

This example uses another Fedora Linux system as the Tang server with a hostname of tang-server. Start by installing the tang package:

dnf install tang

Then enable and start the tangd.socket with systemctl:

systemctl enable tangd.socket --now

Tang uses TCP port 80, so you also need to open that in the firewall:

firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=80/tcp

Finally, run tang-show-keys to display the output signing key thumbprint. You’ll need this later.

# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s

Creating the encrypted Stratis Pool

The previous article on Stratis encryption goes over how to setup an encrypted Stratis pool in detail, so this article won’t cover that in depth.

The first step is capturing a key that will be used to decrypt the Stratis pool. Even when using NBDE, you need to set this, as it can be used to manually unlock the pool in the event that the NBDE server is unreachable. Capture the pool1 key with the following command:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Then create an encrypted Stratis pool named pool1 (using the pool1key just created) on the /dev/vdb device:

# stratis pool create --key-desc pool1key pool1 /dev/vdb

Next, create a filesystem in this Stratis pool named filesystem1, create a mount point, mount the filesystem, and create a testfile in it:

# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /dev/stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile

Binding the Stratis pool to the Tang server

At this point, we have the encrypted Stratis pool created, and also have a filesystem created in the pool. The next step is to bind your Stratis pool to the Tang server that you just setup. Do this with the stratis pool bind nbde command.

When you make the Tang binding, you need to pass several parameters to the command:

  • the pool name (in this example, pool1)
  • the key descriptor name (in this example, pool1key)
  • the Tang server name (in this example, http://tang-server)

Recall that on the Tang server, you previously ran tang-show-keys which showed the Tang output signing key thumbprint is l3fZGUCmnvKQF_OA6VZF9jf8z2s. In addition to the previous parameters, you either need to pass this thumbprint with the parameter --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s, or skip the verification of the thumbprint with the --trust-url parameter.

It is more secure to use the --thumbprint parameter. For example:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s

Unlocking the Stratis Pool with NBDE

Next reboot the host, and validate that you can unlock the Stratis pool with NBDE, without requiring the use of the key passphrase. After rebooting the host, the pool is no longer available:

# stratis pool list
Name Total Physical Properties

To unlock the pool using NBDE, run the following command:

# stratis pool unlock clevis

Note that you did not need to use the key passphrase. This command could be automated to run during the system boot up.
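One way to automate this at boot is a small systemd unit. The following is a minimal sketch only: the unit name, file path, and ordering dependencies are assumptions for illustration, not something Stratis ships. It waits for the network (so the Tang server is reachable) and then runs the unlock command:

```ini
# /etc/systemd/system/stratis-unlock-nbde.service  (hypothetical unit)
[Unit]
Description=Unlock Stratis pools with NBDE (Clevis)
Wants=network-online.target
After=network-online.target stratisd.service

[Service]
Type=oneshot
ExecStart=/usr/bin/stratis pool unlock clevis

[Install]
WantedBy=multi-user.target
```

After placing the file, you would enable it with systemctl enable stratis-unlock-nbde.service so it runs on every boot.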

At this point, the pool is now available:

# stratis pool list
Name   Total Physical                      Properties
pool1  4.98 GiB / 583.65 MiB / 4.41 GiB   ~Ca, Cr

You can mount the filesystem and access the file that was previously created:

# mount /dev/stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file

Rotating Tang server keys

Best practices recommend that you periodically rotate the Tang server keys and update the Stratis client servers to use the new Tang keys.

To generate new Tang keys, start by logging in to your Tang server and look at the current status of the /var/db/tang directory. Then, run the tang-show-keys command:

# ls -al /var/db/tang
total 8
drwx------. 1 tang tang 124 Mar 15 15:51 .
drwxr-xr-x. 1 root root  16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s

To generate new keys, run tangd-keygen and point it to the /var/db/tang directory:

# /usr/libexec/tangd-keygen /var/db/tang

If you look at the /var/db/tang directory again, you will see two new files:

# ls -al /var/db/tang
total 16
drwx------. 1 tang tang 248 Mar 22 10:41 .
drwxr-xr-x. 1 root root  16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 root root 354 Mar 22 10:41 iyG5HcF01zaPjaGY6L_3WaslJ_E.jwk
-rw-r--r--. 1 root root 349 Mar 22 10:41 jHxerkqARY1Ww_H_8YjQVZ5OHao.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

And if you run tang-show-keys, it will show the keys being advertised by Tang:

# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
iyG5HcF01zaPjaGY6L_3WaslJ_E

You can prevent the old key (starting with l3fZ) from being advertised by renaming the two original files to be hidden files, starting with a period. With this method, the old key will no longer be advertised, however it will still be usable by any existing clients that haven’t been updated to use the new key. Once all clients have been updated to use the new key, these old key files can be deleted.

# cd /var/db/tang
# mv hbjJEDXy8G8wynMPqiq8F47nJwo.jwk .hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
# mv l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk .l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

At this point, if you run tang-show-keys again, only the new key is being advertised by Tang:

# tang-show-keys
iyG5HcF01zaPjaGY6L_3WaslJ_E

Next, switch over to your Stratis system and update it to use the new Tang key. Stratis supports doing this while the filesystem(s) are online.

First, unbind the pool:

# stratis pool unbind pool1

Next, set the key with the original passphrase used when the encrypted pool was created:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Finally, bind the pool to the Tang server with the updated key thumbprint:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint iyG5HcF01zaPjaGY6L_3WaslJ_E

The Stratis system is now configured to use the updated Tang key. Once any other client systems using the old Tang key have been updated, the two original key files that were renamed to hidden files in the /var/db/tang directory on the Tang server can be backed up and deleted.

What if the Tang server is unavailable?

Next, shutdown the Tang server to simulate it being unavailable, then reboot the Stratis system.

Again, after the reboot, the Stratis pool is not available:

# stratis pool list
Name Total Physical Properties

If you try to unlock it with NBDE, this fails because the Tang server is unavailable:

# stratis pool unlock clevis
Execution failed: An iterative command generated one or more errors:
The operation 'unlock' on a resource of type pool failed. The following errors occurred:
Partial action "unlock" failed for pool with UUID 4d62f840f2bb4ec9ab53a44b49da3f48:
Cryptsetup error: Failed with error: Error: Command failed:
cmd: "clevis" "luks" "unlock" "-d" "/dev/vdb" "-n" "stratis-1-private-42142fedcb4c47cea2e2b873c08fcf63-crypt", exit reason: 1
stdout: stderr: /dev/vdb could not be opened.

At this point, without the Tang server being reachable, the only option to unlock the pool is to use the original key passphrase:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

You can then unlock the pool using the key:

# stratis pool unlock keyring

Next, verify the pool was successfully unlocked:

# stratis pool list
Name   Total Physical                      Properties
pool1  4.98 GiB / 583.65 MiB / 4.41 GiB   ~Ca, Cr

Contribute at Fedora Linux 34 Upgrade, Audio, Virtualization, IoT, and Bootloader test days

Monday 5th of April 2021 08:50:00 PM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three upcoming test events in the next week.

  • Wednesday April 07, is to test the Upgrade from Fedora 32 and 33 to Fedora 34.
  • Friday April 09, is to test Audio changes made by Pipewire implementation.
  • Tuesday April 13, is to test the Virtualization in Fedora 34.
  • Monday, April 12 to Friday, April 16 is to test Fedora IoT 34.
  • Monday, April 12 and Tuesday, April 13 is to test the bootloader.

Come and test with us to make the upcoming Fedora Linux 34 even better. Read more below on how to do it.

Upgrade test day

As we come closer to the Fedora Linux 34 release dates, it’s time to test upgrades. This release has a lot of changes and it becomes essential that we test the graphical upgrade methods as well as the command line. As a part of this test day, we will test upgrading from fully updated F32 and F33 systems to F34 for all architectures (x86_64, ARM, aarch64) and variants (WS, cloud, server, silverblue, IoT).
This test day will happen on Wednesday, April 07

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library will continue to work as before, as well as applications shipped as Flatpak. The test day is to test that everything works as expected.
This will occur on Friday, April 09

Virtualization test day

We are going to test all forms of virtualization possible in Fedora. The test day will focus on testing Fedora or your favorite distro inside a bare metal implementation of Fedora running Boxes, KVM, VirtualBox and whatever you have. The general features of installing the OS and working with it are outlined in the test cases which you will find on the results page.
This will be happening on Tuesday, April 13.

IoT test week

For this test week, the focus is all-around; test all the bits that come in a Fedora IoT release as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience. This will be happening the week of Monday, April 12 to Friday, April 16.

Bootloader Test Day

This test day will focus on ensuring that the new shim and GRUB work with BIOS and EFI/UEFI with secure boot enabled. We would like the community to test it on as many types of off-the-shelf hardware as possible. The test day will happen Monday, April 12 and Tuesday, April 13. More information is available on the wiki page.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not contributed in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

[Ed. note: Updated at 1920 UTC on 6 April to add IoT test day information; Updated at 1905 UTC on 7 April to add bootloader test day information]

Fedora Council statement on Richard Stallman rejoining FSF Board

Friday 2nd of April 2021 12:00:00 PM

The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.

We care about free software, but free software is not just bits and bytes. Fedora is our people, and our vision includes healthy community. A healthy community requires that we be welcoming and inclusive. For those in our community who have experienced harassment of any kind, this past week has been painful and exhausting.

There is no room for harassment, bullying, or other forms of abuse in Fedora. We take our Code of Conduct seriously in order to ensure a welcoming community.

Along with many in the free and open source software world, the Fedora Council was taken aback that the Free Software Foundation (FSF) has allowed Richard Stallman to rejoin their Board of Directors given his history of abuse and harassment. The Fedora Council does not normally involve itself with the governance of other projects. However, this is an exceptional case due to the FSF’s stewardship of the GPL family of licenses, which are critical for the work we do.

In keeping with our values, we will stop providing funding or attendance to any FSF-sponsored events and any events at which Richard Stallman is a featured speaker or exhibitor. This also applies to any organization where he has a leadership role.

Excellent technical contribution is not enough. We expect all members of our community to uphold the Friends value.

– The Fedora Council

Playing with modular synthesizers and VCV Rack

Wednesday 31st of March 2021 08:00:00 AM

You know about using Fedora Linux to write code, books, play games, and listen to music. You can also do system simulation, work on electronic circuits, and work with embedded systems via Fedora Labs. And you can make music with the VCV Rack software. For that, you can use Fedora Jam or work from a standard Fedora Workstation installation with the LinuxMAO Copr repository enabled. This article describes how to use modular synthesizers controlled by Fedora Linux.

Some history

The origin of the modular synthesizer dates back to the 1950’s and was soon followed in the 60’s by the Moog modular synthesizer. Wikipedia has a lot more on the history.

Moog synthesizer circa 1975

But, by the way, what is a modular synthesizer?

These synthesizers are made of hardware “blocks” or modules with specific functions like oscillators, amplifiers, sequencers, and various other functions. The blocks are connected together by wires. You make music with these connected blocks by manipulating knobs. Most of these modular synthesizers came without a keyboard.

Modular synthesizers were very common in the early days of progressive rock (with Emerson Lake and Palmer) and electronic music (Klaus Schulze, for example). 

After a while people forgot about modular synthesizers because they were cumbersome, hard to tune, hard to fix, and setting a patch (all the wires connecting the modules) was a time consuming task not very easy to perform live. Price was also a problem because systems were mostly sold as a small series of modules, and you needed at least 10 of them to have a decent set-up.

In the last few years, there has been a rebirth of these synthesizers. Doepfer produces some affordable models, and a lot of modules are also available with open source schematics and code (check Mutable Instruments for example).

But, a few years ago, came … VCV Rack. VCV Rack stands for Voltage Controlled Virtual Rack: a software-based modular synthesizer led by Andrew Belt. His first commit on GitHub was Monday Nov 14 18:34:40 2016.

Getting started with VCV Rack

Installation

To be able to use VCV Rack, you can either go to the VCV Rack web site and install a binary for Linux, or activate a Copr repository dedicated to music: the LinuxMAO Copr repository (disclaimer: I am the man behind this Copr repository). As a reminder, Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.

Enable the repository with:

sudo dnf copr enable ycollet/linuxmao

Then install VCV Rack:

sudo dnf install Rack-v1

You can now start VCV Rack from the console or via the Multimedia entry in the start menu:

$ Rack &

Add some modules

The first step is now to clean up everything and leave just the AUDIO-8 module. You can remove modules in various ways:

  • Click on a module and hit the backspace key
  • Right click on a module and click “delete”

The AUDIO-8 module allows you to connect from and to audio devices. Here are the features for this module.

Now it’s time to produce some noise (for the music, we’ll see that later).

Right click inside VCV Rack (but outside of a module) and a module search window will appear. 

Enter “VCO-2” in the search bar and click on the image of the module. This module is now on VCV Rack.

To move a module: click and drag the module.

To move a group of modules, hit shift + click + drag a module and all the modules on the right of the dragged modules will move with the selected module.

Now you need to connect the modules by drawing a wire between the “OUT” connector of VCO-2 module and the “1” “TO DEVICE” of AUDIO-8 module.

Left-click on the “OUT” connector of the VCO-2 module and while keeping the left-click, drag your mouse to the “1” “TO DEVICE” of the AUDIO-8 module. Once on this connector, release your left-click. 

To remove a wire, do a right-click on the connector where the wire is connected.

To draw a wire from an already connected connector, hold “ctrl+left+click” and draw the wire. For example, you can draw a wire from “OUT” connector of module VCO-2 to the “2” “TO DEVICE” connector of AUDIO-8 module.

What are these wires?

Wires allow you to control various part of the module. The information handled by these wires are Control Voltages, Gate signals, and Trigger signals.

CV (Control Voltages): These typically control pitch, ranging between a minimum value around -1 to -5 volts and a maximum value between 1 and 5 volts.

What is the GATE signal you find on some modules? Imagine a keyboard sending out on/off data to an amplifier module: its voltage is at zero when no key is  pressed and jumps up to max level (5v for example) when a key is pressed; release the key, and the voltage goes back to zero again. A GATE signal can be emitted by things other than a keyboard. A clock module, for example, can emit gate signals.

Finally, what is a TRIGGER signal you find on some modules? It’s a square pulse which starts when you press a key and stops after a while.

In the modular world, gate and trigger signals are used to trigger drum machines, restart clocks, reset sequencers and so on. 

Connecting everybody

Let’s control an oscillator with a CV signal. But before that, remove your VCO-2 module (click on the module and hit backspace).

Do a right-click on VCV Rack and search for these modules:

  • VCO-1 (a controllable oscillator)
  • LFO-1 (a low frequency oscillator which will control the frequency of the VCO-1)

Now draw wires:

  • between the “SAW” connector of the LFO-1 module and the “V/OCT” (Voltage per Octave) connector of the VCO-1 module
  • between the “SIN” connector of the VCO-1 module and the “1” “TO DEVICE” of the AUDIO-8 module

You can adjust the range of the frequency by turning the FREQ knob of the LFO-1 module.

You can also adjust the low frequency of the sequence by turning the FREQ knob of the VCO-1 module.

The Fundamental modules for VCV Rack

When you install the Rack-v1, the Rack-v1-Fundamental package is automatically installed. Rack-v1 only installs the rack system, with input / output modules, but without other basic modules.

In the Fundamental VCV Rack packages, there are various modules available.

Some important modules to have in mind:

  • VCO: Voltage Controlled Oscillator
  • LFO: Low Frequency Oscillator
  • VCA: Voltage Controlled Amplifier
  • SEQ: Sequencers (to define a sequence of voltage / notes)
  • SCOPE: an oscilloscope, very useful to debug your connections
  • ADSR: a module to generate an envelope for a note. ADSR stands for Attack / Decay / Sustain / Release

And there are a lot more functions available. I recommend you watch tutorials related to VCV Rack on YouTube to discover all these functionalities, in particular the Video Channel of Omri Cohen.

What to do next

Are you limited to the Fundamental modules? No, certainly not! VCV Rack provides some closed source modules (for which you’ll need to pay) and a lot of other modules which are open source. All the open source modules are packaged for Fedora 32 and 33. How many VCV Rack packages are available?

sudo dnf search rack-v1 | grep src | wc -l
150

And counting.  Each month new packages appear. If you want to install everything at once, run:

sudo dnf install `dnf search rack-v1 | grep src | sed -e "s/\(^.*\)\.src.*/\1/"`
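The sed expression in that pipeline strips everything from “.src” onward to recover bare package names before handing them to dnf install. Here is a sketch of what it does on two canned lines of dnf search-style output (the package names are real ones from the repository, the descriptions are invented for the example):

```shell
# Show the name extraction on sample "dnf search" output lines.
printf '%s\n' \
  'rack-v1-BogAudio.src : Modules for VCV Rack' \
  'rack-v1-Valley.src : Modules for VCV Rack' \
  | sed -e "s/\(^.*\)\.src.*/\1/"
# → rack-v1-BogAudio
# → rack-v1-Valley
```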

Here are some recommended modules to start with.

  • BogAudio (dnf install rack-v1-BogAudio)
  • AudibleInstruments (dnf install rack-v1-AudibleInstruments)
  • Valley (dnf install rack-v1-Valley)
  • Befaco (dnf install rack-v1-Befaco)
  • Bidoo (dnf install rack-v1-Bidoo)
  • VCV-Recorder (dnf install rack-v1-VCV-Recorder)
A more complex case

From Fundamental, use MIXER, AUDIO-8, MUTERS, SEQ-3, VCO-1, ADSR, VCA.

Use:

  • Plateau module from Valley package (it’s an enhanced reverb).
  • BassDrum9 from DrumKit package.
  • HolonicSystems-Gaps from HolonicSystems-Free package.

How it sounds: check out this video on my YouTube channel.

Managing MIDI

VCV Rack has a bunch of modules dedicated to MIDI management.

With these modules and a tool like the Akai LPD-8, you can easily control the knobs of VCV Rack modules from a real-life device.

Before buying a device, check its Linux compatibility. Normally, every “USB Class Compliant” device works out of the box in every Linux distribution.

The MIDI → Knob mapping is done via the “MIDI-MAP” module. Once you have selected the MIDI driver (first line) and MIDI device (second line), click on “unmapped”. Then, touch a knob you want to control on a module (for example the “FREQ” knob of the VCO-1 Fundamental module). Now, turn the knob of the MIDI device and there you are; the mapping is done.

Artistic scopes

The last topic of this introductory article: the scopes.

VCV Rack has several standard (and useful) scopes. The SCOPE module from Fundamental for example.

But it also has some interesting scopes.

This example uses three VCO-1 modules from Fundamental and the fullscope module from wiqid-anomalies.

The first connector at the top of the scope corresponds to the X input. The one below is the Y input and the other one below controls the color of the graph.

For the complete documentation of this module, check:

For more information

If you’re looking for help or want to talk to the VCV Rack Community, visit their Discourse forum. You can get patches (a patch is the file saved by VCV Rack) for VCV Rack on Patch Storage.

Check out what vintage synthesizers look like at the Vintage Synthesizer Museum or Google’s online exhibition. The documentary “I Dream of Wires” provides a look at the history of modular synthesizers. Finally, the book Developing Virtual Synthesizers with VCV Rack provides more depth.

More in Tux Machines

Videos/Shows: Ubuntu Cinnamon Remix 21.04, Coder Radio, and KDE Breeze Redesign and Blue Ocean

NetBSD: aiomixer, X/Open Curses and ncurses, and other news

aiomixer is an application that I've been maintaining outside of NetBSD for a few years. It was available as a package, and was a "graphical" (curses, terminal-based) mixer for NetBSD's audio API, inspired by programs like alsamixer. For some time I've thought that it should be integrated into the NetBSD base system - it's small and simple, very useful, and many developers and users had it installed (some told me that they would install it on all of their machines that needed audio output). For my particular use case, as well as my NetBSD laptop, I have some small NetBSD machines around the house plugged into speakers that I play music from. Sometimes I like to SSH into them to adjust the playback volume, and it's often easier to do visually than with mixerctl(1). However, there was one problem: when I first wrote aiomixer 2 years ago, I was intimidated by the curses API, so opted to use the Curses Development Kit instead. This turned out to be a mistake, as not only was CDK inflexible for an application like aiomixer, it introduced a hard dependency on ncurses. Read more

Core Scheduling Looks Like It Will Be Ready For Linux 5.14 To Avoid Disabling SMT/HT

It looks like the years-long effort around CPU core scheduling that's been worked on by multiple vendors in light of CPU security vulnerabilities threatening SMT/HT security will see mainline later this summer with Linux 5.14. Linux core scheduling has been worked on by pretty much all of the hyperscalers and public cloud providers to improve security without disabling Hyper Threading. Core scheduling is ultimately about what resources can share a CPU core and ensuring potentially unsafe tasks don't run on a sibling thread of a trusted task. Read more

IBM/Red Hat/Fedora Leftovers

  • Automating RHEL for Edge image rollback with GreenBoot

    With the release of Red Hat Enterprise Linux (RHEL) 8.3, Red Hat announced an rpm-ostree version of RHEL targeted for Edge use cases called RHEL for Edge. One of the unique features of rpm-ostree is that when you update the operating system, a new deployment is created, and the previous deployment is also retained. This means that if there are issues on the updated version of the operating system, you can roll back to the previous deployment with a single rpm-ostree command, or by selecting the previous deployment in the GRUB boot loader. While this ability to manually roll back is very useful, it still requires manual intervention. Edge computing use case scenarios might be up in the tens or hundreds of thousands of nodes, and with this number of systems, automation is critical. In addition, in Edge deployments, these systems might be across the country or across the world, and it might not be practical to access a console on them in the event of issues with an updated image. This is why RHEL for Edge includes GreenBoot, which can automate RHEL for Edge operating system rollbacks. This post will cover an overview of how to get started with GreenBoot and will walk through an example of using GreenBoot.
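GreenBoot decides whether a freshly booted deployment is healthy by running health check scripts; if a required check fails, the update can be rolled back automatically. A minimal sketch of such a check follows. The install path, file name, and the disk-usage threshold are assumptions for illustration, not taken from the article:

```shell
#!/bin/bash
# Hypothetical GreenBoot health check, e.g. installed as
# /etc/greenboot/check/required.d/01-disk-usage.sh (assumed path).
# A non-zero exit marks the boot unhealthy, allowing GreenBoot to
# roll back to the previous rpm-ostree deployment.

check_disk_usage() {
    # Fail when the root filesystem is over 95% full (assumed threshold).
    usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
    if [ "$usage" -lt 95 ]; then
        echo "disk check passed (${usage}% used)"
        return 0
    fi
    echo "disk check failed (${usage}% used)" >&2
    return 1
}

check_disk_usage
```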

  • Using Ansible to configure Podman containers

    In complex IT infrastructure, there are many repetitive tasks. Running those tasks successfully is not easy. Human error always presents a chance of failure. With the help of Ansible, you can perform all of the tasks through a remote host: they are executed with playbooks, and those playbooks can be reused as many times as you need. In this article you will learn how to install and configure Ansible on Fedora Linux, and how to use it to manage and configure Podman containers. Ansible is an open source infrastructure automation tool sponsored by Red Hat. It can deal with all the problems that come with large infrastructure, like installing and updating packages, taking backups, ensuring specific services are always running, and much more. You do this with a playbook which is written in YAML. Ansible playbooks can be used again and again, making the system administrator’s job less complex. Playbooks also eliminate repetitive tasks and can be easily modified. But with many automation tools available, why use Ansible? Unlike some other configuration management tools, Ansible is agentless: you don’t have to install anything on managed nodes. For more information about Ansible, see the Ansible tag in Fedora Magazine.
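As a taste of what such a playbook can look like, here is a minimal sketch using the containers.podman collection's podman_container module; the inventory group, container name, and image are hypothetical:

```yaml
# Hypothetical playbook sketch: ensure an httpd container is
# running via Podman (assumes the containers.podman collection
# is installed on the control node).
- name: Configure a Podman container with Ansible
  hosts: webservers              # assumed inventory group
  tasks:
    - name: Ensure the web container is running
      containers.podman.podman_container:
        name: web                # hypothetical container name
        image: docker.io/library/httpd:latest
        state: started
        ports:
          - "8080:80"
```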

  • Getting better at counting rpm-ostree based systems

    Since the release of Fedora 32, a new mechanism has been in place to better count the number of Fedora users while respecting their privacy. This system is explicitly designed to make sure that no personally identifiable information is sent from counted systems. It also ensures that the Fedora infrastructure does not collect any personal data. The nickname for this new counting mechanism is “Count Me”, from the option name. Details are available in the DNF Better Counting change request for Fedora 32. In short, the Count Me mechanism works by telling Fedora servers how old your system is (with a very large approximation). This occurs randomly during a metadata refresh request performed by DNF.
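The mechanism is controlled by a boolean repository option named countme, documented in dnf.conf(5). A sketch of opting out system-wide follows; the exact placement in /etc/dnf/dnf.conf is worth double-checking against the documentation for your release:

```ini
# /etc/dnf/dnf.conf -- opt out of the "Count Me" mechanism.
# countme is a boolean repo option; setting it in [main]
# applies it as the default for all repositories.
[main]
countme=false
```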

  • Cockpit 244

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from Cockpit version 244 and Cockpit Machines 244.

  • A brief introduction to Ansible Vault

    Ansible Vault is an Ansible feature that helps you encrypt confidential information without compromising security.