Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Convert your filesystem to Btrfs

Friday 22nd of January 2021 08:00:00 AM
Introduction

The purpose of this article is to give you an overview of why and how to migrate your current partitions to a Btrfs filesystem. If you're curious about doing this yourself, follow along for a step-by-step walkthrough of how it's accomplished.

Starting with Fedora 33, the default filesystem is now Btrfs for new installations. I’m pretty sure that most users have heard about its advantages by now: copy-on-write, built-in checksums, flexible compression options, easy snapshotting and rollback methods. It’s really a modern filesystem that brings new features to desktop storage.

Updating to Fedora 33, I wanted to take advantage of Btrfs, but personally didn't want to reinstall the whole system for 'just a filesystem change'. I found little guidance on how exactly to do it, so I decided to share my detailed experience here.

Watch out!

Doing this, you are playing with fire. Hopefully you are not surprised to read the following:

While editing partitions and converting file systems, your data can become corrupted and/or lost. You can end up with an unbootable system and might be facing data recovery. You can inadvertently delete your partitions or otherwise harm your system.

These conversion procedures are meant to be safe even for production systems – but only if you plan ahead, have backups for critical data, and have a rollback plan. As a sudoer, you can do anything without limits, with none of the usual safety guards protecting you.

The safe way: reinstalling Fedora

Reinstalling your operating system is the ‘official’ way of converting to Btrfs, recommended for most users. Therefore, choose this option if you are unsure about anything in this guide. The steps are roughly the following:

  1. Back up your home folder and any data that might be used in your system, like /etc. [Editor's note: VMs too]
  2. Save your list of installed packages to a file.
  3. Reinstall Fedora by removing your current partitions and choosing the new default partitioning scheme with Btrfs.
  4. Restore the contents of your home folder and reinstall the packages using the package list.

For detailed steps and commands, see this comment by a community user at ask.fedoraproject.org. If you do this properly, you’ll end up with a system that is functioning in the same way as before, with minimal risk of losing any data.
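For steps 2 and 4 above, one possible way (of several) to save and later restore the package list with dnf is sketched below; the file name pkglist.txt is just an example:

$ dnf repoquery --userinstalled --qf '%{name}' > pkglist.txt   # step 2: record user-installed packages
(then, after the fresh install)
$ sudo dnf install $(cat pkglist.txt)                          # step 4: reinstall them from the list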

Pros and cons of conversion

Let’s clarify this real quick: what kind of advantages and disadvantages does this kind of filesystem conversion have?

The good
  • Of course, no reinstallation is needed! Every file on your system will remain the exact same as before.
  • It's technically possible to do it in-place, i.e. without a backup.
  • You’ll surely learn a lot about btrfs!
  • It’s a rather quick procedure if everything goes according to plan.
The bad
  • You have to know your way around the terminal and shell commands.
  • You can lose data, see above.
  • If anything goes wrong, you are on your own to fix it.
The ugly
  • You’ll need about 20% of free disk space for a successful conversion. But for the complete backup & reinstall scenario, you might need even more.
  • You can customize everything about your partitions during the process, but you can also do that from Anaconda if you choose to reinstall.
What about LVM?

LVM layouts have been the default for the last several Fedora releases. If you have an LVM partition layout with multiple partitions, e.g. / and /home, you would somehow have to merge them in order to enjoy all the benefits of Btrfs.

If you choose to, you can individually convert partitions to Btrfs while keeping the volume group. Nevertheless, one of the advantages of migrating to Btrfs is getting rid of the limits imposed by the LVM partition layout. You can also use the send-receive functionality offered by Btrfs to merge the partitions after the conversion.

See also on Fedora Magazine: Reclaim hard-drive space with LVM, Recover your files from Btrfs snapshots and Choose between Btrfs and LVM-ext4.

Getting acquainted with Btrfs

It’s advisable to read at least the following to have a basic understanding about what Btrfs is about. If you are unsure, just choose the safe way of reinstalling Fedora.

Must reads

Useful resources

Conversion steps

Create a live image

Since you can’t convert mounted filesystems, we’ll be working from a Fedora live image. Install Fedora Media Writer and ‘burn’ Fedora 33 to your favorite USB stick.

Free up disk space

btrfs-convert will recreate filesystem metadata in your partition’s free disk space, while keeping all existing ext4 data at its current location.

Unfortunately, the amount of free space required cannot be known ahead – the conversion will just fail (and do no harm) if you don’t have enough. Here are some useful ideas for freeing up space:

  • Use baobab to identify large files & folders to remove. Don’t manually delete files outside of your home folder if possible.
  • Clean up old system journals: journalctl --vacuum-size=100M
  • If you are using Docker, carefully use tools like docker volume prune, docker image prune -a
  • Clean up unused virtual machine images inside e.g. GNOME Boxes
  • Clean up unused packages and flatpaks: dnf autoremove, flatpak remove --unused
  • Clean up package caches: pkcon refresh force -c -1, dnf clean all
  • If you’re confident enough to, you can cautiously clean up the ~/.cache folder.
Convert to Btrfs

Save all your valuable data to a backup, make sure your system is fully updated, then reboot into the live image. Run gnome-disks to find out your device handle, e.g. /dev/sda1 (it can look different if you are using LVM). Check the filesystem and do the conversion: [Editor's note: The following commands are run as root, use caution!]

$ sudo su -
# fsck.ext4 -fyv /dev/sdXX
# man btrfs-convert (read it!)
# btrfs-convert /dev/sdXX

This can take anywhere from 10 minutes to even hours, depending on the partition size and whether you have a rotational or solid-state drive. If you see errors, you'll likely need more free space. As a last resort, you could try btrfs-convert -n.

How to roll back?

If the conversion fails for some reason, your partition will remain ext4 or whatever it was before. If you wish to roll back after a successful conversion, it’s as simple as

# btrfs-convert -r /dev/sdXX

Warning! You will permanently lose your ability to roll back if you do any of these: defragmentation, balancing or deleting the ext2_saved subvolume.

Due to the copy-on-write nature of Btrfs, you can otherwise safely copy, move and even delete files, or create subvolumes, because ext2_saved keeps referencing the old data.

Mount & check

Now the partition is supposed to have a Btrfs file system. Mount it and look around your files… and subvolumes!

# mount /dev/sdXX /mnt
# man btrfs-subvolume (read it!)
# btrfs subvolume list /mnt (-t for a table view)

Because you have already read the relevant manual page, you should now know that it's safe to create subvolume snapshots, and that you have an ext2_saved subvolume as a handy backup of your previous data.

It’s time to read the Btrfs sysadmin guide, so that you won’t confuse subvolumes with regular folders.

Create subvolumes

We would like to achieve a ‘flat’ subvolume layout, which is the same as what Anaconda creates by default:

toplevel (volume root directory, not to be mounted by default)
  +-- root (subvolume root directory, to be mounted at /)
  +-- home (subvolume root directory, to be mounted at /home)

You can skip this step, or decide to aim for a different layout. The advantage of this particular structure is that you can easily create snapshots of /home, and have different compression or mount options for each subvolume.

# cd /mnt
# btrfs subvolume snapshot ./ ./root2
# btrfs subvolume create home2
# cp -a home/* home2/

Here, we have created two subvolumes. root2 is a full snapshot of the partition, while home2 starts as an empty subvolume and we copy the contents inside. (This cp command doesn’t duplicate data so it is going to be fast.)

  • In /mnt (the top-level subvolume) delete everything except root2, home2, and ext2_saved.
  • Rename root2 and home2 subvolumes to root and home.
  • Inside root subvolume, empty out the home folder, so that we can mount the home subvolume there later.

It’s simple if you get everything right!
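A minimal sketch of those three steps, assuming you are still working in /mnt and want to keep only root2, home2 and ext2_saved at the top level; double-check each command before running it:

# cd /mnt
# find . -mindepth 1 -maxdepth 1 ! -name root2 ! -name home2 ! -name ext2_saved -exec rm -rf {} +
# mv root2 root && mv home2 home   # subvolumes can be renamed like directories
# rm -rf root/home/*               # empty it so the home subvolume can be mounted there later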

Modify fstab

In order to mount the new volumes after a reboot, fstab has to be modified by replacing the old ext4 mount lines with new ones.

You can use the command blkid to learn your partition’s UUID.

UUID=xx /     btrfs subvol=root 0 0
UUID=xx /home btrfs subvol=home 0 0

(Note that the two UUIDs are the same if they are referring to the same partition.)

These are the defaults for new Fedora 33 installations. In fstab you can also choose to customize compression and add options like noatime.

See the relevant wiki page about compression and man 5 btrfs for all relevant options.
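As an illustration only (these are not the Fedora defaults), the same two lines with zstd compression and noatime added could look like this; keep the UUIDs and subvolume names that match your own partition:

UUID=xx /     btrfs subvol=root,compress=zstd:1,noatime 0 0
UUID=xx /home btrfs subvol=home,compress=zstd:1,noatime 0 0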

Chroot into your system

If you’ve ever done system recovery, I’m pretty sure you know these commands. Here, we get a shell prompt that is essentially inside your system, with network access.

First, we have to remount the root subvolume to /mnt, then mount the /boot and /boot/efi partitions (these can be different depending on your filesystem layout):

# umount /mnt
# mount -o subvol=root /dev/sdXX /mnt
# mount /dev/sdXX /mnt/boot
# mount /dev/sdXX /mnt/boot/efi

Then we can move on to mounting system devices:

# mount -t proc /proc /mnt/proc
# mount --rbind /dev /mnt/dev
# mount --make-rslave /mnt/dev
# mount --rbind /sys /mnt/sys
# mount --make-rslave /mnt/sys
# cp /mnt/etc/resolv.conf /mnt/etc/resolv.conf.chroot
# cp -L /etc/resolv.conf /mnt/etc
# chroot /mnt /bin/bash
$ ping www.fedoraproject.org

Reinstall GRUB & kernel

The easiest way – now that we have network access – is to reinstall GRUB and the kernel because it does all configuration necessary. So, inside the chroot:

# mount /boot/efi
# dnf reinstall grub2-efi shim
# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
# dnf reinstall kernel-core
...or just regenerate the initramfs:
# dracut --kver $(uname -r) --force

This applies if you have a UEFI system. Check the docs below if you have a BIOS system. Let's check if everything went well before rebooting:

# cat /boot/grub2/grubenv
# cat /boot/efi/EFI/fedora/grub.cfg
# lsinitrd /boot/initramfs-$(uname -r).img | grep btrfs

You should have proper partition UUIDs or references in grubenv and grub.cfg (grubenv may not have been updated; edit it if needed), see insmod btrfs in grub.cfg, and find the btrfs module in your initramfs image.

See also: Reinstalling GRUB 2 and Verifying the Initial RAM Disk Image in the Fedora System Administration Guide.

Reboot

Now your system should boot properly. If not, don’t panic, go back to the live image and fix the issue. In the worst case, you can just reinstall Fedora from right there.

After first boot

Check that everything is fine with your new Btrfs system. If you are happy, you'll need to reclaim the space used by the old ext4 snapshot, then defragment and balance the subvolumes. The latter two might take some time and are quite resource intensive.

You have to mount the top level subvolume for this:

# mount /dev/sdXX -o subvol=/ /mnt/someFolder
# btrfs subvolume delete /mnt/someFolder/ext2_saved

Then, run these commands when the machine has some idle time:

# btrfs filesystem defrag -v -r -f /
# btrfs filesystem defrag -v -r -f /home
# btrfs balance start -m /

Finally, there’s a “no copy-on-write” attribute that is automatically set for virtual machine image folders for new installations. Set it if you are using VMs:

# chattr +C /var/lib/libvirt/images
$ chattr +C ~/.local/share/gnome-boxes/images

This attribute only takes effect for new files in these folders. To apply it to existing images, duplicate them and delete the originals. You can confirm the result with lsattr.
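A quick sketch of that re-copy for a single libvirt image; the file name vm1.qcow2 is hypothetical:

# cd /var/lib/libvirt/images
# cp vm1.qcow2 vm1.qcow2.new && rm vm1.qcow2 && mv vm1.qcow2.new vm1.qcow2   # the new copy inherits +C
# lsattr vm1.qcow2                                                           # the 'C' flag should now be shown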

Wrapping up

I really hope that you have found this guide useful and that you were able to make a careful and educated decision about whether or not to convert to Btrfs on your system. I wish you a successful conversion process!

Feel free to share your experience here in the comments, or if you run into deeper issues, on ask.fedoraproject.org.

Deploy your own Matrix server on Fedora CoreOS

Monday 18th of January 2021 08:00:00 AM

Today it is very common for open source projects to distribute their software via container images. But how can these containers be run securely in production? This article explains how to deploy a Matrix server on Fedora CoreOS.

What is Matrix?

Matrix provides an open source, federated and optionally end-to-end encrypted communication platform.

From matrix.org:

Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication, and its Apache licensed reference implementations.

Matrix also includes bridges to other popular platforms such as Slack, IRC, XMPP and Gitter. Some open source communities are replacing IRC with Matrix or adding Matrix as a new communication channel (see for example Mozilla, KDE, and Fedora).

Matrix is a federated communication platform. If you host your own server, you can join conversations hosted on other Matrix instances. This makes it great for self hosting.

What is Fedora CoreOS?

From the Fedora CoreOS docs:

Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it.

With Fedora CoreOS (FCOS), you get all the benefits of Fedora (podman, cgroups v2, SELinux) packaged in a minimal automatically updating system thanks to rpm-ostree.

To get more familiar with Fedora CoreOS basics, check out this getting started article:

Getting started with Fedora CoreOS

Creating the Fedora CoreOS configuration

Running a Matrix service requires the following software:

  • Synapse: the Matrix homeserver implementation
  • PostgreSQL: the database backing Synapse
  • nginx: a web server acting as a reverse proxy
  • certbot: to obtain and renew Let's Encrypt certificates
  • Element: the Matrix web client

This guide will demonstrate how to run all of the above software in containers on the same FCOS server. All the services will be configured to run as containers under podman.

Assembling the FCCT configuration

Configuring and provisioning these containers on the host requires an Ignition file. FCCT, the Fedora CoreOS Config Transpiler, generates the Ignition file from a YAML configuration file. On Fedora Linux you can install FCCT using dnf:

$ sudo dnf install fcct
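Once you have a YAML configuration assembled (the repository below wraps this step in a Makefile), a typical fcct invocation reads the YAML on standard input and writes the Ignition file to standard output; the file names here are placeholders:

$ fcct --pretty --strict < config.yaml > config.ign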

A GitHub repository is available for the reader. It contains all the configuration needed and a basic template system to simplify the personalization of the configuration. The template values use the %%VARIABLE%% format and each variable is defined in a file named secrets.

User and ssh access

The first configuration step is to define an SSH key for the default user core.

variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - %%SSH_PUBKEY%%

Cgroups v2

Fedora CoreOS comes with cgroups version 1 by default, but it can be configured to use cgroups v2. Using the latest version of cgroups allows for better control of the host resources among other new features.

Switching to cgroups v2 is done via a systemd service that modifies the kernel arguments and reboots the host on first boot.

systemd:
  units:
    - name: cgroups-v2-karg.service
      enabled: true
      contents: |
        [Unit]
        Description=Switch To cgroups v2
        After=systemd-machine-id-commit.service
        ConditionKernelCommandLine=systemd.unified_cgroup_hierarchy
        ConditionPathExists=!/var/lib/cgroups-v2-karg.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy
        ExecStart=/bin/touch /var/lib/cgroups-v2-karg.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

Podman pod

Podman supports the creation of pods. Pods are quite handy when you need to group containers together within the same network namespace. Containers within a pod can communicate with each other using the localhost address.
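As a quick, throwaway illustration of that shared network namespace (the pod name demo and the images are placeholders, unrelated to the Matrix pod created below):

$ podman pod create -n demo -p 8080:80
$ podman run -d --pod demo docker.io/library/nginx:stable
$ podman run --rm --pod demo docker.io/curlimages/curl:latest http://localhost:80   # reaches nginx over localhost
$ podman pod rm -f demo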

Create and configure a pod to run the different services needed by Matrix:

- name: podmanpod.service
  enabled: true
  contents: |
    [Unit]
    Description=Creates a podman pod to run the matrix services.
    After=cgroups-v2-karg.service network-online.target
    Wants=cgroups-v2-karg.service network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=sh -c 'podman pod exists matrix || podman pod create -n matrix -p 80:80 -p 443:443 -p 8448:8448'

    [Install]
    WantedBy=multi-user.target

Another advantage of using a pod is that we can control which ports are exposed to the host from the pod as a whole. Here we expose the ports 80, 443 (HTTP and HTTPS) and 8448 (Matrix federation) to the host to make these services available outside of the pod.

A web server with Let’s Encrypt support

The Matrix protocol is HTTP based. Clients connect to their homeserver via HTTPS. Federation between Matrix homeservers is also done over HTTPS. For this setup, you will need three domains. Using distinct domains helps to isolate each service and protect against cross-site scripting (XSS) vulnerabilities.

  • example.tld: The base domain for your homeserver. This will be part of the user Matrix IDs (for example: @username:example.tld).
  • matrix.example.tld: The sub-domain for your Synapse Matrix server.
  • chat.example.tld: The sub-domain for your Element web client.

To simplify the configuration, you only need to set your own domain once in the secrets file.

You will need to ask Let’s Encrypt for certificates on first boot for each of the three domains. Make sure that the domains are configured beforehand to resolve to the IP address that will be assigned to your server. If you do not know what IP address will be assigned to your server in advance, you might want to use another ACME challenge method to get Let’s Encrypt certificates (see DNS Plugins).

- name: certbot-firstboot.service
  enabled: true
  contents: |
    [Unit]
    Description=Run certbot to get certificates
    ConditionPathExists=!/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target nginx-http.service
    Wants=network-online.target
    Requires=podmanpod.service nginx-http.service

    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
        --name=certbot \
        --pod=matrix \
        --rm \
        --cap-drop all \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
        docker.io/certbot/certbot:latest \
        --agree-tos --webroot certonly

    [Install]
    WantedBy=multi-user.target

Once the certificates are available, you can start the final instance of nginx. Nginx will act as an HTTPS reverse proxy for your services.

- name: nginx.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the nginx server
    After=podmanpod.service network-online.target certbot-firstboot.service
    Wants=network-online.target
    Requires=podmanpod.service certbot-firstboot.service

    [Service]
    ExecStartPre=/bin/podman pull docker.io/nginx:stable
    ExecStart=/bin/podman run \
        --name=nginx \
        --pull=always \
        --pod=matrix \
        --rm \
        --volume /var/srv/matrix/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,z \
        --volume /var/srv/matrix/nginx/dhparam:/etc/nginx/dhparam:ro,z \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/www:ro,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:ro,z \
        --volume /var/srv/matrix/well-known:/var/well-known:ro,z \
        docker.io/nginx:stable
    ExecStop=/bin/podman rm --force --ignore nginx

    [Install]
    WantedBy=multi-user.target

Because Let’s Encrypt certificates have a short lifetime, they must be renewed frequently. Set up a system timer to automate their renewal:

- name: certbot.timer
  enabled: true
  contents: |
    [Unit]
    Description=Weekly check for certificates renewal

    [Timer]
    OnCalendar=Sun *-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

- name: certbot.service
  enabled: false
  contents: |
    [Unit]
    Description=Let's Encrypt certificate renewal
    ConditionPathExists=/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service

    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
        --name=certbot \
        --pod=matrix \
        --rm \
        --cap-drop all \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
        docker.io/certbot/certbot:latest \
        renew
    ExecStartPost=/bin/systemctl restart --no-block nginx.service

Synapse and database

Finally, configure the Synapse server and PostgreSQL database.

The Synapse server requires a configuration file and secret keys to be generated. Follow the relevant section of the GitHub repository's README to generate those.

Once these steps are completed, add a systemd service using podman to run Synapse as a container:

- name: synapse.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the synapse service.
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service

    [Service]
    ExecStart=/bin/podman run \
        --name=synapse \
        --pull=always \
        --read-only \
        --pod=matrix \
        --rm \
        -v /var/srv/matrix/synapse:/data:z \
        docker.io/matrixdotorg/synapse:v1.24.0
    ExecStop=/bin/podman rm --force --ignore synapse

    [Install]
    WantedBy=multi-user.target

Setting up the PostgreSQL database is very similar. You will also need to provide a POSTGRES_PASSWORD in the repository's secrets file and declare a systemd service (check here for all the details).

Setting up the Fedora CoreOS host

FCCT provides a storage directive which is useful for creating directories and adding files on the Fedora CoreOS host.

The following configuration makes sure that the configuration needed by each service is present under /var/srv/matrix. Each service has a dedicated directory. For example /var/srv/matrix/nginx and /var/srv/matrix/synapse. These directories are mounted by podman as volumes when the containers are started.

storage:
  directories:
    - path: /var/srv/matrix
      mode: 0700
    - path: /var/srv/matrix/synapse/media_store
      mode: 0777
    - path: /var/srv/matrix/postgres
    - path: /var/srv/matrix/letsencrypt-webroot
  trees:
    - local: synapse
      path: /var/srv/matrix/synapse
    - local: nginx
      path: /var/srv/matrix/nginx
    - local: nginx-http
      path: /var/srv/matrix/nginx-http
    - local: letsencrypt-certs
      path: /var/srv/matrix/letsencrypt-certs
    - local: well-known
      path: /var/srv/matrix/well-known
    - local: element-web
      path: /var/srv/matrix/element-web
  files:
    - path: /etc/postgresql_synapse
      contents:
        local: postgresql_synapse
      mode: 0700

Auto-updates

You are now ready to set up the most powerful part of Fedora CoreOS ‒ auto-updates. On Fedora CoreOS, the system is automatically updated and restarted approximately once every two weeks for each new Fedora CoreOS release. On startup, all containers will be updated to the latest version (because the pull=always option is set). The containers are stateless and volume mounts are used for any data that needs to be persistent across reboots.

The PostgreSQL container is an exception. It can not be fully updated automatically because it requires manual intervention for major releases. However, it will still be updated with new patch releases to fix security issues and bugs as long as the specified version is supported (approximately five years). Be aware that Synapse might start requiring a newer version before support ends. Consequently, you should plan a manual update approximately once per year for new PostgreSQL releases. The steps to update PostgreSQL are documented in this project’s README.

To maximise availability and avoid service interruptions in the middle of the day, set an update strategy in Zincati’s configuration to only allow reboots for updates during certain periods of time. For example, one might want to restrict reboots to week days between 2 AM and 4 AM UTC. Make sure to pick the correct time for your timezone. Fedora CoreOS uses the UTC timezone by default. Here is an example configuration snippet that you could append to your config.yaml:

[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Mon", "Tue", "Wed", "Thu", "Fri" ]
start_time = "02:00"
length_minutes = 120

Creating your own Matrix server by using the git repository

Some sections were lightly edited to make this article easier to read, but you can find the full, unedited configuration in this GitHub repository. To host your own server from this configuration, fill in the secrets values and generate the full configuration with fcct via the provided Makefile:

$ cp secrets.example secrets
$ ${EDITOR} secrets # Fill in values not marked as generated by synapse

Next, generate the Synapse secrets and include them in the secrets file. Finally, you can build the final configuration with make:

$ make # This will generate the config.ign file

You are now ready to deploy your Matrix homeserver on Fedora CoreOS. Follow the instructions for your platform of choice from the documentation to proceed.

Registering new users

What’s a service without users? Open registration was disabled by default to avoid issues. You can re-enable open registration in the Synapse configuration if you are up for it. Alternatively, even with open registration disabled, it is possible to add new users to your server via the command line:

$ sudo podman run --rm --tty --interactive \
    --pod=matrix \
    -v /var/srv/matrix/synapse:/data:z,ro \
    --entrypoint register_new_matrix_user \
    docker.io/matrixdotorg/synapse:latest \
    -c /data/homeserver.yaml http://127.0.0.1:8008

Conclusion

You are now ready to join the Matrix federated universe! Enjoy your quickly deployed and automatically updating Matrix server! Remember that auto-updates are made as safe as possible by the fact that if anything breaks, you can either roll back the system to the previous version or use the previous container image to work around any bugs while they are being fixed. Being able to quickly set up a system that will be kept updated and secure is the main advantage of the Fedora CoreOS model.

To go further, take a look at this other article that is taking advantage of Terraform to generate the configuration and directly deploy Fedora CoreOS on your platform of choice.

Deploy Fedora CoreOS servers with Terraform

Discover Fedora Kinoite: a Silverblue variant with the KDE Plasma desktop

Thursday 14th of January 2021 08:00:00 AM

Fedora Kinoite is an immutable desktop operating system featuring the KDE Plasma desktop. In short, Fedora Kinoite is like Fedora Silverblue but with KDE instead of GNOME. It is an emerging variant of Fedora, based on the same technologies as Fedora Silverblue (rpm-ostree, Flatpak, podman) and created exclusively from official RPM packages from Fedora.

Fedora Kinoite is like Silverblue, but what is Silverblue?

From the Fedora Silverblue documentation:

Fedora Silverblue is an immutable desktop operating system. It aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows.

For more details about what makes Fedora Silverblue interesting, read the “What is Fedora Silverblue?” article previously published on Fedora Magazine. Everything in that article also applies to Fedora Kinoite.

Fedora Kinoite status

Kinoite is not yet an official emerging edition of Fedora. But this is in progress and currently planned for Fedora 35. However, it is already usable! Join us if you want to help.

Kinoite is made from the same packages that are available in the Fedora repositories and that go into the classic Fedora KDE Plasma Spin, so it is as functional and stable as classic Fedora. I’m also “eating my own dog food” as it is the only OS installed on the laptop that I am currently using to write this article.

However, be aware that Kinoite does not currently have graphical support for updates. You will need to be comfortable with using the command line to manage updates, install Flatpaks, or overlay packages with rpm-ostree.
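For example, overlaying a single package on the immutable base looks like this; htop is just a placeholder package name, and the overlay becomes active after a reboot:

$ sudo rpm-ostree install htop
$ sudo systemctl reboot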

How to try Fedora Kinoite

As Kinoite is not yet an official Fedora edition, there is not a dedicated installer for it for now. To get started, install Silverblue and switch to Kinoite with the following commands:

# Add the temporary unofficial Kinoite remote
$ curl -O https://tim.siosm.fr/downloads/siosm_gpg.pub
$ sudo ostree remote add kinoite https://siosm.fr/kinoite/ --gpg-import siosm_gpg.pub

# Optional, only if you want to keep Silverblue available
$ sudo ostree admin pin 0

# Switch to Kinoite
$ sudo rpm-ostree rebase kinoite:fedora/33/x86_64/kinoite

# Reboot
$ sudo systemctl reboot

How to keep it up-to-date

Kinoite does not yet have graphical support for updates in Discover. Work is in progress to teach Discover how to manage an rpm-ostree system. Flatpak management mostly works with Discover but the command line is still the best option for now.

To update the system, use:

$ rpm-ostree update

To update Flatpaks, use:

$ flatpak update

Status of KDE Apps Flatpaks

Just like Fedora Silverblue, Fedora Kinoite focuses on applications delivered as Flatpaks. Some non-KDE applications are available in the Fedora Flatpak registry, but until this selection is expanded with KDE ones, your best bet is to look for them on Flathub (see all KDE Apps on Flathub). Be aware that applications on Flathub may include non-free or proprietary software. The KDE SIG is working on packaging KDE Apps as Fedora-provided Flatpaks, but this is not ready yet.

Submitting bug reports

Report issues in the Fedora KDE SIG issue tracker or in the discussion thread at discussion.fedoraproject.org.

Other desktop variants

Although this project started with KDE, I have also already created variants for XFCE, Mate, Deepin, Pantheon, and LXQt. They are currently available from the same remote as Kinoite. Note that they will remain unofficial until someone steps up to maintain them officially in Fedora.

I have also created an additional smaller Base variant without any desktop environment. This allows you to overlay the lightweight window manager of your choice (i3, Sway, etc.). The same caveats as the ones for other desktop environments apply (currently unofficial and will need a maintainer).

Java development on Fedora Linux

Friday 8th of January 2021 08:00:00 AM

Java is a lot. Aside from being an island of Indonesia, it is a large software development ecosystem. Java was released in January 1996. It is approaching its 25th birthday and it’s still a popular platform for enterprise and casual software development. Many things, from banking to Minecraft, are powered by Java development.

This article will guide you through all the individual components that make Java and how they interact. This article will also cover how Java is integrated in Fedora Linux and how you can manage different versions. Finally, a small demonstration using the game Shattered Pixel Dungeon is provided.

A birds-eye perspective of Java

The following subsections present a quick recap of a few important parts of the Java ecosystem.

The Java language

Java is a strongly typed, object oriented programming language. Its principal designer is James Gosling, who worked at Sun, and Java was officially announced in 1995. Java's design is strongly inspired by C and C++, but it uses a more streamlined syntax. Pointers are not present and parameters are passed by value. Integers and floats no longer have signed and unsigned variants, and more complex objects like Strings are part of the base definition.

But that was 1995, and the language has seen its ups and downs in development. Between 2006 and 2014, no major releases were made, which led to stagnation and which opened up the market to competition. There are now multiple competing Java-esque languages like Scala, Clojure and Kotlin. A large part of 'Java' programming nowadays uses one of these alternative language specifications, which focus on functional programming or cross-compilation.

// Java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

// Scala
object Hello {
    def main(args: Array[String]) = {
        println("Hello, world!")
    }
}

;; Clojure
(defn -main [& args]
  (println "Hello, world!"))

// Kotlin
fun main(args: Array<String>) {
    println("Hello, world!")
}

The choice is now yours. You can choose to use a modern version or you can opt for one of the alternative languages if they suit your style or business better.

The Java platform

Java isn’t just a language. It is also a virtual machine to run the language. It’s a C/C++ based application that takes the code, and executes it on the actual hardware. Aside from that, the platform is also a set of standard libraries which are included with the Java Virtual Machine (JVM) and which are written in the same language. These libraries contain logic for things like collections and linked lists, date-times, and security.

And the ecosystem doesn’t stop there. There are also software repositories like Maven and Clojars which contain a considerable amount of usable third-party libraries. There are also special libraries aimed at certain languages, providing extra benefits when used together. Additionally, tools like Apache Maven, Sbt and Gradle allow you to compile, bundle and distribute the application you write. What is important is that this platform works with other languages. You can write your code in Scala and have it run side-by-side with Java code on the same platform.
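For instance, a typical build-and-run cycle with these tools looks roughly like this; the jar name is a placeholder and depends on your project:

$ mvn package                  # Maven: compile, test and bundle the project into target/*.jar
$ gradle build                 # Gradle: the equivalent compile/test/assemble task
$ java -jar target/myapp.jar   # run the resulting bundle on the JVM (jar name is an example)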

Last but not least, there is a special link between the Java platform and the Android world. You can compile Java and Kotlin for the Android platform to get additional libraries and tools to work with.

License history

Since 2006, the Java platform has been licensed under the GPL 2.0 with a classpath-exception. This means that everybody can build their own Java platform, tools and libraries included. This makes the ecosystem very competitive. There are many competing tools for building, distribution, and development.

Sun ‒ the original maintainer of Java ‒ was bought by Oracle in 2009. In 2017, Oracle changed the license terms of the Java package. This prompted multiple reputable software suppliers to create their own Java packaging chain. Red Hat, IBM, Amazon and SAP now have their own Java packages. They use the OpenJDK trademark to distinguish their offering from Oracle’s version.

It deserves special mention that the Java platform package provided by Oracle is not FLOSS. There are strict license restrictions to Oracle’s Java-trademarked platform. For the remainder of this article, Java refers to the FLOSS edition ‒ OpenJDK.

Finally, the classpath-exception deserves special mention. While the license is GPL 2.0, the classpath-exception allows you to write proprietary software using Java as long as you don’t change the platform itself. This puts the license somewhere in between the GPL 2.0 and the LGPL and it makes Java very suitable for enterprises and commercial activities.

Praxis

If all of that seems quite a lot to take in, don’t panic. It’s 25 years of software history and there is a lot of competition. The following subsections demonstrate using Java on Fedora Linux.

Running Java locally

The default Fedora Workstation 33 installation includes OpenJDK 11. The open source code of the platform is bundled for Fedora Workstation by the Fedora Project’s package maintainers. To see for yourself, you can run the following:

$ java -version

Multiple versions of OpenJDK are available in Fedora Linux’s default repositories. They can be installed concurrently. Use the alternatives command to select which installed version of OpenJDK should be used by default.

$ dnf search openjdk
$ alternatives --config java

Also, if you have Podman installed, you can find most OpenJDK options by searching for them.

$ podman search openjdk

There are many options to run Java, both natively and in containers. Many other Linux distributions also come with OpenJDK out of the box; Pkgs.org has a comprehensive list. GNOME Boxes or Virt-Manager will be your friend if you want to try those distributions in virtual machines.

To get involved with the Fedora community directly, see their project Wiki.

Alternative configurations

If the Java version you want is not available in the repositories, use SDKMAN to install Java in your home directory. It also allows you to switch between multiple installed versions and it comes with popular CLI tools like Ant, Maven, Gradle and Sbt.
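A short SDKMAN session might look like this; the version identifier below is only an example and should be taken from the output of the list command:

$ sdk list java                    # list available distributions and versions
$ sdk install java 11.0.9.hs-adpt  # install one of them (identifier is an example)
$ sdk use java 11.0.9.hs-adpt      # use that version in the current shell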

Last but not least, some vendors provide direct downloads for Java. Special mention goes to AdoptOpenJDK which is a collaborative effort among several major vendors to provide simple FLOSS packages and binaries.

Graphical tools

Several integrated development environments (IDEs) are available for Java. Some of the more popular IDEs include:

  • Eclipse: This is free software published and maintained by the Eclipse Foundation. Install it directly from the Fedora Project’s repositories or from Flathub.
  • NetBeans: This is free software published and maintained by the Apache foundation. Install it from their site or from Flathub.
  • IntelliJ IDEA: This is proprietary software but it comes with a gratis community version. It is published by Jet Brains. Install it from their site or from Flathub.

The above tools are themselves largely written in Java and run on OpenJDK. They are examples of dogfooding.

Demonstration

The following demonstration uses Shattered Pixel Dungeon ‒ a Java-based rogue-like available on Android, on Flathub, and elsewhere.

First, set up a development environment:

$ curl -s "https://get.sdkman.io" | bash $ source "$HOME/.sdkman/bin/sdkman-init.sh" $ sdk install gradle

Next, close your terminal window and open a new terminal window. Then run the following commands in the new window:

$ git clone https://github.com/00-Evan/shattered-pixel-dungeon.git
$ cd shattered-pixel-dungeon
$ gradle desktop:debug

Now, import the project in Eclipse. If Eclipse is not already installed, run the following command to install it:

$ sudo dnf install eclipse-jdt

Use Import Projects from File System to add the code of Shattered Pixel Dungeon.

As you can see in the imported resources on the top left, not only do you have the code of the project to look at, but you also have the OpenJDK available with all its resources and libraries.

If this motivates you further, I would like to point you towards the official documentation from Shattered Pixel Dungeon. The Shattered Pixel Dungeon build system relies on Gradle which is an optional extra that you will have to configure manually in Eclipse. If you want to make an Android build, you will have to use Android Studio. Android Studio is a gratis, Google-branded version of IntelliJ IDEA.

Summary

Developing with OpenJDK on Fedora Linux is a breeze. Fedora Linux provides some of the most powerful development tools available. Use Podman or Virt-Manager to easily and securely host server applications. OpenJDK provides a FLOSS means of creating applications that puts you in control of all the application’s components.

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Network address translation part 1 – packet tracing

Monday 4th of January 2021 08:00:00 AM

The first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT related connectivity problems.

Introduction

Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.

Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.

Connection tracking

NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called "conntrack" recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.
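As a quick preview of that subsystem (it is covered in depth later in this series), the conntrack tool from the conntrack-tools package can list the currently tracked flows, including their NAT mappings:

# conntrack -L -p tcp --dport 22   # list tracked TCP flows with destination port 22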

Ruleset evaluation tracing

The nftables utility (and, to a lesser extent, iptables) allows for examining how a packet is evaluated and which rules in the ruleset it matched. To use this special feature, "trace rules" are inserted at a suitable location. These rules select the packet(s) that should be traced. Let's assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked up, which rules get checked and whether the packet gets dropped somewhere.

Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent to. A change to the destination address often results in packets being forwarded rather than being handled by the host itself.

Initial setup

# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"

The first command adds a new table. This allows easier removal of the trace and debug rules later: a single "nft delete table inet trace_debug" will be enough to undo all rules and chains added to the temporary table during debugging.

The second command creates a base chain that hooks in before routing decisions have been made (prerouting) and has a negative priority value to make sure it is evaluated before connection tracking and the NAT rules.

The only important part, however, is the last fragment of the third rule: "meta nftrace set 1". This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.

The procedure is similar for iptables users. An equivalent trace rule looks like this:

# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN --dport $P -m limit --limit 1/s -j TRACE

Obtaining trace events

Users of the native nft tool can just run the nft trace mode:

# nft monitor trace

This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):

trace id f0f627 ip raw prerouting  packet: iif "veth0" ether saddr ..

We will examine this in more detail in the next section. If you use iptables, first check the installed version via the "iptables --version" command. Example:

# iptables --version
iptables v1.8.5 (legacy)

(legacy) means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:

# xtables-monitor --trace

If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. Their only difference is that it will print events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.

Example

Let's assume you'd like to debug a non-working port forward to a virtual machine or container. The command "ssh -p 1222 10.1.2.3" should provide remote access to a container running on the machine with that address, but the connection attempt times out.

You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:

nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"

After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.

trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp  tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

Line-by-line trace walkthrough

The first line generated is the packet id that triggered the subsequent trace output. Even though this line follows the same grammar as the nft rule syntax, it contains the header fields of the packet that was just received. You will find the name of the receiving network interface (here named "enp0"), the source and destination MAC addresses of the packet, the source IP address (can be important – maybe the reporter is connecting from a wrong/unexpected host) and the TCP source and destination ports. You will also see a "trace id" at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:

trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)

This is the just-added trace rule. The first rule shown is always the one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself was never reached or did not match. The next two lines tell us that there are no further rules and that the "trace_pre" hook allows the packet to continue (verdict accept).

The next matching rule is

trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)

This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it's not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.

IP forwarding

Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:

trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200

This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. In case there are rules that match "tcp dport 1222", they will no longer have any effect on this packet.

If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.
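If you do want a quick sanity check of the routing decision itself, ip route get shows which interface and next hop the kernel would choose for a given destination, for example the translated address from the trace above:

# ip route get 192.168.70.10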

trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

This tells that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line shows the source of the connection failure:

trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)

The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:

trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

The trace id is different, but the packet has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output; it does not contain new information. Time to inspect that chain.

Ruleset investigation

The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:

# nft list chain inet filter allowed_dnats
table inet filter {
 chain allowed_dnats {
  meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
  drop
   }
}

The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory

As expected, the address-service pair is not in the set. We add it now.

# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"

Run the query command again; it will now return the newly added element.

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }" table inet filter { set allow_in { type ipv4_addr . inet_service elements = { 192.168.70.10 . 22 } } }

The ssh command should now work and the trace output reflects the change:

trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)

trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..
trace id 497abf58 ip postrouting policy accept

This shows the packet passes the last hook in the forwarding path – postrouting.

In case the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.

Summary

This article gave an introduction on how to check for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.

A 2020 love letter to the Fedora community

Thursday 31st of December 2020 08:00:00 AM

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

When I wrote about COVID-19 and the Fedora community all the way back on March 16, it was very unclear how 2020 was going to turn out. I hoped that we’d have everything under control and return to normal soon—we didn’t take our Flock to Fedora in-person conference off the table for another month. Back then, I naively hoped that this would be a short event and that life would return to normal soon. But of course, things got worse, and we had to reimagine Flock as a virtual event on short notice. We weren’t even sure if we’d be able to make our regular Fedora Linux releases on schedule.

Even without the pandemic, 2020 was already destined to be an interesting year. Because Red Hat moved the datacenter where most of Fedora’s servers live, our infrastructure team had to move our servers across the continent. Fedora 33 had the largest planned change set of any Fedora Linux release—and not small things either. We changed the default filesystem for desktop variants to BTRFS and promoted Fedora IoT to an Edition. We also began Fedora ELN—a new process which does a nightly build of Fedora’s development branch in the same configuration Red Hat would use to compose Red Hat Enterprise Linux. And Fedora’s popularity keeps growing, which means more users to support and more new community members to onboard. It’s great to be successful, but we also need to keep up with ourselves!

So, it was already busy. And then the pandemic came along. In many ways, we’re fortunate: we’re already a global community used to distributed work, and we already use chat-based meetings and video calls to collaborate. But it made the datacenter move more difficult. The closure of Red Hat offices meant that some of the QA hardware was inaccessible. We couldn’t gather together in person like we’re used to doing. And of course, we all worried about the safety of our friends and family. Isolation and disruption just plain make everything harder.

I’m always proud of the Fedora community, but this year, even more so. In a time of great stress and uncertainty, we came together and did our best work. Flock to Fedora became Nest With Fedora. Thanks to the heroic effort of Marie Nordin and many others, it was a resounding success. We had way more attendees than we’ve ever had at an in-person Flock, which made our community more accessible to contributors who can’t always join us. And we followed up with our first-ever virtual release party and an online Fedora Women’s Day, both also resounding successes.

And then, we shipped both Fedora 32 and Fedora 33 on time, extending our streak to six releases—three straight years of hitting our targets.

The work we all did has not gone unnoticed. You already know that Lenovo is shipping Fedora Workstation on select laptop models. I’m happy to share that two of the top Linux podcasts have recognized our work—particularly Fedora 33—in their year-end awards. LINUX Unplugged listeners voted Fedora Linux their favorite Linux desktop distribution. Three out of the four Destination Linux hosts chose Fedora as the best distro of the year, specifically citing the exciting work we’ve done on Fedora 33 and the strength of our community. In addition, OMG! Ubuntu! included Fedora 33 in its “5 best Linux distribution releases of 2020” and TechRepublic called Fedora 33 “absolutely fantastic“.

Like everyone, I'm looking ahead to 2021. The next few months are still going to be hard, but the amazing work on mRNA and other new vaccine technology means we have clear reasons to be optimistic. Through this trying year, the Fedora community is stronger than ever, and we have some great things to carry forward into better times: a Nest-like virtual event to complement Flock, online release parties, our weekly Fedora Social Hour, and of course the CPE team's great trivia events.

In 2021, we’ll keep doing the great work to push the state of the art forward. We’ll be bold in bringing new features into Fedora Linux. We’ll try new things even when we’re worried that they might not work, and we’ll learn from failures and try again. And we’ll keep working to make our community and our platform inclusive, welcoming, and accessible to all.

To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, meeting chairs, sysadmins, Ask Fedora answerers, D&I team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we'll meet again in person soon.

Oh, one more thing! Join us for a Fedora Social Hour New Year's Eve Special. We'll meet at 23:30 UTC today in Hopin (the platform we used for Nest and other events). Hope to see you there!

Choose between Btrfs and LVM-ext4

Wednesday 30th of December 2020 08:00:00 AM

Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.

In summary

The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.

What about all the other file systems?

There are a large number of file systems available for Linux. The number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience, these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.

Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.

Commonalities

Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.

Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.

Advantages of LVM-ext4

(Figure: structure of ext4 on LVM)

The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when fragmentation does happen. Ext4 is rock solid because it is built on the previous ext3 file system, bringing with it all the years of in-system testing and bug fixes.

Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.

The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions, as there is no limitation of ordered placement of the data. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.

LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and a change to the source or snapshot volume uses space from there. Alternatively, logical volumes can also be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
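For example, creating a classic fixed-size snapshot takes a single command. A minimal sketch, assuming a volume group named fedora_fedora with a logical volume named root (adjust the names and size to your system):

sudo lvcreate --size 5G --snapshot --name root-snap fedora_fedora/root

Changes to either the origin or the snapshot consume space from that 5G reservation, so size it generously; a classic snapshot that runs out of space becomes invalid.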

Multiple devices with LVM

LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.
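As an illustration, creating a mirrored logical volume is a one-liner. A minimal sketch with illustrative names, assuming a volume group vg0 that spans at least two physical volumes:

sudo lvcreate --type raid1 --mirrors 1 --size 20G --name data vg0

The --mirrors 1 option asks for one mirror in addition to the original copy, so LVM places the data on two different physical volumes.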

Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.
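Setting up a cached volume is typically a two-step operation: create a cache volume on the fast device, then attach it to the slow logical volume. A rough sketch with illustrative names (volume group vg0, slow volume data, fast device /dev/nvme0n1p1); see the lvmcache(7) man page for the variants and options that fit your hardware:

sudo lvcreate --size 20G --name fastcache vg0 /dev/nvme0n1p1
sudo lvconvert --type cache --cachevol vg0/fastcache vg0/data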

The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.

Advantages of Btrfs

(Figure: Btrfs structure)

Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).

Key Btrfs features

Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups. This leaves the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.
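A periodic scrub is how you ask Btrfs to read everything back and verify those checksums; with a dup or raid profile it can then repair a bad copy from a good one. A minimal sketch on a mounted file system (the mount point is illustrative):

sudo btrfs scrub start /
sudo btrfs scrub status /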

Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
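For instance, cloning a virtual machine image with a reflink copy is nearly instant and initially consumes no additional data blocks. A small sketch with illustrative file names:

cp --reflink=always fedora-vm.qcow2 fedora-vm-test.qcow2

Both files then share the same extents on disk, and only blocks that later diverge take up extra space.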

Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.
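Creating a mirrored Btrfs file system across two devices takes one command. A minimal sketch with illustrative device names (this wipes any existing data on them):

sudo mkfs.btrfs --data raid1 --metadata raid1 /dev/sdb /dev/sdc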

All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.

Additional features

Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW metadata in the file system, these transfers are efficient, sending only the incremental changes from previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
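A rough sketch of such a backup, assuming a subvolume mounted at /home, a snapshot directory under /home/.snapshots, and a reachable machine named backup-host (all names are illustrative):

sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-day1
sudo btrfs send /home/.snapshots/home-day1 | ssh backup-host "sudo btrfs receive /backups"

Later snapshots can be sent incrementally by pointing btrfs send at the previous snapshot with the -p option.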

Additionally, a Btrfs volume can have transparent compression and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
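Compression can be enabled for a whole file system at mount time, or requested per file or directory. A minimal sketch with an illustrative device and paths:

sudo mount -o compress=zstd /dev/sdb /mnt/data
sudo chattr +c /mnt/data/logs

Only data written after these settings take effect is compressed; existing files are left as they are unless they are rewritten or defragmented with compression.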

The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.

Btrfs on LVM

Btrfs can convert an ext3/ext4 file system in place. In-place conversion means no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for an existing LVM-ext4 system is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn’t the best option.
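The in-place conversion itself is handled by the btrfs-convert tool, run against an unmounted ext4 volume. A minimal sketch with an illustrative device path, and only after making a backup:

sudo btrfs-convert /dev/mapper/fedora_fedora-home

The original file system is preserved in an ext2_saved subvolume, so the conversion can be rolled back until that subvolume is deleted.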

Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still some other volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed-size logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed-size LVM volumes if each one has an independent Btrfs volume. The ability to shrink mounted Btrfs file systems does, however, make working with fixed-size volumes less painful: with online shrink there is no need to boot a live image.

The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.

If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages such as checksumming and easy sending of incremental snapshots. While LVM has some operational overhead when used, it is no more so with Btrfs than with any other file system.

Wrap up

When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.

Contribute at the Fedora Test Week for Kernel 5.10

Monday 28th of December 2020 08:00:00 AM

The kernel team is working on final integration for kernel 5.10. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, January 04, 2021 through Monday, January 11, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We also have a document which provides all the steps in written form.

Happy testing, and we hope to see you on test day.

Deploy Fedora CoreOS servers with Terraform

Wednesday 23rd of December 2020 08:00:00 AM

Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.

This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.

Getting started

Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:

Getting started with Fedora CoreOS

Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in the code and the observed state, and then applying changes to remove the difference.

HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.

sudo dnf config-manager --add-repo \
    https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform

To get yourself familiar with the tools, start with a simple example. You’re going to create a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command:

sudo dnf install -y awscli
aws configure

Please note, AWS is a paid service. If executed correctly, participants should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.

Configuring Terraform

In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_key section to use your own.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider will transpile Fedora CoreOS configurations into Ignition configurations. As a result, you do not need to use fcct to transpile your configurations. Now that your provider versions are specified, initialize them.

provider "aws" { region = "us-west-2" } provider "ct" {}

The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.

data "ct_config" "config" { content = file("config.yaml") strict = true }

With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.

resource "aws_instance" "server" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.config.rendered }

This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.

Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.

output "instance_ip_addr" { value = aws_instance.server.public_ip }

Alright! You’re ready to create some infrastructure. To deploy the server simply run:

terraform init    # Installs the provider dependencies
terraform apply   # Displays the proposed changes and applies them

Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you’ve provisioned your first Fedora CoreOS server using Terraform!

Updates and immutability

At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.

The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.

Using variables

You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.

As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:

variable "username" { type = string description = "Fedora CoreOS user" default = "core" }

Next, modify the config.yaml file to turn it into a template. When rendered, the ${username} will be replaced.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: ${username}
      ssh_authorized_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Finally, modify the data block to render the template using the templatefile function.

data "ct_config" "config" { content = templatefile( "config.yaml", { username = var.username } ) strict = true }

To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.

Leveraging the dependency graph

Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.

Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g. the public IP address of a server) is passed as the input of another resource (e.g. the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.

Consider a system of one load balancer and three web servers as an example.

The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.

Web server configuration

First, create a file web.yaml and add a simple Nginx configuration with a templated message.

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx Web Server
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill nginx
        ExecStartPre=-/bin/podman rm nginx
        ExecStartPre=/bin/podman pull nginx
        ExecStart=/bin/podman run --name nginx -p 80:80 -v /etc/nginx/index.html:/usr/share/nginx/html/index.html:z nginx
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/nginx
  files:
    - path: /etc/nginx/index.html
      mode: 0444
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${count}</h1>
          </html>

In main.tf, you can create three web servers using this template with the following blocks:

data "ct_config" "web" { count = 3 content = templatefile( "web.yaml", { count = count.index } ) strict = true } resource "aws_instance" "web" { count = 3 ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.web[count.index].rendered }

Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.

Load balancer configuration

The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Haproxy Load Balancer
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill haproxy
        ExecStartPre=-/bin/podman rm haproxy
        ExecStartPre=/bin/podman pull haproxy
        ExecStart=/bin/podman run --name haproxy -p 80:8080 -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/haproxy
  files:
    - path: /etc/haproxy/haproxy.cfg
      mode: 0444
      contents:
        inline: |
          global
            log stdout format raw local0
          defaults
            mode tcp
            log global
            option tcplog
          frontend http
            bind *:8080
            default_backend http
          backend http
            balance roundrobin
            %{ for name, addr in servers ~}
            server ${name} ${addr}:80 check
            %{ endfor ~}

The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.

data "ct_config" "lb" { content = templatefile( "lb.yaml", { servers = zipmap( aws_instance.web.*.id, aws_instance.web.*.public_ip ) } ) strict = true } resource "aws_instance" "lb" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro user_data = data.ct_config.lb.rendered }

Finally, add an output block to display the IP address of the load balancer.

output "load_balancer_ip" { value = aws_instance.lb.public_ip }

All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.

$ export LB={{load balancer IP here}}
$ curl $LB
<html>
  <h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 2</h1>
</html>

Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.

Clean up

You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.
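For example, from the directory that holds main.tf:

terraform destroy   # lists everything that will be removed and asks for confirmation

This sketch follows the usual flow; review the plan it prints before confirming, since it deletes every resource Terraform manages for this project.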

Where next?

Code for this tutorial can be found at this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.

4 cool new projects to try in COPR from December 2020

Monday 21st of December 2020 08:00:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.

Blanket

Blanket is an application for playing background sounds, which may potentially improve your focus and increase your productivity. Alternatively, it may help you relax and fall asleep in a noisy environment. No matter what time it is or where you are, Blanket allows you to wake up while birds are chirping, work surrounded by friendly coffee shop chatter or distant city traffic, and then sleep like a log next to a fireplace while it is raining outside. Other popular choices for background sounds such as pink and white noise are also available.

Installation instructions

The repo currently provides Blanket for Fedora 32 and 33. To install it, use these commands:

sudo dnf copr enable tuxino/blanket
sudo dnf install blanket

k9s

k9s is a command-line tool for managing Kubernetes clusters. It allows you to list and interact with running pods, read their logs, dig through used resources, and overall make the Kubernetes life easier. With its extensibility through plugins and customizable UI, k9s is welcoming to power-users.

For many more preview screenshots, please see the project page.

Installation instructions

The repo currently provides k9s for Fedora 32, 33, and Fedora Rawhide as well as for EPEL 7 and 8, CentOS Stream, and others. To install it, use these commands:

sudo dnf copr enable luminoso/k9s
sudo dnf install k9s

rhbzquery

rhbzquery is a simple tool for querying the Fedora Bugzilla instance. It provides an interface for specifying the search query but it doesn’t list results in the command-line. Instead, rhbzquery generates a Bugzilla URL and opens it in a web browser.

Installation instructions

The repo currently provides rhbzquery for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable petersen/rhbzquery
sudo dnf install rhbzquery

gping

gping is a more visually intriguing alternative to the standard ping command, as it shows results in a graph. It is also possible to ping multiple hosts at the same time to easily compare their response times.

Installation instructions

The repo currently provides gping for Fedora 32, 33, and Fedora Rawhide as well as for EPEL 7 and 8. To install it, use these commands:

sudo dnf copr enable atim/gping
sudo dnf install gping

Using pods with Podman on Fedora

Friday 18th of December 2020 08:00:00 AM

This article shows the reader how easy it is to get started using pods with Podman on Fedora. But what is Podman? Well, we will start by saying that Podman is a container engine developed by Red Hat, and yes, if you thought about Docker when reading container engine, you are on the right track.

A whole new revolution in containerization started with Docker, and Kubernetes added the concept of pods to container orchestration for containers that share some common resources. But hold on! Is it really worth sticking with Docker alone and assuming it is the only effective way to run containers? Podman can also manage pods on Fedora, as well as the containers used in those pods.

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.

From the official Podman documentation at http://docs.podman.io/en/latest/

Why should you switch to Podman?

Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.

To install podman, run this command using sudo:

sudo dnf -y install podman

Creating a pod

To start using a pod, we first need to create it. The basic command structure is:

$ podman pod create
d2a5d381247c8677bb8b0907261c119c8644e3fb06235d0aafcb27ec32d89f48

The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, you just need to modify the above command a bit.

$ podman pod create --name climoiselle
e65767428fa0be2a3275c59542f58f5b5a2b0ce929598e9d78128a8846c28493

The pod will be created and Podman will report back the ID of the pod. In the example shown, the pod was given the name ‘climoiselle’. Viewing the newly created pod is easy using the command shown below:

$ podman pod list
POD ID        NAME              STATUS   CREATED         # OF CONTAINERS  INFRA ID
e65767428fa0  climoiselle       Created  19 seconds ago  1                d74fb8bf66e7
d2a5d381247c  blissful_dewdney  Created  32 minutes ago  1                3185af079c26

As you can see, there are two pods listed here: one named blissful_dewdney and the one from the example named climoiselle. No doubt you notice that both pods already include one container, even though we have not deployed any containers to them yet.

What is that extra container inside the pod? This randomly generated container is an infra container. Every podman pod includes this infra container and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The other purpose of the infra container is to allow the pod to keep running when all associated containers have been stopped.

You can also view the individual containers within a pod with the command:

$ podman ps -a --pod
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS   PORTS  NAMES               POD ID        PODNAME
d74fb8bf66e7  k8s.gcr.io/pause:3.2           38 seconds ago  Created         e65767428fa0-infra  e65767428fa0  climoiselle
3185af079c26  k8s.gcr.io/pause:3.2           32 minutes ago  Created         d2a5d381247c-infra  d2a5d381247c  blissful_dewdney

Add a container

The cool thing is, you can add more containers to your newly deployed pod. Always remember the name of your pod. It’s important as you’ll need that name in order to deploy the container in that pod. We’ll use the official Fedora image and deploy a container that uses it to run the bash shell.

$ podman run -it --rm --pod climoiselle fedora /bin/bash

When finished, type exit or hit Ctrl+D to leave the shell running in the container.

Everything in a single command

Podman is agile when it comes to deploying containers into pods: you can create a pod and deploy a container to it with a single command. Let’s say you want to deploy an NGINX container, exposing external port 8080 to internal port 80, in a new pod named test_server.

$ podman run -dt --pod new:test_server -p 8080:80 nginx
Trying to pull registry.fedoraproject.org/nginx...
  manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/nginx...
  unsupported: This repo requires terms acceptance and is only available on registry.redhat.io
Trying to pull registry.centos.org/nginx...
  manifest unknown: manifest unknown
Trying to pull docker.io/library/nginx...
Getting image source signatures
Copying blob e05167b6a99d done
Copying blob 2766c0bf2b07 done
Copying blob 70ac9d795e79 done
Copying blob 6ec7b7d162b2 done
Copying blob cb420a90068e done
Copying config ae2feff98a done
Writing manifest to image destination
Storing signatures
7cb4336ccc26835750f23b412bcb9270b6f5b0d1a4477abc45cdc12308bfe961

Let’s check all pods that have been created and the number of containers running in each of them …

$ podman pod list
POD ID        NAME              STATUS   CREATED         # OF CONTAINERS  INFRA ID
7495cc9c7d93  test_server       Running  2 minutes ago   2                6bd313bbfb0d
e65767428fa0  climoiselle       Created  11 minutes ago  1                d74fb8bf66e7
d2a5d381247c  blissful_dewdney  Created  43 minutes ago  1                3185af079c26

Do you want to know a detailed configuration of the pods which are running? Just type in the command shown below:

podman pod inspect [pod's name/id]

Make it stop!

To stop a pod, we need its name or ID. The podman pod list command shows both. Simply use podman with the pod stop command and give the name or ID of the particular pod.

$ podman pod stop test_server
7495cc9c7d93e0753b4473ad4f2478acfc70d5afd12db2f3e315773f2df30c3f
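If you also want to remove the stopped pod and its containers entirely, podman pod rm does that. A small sketch using the pod from this example:

$ podman pod rm test_server

Running podman pod list afterwards confirms the pod is gone.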

After following this short tutorial, you can see how quickly you can start using pods with Podman on Fedora. It’s an easy and convenient way to run containers that share resources and interact with each other.

Further reading

Ben Cotton: How Do You Fedora?

Monday 14th of December 2020 08:00:00 AM

We recently interviewed Ben Cotton on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Ben Cotton?

If you follow the Fedora Community Blog, there’s a good chance you already know who Ben is.

Ben’s Linux journey started around late 2002. He was frustrated with issues using Windows XP and was starting a new application administrator role at his university, where some services were being run on FreeBSD. When Ben decided it made sense to get more practice with Unix-like operating systems, a friend introduced him to Red Hat Linux. He switched to Fedora full-time in 2006, after he landed a job as a Linux system administrator.

Since then, his career has included system administration, people management, support engineering, development, and marketing. Several years ago, he even earned a Master’s degree in IT Project Management. The variety of experience has helped Ben learn how to work with different groups of people. “A lot of what I’ve learned has come from making mistakes. When you mess up communication, you hopefully do a better job the next time.”

Besides tech, Ben also has a range of well-rounded interests. “I used to do a lot of short fiction writing, but these days I mostly write my opinions about whatever is on my mind.” As for favorite foods, he claims “All of it. Feed me.”

Additionally, Ben has taste that spans genres. His childhood hero was a character from the science fiction series “Star Trek: The Next Generation”. “As a young lad, I wanted very much to be Wesley Crusher.” His favorite movies are a parody film and a spy thriller: “‘Airplane!’ and ‘The Hunt for Red October’” respectively. 

When asked for the five greatest qualities he thinks someone can possess, Ben responded cleverly: “Kindness. Empathy. Curiosity. Resilience. Red hair.”

Ben wearing the official “#action bcotton” shirt

His Fedora Story

As a talented writer who described himself as “not much of a programmer”, he selected the Fedora Docs team in 2009 as an entry point into the community. What he found was that “the Friends foundation was evident.” At the time, he wasn’t familiar with tools such as Git, DocBook XML, or Publican (docs toolchain at the time). The community of experienced doc writers helped him get on his feet and freely gave their time. To this day, Ben considers many of them to be his friends and feels really lucky to work with them. Notably “jjmcd, stickster, sparks, and jsmith were a big part of the warm welcome I received.”

Today, as a senior program manager, he describes his job as “Chief Cat Herding Officer,” as his job is largely composed of seeing what different parts of the project are doing and making sure they’re all heading in the same general direction.

Despite having a huge responsibility, Ben also helps a lot in his free time with tasks outside of his job duties, like website work, CommBlog and Magazine editing, packaging, and so on, none of which are his core job responsibilities. He tries to find ways to contribute that match his skills and interests. Building credibility, paying attention, developing relationships with other contributors, and showing folks that he’s able to help are much more important to him than what his “official” title is.

When thinking towards the future, Ben feels hopeful watching the Change proposals come in. “Sometimes they get rejected, but that’s to be expected when you’re trying to advance the state of the art. Fedora contributors are working hard to push the project forward.“

The Fedora Community 

As a longtime member of the community, Ben has various notions about the Fedora Project that have been developed over the years. For starters, he wants to make it easier to bring new contributors on board. He believes the Join SIG has “done tremendous work in this area”, but new contributors will keep the community vibrant. 

If Ben had to pick a best moment, he’d choose Flock 2018 in Dresden. “That was my first Fedora event and it was great to meet so many of the people who I’ve only known online for a decade.” 

As for bad moments, Ben hasn’t had many. Once he messed up a Bugzilla query, resulting in the accidental closure of hundreds of bugs, and he has dealt with some frustrating mailing list threads, but he remains positive, affirming that “frustration is okay.”

To those interested in becoming involved in the Fedora Project, Ben says “Come join us!” There’s something to appeal to almost anyone. “Take the time to develop relationships with the people you meet as you join, because without the Friends foundation, the rest falls apart.”

Pet Peeves

One issue he finds challenging is a lack of documentation. “I’ve learned enough over the years that I can sort of bumble through making code changes to things, but a lot of times it’s not clear how the code ties together.” Ben sees how sparse or nonexistent documentation can be frustrating to newcomers who might not have the knowledge that is assumed.

Another concern Ben has is that the “interesting” parts of technology are changing. “Operating systems aren’t as important to end users as they used to be thanks to the rise of mobile computing and Software-as-a-Service. Will this cause our pool of potential new contributors to decrease?”

Likewise, Ben believes that it’s not always easy to get people to understand why they should care about open source software. “The reasons are often abstract and people don’t see that they’re directly impacted, especially when the alternatives provide more immediate convenience.”

What Hardware?

For work, Ben has a ThinkPad X1 Carbon running Fedora 33 KDE. His personal server/desktop is a machine he put together from parts that runs Fedora 33 KDE. He uses it as a file server, print server, Plex media server, and general-purpose desktop. Ben also has an extra laptop that, when he finds some spare time to get it started, he wants to use to test Beta releases and “maybe end up running rawhide on it”.

What Software?

Ben has been a KDE user for a decade. A lot of his work is done in a web browser (Firefox for work stuff, Chrome for personal). He does most of his scripting in Python these days, with some inherited scripts in Perl.

Notable applications that Ben uses include:

  • Cherrytree for note-taking
  • Element for IRC
  • Inkscape and Kdenlive when he needs to edit videos.
  • Vim on the command line and Kate when he wants a GUI

Add storage to your Fedora system with LVM

Monday 7th of December 2020 08:00:00 AM

Sometimes there is a need to add another disk to your system. This is where Logical Volume Management (LVM) comes in handy. The cool thing about LVM is that it’s fairly flexible. There are several ways to add a disk. This article describes one way to do it.

Heads up!

This article does not cover the process of physically installing a new disk drive into your system. Consult your system and disk documentation on how to do that properly.

Important: Always make sure you have backups of important data. The steps described in this article will destroy data if it already exists on the new disk.

Good to know

This article doesn’t cover every LVM feature deeply; the focus is on adding a disk. But basically, LVM has volume groups, made up of one or more partitions and/or disks. You add the partitions or disks as physical volumes. A volume group can be broken down into many logical volumes. Logical volumes can be used as any other storage for filesystems, ramdisks, etc. More information can be found here.

Think of the physical volumes as forming a pool of storage (a volume group) from which you then carve out logical volumes for your system to use directly.

Preparation

Make sure you can see the disk you want to add. Use lsblk prior to adding the disk to see what storage is already available or in use.

$ lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
zram0                  251:0    0  989M  0 disk [SWAP]
vda                    252:0    0   20G  0 disk
├─vda1                 252:1    0    1G  0 part /boot
└─vda2                 252:2    0   19G  0 part
  └─fedora_fedora-root 253:0    0   19G  0 lvm  /

This article uses a virtual machine with virtual storage. Therefore the device names start with vda for the first disk, vdb for the second, and so on. The name of your device may be different. Many systems will see physical disks as sda for the first disk, sdb for the second, and so on.

Once the new disk has been connected and your system is back up and running, use lsblk again to see the new block device.

$ lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
zram0                  251:0    0  989M  0 disk [SWAP]
vda                    252:0    0   20G  0 disk
├─vda1                 252:1    0    1G  0 part /boot
└─vda2                 252:2    0   19G  0 part
  └─fedora_fedora-root 253:0    0   19G  0 lvm  /
vdb                    252:16   0   10G  0 disk

There is now a new device named vdb. The location for the device is /dev/vdb.

$ ls -l /dev/vdb
brw-rw----. 1 root disk 252, 16 Nov 24 12:56 /dev/vdb

We can see the disk, but we cannot use it with LVM yet. If you run blkid you should not see it listed. For this and following commands, you’ll need to ensure your system is configured so you can use sudo:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"

Add the disk to LVM

Initialize the disk using pvcreate. You need to pass the full path to the device. In this example it is /dev/vdb; on your system it may be /dev/sdb or another device name.

$ sudo pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.

You should see the disk has been initialized as an LVM2_member when you run blkid:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"
/dev/vdb: UUID="4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE" TYPE="LVM2_member"

You can list all physical volumes currently available using pvs:

$ sudo pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/vda2  fedora_fedora lvm2 a--  <19.00g      0
  /dev/vdb                 lvm2 ---   10.00g 10.00g

/dev/vdb is listed as a PV (physical volume), but it isn’t assigned to a VG (volume group) yet.

Add the physical volume to a volume group

You can find a list of available volume groups using vgs:

$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   1   1   0 wz--n- 19.00g    0

In this example, there is only one volume group available. Next, add the physical volume to fedora_fedora:

$ sudo vgextend fedora_fedora /dev/vdb
  Volume group "fedora_fedora" successfully extended

You should now see the physical volume is added to the volume group:

$ sudo pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/vda2  fedora_fedora lvm2 a--  <19.00g       0
  /dev/vdb   fedora_fedora lvm2 a--  <10.00g <10.00g

Look at the volume groups:

$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   2   1   0 wz--n- 28.99g <10.00g

You can get a detailed list of the specific volume group and physical volumes as well:

$ sudo vgdisplay fedora_fedora
  --- Volume group ---
  VG Name               fedora_fedora
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               28.99 GiB
  PE Size               4.00 MiB
  Total PE              7422
  Alloc PE / Size       4863 / 19.00 GiB
  Free  PE / Size       2559 / 10.00 GiB
  VG UUID               C5dL2s-dirA-SQ15-TfQU-T3yt-l83E-oI6pkp

Look at the PV:

$ sudo pvdisplay /dev/vdb
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               fedora_fedora
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2559
  Allocated PE          0
  PV UUID               4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE

Now that we have added the disk, we can allocate space to logical volumes (LVs):

$ sudo lvs
  LV   VG            Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root fedora_fedora -wi-ao---- 19.00g

Look at the logical volumes. Here’s a detailed look at the root LV:

$ sudo lvdisplay fedora_fedora/root
  --- Logical volume ---
  LV Path                /dev/fedora_fedora/root
  LV Name                root
  VG Name                fedora_fedora
  LV UUID                yqc9cw-AvOw-G1Ni-bCT3-3HAa-qnw3-qUSHGM
  LV Write Access        read/write
  LV Creation host, time fedora, 2020-11-24 11:44:36 -0500
  LV Status              available
  LV Size                19.00 GiB
  Current LE             4863
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Look at the size of the root filesystem and compare it to the logical volume size.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   19G  1.4G   17G   8% /

The logical volume and the filesystem both agree the size is 19G. Let’s add 5G to the root logical volume:

$ sudo lvresize -L +5G fedora_fedora/root
  Size of logical volume fedora_fedora/root changed from 19.00 GiB (4863 extents) to 24.00 GiB (6143 extents).
  Logical volume fedora_fedora/root successfully resized.

We now have 24G available to the logical volume. Look at the / filesystem.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   19G  1.4G   17G   8% /

We are still showing only 19G. This is because the logical volume is not the same as the filesystem. To use the new space added to the logical volume, resize the filesystem.

$ sudo resize2fs /dev/fedora_fedora/root
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/fedora_fedora/root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 3
The filesystem on /dev/fedora_fedora/root is now 6290432 (4k) blocks long.

Look at the size of the filesystem.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   24G  1.4G   21G   7% /

As you can see, the root file system (/) has taken all of the space available on the logical volume and no reboot was needed.

You have now initialized a disk as a physical volume, and extended the volume group with the new physical volume. After that you increased the size of the logical volume, and resized the filesystem to use the new space from the logical volume.

Vagrant beyond the basics

Wednesday 2nd of December 2020 10:00:00 AM

There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples of ways to grow your Vagrantfile portfolio and increase your knowledge and use of Vagrant.

If you have not yet installed vagrant you can follow the first part of this series.

Installing and running Vagrant using qemu-kvm

Some Vagrantfile basics

All Vagrantfiles start with Vagrant.configure("2") do |config| and finish with a corresponding end:

Vagrant.configure("2") do |config|
  ...
  ...
end

The “2” represents the version of the configuration format, which is currently either 1 or 2. Unless you need to use the older version, simply stick with the latest.

The config structure is broken down into namespaces:

config.vm – modify the configuration of the machine(s) that Vagrant manages.

config.ssh – for configuring how Vagrant will access your machine over SSH.

config.winrm – configuring how Vagrant will access your Windows guest over WinRM.

config.winssh – the WinSSH communicator is built specifically for the Windows native port of OpenSSH.

config.vagrant – modify the behavior of Vagrant itself.

Each line in a namespace begins with the word ‘config’:

config.vm.box = "fedora/32-cloud-base"
config.vm.network "private_network"

There are many options here, and a read of the documentation pages is strongly recommended. They can be found at https://www.vagrantup.com/docs/vagrantfile

Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:

config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end

In the example above, all libvirt VMs will be created with a single CPU and 512Mb of memory unless specifically overridden.

The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.

Creating multiple machines at once

Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:

(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end

This will create 5 servers, named server1, server2, server3 etc.

Of note, using Ruby style “for i in 1..3 do” doesn’t work despite Vagrantfile syntax actually being Ruby, so use the method from the example above.

If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:

# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end

# Database
config.vm.define "db", primary: true do |db|
  db.vm.hostname = "db"
end

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end

This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.

Networking

Vagrant generally creates its own network for VM access, and you use this with ‘vagrant ssh’. If you create more than one VM then you must use the VM name to identify which one you wish to connect to – vagrant ssh vmname.

There are a number of configuration options available which allow you to interact with your VMs in various ways.

The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.

If you use dhcp for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt

Port Forwarding

The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network "forwarded_port", guest: 80, host: 8080

This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (notice that comma, which should come immediately after the second port number).

Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.

In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.

Public Network

This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.

To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.

If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"

If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:

Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false
  end

  # Servers x2
  (1..2).each do |i|
    config.vm.define "server#{i}" do |server|
      server.vm.hostname = "server#{i}"
      server.vm.network "public_network", ip: "192.168.122.20#{i}"
    end
  end
end

Private Network

This works very much like the Public Network option, except the network is only available to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"

 To use a static IP address, simply add it:

config.vm.network "private_network", ip: "192.168.50.4"

This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.

Again, the network config can be specified for all guests, or per guest as shown in the public network example above.

Provisioning

Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.

You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. Many of the main methods can be used, but I’ll cover the simple ones here – if you need to use something else please read the documentation as it’s all covered.

File uploads

To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:

config.vm.provision "file", source: "~/myfile", destination: "myfile"

You can copy directories too:

config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"

The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.

Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.
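To confirm where a copied folder actually landed, a quick check such as the following can help (a sketch using the example paths above; vagrant ssh -c runs a single command on the guest):

$ vagrant provision
$ vagrant ssh -c 'ls -R $HOME/remote/newfolder'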

Shell commands

You can include individual commands, inline scripts or external scripts to perform provisioning tasks.

A single command would take this form, and any valid command-line command can be used here: config.vm.provision "shell", inline: "sudo dnf update -y"

An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:

$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end

More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:

config.vm.provision "shell", path: "script.sh"

The file need not be local to the Vagrant host either:

config.vm.provision "shell", path: "https://example.com/provisioner.sh"

Ansible

To use Ansible to provision your VMs you must have it installed on the Vagrant host; see https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora.

You specify an Ansible playbook to provision your VM in the following way:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

This then calls the playbook, which will run as any externally-run ansible playbook would.

If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end

Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and will connect remotely to your guest VMs to provision them. This means all necessary ssh authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the install option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision command to choose the style you need.
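Provisioners run automatically only on the first vagrant up, so while iterating on a playbook it is convenient to re-run just the provisioning step. A minimal sketch (the machine name web1 comes from the example above):

$ vagrant provision        # re-run provisioners on all machines
$ vagrant provision web1   # or target a single machine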

Synced Folders

Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data etc. By default, the folder containing the Vagrant file is shared and mounted under /vagrant on each guest.

To configure additional synced folders, use the config.vm.synced_folder command:

config.vm.synced_folder "src/", "/srv/website"

The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.

Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.

You can disable the default share with the following command:

config.vm.synced_folder ".", "/vagrant", disabled: true

Other options are configured as follows:

config.vm.synced_folder "src/", "/srv/website",
  owner: "apache", group: "apache"

NFS synced folders

When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as 'nfs':

config.vm.synced_folder ".", "/vagrant", type: "nfs"

RSync synced folders

These are the easiest to use as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as ‘rsync’:

config.vm.synced_folder ".", "/vagrant", type: "rsync"
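Because rsync-type folders only sync at startup or reload, changes made on the host afterwards are not picked up automatically. Vagrant ships commands to push them again; a quick sketch:

$ vagrant rsync        # one-off sync of rsync-type folders to the guests
$ vagrant rsync-auto   # watch the folders and re-sync on every change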

Getting started with Stratis – up and running

Monday 30th of November 2020 08:00:00 AM

When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.

Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel land. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.

Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.

Install Stratis

Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.

# yum install -y stratisd stratis-cli

Next, enable the stratisd service.

# systemctl enable --now stratisd

Note that the "enable --now" syntax shown above both permanently enables and immediately starts the service.

After determining what disks/block devices are present and available, the three basic steps to using Stratis are:

  1. Create a pool of the desired disks.
  2. Create a filesystem in the pool.
  3. Mount the filesystem.

In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!

# sfdisk -s
/dev/vda: 31457280
/dev/vdb:  5242880
/dev/vdc:  5242880
/dev/vdd:  5242880
/dev/vde:  5242880
total: 52428800 blocks

Create a storage pool using Stratis

# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name       Total Physical Size   Total Physical Used
testpool                10 GiB                56 MiB

After creating the pool, check the status of its block devices:

# stratis blockdev list
Pool Name   Device Node   Physical Size   State    Tier
testpool    /dev/vdb              5 GiB   In-use   Data
testpool    /dev/vdc              5 GiB   In-use   Data

Create a filesystem using Stratis

Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.

# stratis fs create testpool testfs
# stratis fs list
Pool Name   Name     Used      Created             Device                     UUID
testpool    testfs   546 MiB   Apr 18 2020 09:15   /stratis/testpool/testfs   095fb4891a5743d0a589217071ff71dc

Note that “fs” in the example above can optionally be written out as “filesystem”.

Mount the filesystem

Next, create a mount point and mount the filesystem.

# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem                                  Size   Used   Avail  Use%  Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc   1.0T   7.2G   1017G    1%  /testdir

The actual space used by a filesystem is shown using the stratis fs list command demonstrated previously. Notice how the testfs filesystem (mounted at /testdir) has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of device path will be /dev/stratis/<pool-name>/<filesystem-name>.

Add the filesystem to fstab

To configure automatic mounting of the filesystem at boot time, run the following commands:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab
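Depending on service ordering at boot, the Stratis device may not yet be available when systemd first tries to mount it. One possible refinement (not a step from the original procedure, and it assumes systemd processes /etc/fstab) is to add an x-systemd.requires option so the mount waits for stratisd:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults,x-systemd.requires=stratisd.service 0 0" >> /etc/fstab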

After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:

# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem                                  Size   Used   Avail  Use%  Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc   1.0T   7.2G   1017G    1%  /testdir

Adding cache devices with Stratis

Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:

# stratis pool add-cache testpool /dev/vdd
# stratis blockdev
Pool Name   Device Node   Physical Size   State    Tier
testpool    /dev/vdb              5 GiB   In-use   Data
testpool    /dev/vdc              5 GiB   In-use   Data
testpool    /dev/vdd              5 GiB   In-use   Cache

Growing the storage pool

Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:

# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name   Device Node   Physical Size   State    Tier
testpool    /dev/vdb              5 GiB   In-use   Data
testpool    /dev/vdc              5 GiB   In-use   Data
testpool    /dev/vdd              5 GiB   In-use   Cache
testpool    /dev/vde              5 GiB   In-use   Data

After adding the device, verify that the pool shows the added capacity:

# stratis pool
Name       Total Physical Size   Total Physical Used
testpool                15 GiB               606 MiB

Conclusion

Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.

See also Getting Started with Stratis Encryption.

Getting started with Fedora CoreOS

Friday 27th of November 2020 08:00:00 AM

This has been called the age of DevOps, and operating systems seem to be getting a little bit less attention than tools are. However, this doesn’t mean that there has been no innovation in operating systems. [Edit: The diversity of offerings from the plethora of distributions based on the Linux kernel is a fine example of this.] Fedora CoreOS has a specific philosophy of what an operating system should be in this age of DevOps.

Fedora CoreOS’ philosophy

Fedora CoreOS (FCOS) came from the merging of CoreOS Container Linux and Fedora Atomic Host. It is a minimal and monolithic OS focused on running containerized applications. With security as a first-class citizen, FCOS provides automatic updates and comes with SELinux hardening.

For automatic updates to work well, they need to be very robust. The goal is that servers running FCOS won't break after an update. This is achieved by using different release streams (stable, testing and next). Each stream is released every 2 weeks and content is promoted from one stream to the next (next -> testing -> stable). That way updates landing in the stable stream have had the opportunity to be tested over a long period of time.

Getting Started

For this example let’s use the stable stream and a QEMU base image that we can run as a virtual machine. You can use coreos-installer to download that image.

From your (Workstation) terminal, run the following commands after updating the link to the image. [Edit: On Silverblue the container based coreos tools are the simplest method to try. Instructions can be found at https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/ , in particular “Setup with Podman or Docker”.]

$ sudo dnf install coreos-installer
$ coreos-installer download --image-url https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20200907.3.0/x86_64/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ xz -d fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ ls
fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Create a configuration

To customize a FCOS system, you need to provide a configuration file that will be used by Ignition to provision the system. You may use this file to configure things like creating a user, adding a trusted SSH key, enabling systemd services, and more.

The following configuration creates a ‘core’ user and adds an SSH key to the authorized_keys file. It also creates a systemd service that uses podman to run a simple hello world container.

version: "1.0.0"
variant: fcos
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd:
  units:
    - contents: |
        [Unit]
        Description=Run a hello world web service
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStart=/bin/podman run --pull=always --name=hello --net=host -p 8080:8080 quay.io/cverna/hello
        ExecStop=/bin/podman rm -f hello
        [Install]
        WantedBy=multi-user.target
      enabled: true
      name: hello.service

After adding your SSH key in the configuration save it as config.yaml. Next use the Fedora CoreOS Config Transpiler (fcct) tool to convert this YAML configuration into a valid Ignition configuration (JSON format).

Install fcct directly from Fedora’s repositories or get the binary from GitHub.

$ sudo dnf install fcct
$ fcct -output config.ign config.yaml

Install and run Fedora CoreOS

To run the image, you can use the libvirt stack. To install it on a Fedora system, use the dnf package manager:

$ sudo dnf install @virtualization

Now let’s create and run a Fedora CoreOS virtual machine

$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos \
    --vcpus=2 \
    --ram=2048 \
    --import \
    --network=bridge=virbr0 \
    --graphics=none \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
    --disk=size=20,backing_store=${PWD}/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Once the installation is successful, some information is displayed and a login prompt is provided.

Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core

The Ignition configuration file did not provide any password for the core user, therefore it is not possible to login directly via the console. (Though, it is possible to configure a password for users via Ignition configuration.)

Use Ctrl + ] key combination to exit the virtual machine’s console. Then check if the hello.service is running.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

Using the preconfigured SSH key, you can also access the VM and inspect the services running on it.

$ ssh core@192.168.122.237
$ systemctl status hello
● hello.service - Run a hello world web service
    Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
    Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago

zincati, rpm-ostree and automatic updates

The zincati service drives rpm-ostreed with automatic updates.
Check which version of Fedora CoreOS is currently running on the VM, and check if Zincati has found an update.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
        Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
         Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
   GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
    Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
    Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
…
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it
... zincati reboot ...

After the restart, let’s remote login once more to check the new version of Fedora CoreOS.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
        Version: 32.20201004.3.0 (2020-10-19T17:12:33Z)
         Commit: 64bb377ae7e6949c26cfe819f3f0bd517596d461e437f2f6e9f1f3c24376fd30
   GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
  ostree://fedora:fedora/x86_64/coreos/stable
        Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
         Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
   GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0

rpm-ostree status now shows 2 versions of Fedora CoreOS, the one that came in the QEMU image, and the latest one received from the update. By having these 2 versions available, it is possible to rollback to the previous version using the rpm-ostree rollback command.
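If the new release misbehaves, rolling back is a two-command sketch (run on the VM; the reboot makes the previous deployment active again):

$ sudo rpm-ostree rollback
$ sudo systemctl reboot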

Finally, you can make sure that the hello service is still running and serving content.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

More information: Fedora CoreOS updates

Deleting the Virtual Machine

To clean up afterwards, the following commands will delete the VM and associated storage.

$ virsh destroy fcos
$ virsh undefine --remove-all-storage fcos

Conclusion

Fedora CoreOS provides a solid and secure operating system tailored to run applications in containers. It excels in a DevOps environment which encourages the hosts to be provisioned using declarative configuration files. Automatic updates and the ability to roll back to a previous version of the OS bring peace of mind during the operation of a service.

Learn more about Fedora CoreOS by following the tutorials available in the project’s documentation.

Podman with capabilities on Fedora

Monday 16th of November 2020 08:00:00 AM

Containerization is a booming technology. As many as seventy-five percent of global organizations could be running some type of containerization technology in the near future. Since widely used technologies are more likely to be targeted by hackers, securing containers is especially important. This article will demonstrate how POSIX capabilities are used to secure Podman containers. Podman is the default container management tool in RHEL8.

Determine the Podman container’s privilege mode

Containers run in either privileged or unprivileged mode. In privileged mode, the container uid 0 is mapped to the host’s uid 0. For some use cases, unprivileged containers lack sufficient access to the resources of the host machine. Technologies and techniques including Mandatory Access Control (apparmor, SELinux), seccomp filters, dropping of capabilities, and namespaces help to secure containers regardless of their mode of operation.

To determine the privilege mode from outside the container:

$ podman inspect --format="{{.HostConfig.Privileged}}" <container id>

If the above command returns true then the container is running in privileged mode. If it returns false then the container is running in unprivileged mode.

To determine the privilege mode from inside the container:

$ ip link add dummy0 type dummy

If this command allows you to create an interface then you are running a privileged container. Otherwise you are running an unprivileged container.

Capabilities

Namespaces isolate a container’s processes from arbitrary access to the resources of its host and from access to the resources of other containers running on the same host. Processes within privileged containers, however, might still be able to do things like alter the IP routing table, trace arbitrary processes, and load kernel modules. Capabilities allow one to apply finer-grained restrictions on what resources the processes within a container can access or alter; even when the container is running in privileged mode. Capabilities also allow one to assign privileges to an unprivileged container that it would not otherwise have.

For example, to add the NET_ADMIN capability to an unprivileged container so that a network interface can be created inside of the container, you would run podman with parameters similar to the following:

[root@vm1 ~]# podman run -it --cap-add=NET_ADMIN centos
[root@b27fea33ccf1 /]# ip link add dummy0 type dummy
[root@b27fea33ccf1 /]# ip link

The above commands demonstrate how to grant a capability to an unprivileged container, and show a dummy0 interface being created inside it. Without the NET_ADMIN capability, an unprivileged container would not be able to create an interface.

Currently, there are about 39 capabilities that can be granted or denied. Privileged containers are granted many capabilities by default. It is advisable to drop unneeded capabilities from privileged containers to make them more secure.
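If you want to see the difference for yourself, you can compare the default capability sets of an unprivileged and a privileged container. This is only a sketch; it assumes the capsh utility is available in the image, as in the capsh example shown later in this article:

$ podman run --rm centos capsh --print
$ podman run --rm --privileged centos capsh --print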

To drop all capabilities from a container:

$ podman run -it -d --name mycontainer --cap-drop=all centos

To list a container’s capabilities:

$ podman exec -it 48f11d9fa512 capsh --print

The above command should show that no capabilities are granted to the container.

Refer to the capabilities man page for a complete list of capabilities:

$ man capabilities

Use the capsh command to list the capabilities you currently possess:

$ capsh --print

As another example, the below command demonstrates dropping the NET_RAW capability from a container. Without the NET_RAW capability, servers on the internet cannot be pinged from within the container.

$ podman run -it --name mycontainer1 --cap-drop=net_raw centos
>>> ping google.com   (will output error, operation not permitted)

As a final example, if your container were to only need the SETUID and SETGID capabilities, you could achieve such a permission set by dropping all capabilities and then re-adding only those two.

$ podman run -d --cap-drop=all --cap-add=setuid --cap-add=setgid fedora sleep 5 > /dev/null; pscap | grep sleep

The pscap command shown above should show the capabilities that have been granted to the container.

I hope you enjoyed this brief exploration of how capabilities are used to secure Podman containers.

Thank You!

Using Fedora 33 with Microsoft’s WSL2

Wednesday 11th of November 2020 08:00:00 AM

If you’re like me, you may find yourself running Windows for a variety of reasons from work to gaming. Sure you could run Fedora in a virtual machine or as a container, but those don’t blend into a common windows experience as easily as the Windows Subsystem for Linux (WSL). Using Fedora via WSL will let you blend the two environments together for a fantastic development environment.

Prerequisites

There are a few basics you’ll need in order to make this all work. You should be running Windows 10, and have WSL2 installed already. If not, check out the Microsoft documentation for instructions, and come back here when you’re finished. Microsoft recommends setting wsl2 as the distro default for simplicity. This guide assumes you’ve done that.

Next, you’re going to need some means of unpacking xz compressed files. You can do this with another WSL-based distribution, or use 7zip.

Download a Fedora 33 rootfs

Since Fedora doesn’t ship an actual rootfs archive, we’re going to abuse the one used to generate the container image for dockerhub. You will want to download the tar.xz file from the fedora-cloud GitHub repository. Once you have the tar.xz, uncompress it, but don’t unpack it. You want to end up with something like fedora-33-datestamp.tar. Once you have that, you’re ready to build the image.
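For example, decompressing from another WSL distribution might look like the following. The datestamped file name is illustrative; use the name of the file you actually downloaded:

$ xz -d fedora-33-20201130.tar.xz

7zip on the Windows side achieves the same result, as mentioned above.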

Composing the WSL Fedora build

I prefer to use c:\distros, but you can choose nearly whatever location you want. Whatever you choose, make sure the top level path exists before you import the build. Now open a cmd or powershell prompt, because it’s time to import:

wsl.exe --import Fedora-33 c:\distros\Fedora-33 $HOME\Downloads\fedora-33.tar

You will see Fedora-33 show up in wsl’s list

PS C:\Users\jperrin> wsl.exe -l -v
  NAME                   STATE           VERSION
  Fedora-33                 Stopped         2

From here, you can start to play around with Fedora in wsl, but we have a few things we need to do to make it actually useful as a wsl distro.

wsl -d Fedora-33

This will launch Fedora’s wsl instance as the root user. From here, you’re going to install a few core packages and set a new default user. You’re also going to need to configure sudo, otherwise you won’t be able to easily elevate privileges if you need to install something else later.

dnf update
dnf install wget curl sudo ncurses dnf-plugins-core dnf-utils passwd findutils

wslutilities uses curl and wget for things like VS Code integration, so they’re useful to have around. Since you need to use a Copr repo for this, you want the added dnf functionality.

Add your user

Now it’s time to add your user, and set it as the default.

useradd -G wheel yourusername
passwd yourusername

Now that you’ve created your username and added a password, make sure they work. Exit the wsl instance, and launch it again, this time specifying the username. You’re also going to test sudo, and check your uid.

wsl -d Fedora-33 -u yourusername
$ id -u
1000
$ sudo cat /etc/shadow

Assuming everything worked fine, you’re now ready to set the default user for your Fedora setup in Windows. To do this, exit the wsl instance and get back into Powershell. This Powershell one-liner configures your user properly:

Get-ItemProperty Registry::HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss\*\ DistributionName | Where-Object -Property DistributionName -eq Fedora-33  | Set-ItemProperty -Name DefaultUid -Value 1000

Now you should be able to launch WSL again without specifying a user, and be yourself instead of root.
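A quick way to confirm that the default user change took effect is to run a single command inside the distro from PowerShell (a sketch; the distribution name matches the one imported earlier, and the output should be your own username):

PS C:\Users\jperrin> wsl -d Fedora-33 whoami
yourusername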

Customize!

From here, you’re done getting the basic Fedora 33 setup running in wsl, but it doesn’t have the Windows integration piece yet. If this is something you want, there’s a Copr repo to enable. If you choose to add this piece, you’ll be able to run Windows apps directly from inside your shell, as well as integrate your Linux environment easily with VS Code. Note that Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.

dnf copr enable trustywolf/wslu
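After enabling the Copr repository, install the package it provides. The package name below is an assumption based on the repository name, so check dnf search wslu if it differs:

dnf install wslu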

Now you can go configure your terminal, set up a Python development environment, or use Fedora 33 however else you want. Enjoy!

Getting started with Stratis – encryption

Monday 9th of November 2020 08:00:00 AM

Stratis is described on its official website as an “easy to use local storage management for Linux.” See this short video for a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system. The concepts shown in the video also apply to Stratis in Fedora.

Stratis version 2.1 introduces support for encryption. Continue reading to learn how to get started with encryption in Stratis.

Prerequisites

Encryption requires Stratis version 2.1 or greater. The examples in this post use a pre-release of Fedora 33. Stratis 2.1 will be available in the final release of Fedora 33.

You’ll also need at least one available block device to create an encrypted pool. The examples shown below were done on a KVM virtual machine with a 5 GB virtual disk drive (/dev/vdb).

Create a key in the kernel keyring

The Linux kernel keyring is used to store the encryption key. For more information on the kernel keyring, refer to the keyrings manual page (man keyrings).  

Use the stratis key set command to set up the key within the kernel keyring. You must specify where the key should be read from. To read the key from standard input, use the --capture-key option. To retrieve the key from a file, use the --keyfile-path <file> option. The last parameter is a key description. It will be used later when you create the encrypted Stratis pool.

For example, to create a key with the description pool1key, and to read the key from standard input, you would enter:

# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:

The command prompts us to type the key data / passphrase, and the key is then created within the kernel keyring.  

To verify that the key was created, run stratis key list:

# stratis key list
Key Description
pool1key

This verifies that the pool1key was created. Note that these keys are not persistent. If the host is rebooted, the key will need to be provided again before the encrypted Stratis pool can be accessed (this process is covered later).

If you have multiple encrypted pools, they can each have a separate key, or they can share the same key.

The keys can also be viewed using the following keyctl commands:

# keyctl get_persistent @s
318044983
# keyctl show
Session Keyring
 701701270 --alswrv      0     0  keyring: _ses
 649111286 --alswrv      0 65534   \_ keyring: _uid.0
 318044983 ---lswrv      0 65534   \_ keyring: _persistent.0
1051260141 --alswrv      0     0       \_ user: stratis-1-key-pool1key

Create the encrypted Stratis pool

Now that a key has been created for Stratis, the next step is to create the encrypted Stratis pool. Encrypting a pool can only be done at pool creation. It isn’t currently possible to encrypt an existing pool.

Use the stratis pool create command to create a pool. Add --key-desc and the key description that you provided in the previous step (pool1key). This will signal to Stratis that the pool should be encrypted using the provided key. The below example creates the Stratis pool on /dev/vdb, and names it pool1. Be sure to specify an empty/available device on your system.

# stratis pool create --key-desc pool1key pool1 /dev/vdb

You can verify that the pool has been created with the stratis pool list command:

# stratis pool list
Name                       Total Physical   Properties
pool1   4.98 GiB / 37.63 MiB / 4.95 GiB       ~Ca, Cr

In the sample output shown above, ~Ca indicates that caching is disabled (the tilde negates the property). Cr indicates that encryption is enabled.  Note that caching and encryption are mutually exclusive. Both features cannot be simultaneously enabled.

Next, create a filesystem. The example below demonstrates creating a filesystem named filesystem1, mounting it at the /filesystem1 mountpoint, and creating a test file in the new filesystem:

# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile

Access the encrypted pool after a reboot

When you reboot you’ll notice that Stratis no longer shows your encrypted pool or its block device:

# stratis pool list
Name   Total Physical   Properties
# stratis blockdev list
Pool Name   Device Node   Physical Size   Tier

To access the encrypted pool, first re-create the key with the same key description and key data / passphrase that you used previously:

# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:

Next, run the stratis pool unlock command, and verify that you can now see the pool and its block device:

# stratis pool unlock
# stratis pool list
Name                        Total Physical   Properties
pool1   4.98 GiB / 583.65 MiB / 4.41 GiB       ~Ca, Cr
# stratis blockdev list
Pool Name   Device Node   Physical Size   Tier
pool1       /dev/dm-2          4.98 GiB   Data

Next, mount the filesystem and verify that you can access the test file you created previously:

# mount /stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file

Use a systemd unit file to automatically unlock a Stratis pool at boot

It is possible to automatically unlock your Stratis pool at boot without manual intervention. However, a file containing the key must be available. Storing the key in a file might be a security concern in some environments.

The systemd unit file shown below provides a simple method to unlock a Stratis pool at boot and mount the filesystem. Feedback on better/alternative methods is welcome. You can provide suggestions in the comment section at the end of this article.

Start by creating your key file with the following command. Be sure to substitute passphrase with the same key data / passphrase you entered previously.

# echo -n passphrase > /root/pool1key

Make sure that the file is only readable by root:

# chmod 400 /root/pool1key
# chown root:root /root/pool1key

Create a systemd unit file at /etc/systemd/system/stratis-filesystem1.service with the following content:

[Unit]
Description = stratis mount pool1 filesystem1 file system
After = stratisd.service

[Service]
ExecStartPre=sleep 2
ExecStartPre=stratis key set --keyfile-path /root/pool1key pool1key
ExecStartPre=stratis pool unlock
ExecStartPre=sleep 3
ExecStart=mount /stratis/pool1/filesystem1 /filesystem1
RemainAfterExit=yes

[Install]
WantedBy = multi-user.target

Next, enable the service so that it will run at boot:

# systemctl enable stratis-filesystem1.service

Now reboot and verify that the Stratis pool has been automatically unlocked and that its filesystem is mounted.
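A couple of quick checks after the reboot might look like this (a sketch; the pool and mountpoint names are the ones used in this article):

# stratis pool list
# findmnt /filesystem1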

Summary and conclusion

In today’s environment, encryption is a must for many people and organizations. This post demonstrated how to enable encryption in Stratis 2.1.

Reclaim hard-drive space with LVM

Friday 6th of November 2020 08:00:00 AM

LVM is a tool for logical volume management, which includes allocating disks, striping, mirroring, and resizing logical volumes. It is commonly used on Fedora installations (before Btrfs became the default, the standard layout was LVM+ext4). But have you ever started up your system only to have GNOME tell you that the home volume is almost out of space? Luckily, there is likely some space sitting around in another volume, unused and ready to re-allocate. Here’s how to reclaim hard-drive space with LVM.

The key to easily re-allocating space between volumes is the Logical Volume Manager (LVM). Fedora 32 and before use LVM to divide disk space by default. This technology is similar to standard hard-drive partitions, but LVM is a lot more flexible. LVM enables not only flexible volume size management, but also advanced capabilities such as read-write snapshots, striping or mirroring data across multiple drives, using a high-speed drive as a cache for a slower drive, and much more. All of these advanced options can get a bit overwhelming, but resizing a volume is straightforward.

LVM basics

The volume group serves as the main container in the LVM system. By default Fedora only defines a single volume group, but there can be as many as needed. Actual hard drives and hard-drive partitions are added to the volume group as physical volumes. Physical volumes add available free space to the volume group. A typical Fedora install has one formatted boot partition, and the rest of the drive is a partition configured as an LVM physical volume.

Out of this pool of available space, the volume group allocates one or more logical volumes. These volumes are similar to hard-drive partitions, but without the limitation of contiguous space on the disk. LVM logical volumes can even span multiple devices! Just like hard-drive partitions, logical volumes have a defined size and can contain any filesystem which can then be mounted to specific directories.
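If you prefer the terminal, the standard LVM reporting commands give the same picture. This is a read-only sketch; volume group names such as fedora_localhost-live vary between systems:

$ sudo pvs   # physical volumes and the volume group they belong to
$ sudo vgs   # volume groups, with total and free space
$ sudo lvs   # logical volumes allocated from each volume group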

What’s needed

Confirm the system uses LVM with the gnome-disks application, and make sure there is free space available in some other volume. Without space to reclaim from another volume, this guide isn’t useful. A Fedora live CD/USB is also needed. Any file system that needs to shrink must be unmounted. Running from a live image allows all the volumes on the hard-disk to remain unmounted, even important directories like / and /home.

Use gnome-disks to verify free space

A word of warning

No data should be lost by following this guide, but it does muck around with some very low-level and powerful commands. One mistake could destroy all data on the hard-drive. So backup all the data on the disk first!

Resizing LVM volumes

To begin, boot the Fedora live image and select Try Fedora at the dialog. Next, use the Run Command to launch the blivet-gui application (accessible by pressing Alt-F2, typing blivet-gui, then pressing enter). Select the volume group on the left under LVM. The logical volumes are on the right.

Explore logical volumes in blivet-gui

The logical volume labels consist of both the volume group name and the logical volume name. In the example, the volume group is “fedora_localhost-live” and there are “home”, “root”, and “swap” logical volumes allocated. To find the full volume, select each one, click on the gear icon, and choose resize. The slider in the resize dialog indicates the allowable sizes for the volume. The minimum value on the left is the space already in use within the file system, so this is the minimum possible volume size (without deleting data). The maximum value on the right is the greatest size the volume can have based on available free space in the volume group.

Resize dialog in blivet-gui

A grayed out resize option means the volume is full and there is no free space in the volume group. It’s time to change that! Look through all of the volumes to find one with plenty of extra space, like in the screenshot above. Move the slider to the left to set the new size. Free up enough space to be useful for the full volume, but still leave plenty of space for future data growth. Otherwise, this volume will be the next to fill up.

Click resize and note that a new item appears in the volume listing: free space. Now select the full volume that started this whole endeavor, and move the slider all the way to the right. Press resize and marvel at the new improved volume layout. However, nothing has changed on the hard drive yet. Click on the check-mark to commit the changes to disk.

Review changes in blivet-gui

Review the summary of the changes, and if everything looks right, click Ok to proceed. Wait for blivet-gui to finish. Now reboot back into the main Fedora install and enjoy all the new space in the previously full volume.

Planning for the future

It is challenging to know how much space any particular volume will need in the future. Instead of immediately allocating all available free space, consider leaving it free in the volume group. In fact, Fedora Server reserves space in the volume group by default. Extending a volume is possible while it is online and in use. No live image or reboot needed. When a volume is almost full, easily extend the volume using part of the available free space and keep working. Unfortunately the default disk manager, gnome-disks, does not support LVM volume resizing, so install blivet-gui for a graphical management tool. Alternately, there is a simple terminal command to extend a volume:

lvresize -r -L +1G /dev/fedora_localhost-live/root

Wrap-up

Reclaiming hard-drive space with LVM just scratches the surface of LVM capabilities. Most people, especially on the desktop, probably don’t need the more advanced features. However, LVM is there when the need arises, though it can get a bit complex to implement. BTRFS is the default filesystem, without LVM, starting with Fedora 33. BTRFS can be easier to manage while still flexible enough for most common usages. Check out the recent Fedora Magazine articles on BTRFS to learn more.

More in Tux Machines

Xfce’s Thunar File Manager Gets Split View, File Creation Times, and More

Thunar 4.17 is here as the first milestone towards the next major release that will be part of the upcoming Xfce 4.18 desktop environment, which is now in early development. I know many of you love and use Thunar, so here’s a look at the major new features coming to your Xfce desktop environment. The big news is that Thunar now finally features a split view, allowing you to use the file manager as a dual-pane file explorer/commander. I bet many of you were hoping for this feature, so here it is, and you’ll be able to use it soon on your Xfce desktop, hopefully later this year. Read more

9to5Linux Weekly Roundup: January 24th, 2021 (1st Anniversary)

Believe it or not, today is 9to5Linux’s first anniversary! It is on this day (January 24th) that I launched 9to5Linux.com a year ago, and it wouldn’t be possible without your support, so THANK YOU for all your feedback and donations (they were put to good use) so far. Here’s to us and to many more happy years together! This has been another amazing week of Linux news and releases as TUXEDO Computers and System76 announced new Linux laptops, Oracle announced Linux 5.10 LTS support for VirtualBox, Raspberry Pi Foundation announced their own silicon, and the KDE Plasma 5.21 desktop environment entered public beta testing. Check them all out in the weekly roundup below, along with all the latest Linux distro and app releases! Read more

today's howtos

  • Install Oracle Virtualbox 6.1.18 in Ubuntu 20.04 / CentOS 8 & Fedora

    VirtualBox is an open-source application for running operating systems virtually on top of a base system, and it is available for multiple operating systems (i.e. Windows, Linux, and macOS). It is feature-rich, high-performing software used at the enterprise level and licensed under the General Public License (GPL). It is developed by a community backed by a dedicated company. This tutorial will be helpful for beginners to install Oracle VirtualBox 6.1.18 in Ubuntu 20.04, Ubuntu 19.10, CentOS 8 / Redhat 8, and Fedora.

  • How To Install Docker on Linux Mint 20

    In this tutorial, we will show you how to install Docker on Linux Mint 20. For those of you who didn’t know, Docker is an open-source project that automates the deployment of applications inside software containers. The container allows the developer to package up all project resources such as libraries, dependencies, assets, etc. Docker is written in the Go programming language and was originally developed by dotCloud. It is basically a container engine that uses Linux kernel features like namespaces and control groups to create containers on top of an operating system and automates application deployment on the container. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, you host your site on your own VPS. The installation is quite simple and assumes you are running in the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will show you the step-by-step installation of Docker on Linux Mint 20 (Ulyana).

  • How to Secure Email Server Against Hacking with VPN (CentOS/RHEL)

    In this tutorial, I’m going to share with you my tips and tricks to secure CentOS/RHEL email servers against hacking with a self-hosted VPN server. Many spammers are trying to hack into other people’s email servers. If successful, they would use the hacked email server to send large volumes of spam or steal valuable data. Why do we use a self-hosted VPN server? Because it allows you to enable whitelisting, so only trusted users connected to the VPN server can access your mail server.

  • Fixed compile of libvdpau-va-gl in OE

    I posted yesterday about the problem in OpenEmbedded when the compile of a package requires execution of a binary: https://bkhome.org/news/202101/fixed-compile-of-samba-without-krb5-in-oe.html This problem does not occur if the build-architecture and target-architectures are the same. The problem occurs with a cross-compile. Today I had the same problem, with package 'libvdpau-va-gl'. I had previously compiled this in OE, but now the build-arch is x86_64 and the target-arch is aarch64.

KDE: Kate and Konsole

     
  • The Kate Text Editor - January 2021

    It not only got a nice visual refresh but also a much better fuzzy matching algorithm. The fuzzy matching algorithm is on its way upstream to KCoreAddons to be used by more parts of the KDE universe. Praise to Waqar Ahmed for implementing this and pushing it upstream. And thanks to Forrest Smith for allowing us to use his matching algorithm under LGPLv2+! [...] As you can see on our team page, a lot of new people helped out over the last year. I hope to see more people showing up there as new contributors. It is a pleasure that Waqar Ahmed & Jan Paul Batrina now have full KDE developer accounts! Waqar especially came up with a lot of nifty ideas about what could be fixed/improved/added, and he already did a lot of work to actually get these things done! I actually wanted to write earlier about what cool new stuff is there, but had too many review requests to look after. Great! ;=) Now I can read review requests instead of light novels in the evening.

  • Contributing to Konsole

    I never thought I could contribute to Open Source, or even imagined I could change my workspace; in my mind, doing it was beyond my programming skills. I was a Windows user for a long time, until one day I couldn’t stand anymore how slow the system was; it was not a top computer, but it was a reasonable one to be that slow. So I changed to Debian and used it for a time before changing to other distros, and I was amazed how fast it was. Of course, I couldn’t use all of the same programs I used to work with, but I did learn new ones.