It's FOSS
A Linux and Open Source Web Portal

10 Open Source Static Site Generators to Create Fast and Resource-Friendly Websites

Friday 25th of September 2020 03:32:34 AM

Brief: Looking to deploy a static web page? No need to fiddle with HTML and CSS. These open source static website generators will help you deploy beautiful, functional static websites in no time.

What is a static website?

Technically, a static website means its web pages are not generated on the server dynamically. The HTML, CSS, and JavaScript files lie on the server in exactly the form the end user receives them. The source files are prebuilt, and they don’t change from one server request to the next.
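To make the idea concrete, here is a minimal sketch: the HTML file you create on disk is byte-for-byte what the server sends, with no database or server-side rendering involved (the file name and content below are just for illustration):

```shell
# Create a one-page static site; the file on disk is exactly what visitors receive.
mkdir -p site
cat > site/index.html <<'EOF'
<!doctype html>
<html>
  <head><title>Hello</title></head>
  <body><h1>Hello, static web!</h1></body>
</html>
EOF
# Any static file server can now serve it unchanged, e.g.:
# python3 -m http.server --directory site 8000
```

Static site generators automate exactly this: they turn your source content into such prebuilt files.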

It’s FOSS is a dynamic website that depends on several databases, and its web pages are generated and served when there’s a request from your browser. The majority of the web is powered by dynamic sites, where you interact with the website and there is plenty of content that changes often.

Static websites give you a few benefits like faster loading times, lower server resource requirements, and better security (debatable?).

Traditionally, static websites are more suitable for creating small websites with only a few pages and where the content doesn’t change too often.

This, however, is changing thanks to static website generator tools and you can use them to create blogs as well.

I have compiled a list of open source static site generators that can help you build a beautiful website.

Best Open Source Static Site Generators

It’s worth noting that you will not get complex functionality on a static website. In that case, you might want to check out our list of best open source CMS for dynamic websites.

1. Jekyll

Jekyll is one of the most popular open source static site generators, built using Ruby. In fact, Jekyll is the engine behind GitHub Pages, which lets you host websites on GitHub for free.

Setting up Jekyll is easy across multiple platforms, including Ubuntu. It utilizes Markdown, Liquid (for templates), HTML, and CSS to generate the static site files. It is also a pretty good option if you want to build a blog without advertisements or a product page to promote your tool or service.

Also, it supports migrating your blog from popular CMSs like Ghost, WordPress, Drupal 7, and more. You get to manage permalinks, categories, pages, posts, and custom layouts, which is nice. So, even if you already have an existing website that you want to convert to a static site, Jekyll should be a perfect solution. You can learn more about it by exploring the official documentation or its GitHub page.
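For instance, a Jekyll blog post is just a Markdown file with YAML front matter dropped into the _posts directory (the file name and field values below are illustrative):

```shell
# A minimal Jekyll post: YAML front matter followed by Markdown content.
mkdir -p _posts
cat > _posts/2020-09-25-hello-jekyll.md <<'EOF'
---
layout: post
title: "Hello Jekyll"
categories: demo
---
Jekyll turns this **Markdown** into static HTML when you run `jekyll build`.
EOF
head -n 1 _posts/2020-09-25-hello-jekyll.md   # prints the front matter delimiter: ---
```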

2. Hugo

Hugo is yet another popular open source framework for building static sites. It’s built using the Go programming language.

It is fast, simple, and reliable. You also get some advanced theming support if you need it. It also offers some useful shortcuts to help you get things done easily. No matter whether it’s a portfolio site or a blog, Hugo is capable enough to manage a variety of content types.

To get started, you can follow its official documentation or go through its GitHub page to install it and learn more about its usage. You can also deploy Hugo on GitHub pages or any CDN if required.
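As a taste of how a Hugo site is configured, here is a minimal sketch of a site config (all values are placeholders; `hugo new site <dir>` generates a similar file for you at the project root):

```shell
# Minimal Hugo config sketch; Hugo reads this from the project root.
cat > config.toml <<'EOF'
baseURL = "https://example.org/"
languageCode = "en-us"
title = "My Hugo Site"
EOF
grep "title" config.toml
```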

3. Hexo

Hexo is an interesting open-source framework powered by Node.js. Similar to others, you will end up creating blazing fast websites but you will also get a good collection of themes and plugins.

You also get a powerful API here to extend functionality as per your requirements. If you already have a website, you can simply use its Migrator extension.

To get started, you can go through the official documentation or just explore their GitHub page.

4. Gatsby

Gatsby is an increasingly popular open-source static site generator framework. It utilizes React.js for creating fast and beautiful websites.

I was quite interested to give this a try for some experimental projects a few years back, and it is impressive to see the availability of thousands of new plugins and themes. Unlike other static site generators, you can use Gatsby to generate a site and get the benefits of static sites without losing any features.

It offers a lot of useful integrations with popular services. Of course, you can keep it simple or couple it with a popular CMS of your choice, which should be interesting. You can take a look at their official documentation and its GitHub page to find out more.

5. VuePress

VuePress is a static site generator powered by Vue.js, which happens to be an open-source progressive JavaScript framework.

If you know HTML, CSS, and JavaScript, you can easily get started using VuePress. You should find several useful plugins and themes to get a head start on building your site. Also, it seems like Vue.js is being actively improved and has the attention of more developers, which is a good thing.

You can learn more about it through their official guide and the GitHub page.

6. Nuxt.js

Nuxt.js utilizes Vue.js and Node.js, but it focuses on modularity and has the ability to rely on the server side instead of the client side. Not just that, it aims to provide an intuitive experience to developers with descriptive errors and detailed documentation, among other things.

As it claims, Nuxt.js should give you the best of both worlds, with all the features and flexibility you need to build a static website. They also offer a Nuxt Online sandbox to let you test it directly without much effort.

You can explore its GitHub page or visit the official site to get more details.

7. Docusaurus

Docusaurus is an interesting open-source static site generator tailored for building documentation websites. It happens to be a part of Facebook’s open source initiative.

It is built using React. You get all the essential features like document versioning, document search, and translation mostly pre-configured. If you’re planning to build a documentation website for any of your products/services, this should be a good start.

You can learn more about it on its GitHub page and its official website.

8. Eleventy

Eleventy describes itself as an alternative to Jekyll and aims for a simpler approach to make faster static websites.

It seems easy to get started and it also offers proper documentation to help you out. If you want a simple static site generator that gets the job done, Eleventy seems to be an interesting choice.

You can explore more about it on its GitHub page and visit the official website for more details.

9. Publii

Publii is an impressive open-source CMS that makes it easy to generate a static site. It’s built using Electron and Vue.js. You can also migrate your posts from a WordPress site if needed. In addition to that, it offers several one-click synchronizations with GitHub Pages, Netlify, and similar services.

You also get a WYSIWYG editor if you utilize Publii to generate a static site. To get started, visit the official website to download it or explore its GitHub page for more information.

10. Primo

Primo is an interesting open-source static site generator which is still in active development. Even though it’s not a full-fledged solution with all the features of other static site generators, it is a unique project.

Primo aims to help you build and develop a site using a visual builder which can be easily edited and deployed to any host of your choice.

You can visit the official website or explore its GitHub page for more information.

Wrapping Up

There are a lot of other site generators available out there. However, I’ve tried to mention the best static site generators that give you the fastest loading times, the best security, and impressive flexibility.

Did I miss any of your favorites? Let me know in the comments below.

Rosetta@home: Help the Fight Against COVID-19 With Your Linux System

Thursday 24th of September 2020 06:42:32 AM

Want to contribute to research on the coronavirus? You don’t necessarily have to be a scientist for this. You can contribute part of your computer’s computing power thanks to the Rosetta@home project.

Sounds interesting? Let me share more details on it.

What is Rosetta@home?

Rosetta@home is a distributed computing project for protein structure prediction, based at the Baker laboratory at the University of Washington and running on the Berkeley Open Infrastructure for Network Computing (BOINC) open source platform, which was originally developed to support SETI@home.

Not enough computing power? Utilize the power of distributed computing

Predicting and designing the structures of naturally occurring proteins is very computationally intensive. To speed up the process, Dr. David Baker had filled the entire lab and the hallway with desktop computers. Then they started getting complaints about heating up the building, but they still didn’t have enough computing power to accurately predict and design protein structures.

How does Rosetta@home work?

Rosetta@home uses idle processing power from volunteers’ computers to perform calculations on individual work units. When a requested task is completed, the client sends the results to a central project server, where they are validated and incorporated into project databases.

As of 28th March 2020, the computing power of Rosetta@home had increased to 1.7 PetaFLOPS, due to recently joined users looking to participate in the fight against the COVID-19 pandemic. Thanks to that, on 26th June 2020, Rosetta@home researchers announced they had created antiviral proteins that neutralized SARS-CoV-2 in the lab.

Is BOINC platform safe?

After years of operation on millions of systems, there have been no reported security incidents due to BOINC. This fact doesn’t mean that there is no possibility of a security risk.

BOINC uses a mechanism called code signing, based on public-key cryptography, that eliminates the vulnerability, as long as projects use proper practice. Each project has a code-signing key pair consisting of a public key and a private key which is used to create “signatures” for programs. The BOINC client will only run programs with valid signatures.

Projects are instructed to keep the private key only on a computer that is permanently offline to create signatures. Therefore hackers cannot trick BOINC into running malware.

Most BOINC projects follow these practices. If you’re concerned about security, you should attach to a project only if you know it follows the code-signing procedure correctly. If in doubt, you may ask project administrators to describe how they do code signing.
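To see how signature verification protects the client, here is a sketch of the same idea using OpenSSL (this illustrates the concept only; it is not BOINC’s actual tooling):

```shell
# Generate a key pair (a project would keep the private key on an offline machine).
openssl genrsa -out project_private.pem 2048 2>/dev/null
openssl rsa -in project_private.pem -pubout -out project_public.pem 2>/dev/null
# "Release" an application binary and sign it with the private key.
echo "pretend this is a BOINC app" > app.bin
openssl dgst -sha256 -sign project_private.pem -out app.sig app.bin
# The client ships only the public key and verifies the signature before running anything.
openssl dgst -sha256 -verify project_public.pem -signature app.sig app.bin
```

The final command prints "Verified OK"; if even one byte of app.bin were tampered with, verification would fail and the client would refuse to run it.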

Contributing to Rosetta@home with BOINC platform

If you are interested in contributing to scientific research by donating some computing power, you’ll find the rest of this article helpful.

I’ll discuss the following:

  • Prerequisites for joining the BOINC platform
  • Using the BOINC platform to donate computing power to a project of your choice via your Linux desktop
  • Using a Raspberry Pi to run BOINC all the time

System Requirements of the BOINC platform

The BOINC distributed computing platform, with which you can access Rosetta@home, is available for 64-bit operating systems such as Windows, Linux, macOS, and FreeBSD.

You will need a CPU of at least 500 MHz, 200 megabytes of free disk space, 512 megabytes of RAM, and Internet connectivity.

The more CPU cores your system has, the more RAM is required, as each core will be fed a work unit.

Create a user account on BOINC platform

Before you configure the BOINC platform, create an account on the website using your computer. If you are going to use a Raspberry Pi, you can join the “crunch-on-arm” team.

Please note that the same account can be used on multiple machines at a time. All of your machines will appear under your account.

Install BOINC platform on various Linux distributions

The BOINC application has the following elements:

  • boinc-client: daemon that runs the platform
  • boinctui: terminal-based interface for selecting projects and other settings
  • boinc-manager: GUI-based interface for selecting projects and other settings

If you are using a server, you should install boinctui. If you are using a Linux desktop, you can opt for boinc-manager.

I’ll stick with the GUI tool in this part of the tutorial.

On Debian/Ubuntu

BOINC tools are available in the universe repository in Ubuntu 20.04, so make sure that you have the universe repository enabled on your Ubuntu system.

Use the following commands to install it:

sudo apt install boinc-client boinc-manager

Install BOINC on Fedora

Open a terminal and enter the following command:

sudo dnf install boinc-client boinc-manager

Install BOINC on RedHat/CentOS

First, make sure that the EPEL repository is enabled by running the following command in a terminal:

sudo yum install epel-release

You can now install the necessary packages:

sudo yum install boinc-client boinc-manager

Open the BOINC manager and add a project

After installing, open the BOINC manager. You will be asked to add a project and to create an account or log in to an existing one.

Add your credentials and click finish when prompted.

After a few minutes, the status will change to running.

You don’t need to worry about your system resources being consumed when you want to use your computer. By default, if the BOINC manager notices that the user needs more than 25% of the CPU resources, the BOINC computation will be suspended.

If you want the application to be suspended at a lower or higher CPU usage, you can change your profile settings on the website where you created your account.

Rosetta@home on a Raspberry Pi 4

An ideal device to run the Rosetta@home application 24/7 is a Raspberry Pi, which is powerful enough and has very low power consumption.

To fight COVID-19 using a Raspberry Pi 4, you need a model with 2 GB RAM or more. My personal recommendation is the 4 GB RAM option, because with my 2 GB model one of the cores is idling as it is running out of memory.

Step 1: Install Ubuntu Server (Recommended)

You need to have some operating system on your Raspberry Pi. Installing Ubuntu server on Raspberry Pi is one of the most convenient choices.

Step 2: Install BOINC platform

To install the BOINC client and the command line management interface, run the following command on the server running on the Raspberry Pi:

sudo apt install boinc-client boinctui

Additional steps for Raspberry Pi 2 GB model

Your account is set by default to utilize 90% of the memory when the user is idling. The Rosetta work units require 1.9 GB of memory to run on the quad-core Raspberry Pi, so the client may not be able to start because of this threshold. If the Raspberry Pi runs out of memory, it will suspend one of the 4 running tasks, as mentioned earlier. To override the 1.9 GB threshold, add the following lines to the file below:

sudo nano /var/lib/boinc-client/global_prefs_override.xml

Now add these lines:

<global_preferences>
  <ram_max_used_busy_pct>100.000000</ram_max_used_busy_pct>
  <ram_max_used_idle_pct>100.000000</ram_max_used_idle_pct>
  <cpu_usage_limit>100.000000</cpu_usage_limit>
</global_preferences>

This setting will increase the default memory available to Rosetta to the maximum amount of memory on the board.
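One convenient way to apply the override from the shell is to prepare the file first and then put it in place (a sketch; the commented-out commands at the end assume the usual boinc-client service and boinccmd tool shipped by the package):

```shell
# Write the override to a scratch file and sanity-check it before installing it.
cat > global_prefs_override.xml <<'EOF'
<global_preferences>
  <ram_max_used_busy_pct>100.000000</ram_max_used_busy_pct>
  <ram_max_used_idle_pct>100.000000</ram_max_used_idle_pct>
  <cpu_usage_limit>100.000000</cpu_usage_limit>
</global_preferences>
EOF
grep -c '_pct>' global_prefs_override.xml   # prints 2 (both RAM settings present)
# sudo cp global_prefs_override.xml /var/lib/boinc-client/
# boinccmd --read_global_prefs_override     # make the running client pick it up
```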

Step 3: Add Rosetta@home project

On your Raspberry Pi command line, type ‘boinctui’ and press Enter to load the terminal GUI.


Press F9 on the keyboard to bring down the menu choices. Use the arrow keys to go to Projects and press Enter.

You may notice a few available projects to choose from, but if you are interested in actively supporting the fight against COVID-19, choose Rosetta. You will be prompted to either create a user account or use an existing account.

Select “Existing User” and then enter the credentials you created on the website in the initial step. As you can see, I have already selected the Rosetta project.

It will take a moment to download the work units, and then it will automatically start crunching data on your Raspberry Pi 4!


If you want to stop using BOINC, simply delete the boinc packages you installed earlier. I believe you know how to use your distribution’s package manager for removing software.

One of the benefits of distributed computing is that it allows users to donate their system resources for the common good. Despite the grief the pandemic has spread worldwide, it can also make us realize the necessity of volunteering in one way or another.

If you ever wondered about a good use for your Raspberry Pi, Rosetta@home can be included in the list.

Let us know in the comments below if you started “crunching” and which platform you chose. Stay safe!

Meet eDEX-UI, A Sci-Fi Inspired Linux Terminal Emulator With Some Cool Features

Tuesday 22nd of September 2020 06:34:36 AM

Brief: eDEX-UI is a sci-fi inspired terminal emulator that looks cool and comes with a bunch of options like system monitoring. Here, we take a quick look at what it offers.

You probably already know plenty of fun Linux commands. You know what else can be fun when it comes to Linux command line? The terminal itself.

Yes, the terminal emulator (commonly known as terminal) can be pretty amusing as well. Remember the Cool Retro Term terminal that gives you a vintage terminal of 80s and early 90s?

How about an eye candy terminal that is heavily inspired by the TRON Legacy movie effects?

In this article, let’s take a look at an amazing cross-platform terminal emulator that can keep you drooling over your terminal!

eDEX-UI: A Cool Terminal Emulator

eDEX-UI is an open-source cross-platform terminal emulator that presents you with a sci-fi inspired look along with some useful features.

It was originally inspired by the DEX UI project. It is also worth noting that eDEX-UI is no longer maintained, but it hasn’t been completely abandoned. You can learn more about the current status of the project here.

Even though eDEX-UI is more about the looks and the futuristic theme for a terminal, it could double up as a system monitoring tool for Linux in the future if the development resumes or if someone else forks it. How? Because it shows system stats in the sidebar while you work in the terminal.

Let’s take a look at what else it offers and how to get it installed on your computer.

Features of eDEX-UI

eDEX-UI is essentially a terminal emulator. You can use it like your regular terminal for running commands and whatever else you do in the terminal.

It runs in full screen with sidebars and bottom panels to monitor system and network stats. There is also a virtual keyboard for touch devices.

I made a short video and I suggest watching this video to see this cool terminal emulator in action. Play the video with sound for the complete effect (trust me on this).

Subscribe to our YouTube channel for more Linux videos

eDEX-UI has a directory viewer on the left bottom side.

  • Multiple tabs
  • Support for curses
  • Directory viewer to show the contents of the current working directory
  • Displays system information that includes Motherboard info, Network status, IP, network bandwidth used, CPU usage, temperature of the CPU, RAM usage, and so on
  • Customization options to change the theme, keyboard layout, CSS injection
  • Optional sound effect to give you a hacking vibe
  • Cross-platform support (Windows, macOS, and Linux)

Installing eDEX on Linux


A few hours after our coverage, eDEX put up a notice that the project is not actively maintained but it is not abandoned yet. More details can be found here.

As mentioned, it supports all the major platforms, including Windows, macOS, and of course, Linux.

To install it on any Linux distribution, you can either grab the AppImage file from its GitHub releases section or find it in one of the available repositories, including the AUR.

In case you didn’t know, I’d recommend going through our guide on using AppImage in Linux.
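If you go the AppImage route, the workflow is simply download, mark executable, run (the file name below is a placeholder standing in for whatever the releases page actually gives you):

```shell
# Stand-in for the file you would download from the GitHub releases section.
touch eDEX-UI.Linux.x86_64.AppImage
# AppImages need no installation; they only need the executable bit set.
chmod +x eDEX-UI.Linux.x86_64.AppImage
# Then launch it directly:
# ./eDEX-UI.Linux.x86_64.AppImage
```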

You can visit the project on its GitHub page and if you like it, feel free to star their repository.

My experience with eDEX-UI

I liked this terminal emulator because of the sci-fi inspired look. However, I found it pretty heavy on the system resources. I didn’t check the CPU temperature on my Linux system but the CPU consumption was surely high.

So, you might have to take care of that if you want to keep it running in the background or in a separate workspace (like I do). Apart from that, it’s an impressive tool with useful options like the directory viewer and system resource monitoring.

By the way, if you just want to entertain guests and children with a hacking simulation, try Hollywood tool.

What do you think about eDEX-UI? Is it something you would like to try, or is it too kiddish/overwhelming for you?

Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension

Monday 21st of September 2020 06:57:17 AM

There is something about tiling windows that attracts many people. Perhaps it looks good or perhaps it is time-saving if you are a fan of keyboard shortcuts in Linux. Or maybe it’s the challenge of using the uncommon tiling windows.

Tiling Windows in Linux

From i3 to Sway, there are so many tiling window managers available for Linux desktop. Configuring a tiling window manager itself requires a steep learning curve.

This is why projects like Regolith desktop exist to give you preconfigured tiling desktop so that you can get started with tiling windows with less effort.

Let me introduce you to a similar project named Material Shell that makes using the tiling feature even easier than Regolith.

Material Shell GNOME Extension: Convert GNOME desktop into a tiling window manager

Material Shell is a GNOME extension, and that’s the best thing about it. This means that you don’t have to log out and log in to another desktop environment or window manager. You can enable or disable it from within your current session.

I’ll list the features of Material Shell but it will be easier to see it in action:

Subscribe to our YouTube channel for more Linux videos

The project is called Material Shell because it follows the Material Design guideline and thus gives the applications an aesthetically pleasing interface. Here are its main features:

Intuitive interface

Material Shell adds a left panel for quick access. On this panel, you can find the system tray at the bottom and the search and workspaces on the top.

All new apps are added to the current workspace. You can create a new workspace and switch to it to organize your running apps into categories. This is the essential concept of workspaces anyway.

In Material Shell, every workspace can be visualized as a row with several apps rather than a box with several apps in it.

Tiling windows

In a workspace, you can see all your opened applications at the top all the time. By default, applications open to take up the entire screen, like they do on the regular GNOME desktop. You can change the layout to split in half, into multiple columns, or into a grid of apps using the layout changer in the top right corner.

This video shows all the above features at a glance:

Persistent layout and workspaces

That’s not it. Material Shell also remembers the workspaces and windows you open so that you don’t have to reorganize your layout again. This is a good feature to have as it saves time if you are particular about which application goes where.

Hotkeys/Keyboard shortcut

Like any tiling windows manager, you can use keyboard shortcuts to navigate between applications and workspaces.

  • Super+W Navigate to the upper workspace.
  • Super+S Navigate to the lower workspace.
  • Super+A Focus the window at the left of the current window.
  • Super+D Focus the window at the right of the current window.
  • Super+1, Super+2 … Super+0 Navigate to specific workspace
  • Super+Q Kill the current window focused.
  • Super+[MouseDrag] Move window around.
  • Super+Shift+A Move the current window to the left.
  • Super+Shift+D Move the current window to the right.
  • Super+Shift+W Move the current window to the upper workspace.
  • Super+Shift+S Move the current window to the lower workspace.
Installing Material Shell


Tiling windows could be confusing for many users. You should be familiar with GNOME Extensions to use it. Avoid trying it if you are absolutely new to Linux or if you are easily panicked when anything changes in your system.

Material Shell is a GNOME extension. So, please check your desktop environment to make sure you are running GNOME 3.34 or higher version.


Apart from that, I noticed that disabling Material Shell removes the top bar from Firefox and the Ubuntu dock. You can get the dock back by disabling and re-enabling the Ubuntu dock extension from the Extensions app in GNOME. I haven’t tried it, but I guess these problems should also go away after a system reboot.

I hope you know how to use GNOME extensions. The easiest way is to just open this link in your browser, install the GNOME extension browser plugin, and then enable the Material Shell extension.

If you don’t like it, you can disable it from the same extension link you used earlier or use the GNOME Extensions app:

To tile or not?

I use multiple screens and I found that Material Shell doesn’t work well with multiple monitors. This is something the developer(s) can improve in the future.

Apart from that, it’s really easy to get started with tiling windows using Material Shell. If you try Material Shell and like it, show your appreciation by giving the project a star or sponsoring it on GitHub.

For some reason, tiling windows are getting popular. The recently released Pop!_OS 20.04 also added tiling window features.

But as I mentioned previously, tiling layouts are not for everyone, and they could confuse many people.

How about you? Do you prefer tiling windows or you prefer the classic desktop layout?

Linux Jargon Buster: What is a Rolling Release Distribution?

Sunday 20th of September 2020 06:50:24 AM

After understanding what Linux is and what a Linux distribution is, when you start using Linux, you might come across the term ‘rolling release’ in Linux forum discussions.

In this Linux jargon buster, you’ll learn about rolling release model of Linux distributions.

What is a rolling release distribution?

In software development, rolling release is a model where updates to software are continuously rolled out rather than in batches of versions. This way, the software always remains up-to-date. A rolling release distribution follows the same model: it provides the latest Linux kernel and software versions as they are released.

Arch Linux is the most popular example of a rolling release distribution; however, Gentoo is the oldest rolling release distribution still in development.

When you use a rolling release distribution, you get small but frequent updates. There are no major XYZ version releases here like in Ubuntu. You regularly update Arch or another rolling release distribution, and you’ll always have the latest version of your distribution.

The rolling release model also comes at the cost of testing. You may have surprises when the latest update starts creating problems for your system.

Rolling release vs point release distributions

Many Linux distributions like Debian, Ubuntu, Linux Mint, and Fedora follow the point release model. They release a major XYZ version every few months or years.

The point release consists of new versions of the Linux kernel, desktop environments and other software.

When a new major version of a point release distribution comes out, you’ll have to make a special effort to upgrade your system.

On the contrary, you keep on getting new feature updates in a rolling release distribution as they are released by the developers. This way, you don’t need to do a version upgrade after some months or years. You always have the latest stuff.

Oh.. but my Ubuntu also gets regular updates, almost on a weekly basis. Does it mean Ubuntu is also rolling release?

No. Ubuntu is not rolling release. You see, the updates you usually get from Ubuntu are security and maintenance updates (except for some software like Mozilla Firefox), not new feature releases.

For example, GNOME 3.38 has been released, but Ubuntu LTS release 20.04 won’t give you GNOME 3.38. It will stick to the 3.36 version. If there are security or maintenance updates to GNOME 3.36, you’ll get them with your Ubuntu updates.

Same goes for the LibreOffice release. Ubuntu 20.04 LTS sticks with LibreOffice 6.x series whereas LibreOffice 7 is already out there. Keep in mind that I am talking about software versions available in the official repositories. You are free to download a newer version of LibreOffice from their official website or use a PPA. But you won’t get it from Ubuntu’s repositories.

When Ubuntu releases the next version Ubuntu 20.10, you’ll get LibreOffice 7 and GNOME 3.38.

Why do some rolling release distributions have ‘version numbers’ and release names?

That’s a fair question. Arch Linux is a rolling release distribution which always keeps your system updated, and yet you’ll see something like the Arch Linux 2020.9.01 version number.

Now imagine you installed Arch Linux in 2018. You regularly update your Arch Linux system and so you have all the latest kernel and software in September 2020.

But what happens if you decide to install Arch Linux in September 2020 on a new system? If you use the same installation media you used in 2018, you’ll have to install all the system updates released in the last two years or more. That’s inconvenient, isn’t it?

This is why Arch Linux (and other rolling release distributions) provide a new ISO (OS installer image file) with all the latest software every month or every few months. This is called ISO refresh. Thus, new users get a more recent copy of the Linux distribution.

If you are already using a rolling release distribution, you don’t need to worry about the new refreshed ISO. Your system is already at par with it. The ISO refresh is helpful to people who are going to install the distribution on a new system.

Pros and cons of rolling release distributions

The benefit of the rolling release model is that you get small but more frequent updates. You always have the latest kernel and the latest software releases available from your distribution’s repositories.

However, this could also bring unforeseen problems with the new software. Point release distributions usually test essential components for system integration to avoid inconvenient bugs. This is not the case with rolling release distributions, where software is rolled out as soon as it is released by its developers.

Should you use rolling release or point release distribution?

That’s up to you. If you are a new Linux user or if you are not comfortable troubleshooting your Linux system, stick with a point release distribution of your choice. This is also recommended for your production and mission-critical machines. You would want a stable system here.

If you want the latest and greatest Linux kernel and software, and you are not afraid of spending some time troubleshooting (it happens from time to time), then you may choose a rolling release distribution.

At this point, I would also like to mention the hybrid rolling release model of Manjaro Linux. Manjaro does follow a rolling release model where you don’t have to upgrade your system to a newer version. However, Manjaro also tests the essential software components instead of just blindly rolling them out to the users. This is one of the reasons why so many people use Manjaro Linux.

Was it clear enough?

I hope you have a slightly better understanding of the term ‘rolling release distribution’ now. If you still have some doubts about it, please leave a comment and I’ll try to answer. I might update the article to cover your questions. Enjoy :)

Deepin 20 Review: The Gorgeous Linux Distro Becomes Even More Beautiful (and Featureful)

Friday 18th of September 2020 03:55:01 AM

Deepin is one of the most beautiful Linux distributions based on the stable branch of Debian and with the latest release of version 20, it’s better than ever before.

There are a bunch of changes and visual improvements that makes it a wonderful Linux distribution.

In this article, I’m going to take a look at Deepin 20 and let you know what it offers to help you decide if you would want to try it out!

What’s New In Deepin 20?

Visually, Deepin 20 comes with a major overhaul and looks way cleaner and more intuitive. It looks a lot like macOS, or should I say, macOS Big Sur looks like Deepin 20.

In addition to the visual improvements (for which it is known), the base repository has been upgraded to Debian 10.5. Other highlights include:

  • Dual-kernel support (Linux Kernel 5.4 LTS and Linux Kernel 5.7)
  • Personalized notification management
  • Improved system installer
  • Improved app management
  • Enhanced fingerprint recognition
  • A brand-new device manager

Of course, the list doesn’t end here. I’ll go through each of the key changes in the article as you read on.

Deepin 20 Review of New Features

Let me walk you through the most significant changes in the new release:

Visual Improvements

Overall, the look and feel of Deepin 20 is much cleaner and more beautiful compared to its previous release. The choice of the default icon theme, positioning of the menu bar, notification icon, and several other things look perfectly fine.

Of course, the rounded windows, colorful icons, animation effects, and some subtle transparency make the user experience rich.

The dark mode is now available across all the system apps and it’s now easier to switch light/dark theme per application instead of making it a system-wide change.

In addition to all the improvements, you also get the ability to tweak the experience and control a lot of things including the transparency.

Personalized Notification Management

With Deepin 20, you can also tweak/control the notifications and the alerts for each application to avoid unnecessary distractions.

Dual-Kernel Support

The dual-kernel support is a big deal and it should be exciting for users looking for the best hardware compatibility and stability with more devices. While installing Deepin 20, you will get the option to choose Linux Kernel 5.4 LTS or Linux Kernel 5.7 (Stable).

Improved System Installer

To encourage more people to try Deepin 20 and have a good experience, the system installer has seen some improvements which makes it cleaner and easier to use.

It’s also worth noting that if you have an NVIDIA graphics card, the installer will detect it and offer to install the required proprietary drivers to get things working.

App Store Improvements

Deepin 20 includes some improvements to the app store to help you manage and install new apps easily.

However, I found it a bit too slow to load up. Yes, it could be because the mirrors/resources are based in Mainland China. Maybe they could have worked around this issue by caching the icons/app information for the app store instead of loading them every time.

Overall, even though the UI improvements are fantastic, using the app store wasn’t a smooth experience. It definitely needs some work.

New App Additions & Upgrades

There are some new useful app additions like Device Manager, Draw, Font Manager, Log Viewer, Voice Notes, Screen Capture (combining Deepin Screenshot and Deepin Screen Recorder), and Cheese (to take a photo or video on your PC).

Among them, the device manager should come in pretty handy and depending on what you need, other additions should seem pretty exciting.

Tools like Document viewer, archive manager, and a couple other apps have been upgraded.

Other Fixes & Improvements

The release also includes several bug fixes and improvements for the slideshow, dock, icons in the tray area, and so on.

New icon pack additions, improvements to the fingerprint recognition UI, and some other under-the-hood improvements make up for a great experience on Deepin 20.

To get all the details, you might want to check out their official announcement post. And, to sum up, I’ll just share my thoughts on using it.

My Experience With Deepin 20

Considering all the eye candy improvements, it’s definitely an attractive offering as a Linux distribution for both enthusiasts and for users looking to switch from Windows or macOS.

It is a bit heavy on system resources, but isn’t anything crazy for a modern computer with a decent graphics card. I’m not sure how it would perform with integrated graphics, but it shouldn’t be an issue, I think — unless you have a really old chipset.

The dual-kernel makes it an interesting choice to have good stability and compatibility with a wide range of hardware.

Overall, Deepin 20 is a wonderful experience and if you’re not aiming for a lightweight distribution but an eye candy experience, you should definitely give this a try.

Have you tried Deepin 20 already? Feel free to share your thoughts in the comments below!

How to Fix “Repository is not valid yet” Error in Ubuntu Linux

Thursday 17th of September 2020 02:12:22 PM

I recently installed Ubuntu server on my Raspberry Pi. I connected it to the Wi-Fi from Ubuntu terminal and went about doing what I do after installing any Linux system which is to update the system.

When I used the ‘sudo apt update’ command, it gave me an error that was new to me. It complained that the release file for the repository was not valid for a certain time period.

E: Release file for is not valid yet (invalid for another 159d 15h 20min 52s). Updates for this repository will not be applied.

Here’s the complete output:

ubuntu@ubuntu:~$ sudo apt update
Hit:1 focal InRelease
Get:2 focal-updates InRelease [111 kB]
Get:3 focal-backports InRelease [98.3 kB]
Get:4 focal-security InRelease [107 kB]
Reading package lists... Done
E: Release file for is not valid yet (invalid for another 21d 23h 17min 25s). Updates for this repository will not be applied.
E: Release file for is not valid yet (invalid for another 159d 15h 21min 2s). Updates for this repository will not be applied.
E: Release file for is not valid yet (invalid for another 159d 15h 21min 32s). Updates for this repository will not be applied.
E: Release file for is not valid yet (invalid for another 159d 15h 20min 52s). Updates for this repository will not be applied.

Fixing “release file is not valid yet” error in Ubuntu and other Linux distributions

The reason for the error is the difference in the time on the system and the time in real world.

You see, every repository file is signed on some date and you can see this information by viewing the release file:

sudo head /var/lib/apt/lists/ports.ubuntu.com_ubuntu_dists_focal_InRelease

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Origin: Ubuntu
Label: Ubuntu
Suite: focal
Version: 20.04
Codename: focal
Date: Thu, 23 Apr 2020 17:33:17 UTC
Architectures: amd64 arm64 armhf i386 ppc64el riscv64 s390x

Now, for some reason, the time on my Ubuntu server was in the past, and this is why Ubuntu complained that the release file is not valid yet for X many days.

If you are connected to the internet, you may wait a few minutes for your system to synchronize the time.

If it doesn’t work, you may force the system to use local time as real time clock (hardware clock):

sudo timedatectl set-local-rtc 1

The timedatectl command enables you to configure time, date and change timezone on Linux.

You shouldn’t need to restart. It works immediately and you can verify it by updating your Ubuntu system again.

If the problem is solved, you may set the real time clock to use UTC (as recommended by Ubuntu).

sudo timedatectl set-local-rtc 0

Did it fix the issue for you?

I hope this quick tip helped you to fix this error. If you are still facing the issue, let me know in the comment section and I’ll try to help you out.

GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes

Wednesday 16th of September 2020 03:48:52 PM

GNOME 3.36 brought some much-needed improvements along with a major performance boost. Now, after 6 months, we’re finally here with GNOME 3.38 with a big set of changes.

GNOME 3.38 Key Features

Here are the main highlights of GNOME 3.38, codenamed ‘Orbis’:

Customizable App Menu

The app grid or the app menu will now be customizable as part of a big change in GNOME 3.38.

Now, you can create folders by dragging application icons over each other, move icons in and out of folders, and reposition them anywhere you want in the app grid.

Also, these changes are some basic building blocks for upcoming design changes planned for future updates — so it’ll be exciting to see what we can expect.

Calendar Menu Updates

The notification area is a lot cleaner with the recent GNOME updates but now with GNOME 3.38, you can finally access calendar events right below the calendar area to make things convenient and easy to access.

It’s not a major visual overhaul, but there are a few improvements to it.

Parental Controls Improvement

You will find a parental control service as a part of GNOME 3.38. It integrates with various components of the desktop (the shell, the settings, and others) to help you limit what a user can access.

The Restart Button

Some subtle improvements lead to massive changes and this is exactly one of those changes. It’s always annoying to click on the “Power Off” / “Shut down” button first and then hit the “Restart” button to reboot the system.

So, with GNOME 3.38, you will finally notice a “Restart” entry as a separate button, which will save you a click and give you peace of mind.

Screen Recording Improvements

GNOME Shell’s built-in screen recorder is now a separate system service, which should make recording the screen a smoother experience.

Also, window screencasting has received several improvements along with some bug fixes.

GNOME apps Updates

The GNOME Calculator has received a lot of bug fixes. In addition to that, you will also find some major changes to the Epiphany GNOME browser.

GNOME Boxes now lets you pick the OS from a list of operating systems and GNOME Maps was updated with some UI changes as well.

Not just limited to these, you will also find subtle updates and fixes to GNOME control center, Contacts, Photos, Nautilus, and some other packages.

Performance & multi-monitor support improvements

There are a bunch of under-the-hood improvements in GNOME 3.38 across the board. For instance, there were some serious fixes to Mutter, which now lets two monitors run at different refresh rates.

Previously, if you had one monitor with a 60 Hz refresh rate and another with 144 Hz, the one with the slower rate would limit the faster one. But, with the improvements in GNOME 3.38, multiple monitors are handled without limiting any of them.

Also, benchmarks reported by Phoronix pointed to around 10% lower render times in some cases. So, that’s definitely a great performance optimization.

Miscellaneous other changes
  • Battery percentage indicator
  • Restart option in the power menu
  • New welcome tour
  • Fingerprint login
  • QR code scanning for sharing Wi-Fi hotspot
  • Privacy and other improvements to GNOME Browser
  • GNOME Maps is now responsive and changes its size based on the screen
  • Revised icons

You can find a detailed list of changes in their official changelog.

Wrapping Up

GNOME 3.38 is indeed an impressive update to improve the GNOME experience. Even though the performance was greatly improved with GNOME 3.36, further optimization in GNOME 3.38 is a very good thing.

GNOME 3.38 will be available in Ubuntu 20.10 and Fedora 33. Arch and Manjaro users should be getting it soon.

I think there are plenty of changes in the right direction. What do you think?

Open Usage Commons: Google’s Initiative to Manage Trademark for Open Source Projects Runs into Controversy

Wednesday 16th of September 2020 07:18:01 AM

Back in July, Google announced a new organization named Open Usage Commons. The aim of the organization is to help “projects protect their project identity through programs such as trademark management and usage guidelines”.

Google believes that “creating a neutral, independent ownership for these trademarks gives contributors and consumers peace of mind regarding their use of project names in a fair and transparent way”.

Open Usage Commons and the controversy with IBM

Everything seems good in theory, right? But soon after Google’s announcement of the Open Usage Commons, IBM raised an objection.

The problem is that Google included Istio project under Open Usage Commons. IBM is one of the founding members of the Istio project and it wants the project to be under open governance with CNCF.

On behalf of It’s FOSS, I had a quick interaction with Heikki Nousiainen, CTO at Aiven, to clear the air on the entire Open Usage Commons episode.

What is the Open Usage Commons trying to do?

Heikki Nousiainen: The stated purpose of Google’s Open Usage Commons (OUC) is to provide a neutral and independent organization for open source projects to host and manage their trademarks. By applying open source software principles to trademarks, this will provide transparency and consistency. The idea is that this will lead to a more vibrant ecosystem for end users because vendors and developers can confidently build something that relies on projects’ brands. 

Although other foundations, such as the Cloud Native Computing Foundation (CNCF) and Apache Foundation, provide some direction on trademarks, OUC provides more precision and consistency in what constitutes fair use for vendors. This avoids what has generally been left to the individual projects to decide, which has resulted in a confusing patchwork of guidelines.

Additionally, it is likely an attempt by Google to avoid situations similar to what Amazon Web Services (AWS) has faced with Elasticsearch, e.g. where trademarks have appeared to be increasingly used to prevent exactly what Google is attempting to accomplish with this foundation, relatively open use of project brand identifiers by competing vendors.

What are the problems surrounding the Commons?

Heikki Nousiainen: The main controversy surrounds the question as to why Istio was not placed under CNCF governance as IBM was clearly expecting it to be placed under an Open Governance model once it matured.

However, Open Usage Commons does not touch the governance model at all. Google, of course, has incentive to be able to trust they can utilize the recognized brands and trademarks to help customers recognize the services built on top of these familiar technologies.

How will it impact the open source world, both positive and negative impacts?

Heikki Nousiainen: It will remain to be seen what the long-term impact will be due to the fact that the only member projects are currently driven by Google. Although controversial, it doesn’t seem like the fears that Google would be able to enact effective control over member projects will materialize.

A more telling question is, “Who will be likely to participate?” One thing is for sure, this will spark a long overdue discussion on how Open Source trademarks should be used when moving from software bundles to services offered in the cloud.

Does it sound like some big players will have control over the definition of ‘open source trademarks’? 

Heikki Nousiainen: Despite all the controversy over licensing, big players in this space have been and will remain key in securing the resources and support needed for the open source community to thrive.

Although there is some self-interest here, the creation of vehicles such as this do not necessarily constitute an attempt at imposing unjustified control over projects. As a community-driven software, all must work alongside one another to achieve success.

Personally, I think Google’s long term game plan is to protect its Google Cloud Platform from possible lawsuits over the use of popular source projects’ trademarks and branding.

What do you think of the entire Open Usage Commons episode?

How to Run Multiple Linux Commands at Once in Linux Terminal [Essential Beginners Tip]

Tuesday 15th of September 2020 06:58:11 AM

Running two or more commands in one line can save you a good deal of time and help you become more efficient and productive in Linux.

There are three ways you can run multiple commands in one line in Linux:

  • ; — Command 1 ; Command 2 — Run command 1 first and then command 2
  • && — Command 1 && Command 2 — Run command 2 only if command 1 ends successfully
  • || — Command 1 || Command 2 — Run command 2 only if command 1 fails

Let me show you in detail how you can chain commands in Linux.

Using ; to run multiple Linux commands in one line

The simplest of them all is the semicolon (;). You just combine several commands that you want to run using ; in the following fashion:

cmd1; cmd2; cmd3

Here, cmd1 will run first. Irrespective of whether cmd1 runs successfully or with error, cmd2 will run after it. And when cmd2 command finishes, cmd3 will run.

Let’s take an example you can practice easily (if you want to).

mkdir new_dir; cd new_dir; pwd

In the above command, you first create a new directory named new_dir with mkdir command. Then you switch to this newly created directory using cd command. Lastly you print your current location with pwd command.

Running Multiple Commands Linux with ;

The space after semicolon (;) is optional but it makes the chain of commands easily readable.

Using && to run multiple Linux commands

Sometimes you want to ensure that, in a chain of Linux commands, the next command runs only when the previous command ends successfully. This is where the logical AND operator && comes into the picture:

cmd1 && cmd2 && cmd3

If you use Ubuntu or Debian-based distributions, you must have come across this command that utilizes the && concept:

sudo apt update && sudo apt upgrade

Here the first command (sudo apt update) first refreshes the package database cache. If there is no error, it will then upgrade all the packages that have newer versions available.

Let’s take the earlier example. If new_dir already exists, the mkdir command will return an error. The difference in the behavior of ; and && can be seen in the screenshot below:

Did you see how the commands separated by && stopped when the first command resulted in an error?
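You can reproduce this behavior yourself without the screenshot. A small sketch, run from an empty directory:

```shell
mkdir new_dir                          # create the directory once
mkdir new_dir && echo "ran after &&"   # mkdir fails, so && stops the chain
mkdir new_dir ; echo "ran after ;"     # mkdir fails, but ; continues anyway
rmdir new_dir                          # clean up
```

Only “ran after ;” is printed, because && refuses to continue past the failed mkdir.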

Using || to run several Linux commands at once

You can use the logical OR operator (||) to run a chain of commands but the next command only runs when the previous command ends in error. This is opposite to what you saw with &&.

cmd1 || cmd2 || cmd3

If cmd1 fails, cmd2 runs. If cmd2 runs successfully, cmd3 won’t run.

In the screenshot above, mkdir new_dir command fails because new_dir already exists. Since this command fails, the next command cd new_dir is executed successfully. And now that this command has run successfully, the next command pwd won’t run.

Bonus Tip: Combine && and || operators

You may combine the operators to run two or more Linux commands.

If you combine three commands with && and ||, the chain behaves much like the ternary operator in C/C++ (condition ? expression_true : expression_false).

cmd1 && cmd2 || cmd3

For example, you can check if a file exists in bash, and print messages accordingly.

[ -f file.txt ] && echo "File exists" || echo "File doesn't exist"

Run the above command before and after creating the file.txt file to see the difference:
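One caveat worth knowing: cmd1 && cmd2 || cmd3 is not a true ternary, because cmd3 also runs if cmd2 itself fails, even when the condition was true. A quick sketch to see it (the file name is just illustrative):

```shell
touch /tmp/demo.txt

# The test succeeds, but the && branch (false) fails,
# so the || branch runs even though the condition was true
[ -f /tmp/demo.txt ] && false || echo "the || branch ran anyway"

rm /tmp/demo.txt
```

In practice this rarely matters when the middle command is a simple echo, but keep it in mind in scripts.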

Like copy-paste in Linux terminal, running multiple commands at once is also one of the many Linux command line tips for saving time. Though elementary, it is an essential concept any Linux terminal user should know.

You can use ;, && and || to run multiple commands in bash scripts as well.

I hope you liked this terminal trick. Stay tuned for more Linux command tips and tools published every Tuesday under the #TerminalTuesday series.

KeePassXC is An Amazing Community Driven Open Source Password Manager [Not Cloud Based]

Monday 14th of September 2020 03:57:59 PM

Brief: KeePassXC is a useful open-source cross-platform password manager that doesn’t compromise on features even if it’s not a cloud-based tool. Here, we take a quick look at it.

KeePassXC: A Cross-Platform Open Source Password Manager

KeePassXC is a community fork of KeePassX, which aims to be a cross-platform port of KeePass Password Safe (available for Windows). It is completely free to use and cross-platform as well (Windows, Linux, and macOS).

In fact, it is one of the best password managers for Linux out there. It features options for both newbies and power users who want advanced controls to secure their password database on their system.

Yes, unlike my favorite Bitwarden password manager, KeePassXC is not cloud-based and the passwords never leave the system. Some users do prefer to not save their passwords and secrets in cloud servers.

You should find all the essential features you will ever need on a password manager when you start using it. But, here, to give you a head start, I’ll highlight some features offered.

Features of KeePassXC

It is worth noting that the features might prove to be a little overwhelming for a newbie. But, considering that you want to make the most out of it, I think you should actually know what it offers:

  • Password Generator
  • Ability to import passwords from 1Password, KeePass 1, and any CSV files
  • Easily share databases by exporting and synchronizing with SSL certificate support
  • Database Encryption supported (256 bit AES)
  • Browser Integration Available (optional)
  • Ability to search for your credentials
  • Auto-type passwords into applications
  • Database reports to check password health and other stats
  • Supports exporting to CSV and HTML
  • 2-factor authentication token support
  • Attach files to passwords
  • YubiKey support
  • Command line option available
  • SSH Agent integration available
  • Change encryption algorithms if required
  • Ability to use DuckDuckGO to download the website icons
  • Database auto-lock timeout
  • Ability to clear the clipboard and the search query
  • Auto-file save
  • Folder/Nested Folder support
  • Set expiration of a credential
  • Dark theme available
  • Cross-platform support

As you can observe, it is a feature-rich password manager indeed. So, I’d advise you to properly explore it if you want to utilize every option present.

Installing KeePassXC on Linux

You should find it listed in the software center of the distribution you’ve installed.

You can also get the AppImage file from the official website. I’d suggest you check out our guide on using AppImage files in Linux if you are not familiar with them already.

In either case, you will also find a snap available for it. In addition to that, you also get an Ubuntu PPA, Debian package, Fedora package, and Arch package.

If you’re curious, you can just explore the official download page for the available packages and check out their GitHub page for the source code as well.

Wrapping Up

If you’re not a fan of cloud-based open-source password managers like Bitwarden, KeePassXC should be an excellent choice for you.

The number of options that you get here lets you keep your password secure and easy to maintain across multiple platforms. Even though you don’t have an “official” mobile app from the developer team, you may try some of their recommended apps which are compatible with the database and offer the same functionalities.

Have you tried KeePassXC yet? What do you prefer using as your password manager? Let me know your thoughts in the comments below.

Linux Jargon Buster: What is a Long Term Support (LTS) Release? What is Ubuntu LTS?

Sunday 13th of September 2020 06:52:54 AM

In the Linux world, especially when it comes to Ubuntu, you’ll come across the term LTS (long term support).

If you’re an experienced Linux user, you probably know the various aspects of a Linux distribution like an LTS release. But, new users or less tech-savvy users may not know about it.

In this chapter of Linux Jargon Buster, you’ll learn what an LTS release means for Linux distributions.

What is a Long Term Support (LTS) Release?

A Long-Term Support (LTS) release is normally associated with an application or an operating system for which you get security, maintenance and (sometimes) feature updates for a longer duration of time.

The LTS versions are considered the most stable releases; they undergo extensive testing and usually include years of improvements along the way.

It is important to note that an LTS version of software does not necessarily involve feature updates unless there’s a newer LTS release. But, you will get the necessary bug fixes and security fixes in the updates of a Long Term Support version.

An LTS release is recommended for consumers, businesses, and enterprises running production systems because you get years of software support and no system-breaking changes with software updates.

If you notice a non-LTS release for any software, it is usually the bleeding-edge version of it with new features and a short span of support (say 6-9 months) when compared to 3-5 years of support on an LTS release.

To give you more clarity on LTS and non-LTS releases, let’s take a look at some pros and cons of choosing an LTS release.

Pros of LTS releases
  • Software updates with security and maintenance fixes for a long time (5 year support for Ubuntu).
  • Extensive testing
  • No system-breaking changes with software updates
  • You get plenty of time to prep your system for the next LTS release
Cons of LTS releases
  • Does not offer the latest and greatest features
  • You may miss out on the latest hardware support
  • You may also miss out on the latest application upgrades

Now that you know what an LTS release is, along with its pros and cons, it’s time to learn about Ubuntu’s LTS releases. Ubuntu is one of the most popular Linux distributions and one of the few distributions that have both LTS and non-LTS releases.

This is why I decided to dedicate an entire section to it.

What is Ubuntu LTS?

Ubuntu has had a non-LTS release every six months and an LTS release every two years since 2006, and that’s not going to change.

The latest LTS release is Ubuntu 20.04, and it will be supported until April 2025. In other words, Ubuntu 20.04 will receive software updates till then. The non-LTS releases are supported for nine months only.

You will always find an Ubuntu LTS release labelled as “LTS”. At least, this is the case on the official Ubuntu website listing the latest Ubuntu releases.

To give you some clarity: if you see Ubuntu 16.04 LTS, it means it was released back in April 2016 and is supported until 2021 (considering 5 years of software updates).

Similarly, you can work out the update support for each Ubuntu LTS release by adding five years to its release date.

Ubuntu LTS software updates: What does it include?

Ubuntu LTS versions receive security and maintenance updates for the lifecycle of their release. Until the release reaches its end of life, you will get all the necessary security and bug fixes.

You will not notice any functional upgrades in an LTS release. So, if you want to try the latest experimental technologies, you may want to upgrade your Ubuntu release to a non-LTS release.

I’d suggest you refer to our latest Ubuntu upgrade guide to know more about upgrading Ubuntu.

I would also recommend reading our article on which Ubuntu version to install to clear your confusion about the different Ubuntu flavours available, like Xubuntu or Kubuntu, and how they are different.

I hope you have a better understanding of the term LTS now, especially when it comes to Ubuntu LTS. Stay tuned for more Linux jargon explainers in the future.

Shutdown Taking Too Long? Here’s How to Investigate and Fix Long Shutdown Time in Linux

Friday 11th of September 2020 07:40:17 AM

Your Linux system is taking too long to shut down? Here are the steps you can take to find out what is causing the delayed shutdown and fix the issue.

I hope you are a tad bit familiar with the sigterm and sigkill concept.

When you shut down your Linux system, it sends the sigterm and politely asks the running processes to stop. Some processes misbehave and they ignore the sigterm and keep on running.

This could cause a delay to the shutdown process as your system will wait for the running processes to stop for a predefined time period. After this time period, it sends the kill signal to force stop all the remaining running processes and shuts down the system. I recommend reading about sigterm vs sigkill to understand the difference.
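You can watch this play out with a throwaway process that ignores SIGTERM. A minimal sketch (the sleep duration is arbitrary):

```shell
# Start a process that ignores SIGTERM, simulating a misbehaving service.
# The ignored disposition set by trap "" survives the exec into sleep.
sh -c 'trap "" TERM; exec sleep 30' &
pid=$!
sleep 1                        # give the trap time to install

kill -TERM "$pid"              # the polite request: ignored
sleep 1
kill -0 "$pid" && echo "still alive after SIGTERM"

kill -KILL "$pid"              # SIGKILL cannot be trapped or ignored
```

This is exactly the escalation your system performs at shutdown, just with a much longer grace period.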

In fact, in some cases, you would see a message like ‘a stop job is running’ on the black screen.

If your system is taking too long to shut down, you can do the following:

  • Check which process/service is taking too long and if you can remove or reconfigure it to behave properly.
  • Change the default waiting period before your system force stops the running processes. [Quick and dirty fix]

I am using Ubuntu here which uses systemd. The commands and steps here are valid for any Linux distribution that uses systemd (most of them do).

Check which processes are causing long shutdown in Linux

If you want to figure out what’s wrong, you should check what happened at the last shutdown. Use this command to get the power of ‘I know what you did last session’ (pun intended):

journalctl -rb -1

The journalctl command allows you to read system logs. With options ‘-b -1’ you filter the logs for the last boot session. With option ‘-r’, the logs are shown in reverse chronological order.

In other words, the ‘journalctl -rb -1’ command shows the system logs from just before your Linux system was shut down the last time. This is what you need to analyze the long shutdown problem in Linux.

No journal logs? Here’s what you should do

If there are no journal logs, please make sure that your distribution uses systemd.

Even on some Linux distributions with systemd, the journal logs are not activated by default.

Make sure that /var/log/journal exists. If it doesn’t, create it:

sudo mkdir /var/log/journal

You should also check the content of /etc/systemd/journald.conf file and make sure that the value of Storage is set to either auto or persistent.
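A quick way to do both checks from the terminal (a sketch; note that a commented-out Storage line simply means the default applies):

```shell
# Create the persistent journal directory if it is missing
sudo mkdir -p /var/log/journal

# Inspect the Storage setting; it should be 'auto' or 'persistent'
grep -E '^#?Storage=' /etc/systemd/journald.conf

# Restart journald so a changed Storage value takes effect
sudo systemctl restart systemd-journald
```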

Do you find something suspicious in the logs? Is there a process/service refusing to stop? If yes, investigate if you could remove it without side effects or if you could reconfigure it. Don’t go blindly removing stuff here, please. You should have knowledge of the process.

Speed up shutdown in Linux by reducing default stop timeout [Quick fix]

The default wait period for the shut down is usually set at 90 seconds. Your system tries to force stop the services after this time period.

If you want your Linux system to shut down quickly, you can change this waiting period.

You’ll find all the systemd settings in the config file located at /etc/systemd/system.conf. This file is filled with lots of lines starting with #. They represent the default values of the entries in the file.

Before you do anything, it will be a good idea to make a copy of the original file.

sudo cp /etc/systemd/system.conf /etc/systemd/system.conf.orig

Look for DefaultTimeoutStopSec here. It should probably be set to 90 sec.


You have to change this value to something more convenient like 5 or 10 seconds.


If you don’t know how to edit the config file in terminal, use this command to open the file for editing in your system’s default text editor (like Gedit):

sudo xdg-open /etc/systemd/system.conf

Don’t forget to remove the # before DefaultTimeoutStopSec. Save the file and reboot your system.
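If you prefer staying in the terminal, a single sed command can uncomment the entry and set the value in one go. This is a sketch; it assumes the stock commented-out entry and that you made the backup suggested earlier:

```shell
# Uncomment DefaultTimeoutStopSec and set it to 10 seconds
sudo sed -i 's/^#\?DefaultTimeoutStopSec=.*/DefaultTimeoutStopSec=10s/' /etc/systemd/system.conf

# Verify the change
grep '^DefaultTimeoutStopSec=' /etc/systemd/system.conf
```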

This should help you reduce the shutdown delay for your Linux system.

Watchdog issue!

Linux has a module named watchdog that is used for monitoring whether certain services are running or not. It can be configured to automatically reboot the system if it hangs due to a software error.

It is unusual to use Watchdog on desktop systems because you can manually shutdown or reboot the system. It is often used on remote servers.

First, check whether watchdog is running:

ps -af | grep watchdog

If watchdog is running on your system, you can change the ShutdownWatchdogSec value from 10 minutes to something lower in the systemd config file /etc/systemd/system.conf.
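Once uncommented, the relevant line in /etc/systemd/system.conf might read as follows (3 minutes here is just an example value):

```
ShutdownWatchdogSec=3min
```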


Recommended Read:

Find Out How Long Does it Take To Boot Your Linux System

How long does your Linux system take to boot? Here’s how to find it out with the systemd-analyze command.

Were you able to fix the lengthy shutdown?

I hope this tutorial helped you in investigating and fixing the long shutdown issue on your system. Do let me know in the comments if you managed to fix it.

The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices

Thursday 10th of September 2020 07:24:42 AM

If you are extra cautious about securing your online accounts with the best possible authentication method, you probably know about Yubico. They make hardware authentication security keys to replace two-factor authentication and get rid of the password authentication system for your online accounts.

Basically, you just plug the security key on your computer or use the NFC on your smartphone to unlock access to accounts. In this way, your authentication method stays completely offline.

Of course, you can always use one of the good password managers for Linux available out there. But if you own or work for a business, or are just extra cautious about your privacy and security and want an additional layer of protection, these hardware security keys could be worth a try. These devices have gained some popularity lately.

Yubico’s latest product, the ‘YubiKey 5C NFC‘, is impressive because it can be used both as a USB Type-C key and via NFC (just touch your device with the key).

Here, let’s take a look at an overview of this security key.

Please note that It’s FOSS is an affiliate partner of Yubico. Please read our affiliate policy.

Yubico 5C NFC: Overview

YubiKey 5C NFC is the latest offering that uses both USB-C and NFC. So, you can easily plug it in on Windows, macOS, and Linux computers. In addition to the computers, you can also use it with your Android or iOS smartphones or tablets.

Not just limited to USB-C and NFC support (which is a great thing), it also happens to be the world’s first multi-protocol security key with smart card support as well.

Hardware security keys aren’t that common because of their cost for an average consumer. But, amidst the pandemic, with the rise of remote work, a safer authentication system will definitely come in handy.

Here’s what Yubico mentioned in their press release:

“The way that people work and go online is vastly different today than it was a few years ago, and especially within the last several months. Users are no longer tied to just one device or service, nor do they want to be. That’s why the YubiKey 5C NFC is one of our most sought-after security keys — it’s compatible with a majority of modern-day computers and mobile phones and works well across a range of legacy and modern applications. At the end of the day, our customers crave security that ‘just works’ no matter what.”  said Guido Appenzeller, Chief Product Officer, Yubico.

The protocols that YubiKey 5C NFC supports are FIDO2, WebAuthn, FIDO U2F, PIV (smart card), OATH-HOTP and OATH-TOTP (hash-based and time-based one-time passwords), OpenPGP, YubiOTP, and challenge-response.

Considering all those protocols, you can easily secure any online account that supports hardware authentication while also having the ability to access identity access management (IAM) solutions. So, it’s a great option for both individual users and enterprises.

Pricing & Availability

The YubiKey 5C NFC costs $55. You can order it directly from their online store or get it from any authorized reseller in your country. The cost might also vary depending on shipping charges, but $55 seems to be a sweet spot for serious users who want the best level of security for their online accounts.

It’s also worth noting that you get volume discounts if you order more than two YubiKeys.

Order YubiKey 5C NFC

Wrapping Up

No matter whether you want to secure your cloud storage account or any other online account, Yubico’s latest offering is something that’s worth taking a look at if you don’t mind spending some money to secure your data.

Have you ever used YubiKey or some other secure key like LibremKey etc? How is your experience with it? Do you think these devices are worth spending the extra money?

How to Connect to WiFi from the Terminal in Ubuntu Linux

Tuesday 8th of September 2020 03:04:31 PM

In this tutorial, you’ll learn how to connect to wireless network from the terminal in Ubuntu. This is particularly helpful if you are using Ubuntu server where you don’t have access to the regular desktop environment.

I primarily use desktop Linux on my home computers. I also have multiple Linux servers for hosting It’s FOSS and related websites and open source software like Nextcloud, Discourse, Ghost, Rocket Chat etc.

I use Linode for quickly deploying Linux servers in the cloud in minutes. But recently, I installed Ubuntu server on my Raspberry Pi. This is the first time I installed a server on a physical device and I had to do some extra work to connect the Ubuntu server to WiFi via the command line.

In this tutorial, I’ll show the steps to connect to WiFi using terminal in Ubuntu Linux. You should

  • not be afraid of using terminal to edit files
  • know the wifi access point name (SSID) and the password
Connect to WiFi from terminal in Ubuntu

It is easy when you are using Ubuntu desktop because you have the GUI to easily do that. It’s not the same when you are using Ubuntu server and restricted to the command line.

Ubuntu uses the Netplan utility for easily configuring networking. In Netplan, you create a YAML file with the description of the network interface, and with the help of the netplan command line tool, you generate all the required configuration.

Let’s see how to connect to wireless networking from the terminal using Netplan.

Step 1: Identify your wireless network interface name

There are several ways to identify your network interface name. You can use the ip command, the deprecated ifconfig command, or check this file:

ls /sys/class/net

This should give you all the available network interfaces (Ethernet, wifi and loopback). Wireless network interface names start with ‘w’ and usually look similar to wlan0 or wlp2s0.

abhishek@itsfoss:~$ ls /sys/class/net
eth0  lo  wlan0

Take a note of this interface name. You’ll use it in the next step.
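Since wireless interface names begin with ‘w’, you can also filter the listing down to just the wireless interfaces:

```shell
# Keep only the interface names that begin with 'w'
ls /sys/class/net | grep '^w'
```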

Step 2: Edit the Netplan configuration file with the wifi interface details

The Netplan configuration file resides in the /etc/netplan directory. If you check the contents of this directory, you should see files like 01-network-manager-all.yaml or 50-cloud-init.yaml.

If it is Ubuntu server, you should have cloud-init file. For desktops, it should be network-manager file.

The Network Manager on the Linux desktop allows you to choose a wireless network. You may hard code the wifi access point in its configuration. This could help you in some cases (like suspend) where connection drops automatically.

Whichever file it is, open it for editing. I hope you are at least a bit familiar with the Nano editor because it comes pre-installed on Ubuntu.

sudo nano /etc/netplan/50-cloud-init.yaml

YAML files are very sensitive about spaces, indentation and alignment. Don’t use tabs; use 4 (or 2, whichever is already used in the YAML file) spaces instead wherever you see an indentation.

Basically, you’ll have to add the following lines with the access point name (SSID) and its password (usually) in quotes:

wifis:
    wlan0:
        dhcp4: true
        optional: true
        access-points:
            "SSID_name":
                password: "WiFi_password"

Again, keep the alignment as I have shown or else YAML file won’t be parsed and it will throw an error.

Your complete configuration file may look like this:

# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        eth0:
            dhcp4: true
            optional: true
    version: 2
    wifis:
        wlan0:
            dhcp4: true
            optional: true
            access-points:
                "SSID_name":
                    password: "WiFi_password"

I find it strange that despite the message that changes will not persist across an instance reboot, it still works.

Anyway, generate the configuration using this command:

sudo netplan generate

And now apply this:

sudo netplan apply

If you are lucky, you should have the network connected. Try to ping a website or run the apt update command.

However, things may not go as smoothly and you may see some errors. Try some extra steps if that’s the case.

Possible troubleshooting

It is possible that when you use the netplan apply command, you see an error in the output that reads something like this:

Failed to start netplan-wpa-wlan0.service: Unit netplan-wpa-wlan0.service not found.
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan/cli/", line 50, in main
    self.run_command()
  File "/usr/share/netplan/netplan/cli/", line 179, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/", line 46, in run
    self.run_command()
  File "/usr/share/netplan/netplan/cli/", line 179, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/", line 173, in command_apply
    utils.systemctl_networkd('start', sync=sync, extra_services=netplan_wpa)
  File "/usr/share/netplan/netplan/cli/", line 86, in systemctl_networkd
    subprocess.check_call(command)
  File "/usr/lib/python3.8/", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', '--no-block', 'systemd-networkd.service', 'netplan-wpa-wlan0.service']' returned non-zero exit status 5.

It is possible that the wpa_supplicant service is not running. Run this command:

sudo systemctl start wpa_supplicant

Run netplan apply once again. If it fixes the issue, well and good. Otherwise, shut down your Ubuntu system using:

shutdown now

Start your Ubuntu system again, log in and generate and apply netplan once again:

sudo netplan generate
sudo netplan apply

It may show a warning (instead of an error) now. I checked the running systemd services and found that netplan-wpa-wlan0.service was already running. It probably showed the warning because the service was already running and ‘netplan apply’ updated the config file (even without any changes).

Warning: The unit file, source configuration file or drop-ins of netplan-wpa-wlan0.service changed on disk. Run 'systemctl daemon-reload' to reload units.

It is not critical and you may check that the internet is probably already working by running apt update.

I hope you were able to connect to wifi using the command line in Ubuntu with the help of this tutorial. If you are still facing trouble with it, do let me know in the comment section.

How to Install Ubuntu Server on a Raspberry Pi

Tuesday 8th of September 2020 06:12:42 AM

The Raspberry Pi is the best-known single-board computer. Initially, the scope of the Raspberry Pi project was targeted to the promotion of teaching of basic computer science in schools and in developing countries.

Its low cost, portability and very low power consumption made the models far more popular than anticipated. From weather stations to home automation, tinkerers have built so many cool projects using the Raspberry Pi.

The 4th generation of the Raspberry Pi is equipped with the features and processing power of a regular desktop computer. But this article is not about using the RPi as a desktop. Instead, I’ll show you how to install Ubuntu server on a Raspberry Pi.

In this tutorial I will use a Raspberry Pi 4 and I will cover the following:

  • Installing Ubuntu Server on a microSD card
  • Setting up a wireless network connection on the Raspberry Pi
  • Accessing your Raspberry Pi via SSH

You’ll need the following things for this tutorial:

  • A micro SD card (8 GB or greater recommended)
  • A computer (running Linux, Windows or macOS) with a micro SD card reader
  • A Raspberry Pi 2, 3 or 4
  • Good internet connection
  • An HDMI cable for the Pi 2 & 3 and a micro HDMI cable for the Pi 4 (optional)
  • A USB keyboard set (optional)
Installing Ubuntu Server on a Raspberry Pi

I have used Ubuntu for creating the Raspberry Pi SD card in this tutorial but you may follow it on other Linux distributions, macOS and Windows as well. This is because the steps for preparing the SD card are the same with the Raspberry Pi Imager tool.

The Raspberry Pi Imager tool automatically downloads the image of the Raspberry Pi OS of your choice. This means that you need a good internet connection to download around 1 GB of data.

Step 1: Prepare the SD Card with Raspberry Pi Imager

Make sure you have inserted the microSD card into your computer, and install the Raspberry Pi Imager tool on your computer.

You can download the Imager tool for your operating system from these links:

Although I use Ubuntu, I won’t use the Debian package listed above; instead, I will install the snap package using the command line. This method can be applied to a wider range of Linux distributions.

sudo snap install rpi-imager

Once you have installed Raspberry Pi Imager tool, find and open it and click on the “CHOOSE OS” menu.

Scroll across the menu and click on “Ubuntu” (Core and Server Images).

From the available images, I chose the Ubuntu 20.04 LTS 64-bit. If you have a Raspberry Pi 2, you are limited to the 32-bit image.

Important Note: If you use the latest Raspberry Pi 4 – 8 GB RAM model, you should choose the 64-bit OS, otherwise you will be able to use only 4 GB of RAM.

Select your microSD card from the “SD Card” menu, and click on “WRITE” afterwards.

If it shows some error, try writing it again. It will now download the Ubuntu server image and write it to the micro SD card.

It will notify you when the process is completed.

Step 2: Add WiFi support to Ubuntu server

Once the micro SD card flashing is done, you are almost ready to use it. There is one thing that you may want to do before using it and that is to add Wi-Fi support. If you don’t do it right now, you’ll have to put in extra effort later to connect to wifi from the terminal in Ubuntu server.

With the SD card still inserted in the card reader, open the file manager and locate the “system-boot” partition on the card.

The file that you are looking for and need to edit is named network-config.

This process can be done on Windows and macOS too. Edit the network-config file as already mentioned to add your Wi-Fi credentials.

Firstly, uncomment (remove the hash “#” at the beginning of) the lines in the wifis section.

After that, replace myhomewifi with your Wi-Fi network name enclosed in quotation marks, such as “itsfoss” and the “S3kr1t” with the Wi-Fi password enclosed in quotation marks, such as “12345679”.

It may look like this:

wifis:
    wlan0:
        dhcp4: true
        optional: true
        access-points:
            "your wifi name":
                password: "your_wifi_password"

Save the file and insert the micro SD card into your Raspberry Pi. During the first boot, if your Raspberry Pi fails to connect to the Wi-Fi network, simply reboot your device.

Step 3: Use Ubuntu server on Raspberry Pi (if you have dedicated monitor, keyboard and mouse for Raspberry Pi)

If you have got an additional mouse, keyboard and monitor for the Raspberry Pi, you can easily use it like any other computer (but without a GUI).

Simply insert the micro SD card into the Raspberry Pi, plug in the monitor, keyboard and mouse. Now turn on your Raspberry Pi. It will present a TTY login screen (black terminal screen) and ask for a username and password.

  • Default username: ubuntu
  • Default password: ubuntu

When prompted, use “ubuntu” for the password. Right after a successful login, Ubuntu will ask you to change the default password.

Enjoy your Ubuntu Server!

Step 3: Connect remotely to your Raspberry Pi via SSH (if you don’t have monitor, keyboard and mouse for Raspberry Pi)

It is okay if you don’t have a dedicated monitor to be used with Raspberry Pi. Who needs a monitor with a server when you can just SSH into it and use it the way you want?

On Ubuntu and macOS, an SSH client is usually already installed. To connect remotely to your Raspberry Pi, you need to discover its IP address. Check the devices connected to your network and see which one is the Raspberry Pi.

I don’t have access to a Windows machine, but you can refer to the comprehensive guide provided by Microsoft.

Open a terminal and run the following command:

ssh ubuntu@raspberry_pi_ip_address

You will be asked to confirm the connection with the message:

Are you sure you want to continue connecting (yes/no/[fingerprint])?

Type “yes” and press the Enter key.

When prompted, use “ubuntu” for the password as mentioned earlier. You’ll be asked to change the password of course.

Once done, you will be automatically logged out and you have to reconnect, using your new password.

Your Ubuntu server is up and running on a Raspberry Pi!


Installing Ubuntu Server on a Raspberry Pi is an easy process, and it comes pre-configured to a great degree, which makes using it a pleasant experience.

I have to say that among all the operating systems that I tried on my Raspberry Pi, Ubuntu Server was the easiest to install. I am not exaggerating. Check my guide on installing Arch Linux on Raspberry Pi for reference.

I hope this guide helped you in installing Ubuntu server on your Raspberry Pi as well. If you have questions or suggestions, please let me know in the comment section.

Zim: A Wiki-Like Note-taking App That Makes Things Easier

Monday 7th of September 2020 02:57:28 PM

Brief: Zim is an impressive note taking app for users who want a wiki-style collection of their notes, tasks, or ideas. Here, we take a look at what it offers.

Zim: A Desktop Wiki

Zim is undoubtedly one of the best note-taking apps for Linux but it’s not just another ordinary note app that lets you add ideas/tasks and save them.

It’s tailored to help you maintain a collection of notes in the form of wiki pages. In other words, you can have a lot of notes (tasks/ideas) and link them to each other, which makes it easier to go through what you’ve added in the past.

Here, I’ll give an overview of the features you get with Zim and how to get it installed on Linux.

Features of Zim Wiki

You’ll find a lot of options as you explore; I’ll highlight the key features here:

  • Supports adding tasks
  • You can use it as a journal
  • Easy to keep archive of notes
  • Several Markup types supported
  • Auto-save
  • Easily compatible with other text editors
  • Ability to link other notes to navigate through a collection of pages
  • Ability to open embedded images using native applications
  • Supports embedded equations
  • LaTeX’s equation editor
  • Export notes as HTML to publish them as webpages
  • Easily edit the config files to tweak the color scheme of your editor
  • Keybind support to navigate just using the keyboard
  • Supports plugins to add spell checker and other useful tools
  • Basic formatting support for essential things like subscript, superscript, and so on
  • Ability to save different versions of notes (version control system)
  • Easy to utilize/create templates for quick-use
  • Cross-platform support

The feature set is definitely impressive and somewhat overwhelming for basic note-taking usage. But, depending on your usage, you should give it a try to make the most of it.

Installing Zim On Linux

You should find it listed on your software center or app center of your Linux distribution. Just search for it and get it installed.

There is also a Zim Wiki Flatpak package available. I’d recommend you check out our article on using Flatpak on Linux if you don’t know about it.

You should also find it on other repositories like AUR. Also, you can find the source code on GitHub, if you’re curious.

Zim Wiki

Wrapping Up

Zim Wiki is definitely a great note-taking app for Linux. You can also use it on your Windows or macOS system. So, you can have your collection of notes/ideas anywhere you want.

Unlike some other note-taking applications, you won’t find a mobile client for it as far as I’m aware.

Overall, it’s an interesting choice for power users with a lot of notes and ideas to keep track of. What do you think about it? Let me know your thoughts in the comments below.

Linux Jargon Buster: What is a Linux Distribution? Why is it Called a ‘Distribution’?

Sunday 6th of September 2020 06:46:37 AM

In this chapter of the Linux Jargon Buster, let’s discuss something elementary.

Let’s discuss what a Linux distribution is, why it is called a distribution (or distro) and how it is different from the Linux kernel. You’ll also learn a thing or two about why some people insist on calling Linux GNU/Linux.

What is a Linux distribution?

A Linux distribution is an operating system composed of the Linux kernel, GNU tools, additional software and a package manager. It may also include display server and desktop environment to be used as regular desktop operating system.

The term is Linux distribution (or distro in short form) because an entity like Debian or Ubuntu ‘distributes’ the Linux kernel along with all the necessary software and utilities (like network manager, package manager, desktop environments etc) so that it can be used as an operating system.

Your distribution also takes the responsibility of providing updates to maintain the kernel and other utilities.

So, Linux is the kernel whereas the Linux distribution is the operating system. This is the reason why they are also sometimes referred to as Linux-based operating systems.

Don’t worry if not all the above makes sense right away. I’ll explain it in a bit more detail.

Linux is just a kernel, not an operating system: What does it mean?

You might have come across that phrase and that’s entirely correct. The kernel is at the core of an operating system and it is close to the actual hardware. You interact with it using the applications and shell.

Linux Kernel Structure

To understand that, I’ll use the same analogy that I had used in my detailed guide on what is Linux. Think of operating systems as vehicles and kernel as engine. You cannot drive an engine directly. Similarly, you cannot use kernel directly.

Operating System Analogy

A Linux distribution can be seen as a vehicle manufacturer like Toyota or Ford that provides you ready-to-use cars, just like Ubuntu or Fedora distributions provide you ready-to-use operating systems based on Linux.

What is GNU/Linux?

Take a look at this picture once again. What Linus Torvalds created in 1991 is just the innermost circle, i.e. the Linux kernel.

Linux Kernel Structure

To use Linux even in the most primitive form (without even a GUI), you need a shell. Most commonly, it is Bash shell.

And then, you need to run some commands in the shell to do some work. Can you recall some basic Linux commands? There are cat, cp, mv, grep, find, diff, gzip and more.
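As a quick refresher, here are a few of those commands in action:

```shell
# Create a file, copy it, then count the matching lines in the copy
echo "Hello from the shell" > note.txt
cp note.txt note_backup.txt
grep -c "Hello" note_backup.txt    # prints 1 (one matching line)
```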

Technically, not all of these so called ‘Linux commands’ belong to Linux exclusively. A lot of them originate mainly from the UNIX operating system.

Even before Linux came into existence, Richard Stallman had created the GNU project (a recursive acronym for “GNU’s Not Unix”), the first free software project, in 1983. The GNU project implemented many of the popular Unix utilities like cat, grep, awk and the shell (bash) along with developing their own compilers (GCC) and editors (Emacs).

Back in the 80s UNIX was proprietary and super expensive. This is why Linus Torvalds developed a new kernel that was like UNIX. To interact with the Linux kernel, Torvalds used GNU tools which were available for free under their open source GPL license.

With the GNU tools, it also behaved like UNIX. This is the reason why Linux is also termed as UNIX-like operating system.

You cannot imagine Linux without the shell and all those commands. Since Linux integrates deeply with the GNU tools and is almost dependent on them, purists demand that GNU get its fair share of recognition. This is why they insist on calling it GNU/Linux.


So, what is the correct term? Linux, GNU/Linux, Linux distribution, Linux distro, Linux based operating system or UNIX-like operating system? I say it depends on you and the context. I have provided you enough detail so that you have a better understanding of these related terms.

I hope you are liking this Linux Jargon Buster series and learning new things. Your feedback and suggestions are welcome.

PCLinuxOS Review: This Classic Independent Linux Distribution is Definitely Worth a Look

Saturday 5th of September 2020 04:22:04 AM

Most of the Linux distributions that we cover on It’s FOSS are based on either Ubuntu or Arch.

No, we don’t have any affinity for either Ubuntu or Arch though personally, I love using Manjaro. It’s just that majority of new Linux distributions are based on these two.

While discussing within the team, we thought: why fixate on new distributions? Why not go for the classic distributions, ones that don’t belong to the DEB/Arch domain?

So, today, we are going to be looking at an independent distro that tends to go against the flow. We’ll be looking at PCLinuxOS.

What is PCLinuxOS?

Back in 2000, Bill Reynolds (also known as Texstar) created a series of packages to improve Mandrake Linux, which later became Mandriva Linux. PCLinuxOS first became a separate distro in 2003 when Texstar forked Mandrake. He said that he made the move because he wanted “to provide an outlet for my crazy desire to package source code without having to deal with egos, arrogance and politics”.

As I said earlier, PCLinuxOS does not follow the rest of the Linux world. PCLinuxOS does not use systemd. Instead, it uses SysV init and “will continue to do so for the foreseeable future“.

It also has one of the oddest package management systems I have ever encountered. PCLinuxOS uses apt and Synaptic to handle RPM packages. Unlike most distros that use apt or rpm, PCLinuxOS is a rolling release distro. It also supports Flatpak.

The PCLinuxOS team offers three different versions: KDE, MATE, and XFCE. The PCLinuxOS community has also created a number of community releases with more desktop options.

PCLinuxOS Updater

System requirements for PCLinuxOS

According to the PCLinuxOS wiki, the following hardware is recommended to run PCLinuxOS:

  • Modern Intel or AMD processor.
  • 10 GB or more free space recommended.
  • Minimum 2 GB of memory; 4 GB or more recommended.
  • Any modern video card by Nvidia, ATI, Intel, SiS, Matrox, or VIA.
  • 3D desktop support requires a 3D instructions set compatible card.
  • Any Sound Blaster, AC97, or HDA compatible card.
  • A CD or DVD drive.
  • Flash drives can also be used to install, using the PCLinuxOS-LiveUSB script made just for this purpose.
  • Generally any onboard network card will suffice.
  • A high-speed internet connection is recommended for performing any updates/software installations as necessary.
Experience with PCLinuxOS

I originally encountered PCLinuxOS when I was first entering the Linux world about 7+ years ago. Back then I was trying out distros like crazy. At the time, I didn’t quite understand it and ended up going with Lubuntu.

Recently, I was reminded of the distro when Matt Hartley, community manager at OpenShot mentioned it on the Bryan Lunduke podcast. PCLinuxOS is Hartley’s daily driver and has been for a while. Based on his comments, I decided to take another look at it.

Smooth installation

PCLinuxOS installer

The majority of Linux distros use one of three installers, Ubiquity, Anaconda, or Calamares. PCLinuxOS is one of the few that has its own installer, which it inherited from Mandrake. The installation went quickly and without any issue.

After the installation, I booted into the MATE desktop environment (because I had to). A dialog box asked me if I wanted to enable the update notifier. It’s always best to be up-to-date, so I did.

Handy set of utilities

Besides the usual list of utilities, office programs, and web tools, PCLinuxOS has a couple of interesting additions. Both Zoom (a videoconferencing tool) and AnyDesk (a remote desktop application) come pre-installed for your remote working needs. The menu also includes an option to install VirtualBox GuestAdditions (in case you installed PCLinuxOS on VirtualBox).

PCLinuxOS Control Center

PCLinuxOS comes with a control center to handle all of your system admin needs. It covers installing software, file sharing, network connections, hardware issues, and security.

Create your own custom PCLinuxOS live disk

It also comes with a couple of apps that allow you to download a new PCLinuxOS ISO, write that ISO to a disc or USB, or create your own LiveCD based on your current system.

It is easy to create your own custom PCLinuxOS ISO

No sudo in PCLinuxOS

Interestingly, PCLinuxOS doesn’t have sudo installed. According to the FAQ, “Some distros…leaving sudo in a default state where all administrator functions are allowed without the requirement to enter the root password. We consider this an unacceptable security risk.” Whenever you perform a task that requires admin privileges, a window appears asking for your password.

Strong community

One of the cool things about PCLinuxOS is its strong community. That community creates a monthly e-magazine. Each issue contains news, tutorials, puzzles, and even recipes. The only other distro (or family of distros) that has sustained a community publication for over 15 years is Ubuntu with the Full Circle Magazine. Be sure to check it out.

No hardware issues noticed (for my system)

This is one of the last distros I will review on my Dell Latitude D630. (I’m moving up to a newer Thinkpad.) One of the major problems I’ve had in the past was getting the Nvidia GPU to work correctly. I didn’t have any issues with PCLinuxOS. It just worked out of the box.

Final Thoughts

PCLinuxOS Desktop

PCLinuxOS also provides an easy way to remaster the system after installation. It allows you to create a live disk of PCLinuxOS with your customizations.

PCLinuxOS feels like part of the past and part of the present. It reflects the pre-systemd days while offering a modern desktop and apps at the same time. The only thing I would complain about is that there are fewer applications available in the repos than in more popular distros, but the availability of Flatpak and AppImages should fix that.

PCLinuxOS’ tag line is: “So cool ice cubes are jealous”. It might sound corny, but I think it’s true, especially if you aren’t a fan of the direction the rest of the Linux world has taken. If you find something lacking in the big Linux distros, check out this little old distro with a great community.

Have you ever used PCLinuxOS? What is your favorite independent distro? Please let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News or Reddit.

How to Copy Paste in Linux Terminal [For Absolute Beginners]

Friday 4th of September 2020 06:57:31 AM

I have been using Linux for a decade now, which is why I sometimes take things for granted.

Copy-pasting in the Linux terminal is one of those things.

I thought everyone already knew this until one of the It’s FOSS readers asked me this question. I gave the following suggestion to the Ubuntu user:

Use Ctrl+Insert or Ctrl+Shift+C for copying and Shift+Insert or Ctrl+Shift+V for pasting text in the terminal in Ubuntu. You can also right-click and select copy or paste from the context menu.

I thought of elaborating on this topic, especially since there is no single universal way of copying and pasting in the Linux terminal.

How to copy paste text and commands in the Linux terminal

There are several ways to do this.

Method 1: Using keyboard shortcuts for copy pasting in the terminal

On Ubuntu and many other Linux distributions, you can use Ctrl+Insert or Ctrl+Shift+C for copying text and Shift+Insert or Ctrl+Shift+V for pasting text in the terminal.

Copy-pasting also works with external sources. If you copy a command example from the It’s FOSS website (using the generic Ctrl+C keys), you can paste this command into the terminal using Ctrl+Shift+V.

Similarly, you can use Ctrl+Shift+C to copy text from the terminal and then paste it into a text editor or web browser using the regular Ctrl+V shortcut.

Basically, when you are interacting with the Linux terminal, you use Ctrl+Shift+C/V for copy-pasting.

Method 2: Using right click context menu for copy pasting in the terminal

Another way of copying and pasting in the terminal is by using the right click context menu.

Select the text in the terminal, right click and select Copy. Similarly, to paste the selected text, right click and select Paste.

Method 3: Using mouse to copy paste in Linux terminal

Another way to copy paste in Linux terminal is by using only the mouse.

You can select the text you want to copy and then press the middle mouse button (scrolling wheel) to paste the copied text.
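On X11 desktops, select-then-middle-click works because it uses the PRIMARY selection, a separate buffer from the CLIPBOARD that Ctrl+Shift+C writes to. As a rough sketch (assuming the `xclip` utility is installed, which it often isn’t by default), you can inspect both buffers:

```shell
#!/bin/sh
# PRIMARY holds whatever text is currently selected with the mouse;
# CLIPBOARD holds what was explicitly copied (e.g. Ctrl+Shift+C).
# Guarded so the snippet is harmless without xclip or an X display.
if command -v xclip >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
    xclip -o -selection primary   2>/dev/null || echo "(primary selection empty)"
    xclip -o -selection clipboard 2>/dev/null || echo "(clipboard empty)"
else
    echo "xclip or X display not available"
fi
```

This separation is why text you merely highlighted can be pasted with the middle mouse button even though Ctrl+Shift+V pastes something else.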

Please keep in mind that these methods may not work in all Linux distributions, for a specific reason that I explain in the next section.

There are no universal keyboard shortcuts for copy-paste in the Linux terminal. Here’s why!

The keybindings for copy-pasting are dependent on the terminal emulator (commonly known as terminal) you are using.

If you didn’t know already, the terminal is just an application, and you can install other terminals like Guake or Terminator.

Different terminal applications may have their own keybindings for copying and pasting like Alt+C/V or Ctrl+Alt+C/V.

Most Linux terminals use the Ctrl+Shift+C/V keys, but if those don’t work for you, you can try other key combinations or configure the keys in the preferences of your terminal emulator.

Quick word about PuTTY

If you use PuTTY on Linux or Windows, it uses an entirely different set of keybindings. In PuTTY, selecting text automatically copies it, and you can paste it with a right click.

Why Linux terminals do not use the ‘universal’ Ctrl+C and Ctrl+V for copy-pasting

No Linux terminal will give you Ctrl+C for copying text. This is because, by default, the Ctrl+C keybinding is used to send an interrupt signal (SIGINT) to the command running in the foreground. This usually stops the running command.

Using Ctrl+C stops a running command in Linux terminal

This behavior has existed since long before Ctrl+C and Ctrl+V were adopted for copy-pasting text.

Since the Ctrl+C keys are ‘reserved’ for stopping a command, it cannot be used for copying.
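You can see the SIGINT mechanism at work from a script. This is a minimal sketch: it installs a handler for SIGINT (the signal Ctrl+C asks the terminal to deliver) and then sends the signal to itself, so the handler fires instead of the script being killed.

```shell
#!/bin/sh
# trap registers a handler for SIGINT, the signal that Ctrl+C
# sends to the foreground process. Without the trap, the signal
# would terminate this script.
trap 'echo "caught SIGINT"' INT

kill -INT $$        # send SIGINT to ourselves, simulating Ctrl+C
echo "still running" # reached because the trap absorbed the signal
```

Running the script prints "caught SIGINT" followed by "still running", whereas without the `trap` line it would simply die at the `kill`.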

Used Ctrl+S and hung the terminal?

Most of us use the Ctrl+S keys to save changes made to text, images, etc. This shortcut is almost as universal for saving as Ctrl+C is for copying.

However, if you press Ctrl+S in the Linux terminal, it will freeze the terminal. There is no need to close the terminal and start it again: you can use Ctrl+Q to unfreeze it.

Ctrl+S and Ctrl+Q are shortcut keys for flow control.
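In flow-control terms, Ctrl+S sends XOFF (pause output) and Ctrl+Q sends XON (resume); this is the `ixon` setting of the terminal line discipline. A hedged sketch of how you could turn it off entirely, so Ctrl+S never freezes your terminal (the tty check keeps the snippet safe in non-interactive scripts):

```shell
#!/bin/sh
# 'stty -ixon' disables XON/XOFF flow control on the current terminal,
# freeing Ctrl+S for other uses. stty only makes sense on a real tty,
# so we check first.
if [ -t 0 ]; then
    stty -ixon
    echo "flow control disabled"
else
    echo "stdin is not a terminal; nothing to do"
fi
```

The change lasts only for the current session; to make it permanent you would put the `stty -ixon` line in your shell startup file (e.g. `~/.bashrc`).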

I know this is elementary for the Sherlock Holmes of the Linux world but it could still be useful to the Watsons.

New or not, you can always use keyboard shortcuts in the Linux terminal to make your life easier.
