
Linux Journal


Linux Journal Ceases Publication: An Awkward Goodbye

Thursday 8th of August 2019 01:55:28 AM
by Kyle Rankin

IMPORTANT NOTICE FROM LINUX JOURNAL, LLC: On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off, and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen. –Linux Journal, LLC


Final Letter from the Editor: The Awkward Goodbye

by Kyle Rankin

Have you ever met up with a friend at a restaurant for dinner, then after dinner you both step out to the street and say a proper goodbye, only to find out, as you leave, that you're both walking in the same direction? So now you get to walk together awkwardly until the true point where you part, and then you have a second, much more awkward goodbye.

That's basically this post. 

So, it was almost two years ago that I first said goodbye to Linux Journal and the Linux Journal community in my post "So Long and Thanks for All the Bash". That post was a proper goodbye. For starters, it had a catchy title with a pun. The post itself had all the elements of a proper goodbye: part retrospective, part "Thank You" to the Linux Journal team and the community, and OK, yes, it was also part rant. I recommend you read (or re-read) that post, because it captures my feelings about losing Linux Journal way better than I can muster here on our awkward second goodbye. 

Of course, not long after I wrote that post, we found out that Linux Journal wasn't dead after all! We all actually had more time together and got to work fixing everything that had caused us to die in the first place. A lot of our analysis of what went wrong and what we intended to change was captured in my article "What Linux Journal's Resurrection Taught Me about the FOSS Community" that we posted in our 25th anniversary issue.

Go to Full Article

Oops! Debugging Kernel Panics

Wednesday 7th of August 2019 11:30:00 PM
by Petros Koutoupis

A look into what causes kernel panics and some utilities to help gain more information.

Working in a Linux environment, how often have you seen a kernel panic? When it happens, your system is left in a crippled state until you reboot it completely. And, even after you get your system back into a functional state, you're still left with the question: why? You may have no idea what happened or why it happened. Those questions can be answered though, and the following guide will help you root out the cause of some of the conditions that led to the original crash.

Figure 1. A Typical Kernel Panic

Let's start by looking at a set of utilities known as kexec and kdump. kexec allows you to boot into another kernel from an existing (and running) kernel, and kdump is a kexec-based crash-dumping mechanism for Linux.

Installing the Required Packages

First and foremost, your kernel should have the following components statically built into its image:


You can find this in /boot/config-`uname -r`.
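To verify, you can grep the config file directly. The option names below are a typical set for kdump support (an assumption on my part; the exact names can vary by kernel version, so treat this as a sketch rather than an authoritative list):

```shell
# Check a typical set of kdump-related options in the running kernel's
# config (the option names here are assumptions; consult your distro docs).
config="/boot/config-$(uname -r)"
for opt in CONFIG_KEXEC CONFIG_CRASH_DUMP CONFIG_PROC_VMCORE; do
    if grep -qs "^${opt}=y" "$config"; then
        echo "$opt: enabled"
    else
        echo "$opt: missing or not built in"
    fi
done
```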

Make sure that your operating system is up to date with the latest-and-greatest package versions:

$ sudo apt update && sudo apt upgrade

Install the following packages (I'm currently using Debian, but the same should and will apply to Ubuntu):

$ sudo apt install gcc make binutils linux-headers-`uname -r` kdump-tools crash `uname -r`-dbg

Note: Package names may vary across distributions.

During the installation, you will be prompted with questions to enable kexec to handle reboots (answer whatever you'd like, but I answered "no"; see Figure 2).

Figure 2. kexec Configuration Menu

And to enable kdump to run and load at system boot, answer "yes" (Figure 3).

Figure 3. kdump Configuration Menu

Configuring kdump

Open the /etc/default/kdump-tools file, and at the very top, you should see the following:
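On Debian, the top of that file holds the master switch that the boot-time question toggles. As a sketch (the surrounding comments and variables vary by release):

```shell
# Top of /etc/default/kdump-tools (sketch): kdump loads at boot
# when this is set to 1.
USE_KDUMP=1
```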

Go to Full Article

Loadsharers: Funding the Load-Bearing Internet Person

Wednesday 7th of August 2019 11:00:00 PM
by Eric S. Raymond

The internet has a sustainability problem. Many of its critical services depend on the dedication of unpaid volunteers, because they can't be monetized and thus don't have any revenue stream for the maintainers to live on. I'm talking about services like DNS, time synchronization, crypto libraries—software without which the net and the browser you're using couldn't function.

These volunteer maintainers are the Load-Bearing Internet People (LBIP). Underfunding them is a problem, because underfunded critical services tend to have gaps and holes that could have been fixed if there were more full-time attention on them. As our civilization becomes increasingly dependent on this software infrastructure, that attention shortfall could lead to disastrous outages.

I've been worrying about this problem since 2012, when I watched a hacker I know wreck his health while working on a critical infrastructure problem nobody else understood at the time. Billions of dollars in e-commerce hung on getting the particular software problem he had spotted solved, but because it masqueraded as network undercapacity, he had a lot of trouble getting even technically-savvy people to understand where the problem was. He solved it, but unable to afford medical insurance and literally living in a tent, he eventually went blind in one eye and is now prone to depressive spells.

More recently, I damaged my ankle and discovered that although there is such a thing as minor surgery on the medical level, there is no such thing as "minor surgery" on the financial level. I was looking—still am looking—at a serious prospect of either having my life savings wiped out or having to leave all 52 of the open-source projects I'm responsible for in the lurch as I scrambled for a full-time job. Projects at risk include the likes of GIFLIB, GPSD and NTPsec.

That refocused my mind on the LBIP problem. There aren't many Load-Bearing Internet People—probably on the close order of 1,000 worldwide—but they're a systemic vulnerability made inevitable by the existence of common software and internet services that can't be metered. And, burning them out is a serious problem. Even under the most cold-blooded assessment, civilization needs the mean service life of an LBIP to be long enough to train and acculturate a replacement.

(If that made you wonder—yes, in fact, I am training an apprentice. Different problem for a different article.)

Alas, traditional centralized funding models have failed the LBIPs. There are a few reasons for this:

Go to Full Article

Documenting Proper Git Usage

Wednesday 7th of August 2019 10:30:00 PM
by Zack Brown

Jonathan Corbet wrote a document for inclusion in the kernel tree, describing best practices for merging and rebasing git-based kernel repositories. As he put it, it represented workflows that were actually in current use, and it was a living document that hopefully would be added to and corrected over time.

The inspiration for the document came from noticing how frequently Linus Torvalds was unhappy with how other people—typically subsystem maintainers—handled their git trees.

It's interesting to note that before Linus wrote the git tool, branching and merging were virtually unheard of in the Open Source world. In CVS, it was a nightmare horror of leechcraft and broken magic. Other tools were not much better. One of the primary motivations behind git—aside from blazing speed—was, in fact, to make branching and merging trivial operations—and so they have become.

One of the offshoots of branching and merging, Jonathan wrote, was rebasing—altering the patch history of a local repository. The benefits of rebasing are fantastic. They can make a repository history cleaner and clearer, which in turn can make it easier to track down the patches that introduced a given bug. So rebasing has a direct value to the development process.

On the other hand, used poorly, rebasing can make a big mess. For example, suppose you rebase a repository that has already been merged with another, and then merge them again—insane soul death.

So Jonathan explained some good rules of thumb. Never rebase a repository that's already been shared. Never rebase patches that come from someone else's repository. And in general, simply never rebase—unless there's a genuine reason.

Since rebasing changes the history of patches, it relies on a new "base" version, from which the later patches diverge. Jonathan recommended choosing a base version that was generally thought to be more stable rather than less—a new version or a release candidate, for example, rather than just an arbitrary patch during regular development.
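As a sketch of that advice (the repository, branch and tag names here are invented for illustration), rebasing a private branch onto a release-candidate tag looks like this in a scratch repository:

```shell
# Demonstrate in a throwaway repo: private work gets replayed on top of
# a well-known tag rather than an arbitrary mainline commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.name demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)

echo one > file.txt
git add file.txt
git commit -qm "initial commit"

git checkout -qb feature            # private branch starts here
echo feat > feature.txt
git add feature.txt
git commit -qm "private feature work"

git checkout -q "$main"             # meanwhile, mainline moves on
echo two >> file.txt
git commit -aqm "mainline development"
git tag v5.3-rc1                    # a stable, well-known base (hypothetical tag)

git checkout -q feature
git rebase -q v5.3-rc1              # replay the private work onto the tag
git log --oneline
```

After the rebase, the private commits sit on top of the tagged release candidate instead of whatever commit the branch happened to start from.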

Jonathan also recommended, for any rebase, treating all the rebased patches as new code, and testing them thoroughly, even if they had been tested already prior to the rebase.

"If", he said, "rebasing is limited to private trees, commits are based on a well-known starting point, and they are well tested, the potential for trouble is low."

Moving on to merging, Jonathan pointed out that nearly 9% of all kernel commits were merges. There were more than 1,000 merge requests in the 5.1 development cycle alone.

Go to Full Article

Understanding Python's asyncio

Wednesday 7th of August 2019 10:00:00 PM
by Reuven M. Lerner

How to get started using Python's asyncio.

Earlier this year, I attended PyCon, the international Python conference. One topic, presented at numerous talks and discussed informally in the hallway, was the state of threading in Python—which is, in a nutshell, neither ideal nor as terrible as some critics would argue.

A related topic that came up repeatedly was that of "asyncio", a relatively new approach to concurrency in Python. Not only were there formal presentations and informal discussions about asyncio, but a number of people also asked me about courses on the subject.

I must admit, I was a bit surprised by all the interest. After all, asyncio isn't a new addition to Python; it's been around for a few years. And, it doesn't solve all of the problems associated with threads. Plus, it can be confusing for many people to get started with it.

And yet, there's no denying that after a number of years when people ignored asyncio, it's starting to gain steam. I'm sure part of the reason is that asyncio has matured and improved over time, thanks in no small part to much dedicated work by countless developers. But, it's also because asyncio is an increasingly good and useful choice for certain types of tasks—particularly tasks that work across networks.

So with this article, I'm kicking off a series on asyncio—what it is, how to use it, where it's appropriate, and how you can and should (and also can't and shouldn't) incorporate it into your own work.

What Is asyncio?

Everyone's grown used to computers being able to do more than one thing at a time—well, sort of. Although it might seem as though computers are doing more than one thing at a time, they're actually switching, very quickly, across different tasks. For example, when you ssh in to a Linux server, it might seem as though it's only executing your commands. But in actuality, you're getting a small "time slice" from the CPU, with the rest going to other tasks on the computer, such as the systems that handle networking, security and various protocols. Indeed, if you're using SSH to connect to such a server, some of those time slices are being used by sshd to handle your connection and even allow you to issue commands.

All of this is done, on modern operating systems, via "pre-emptive multitasking". In other words, running programs aren't given a choice of when they will give up control of the CPU. Rather, they're forced to give up control and then resume a little while later. Each process running on a computer is handled this way. Each process can, in turn, use threads, sub-processes that subdivide the time slice given to their parent process.
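To make the contrast concrete, here is a minimal asyncio example (the names are purely illustrative). Both tasks run concurrently on a single thread, and each one gives up control voluntarily at its await point rather than being pre-empted:

```python
import asyncio

async def fetch(name, delay):
    # await is where this task voluntarily yields control to the event loop
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather schedules both coroutines concurrently on one thread;
    # results come back in argument order
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(main()))  # prints ['a done', 'b done']
```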

Go to Full Article

RV Offsite Backup Update

Wednesday 7th of August 2019 09:15:00 PM
by Kyle Rankin

Having an offsite backup in your RV is great, and after a year of use, I've discovered some ways to make it even better.

Last year I wrote a feature-length article on the data backup system I set up for my RV (see Kyle's "DIY RV Offsite Backup and Media Server" from the June 2018 issue of LJ). If you haven't read that article yet, I recommend checking it out first so you can get details on the system. In summary, I set up a Raspberry Pi media center PC connected to a 12V television in the RV. I connected an 8TB hard drive to that system and synchronized all of my files and media so it acted as a kind of off-site backup. Finally, I set up a script that would attempt to sync over all of those files from my NAS whenever it detected that the RV was on the local network. So here, I provide an update on how that system is working and a few tweaks I've made to it since.
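That detection-and-sync logic boils down to "if the RV's Pi is reachable, rsync to it". A hypothetical minimal version (the hostname and paths here are invented, not the ones from the original article) might look like:

```shell
# Sync the NAS media share to the RV's Pi whenever it answers a ping.
# Hostname and paths are invented for illustration.
RV_HOST="rv-pi.local"
SRC="/mnt/nas/media/"
DEST="$RV_HOST:/mnt/usbdrive/media/"
if ping -c1 -W2 "$RV_HOST" >/dev/null 2>&1; then
    rsync -a --delete "$SRC" "$DEST"
else
    echo "RV not on the local network; skipping sync"
fi
```

Run from cron on the NAS, a script like this makes the sync happen automatically whenever the RV is parked at home.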

What Works

Overall, the media center has worked well. It's been great to have all of my media with me when I'm on a road trip, and my son appreciates having access to his favorite cartoons. Because the interface is identical to the media center we have at home, there's no learning curve—everything just works. Since the Raspberry Pi is powered off the TV in the RV, you just need to turn on the TV and everything fires up.

It's also been great knowing that I have a good backup of all of my files nearby. Should anything happen to my house or my main NAS, I know that I can just get backups from the RV. Having peace of mind about your important files is valuable, and it's nice knowing that in the worst case, if my NAS broke, I could just disconnect my USB drive from the RV, connect it to a local system, and be back up and running.

The WiFi booster I set up on the RV also has worked pretty well to increase the range of the Raspberry Pi (and the laptops inside the RV) when on the road. When we get to a campsite that happens to offer WiFi, I just reset the booster and set up a new access point that amplifies the campsite signal for inside the RV. On one trip, I even took it out of the RV and inside a hotel room to boost the weak signal.

Go to Full Article

Another Episode of "Seems Perfectly Feasible and Then Dies"--Script to Simplify the Process of Changing System Call Tables

Wednesday 7th of August 2019 08:45:00 PM
by Zack Brown

David Howells put in quite a bit of work on a script, ./scripts/, to simplify the entire process of changing the system call tables. With this script, it was a simple matter to add, remove, rename or renumber any system call you liked. The script also would resolve git conflicts, in the event that two repositories renumbered the system calls in conflicting ways.

Why did David need to write this patch? Why weren't system calls already fairly easy to manage? When you add a new system call, you add it to a master list, and then you add it to the system call "tables", which is where the running kernel looks up which kernel function corresponds to which system call number. Kernel developers need to make sure system calls are represented in all relevant spots in the source tree. Renaming, renumbering and making other changes to system calls involves a lot of fiddly little details. David's script simply would do everything right—end of story no problemo hasta la vista.

Arnd Bergmann remarked, "Ah, fun. You had already threatened to add that script in the past. The implementation of course looks fine, I was just hoping we could instead eliminate the need for it first." But, bowing to necessity, Arnd offered some technical suggestions for improvements to the patch.

However, Linus Torvalds swooped in at this particular moment, saying:

Ugh, I hate it.

I'm sure the script is all kinds of clever and useful, but I really think the solution is not this kind of helper script, but simply that we should work at not having each architecture add new system calls individually in the first place.

IOW, we should look at having just one unified table for new system call numbers, and aim for the per-architecture ones to be for "legacy numbering".

Maybe that won't happen, but in the _hope_ that it happens, I really would prefer that people not work at making scripts for the current nasty situation.

And the portcullis came crashing down.

It's interesting that, instead of accepting this relatively obvious improvement to the existing situation, Linus would rather leave it broken and ugly, so that someone someday somewhere might be motivated to do the harder-yet-better fix. And, it's all the more interesting given how extreme the current problem is. Without actually being broken, the situation requires developers to put in a tremendous amount of care and effort into something that David's script could make trivial and easy. Even for such an obviously "good" patch, Linus gives thought to the policy and cultural implications, and the future motivations of other people working in that region of code.

Note: if you're mentioned above and want to post a response above the comment section, send a message with your response text to

Go to Full Article

Experts Attempt to Explain DevOps--and Almost Succeed

Wednesday 7th of August 2019 08:00:00 PM
by Bryan Lunduke

What is DevOps? How does it relate to other ideas and methodologies within software development? Linux Journal Deputy Editor and longtime software developer Bryan Lunduke isn't entirely sure, so he asks some experts to help him better understand the DevOps phenomenon.

The word DevOps confuses me.

I'm not even sure "confuses me" quite does justice to the pain I experience—right in the center of my brain—every time the word is uttered.

It's not that I dislike DevOps; it's that I genuinely don't understand what in tarnation it actually is. Let me demonstrate. What follows is the definition of DevOps on Wikipedia as of a few moments ago:

DevOps is a set of software development practices that combine software development (Dev) and information technology operations (Ops) to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

I'm pretty sure I got three aneurysms just by copying and pasting that sentence, and I still have no clue what DevOps really is. Perhaps I should back up and give a little context on where I'm coming from.

My professional career began in the 1990s when I got my first job as a Software Test Engineer (the people that find bugs in software, hopefully before the software ships, and tell the programmers about them). During the years that followed, my title, and responsibilities, gradually evolved as I worked my way through as many software-industry job titles as I could:

  • Automation Engineer: people that automate testing software.
  • Software Development Engineer in Test: people that make tools for the testers to use.
  • Software Development Engineer: aka "Coder", aka "Programmer".
  • Dev Lead: "Hey, you're a good programmer! You should also manage a few other programmers but still code just as much as you did before, but, don't worry, we won't give you much of a raise! It'll be great!"
  • Dev Manager: like a Dev Lead, with less programming, more managing.
  • Director of Engineering: the manager of the managers of the programmers.
  • Vice President of Technology/Engineering: aka "The big boss nerd man who gets to make decisions and gets in trouble first when deadlines are missed."

During my various times with fancy-pants titles, I managed teams that included:

Go to Full Article

DNA Geometry with cadnano

Wednesday 7th of August 2019 07:30:00 PM
by Joey Bernard

This article introduces a tool you can use to work on three-dimensional DNA origami. The package is called cadnano, and it's currently being developed at the Wyss Institute. With this package, you'll be able to construct and manipulate the three-dimensional representations of DNA structures, as well as generate publication-quality graphics of your work.

Because this software is research-based, you won't likely find it in the package repository for your favourite distribution, in which case you'll need to install it from the GitHub repository.

Since cadnano is a Python program, written to use the Qt framework, you'll need to install some packages first. For example, in Debian-based distributions, you'll want to run the following commands:

sudo apt-get install python3 python3-pip

I found that installation was a bit tricky, so I created a virtual Python environment to manage module installations.
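Creating that environment is straightforward (the directory name here is arbitrary):

```shell
# Create and activate an isolated Python environment for the cadnano build.
python3 -m venv cadnano-env
. cadnano-env/bin/activate
# Subsequent pip3/python commands now run inside the environment.
```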

Once you're in your activated virtualenv, install the required Python modules with the command:

pip3 install pythreejs termcolor pytz pandas pyqt5 sip

After those dependencies are installed, grab the source code with the command:

git clone

This will grab the Qt5 version. The Qt4 version is in the repository

Changing directory into the source directory, you can build and install cadnano with:

python setup.py install

Now your cadnano should be available within the virtualenv.

You can start cadnano simply by executing the cadnano command from a terminal window. You'll see an essentially blank workspace, made up of several empty view panes and an empty inspector pane on the far right-hand side.

Figure 1. When you first start cadnano, you get a completely blank work space.

In order to walk through a few of the functions available in cadnano, let's create a six-strand nanotube. The first step is to create a background that you can use to build upon. At the top of the main window, you'll find three buttons in the toolbar that will let you create a "Freeform", "Honeycomb" or "Square" framework. For this example, click the honeycomb button.

Figure 2. Start your construction with one of the available geometric frameworks.

Go to Full Article

Running GNOME in a Container

Wednesday 7th of August 2019 07:00:00 PM
by Adam Verslype

Containerizing the GUI separates your work and play.

Virtualization has always been a rich man's game, and more frugal enthusiasts—unable to afford fancy server-class components—often struggle to keep up. Linux provides free high-quality hypervisors, but when you start to throw real workloads at the host, its resources become saturated quickly. No amount of spare RAM shoved into an old Dell desktop is going to remedy this situation. If a properly decked-out host is out of your reach, you might want to consider containers instead.

Instead of virtualizing an entire computer, containers allow parts of the Linux kernel to be portioned into several pieces. This occurs without the overhead of emulating hardware or running several identical kernels. A full GUI environment, such as GNOME Shell, can be launched inside a container with a little gumption.

You can accomplish this through namespaces, a feature built into the Linux kernel. An in-depth look at this feature is beyond the scope of this article, but a brief example sheds light on how these features can create containers. Each kind of namespace segments a different part of the kernel. The PID namespace, for example, prevents processes inside the namespace from seeing other processes running in the kernel. As a result, those processes believe that they are the only ones running on the computer. Each namespace does the same thing for other areas of the kernel as well. The mount namespace isolates the filesystem of the processes inside of it. The network namespace provides a unique network stack to processes running inside of it. The IPC, user, UTS and cgroup namespaces do the same for those areas of the kernel as well. When the seven namespaces are combined, the result is a container: an environment isolated enough to believe it is a freestanding Linux system.

Container frameworks will abstract the minutia of configuring namespaces away from the user, but each framework has a different emphasis. Docker is the most popular and is designed to run multiple copies of identical containers at scale. LXC/LXD is meant to create containers easily that mimic particular Linux distributions. In fact, earlier versions of LXC included a collection of scripts that created the filesystems of popular distributions. A third option is libvirt's lxc driver. Contrary to how it may sound, libvirt-lxc does not use LXC/LXD at all. Instead, the libvirt-lxc driver manipulates kernel namespaces directly. libvirt-lxc integrates into other tools within the libvirt suite as well, so the configuration of libvirt-lxc containers resembles those of virtual machines running in other libvirt drivers instead of a native LXC/LXD container. It is easy to learn as a result, even if the branding is confusing.

Go to Full Article

Digging Through the DevOps Arsenal: Introducing Ansible

Wednesday 7th of August 2019 06:15:00 PM
by Petros Koutoupis

If you need to deploy hundreds of server or client nodes in parallel, maybe on-premises or in the cloud, and you need to configure each and every single one of them, what do you do? How do you do it? Where do you even begin? Many configuration management frameworks exist to address most, if not all, of these questions and concerns. Ansible is one such framework.

You may have heard of Ansible already, but for those who haven't or don't know what it is, Ansible is a configuration management and provisioning tool. (I'll get to exactly what that means shortly.) It's very similar to other tools, such as Puppet, Chef and Salt.

Why use Ansible? Well, because it's simple to master. I don't mean that the others are not simple, but Ansible makes it easy for individuals to pick up quickly. That's because Ansible uses YAML as its base to provision, configure and deploy. And because of this approach, tasks are executed in a specific order. During execution, if you trip over a syntax error, it will fail once you hit it, potentially making it easier to debug.

Now, what's YAML? YAML (or YAML Ain't Markup Language) is a human-readable data-serialization language mostly used for configuration files. You know how JSON is easier to implement and use than XML? Well, YAML takes an even simpler approach than JSON. Here's an example of a typical YAML structure containing a list:

data:
  - var1:
      a: 1
      b: 2
  - var2:
      a: 1
      b: 2
      c: 3

Now, let's swing back to Ansible. Ansible is an open-source automation platform freely available for Linux, macOS and BSD. Again, it's very simple to set up and use, without compromising any power. Ansible is designed to aid you in configuration management, application deployment and the automation of assorted tasks. It works great in the realm of IT orchestration, in which you need to run specific tasks in sequence and create a chain of events that must happen on multiple and different servers or devices.

Here's a good example: say you have a group of web servers behind a load balancer. You need to upgrade those web servers, but you also need to ensure that all but one server remains online for the upgrade process. Ansible can handle such a complex task.
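Ansible expresses that kind of rolling upgrade declaratively. A sketch of such a playbook (the group, package and task names here are invented for illustration):

```yaml
---
- hosts: webservers
  serial: 1                 # take servers through the upgrade one at a time
  tasks:
    - name: Upgrade the web server package
      apt:
        name: nginx
        state: latest
    - name: Restart the web server
      service:
        name: nginx
        state: restarted
```

Because serial: 1 limits the play to one host at a time, the rest of the pool stays online behind the load balancer while each node upgrades.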

Ansible uses SSH to manage remote systems across the network, and those systems are required to have a local installation of not only SSH but also Python. That means you don't have to install and configure a client-server environment for Ansible.

Install Ansible

Although you can build the package from source (either from the public Git repository or from a tarball), most modern Linux distributions will have binary packages available in their local package repositories. You need to have Ansible installed on at least one machine (your control node). Remember, all that's required on the remote machines are SSH and Python.

Go to Full Article

Build a Versatile OpenStack Lab with Kolla

Wednesday 7th of August 2019 05:30:00 PM
by John S. Tonello

Hone your OpenStack skills with a full deployment in a single virtual machine.

It's hard to go anywhere these days without hearing something about the urgent need to deploy on-premises cloud environments that are agile, flexible and don't cost an arm and a leg to build and maintain, but getting your hands on a real OpenStack cluster—the de facto standard—can be downright impossible.

Enter Kolla-Ansible, an official OpenStack project that allows you to deploy a complete cluster successfully—including Keystone, Cinder, Neutron, Nova, Heat and Horizon—in Docker containers on a single, beefy virtual machine. It's actually just one of an emerging group of official OpenStack projects that containerize the OpenStack control plane so users can deploy complete systems in containers and Kubernetes.

To date, for those who don't happen to have a bunch of extra servers loaded with RAM and CPU cores handy, DevStack has served as the go-to OpenStack lab environment, but it comes with some limitations. Key among those is your inability to reboot a DevStack system effectively. In fact, rebooting generally bricks your instances and renders the rest of the stack largely unusable. DevStack also limits your ability to experiment beyond core OpenStack modules, where Kolla lets you build systems that can mimic full production capabilities, make changes and pick up where you left off after a shutdown.

In this article, I explain how to deploy Kolla, starting from the initial configuration of your laptop or workstation, to configuration of your cluster, to putting your OpenStack cluster into service.

Why OpenStack?

As organizations of all shapes and sizes look to speed development and deployment of mission-critical applications, many turn to public clouds like Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine, RackSpace and many others. All make it easy to build the systems you and your organization need quickly. Still, these public cloud services come at a price—sometimes a steep price you only learn about at the end of a billing cycle. Anyone in your organization with a credit card can spin up servers, even ones containing proprietary data and inadequate security safeguards.

OpenStack, a community-driven open-source project with thousands of developers worldwide, offers a robust, enterprise-worthy alternative. It gives you the flexibility of public clouds in your own data center. In many ways, it's also easier to use than public clouds, particularly when OpenStack administrators properly set up networks, carve out storage and compute resources, and provide self-service capabilities to users. It also has tons of add-on capabilities to suit almost any use case you can imagine. No wonder 75% of private clouds are built using OpenStack.

Go to Full Article

The Best Command-Line-Only Video Games

Wednesday 7th of August 2019 05:00:00 PM
by Bryan Lunduke

A rundown of the biggest, most expansive and impressive games that you can run entirely in your Linux shell.

The original UNIX operating system was created, in large part, to facilitate porting a video game to a different computer. And, without UNIX, we wouldn't have Linux, which means we owe the very existence of Linux to games.

It's crazy, but it's true.

With that in mind, and in celebration of all things shell/terminal/command line, I want to introduce some of the best video games that run entirely in a shell—no graphics, just ASCII jumping around the screen.

And, when I say "best", I mean the very best—the terminal games that really stand out above the rest.

Although these games may not be considered to have "modern fancy-pants graphics" (also known as MFPG—it's a technical term), they are fantastically fun. Some are big, sprawling adventures, and others are smaller time-wasters. Either way, none of them are terribly large (in terms of drive storage space), and they deserve a place on any Linux rig.


AsciiPatrol

AsciiPatrol is, in my opinion, one of the most impressive terminal games out there. A clone of the classic Moon Patrol, which is a ton of fun already, this terminal-based game provides surprisingly good visuals for a game using only ASCII characters for artwork.

It has color, parallax scrolling backgrounds, animated enemies, sound effects—I mean, even the opening screen is impressive looking in a terminal.

Figure 1. Shooting Aliens and Dodging Potholes in AsciiPatrol

For a quick round of arcade-style fun, this one really can't be beat.

Cataclysm: Dark Days Ahead

Cataclysm: Dark Days Ahead is absolutely huge in scale. Think of it as a top-down, rogue-like, survival game with zombies, monsters and real end-of-the-world-type stuff.

The game features a crafting system, bodily injuries (such as a broken arm), bionic implants, farming, building of structures and vehicles, a huge map (with destructible terrain)—this game is massive. The visuals may be incredibly simple, but the gameplay is deep and open-ended.

Figure 2. Running from Zombies in Cataclysm


SSHTron

The Tron-inspired light-cycle games (and non-Tron-themed variants, such as Snake) have been a staple of gaming since the 1980s. And, SSHTron provides a four-player version right in your terminal.

Simply open your terminal and type in the following:
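Per the SSHTron project's README, the public server is reachable with a single ssh invocation (the host name below is taken from that README, so treat it as an assumption; the server may not always be online):

```shell
# Join a game on the public SSHTron server. No client beyond OpenSSH
# is needed; the whole game is rendered over the SSH session.
ssh sshtron.zachlatta.com
```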

Go to Full Article

Writing GitHub Web Hooks with Bash

Wednesday 7th of August 2019 04:30:00 PM
by Andy Carlson

Bring your GitHub repository to the next level of functionality.

In the year since Microsoft acquired GitHub, I've been hosting my Git repositories on a private server. Although I relished the opportunity and challenge of setting it all up, and the end product works well for my needs, doing this was not without its sacrifices. GitHub offers a clean interface for configuring many Git features that otherwise would require more time and effort than simply clicking a button. One of the features made easier to implement by GitHub that I was most fond of was web hooks. A web hook is executed when a specific event occurs within the GitHub application. Upon execution, data is sent via an HTTP POST to a specified URL.

This article walks through how to set up a custom web hook, including configuring a web server, processing the POST data from GitHub and creating a few basic web hooks using Bash.

Preparing Apache

For the purpose of this project, let's use the Apache web server to host the web hook scripts. The module that Apache uses to run server-side shell scripts is mod_cgi, which is available on major Linux distributions.

Once the module is enabled, it's time to configure the directory permissions and virtual host within Apache. Use the /opt/hooks directory to host the web hooks, and give ownership of this directory to the user that runs Apache. To determine the user running an Apache instance, run the following command (provided Apache is currently running):

ps -e -o %U%c | grep 'apache2\|httpd'

This command returns two-column output containing the name of the user running Apache and the name of the Apache binary (typically either httpd or apache2). Grant directory ownership with the following chown command (where USER is the name of the user shown in the previous ps command):

chown -R USER /opt/hooks

Within this directory, two sub-directories will be created: html and cgi-bin. The html folder will be used as the web root for the virtual host, and cgi-bin will contain all shell scripts for the virtual host. Be aware that as new sub-directories and files are created under /opt/hooks, you may need to rerun the above chown to ensure proper access to files and sub-directories.
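As a preview of what will eventually live in cgi-bin, here is a minimal, hypothetical sketch of a handler that reads GitHub's JSON POST body from stdin and extracts the pushed ref (the function names and the sed-based parsing are illustrative, not the article's own code):

```shell
#!/bin/bash
# Hypothetical minimal CGI handler for a GitHub push web hook.
# mod_cgi passes the request body on stdin and sets CONTENT_LENGTH.

# Read exactly the POSTed JSON body from stdin.
read_payload() {
  head -c "${CONTENT_LENGTH:-0}"
}

# Pull the pushed ref (e.g. refs/heads/master) out of the JSON with sed;
# a real script might use jq instead for robust parsing.
extract_ref() {
  sed -n 's/.*"ref"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -n1
}

main() {
  ref=$(read_payload | extract_ref)
  # CGI output: headers first, then a blank line, then the body.
  printf 'Content-Type: text/plain\r\n\r\n'
  printf 'received push to %s\n' "${ref:-unknown}"
}

# Run main only when executed directly, so the functions can be sourced.
[ "${BASH_SOURCE[0]}" = "$0" ] && main
```

A production hook would also check the X-GitHub-Event header and verify the X-Hub-Signature HMAC before acting on the payload.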

Here's the configuration for the virtual host within Apache:
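What follows is a minimal sketch of such a virtual host, assuming the /opt/hooks layout above (the ServerName and the .sh handler mapping are illustrative placeholders, not prescriptive values):

```apache
# Minimal mod_cgi virtual host sketch for the /opt/hooks layout.
<VirtualHost *:80>
    ServerName hooks.example.com
    DocumentRoot /opt/hooks/html

    # Map /cgi-bin/ URLs to the hook scripts and let Apache execute them.
    ScriptAlias /cgi-bin/ /opt/hooks/cgi-bin/

    <Directory "/opt/hooks/cgi-bin">
        Options +ExecCGI
        AddHandler cgi-script .sh
        Require all granted
    </Directory>
</VirtualHost>
```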

Go to Full Article

My Favorite Infrastructure

Wednesday 7th of August 2019 03:45:00 PM
by Kyle Rankin

Take a tour through the best infrastructure I ever built with stops in architecture, disaster recovery, configuration management, orchestration and security.

Working at a startup has many pros and cons, but one of the main benefits over a traditional established company is that a startup often gives you an opportunity to build a completely new infrastructure from the ground up. When you work on a new project at an established company, you typically have to account for legacy systems and design choices that were made for you, often before you even got to the company. But at a startup, you often are presented with a truly blank slate: no pre-existing infrastructure and no existing design choices to factor in.

Brand-new, from-scratch infrastructure is a particularly appealing prospect if you are at a systems architect level. One of the distinctions between a senior-level systems administrator and architect level is that you have been operating at a senior level long enough that you have managed a number of different high-level projects personally and have seen which approaches work and which approaches don't. When you are at this level, it's very exciting to be able to build a brand-new infrastructure from scratch according to all of the lessons you've learned from past efforts without having to support any legacy infrastructure.

During the past decade, I've worked at a few different startups where I was asked to develop new infrastructure completely from scratch but with high security, uptime and compliance requirements, so there was no pressure to cut corners for speed like you might normally face at a startup. I've not only gotten to realize the joy of designing new infrastructure, I've also been able to do it multiple times. Each time, I've been able to bring along all of the past designs that worked, leaving behind the bits that didn't, and updating all the tools to take advantage of new features. This series of infrastructure designs culminated in what I realize looking back on it is my favorite infrastructure—the gold standard on which I will judge all future attempts.

In this article, I dig into some of the details of my favorite infrastructure. I describe some of the constraints around the design and explore how each part of the infrastructure fits together, why I made the design decisions I did, and how it all worked. I'm not saying that what worked for me will work for you, but hopefully you can take some inspiration from my approach and adapt it for your needs.

Go to Full Article

Tutanota Interviews Tim Verheyden, the Journalist Who Broke the Story on Google Employees Listening to People's Audio Recordings

Wednesday 7th of August 2019 02:30:00 PM
by Matthias Pfau

Google employees listen to you, but the issue of "ghost workers" transcends Google. 

Investigative journalist Tim Verheyden, who broke the story on how Google employees listen to people’s audio recordings, explains in an interview how he got hold of the story, why he is now using the encrypted contact form Secure Connect by Tutanota and why the growing number of "ghost workers" in and around Silicon Valley is becoming a big issue in Tech.

Tutanota: Tim, you have broken a great story on VRT News about how employees of Google subcontractors listen to our conversations when using devices such as Google Home. What was that story about? What was the privacy violation?

Tim Verheyden: Google provides a range of information on privacy—and data gathering. In this particular case, Google says on audio gathering that it can save your audio to learn the sound of your voice, learn how you say phrases and words, and recognize when you say "Ok Google" to improve speech recognition. Google does not speak about the human interaction in the chain of training the AI on speech recognition. For some experts, this is a violation of the new GDPR law.

Tutanota: How did the employee of the Google subcontractor who leaked the story get in touch with you?

Tim: By email, he shared his thoughts on an article we wrote about Alexa (Amazon) after Bloomberg broke the news about humans listening.

Tutanota: Tutanota has recently launched Secure Connect, and you had added this encrypted contact form to your website a few weeks ago. What do you expect from Secure Connect?

Tim: I hope it will encourage people with a story to get in contact. It does not always need to be a whistleblower story. Because of security concerns—and other reasons—people are sometimes reluctant to contact a journalist. I hope Secure Connect will help build trust in relationships with journalists.

Tutanota: More and more journalists are offering Secure Connect so that whistleblowers can drop important information or get in touch with investigative journalists confidentially. Why do you believe a secure communication channel is important?

Go to Full Article

Words, Words, Words--Introducing OpenSearchServer

Wednesday 7th of August 2019 01:46:43 PM
by Marcel Gagné

How to create your own search engine combined with a crawler that will index all sorts of documents.

In William Shakespeare's Hamlet, one of my favorite plays, Prince Hamlet is approached by Polonius, chief counselor to Claudius, King of Denmark, who happens to be Hamlet's stepfather, and uncle, and the new husband of his mother, Queen Gertrude, whose recently deceased last husband was the previous King of Denmark. That would be Hamlet's biological father for those who might be having trouble following along. He was King Hamlet. Polonius, I probably should mention, is also the father of Hamlet's sweetheart, Ophelia. Despite this hilarious sounding setup, Hamlet is most definitely not a comedy. (Note: if you need a refresher, you can read Hamlet here.)

For reasons I won't go into here, Hamlet is doing a great job of trying to convince people that he's completely lost it and is pretending to be reading a book when Polonius approaches and asks, "What do you read, my lord?"

Hamlet replies by saying, "'Words, words, words." In other words, ahem, nothing of any importance, you annoying little man.

Shakespeare wrote a lot of words. In fact, writers, businesses and organizations of any size tend to amass a lot of words in the form of countless documents, many of which seem to be of great importance at the time they are written, only to be stored away on some lonely corporate server. There, locked in their digital prisons, these many texts await the day when somebody will seek out their wisdom. Trouble is, there are so many of them, in many different formats, often with titles that tell you nothing about the content inside. What you need is a search engine.

Google is a pretty awesome search engine, but it's not for everybody, especially if the documents in question aren't meant for consumption by the public at large. For those times, you need your own search engine, combined with a crawler that will index all sorts of documents, from OpenDocument format, to old Microsoft Docs, to PDFs and even plain text. That's where OpenSearchServer comes into play. OpenSearchServer is, as the name implies, an open-source project designed to perform the function of crawling through and indexing large collections of documents, such as you would find on a website.

I'm going to show you how to go about getting this documentation site set up from scratch so that you can see all the steps. You may, of course, already have a web server up and running, and that's fine. I've gone ahead and spun up a Linode server running Ubuntu 18.04 LTS. This is a great way to get a server up and running quickly without spending a lot of money if you don't want to, and if you've never done this, it's also kind of fun.

Go to Full Article

Open Source Is Good, but How Can It Do Good?

Monday 5th of August 2019 11:00:00 AM
by Glyn Moody

Open-source coders: we know you are good—now do good.

The ethical use of computers has been at the heart of free software from the beginning. Here's what Richard Stallman told me when I interviewed him in 1999 for my book Rebel Code:

The free software movement is basically a movement for freedom. It's based on values that are not purely material and practical. It's based on the idea that freedom is a benefit in itself. And that being allowed to be part of a community is a benefit in itself, having neighbors who can help you, who are free to help you—they are not told that they are pirates if they help you—is a benefit in itself, and that that's even more important than how powerful and reliable your software is.

The Open Source world may not be so explicit about the underlying ethical aspect, but most coders probably would hope that their programming makes the world a better place. Now that the core technical challenge of how to write good, world-beating open-source code largely has been met, there's another, trickier challenge: how to write open-source code that does good.

One obvious way is to create software that boosts good causes directly. A recent article discussed eight projects that are working in the area of the environment. Helping to tackle the climate crisis and other environmental challenges with free software is an obvious way to make the world better in a literal sense, and on a massive scale. Particularly notable is Greenpeace's Planet 4—not just open-source software, but an entire platform for doing good. And external coders are welcome:

Co-develop Planet 4!

Planet 4 is 100% open source. If you would like to get involved and show us what you've got, you're very welcome to join us.

Every coder can contribute to the success of P4 by joining forces to code features, review plugins or special functionalities. The help of Greenpeace offices with extra capacity and of the open source community is most welcome!

This is a great model for doing good with open source, by helping established groups build powerful codebases that have an impact on a global scale. In addition, it creates communities of like-minded free software programmers interested in applying their skills to that end. The Greenpeace approach to developing its new platform, usefully mapped out on the site, provides a template for other organizations that want to change the world with the help of ethical coders.

Go to Full Article

Reality 2.0 Episode 24: A Chat About Redis Labs (Podcast Transcript)

Friday 2nd of August 2019 03:40:07 PM
by Katherine Druckman

Doc Searls and Katherine Druckman talk to Yiftach Shoolman of Redis Labs about Redis, Open Source licenses, company culture and more.

Listen to the podcast here.

Katherine Druckman: Hey, Linux Journal readers, I am Katherine Druckman, joining you again for our awesome, cool podcast. As always, joining us is Doc Searls, our editor-in-chief. Our special guest this time is Yiftach Shoolman of Redis Labs. He is the CTO and co-founder, and he was kind enough to join us. We’ve talked a bit, in preparation for the podcast, about Redis Labs, but I wondered if you could just give us sort of an overview for the tiny fraction of the people listening that don’t know all about Redis Labs and Redis. If you could just give us a little brief intro, that’d be great. 


Yiftach Shoolman: Thank you very much for hosting me, first. Redis is an extremely popular in-memory data structure database that's used by many people as just a caching system, but many of them have shifted from a simple cache to a real database, even in the open source world. Just in terms of numbers, on Docker Hub alone, Redis has been launched almost 1.8 billion times, something like five million every day, so it's extremely popular. It's used everywhere. Redis Labs is the company behind the open source. When I say "behind the open source," we sponsor, I would say, 99% of all the open source activities, if not 100%. We also have an enterprise product, which is called Redis Enterprise.

It is available as a fully managed cloud service on all the public clouds, as well as software that you can download and install anywhere. This is our story in general. The way we split between open source and commercial, which is today very tricky, is that we keep the Redis core open source under BSD, by the way. On top of that, we added what we call enterprise layers that allow Redis to be deployed in an enterprise environment in the most scalable and highly available way. We have all the goodies that you need, including active-active, including a data persistence layer, etc., all the boring stuff that the enterprise needs, in addition to a lot of security features. In addition to that, we extended Redis with what we call modules. Some of them were initially open source, and then we changed the license. This is probably the reason that you called me.


Katherine Druckman: Right. That was in the news, certainly.


Go to Full Article

Episode 24: A Chat About Redis Labs

Friday 2nd of August 2019 02:49:36 PM
Reality 2.0 - Episode 24: A Chat About Redis Labs

Doc Searls and Katherine Druckman talk to Yiftach Shoolman of Redis Labs about Redis, Open Source licenses, company culture and more.

Read the transcript here.

More in Tux Machines

MX-19 Release Candidate 1 now available

We are pleased to offer MX-19 RC 1 for testing purposes. As usual, this ISO includes the latest updates from the Debian 10.1 (buster), antiX and MX repos. Read more

The Linux Mint 19.2 Gaming Report: Promising But Room For Improvement

When I started outlining the original Linux Gaming Report, I was still a fresh-faced Linux noob. I didn’t understand how fast the ecosystem advanced (particularly graphics drivers and Steam Proton development), and I set some lofty goals that I couldn’t accomplish given my schedule. Before I even got around to testing Ubuntu 18.10, for example, Ubuntu 19.04 was just around the corner! And since all the evaluation and benchmarking takes a considerable amount of time, I ended up well behind the curve. So I’ve streamlined the process a bit, while adding additional checkpoints such as out-of-the-box software availability and ease-of-installation for important gaming apps like Lutris and GameHub. Read more

Something exciting is coming with Ubuntu 19.10

ZFS is a combined file system and logical volume manager that is scalable, supports high storage capacities and efficient data compression, and includes snapshots and rollbacks, copy-on-write clones, continuous integrity checking, automatic repair, and much more. So yeah, ZFS is a big deal, and it includes some really great features. But out of those supported features, it's the snapshots and rollbacks that should have every Ubuntu user/admin overcome with a case of the feels. Why? Imagine something has gone wrong. You've lost data, or an installation of a piece of software has messed up the system. What do you do? If you have ZFS and you've created a snapshot, you can roll the system back to the snapshot where everything was working fine. Although the concept isn't new to the world of computing, it's certainly not something Ubuntu has had by default. So this is big news. Read more
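Concretely, the snapshot-and-rollback workflow looks roughly like this at the command line (rpool/USERDATA is an illustrative dataset name, not a fixed path; real names vary by install, and these commands require root and an actual ZFS pool):

```shell
# Take a named snapshot before a risky change.
zfs snapshot rpool/USERDATA@before-upgrade

# Confirm the snapshot exists.
zfs list -t snapshot

# Something broke: roll the dataset back. Note that rollback discards
# every change made to the dataset after the snapshot was taken.
zfs rollback rpool/USERDATA@before-upgrade
```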

Pack Your Bags – Systemd Is Taking You To A New Home

Home directories have been a fundamental part of any Unixy system since day one. They're such a basic element, we usually don't give them much thought. And why would we? From a low-level point of view, whatever location $HOME is pointing to is a directory just like any of the countless others you will find on the system — apart from maybe being located on its own disk partition. Home directories are so unspectacular in their nature, it wouldn't usually cross anyone's mind to even consider changing anything about them. And then there's Lennart Poettering.

In case you're not familiar with the name, he is the main developer behind the systemd init system, which has nowadays been adopted by the majority of Linux distributions as a replacement for its oldschool, Unix-style init-system predecessors, essentially changing everything we knew about the system boot process. Not only did this change personally insult every single Perl-loving, Ken-Thompson-action-figure-owning grey beard, it engendered contempt towards systemd and Lennart himself that approaches Nickelback level. At this point, it probably doesn't matter anymore what he does next; haters gonna hate. So who better than him to disrupt everything we know about home directories? Where you _live_?

Home directories, though, are just one part of the equation that his latest creation — the systemd-homed project — is going to tackle. The big picture is really more about the whole concept of user management as we know it, which sounds bold and scary, but which in its current state is also a lot more flawed than we might realize. So let's have a look at what it's all about: the motivation behind homed, the problems it's going to both solve and raise, and how it's maybe time to leave some outdated philosophies behind us. Read more