
Linux Journal


Develop a Linux command-line Tool to Track and Plot Covid-19 Stats

Tuesday 13th of April 2021 04:00:00 PM
by Nawaz Abbasi

It’s been over a year and we are still fighting the pandemic in almost every aspect of our lives. Thanks to technology, we have various tools and mechanisms to track Covid-19-related metrics, which help us make informed decisions. This introductory-level tutorial discusses developing one such tool at the Linux command line, from scratch.

We will start by introducing the most important parts of the tool: the APIs and the commands. We will use two APIs, the COVID19 API and the Quickchart API, and two key commands, curl and jq. In simple terms, the curl command is used for data transfer and the jq command for processing JSON data.

The complete tool can be broken down into two key steps:

1. Fetching (GET request) data from the COVID19 API and piping the JSON output to jq to extract only the global data (or, similarly, country-specific data).

$ curl -s --location --request GET 'https://api.covid19api.com/summary' | jq -r '.Global'
{
  "NewConfirmed": 561661,
  "TotalConfirmed": 136069313,
  "NewDeaths": 8077,
  "TotalDeaths": 2937292,
  "NewRecovered": 487901,
  "TotalRecovered": 77585186,
  "Date": "2021-04-13T02:28:22.158Z"
}

2. Storing the output of step 1 in variables and calling the Quickchart API with those variables to plot a chart, then piping the JSON output to jq to extract only the link to our chart.

$ curl -s -X POST \
    -H 'Content-Type: application/json' \
    -d '{"chart": {"type": "bar", "data": {"labels": ["NewConfirmed ('''${newConf}''')", "TotalConfirmed ('''${totConf}''')", "NewDeaths ('''${newDeath}''')", "TotalDeaths ('''${totDeath}''')", "NewRecovered ('''${newRecover}''')", "TotalRecovered ('''${totRecover}''')"], "datasets": [{"label": "Global Covid-19 Stats ('''${datetime}''')", "data": ['''${newConf}''', '''${totConf}''', '''${newDeath}''', '''${totDeath}''', '''${newRecover}''', '''${totRecover}''']}]}}}' \
    https://quickchart.io/chart/create | jq -r '.url'
https://quickchart.io/chart/render/zf-be27ef29-4495-4e9a-9180-dbf76f485eaf

That’s it! Now we have our data plotted out in a chart.
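The same two steps can be scripted outside the shell as well. Below is a minimal Python sketch that mirrors the jq '.Global' filter and builds the Quickchart payload; it uses a canned copy of the API response shown above instead of a live call, so it runs offline. Everything beyond the data printed earlier is illustrative.

```python
import json

# A canned copy of the COVID19 API '/summary' response (Global part only),
# matching the output shown above, so this sketch needs no network call.
summary = json.loads("""
{"Global": {"NewConfirmed": 561661, "TotalConfirmed": 136069313,
            "NewDeaths": 8077, "TotalDeaths": 2937292,
            "NewRecovered": 487901, "TotalRecovered": 77585186,
            "Date": "2021-04-13T02:28:22.158Z"}}
""")

g = summary["Global"]  # the same filtering jq does with '.Global'

keys = ("NewConfirmed", "TotalConfirmed", "NewDeaths",
        "TotalDeaths", "NewRecovered", "TotalRecovered")

# Build the same JSON body the second curl command POSTs to Quickchart.
chart = {
    "chart": {
        "type": "bar",
        "data": {
            "labels": [f"{k} ({g[k]})" for k in keys],
            "datasets": [{
                "label": f"Global Covid-19 Stats ({g['Date']})",
                "data": [g[k] for k in keys],
            }],
        },
    }
}

# POST json.dumps(chart) to https://quickchart.io/chart/create,
# then read the 'url' field of the response, as the curl command does.
print(json.dumps(chart)[:60], "...")
```
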


FSF’s LibrePlanet 2021 Free Software Conference Is Next Weekend, Online Only

Tuesday 9th of March 2021 05:00:00 PM
by George Whittaker

On Saturday and Sunday, March 20th and 21st, 2021, free software supporters from all over the world will log in to share knowledge and experiences, and to socialize with others within the free software community. This year’s theme is “Empowering Users,” and the keynote speakers will be Julia Reda, Nathan Freitas, and Nadya Peek. Free Software Foundation (FSF) associate members and students attend gratis at the Supporter level.

You can see the schedule and learn more about the conference at https://libreplanet.org/2021/, and participants are encouraged to register in advance at https://u.fsf.org/lp21-sp

The conference will also include workshops, community-submitted five-minute Lightning Talks, Birds of a Feather (BoF) sessions, and an interactive “exhibitor hall” and “hallway” for socializing.


Review: The New weLees Visual LVM, a new style of LVM management, has been released

Tuesday 2nd of March 2021 05:00:00 PM
by George Whittaker

Maintaining the storage system is a daily job for system administrators. Linux provides users with a wealth of storage capabilities and powerful built-in maintenance tools. However, these tools are hardly friendly to system administrators, and considerable effort is generally required to master them.

As a Linux built-in storage model, LVM provides users with plenty of flexible management modes to fit various needs. For users who can fully utilize its functions, LVM can meet almost all needs, but this presumes a thorough understanding of the LVM model, dozens of commands, and their accompanying parameters.

A graphical interface dramatically simplifies both the learning curve and day-to-day operation of LVM, much like the partition tools widely used on Windows and Linux platforms. Although command scripts are suitable for daily, automated tasks, scripts cannot handle all LVM functions; for instance, many tasks still require manual calculation and processing.

Significant effort has been spent on this problem. Several graphical LVM management tools are now available, some bundled with Linux distributions and others developed by third parties. But a critical problem remains: the needs of remote machines and headless servers are completely ignored.

This is now solved by Visual LVM Remote. The tool’s front end is built on the HTTP protocol, so users can perform management operations from any smart device that can connect to the storage server.

Visual LVM is developed by weLees Corporation and supports all Linux distributions. In addition to working with remote/headless servers, it also supports more advanced LVM features than various off-the-shelf graphical LVM management tools.

Dependencies of Visual LVM Remote

Visual LVM Remote can work on any Linux distribution that includes the two components below:

  • LVM2

  • libstdc++.so

UI of Visual LVM Remote

With a concise UI, partitions, physical volumes, and logical volumes are displayed by disk layout. At a glance, disk and volume-group information can be obtained immediately. In addition, detailed information about an object is displayed in the information bar below when the mouse hovers over it.


Nvidia Linux drivers causing random hard crashes and now a major security risk still not fixed after 5+ months

Tuesday 23rd of February 2021 05:00:00 PM

The recent fiasco with Nvidia trying to block Hardware Unboxed from future GPU review samples over the content of their review is one example of how they choose to play this game. This hatred is shared not only by reviewers, but also by developers and especially Linux users.

The infamous Torvalds videos still traverse the web today as Nvidia conjures up another evil plan to suck up more of your money and market share. This is not just a one-off case; oh, how I wish it was. I just want my computer to work.

If anyone has used Sway-WM with an Nvidia GPU, I’m sure they would remember the --my-next-gpu-wont-be-nvidia option.

These are a few examples of many.

The Nvidia Linux drivers have never been good but whatever has been happening at Nvidia for the past decade has to stop today. The topic in question today is this bug: [https://forums.developer.nvidia.com/t/bug-report-455-23-04-kernel-panic-due-to-null-pointer-dereference]

This bug causes hard, irrecoverable crashes from driver 440 onward. The issue is still happening 5+ months later with no end in sight. At first users could work around it by using an older DKMS driver along with an LTS kernel. Today this is no longer possible: many Linux distributions are now dropping the old kernels, and DKMS cannot build against the new ones. Users are now FORCED into this “choice”:

{Use an older driver and risk security implications} or {“use” the new drivers that cause random irrecoverable crashes.}

This issue is only going to become more prevalent, as the kernel is a core dependency by definition. This is just another example of the implications of an unsafe older kernel causing issues for users: https://archlinux.org/news/moving-to-zstandard-images-by-default-on-mkinitcpio/

If you use Linux or care about the implications of a GPU monopoly, consider AMD. Nvidia is already rearing its ugly head and AMD is actually putting up a fight this year.

#Linux NVIDIA News

Parallel shells with xargs: Utilize all your CPU cores on UNIX and Windows

Tuesday 16th of February 2021 05:00:00 PM
by Charles Fisher

Introduction

One particular frustration with the UNIX shell is the inability to easily schedule multiple, concurrent tasks that fully utilize the CPU cores presented on modern systems. The example of focus in this article is file compression, but the problem arises with many computationally intensive tasks, such as image/audio/media processing, password cracking and hash analysis, database Extract, Transform, and Load, and backup activities. It is understandably frustrating to wait for gzip * running on a single CPU core, while most of a machine's processing power lies idle.

This can be understood as a weakness of the first decade of Research UNIX which was not developed on machines with SMP. The Bourne shell did not emerge from the 7th edition with any native syntax or controls for cohesively managing the resource consumption of background processes.

Utilities have haphazardly evolved to perform some of these functions. The GNU version of xargs is able to exercise some primitive control in allocating background processes, which is discussed at some length in the documentation. While the GNU extensions to xargs have proliferated to many other implementations (notably BusyBox, including the release for Microsoft Windows, example below), they are not POSIX.2-compliant, and likely will not be found on commercial UNIX.

Historic users of xargs will remember it as a useful tool for directories that contained too many files for echo * or other wildcards to be used; in this situation xargs is called to repeatedly batch groups of files with a single command. As xargs has evolved beyond POSIX, it has assumed a new relevance which is useful to explore.
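The scheduling pattern that GNU xargs provides with -P (hand each of N workers one file at a time until the batch is done) can be sketched in other languages too. Here is a minimal Python equivalent, offered only as an illustration of the idea; the file names are invented for the example, and zlib does its compression in C, so the worker threads genuinely overlap.

```python
import concurrent.futures
import gzip
import os
import tempfile

def compress(path):
    """Compress one file, playing the role of a single gzip process
    that `xargs -P N -n 1 gzip` would have spawned."""
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
        dst.write(src.read())
    return path + ".gz"

# Create a few sample files standing in for a directory of logs.
workdir = tempfile.mkdtemp()
files = []
for i in range(8):
    p = os.path.join(workdir, f"log{i}.txt")
    with open(p, "w") as f:
        f.write("some repetitive log data\n" * 1000)
    files.append(p)

# Fan the jobs out across workers, as `xargs -P "$(nproc)" -n 1 gzip`
# would do; each worker takes the next file as soon as it is free.
with concurrent.futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(compress, files))

print(len(results), "files compressed")
```

The shell equivalent with GNU xargs would be `ls *.txt | xargs -P "$(nproc)" -n 1 gzip`; the Python version is shown here only because it makes the worker-pool structure explicit.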

Why is POSIX.2 this bad?

A clear understanding of the lack of cohesive job scheduling in UNIX requires some history of the evolution of these utilities.


Bypassing Deep Packet Inspection: Tunneling Traffic Over TLS VPN

Thursday 11th of February 2021 05:00:00 PM
by Dmitriy Kuptsov

In some countries, network operators employ deep packet inspection techniques to block certain types of traffic. For example, Virtual Private Network (VPN) traffic can be analyzed and blocked to prevent users from sending encrypted packets over such networks.

Observing that HTTPS works all over the world (it is configured for an extremely large number of web servers) and cannot be easily analyzed (the payload is usually encrypted), we argue that VPN tunneling can be organized in the same manner: by masquerading the VPN traffic as TLS, or its older version SSL, we can build a reliable and secure network. Packets sent over such tunnels can cross multiple domains with various (strict and not-so-strict) security policies. Although SSH could potentially be used to build such a network, we have evidence that in certain countries connections made over SSH tunnels are analyzed statistically: if the network utilization of such tunnels is high, bursts exist, or connections are long-lived, the underlying TCP connections are reset by network operators.

Thus, here we make an experimental effort in this direction: first, we describe different VPN solutions that exist on the Internet; second, we describe our experimental Python-based software for Linux, which allows users to create VPN tunnels using the TLS protocol and tunnel small office/home office (SOHO) traffic through them.

I. INTRODUCTION

Virtual private networks (VPNs) are crucial in the modern era. By encapsulating the client’s traffic and sending it inside protected tunnels, users can obtain network services that would otherwise be blocked by a network operator. VPN solutions are also useful for accessing a company’s intranet. For example, corporate employees can access the internal network securely by establishing a VPN connection and directing all traffic through the tunnel towards the corporate network. This way they can get services that would otherwise be impossible to reach from the outside world.
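One concrete detail any such tunnel must handle is that TLS delivers a byte stream with no message boundaries, while the encapsulated traffic consists of discrete IP datagrams. A minimal length-prefix framing sketch in Python is shown below; the frame/unframe helpers are hypothetical names for illustration, not part of the authors' software, and a real tunnel would additionally need a TUN device and an ssl-wrapped socket.

```python
import struct

def frame(packet: bytes) -> bytes:
    """Prefix a datagram with its 2-byte big-endian length so it
    survives transport over a TLS byte stream."""
    return struct.pack("!H", len(packet)) + packet

def unframe(stream: bytes):
    """Split a received byte stream back into the original datagrams."""
    packets, off = [], 0
    while off + 2 <= len(stream):
        (n,) = struct.unpack_from("!H", stream, off)
        packets.append(stream[off + 2 : off + 2 + n])
        off += 2 + n
    return packets

# Two stand-ins for IP packets read from a hypothetical TUN device.
pkts = [b"\x45\x00first-packet", b"\x45\x00second"]
wire = b"".join(frame(p) for p in pkts)  # bytes written to the TLS socket
assert unframe(wire) == pkts             # what the far end recovers
```
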

II. BACKGROUND

There are various solutions that can be used to build VPNs. One example is the Host Identity Protocol (HIP) [7]. HIP is a layer-3.5 solution (it sits between the transport and network layers) and was originally designed to split the dual role of IP addresses as identifier and locator. For example, a company called Tempered Networks uses HIP to build secure networks (for an example, see [4]).


How to Save Time Running Automated Tests with Parallel CI Machines

Thursday 4th of February 2021 05:00:00 PM
by Artur Trzop

Automated tests are part of many programming projects, ensuring the software works correctly. The bigger the project, the larger the test suite can be. This can result in automated tests taking a lot of time to run. In this article you will learn how to run automated tests faster with parallel Continuous Integration (CI) machines and what problems can be encountered along the way. The article covers common parallel testing problems, based on Ruby and JavaScript tests.

Slow automated tests

Automated tests can be considered slow when programmers stop running the whole test suite on their local machines because it is too time-consuming. Most of the time you use CI servers such as Jenkins, CircleCI, or GitHub Actions to run your tests on an external machine instead of your own. When you have a test suite that runs for an hour, it is not efficient to run it on your computer. Browser end-to-end tests for your web project can take a really long time to execute, and running tests on a CI server for an hour is not efficient either. As a developer, you need a fast feedback loop to know whether your software works correctly. Automated tests should help you with that.

Split tests between many CI machines to save time

A way to save time is to make the CI build as fast as possible. When you have tests taking, say, an hour to run, you can leverage your CI server configuration and set up parallel jobs (parallel CI machines/nodes). Each of the parallel jobs can run a chunk of the test suite.

You need to divide your tests between the parallel CI machines. When you have a 60-minute test suite, you can run 20 parallel jobs where each job runs a small set of tests, and this should save you time. In an optimal scenario you would run tests for 3 minutes per job.

How do you make sure each job runs for 3 minutes? As a first step you can apply a simple solution: sort all of your test files alphabetically and divide them evenly among the parallel jobs. However, each test file can have a different execution time, depending on how many test cases it has and how complex each test case is. You can end up with test files divided in a suboptimal way, and this is problematic. The image below illustrates a suboptimal split of tests between parallel CI jobs, where one job runs too many tests and ends up being a bottleneck.
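A better split than the alphabetical one uses execution times recorded from a previous CI run: always hand the next-slowest test file to the currently least-loaded job. Here is a minimal greedy sketch of that idea in Python; the file names and timings are invented for the example.

```python
import heapq

def split_by_timing(test_times, jobs):
    """Greedy longest-processing-time split: assign each test file,
    slowest first, to the parallel job with the least total time."""
    heap = [(0.0, j) for j in range(jobs)]  # (seconds assigned, job index)
    heapq.heapify(heap)
    assignment = [[] for _ in range(jobs)]
    for name, secs in sorted(test_times.items(), key=lambda kv: -kv[1]):
        load, j = heapq.heappop(heap)
        assignment[j].append(name)
        heapq.heappush(heap, (load + secs, j))
    return assignment

# Hypothetical per-file timings (seconds) collected from a previous run.
times = {
    "checkout_spec.rb": 540, "cart_spec.rb": 300, "login_spec.rb": 120,
    "api_spec.rb": 420, "admin_spec.rb": 60, "search_spec.rb": 360,
}
jobs = split_by_timing(times, 3)
for i, job in enumerate(jobs):
    print(f"job {i}: {sum(times[t] for t in job)}s ->", job)
```

Instead of one job dragging on with all the slow files, the total time per job stays close to the average, which is exactly what the parallel CI setup needs.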


The KISS Web Development Framework

Thursday 28th of January 2021 05:00:00 PM
by Blake McBride

Perhaps the most popular platform for applications is the web. There are many reasons for this including portability across platforms, no need to update the program, data backup, sharing data with others, and many more. This popularity has driven many of us to the platform.

Unfortunately, the platform is a bit complex. Rather than developing in a particular environment, with web applications it is necessary to create two halves of a program utilizing vastly different technologies. On top of that, there are many additional challenges such as the communications and security between the two halves.

A typical web application would include all of the following building blocks:

  1. Front-end layout (HTML/CSS)
  2. Front-end functionality (JavaScript)
  3. Back-end server code (Java, C#, etc.)
  4. Communications (REST, etc.)
  5. Authentication
  6. Data persistence (SQL, etc.)

All of these don't even touch on the other pieces that are not part of your application proper, such as the web server (Apache, Tomcat, etc.), the database server (PostgreSQL, MySQL, MongoDB, etc.), the OS (Linux, etc.), the domain name, DNS, yadda, yadda, yadda.

The tremendous complexity notwithstanding, most application developers mainly have to concern themselves with the six items listed above.

Although there are many fine solutions available for these main concerns, in general, these solutions are siloed, complex, and incongruent. Let me explain.

Many solutions are siloed because they are single-solution packages that are complete within themselves and disconnected from other pieces of the system.

Some solutions are so complex that they can take years to learn well. Developers can struggle more with the framework they are using than the language or application they are trying to write. This is a major problem.

Lastly, by incongruent I mean that the siloed tools do not naturally fit well together. A bunch of glue code has to be written, learned, and supported to fit the various pieces together. Each tool has a different feel, a different approach, a different way of thinking.

Frustrated with all of these problems, I wrote the KISS Web Development Framework. At first it was just a collection of solutions I had developed; later it evolved into a single, comprehensive web development framework. KISS, an open-source project, was specifically designed to solve these exact challenges.

KISS is a single, comprehensive, fully integrated web development framework that includes integrated solutions for:

Front-end

  1. Custom HTML controls
  2. Easy communications with the back-end with built-in authentication
  3. Browser cache control (so the user never has to clear their cache)
  4. A variety of general purpose utilities

Back-end


Linux in Healthcare - Cutting Costs & Adding Safety

Friday 22nd of January 2021 05:00:00 PM
by Alex Gosselin

The healthcare domain deals directly with our health and lives. Healthcare is the prevention, diagnosis, and treatment of any disease, injury, illness, or other physical and mental impairments in humans, and the sector deals with emergencies frequently. With immense scope for improvement, the thriving healthcare domain spans everything from telemedicine to insurance, and from inpatient hospitals to outpatient clinics. With practitioners working in areas like medicine, chiropractic, nursing, dentistry, pharmacy, allied health, and others, it is an industry with complex processes and data-oriented maintenance systems that are difficult to manage manually with paperwork.

Necessity is the mother of invention, and hence people across the world have invented software and systems to manage:

  • Patients’ data or rather medical history
  • Bills and claims for own and third-party services
  • Inventory management
  • Communication channels among various departments like reception, doctor’s room, investigation rooms, wards, Operation theaters, etc.
  • Controlled Medical equipment and much more.

Thus saving precious time, making life easier, and minimizing human errors.

Healthcare integrated with Linux: With high availability for critical workloads, low power consumption, and reliability, Linux has established itself alongside the likes of Windows and macOS. With a “stripped-down” graphical interface and a minimal OS installation, it provides a strong impetus for performance by restricting many services from running and allowing direct control over hardware. By integrating Linux with the latest technological solutions in healthcare (check out Elinext healthcare solutions, as an example), businesses are saving a lot while gaining enhanced security.


A few of the drivers promoting Linux in healthcare are:

Open Source: One of the biggest benefits of Linux is that it is open source, saving license costs for healthcare organizations. Most of the software and programs running on Linux are largely open source too. Anyone can modify the Linux kernel under its open-source license, allowing customization to fit your needs. With open source, there is no need to request additional resources or sign additional agreements, and it provides vendor independence. With a credible Linux community backed by various organizations, you have satisfactory support.


MuseScore Created New Font in Memory of Original SCORE Program Creator

Thursday 21st of January 2021 05:00:00 PM

MuseScore is free notation software for operating systems such as Windows, macOS, and Linux. It is designed for music teachers, students, and both amateur and professional composers. MuseScore is released as FOSS under the GNU GPL license, and it is accompanied by the freemium MuseScore.com sheet-music catalogue, with a mobile score viewer, a playback app, and an online score-sharing platform. In 2018, the MuseScore company was acquired by Ultimate Guitar, which added full-time paid developers to the open-source team. Since 2019 the MuseScore design team has been led by Martin Keary, known as the blogger Tantacrul, who has consistently criticized composition software over its design and usability. From that moment on, a qualitative change was set in motion in MuseScore.

Historically, the engraving quality in MuseScore has not been entirely satisfactory. After a review by Martin Keary, MuseScore product owner (previously known as MuseScore head of design), and Simon Smith, an engraving expert who has produced multiple detailed reports on the engraving quality of MuseScore 3.5, it became apparent that some key engraving issues should be resolved immediately, as that would have a significant impact on the overall quality of scores. These changes will considerably improve the quality of scores published in the sheet-music catalog, MuseScore.com.

MuseScore 3.6 was called the 'engraving release'; it addressed many of the biggest issues affecting sheet music's layout and appearance and resulted from a massive collaboration between the community and the internal team.


Two of the most notable additions in this release are Leland, our new notation font and Edwin, our new typeface.

Leland is a highly sophisticated notation style created by Martin Keary & Simon Smith. Leland aims to provide a classic notation style that feels 'just right' with a balanced, consistent weight and a finessed appearance that avoids overly stylized quirks.

The new typeface, Edwin, is based on New Century Schoolbook, which has long been the typeface of choice of some of the world's leading publishers and was explicitly chosen as a complementary companion to Leland. We have also provided new default style settings (margins, line thickness, etc.) to complement Leland and Edwin, which match conventions used by the world's leading publishing houses.

“Then there's our new typeface, Edwin, which is an open license version of new Century Schoolbook - long a favourite of professional publishers, like Boosey and Hawkes. But since there is no music written yet, you'll be forgiven for missing the largest change of all: our new notation font: Leland, which is named after Leland Smith, the creator of a now abandoned application called SCORE, which was known for the amazing quality of its engraving. We have spent a lot of time finessing this font to be a world beater.”

— Martin Keary, product owner of MuseScore

Equally as important as the new notation style is the new vertical layout system. It is switched on by default for new scores and can be activated on older scores too. It is a tremendous improvement to how staves are vertically arranged and will save composers hours of work by significantly reducing their reliance on vertical spacers and manual adjustment.

The MuseScore 3.6 developers also created a system for automatically organizing the instruments on your score to conform with a range of common conventions (orchestral, marching band, etc.). In addition, newly created scores will be accurately bracketed by default. A user can even specify soloists, which will be arranged and bracketed according to the chosen convention. These three new systems result from a collaboration between Simon Smith and MuseScore community member Niek van den Berg.

The MuseScore team has also greatly improved how the software displays the notation fonts Emmentaler and Bravura, which now more accurately match the original designers' intentions, and has included a new jazz font called 'Petaluma', designed by Anthony Hughes at Steinberg.

Lastly, MuseScore has made some beneficial improvements to the export process, including a new dialog containing lots of practical and time-saving settings. This work was implemented by one more community member, Casper Jeukendrup.

The team's current plans are to improve the engraving capabilities of MuseScore further, including substantial overhauls of the horizontal spacing and beaming systems. MuseScore 3.6 is a massive step, but there is a great deal of work ahead.

Links

Official release notes: MuseScore 3.6

Martin Keary’s video: “How I Designed a Free Music Font for 5 Million Musicians (MuseScore 3.6)”

Official video: “MuseScore 3.6 - A Massive Engraving Overhaul!”

Download MuseScore for free: MuseScore.org

#Linux Music Software FOSS

Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time

Thursday 14th of January 2021 05:00:00 PM
Startup turns devices you probably already own - from smartphones and tablets to smart TVs and game consoles - into full-fledged computers.

Shells (shells.com), a new entrant in the virtual machine and cloud computing space, has launched a product that gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease of use, and competitive pricing are a focus for Shells, which makes it easy for a user to start up their own virtual cloud computer in minutes. The company also offers multiple Linux distros (and continues to add more) to ensure users can have the computer they “want” and are most comfortable with.

The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.

Shells was designed to run seamlessly on a number of devices that most users likely already own, as long as it can open an internet browser or run one of Shells’ dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.

Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter, and offers a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth which is a great option for software engineers, music producers, video editors, and other digital creatives.

Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.

“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”

“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”

Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:

Website: https://www.shells.com

Twitter: @shellsdotcom

Facebook: https://www.facebook.com/shellsdotcom

Instagram: https://www.instagram.com/shellscom

#virtual-machine #cloud-computing #Shells

An Introduction to Linux Gaming thanks to ProtonDB

Wednesday 6th of January 2021 05:00:00 PM
by Zachary Renz

Video Games On Linux?

In this article, the newest compatibility feature for gaming is introduced and explained for all you dedicated video game fanatics.

Valve has released its new compatibility feature to advance Linux gaming, complete with its own community of play testers and reviewers.

In recent years we have made leaps and strides in making Linux and Unix systems more accessible for everyone. Now we come to a commonly asked question: can we play games on Linux? Well, of course! And almost; let me explain.

Proton compatibility layer for Steam client 

With the rising popularity of Linux systems, Valve is going ahead of the crowd yet again with Proton for its Steam client (the computer program that runs your purchased games from Steam). Proton is a compatibility layer based on Wine and DXVK that lets Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be enabled on any Steam account for Linux gaming, through an integration called "Steam Play."

Lately, there has been a lot of controversy, as Microsoft is rumored to someday release its own app store and disable downloading software online. In response, many companies and software developers feel pressured to find a new "haven" to share content with the internet. Proton might be Valve's response to this, and Valve is working to make more of its games accessible to Linux users.

Activating Proton with Steam Play 

Proton is integrated into the Steam client with "Steam Play." To activate Proton, go into your Steam client and click on Steam in the upper right corner. Then click on Settings to open a new window.

Steam Client's settings window


From here, click on the Steam Play button at the bottom of the panel, then click "Enable Steam Play for Supported Titles." Steam will then ask you to restart; click yes, and you are ready to play after the restart.

Your computer will now play all of Steam's whitelisted games seamlessly. If you would like to try other games that are not guaranteed to work on Linux, click "Enable Steam Play for All Other Titles."

What Happens if a Game has Issues?

Don't worry; this can and will happen for games that are not in Steam's whitelisted games archive. But there is help for you online on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.


More in Tux Machines

Noise With Blanket

Videos/Audiocasts/Shows: Linux Journal Expats, Linux Experiment, and Krita Artwork

  • You Should Open Source Now, Ask Me How!

    Katherine Druckman chats with Petros Koutoupis and Kyle Rankin about FOSS (Free and Open Source Software), the benefits of contributing to the projects you use, and why you should be a FOSS fan as well.

  • System76 starts their own desktop environment, Arch goes the easy route - Linux & Open Source news

    This time, we have System76 working on their own desktop environment based on GNOME, Arch Linux adding a guided installer, Google winning its court case against Oracle on the use of Java in Android, and Facebook is leaking data online, again. Become a channel member to get access to a weekly patroncast and vote on the next topics I'll cover

  • Timelapse: inking a comic page in Krita (uncommented)

    An uncommented timelapse of inking page 6 of episode 34 of my webcomic Pepper&Carrot ( https://www.peppercarrot.com/ ). During the process, I thought about activating the recorder, and I even put up a webcam so you can see what I'm doing on the tablet too. I'm not doing this for every page, because you can imagine the weight on disk of saving around 10 hours of video like this, and also how it is not multi-tasking: when I record, you don't see me open the door to get the mail from the postman, clean up the temporary accident of a cat bringing a mouse back home, or type to solve a merge-request issue to merge a translation of Pepper&Carrot.

Kernel Leftovers

  • [Intel-gfx] [RFC 00/28] Old platform/gen kconfig options series
  • Patches Resubmitted For Linux With Selectable Intel Graphics Platform Support

    Back in early 2018, patches were proposed for selectable platform support when building Intel's kernel graphics driver, so that users and distributions could, if desired, disable extremely old hardware support and/or cater kernel builds to specific Intel graphics generations. Three years later, those patches have been re-proposed. The patches, then and now, are about allowing selectable Intel graphics "Gen" support at kernel configure/build time so that, say, the i8xx support could be removed, or other specific generations of Intel graphics handled by the i915 kernel driver. This disabling could be done when phasing out older hardware support, seeking smaller kernel images, or for other similar purposes. The patches don't change any default support levels but leave things as-is and simply provide the knobs for disabling select generations of hardware.

  • Linux Kernel Runtime Guard 0.9.0 Is Released

    Linux Kernel Runtime Guard (LKRG) is a security module for the Linux kernel developed by Openwall. The latest release adds compatibility with Linux kernels up to the soon-to-be-released 5.12, support for building LKRG into kernel images, support for old 32-bit x86 machines, and more. Loading the LKRG 0.9.0 module will cause a kernel panic and a complete halt if SELinux is enabled.

  • Hans de Goede: Logitech G15 and Z-10 LCD-screen support under Linux

    A while ago I worked on improving Logitech G15 LCD-screen support under Linux. I recently got an email from someone who wanted to add support for the LCD panel in the Logitech Z-10 speakers to lcdproc, asking me to describe the process I went through to improve G15 support in lcdproc and how I made it work without requiring the unmaintained g15daemon code.

Devuan 4.0 Alpha Builds Begin For Debian 11 Without Systemd

Debian 11 continues inching closer towards release and it looks like the developers maintaining the "Devuan" fork won't be far behind with their re-base of the distribution focused on init system freedom. The Devuan fork of Debian remains focused on providing Debian GNU/Linux without systemd. Devuan Beowulf 3.1 is their latest release based on Debian 10 while Devuan Chimaera is in the works as their re-base for Debian 11. Read more