
Linux Journal


SQLite for Secrecy Management - Tools and Methods

Wednesday 28th of September 2022 04:00:00 PM
by Charles Fisher

Introduction

Secrets pervade enterprise systems. Access to critical corporate resources will always require credentials of some type, and this sensitive data is often inadequately protected, leaving it ripe both for accidental exposure and malicious exploitation. Best practices are few, and they often fail.

SQLite is a natural storage platform, approved by the U.S. Library of Congress as a long-term archival medium. “SQLite is likely used more than all other database engines combined.” The software undergoes extensive testing, and it has acquired DO-178B certification for reliability due to the needs of the avionics industry; it is currently used in the Airbus A350's flight systems. The need for SQLite emerged from a damage-control application aboard the U.S. Navy destroyer USS Oscar Austin (DDG-79). An Informix database was running under HP-UX on this vessel, and during ship power losses the database would not always restart without maintenance, presenting physical risks for the crew. SQLite is an answer to that danger; when used properly, it will transparently recover from such crashes. Despite a small number of CVEs patched in CentOS 7 (CVE-2015-3414, CVE-2015-3415, CVE-2015-3416, CVE-2019-13734), few databases can match SQLite's reliability record, and none that are commercially prevalent.

SQLite specifically avoids any question of access control. It does not implement GRANT and REVOKE as found in other databases, delegating permissions to the OS instead. Adapting it for sensitive data therefore requires strong security controls to be layered on top of it.

The free releases of CyberArk Conjur and Summon form a basic platform for secrecy management. These tools are somewhat awkward, as Conjur requires a running instance of PostgreSQL, which brings a far larger attack surface than one would hope. Tying an enterprise to a free, centralized instance of Conjur and PostgreSQL is a large risk, as CyberArk's documentation attests.

CyberArk's Summon, however, can be configured with custom backend providers, which have simple interfacing requirements. SQLite is a good fit both as a Summon backend and as a standalone secrecy provider.
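Summon's provider contract is deliberately simple: a provider is any executable that receives a variable id as its first argument, prints the secret value to stdout, and exits non-zero on failure. Below is a minimal sketch of an SQLite-backed provider; the database path, table schema, and secret id are illustrative assumptions, not conventions from CyberArk or SQLite:

```shell
DB=/tmp/secrets.db

if command -v sqlite3 > /dev/null; then
    # One-time setup: a table of (id, value) pairs in a file only we can read.
    sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS secrets(id TEXT PRIMARY KEY, value TEXT);"
    chmod 600 "$DB"
    sqlite3 "$DB" "INSERT OR REPLACE INTO secrets VALUES('db/password','s3cret');"
fi

# The provider logic: look up the requested id and print the stored value.
# (A production version should bind $1 as a parameter rather than
# interpolating it into the SQL string.)
lookup() {
    v=$(sqlite3 "$DB" "SELECT value FROM secrets WHERE id='$1';") || return 1
    [ -n "$v" ] && printf '%s\n' "$v"
}

if command -v sqlite3 > /dev/null; then
    lookup "db/password"
fi
```

Installed as an executable script, such a provider can be selected with Summon's -p option, resolving secrets.yml entries against the database.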

Go to Full Article

Pwndrop on Linode

Tuesday 27th of September 2022 04:00:00 PM
by David Burgess

When I first ran across PwnDrop, I was intrigued by what the developers had in mind. For instance, if you're a white-hat hacker looking to share exploits safely with your client, you might use a service like PwnDrop. If you're a journalist communicating with, well, just about anyone who is trying to keep their identity secret, you might use a service like PwnDrop.

In this tutorial, we're going to look at how easy it is to set up and use in just a few minutes.

Prerequisites for PwnDrop in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker Server so that you can do things like route traffic and manage SSLs on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up your PwnDrop instance on that server.

There are two primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can then copy and paste the following:
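The original post's stack definition isn't reproduced in this excerpt. As a rough sketch of what a pwndrop stack can look like, here is a minimal Docker Compose file built around the community-maintained linuxserver.io image; the image name, port mapping, and volume path are assumptions to verify against that image's documentation before use:

```yaml
version: "3"
services:
  pwndrop:
    image: lscr.io/linuxserver/pwndrop:latest   # assumed image; verify before use
    environment:
      - PUID=1000          # run as this user id inside the container
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./pwndrop-config:/config   # persists settings and uploaded files
    ports:
      - "8080:8080"        # assumed port mapping; adjust per the image docs
    restart: unless-stopped
```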

Go to Full Article

FileRun on Linode

Tuesday 20th of September 2022 04:00:00 PM
by David Burgess

You may want to set up a file server like FileRun for any number of reasons. The main reason, I would think, would be so you can have your own Google Drive alternative that is under your control instead of Google's.

FileRun claims to be "Probably the best File Manager in the world with desktop Sync and File Sharing," but I think you'll have to be the judge of that for yourself.

Just to be completely transparent here, I like FileRun, but there is a shortcoming that I hope they will eventually fix. That shortcoming is that there are some, in my opinion, very important settings that are locked away behind an Enterprise License requirement.

That aside, I really like the ease-of-use and flexibility of FileRun. So let's take a look at it.

Prerequisites for FileRun in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker Server so that you can do things like route traffic and manage SSLs on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up your FileRun instance on that server.

There are two primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can then copy and paste the following:
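The original post's stack definition isn't included in this excerpt either. As a sketch, FileRun is commonly deployed as a two-service stack (the app plus MariaDB); the image tags, passwords, and paths below are placeholders you should adapt, and the environment variable names are taken from the filerun/filerun image's documentation:

```yaml
version: "3"
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: filerun
      MYSQL_USER: filerun
      MYSQL_PASSWORD: change-me-too
    volumes:
      - ./db:/var/lib/mysql
  filerun:
    image: filerun/filerun
    environment:
      FR_DB_HOST: db
      FR_DB_PORT: 3306
      FR_DB_NAME: filerun
      FR_DB_USER: filerun
      FR_DB_PASS: change-me-too
    depends_on:
      - db
    ports:
      - "8080:80"          # FileRun's web UI on host port 8080
    volumes:
      - ./html:/var/www/html
      - ./user-files:/user-files
```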

Go to Full Article

Static Site Generation with Hugo

Tuesday 13th of September 2022 04:00:00 PM
by Brandon Hopkins

Hugo is quickly becoming one of the best ways to create a website. Hugo is a free and open-source static website generator that allows you to build beautiful static websites with ease. Static websites are awesome because they take very few system resources to host. Compared to something like WordPress, which relies on databases, PHP, and more, static sites are simply HTML, CSS, and the occasional line of JavaScript. That makes static sites perfect for simple blogs, documentation sites, portfolios, and more.

What is a Static Site?

Static websites are simply sites that consist of basic HTML and CSS files for each individual page. A static site can be easily created and published, as server requirements are small and very little server-side software is required. You don’t need to know coding and database design to build a static website.

In the early days of the internet most everything was static, but sites were bland and poorly designed. Also, if you wanted to make a site-wide change, such as a link in the footer, you’d need to go through every file for your website and make changes on a page-by-page basis. Maintaining a huge number of fixed pages would be impractical without automated tools. With modern web template systems, however, this has changed.

Over the past few years, static sites have again become popular, thanks to advances in programming languages and libraries. Now, with static site generators, you can host blogs, large websites, and more, with the ability to make site-wide changes on the fly.

Advantages of Static

Static files are lightweight, making the site faster to load. Cost efficiency is another vital reason why companies tend to migrate to static sites. Below are some of the advantages of static sites over traditional sites based on content management systems and server-side scripting, such as PHP and MySQL.

Speed

Because a static site's content is delivered as entirely pre-rendered pages, there is less that can go wrong while a page loads. In a traditional site, by contrast, the page is built separately for every visitor. Better speed provides a better SEO ranking and better site performance as a whole.

Flexibility

Static websites have multiple options in terms of frameworks for rendering. You’re free to choose any programming language and framework: Ruby, JavaScript, Vue, React, and so on. This makes building and maintenance smoother than with traditional sites. Also, static sites have fewer dependencies, so you can easily leverage your cloud infrastructure and migrate easily.

Go to Full Article

How to Use Sar (System Activity Reporter)

Wednesday 10th of August 2022 04:00:00 PM
by Jeremy 'Jay' LaCroix

Overview

In this article, we're going to take a look at the System Activity Reporter, also known as the sar command. This command will help us with seeing a historical view of the performance of our server. You'll see examples of installing it, running it manually, and more. Let's get started!

Prerequisites

Before we get started, there are a few quick things to mention. If your server is a production server, I hope you've already installed all available updates. Linode's documentation already includes articles on updating packages.

To get started, we'll first need to install the sar command, which is available in the sysstat package:

sudo apt update
sudo apt install sysstat

Installation of the sysstat package should be fairly fast.

However, having the sysstat package installed by itself isn't enough - we'll need to configure its defaults. We can use the nano text editor, for example, to edit the /etc/default/sysstat file:

sudo nano /etc/default/sysstat

The first change to make within this file is to enable stat collection:

ENABLED="true"

Save the file, and then we're all set with that file in particular.

Optionally, you could consider editing other configuration files that configure sar:

  • /etc/cron.d/sysstat

  • /etc/sysstat/sysstat

The first controls how often stats are collected; the second gives you even more options to fine-tune sar, which might be useful. Feel free to take a look at them.
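For reference, on Debian and Ubuntu systems the collection schedule in /etc/cron.d/sysstat looks roughly like the excerpt below (your packaged version may differ slightly); changing the 5-55/10 field alters how often samples are taken:

```
# /etc/cron.d/sysstat (illustrative excerpt)
# Collect activity data every 10 minutes
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
```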

The data file

List the contents of the /var/log/sysstat/ directory:

ls -l /var/log/sysstat/

sar will collect data every ten minutes by default, so if ten minutes haven't passed since you enabled stat collection, wait a bit and a data file should appear.

Running the sar command

Here's an example of sar in action:

sudo sar -u -f /var/log/sysstat/saNUM

Note: NUM in the example is a placeholder for the number next to your data file, which will actually be the same as the date, specifically the day of the month (for example, sa22 corresponds to the 22nd of the current month). The output will give you the overall performance for your server at a given time.

Continuing, let's look at a simpler example:

sar -u

This should give you the same output as before, but without waiting for the data file to be updated.
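Beyond -u for CPU, sar supports several other report types; the flags below come from sysstat's manual. A live report samples the system directly, so it works even before any data files exist. The snippet is guarded so it degrades gracefully on machines without sysstat installed:

```shell
#   sar -r        memory utilization
#   sar -q        run queue length and load averages
#   sar -b        I/O and transfer rates
#   sar -n DEV    per-interface network statistics
# A live CPU report: 3 samples taken 1 second apart.
if command -v sar > /dev/null; then
    sar -u 1 3
else
    echo "sysstat is not installed"
fi
```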

Yet another example to show you is the

Go to Full Article

How to Use the rsync Command

Wednesday 3rd of August 2022 04:00:00 PM
by Jeremy 'Jay' LaCroix Overview

rsync is one of my favorite utilities on the Linux command line, and block storage is one of my favorite features of Linode's platform, so in this article I get to combine the two: I'm going to show you how to use rsync to copy data from one server to another, in the form of a backup. What's really cool is that this example will utilize block storage.

Note: I'll be using the Nextcloud server that was set up in a previous article, but it doesn't really matter if it's Nextcloud - you can back up any server that you'd like.

Setting up our environment

On the Linode dashboard, I created an instance named "backup-server" to use as the example here. On your side, be sure to have a Linode instance ready to go in order to have a destination to copy your files to. Also, create a block storage volume to hold the backup files. If you don't already have block storage set up, you can check out other articles and videos on Linode's documentation and YouTube channel respectively, to see an overview of the process.

Again, in the examples, I'm going to be backing up a Nextcloud instance, but feel free to back up any server you may have set up - just be sure to update the paths accordingly to ensure everything matches your environment. In the Nextcloud video, I set up the data volume onto a block storage volume, so block storage is used at both ends.

First, let's create a new directory where we will mount our block storage volume on the backup server. I decided to use /mnt/backup-data:

sudo mkdir /mnt/backup-data

Since the backup server I used in the example stores backups for more than one Linode instance, I decided to have each server back up to a sub-directory within the /mnt/backup-data directory.

sudo mkdir /mnt/backup-data/nextcloud.learnlinux.cloud

Note: I like to name the sub-directories after the fully qualified domain name for that instance, but that is not required.

Continuing, let's make sure our local user (or a backup user) owns the destination directory:

sudo chown -R jay:jay /mnt/backup-data/nextcloud.learnlinux.cloud

After running that command, the user and group you specify will become the owner of the target directory, as well as everything underneath it (due to the -R option).

Note: Be sure to update the username, group name, and directory names to match your environment.
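The excerpt above ends before the rsync invocation itself, so here is a sketch of the kind of command the article is building toward; the hostname and remote path are examples drawn from the setup described above, and the local copy at the end simply demonstrates that the same syntax works without a remote host:

```shell
# -a  archive mode (recursive; preserves permissions, times, symlinks)
# -v  verbose output    -h  human-readable sizes
# Remote form (example hostname/path, matching the setup above):
#   rsync -avh /mnt/ncdata/ jay@backup-server:/mnt/backup-data/nextcloud.learnlinux.cloud/
# The same syntax works locally, which is handy for a quick sanity check:
mkdir -p /tmp/rsync-demo-src /tmp/rsync-demo-dst
echo "hello" > /tmp/rsync-demo-src/file.txt
if command -v rsync > /dev/null; then
    rsync -avh /tmp/rsync-demo-src/ /tmp/rsync-demo-dst/
    cat /tmp/rsync-demo-dst/file.txt
else
    echo "rsync is not installed"
fi
```

Note the trailing slash on the source directory: it tells rsync to copy the directory's contents rather than the directory itself.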

Go to Full Article

How to use Block Storage to Increase Space on Your Nextcloud Instance

Wednesday 27th of July 2022 04:00:00 PM
by Jeremy 'Jay' LaCroix

Overview

In a previous article, I showed you how to build your very own Nextcloud server. In this article, we're going to extend the storage for our Nextcloud instance by utilizing block storage. To follow along, you'll either need your own Nextcloud server to extend, or perhaps you can add block storage to a different type of server you may control, which would mean you'd need to update the paths accordingly as we go along. Block storage is incredibly useful, so we'll definitely want to take advantage of this.

Let's begin!

Setting up the block storage volume

First, use SSH to log in to your Nextcloud instance:

ssh

If we execute df -h, we can see the current list of attached storage volumes:

df -h

One of the benefits of block storage is that you can have a smaller instance but still have a bigger disk. Right now, unless you're working ahead, we won't have a block storage volume attached yet, so create one within the Linode dashboard.

You can do this by clicking on "Volumes" within the dashboard, and then you can get started with the process. Fill out each of the fields while creating the block storage device. But pay special attention to the region - you want to set this to be the same region that your Linode instance is in.

After creating the volume, you should see some example commands that give you everything you need to set it up. The first command, the one we will use to format the volume, can be copied and pasted directly into a command shell. For example, it might look similar to this:

sudo mkfs.ext4 "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data"

Of course, that's just an example command; it's best to use the command provided by the Linode dashboard, so if you'd like to copy and paste, use the command you're provided within the dashboard.

At this point, the volume will be formatted, but we'll need to mount it in order to start using it. The second command presented in the dashboard will end up creating a directory into which to mount the volume:

sudo mkdir "/mnt/nextcloud-data"

The third command will actually mount the new volume to your filesystem. Be sure to use the command from the dashboard, the one below is presented only as an example of what that generally looks like:

sudo mount "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data" "/mnt/nextcloud-data"

Next, check the output of the df command and ensure the new volume is listed within the output:

df -h

Next, let's make sure we update /etc/fstab for the new volume, to ensure that it's automatically mounted every time the server starts up:
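As a sketch, the fstab entry generally recommended for Linode block storage volumes looks like the line below (the device path matches the example volume above; noatime and nofail are common choices so a slow or missing volume doesn't block booting):

```
/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data /mnt/nextcloud-data ext4 defaults,noatime,nofail 0 2
```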

Go to Full Article

How To Install Nextcloud On An Ubuntu Server

Wednesday 20th of July 2022 04:00:00 PM
by Jeremy 'Jay' LaCroix

Introduction, and Getting Started

Nextcloud is a powerful productivity platform that gives you access to some amazing features, such as collaborative editing, cloud file sync, private audio/video chat, email, calendar, and more! Best of all, Nextcloud is under your control and is completely customizable. In this article, we're going to be setting up our very own Nextcloud server on Linode. Alternatively, you can spin up a Nextcloud server through the Linode Marketplace, which lets you set up Nextcloud in a single click. However, this article will walk you through the manual installation method. While this method has more steps, by the end you'll have built your very own Nextcloud server from scratch, which is not only a valuable learning experience: you'll also become intimately familiar with the process of setting up Nextcloud. Let's get started!

In order to install Nextcloud, we'll need a Linux instance to install it onto. That's the easy part - there's no shortage of Linux on Linode - so to get started, we'll create a brand-new Ubuntu 20.04 Linode instance to serve as our base. Many of the commands we'll be using have changed in releases after Ubuntu 20.04, so while you might be tempted to start with a newer instance, these commands were all tested on Ubuntu 20.04. And considering that Ubuntu 20.04 is supported until April of 2025, it's not a bad choice at all.

Creating your instance

During the process of creating your new Linode instance, choose a region that's closest to you geographically (or close to your target audience). For the instance type, be sure to choose a plan with 2GB of RAM (preferably 4GB). You can always increase the plan later, should you need to do so. You can save some additional money by choosing an instance from the Shared CPU section. For the label, give it a name that matches the designated purpose of the instance. A good name might be something like "nextcloud", but if you have a domain for your instance, you can use that as the name as well.

Continuing, you can consider using tags, which are basically name/value pairs you can add to your instance. This is completely optional, but you can create whatever tags you need for your instance. For example, you could have a "production" tag, or maybe a "development" tag, depending on whether or not you intend to use the instance for production. Again, this is optional, and there's no right or wrong way to tag an instance. If in doubt, you can just leave this blank.

Next, the root password should be unique and, preferably, randomly generated. This password in particular is the one we will use to log into our instance, so make sure you remember it. SSH keys are preferred, and if you have one set up within your profile, you can check a box on this page to add it to your instance.

Go to Full Article

The Echo Command

Wednesday 13th of July 2022 04:00:00 PM
by Jeremy 'Jay' LaCroix

In this article, we're going to look at the echo command, which is useful for showing text on the terminal, as well as the contents of variables. Let's get started!

Basic usage of the echo command

Basic usage of the echo command is very simple. For example:

echo "Hello World"

The output will be what you expect, it will echo Hello World onto the screen. You can also use echo to view the contents of a variable:

msg="Hello World"
echo $msg

This also works for built-in shell variables as well:

echo $HOME

Additional Examples

As with most Linux commands, there's definitely more that we can do with echo than what we've seen so far.

Audible Alerts

You can also sound an audible alert with echo:

echo -e "\aHello World"

The -e option tells echo to interpret escape sequences (like the \a above), which is what lets you change the format of the output.

echo -e "This is a\bLinux server."

The example used \b within the command, which inserts a backspace, giving you the same behavior as actually pressing the backspace key. In the above example, the letter "a" will not print, because \b erases it.

Truncating

The ability to truncate means you can cut the output short. For example:

echo -e "This is a Linux\c server."

The above command discards everything from \c onward, which means we'll see the following output:

This is a Linux

Adding a new line

To force a new line to be created:

echo -e "This is a Linux\n server"

The output will be split across two lines:

This is a Linux
 server

Adding a tab character

To add a tab character to the output:

echo -e "This is a\t Linux\t server."

This will produce the following output, with a tab's width of space between the words:

This is a    Linux    server.
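Putting a few of these escape sequences and options together in one short run (using bash's built-in echo):

```shell
echo -e "Column1\tColumn2"          # \t inserts a tab
echo -n "same "                     # -n suppresses the trailing newline
echo "line"
msg="Hello World"
echo "The variable contains: $msg"  # variables expand inside double quotes
```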

Redirecting output to a text file

Rather than showing the output on the terminal, we can instruct echo to instead send its output to a text file.

echo "Logfile started: $(date +'%D %T')" > log.txt

Closing

The basics of the echo command were covered in this article. Of course, there's more options where that came from - but this should be more than enough to get you started!

You can watch the tutorial here:

Go to Full Article

Open Source Community to Gather in LA for SCALE 19x

Tuesday 21st of June 2022 04:00:00 PM
by Larry Cafiero

The Southern California Linux Expo – SCALE 19x – returns to its regularly scheduled annual program this year from July 28-31 at the Hilton Los Angeles Airport hotel.

As this continent’s largest community-run Linux/FOSS expo, SCALE 19x continues a nearly two-decade tradition of bringing the latest Free/Open Source Software developments, DevOps, Security and related trends to the general public during the course of the four-day event. Whether you are interested in low level system tuning, how to scale and secure your applications, or how to use OSS at home - SCALE is for you.

Some of this year's highlights include keynotes by Internet pioneer Vint Cerf, who now serves as Chief Internet Evangelist for Google, and Demetris Cheatham, Senior Director, Diversity and Inclusion.

Along with over 100 speakers in sessions spanning the four-day event, SCALE 19x also brings about 100 exhibitors to the expo floor providing their latest software and other developments. In addition, co-located events return to SCALE 19x, which include sessions by IEEE SA Open, AWS, FreeBSD, PostgreSQL, and DevOps Day LA among others. More information on the co-located events can be found at https://www.socallinuxexpo.org/scale/19x/events

Sponsors – both long-time friends of the Expo and newcomers with whom we expect a long relationship – have lined up to support SCALE 19x. Amazon Web Services – AWS for short – leads off the Platinum List, along with Portworx and Mattermost.

Returning to the Hilton Los Angeles Airport hotel means that there’s one place to stay and attend during the four-day Expo. The Hilton LAX offers a special deal for SCALE 19x attendees; to take advantage of the savings, visit https://book.passkey.com/event/50305242/owner/50954/home

And, of course, SCALE wouldn’t be SCALE without the attendees – registration for SCALE 19x ranges from an expo-only pass to an all-access SCALE Pass for the exhibit floor and speakers. To register, visit https://register.socallinuxexpo.org/reg6/

For more information, visit https://www.socallinuxexpo.org/scale/19x

Go to Full Article

More in Tux Machines

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install Go on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library, and it lets you build fast, reliable, and efficient software that scales. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. In this guide, we are going to learn how to install go1.19beta on Ubuntu 22.04. Go 1.19beta1 is not yet an official release, and there is still much documentation work in progress.

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project to help you test your Ansible roles. I’m using Molecule to automatically test the Ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn’t know, MongoDB is a high-performance, highly scalable document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, in MongoDB data is structured in JSON-like format inside records referred to as documents. MongoDB's open-source nature makes it an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9. You can follow the same instructions for CentOS and Rocky Linux.

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. It started as an alternative to WordPress and went on to become an alternative to Substack by focusing on memberships and newsletters. The creators of Ghost offer managed Pro hosting, but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux Handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of an Nginx reverse proxy, you can also use another piece of software called Traefik with Docker. It is a popular open-source cloud-native application proxy, API gateway, edge router, and more. I use Traefik to secure my websites using SSL certificates obtained from Let's Encrypt. Once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.

Red Hat Hires a Blind Software Engineer to Improve Accessibility on Linux Desktop

Accessibility is not one of the Linux desktop's strong points. However, GNOME, one of the best desktop environments, has managed to do comparatively better (I think). In a blog post, Christian Fredrik Schaller (Director for Desktop/Graphics, Red Hat) mentions that they are making serious efforts to improve accessibility, starting with Red Hat hiring Lukas Tyrychtr, a blind software engineer, to lead the effort to improve accessibility in Red Hat Enterprise Linux and Fedora Workstation. Read more

Today in Techrights

Android Leftovers