About Tux Machines

Thursday, 21 Feb 19 - Tux Machines is a community-driven public service/news site which has been around for over a decade and primarily focuses on GNU/Linux.

RadeonSI Primitive Culling Yields Mixed Benchmark Results

Filed under
Graphics/Benchmarks

Yesterday's patches introducing RadeonSI primitive culling via async compute yielded promising initial results, at least for the ParaView workstation application. I've been running some tests of this new functionality since yesterday and have some initial results to share on Polaris and Vega.

I've been running tests using Radeon RX 590 and RX Vega 64 graphics cards. Tests were run with the latest Mesa Git branch of Marek's that provides this primitive culling implementation. That Mesa version was built against LLVM 9.0 SVN, which is a requirement (that, or the very latest LLVM 8.0 release state); otherwise this functionality will not work. Additionally, it depends upon the AMDGPU DRM-Next material in the kernel, so I was running a fresh kernel build off Alex Deucher's latest code branch.

Read more

Ubuntu 18.04.2 LTS Released with Linux Kernel 4.18 from Ubuntu 18.10, More

Filed under
Ubuntu

Initially planned for release on February 7th, 2019, the Ubuntu 18.04.2 LTS operating system has been delayed by Canonical until Valentine's Day, February 14th, due to a bug in the Linux 4.18 kernel inherited from Ubuntu 18.10 (Cosmic Cuttlefish) causing boot failures with certain graphics chipsets.

The kernel regression was quickly addressed in the Linux 4.18 kernel package of both Ubuntu 18.10 and Ubuntu 18.04 LTS systems, so Canonical has now released Ubuntu 18.04.2 LTS (Bionic Beaver) with updated graphics and kernel stacks from Ubuntu 18.10 (Cosmic Cuttlefish), as well as all the latest security and software updates.

Read more

Also: Ubuntu 18.04.2 LTS Now Available With The New HWE Stack

The SheevaPlug NAS mini-PC is back with dual Cortex-A53 Sheeva64

Filed under
Ubuntu

Globalscale announced an $89 “Sheeva64” version of the old SheevaPlug NAS mini-PC that runs Ubuntu on Marvell’s dual-core Cortex-A53 Armada 3720 with 2x GbE, 3x USB, optional wireless, and a wall-power plug.

Globalscale Technologies has resurrected Marvell’s old open-spec SheevaPlug mini-PC NAS design, built around the same dual-core, Cortex-A53 Marvell Armada 3720 SoC it used in its circa-2016, Pico-ITX form-factor EspressoBin network switching SBC. The long-time Marvell partner has opened $89 pre-orders for the Ubuntu-powered Sheeva64, with shipments due in April.

Read more

SUSE and Red Hat Server Software

Filed under
Red Hat
Server
SUSE
  • SUSE OpenStack Cloud 9 Release Candidate 1 is here!
  • The New News on OpenShift 3.11

    Greetings fellow OpenShift enthusiasts! Not too long ago, Red Hat announced that OKD v3.11, the last release in the 3.x stream, is now generally available. The latest release of OpenShift enhances a number of current features that we know and love, and adds a number of interesting updates and technology previews for features that may or may not be included in OpenShift 4.0. Let’s take a look at one of the more exciting releases that may be part of The Great Updates coming in OpenShift 4.0.

  • Red Hat Satellite 6.4.2 has just been released

    Red Hat Satellite 6.4.2 is now generally available. The main drivers for the 6.4.2 release are upgrade and stability fixes. Eighteen bugs have been addressed in this release - the complete list is at the end of the post. The most notable addition is support for cloning in Satellite 6.4.

    Cloning allows you to copy your Satellite installation to another host to facilitate testing or upgrading the underlying operating system, for example when moving a Satellite installation from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. An overview of this feature is available on Red Hat’s Customer Portal.

KDE on Chakra and on Phones

Filed under
KDE

  • Chakra GNU/Linux Users Get KDE Plasma 5.15 Desktop and KDE Applications 18.12.2

    Users of the Chakra GNU/Linux distribution have received yet another batch of updates that bring them all the latest KDE technologies and security fixes.

    Less than a week after the previous update, which brought the KDE Plasma 5.14.5, KDE Frameworks 5.54.0, and KDE Applications 18.12.1 releases, Chakra GNU/Linux users can now install the recently released KDE Plasma 5.15 desktop environment, along with the KDE Frameworks 5.55.0 and KDE Applications 18.12.2 open-source software suites.

  • A mobile Plasma Sprint

    Last week I was in Berlin at the Plasma Mobile sprint, graciously hosted by Endocode, almost exactly 9 years after the first Plasma Mobile sprint, in which we first started to explore Plasma and other software by KDE on mobile phones, which at the time were just starting to become powerful enough to run a full Linux stack (hi, N900!).

    Now the project has gotten a big breath of fresh air: the thing that impressed me the most was how many new faces came to the sprint and are now part of the project.

    [...]

    As for the Plasma Mobile software itself, we did many bugfixes on the main shell/homescreen to make a better first impression, and a significant improvement came in KWin around high-DPI scaling when running on a Halium system.

    Also, many improvements were made in the Kirigami framework, which is the main toolkit recommended for building Plasma Mobile applications: since developers of several applications that use Kirigami were present, we could run very fast feedback and debugging sessions.

Governments Are Spending Billions on Software They Can Get with Freedom

Filed under
OSS

In the proprietary software world, when users buy software, they don’t usually buy the software itself; instead, they buy what’s known as an end-user license agreement (EULA). This EULA gives them the right to do only some specific things with that software. Usually, users are not allowed to copy, redistribute, share or modify the software, which is the main difference between proprietary software and free software (as in freedom).

It’s extremely annoying and sad that in the 21st century, governments all around the world are still paying millions of dollars for software each year. It’s even sadder because they are not paying for the software itself; they are paying for a license to use the software in a specific way, on a yearly basis. Now imagine a country with millions of machines: can you picture the amounts of money being spent worldwide just to keep those computers working?

More importantly, you don’t get the software. You just get a usage license that you must renew after a year. Free software, by contrast, gives you the four basic freedoms: the ability to read, modify, redistribute and use the software in any way you want.

Choosing to run proprietary software over free software, where alternatives do exist, is a deeply wrong decision that governments are making worldwide. By choosing proprietary software over free software, we are losing huge amounts of money that could instead be spent on health, education, public infrastructure or anything else in the country.

More importantly, the money that goes to pay for this software and its support could be invested in developing free alternatives instead. How about, instead of spending $50M per year on Microsoft Office EULAs because “LibreOffice is not good”, you invest $25M in LibreOffice itself, one time only, and see what happens? As a country, your technical infrastructure will develop if you become not just a user of free software, but a producer as well. And only free software allows you to do that.

During the period of our investigation, we checked the financial reports of many governments worldwide and their spending on IT. The amounts of money that governments are paying per year for proprietary software are enormous. And most of it isn’t actually for the licenses to use that software, but for the support.

It’s an issue because it’s not going to end. Governments cite problems and issues in migrating to free software, and in the meantime they keep paying millions and millions of dollars each year, and will continue to do so indefinitely; they have no plans to switch to free (as in freedom), locally-developed alternatives.

Let’s see some examples of how governments worldwide are spending their money on software.

Read more

Security: Updates, Thread Safety and Crypto Policies in Red Hat Enterprise Linux 8

Filed under
Security
  • Security updates for Thursday
  • Hacks.Mozilla.Org: Fearless Security: Thread Safety

    While concurrency allows programs to do more, faster, it comes with a set of synchronization problems, namely deadlocks and data races. From a security standpoint, why do we care about thread safety? Memory safety bugs and thread safety bugs have the same core problem: invalid resource use. Concurrency attacks can lead to similar consequences as memory attacks, including privilege escalation, arbitrary code execution (ACE), and bypassing security checks.

    Concurrency bugs, like implementation bugs, are closely related to program correctness. While memory vulnerabilities are nearly always dangerous, implementation/logic bugs don’t always indicate a security concern, unless they occur in the part of the code that deals with ensuring security contracts are upheld (e.g. allowing a security check bypass). However, while security problems stemming from logic errors often occur near the error in sequential code, concurrency bugs often happen in different functions from their corresponding vulnerability, making them difficult to trace and resolve. Another complication is the overlap between mishandling memory and concurrency flaws, which we see in data races.

    Programming languages have evolved different concurrency strategies to help developers manage both the performance and security challenges of multi-threaded applications.
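
    To make the "data race" category concrete, here is a minimal C sketch (illustrative, not from the article): two threads increment a shared counter, and the unsynchronized version loses updates because the read-modify-write is not atomic.

        /* build: cc -pthread race.c */
        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;   /* shared state */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *racy(void *arg) {
            for (int i = 0; i < 100000; i++)
                counter++;         /* data race: increment is not atomic */
            return NULL;
        }

        static void *safe(void *arg) {
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);   /* serialize access */
                counter++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, racy, NULL);  /* swap in `safe` to fix */
            pthread_create(&b, NULL, racy, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld\n", counter);    /* usually < 200000 */
            return 0;
        }

    Languages like Rust, which the article goes on to discuss, reject the racy version at compile time rather than leaving it to be found at debugging time.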

  • Consistent security by crypto policies in Red Hat Enterprise Linux 8

    Software development teams, whether open or closed source, are often composed of many groups that own individual components. Database applications typically come from a different team than HTTP or SSH services, and so on. Each group chooses libraries, languages, utilities, and cryptographic providers for their solution. Having specialized teams contributing to an application may improve the final product, but it often makes it challenging to enforce a consistent cryptographic policy on a system.
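
    On RHEL 8 this is exposed through the crypto-policies package; a brief sketch of the admin workflow (commands as documented for RHEL 8, shown here purely for illustration):

        # Show the active system-wide policy (DEFAULT, LEGACY, FUTURE or FIPS)
        update-crypto-policies --show
        # Tighten the whole system to the more conservative FUTURE policy
        update-crypto-policies --set FUTURE

    Applications that use the system's crypto libraries (OpenSSL, GnuTLS, NSS, and so on) pick up the policy without per-application configuration, which is the consistency the article describes.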

Android Things is now only for smart speakers and displays

Filed under
Android

Android Things will now focus solely on OEM-built smart speakers and displays. Google is discontinuing public access to i.MX8M, Snapdragon, and MediaTek based production modules for the OS.

Google announced it is scaling back Android Things as a general-purpose IoT platform. The Arm-based production boards from Innocomm, Intrinsyc, and MediaTek that Google was reselling to vendors with pre-loaded Android Things will no longer be publicly supported. Instead: “Given the successes we have seen with our partners in smart speakers and smart displays, we are refocusing Android Things as a platform for OEM partners to build devices in those categories moving forward,” wrote Dave Smith, Google’s Developer Advocate for IoT.

Read more

today's leftovers

Filed under
Misc
  • Futatabi video out

    After some delay, the video of my FOSDEM talk about Futatabi, my instant replay system, is out on YouTube. Actually, the official recording has been out for a while, but this is a special edit; nearly all of the computer content has been replaced with clean 720p59.94 versions.

  • Meeks of The Document Foundation

    Valentine's day release of Collabora Online 4.0 with an associated CODE update too. Tons of rather excellent work from the team there - it's a privilege to be able to work with them, and to fund almost all of that at Collabora. Then again - if you'd like to help out with both the funding and the directing of the next round of feature work, we'd really appreciate you as a partner or customer.

  • RIP Dr. Bernard L. Peuto, Porting Android 9 Pie Go Stack to Rpi 3, LibreOffice v6.2 Coming Soon, Red Hat Virtualization Platform 4.3 Beta Released, Deepin Desktop Environment

    LibreOffice version 6.2 is right around the corner, and the killer feature it will be sporting is a new tabbed layout for the menu items, making it similar to the competing Microsoft Office suite.

  • Snapd flaw gives attackers root access on Linux systems
  • Dimitri John Ledkov: Encrypt all the things

    Went into blogger settings and enabled TLS on my custom domain blogger blog. So it is now finally https://blog.surgut.co.uk. However, I do use feedburner and syndicate that to the planet. I am not sure that this makes for end-to-end TLS connections, thus I will look into removing feedburner between my blog and the ubuntu/debian planets. My experience with changing feeds in the planets is that I end up spamming everyone. I wonder if I should make a new tag and add that one, and add both feeds to the planet config, to avoid spamming old posts.

    Next up, I went into the gandi LiveDNS platform and enabled DNSSEC on my domain. It propagated quite quickly, and I believe my domain is now correctly signed with DNSSEC. Next up, I guess, is to fix DNSSEC with captive portals. What we really want on "wifi"-type devices is to first connect to wifi without setting it as the default route, perform the captive portal check, potentially with reduced DNS server capabilities (i.e. no EDNS, no DNSSEC, etc.), and only route traffic to the captive portal to authenticate. Once past the captive portal, test and upgrade connectivity to have DNSSEC on. In the cloud, and on wired connections, I'd expect that DNSSEC should just work, and if it does we should be enforcing DNSSEC validation by default.

    So I'll start enforcing DNSSEC on my laptop, I think, and will start reporting issues to all of the UK banks if they dare not to have DNSSEC. If I managed to do it on my own domain, so should they!
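
    On a typical systemd-based laptop, one way to enforce validation is through systemd-resolved (a hedged sketch; it assumes systemd-resolved is the local resolver, which may not match the author's actual setup):

        # /etc/systemd/resolved.conf
        [Resolve]
        # "yes" enforces validation: lookups for domains that fail DNSSEC
        # validation are rejected. The default "allow-downgrade" mode is
        # more forgiving but vulnerable to downgrade attacks.
        DNSSEC=yes

    After editing, restart the service with "systemctl restart systemd-resolved"; "resolvectl query" then reports whether a given response was authenticated.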

  • Cameron Kaiser: So long, Opportunity rover

    Both Opportunity and Spirit were powered by the 20MHz BAE RAD6000, a radiation-hardened version of the original IBM POWER1 RISC Single Chip CPU and the indirect ancestor of the PowerPC 601. Many PowerPC-based spacecraft are still in operation, both with the original RAD6000 and its successor the RAD750, a radiation-hardened version of the G3.

  • Six Hallmarks Of Successful Crowdfunding Campaigns

    While crowdfunding can be a good way to raise funds, it is risky. Here are six hallmarks of successful projects which you can use to hotrod your own campaigns.

    forbes.com

  • Welcoming a new Firefox/Toolkit peer

    Please join me in welcoming Bianca Danforth to the set of peers blessed with reviewing patches to Firefox and Toolkit. She's been doing great work making it easy to test experiment extensions, so it's time for her to level up.

Audiocasts: BSD Strategy, FLOSS Weekly, Linux in the Ham Shack

Filed under
Interviews
  • BSD Strategy | BSD Now 285

    Strategic thinking to keep FreeBSD relevant, reflecting on the soul of a new machine, 10GbE Benchmarks On Nine Linux Distros and FreeBSD, NetBSD integrating LLVM sanitizers in base, FreeNAS 11.2 distrowatch review, and more.

  • FLOSS Weekly 517: Liverpool MakeFest

    Caroline is the co-founder of the free event called Liverpool Makefest, a festival to promote STEM, FOSS and maker education for young people. The festival, now in its fifth year, has attracted over 20,000 visitors and is being expanded across national libraries within the UK.

  • LHS Episode #271: The Discord Accord

    Welcome to Episode 271 of Linux in the Ham Shack. In this week's episode, the hosts discuss ARISS Phase 2, the Peanut Android app for D-STAR and DMR linking, a geostationary satellite from Qatar, open source software in the public sector, a new open-source color management tool, Linux distributions for ham radio and much more. Thank you to everyone for listening and don't forget our Hamvention 2019 fundraiser!

MakuluLinux 2019.01.25, Netrunner 19.01 and Virtual Desktops

Filed under
GNU
Linux
  • MakuluLinux 2019.01.25 overview
  • Netrunner 19.01 Core Run Through

    In this video we look at Netrunner 19.01 Core. Enjoy!

  • Google Chrome is getting virtual desktops (probably)

    If you’re the sort of person who regularly runs a bunch of programs on your computer at once, you may already be a fan of using multiple monitors. You can put one set of apps on one screen and a different set on another and tilt your head a bit to switch your focus from one to the other.

    But if you have a laptop, you’re probably confined to using a single screen from time to time (unless you have a portable monitor that you take everywhere you go).

    Enter virtual desktops. Most modern operating systems offer a way to create multiple virtual workspaces that you can flip between. It’s not quite as seamless as using multiple displays, but it’s certainly more compact (and more energy efficient, for that matter).

Server: UNIX, Server Virtualization, Red Hat and Fedora, Networking and PostgreSQL

Filed under
Server
  • The long, slow death of commercial Unix [Ed: Microsoft propagandist Andy Patrizio should also do an article about the death of Windows Server.]

    In the 1990s and well into the 2000s, if you had mission-critical applications that required zero downtime, resiliency, failover and high performance, but didn’t want a mainframe, Unix was your go-to solution.

    If your database, ERP, HR, payroll, accounting, and other line-of-business apps weren’t run on a mainframe, chances are they ran on Unix systems from four dominant vendors: Sun Microsystems, HP, IBM and SGI. Each had its own flavor of Unix and its own custom RISC processor. Servers running an x86 chip were at best used for file and print or maybe low-end departmental servers.

  • What is Server Virtualization: Is It Right For Your Business?

    In the modern world of IT application deployment, server virtualization is a commonly used term. But what exactly is server virtualization and is it right for your business?

    Server virtualization in 2019 is a more complicated and involved topic than it was when the concept first started to become a popular approach nearly two decades ago. However, the core basic concepts and promises remain the same.

  • Transitioning Red Hat SSO to a highly-available hybrid cloud deployment

    About two years ago, Red Hat IT finished migrating our customer-facing authentication system to Red Hat Single Sign-On (Red Hat SSO). As a result, we were quite pleased with the performance and flexibility of the new platform. Due to some architectural decisions that were made in order to optimize for uptime using the technologies at our disposal, we were unable to take full advantage of Red Hat SSO’s robust feature set until now. This article describes how we’re now addressing database and session replication between global sites.

  • Red Hat named to Fortune’s 100 Best Companies to Work For list

    People come to work at Red Hat for our brand, but they stay for the people and the culture. It's integral to our success as an organization. It's what makes the experience of being a Red Hatter and working with other Red Hatters different. And it's what makes us so passionate about our customers’ and Red Hat’s success. In recognition of that, Red Hat has been ranked No. 50 on Fortune Magazine's list of 100 Best Companies to Work For! Hats off--red fedoras, of course--to all Red Hatters!

  • News from Fedora Infrastructure

    One of the first tasks we have achieved is to move as many of the applications we maintain as possible to CentOS CI for our Continuous Integration pipeline. CentOS CI provides us with a Jenkins instance that runs in an OpenShift cluster; you can have a look at this instance here.

    Since a good majority of our applications are developed in Python, we agreed on using tox to execute our CI tests. Adopting tox in our applications gives us a really convenient way to configure the CI pipeline in Jenkins. In fact, we only needed to create a .cico.pipeline file in the application repository with the following.
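
    The pipeline file itself is not reproduced in this excerpt, but the tox side is conventional; a minimal illustrative tox.ini (a sketch, not Fedora Infrastructure's actual configuration) looks like this:

        [tox]
        envlist = py37,lint

        [testenv]
        deps = pytest
        commands = pytest tests/

        [testenv:lint]
        deps = flake8
        commands = flake8 myapp/

    Jenkins then only needs to invoke "tox" for each environment, which is the convenience the post describes.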

  • Mirantis to Help Build AT&T's Edge Computing Network for 5G On Open Source

    The two companies hope other telcos will follow AT&T's lead in building their 5G networks on open source software.

  • The Telecom Industry Has Moved to Open Source

    The telecom industry is at the heart of the fourth industrial revolution. Whether it’s connected IoT devices or mobile entertainment, the modern economy runs on the Internet.

    However, the backbone of networking has been running on legacy technologies. Some telecom companies are centuries old, and they have a massive infrastructure that needs to be modernized.

    The great news is that this industry is already at the forefront of emerging technologies. Companies such as AT&T, Verizon, China Mobile, DTK, and others have embraced open source technologies to move faster into the future. And LF Networking is at the heart of this transformation.

    “2018 has been a fantastic year,” said Arpit Joshipura, General Manager of Networking at Linux Foundation, speaking at Open Source Summit in Vancouver last fall. “We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking.”

  • Monroe Electronics Releases Completely Redesigned HALO Version 2.0

    With improvements including a new web-based interface and its shift to a unified web-server platform, HALO V2.0 simplifies and streamlines all of these critical processes. The new web-based interface for HALO V2.0 allows users to work with their preferred web browser (e.g., Chrome, Firefox, Safari). The central HALO server now runs on a Linux OS (Ubuntu and CentOS 7) using a PostgreSQL database.

  • PostgreSQL 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21 released

    The PostgreSQL project has put out updated releases for all supported versions. "This release changes the behavior in how PostgreSQL interfaces with 'fsync()' and includes fixes for partitioning and over 70 other bugs that were reported over the past three months."

5 Gorgeous Examples Of Truly Customized Linux Desktops

Filed under
Linux

Using Linux is anything but boring, especially when it comes to personalizing your OS. That extends way beyond just the ability to install multiple Desktop Environments like Budgie, Pantheon and KDE Plasma. Sure I've tinkered with them, tweaked the appearance a bit, installed some cool desktop widgets. But nothing prepared me for my first trip to /r/unixporn.

I repeatedly insist that Linux makes your PC feel personal again, but the level of customization and pure creative beauty on display below left my jaw on the floor, and me with a desire to learn how to accomplish what's been done here.

Join me in a brief but drool-worthy tour of some truly unique Linux desktops.

Read more

Linux Kernel: Rusty Russell and More

Filed under
Linux
  • Rusty's reminiscences

    Rusty Russell was one of the first developers paid to work on the Linux kernel and the founder of the conference now known as linux.conf.au (LCA); he is one of the most highly respected figures in the Australian free-software community. The 2019 LCA was the 20th edition of this long-lived event; the organizers felt that it was an appropriate time to invite Russell to deliver the closing keynote talk. He used the opportunity to review his path into free software and the creation of LCA, but first a change of clothing was required.

    [...]

    He found his way into the Unix world in 1992, working on an X terminal connected to a SunOS server. SunOS was becoming the dominant Unix variant at that time, and there were a number of "legendary hackers" working at Sun to make that happen. But then Russell discovered another, different operating system: Emacs. This system was unique in that it was packaged with a manifesto describing a different way to create software. The idea of writing an entire operating system and giving it away for free seemed fantastical at the time, but the existence of Emacs meant that it couldn't be dismissed.

    Even so, he took the normal path for a few more years, working on other, proprietary Unix systems; toward the end he ended up leading a research project developed in C++. The proprietary compilers were too expensive, so he was naturally using GCC instead. He did some digging in preparation for this talk and found his first free-software contribution, which was a patch to GCC in 1995. The experience of collaborating to build better software for everybody was exhilarating, but even with as much fun as he was having there was another level to aim for.

  • Fixing page-cache side channels, second attempt

    The kernel's page cache, which holds copies of data stored in filesystems, is crucial to the performance of the system as a whole. But, as has recently been demonstrated, it can also be exploited to learn about what other users in the system are doing and extract information that should be kept secret. In January, the behavior of the mincore() system call was changed in an attempt to close this vulnerability, but that solution was shown to break existing applications while not fully solving the problem. A better solution will have to wait for the 5.1 development cycle, but the shape of the proposed changes has started to come into focus.

    The mincore() change for 5.0 caused this system call to report only the pages that are mapped into the calling process's address space rather than all pages currently resident in the page cache. That change does indeed take away the ability for an attacker to nondestructively test whether specific pages are present in the cache (using mincore() at least), but it also turned out to break some user-space applications that legitimately needed to know about all of the resident pages. The kernel community is unwilling to accept such regressions unless there is absolutely no other solution, so this change could not remain; it was thus duly reverted for 5.0-rc4.

    Regressions are against the community's policy, but so is allowing known security holes to remain open. A replacement for the mincore() change is thus needed; it can probably be found in this patch set posted by Vlastimil Babka at the end of January. It applies a new test to determine whether mincore() will report on the presence of pages in the page cache; in particular, it will only provide that information for memory regions that (1) are anonymous memory, or (2) are backed by a file that the calling process would be allowed to open for write access. In the first case, anonymous mappings should not be shared across security boundaries, so there should be no need to protect information about page-cache residency. For the second case, the ability to write a given file would give an attacker the ability to create all kinds of mischief, of which learning about which pages are cached is relatively minor.
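
    For context, this is roughly what a page-cache residency probe looks like from user space (a minimal sketch using the documented mincore() interface; on kernels carrying the proposed restrictions, it only reports useful data under the conditions described above):

        #include <stdio.h>
        #include <stdlib.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <sys/stat.h>

        int main(int argc, char **argv)
        {
            if (argc < 2)
                return 1;
            int fd = open(argv[1], O_RDONLY);
            struct stat st;
            if (fd < 0 || fstat(fd, &st) < 0)
                return 1;

            /* Mapping the file does not by itself fault pages in. */
            void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (map == MAP_FAILED)
                return 1;

            long psz = sysconf(_SC_PAGESIZE);
            size_t pages = (st.st_size + psz - 1) / psz;
            unsigned char *vec = malloc(pages);

            /* Ask the kernel which of the mapped pages are resident. */
            if (vec && mincore(map, st.st_size, vec) == 0) {
                size_t resident = 0;
                for (size_t i = 0; i < pages; i++)
                    resident += vec[i] & 1;
                printf("%zu of %zu pages resident\n", resident, pages);
            }
            free(vec);
            munmap(map, st.st_size);
            close(fd);
            return 0;
        }

    An attacker repeats a probe like this to learn whether another process has touched particular file pages, which is the side channel the patches aim to close.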

  • Linux Kernel Getting io_uring To Deliver Fast & Efficient I/O

    The Linux kernel is getting a new ring for Valentine's Day... io_uring. The purpose of io_uring is to deliver faster and more efficient I/O operations on Linux and should be coming with the next kernel cycle. 

    Linux block maintainer and developer behind io_uring, Jens Axboe of Facebook, queued the new interface overnight into the linux-block for-next branch on Git. The io_uring interface provides submission and completion queue rings that are shared between the application and kernel to avoid excess copies. The new interface has just two new system calls (io_uring_setup and io_uring_enter) for dealing with I/O. Axboe previously worked on this code under the "aioring" name.
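
    As a sense of scale for the new interface, creating a ring takes only the one setup call (a minimal sketch; it assumes a kernel and headers that ship io_uring, i.e. <linux/io_uring.h> and the __NR_io_uring_setup syscall number):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <linux/io_uring.h>

        int main(void)
        {
            struct io_uring_params params;
            memset(&params, 0, sizeof(params));

            /* Ask the kernel for submission/completion rings of 8 entries. */
            int ring_fd = syscall(__NR_io_uring_setup, 8, &params);
            if (ring_fd < 0) {
                perror("io_uring_setup");
                return 1;
            }
            printf("ring fd %d: %u SQ entries, %u CQ entries\n",
                   ring_fd, params.sq_entries, params.cq_entries);

            /* A real program would now mmap() the rings and submit I/O
               with io_uring_enter(); the shared rings are what avoid the
               excess copies mentioned above. */
            close(ring_fd);
            return 0;
        }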

  • AMDGPU DC Gets Fixes For Seamless Boot, Disappearing Cursor On Raven Ridge

    Should you be running into any display problems or just want to help in testing out the open-source AMD Linux driver's display code, a new round of patches were published today.

Ethical Hacking, Ubuntu-Based BackBox Linux OS Is Now Available on AWS

Filed under
OS
Ubuntu

If you want to run BackBox Linux in the cloud, on your AWS account, you should know that the ethical hacking operating system is now available on the Amazon Web Services cloud platform as an Amazon Machine Image (AMI) virtual appliance that you can install with a few mouse clicks.

The BackBox Linux operating system promises to offer Amazon Web Services users an optimal environment for professional penetration testing operations as it puts together a collection of some of the best ethical hacking tools, which are already configured and ready for production use.

Read more

KDE neon Systems Based on Ubuntu 16.04 LTS Have Reached End of Life, Upgrade Now

With the rebase of KDE neon on Ubuntu 18.04 LTS (Bionic Beaver) in September 2018, the development team has decided it's time to put the old series based on Ubuntu 16.04 LTS (Xenial Xerus) to rest once and for all, as most users have already managed to upgrade their systems to the new KDE neon series based on Canonical's latest Ubuntu LTS release.

"KDE neon was rebased onto Ubuntu bionic/18.04 last year and upgrades have gone generally smooth. We have removed xenial/16.04 build from our machines (they only hang around for as long as they did because it took a while to move the Snap builds away from them) and the apt repo will remove soon," said the devs.

Read more

Benchmarking The Python Optimizations Of Clear Linux Against Ubuntu, Intel Python

Filed under
Graphics/Benchmarks

Stemming from Clear Linux detailing how they optimize Python's performance using various techniques, there's been reader interest in seeing just how their Python build stacks up. Here's a look at the Clear Linux Python performance compared to a few other configurations as well as Ubuntu Linux.

For this quick Python benchmarking roundabout, the following configurations were tested while using an Intel Core i9 7980XE system throughout:

- Clear Linux's default Python build with the performance optimizations they recently outlined for how they ship their Python binary.

Read more

More in Tux Machines

Games: Surviving Mars and OpenMW

Kernel and Security: BPF, Mesa, Embedded World, Kernel Address Sanitizer and More

  • Concurrency management in BPF

    In the beginning, programs run on the in-kernel BPF virtual machine had no persistent internal state and no data that was shared with any other part of the system. The arrival of eBPF and, in particular, its maps functionality, has changed that situation, though, since a map can be shared between two or more BPF programs as well as with processes running in user space. That sharing naturally leads to concurrency problems, so the BPF developers have found themselves needing to add primitives to manage concurrency (the "exchange and add" or XADD instruction, for example). The next step is the addition of a spinlock mechanism to protect data structures, which has also led to some wider discussions on what the BPF memory model should look like.

    A BPF map can be thought of as a sort of array or hash-table data structure. The actual data stored in a map can be of an arbitrary type, including structures. If a complex structure is read from a map while it is being modified, the result may be internally inconsistent, with surprising (and probably unwelcome) results. In an attempt to prevent such problems, Alexei Starovoitov introduced BPF spinlocks in mid-January; after a number of quick review cycles, version 7 of the patch set was applied on February 1. If all goes well, this feature will be included in the 5.1 kernel.
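
    A sketch of what the spinlock primitive looks like in BPF program source (illustrative only; it assumes clang and a recent libbpf, and the map and value names are made up):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* Map value guarded by a lock so readers never see a
           half-updated pair of fields. */
        struct counter_val {
            struct bpf_spin_lock lock;
            long packets;
            long bytes;
        };

        struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, struct counter_val);
        } counters SEC(".maps");

        SEC("classifier")
        int count_packets(struct __sk_buff *skb)
        {
            __u32 key = 0;
            struct counter_val *val = bpf_map_lookup_elem(&counters, &key);

            if (!val)
                return 0;
            bpf_spin_lock(&val->lock);    /* verifier-checked lock */
            val->packets += 1;
            val->bytes += skb->len;
            bpf_spin_unlock(&val->lock);  /* must unlock before returning */
            return 0;
        }

        char _license[] SEC("license") = "GPL";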

  • Intel Ready To Add Their Experimental "Iris" Gallium3D Driver To Mesa

    For just over the past year Intel open-source driver developers have been developing a new Gallium3D-based OpenGL driver for Linux systems as the eventual replacement to their long-standing "i965 classic" Mesa driver. The Intel developers are now confident enough in the state of this new driver dubbed Iris that they are looking to merge the driver into mainline Mesa proper.

    The Iris Gallium3D driver has now matured enough that Kenneth Graunke, the Intel OTC developer who originally started Iris in late 2017, is looking to merge the driver into the mainline code-base of Mesa. The driver isn't yet complete but it's already in good enough shape that he's looking for it to be merged, albeit marked experimental.

  • Hallo Nürnberg!

    Collabora is headed to Nuremberg, Germany next week to take part in the 2019 edition of Embedded World, "the leading international fair for embedded systems". Following a successful first attendance in 2018, we are very much looking forward to our second visit! If you are planning on attending, please come say hello in Hall 4, booth 4-280!

    This year, we will be showcasing a state-of-the-art infrastructure for end-to-end, embedded software production. From the birth of a software platform, to reproducible continuous builds, to automated testing on hardware, get a firsthand look at our platform building expertise and see how we use continuous integration to increase productivity and quality control in embedded Linux.

  • KASAN Spots Another Kernel Vulnerability From Early Linux 2.6 Through 4.20

    The Kernel Address Sanitizer (KASAN), which detects dynamic memory errors within the Linux kernel code, has just picked up another win by uncovering a use-after-free vulnerability that's been around since the early Linux 2.6 kernels.

    KASAN (along with the other sanitizers) has already proven quite valuable in spotting various coding mistakes, hopefully before they are exploited in the real world. The Kernel Address Sanitizer picked up another feather in its hat by being responsible for the CVE-2019-8912 discovery.
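
    KASAN is a build-time option; a kernel .config fragment along these lines enables it (illustrative):

        CONFIG_KASAN=y
        # Inline instrumentation is faster but produces a larger kernel;
        # the outline alternative trades speed for size.
        CONFIG_KASAN_INLINE=y

    The resulting kernel reports invalid accesses, such as the use-after-free behind CVE-2019-8912, with a stack trace at the point of the bad access.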

  • io_uring, SCM_RIGHTS, and reference-count cycles

    The io_uring mechanism that was described here in January has been through a number of revisions since then; those changes have generally been fixing implementation issues rather than changing the user-space API. In particular, this patch set seems to have received more than the usual amount of security-related review, which can only be a good thing. Security concerns became a bit of an obstacle for io_uring, though, when virtual filesystem (VFS) maintainer Al Viro threatened to veto the merging of the whole thing. It turns out that there were some reference-counting issues that required his unique experience to straighten out.

    The VFS layer is a complicated beast; it must manage the complexities of the filesystem namespace in a way that provides the highest possible performance while maintaining security and correctness. Achieving that requires making use of almost all of the locking and concurrency-management mechanisms that the kernel offers, plus a couple more implemented internally. It is fair to say that the number of kernel developers who thoroughly understand how it works is extremely small; indeed, sometimes it seems like Viro is the only one with the full picture.

    In keeping with time-honored kernel tradition, little of this complexity is documented, so when Viro gets a moment to write down how some of it works, it's worth paying attention. In a long "brain dump", Viro described how file reference counts are managed, how reference-count cycles can come about, and what the kernel does to break them. For those with the time to beat their brains against it for a while, Viro's explanation (along with a few corrections) is well worth reading. For the rest of us, a lighter version follows.

Blacklisting insecure filesystems in openSUSE

The Linux kernel supports a wide variety of filesystem types, many of which have not seen significant use — or maintenance — in many years. Developers in the openSUSE project have concluded that many of these filesystem types are, at this point, more useful to attackers than to openSUSE users and are proposing to blacklist many of them by default. Such changes can be controversial, but it's probably still fair to say that few people expected the massive discussion that resulted, covering everything from the number of OS/2 users to how openSUSE fits into the distribution marketplace.

On January 30, Martin Wilck started the discussion with a proposal to add a blacklist preventing the automatic loading of a set of kernel modules implementing (mostly) old filesystems. These include filesystems like JFS, Minix, cramfs, AFFS, and F2FS. For most of these, the logic is that the filesystems are essentially unused and the modules implementing them have seen little maintenance in recent decades. But those modules can still be automatically loaded if a user inserts a removable drive containing one of those filesystem types. There are a number of fuzz-testing efforts underway in the kernel community, but it seems relatively unlikely that any of them are targeting, say, FreeVxFS filesystem images. So it is not unreasonable to suspect that there just might be exploitable bugs in those modules. Preventing modules for ancient, unmaintained filesystems from automatically loading may thus protect some users against flash-drive attacks.

If there were to be a fight over a proposal like this, one would ordinarily expect it to be concerned with the specific list of unwelcome modules. But there was relatively little of that. One possible exception is F2FS, the presence of which raised some eyebrows since it is under active development, having received 44 changes in the 5.0 development cycle, for example.

Interestingly, it turns out that openSUSE stopped shipping F2FS in September. While the filesystem is being actively developed, it seems that, with rare exceptions, nobody is actively backporting fixes, and the filesystem also lacks a mechanism to prevent an old F2FS implementation from being confused by a filesystem created by a newer version. Rather than deal with these issues, openSUSE decided to just drop the filesystem altogether. As it happens, the blacklist proposal looks likely to allow F2FS to return to the distribution since it can be blacklisted by default.

Read more
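
A minimal sketch of the mechanism under discussion, using the standard modprobe.d configuration (the file name and module list here are illustrative, not openSUSE's actual proposal):

    # /etc/modprobe.d/50-blacklist-legacy-fs.conf (hypothetical)
    # "blacklist" stops alias-based autoloading, e.g. when a removable
    # drive carrying one of these filesystems is plugged in and mounted.
    blacklist jfs
    blacklist minix
    blacklist affs
    # Pairing a module with an "install" override also blocks explicit loading.
    install cramfs /bin/false

A root user can still load a module deliberately by removing the override, which is why this is framed as a default rather than a hard removal.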

gitgeist: a git-based social network proof of concept

Are you tired of not owning the data or the platform you use for social postings? I know I am.

It's hard to say when I "first" used a social network. I've been on email for about 30 years and one of the early ad-hoc forms of social networks was chain email. Over the years I was asked to join all sorts of "social" things such as IRC, ICQ, Skype, MSN Messenger, etc. and eventually things like Orkut, MySpace, Facebook, etc. I'll readily admit that I'm not the type of person that happily jumps onto every new social bandwagon that appears on the Internet. I often prefer preserving the quietness of my own thoughts. That, though, hasn't stopped me from finding some meaningfulness participating in Twitter, Facebook, LinkedIn and more recently Google+.

Twitter was in fact the first social network that I truly embraced. And it would've remained my primary social network had they not killed their own community by culling the swell of independently-developed Twitter clients that existed. That and their increased control of their API effectively made me look for something else. Right around that time Google+ was being introduced and many in the open source community started participating in that, in some ways to find a fresh place where techies can aggregate away from the noise and sometimes over-the-top nature of Facebook. Eventually I took to that too and started using G+ as my primary social network. That is, until Google recently decided to pull the plug on G+.

While Google+ might not have represented a success for Google, it had become a good place for sharing information among the technically-inclined. As such, I found it quite useful for learning and hearing about new things in my field. Soon-to-be-former users of G+ have gone in all sorts of directions. Some have adopted a "c'mon guys, get over it, Facebook is the spot" attitude, others have adopted things like Mastodon, others have fallen back to their existing IDs on Twitter, and yet others, like me, are still looking.

Read more