
About Tux Machines

Monday, 20 Aug 18 - Tux Machines is a community-driven public service/news site which has been around for over a decade and primarily focuses on GNU/Linux.

Debian GNU/Linux 9 "Stretch" Receives L1 Terminal Fault Mitigations, Update Now

Filed under
Debian

According to the security advisory published on Monday, the new kernel security update addresses both the CVE-2018-3620 and CVE-2018-3646 vulnerabilities, which are known as L1 Terminal Fault (L1TF) or Foreshadow. These vulnerabilities affect bare-metal systems as well as virtualized operating systems, allowing a local attacker to expose sensitive information from the host OS or other guests.

"Multiple researchers have discovered a vulnerability in the way the Intel processor designs have implemented speculative execution of instructions in combination with handling of page-faults. This flaw could allow an attacker controlling an unprivileged process to read memory from arbitrary (non-user controlled) addresses," reads today's security advisory.
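After installing the updated kernel and rebooting, you can confirm that the kernel now reports an L1TF mitigation via sysfs. A minimal sketch (the sysfs path is standard on kernels that report speculative-execution mitigations; the helper function is ours, not part of the advisory):

```python
from pathlib import Path

def l1tf_status(text: str) -> str:
    """Classify the contents of the l1tf sysfs file."""
    if text.startswith("Mitigation"):
        return "mitigated"
    if text.startswith("Vulnerable"):
        return "vulnerable"
    if text.startswith("Not affected"):
        return "not affected"
    return "unknown"

# Present on kernels that expose speculative-execution mitigation status.
path = Path("/sys/devices/system/cpu/vulnerabilities/l1tf")
if path.exists():
    print("L1TF:", l1tf_status(path.read_text().strip()))
```

On a patched Debian 9 system the file typically begins with "Mitigation: PTE Inversion".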

Read more

Rugged, sandwich-style Sitara SBC has optimized Linux stack

Filed under
Linux

Forlinx’s sandwich-style, industrial temp “OK5718-C” SBC runs Linux on a “FET5718-C” module with a Cortex-A15 based TI AM5718 SoC. Other features include SATA, HDMI, MIPI-CSI, USB 3.0, CAN, and mini-PCIe.

Forlinx Embedded Technology, the Chinese company behind Linux-friendly SBCs such as the TI Sitara AM3354 based OK335xS-II and the Forlinx i.MX6 SBC, has posted details on a new OK5718-C SBC. Like the OK335xS-II, it’s a Sitara based board, in this case tapping TI’s single-core, Cortex-A15 based Sitara AM5718. Like the i.MX6 SBC, it’s a sandwich-style offering, with the separately available FET5718-C module hosting the up to 1.5GHz AM5718.

Read more

RISC-V and NVIDIA

Filed under
Graphics/Benchmarks
Hardware
  • Open-Source RISC-V-Based SoC Platform Enlists Deep Learning Accelerator

    SiFive introduces what it’s calling the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology.

    A demo shown at the Hot Chips conference consists of NVDLA running on an FPGA connected via ChipLink to SiFive's HiFive Unleashed board powered by the Freedom U540, the first Linux-capable RISC-V processor. The complete SiFive implementation is suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive's silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

  • SiFive Announces First Open-Source RISC-V-Based SoC Platform With NVIDIA Deep Learning Accelerator Technology

    SiFive, the leading provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology.

    The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive's HiFive Unleashed board powered by the Freedom U540, the world's first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive's silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

  • SiFive Announces Open-Source RISC-V-Based SoC Platform with Nvidia Deep Learning Accelerator Technology

    SiFive, a leading provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA’s Deep Learning Accelerator (NVDLA) technology.

    The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive’s HiFive Unleashed board powered by the Freedom U540, the world’s first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive’s silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

  • NVIDIA Unveils The GeForce RTX 20 Series, Linux Benchmarks Should Be Coming

    NVIDIA CEO Jensen Huang has just announced the GeForce RTX 2080 series from his keynote ahead of Gamescom 2018 this week in Cologne, Germany.

  • NVIDIA have officially announced the GeForce RTX 2000 series of GPUs, launching September

    The GPU race continues once again, as NVIDIA have now officially announced the GeForce RTX 2000 series of GPUs and they're launching in September.

    This new series will be based on their Turing architecture and their RTX platform. These new RT Cores will "enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination," which sounds rather fun.

today's leftovers

Filed under
Misc

GNOME Shell, Mutter, and Ubuntu's GNOME Theme

Filed under
GNOME
Ubuntu

Benchmarks on GNU/Linux

Filed under
GNU
Graphics/Benchmarks
Linux
  • Linux vs. Windows Benchmark: Threadripper 2990WX vs. Core i9-7980XE Tested

    The last chess benchmark we’re going to look at is Crafty, and again we’re measuring performance in nodes per second. Interestingly, the Core i9-7980XE wins out here and saw the biggest performance uplift when moving to Linux: a 5% performance increase was seen, as opposed to just 3% for the 2990WX, and this made the Intel CPU 12% faster overall.

  • Which is faster, rsync or rdiff-backup?

    As our data grows (and some filesystems balloon to over 800GB, with many small files) we have started seeing our nighttime backups continue through the morning, causing serious disk i/o problems as our users wake up and regular usage rises.

    For years we have implemented a conservative backup policy - each server runs the backup twice: once via rdiff-backup to the onsite server with 10 days of increments kept. A second is an rsync to our offsite backup servers for disaster recovery.

    Simple, I thought. I will change the rdiff-backup to the onsite server to use the ultra fast and simple rsync. Then, I'll use borgbackup to create an incremental backup from the onsite backup server to our offsite backup servers. Piece of cake. And with each server only running one backup instead of two, they should complete in record time.

    Except, somehow the rsync backup to the onsite backup server was taking almost as long as the original rdiff-backup to the onsite server and rsync backup to the offsite server combined. What? I thought nothing was faster than the awesome simplicity of rsync, especially compared to the ancient Python-based rdiff-backup, which hasn't had an upstream release since 2009.
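The dual-backup policy described above amounts to two commands per server each night. A sketch of what those invocations look like; the host names and destination paths below are hypothetical stand-ins, since the article does not give the real ones:

```python
# Hypothetical hosts and paths -- stand-ins for the article's real infrastructure.
ONSITE = "backup1.example.org"
OFFSITE = "backup2.example.org"
DATA = "/srv/data"

def onsite_cmd(host: str, data: str) -> list[str]:
    # rdiff-backup keeps reverse increments; a separate
    # "rdiff-backup --remove-older-than 10D <dest>" run prunes to 10 days.
    return ["rdiff-backup", data, f"{host}::/backups/www"]

def offsite_cmd(host: str, data: str) -> list[str]:
    # Plain rsync mirror for disaster recovery (no history kept).
    return ["rsync", "-a", "--delete", data + "/", f"{host}:/backups/www/"]

for cmd in (onsite_cmd(ONSITE, DATA), offsite_cmd(OFFSITE, DATA)):
    print(" ".join(cmd))  # swap print for subprocess.run(cmd, check=True) to execute
```

Printing the commands rather than running them keeps the sketch safe to try anywhere; in a real cron job each would be executed directly.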

OSS Leftovers

Filed under
OSS
  • Haiku: R1/beta1 release plans - at last

    At last, R1/beta1 is nearly upon us. As I’ve already explained on the mailing list, only two non-“task” issues remain in the beta1 milestone, and I have prototype solutions for both. The buildbot and other major services have been rehabilitated and will need only minor tweaking to handle the new branch, and mmlr has been massaging the HaikuPorter buildmaster so that it, too, can handle the new branch, though that work is not quite finished yet.

  • Haiku OS R1 Beta Is Finally Happening In September

    It's been five years since the last alpha of Haiku OS's inaugural "R1" release, but next month it looks like the first beta will be released, sixteen years after this BeOS-inspired open-source operating system started development.

  • IBM Scores More POWER Open-Source Performance Optimizations

    Following our POWER9 Linux benchmarks earlier this year, IBM POWER engineers have continued exploring various areas for optimization within the interesting open-source workloads tested. Another batch of optimizations are pending for various projects.

  • DevConf.in 2018

    Earlier this month, I attended the DevConf.in 2018 conference in Bengaluru, KA, India. It was sort of a culmination of a cohesive team play that began for me at DevConf.cz 2018 in Brno, CZ. I say sort of because the team is already gearing up for DevConf.in 2019.

  • The Unitary Fund: a no-strings attached grant program for Open Source quantum computing

    Quantum computing has the potential to be a revolutionary technology, from the first applications in cryptography and database search to more modern quantum applications across simulation, optimization, and machine learning. This promise has led industrial, government, and academic efforts in quantum computing to grow globally. Posted jobs in the field have grown sixfold in the last two years. Quantum computing hardware and platforms, designed by startups and tech giants alike, continue to improve. Now there are new opportunities to discover how to best program and use these new machines. As I wrote last year: the first quantum computers will need smart software.

    Quantum computing also remains a place where small teams and open research projects can make a big difference. The open nature is important as Open Source software has the lowest barriers for others to understand, share and build upon existing projects. In a new field that needs to grow, this rapid sharing and development is especially important. I’ve experienced this myself through leading the Open Source Forest project at Rigetti Computing and also by watching the growing ecosystem of open projects like QISKit, OpenFermion, ProjectQ, Strawberry Fields, XaCC, Cirq, and many others. The hackathons and community efforts from around the world are inspiring.

  • SiFive Announces First Open-Source RISC-V-Based SoC Platform With NVIDIA Deep Learning Accelerator Technology

    SiFive, the leading provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology.

Programming: FOAAS, Jenkins 2, LLVM 6/7 and New Patches

Filed under
Development
  • rfoaas 2.0.0: Updated and extended

    FOAAS upstream recently went to release 2.0.0, so here we are catching up, bringing you all the new accessors from FOAAS 2.0.0: bag(), equity(), fts(), ing(), particular(), ridiculous(), and shit(). We also added off_with(), which was missing previously. Documentation and tests were updated. The screenshot shows an example of the new functions.

  • Introduction to writing pipelines-as-code and implementing DevOps with Jenkins 2

    One of the key ideas of DevOps is infrastructure-as-code—having the infrastructure for your delivery/deployment pipeline expressed in code—just as the products that flow through it.
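In Jenkins 2 this idea is typically expressed as a Jenkinsfile checked into the repository alongside the product code. A minimal declarative sketch; the stage names and shell commands are illustrative, not from the article:

```groovy
// Minimal declarative Jenkinsfile -- stages and commands are illustrative.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'master' }   // only deploy from the main branch
            steps { sh './deploy.sh' }
        }
    }
}
```

Because the pipeline definition lives in version control, changes to the delivery process are reviewed and rolled back exactly like application code.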

  • Intel's Beignet OpenCL Driver Updated To Work With LLVM 6/7

    Intel stopped developing their Beignet open-source Linux OpenCL driver in February to concentrate all efforts now around their new Intel OpenCL NEO platform. But commits landed today with a few improvements for those still using Beignet.

    Independent contributor to the Beignet OpenCL stack Rebecca Palmer submitted a number of patches recently that were added to mainline Beignet, the first commits to this OpenCL library since early February.

  • Security updates for Monday

What Does "Ethical" AI Mean for Open Source?

Filed under
OSS
Sci/Tech

It would be an understatement to say that artificial intelligence (AI) is much in the news these days. It's widely viewed as likely to usher in the next big step-change in computing, but a recent interesting development in the field has particular implications for open source. It concerns the rise of "ethical" AI.

In October 2016, the White House Office of Science and Technology Policy, the European Parliament's Committee on Legal Affairs and, in the UK, the House of Commons' Science and Technology Committee, all released reports on how to prepare for the future of AI, with ethical issues being an important component of those reports. At the beginning of last year, the Asilomar AI Principles were published, followed by the Montreal Declaration for a Responsible Development of Artificial Intelligence, announced in November 2017.

Abstract discussions of what ethical AI might or should mean became very real in March 2018. It was revealed then that Google had won a share of the contract for the Pentagon's Project Maven, which uses artificial intelligence to interpret huge quantities of video images collected by aerial drones in order to improve the targeting of subsequent drone strikes. When this became known, it caused a firestorm at Google. Thousands of people there signed an internal petition addressed to the company's CEO, Sundar Pichai, asking him to cancel the project. Hundreds of researchers and academics sent an open letter supporting them, and some Google employees resigned in protest.

Read more

SUSE is Still Working for Microsoft

Filed under
Microsoft
SUSE

Red Hat Leftovers

Filed under
Red Hat
  • Kubernetes on Metal with OpenShift

    My first concert was in the mid-80s, when AC/DC came to the Providence Civic Center in Rhode Island, and it was glorious. Music fans who grew up in the 80s will fondly remember the birth of MTV, the emergence of the King of Pop and the heyday of rock-n-roll’s heavy metal gone mainstream era, when long hair and guitar riffs both flowed freely. So recently when Def Leppard joined Journey at Fenway Park in Boston for their 2018 joint tour, I knew I had to be there.

    Metal also dominated the datacenter in the 80s and 90s, as mainframes and minicomputers made way for bare-metal servers running enterprise applications on UNIX and, soon after, open source Linux operating systems powered by Red Hat. Just like heavy metal eventually made way for the angst-filled grunge rock era of the 90s, so too did application provisioning on bare metal make way for the era of virtualization driven by VMWare – with subsequent VM sprawl and costly ELAs creating much angst to this day for many IT organizations.

  • Security Technologies: Stack Smashing Protection (StackGuard)

    In our previous blog, we saw how arbitrary code execution resulting from stack-buffer overflows can be partly mitigated by marking segments of memory as non-executable, a technology known as Execshield. However, stack-buffer overflow exploits can still effectively overwrite the function return address, which leads to several interesting exploitation techniques like ret2libc, ret2gets, and ret2plt. With all of these methods, the function return address is overwritten and attacker-controlled code is executed when the program control transfers to the overwritten address on the stack.

  • Keeping both of your OpenShift Container Platforms Highly Available with Keepalived and HAproxy

    Until Kubernetes Federation hits prime time, a number of solutions have sprung up as stopgaps to address geographically dispersing multiple cluster endpoints: stretch clusters and multiple clusters across multiple datacenters. The following article discusses how to configure Keepalived for maximum uptime of HAProxy with multiple cluster endpoints. In the following documentation, an HAProxy and Keepalived configuration will be discussed in detail to load balance to the cluster(s) endpoints.

    In a production environment, a global server load balancer (GSLB) or Global Traffic Manager (GTM) would be used to give a differing IP address based on the originating location of the request. This would help ensure traffic from Virginia or New York is served from the location closest to the originating request.
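A minimal keepalived.conf sketch of the failover arrangement described, assuming an interface name of eth0 and a documentation-range address for the floating VIP (both are assumptions, not values from the article):

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while an haproxy process runs
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the standby node
    interface eth0                # assumed interface name
    virtual_router_id 51
    priority 101                  # set lower (e.g. 100) on the standby
    virtual_ipaddress {
        192.0.2.10                # floating endpoint clients connect to
    }
    track_script {
        chk_haproxy               # demote this node if HAProxy dies
    }
}
```

When the health check fails on the master, VRRP moves the virtual address to the standby, so clients keep reaching an HAProxy that can balance to the cluster endpoints.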

  • How to integrate A-MQ 6.3 on Red Hat JBoss EAP 7
  • The Open Brand Project | The helpful guy in the red hat.

    A big part of the Red Hat Open Brand Project has been looking back at our past and examining our roots. It is important that we imbue the new symbol with as much shared meaning from our history and culture as possible. To represent ourselves, we have to understand our origins.

    Before there was Shadowman, before there was a red fedora, before we were an enterprise technology company, and before we helped make open source a driving force of technology innovation, we had our name.

  • October 19th Options Now Available For Red Hat (RHT)
  • Decentralize common Fedora apps with Cjdns

    Are you worried about a few huge corporations controlling the web? Don’t like censorship on centralized social media sites like Facebook and Twitter? You need to decentralize! The internet was designed to be decentralized. Many common activities, from social media to email to voice calls, don’t actually require a centralized service.

    The basic requirement for any peer-to-peer application is that the peers be able to reach each other. This is impossible today for most people using IP4 behind NAT (as with most household routers). The IP4 address space was exhausted over a decade ago. Most people are in “IP4 NAT Jail.”

    Your device is assigned a private IP, and translated to the public IP by the router. Without port forwarding to a specific private IP, incoming TCP connections or UDP sessions can’t tell where to forward to, and are dropped. As a result, nothing can connect to your device. You must connect to various public servers to do anything. IP4 NAT Jail forces centralization.
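The "jail" the author describes corresponds to the RFC 1918 private address ranges. Python's standard ipaddress module can tell you which side of the NAT boundary an address sits on; the sample addresses below are just examples:

```python
import ipaddress

def behind_nat(addr: str) -> bool:
    """True for private (RFC 1918 and similar non-routable) addresses."""
    return ipaddress.ip_address(addr).is_private

for a in ("192.168.1.20", "10.0.0.7", "8.8.8.8"):
    print(a, "->", "private (NAT jail)" if behind_nat(a) else "publicly routable")
```

An address in one of the private ranges can only be reached from outside via port forwarding or an overlay like Cjdns, which is exactly the problem the article is addressing.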

Mozilla: FCC, Brotli Compression and an Extension

Filed under
Moz/FF
  • Mozilla files arguments against the FCC – latest step in fight to save net neutrality

    Today, Mozilla is filing our brief in Mozilla v. FCC – alongside other companies, trade groups, states, and organizations – to defend net neutrality rules against the FCC’s rollback that went into effect early this year. For the first time in the history of the public internet, the FCC has disavowed interest and authority to protect users from ISPs, who have both the incentives and means to interfere with how we access online content.

    We are proud to be a leader in the fight for net neutrality both through our legal challenge in Mozilla v. FCC and through our deep work in education and advocacy for an open, equal, accessible internet. Users need to know that their access to the internet is not being blocked, throttled, or discriminated against. That means that the FCC needs to accept statutory responsibility in protecting those user rights — a responsibility that every previous FCC has supported until now. That’s why we’re suing to stop them from abdicating their regulatory role in protecting the qualities that have made the internet the most important communications platform in history.

    This case is about your rights to access content and services online without your ISP blocking, throttling, or discriminating against your favorite services. Unfortunately, the FCC made this a political issue and followed party-lines rather than protecting your right to an open internet in the US. Our brief highlights how this decision is just completely flawed...

  • Using Brotli compression to reduce CDN costs

    The Snippets Service allows Mozilla to communicate with Firefox users directly by placing a snippet of text and an image on their new tab page. Snippets share exciting news from the Mozilla World, useful tips and tricks based on user activity and sometimes jokes.

    To achieve personalized, activity based messaging in a privacy respecting and efficient manner, the service creates a Bundle of Snippets per locale. Bundles are HTML documents that contain all Snippets targeted to a group of users, including their Style-Sheets, images, metadata and the JS decision engine.

    The Bundle is transferred to the client where the locally executed decision engine selects a snippet to display. A carefully designed system with multiple levels of caching takes care of the delivery. One layer of caching is a CloudFront CDN.

  • Working around the extension popout-tab refusing to close on Firefox for Android

    How do you close a web extension popout-window (the small window that appears when you click on an extension’s toolbar button)? On the desktop, all you need is a simple window.close(). Because of the limited available screen space, Firefox on Android has popout-tabs instead of popout-windows. Users can dismiss these tabs by pressing the back button, closing them manually, or switching to another tab. However, they’re deceptively difficult to close programmatically.

    This article was last verified for Firefox 61, and applies to Firefox for Android versions 57 and newer.

    It’s common for web extension popout-windows to close themselves after the user has completed an action in them. While many web extensions work on Firefox for Android, users often have to manually close the popout-tabs on their own.

KDE: Akademy 2018, Chakra GNU/Linux, and Krita Interview with Margarita Gadrat

Filed under
KDE
  • Akademy 2018

    Akademy time came around again this year; this time it was held in gorgeous Vienna, Austria.
    This year marks my 10th Akademy in a row, starting from my first one in Belgium in 2008.

Kernel: Linux 4.19, 2018 Linux Plumbers Conference and More

Filed under
Linux
  • Icelake LPSS, ChromeOS EC CEC Driver On Way To Linux 4.19 Kernel

    The Linux "multi-function device" code updates were sent in overnight for the 4.19 kernel merge window with a few interesting additions.

    Worth pointing out in the MFD subsystem for the Linux 4.19 kernel:

    - The ChromeOS EC CEC driver being added. Google's embedded controller for ChromeOS devices is able to expose an HDMI CEC (Consumer Electronics Control) bus for interacting with HDMI-connected devices for controlling them via supported commands. The Linux kernel's HDMI CEC support has gotten into shape over the past few kernel cycles, and now the ChromeOS EC support can expose its HDMI CEC abilities with this new driver.

  • Testing and Fuzzing Microconference Accepted into 2018 Linux Plumbers Conference

    Testing, fuzzing, and other diagnostics have greatly increased the robustness of the Linux ecosystem, but embarrassing bugs still escape to end users. Furthermore, a million-year bug would happen several tens of times per day across Linux’s installed base (said to number more than 20 billion), so the best we can possibly do is hardly good enough.

  • Latest Linux 4.19 Code Merge Introduces ChromeOS EC CEC Drivers and Cirrus Logic Detection

    Some interesting code updates were just recently put into the Linux 4.19 kernel merge window regarding “multi-function device” capabilities – mostly, this includes several new drivers and driver support, but perhaps most interesting is the ChromeOS EC CEC driver being added.

    Google’s embedded controller for ChromeOS has been able to expose an HDMI CEC (Consumer Electronics Control) bus for interacting with HDMI-connected devices, which in turn is able to control them via supported commands. The Linux kernel’s HDMI CEC support has been improved over the past few kernel cycles, which means that the ChromeOS EC support will now be able to expose its HDMI CEC abilities utilizing the new driver added in this merge window.

  • Linux 4.19 Had A Very Exciting First Week Of New Features

    The Linux 4.19 kernel merge window opened one week ago and there's been a lot of new features and improvements to be merged during this front-half of the merge period. If you are behind on your Phoronix reading, here's a look at the highlights for week one.

Games Leftovers

Filed under
Gaming

Canonical Apologizes for Ubuntu 14.04 LTS Linux Kernel Regression, Releases Fix

Filed under
Security
Ubuntu

The kernel security update addressed the L1 Terminal Fault vulnerabilities, as well as two other security flaws (CVE-2018-5390 and CVE-2018-5391) discovered by Juha-Matti Tilli in the Linux kernel's TCP and IP implementations, which could allow remote attackers to cause a denial of service.

Unfortunately, on Ubuntu 14.04 LTS (Trusty Tahr) systems, users reported that the mitigations also introduced a regression in the Linux kernel packages, which could cause kernel panics for some users that booted the OS in certain desktop environments.

Read more

A Fresh Look At The NVIDIA vs. Radeon Linux Performance & Perf-Per-Watt For August 2018

Filed under
Graphics/Benchmarks

With NVIDIA expected to announce the Turing-based GeForce RTX 2080 series today as part of their Gamescom press conference, here is a fresh look at the current NVIDIA Linux OpenGL/Vulkan performance with several Pascal graphics cards compared to AMD Polaris and Vega offerings. Additionally, these latest Linux drivers provide a current look at performance-per-Watt.

It will be interesting to learn more about the GeForce RTX 2080 series shortly, as it will surely deliver significantly better performance and power efficiency than the GeForce GTX 1000 "Pascal" hardware. But for a current look at how those cards are running under Linux, this morning's benchmarks cover the GeForce GTX 1060, GTX 1070 Ti, GTX 1080, and GTX 1080 Ti using the latest NVIDIA 396.51 graphics driver. On the AMD side, the competition was the Radeon RX Vega 64 and RX 580 (the GTX 1060 / RX 580 are included in this article for a more mature look at the Linux driver support, namely for the AMDGPU+RADV/RadeonSI side). The Radeon tests were done with the latest Linux 4.18 AMDGPU DRM state and using Mesa 18.3-dev from the Oibaf PPA as of 19 August.

Read more

Latest Deepin Linux Release Promises to Consume Less Memory Than Ubuntu, Windows

Filed under
Linux

Coming just two months after the Deepin 15.6 release that introduced new Light and Dark themes, Deepin 15.7 is now available with a focus on performance. It features a smaller ISO size achieved by removing unnecessary components and optimizing the core system structure, better power optimization for laptops promising up to 20 percent more battery life, and improved memory usage.

"Deepin 15.7 has made a series of adjustments and optimizations in memory usage. In the standard configuration, the boot memory has decreased from 1.1G to 830M, and reduced to less than 800M on a discrete graphics card," wrote the devs in today's announcement, where they compared the memory consumptions of Deepin 15.7, Deepin 15.6 and other operating systems on the same computer.
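Boot-time memory figures like these can be reproduced on any Linux system by reading /proc/meminfo shortly after login. A small sketch, with the same MiB arithmetic such comparisons usually quote:

```python
def used_mib(meminfo: str) -> int:
    """Used memory in MiB computed from /proc/meminfo text (values are KiB)."""
    fields = {}
    for line in meminfo.splitlines():
        key, _, rest = line.partition(":")
        if rest.split():
            fields[key] = int(rest.split()[0])
    return (fields["MemTotal"] - fields["MemAvailable"]) // 1024

try:
    with open("/proc/meminfo") as f:
        print(f"{used_mib(f.read())} MiB in use")
except FileNotFoundError:
    pass  # not a Linux system
```

Running this right after boot on Deepin 15.7 and on another distribution gives a rough, like-for-like version of the comparison the developers describe.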

Read more

Ubuntu 18.10 (Cosmic Cuttlefish) Daily Lives Now Ship with Yaru Theme by Default

Filed under
Ubuntu

We've been waiting for this moment for a couple of weeks now and we're proud to be the first to report that the Yaru theme developed by various members of the Ubuntu Linux community has now finally been enabled by default in the daily builds of the Ubuntu 18.10 (Cosmic Cuttlefish) operating system.

Of course, we immediately took a screenshot tour of the Yaru theme on today's Ubuntu 18.10 (Cosmic Cuttlefish) daily build so we can show you how great it looks. We think it's a professional theme that matures Ubuntu to the next level, and it is definitely a step in the right direction for the look and feel of the Ubuntu Desktop.

Read more
