Fedora Project and IBM/Red Hat

Filed under
Red Hat
  • GNOME Internet Radio Locator 3.0.1 for Fedora Core 32

    GNOME Internet Radio Locator 3.0.1 features updated language translations and a new, improved map marker palette, and now also includes radio from Washington, United States of America; London, United Kingdom; Berlin, Germany (Radio Eins); and Paris, France (France Inter/Info/Culture), as well as 118 other radio stations from around the world, with audio streaming implemented through GStreamer.

  • Fedora program update: 2020-27

    Here’s your report of what has happened in Fedora this week. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

  • Outreachy design internship: budget templates and infographics

    Hey, I’m Smera. I’m one of the Outreachy interns this year, working on creating new designs for the Fedora Project. I work with Marie Nordin (FCAIC) and the Fedora Design team. I started on the 19th of May and this is what I have been up to!

  • Will Red Hat Rule the Supercomputing Industry with Red Hat Enterprise Linux (RHEL)?

    Red Hat Enterprise Linux has achieved a significant milestone after serving as the operating system for the world's fastest supercomputer, according to Top500. This opens up the debate on why Linux is the preferred operating system for supercomputers.

    Supercomputers process vast datasets and conduct complex simulations much faster than traditional computers. From weather modeling and disease control to energy efficiency, nuclear testing, and quantum mechanics, supercomputers can tackle numerous scientific challenges. Countries like the U.S. and China have long been racing to develop the most powerful and fastest supercomputers. This year, however, technological superpower Japan stole the show when its ARM-based Fugaku supercomputer was ranked the No. 1 supercomputer in the world on the Top500 list. The system runs on the Red Hat Enterprise Linux (RHEL) platform. In fact, the June 2020 Top500 list declared that the top three supercomputers in the world, and four of the top 10, run on the Red Hat Enterprise Linux (RHEL) platform. That is a pretty powerful validation of RHEL’s capability to handle demanding computing environments.

  • A developer-centered approach to application development

    Do you dream of a local development environment that’s easy to configure and works independently from the software layers that you are currently not working on? I do!

    As a software engineer, I have suffered the pain of starting projects that were not easy to configure. Reading the technical documentation does not help when much of it is outdated, or even worse, missing many steps. I have lost hours of my life trying to understand why my local development environment was not working.

  • Automate workshop setup with Ansible playbooks and CodeReady Workspaces

    At Red Hat, we do many in-person and virtual workshops for customers, partners, and other open source developers. In most cases, the workshops are of the “bring your own device” variety, so we face a range of hardware and software setups and corporate endpoint-protection schemes, as well as different levels of system knowledge.

    In the past few years, we’ve made heavy use of Red Hat CodeReady Workspaces (CRW). Based on Eclipse Che, CodeReady Workspaces is an in-browser IDE that is familiar to most developers and requires no pre-installation or knowledge of system internals. You only need a browser and your brain to get hands-on with this tech.

    We’ve also built a set of playbooks for Red Hat Ansible to automate our Quarkus workshop. While the playbooks are useful on their own, they are especially helpful for automating at-scale deployments of CodeReady Workspaces for Quarkus development on Kubernetes. In this article, I introduce our playbooks and show you how to use them for your own automation efforts.

  • What does a scrum master do?

    Turning a love of open source communities into a career is possible, and there are plenty of directions you can take. The path I'm on these days is as a scrum master.

    Scrum is a framework in which software development teams deliver working software in increments of 30 days or less called "sprints." There are three roles: scrum master, product owner, and development team. A scrum master is a facilitator, coach, teacher/mentor, and servant/leader who guides the development team through executing the scrum framework correctly.

IBM/Red Hat/Fedora: Fedora 33, Fedora 32 New Builds, Change Data Capture (CDC), OpenPOWER, HPC and More

Filed under
Red Hat
  • Fedora 33 SwapOnZRam Test Day 2020-07-06

    The Workstation Working Group has proposed a change for Fedora 33 to use swap on zram. This would put swap space on a compressed RAM drive instead of a disk partition. The QA team is organizing a test day on Monday, July 06, 2020. Refer to the wiki page for links to the test cases and materials you’ll need to participate. Read below for details.

  • F32-20200701 Updated Live ISOs released

    The Fedora Respins SIG is pleased to announce the latest release of Updated F32-20200701-Live ISOs, carrying the 5.6.19-300 kernel.

    This set of updated ISOs will save a considerable amount of update downloads for new installs. (New installs of Workstation have about 900+ MB of updates.)

    A huge thank you goes out to IRC nicks dowdle, dbristow, nasirhm, and Southern-Gentleman for testing these ISOs.

  • Build a simple cloud-native change data capture pipeline

    Change data capture (CDC) is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to those events. Using KafkaConnect, along with Debezium Connectors and the Apache Camel Kafka Connector, we can build a configuration-driven data pipeline to bridge traditional data stores and new event-driven architectures.

    This article walks through a simple example.
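
    As a hedged sketch of that configuration-driven style (not the article's exact pipeline), the following Python snippet registers a hypothetical Debezium MySQL source with a local Kafka Connect instance through the Kafka Connect REST API; all hostnames, credentials, and topic names below are placeholders.

      import json
      import urllib.request

      # Hypothetical Debezium MySQL source connector; every value is a placeholder.
      connector = {
          "name": "inventory-source",
          "config": {
              "connector.class": "io.debezium.connector.mysql.MySqlConnector",
              "database.hostname": "mysql",
              "database.port": "3306",
              "database.user": "debezium",
              "database.password": "dbz",
              "database.server.id": "184054",
              "database.server.name": "inventory",  # prefix for change-event topics
              "database.history.kafka.bootstrap.servers": "kafka:9092",
              "database.history.kafka.topic": "schema-changes.inventory",
          },
      }

      # Kafka Connect exposes a REST API (port 8083 by default) for managing connectors.
      req = urllib.request.Request(
          "http://localhost:8083/connectors",
          data=json.dumps(connector).encode("utf-8"),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      with urllib.request.urlopen(req) as resp:
          print(resp.status, resp.read().decode())

    A Camel Kafka Connector sink could then be registered the same way, routing the captured change events onward to another system.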

  • OpenPOWER Reboot – New Director, New Silicon Partners, Leveraging Linux Foundation Connections

    Earlier this week the OpenPOWER Foundation announced the contribution of IBM’s A2I Power processor core design to the open source community. Roughly this time last year, IBM announced open sourcing its Power instruction set (ISA), the Open Coherent Accelerator Processor Interface (OpenCAPI), and the Open Memory Interface (OMI). That’s also when IBM said OpenPOWER would become a Linux Foundation entity. Then a few weeks ago, OpenPOWER named a new executive director, James Kulina.

    Change is afoot at the OpenPOWER Foundation. Will it be enough to prompt wider (re)consideration and adoption of the OpenPOWER platform and ecosystem?

  • Red Hat Powers the Future of Supercomputing with Red Hat Enterprise Linux

    Fugaku is the first Arm-based system to take first place on the TOP500 list, highlighting Red Hat’s commitment to the Arm ecosystem from the data center to the high-performance computing laboratory. Sierra, Summit and Marconi-100 all boast IBM POWER9-based infrastructure with NVIDIA GPUs; combined, these four systems produce more than 680 petaflops of processing power to fuel a broad range of scientific research applications.

    In addition to enabling this immense computational power, Red Hat Enterprise Linux also underpins six of the top 10 most power-efficient supercomputers on the planet according to the Green500 list. Systems on the list are measured in terms of both performance results and the power consumed in achieving them. When it comes to sustainable supercomputing, the premium is on finding a balanced approach that delivers the most energy-efficient performance.

  • Red Hat Powers the Future of Supercomputing with Red Hat Enterprise Linux

    Modern supercomputers are no longer purpose-built monoliths constructed from expensive bespoke components. Each supercomputer deployment powered by Red Hat Enterprise Linux uses hardware that can be purchased and integrated into any datacenter, making it feasible for organizations to use enterprise systems that are similar to those breaking scientific barriers. Regardless of the underlying hardware, Red Hat Enterprise Linux provides the common control plane for supercomputers to be run, managed and maintained in the same manner as traditional IT systems.

    Red Hat Enterprise Linux also opens supercomputing applications up to advancements in enterprise IT, including Linux containers. Working closely in open source communities with organizations like the Supercomputing Containers project, Red Hat is helping to drive advancements to make Podman, Skopeo and Buildah, components of Red Hat’s distributed container toolkit, more accessible for building and deploying containerized supercomputing applications.

  • Red Hat Enterprise Linux serves as operating system for supercomputers

    Red Hat announced that Red Hat Enterprise Linux provides the operating system backbone for the top three supercomputers in the world and four out of the top 10, according to the newest TOP500 ranking.

    These rankings show that the world’s leading enterprise Linux platform, already serving as a catalyst for enterprise innovation across the hybrid cloud, can deliver a foundation to meet even the most demanding computing environments.

  • Lessons learned from standing up a front-end development program at IBM

    In 2015, we created the FED@IBM program to support front-end developers and give them the opportunity to learn new skills and teach other devs about their specific areas of expertise. While company programs often die out due to lack of funding, executive backing, interest, or leadership, our community is thriving in spite of losing the funding, executive support, and resources we had at the program’s inception.

    What’s the secret behind the success of this grassroots employee support program? As I have been transitioning leadership of the FED@IBM Program and Community, I have been reflecting on our program’s success and trying to pin down how we have been able to sustain it.

Between Two Releases of Ubuntu 20.04 and Fedora 32

Filed under
Red Hat
Ubuntu

Both Ubuntu 20.04 Focal Fossa and Fedora 32 were released around the same time, in April this year. They are two operating systems from different families, namely Debian and Red Hat. One of the most interesting things they have in common is the arrival of computer companies like Dell and Star Labs (with Lenovo coming) that sell laptops and PCs with these systems preinstalled. I wrote this summary to remind myself, and to inform you all, of the growth of these great operating systems. Enjoy!

Read more

IBM/Red Hat/Fedora: Systemd, Containers, Ansible, IBM Cloud Pak and More

Filed under
Red Hat
  • Systemd 246 Is On The Way With Many Changes

    With it already having been a few months since systemd 245 debuted with systemd-homed, the systemd developers have begun their release dance for what will be systemd 246.

  • Containers: Understanding the difference between portability, compatibility and supportability

    Portability alone does not offer the entire promise of Linux containers. You also need compatibility and supportability.

  • Red Hat Updates Ansible Automation Platform

    Red Hat recently announced key enhancements to the Ansible Automation portfolio, including the latest version of Red Hat Ansible Automation Platform and new Red Hat Certified Ansible Content Collections available on Automation Hub.

  • IBM Cloud Pak for Integration in 2 minutes
  • Introducing modulemd-tools

    A lot of teams are involved in the development of Fedora Modularity, and vastly more people are affected by it as packagers and end-users. It is obvious that each group has its own priorities and use-cases, and therefore different opinions on what is good or bad about the current state of the project. Personally, I was privileged (or maybe doomed) to represent yet another, often forgotten, group of users - third-party build systems.

    Our team is directly responsible for the development and maintenance of Copr, and a few years ago we decided to support building modules alongside regular packages. We stumbled upon many frustrating pitfalls that I don’t want to discuss right now, but the major one was definitely the lack of tools for working with modules. That was understandable in the early stages of the development process, but it has been years and we still don’t have the right tools for building modules on our own, without relying on the Fedora infrastructure. You may recall me expressing the need for them at the Flock 2019 conference.

  • GSoC 2020 nmstate project update for June

    This blog post is about my experience working on the nmstate project during my first month of the GSoC coding period. I was able to start implementing the varlink support in the middle of the community bonding period. This was very helpful because I identified some issues in the Python varlink package that were not mentioned in the documentation, and I had to spend extra time finding their cause. There have been minor changes to the proposed code structure and project timeline after feedback from the community members. In the beginning it was difficult to identify syntax errors in varlink interface definitions. Progress has been slow because of new issues; the following are the tasks I have completed so far.

Storage Instantiation Daemon in Fedora, IBM/Spark and Talospace Project/POWER

Filed under
Red Hat
  • Fedora Looks To Introduce The Storage Instantiation Daemon

    One of the last-minute change proposals for Fedora 33 is to introduce the Red Hat-backed Storage Instantiation Daemon (SID), though at least for this first release it would be off by default. The Storage Instantiation Daemon is one of the latest storage efforts being worked on by Red Hat engineers.

    The Storage Instantiation Daemon is intended to help manage Linux storage device state tracking atop udev, reacting to changes via uevents. This daemon can offer an API for various device subsystems and provides insight into the Linux storage stack. More details on this newer open-source effort are available via sid-project.github.io.

  • Explore best practices for Spark performance optimization

    I am a senior software engineer working with IBM’s CODAIT team. We work on open source projects and advocacy activities. I have been working on open source Apache Spark, focused on Spark SQL. I have also been involved with helping customers and clients with optimizing their Spark applications. Apache Spark is a distributed open source computing framework that can be used for large-scale analytic computations. In this blog, I want to share some performance optimization guidelines when programming with Spark. The assumption is that you have some understanding of writing Spark applications. These are guidelines to be aware of when developing Spark applications.

    [...]

    Spark has a number of built-in functions available. Check to see if you can use one of them, since they are good for performance. Custom UDFs in the Scala API are more performant than Python UDFs. If you have to use the Python API, use the newly introduced pandas UDF in Python, released in Spark 2.3. The pandas UDF (vectorized UDF) support in Spark brings significant performance improvements over writing a custom Python UDF. Get more information about writing a pandas UDF.
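
    As a minimal sketch (assuming PySpark 2.3+ with PyArrow installed; the DataFrame and function names here are made up for illustration), a scalar pandas UDF looks like this:

      import pandas as pd
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col, pandas_udf, PandasUDFType

      spark = SparkSession.builder.appName("pandas-udf-demo").getOrCreate()
      df = spark.range(0, 1000000).withColumn("x", col("id") / 10.0)

      # A scalar pandas UDF: Spark ships column batches to Python as Arrow-backed
      # pandas Series, so the function runs once per batch rather than once per row.
      @pandas_udf("double", PandasUDFType.SCALAR)
      def plus_one(x):
          return x + 1.0

      df.select(plus_one(col("x")).alias("x_plus_one")).show(5)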

  • The Talospace Project: Firefox 78 on POWER

    Firefox 78 is released and is running on this Talos II. This version in particular features an updated RegExp engine but is most notable (notorious) for disabling TLS 1.0/1.1 by default (only 1.2/1.3). Unfortunately, because of craziness at $DAYJOB and the lack of a build waterfall or some sort of continuous integration for ppc64le, a build failure slipped through into release, though fortunately only in the (optional) tests. The fix is trivial, another compilation bug in the profiler that periodically plagues unsupported platforms, and I have pushed it upstream in bug 1649653. You can either apply the patch from that bug to your tree or add ac_add_options --disable-tests to your .mozconfig. Speaking of which, as usual, the .mozconfigs we use for debug and optimized builds have been stable since Firefox 67.

IBM/Red Hat/Fedora Leftovers

Filed under
Red Hat
  • Ask the experts during Red Hat Summit Virtual Experience: Open House

    Among the most popular activities during the Red Hat Summit Virtual Experience were the Ask the Experts sessions, where attendees could engage with Red Hat experts and leadership in real time, so we're bringing them back for our Open House in July.

  • Making open source more inclusive by eradicating problematic language

    Open source has always been about differing voices coming together to share ideas, iterate, challenge the status quo, solve problems, and innovate quickly. That ethos is rooted in inclusion and the opportunity for everyone to meaningfully contribute, and open source technology is better because of the diverse perspectives and experiences that are represented in its communities. Red Hat is fortunate to be able to see the impact of this collaboration daily, and this is why our business has also always been rooted in these values.

    Like so many others, Red Hatters have been coming together the last few weeks to talk about ongoing systemic injustice and racism. I’m personally thankful to Red Hat’s D+I communities for creating awareness and opportunities for Red Hatters to listen in order to learn, and I’m grateful that so many Red Hatters are taking those opportunities to seek understanding.

  • The latest updates to Red Hat Runtimes

    Today, we are happy to announce that the latest release of Red Hat Runtimes is now available. This release includes updates that build upon the work the team has done over the past year for building modern, cloud-native applications.

    Red Hat Runtimes, part of the Red Hat Application Services portfolio, is a set of products, tools and components for developing and maintaining cloud-native applications. It offers lightweight runtimes and frameworks for highly-distributed cloud architectures, such as microservices or serverless applications. We continuously make updates and improvements to meet the changing needs of our customers, and to help developers better build business-critical applications. Read on for the latest.

  • Kourier: A lightweight Knative Serving ingress

    Until recently, Knative Serving used Istio as its default networking component for handling external cluster traffic and service-to-service communication. Istio is a great service mesh solution, but it can add unwanted complexity and resource use to your cluster if you don’t need it.

    That’s why we created Kourier: To simplify the ingress side of Knative Serving. Knative recently adopted Kourier, so it is now a part of the Knative family! This article introduces Kourier and gets you started with using it as a simpler, more lightweight way to expose Knative applications to an external network.

    Let’s begin with a brief overview of Knative and Knative Serving.

  • CodeTheCurve: A blockchain-based supply chain solution to address PPE shortages

    This past April, creative techies from all over the world gathered online for CodeTheCurve, a five-day virtual hackathon organized by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) in partnership with IBM and SAP. Participants all worked toward the goal of creating digital solutions to address the global pandemic.

    Our team focused on the goal of improving the efficiency of the personal protective equipment (PPE) supply chain in order to prevent shortages for health care workers. With the rise of the current global pandemic, supplies of medical equipment have become more critical, particularly PPE for medical workers. In many places, PPE shortages have been a serious problem. To address this challenge, we proposed that a blockchain-based supply chain could help make this process faster and more reliable, thereby connecting health ministries, hospitals, producers, and banks, and making it easier to track and report information on supplies.

  • Analyze your Spark application using explain

    It is important to have some understanding of the Spark execution plan when you are optimizing your Spark applications. Spark provides an explain API to look at the Spark execution plan for your Spark SQL query. In this blog, I will show you how to get the Spark query plan using the explain API so you can debug and analyze your Apache Spark application. The explain API is available on the Dataset API. You can use it to learn what execution plan Spark will use for your Spark query without actually running it. Spark also provides a Spark UI where you can view the execution plan and other details while the job is running. For Spark jobs that have finished running, you can view the Spark plan that was used if you have the Spark history server set up and enabled on your cluster. This is useful when tuning your Spark jobs for performance.
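
    A quick illustrative sketch (assuming any recent PySpark; the query itself is invented for the example):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col

      spark = SparkSession.builder.appName("explain-demo").getOrCreate()
      agg = (spark.range(0, 1000)
             .withColumn("bucket", col("id") % 8)
             .groupBy("bucket")
             .count())

      agg.explain()      # physical plan only
      agg.explain(True)  # parsed, analyzed, optimized logical plans + physical plan

    Neither call executes the query; Spark just prints the plans it would use.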

  • What’s new in Apache Spark 3.0

    The Apache Spark community announced the release of Spark 3.0 on June 18; it is the first major release of the 3.x series. The release contains many new features and improvements, and is the result of more than 3,400 fixes and improvements from more than 440 contributors worldwide. The IBM Center for Open-Source Data & AI Technologies (CODAIT), which focuses on a number of open source technologies for machine learning, AI workflows, trusted AI, metadata, and big data processing platforms, has delivered hundreds of commits, including a couple of key features in this release.

  • GSoC Progress Report: Dashboard for Packit

    Hi, I am Anchit, a 19 y.o. from Chandigarh, India. I love programming, self-hosting, gaming, reading comic books, and watching comic-book based movies/tv.

    The first version of Fedora I tried was 21 when I came across it during my distro-hopping spree. I used it for a couple of months and then moved on to other distros. I came back to Fedora in 2017 after a couple of people on Telegram recommended it and have been using it ever since. A big reason why I stuck with Fedora this time is the community. Shout out to @fedora on Telegram. They’re nice, wholesome and helpful. They also got me into self-hosting and basic sys-admin stuff.

  • Fedora Looking To Offer Better Upstream Solution For Hiding/Showing GRUB Menu

    For the past few releases, Fedora hasn't shown the GRUB boot-loader menu by default when only Fedora is installed on the system, as it serves little purpose for most users and just interrupts the boot flow. For those wanting to access the GRUB menu on reboot, there is integration in GNOME to easily reboot into it. The other exception is that the menu will be shown if the previous boot failed. This functionality has relied on downstream patches, but now they are working towards a better upstream solution.

    Hans de Goede of Red Hat who led the original GRUB hidden boot menu functionality is looking to clean up this feature for Fedora 33. The hope is to get the relevant bits upstream into GNOME and systemd for avoiding the downstream patches they have been carrying. This reduces their technical debt and also makes it easier for other distributions to provide similar functionality.

  • Fedora Developers Discussing Possibility Of Dropping Legacy BIOS Support

    Fedora stakeholders are debating the merits of potentially ending legacy BIOS support for the Linux distribution and only supporting UEFI-based installations.

    Given the GRUB changes planned for Fedora 33, the fact that things would be easier if they just switched to the UEFI-based systemd sd-boot, Intel's plan to end legacy BIOS support in 2020, and UEFI having been common on x86_64 systems for many years now, Fedora developers are discussing whether it's a good time yet for their bleeding-edge platform to begin phasing out legacy BIOS support.

IBM/Red Hat: Sysadmins, Success Stories, Apache Kafka and IBM "AI" Marketing/Hype

Filed under
Red Hat
  • Sysadmin stories from the trenches: Funny user mistakes

    I was a noob IT guy in the late 90s. I provided desktop support to a group of users who were, shall we say, not the most technical of users. I sometimes wonder where those users are today, and I silently salute the staff that's had to support them since I left long ago.

    I suffered many indignities during that time. I can chuckle about the situations now.

  • Sneak peek: Podman's new REST API

    This one is just between you and me, don't tell anyone else! Promise? Okay, I have your word, so here goes: There's a brand new REST API that is included with version 2.0 of Podman! That release has just hit testing on the Fedora Project and may have reached stable by the time this post is published. With this new REST API, you can call Podman from platforms such as cURL, Postman, Google's Advanced REST client, and many others. I'm going to describe how to begin using this new API.

    The Podman service runs only on Linux, and you must do some setup there to get things going.
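
    Once the service is up, any HTTP client that can speak over a Unix socket will do. Here is a small Python sketch using only the standard library; the socket path and the /v1.0.0/libpod prefix are assumptions for a rootful Podman 2.0 service started with "podman system service", so adjust them for your setup.

      import http.client
      import socket

      class UnixHTTPConnection(http.client.HTTPConnection):
          """HTTP over a Unix domain socket, which is where the Podman service listens."""

          def __init__(self, socket_path):
              super().__init__("localhost")
              self.socket_path = socket_path

          def connect(self):
              self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
              self.sock.connect(self.socket_path)

      # Assumed defaults for a rootful service; rootless setups use a per-user path.
      conn = UnixHTTPConnection("/run/podman/podman.sock")
      conn.request("GET", "/v1.0.0/libpod/info")
      resp = conn.getresponse()
      print(resp.status, resp.read().decode())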

  • Red Hat Success Stories: Creating a foundation for a containerized future

    Wondering how Red Hat is helping its customers succeed? We regularly publish customer success stories that highlight how we're helping customers gain efficiency, cut costs, and transform the way they deliver software. This month we'll look at how Slovenská sporiteľňa and Bayport Financial Services have worked with Red Hat to improve their business.

  • Apache Kafka and Kubernetes is making real time processing in payments a bit easier

    The introduction of the real-time payments network in the United States has presented a unique opportunity for organizations to revisit their messaging infrastructure. The primary goal of real-time payments is to support real-time processing, but a secondary goal is to reduce the toil of ongoing operations and make real time ubiquitous across the organization.

    Traditional messaging systems have been around for quite some time but have been a bit clunky to operate. Many times, tasks such as software upgrades and routine patches meant the messaging infrastructure would be down while the update was performed, causing delays in payment processing. This may have been reasonable in a world where payment processing was not expected outside of normal banking hours, but in our always-on digital world, customers expect their payments to clear and settle in real time. Today, outages and delays disrupt both business processes and customer experience.

  • IBM and LFAI move forward on trustworthy and responsible AI

    For over a century, IBM has created technologies that profoundly changed how humans work and live: the personal computer, the ATM, magnetic tape, the Fortran programming language, the floppy disk, the scanning tunneling microscope, the relational database, and most recently, quantum computing, to name a few. With trust as one of our core principles, we’ve spent the past century creating products our clients can trust and depend on, guiding their responsible adoption and use, and respecting the needs and values of all users and communities we serve.

    Our current work in artificial intelligence (AI) is bringing a transformation of similar scale to the world today. We infuse these guiding principles of trust and transparency into all of our work in AI. Our responsibility is to not only make the technical breakthroughs required to make AI trustworthy and ethical, but to ensure these trusted algorithms work as intended in real-world AI deployments.

  • IBM donates "Trusted AI" projects to Linux Foundation AI

    IBM on Monday announced it's donating a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation. As real-world AI deployments increase, IBM says the contributions can help ensure they're fair, secure and trustworthy.

    "Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation," IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.

  • IBM donates AI toolkits to Linux Foundation to ‘mitigate bias’ in datasets

    As artificial intelligence (AI) deployments increase around the world, IBM says it’s determined to ensure that they’re fair, secure and trustworthy.

    To that end, it has donated a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation, as reported by ZDNet.

    “Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.

  • PionerasDev wins IBM Open Source Community Grant to increase women’s participation in programming

    Last fall, IBM’s open source community announced a new quarterly grant to award nonprofit organizations that are dedicated to education, inclusiveness, and skill-building for women, underrepresented minorities, and underserved communities in the open source world. The Open Source Community Grant aims to help create new tech opportunities for underrepresented communities and foster the adoption and use of open source.

  • Ansible 101 live streaming series - a retrospective

    That last metric can be broken down further: on average, I spent 3.5 hours prepping for each live stream, 1 hour doing the live stream, and then 1 hour doing post-production (setting chapter markers, reading chat messages, downloading the recording, etc.).

    So each video averaged $30 in ad revenue, and by ad revenue alone, the total hourly wage equivalent based on direct video revenue is... $5.45/hour.

    Subtract the cost of the equipment I use for the streaming (~$1,000, most of it used, though I already owned it), and now I'm a bit in the hole!

What Are Fedora Labs and How Are They Useful to You?

Filed under
Red Hat

Fedora Labs are pre-built images of Fedora 32 Workstation, a Linux distribution known for solid performance and new software packages. The Labs give users with a few common use cases access to an image that comes with all of the software they’d want, so they can hit the ground running after installing the system.

There are eight different labs right now, covering everything from astronomy to gaming to design. They’re all live systems, so there is no need to install anything to your system, which is potentially an attractive option for those users who have a system already up and running. Let’s look at all eight in brief.

1. The Fedora Astronomy Lab

The Astronomy Lab comes with a wide array of tools useful in astronomy, including visualization software, scientific Python tools, and free astronomical image processing software. Also of note is a library designed to support the control of astronomical instruments. This Lab will absolutely be great for both experienced and amateur astronomers.

2. The Fedora Comp-Neuro Lab

The Comp-Neuro Lab is similar in its philosophy to the Astronomy Lab: it comes pre-installed with an array of free neuroscience modelling software to let you get to work quickly. This includes SciPy, a scientific Python library, and NEURON, a detailed neuron simulation environment that allows you to work down to the single-neuron level.

Read more

virt-manager is deprecated in RHEL (but only RHEL)

Filed under
Red Hat

I'm the primary author of virt-manager. virt-manager is deprecated in RHEL8 in favor of cockpit, but ONLY in RHEL8 and future RHEL releases. The upstream project virt-manager is still maintained and is still relevant for other distros.

Google 'virt-manager deprecated' and you'll find some discussions suggesting virt-manager is no longer maintained, Cockpit is replacing virt-manager, virt-manager is going to be removed from every distro, etc. These conclusions are misinformed.

The primary source for this confusion is the section 'virt-manager has been deprecated' from the RHEL8 release notes virtualization deprecation section.

Read more

Also: RHEL Deprecating The Virt-Manager UI In Favor Of The Cockpit Web Console

There's A Proposal To Switch Fedora 33 On The Desktop To Using Btrfs

Filed under
Red Hat

More than a decade ago, Fedora routinely pursued making the Btrfs file-system the default, but those hopes were abandoned long ago. Heck, Red Hat Enterprise Linux no longer even supports Btrfs. While all Red Hat / Fedora interest in Btrfs seemed abandoned years ago, especially with Red Hat developing its Stratis storage technology, there is a new (and serious) proposal about moving to Btrfs for Fedora 33 desktop variants.

There is a new proposal to use Btrfs as the default file-system for desktop variants starting with Fedora 33. This proposal is backed by various Fedora developers, Facebook, and other stakeholders, in the belief that Btrfs is more featureful than the current EXT4 default and is now stable enough following years of testing.

Read more

Also: Fedora program update: 2020-26


More in Tux Machines

GNU, GTK/GNOME, and More Development News

  • GNU Emacs 27.1 Adds HarfBuzz Text Shaping, Native JSON Parsing

    GNU Emacs 27.1 is the latest feature release for this very extensible text editor. With Emacs 27.1 there is support for utilizing the HarfBuzz library for text shaping. HarfBuzz is also what's already used extensively by GNOME, KDE, Android, LibreOffice, and many other open-source applications. Emacs 27.1 also adds built-in support for arbitrary-size integers, native JSON parsing, better Cairo drawing support, support for XDG conventions for init files, lexical binding by default, built-in tab bar and tab-line support, and resizing/rotating images without ImageMagick, among other changes.

  • Philip Withnall: Controlling safety vs speed when writing files

    g_file_set_contents() has worked fine for many years (and will continue to do so). However, it doesn’t provide much flexibility. When writing a file out on Linux there are various ways to do it, some slower but safer — and some faster, but less safe, in the sense that if your program or the system crashes part-way through writing the file, the file might be left in an indeterminate state. It might be garbled, missing, empty, or contain only the old contents. g_file_set_contents() chose a fairly safe (but not the fastest) approach to writing out files: write the new contents to a temporary file, fsync() it, and then atomically rename() the temporary file over the top of the old file. This approach means that other processes only ever see the old file contents or the new file contents (but not the partially-written new file contents); and it means that if there’s a crash, either the old file will exist or the new file will exist. However, it doesn’t guarantee that the new file will be safely stored on disk by the time g_file_set_contents() returns. It also has fewer guarantees if the old file didn’t exist (i.e. if the file is being written out for the first time).
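
    To make the trade-off concrete, here is a minimal Python sketch of the same write-temp/fsync/rename recipe the post describes (not the GLib API itself; the function name and temp-file naming are invented for illustration):

      import os

      def save_file_safely(path, data):
          # Write to a temporary file first so readers never see partial contents.
          tmp = path + ".tmp"
          fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
          try:
              os.write(fd, data)
              os.fsync(fd)  # flush the new contents to disk before renaming
          finally:
              os.close(fd)
          os.rename(tmp, path)  # atomic on POSIX: old file or new file, never a mix

      save_file_safely("/tmp/example.conf", b"key=value\n")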

  • Daniel Espinosa: Training Maintainers

    Training maintainers is not just about helping others to help you; it is a matter of responsibility to the open source community. Life is full of wonders and changes for better or worse, so you may miss opportunities or simply be unable to work on your favorite open source project. Prepare yourself to be a maintainer mentor: change your mindset from the beginning and help others; that too is a great contribution to open source software. Be kind. Your potential contributors will take over when required. Making sure they have the abilities and use best practices in the project is not just good for your project; it is good for all the others out there, because contributors will use those skills to help other projects.

  • nanotime 0.3.1: Misc Build Fixes for Yuge New Features!

    The nanotime 0.3.0 release four days ago was so exciting that we decided to do it again! Kidding aside, and fairly extensive tests notwithstanding, we were bitten by a few build errors: who knew clang on macOS needed extra curlies to be happy, another manifestation of Solaris having no idea what a timezone setting “America/New_York” is, plus some extra pickiness from the SAN tests and whatnot. So Leonardo and I gave it some extra care over the weekend, uploaded it late yesterday, and here we are with 0.3.1. Thanks again to CRAN for prompt processing even though they are clearly deluged shortly before their (brief) summer break.

  • Explore 10 popular open source development tools

    There is no shortage of closed-source development tools on the market, and most of them work quite well. However, developers who opt for open source tools stand to gain a number of benefits. In this piece, we'll take a quick look at the specific benefits of open source development tools, and then examine 10 of today's most popular tooling options. [...] Git is a distributed code management and version-control system, often used with web-based code management platforms like GitHub and GitLab. The integration with these platforms makes it easy for teams to contribute and collaborate; however, getting the most out of Git will require some kind of third-party platform. Some claim that Git support for Windows is not as robust as it is for Linux, which is potentially a turnoff for Windows-centric developers. [...] NetBeans is a Java-based IDE similar to Eclipse, and it also supports development in a wide range of programming languages. However, NetBeans focuses on providing functionality out of the box, whereas Eclipse leans heavily on its plugin ecosystem to help developers set up needed features.

  • Andre Roberge: Rich + Friendly-traceback: first look

    After a couple of hours of work, I have been able to use Rich to add colour to Friendly-traceback. Rich is a fantastic project, which has already gotten a fair bit of attention and deserves even more. The following is a preview of things to come; it is just a quick proof of concept.

  • Growing Dask To Make Scaling Python Data Science Easier At Coiled

    Python is a leading choice for data science due to the immense number of libraries and frameworks readily available to support it, but it is still difficult to scale. Dask is a framework designed to transparently run your data analysis across multiple CPU cores and multiple servers. Using Dask lifts a limitation on scaling your analytical workloads, but brings with it the complexity of server administration, deployment, and security. In this episode Matthew Rocklin and Hugo Bowne-Anderson discuss their recently formed company Coiled and how they are working to ease the use and maintenance of Dask in production. They share the goals for the business, their approach to building a profitable company based on open source, and the difficulties they face while growing a new team during a global pandemic.

today's howtos and instructional sessions/videos

TDF Annual Report and LibreOffice Latest

  • TDF Annual Report 2019

    The Annual Report of The Document Foundation for the year 2019 is now available in PDF format from TDF Nextcloud in two different versions: low resolution (6.4MB) and high resolution (53.2MB). The annual report is based on the German version presented to the authorities in April. The 54-page document has been entirely created with free open source software: written contents have obviously been developed with LibreOffice Writer (desktop) and collaboratively modified with LibreOffice Writer (online), charts have been created with LibreOffice Calc and prepared for publishing with LibreOffice Draw, drawings and tables have been developed or modified (from legacy PDF originals) with LibreOffice Draw, images have been prepared for publishing with GIMP, and the layout has been created with Scribus based on the existing templates.

  • LibreOffice QA/Dev Report: July 2020

    LibreOffice 6.4.5 was announced on July 2

  • Physics Based Animation Effects Week#10

    This week, I was mainly working on cleaning up and migrating the patches from my experimental branch to LO master.

Better Than Top: 7 System Monitoring Tools for Linux to Keep an Eye on Vital System Stats

The top command is good, but there are better alternatives. Take a look at these system monitoring tools for Linux that are similar to top but actually better.

Read more