Red Hat Enterprise Linux and CentOS Now Patched Against Latest Intel CPU Flaws

Filed under
Linux
Red Hat
Security

After responding to the latest security vulnerabilities affecting Intel CPU microarchitectures, Red Hat has released new Linux kernel security updates for Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 operating systems to address the well-known ZombieLoad v2 flaw and other issues. The CentOS community also ported the updates for their CentOS Linux 6 and CentOS Linux 7 systems.

The security vulnerabilities patched in this new Linux kernel security update are Machine Check Error on Page Size Change (IFU) (CVE-2018-12207), TSX Transaction Asynchronous Abort (TAA) (CVE-2019-11135), Intel GPU Denial Of Service while accessing MMIO in lower power state (CVE-2019-0154), and Intel GPU blitter manipulation that allows for arbitrary kernel memory write (CVE-2019-0155).

Read more

Red Hat: Oracle Linux 8 Update 1 (RHEL 8.1), SDNs and NFV

Filed under
Red Hat
  • Announcing Oracle Linux 8 Update 1

    Oracle is pleased to announce the general availability of Oracle Linux 8 Update 1. Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images will soon be available for download from the Oracle Software Delivery Cloud and Docker images will soon be available via Oracle Container Registry and Docker Hub.

    Oracle Linux 8 Update 1 ships with Red Hat Compatible Kernel (RHCK) (kernel-4.18.0-147.el8) kernel packages for the x86_64 platform (Intel & AMD), which include bug fixes, security fixes, and enhancements; the 64-bit Arm (aarch64) platform is also available for installation as a developer preview release.

  • Oracle Linux 8 Update 1 Announced With Udica, Optane DCPM Support

    Fresh off the release of Red Hat Enterprise Linux 8.1 at the beginning of November, Oracle is now shipping Oracle Linux 8 Update 1 as their spin of RHEL 8.1 with various changes on top -- including their "Unbreakable Enterprise Kernel" option.

  • Telco revolution or evolution: Depends on your perspective, but your network is changing

    As the market embraces edge computing and 5G networks, telecommunications service providers are increasingly looking for ways to migrate their monolithic services to microservices and containers. These providers are moving from legacy hardware appliances to virtualized network functions to containerized network functions on cloud infrastructure. Red Hat’s partnership with a rich ecosystem of software-defined networking (SDN) vendors, independent software vendors (ISVs), network equipment providers (NEPs), as well as its deep involvement in the open source projects powering these initiatives, give customers the choices and long-life support they need to build the services infrastructure that supports their business needs both today and tomorrow – as well as the journey in between.

  • The rise of the network edge and what it means for telecommunications

    5G. Software-defined networking (SDN) and network functions virtualization (NFV). IoT. Edge computing. Much has been said about these technologies and the impact they will have on the telecommunications services of tomorrow. But it’s when they’re talked about together—as part of the broader digital transformation of service provider networks and business models—that things really get interesting. It’s a story that may impact every corner of the telecommunications ecosystem, from mobile network operators (MNOs), traditional service providers, and cable network operators to cellular tower companies, data center operators, managed services providers, and vendors.

    SDN and NFV hold the promise of replacing enormous networks of proprietary, single-purpose appliances with racks of off-the-shelf compute and storage platforms that are running software from a variety of vendors for a variety of services. Progress on this front has been slowed by several issues, leaving operators looking for their next opportunity. It has emerged in the form of 5G, and whether they are early adopters or taking a wait-and-see approach, every telco company is looking for its 5G play.

Fedora: Qubes, rpminspect, fpaste, and ProcDump

Filed under
Red Hat
  • PoC to auto attach USB devices in Qubes

    Here is a PoC based on the qubesadmin API which can auto-attach USB devices to any VM as required. By default, Qubes auto-attaches any device to the sys-usb VM, which helps with bad or malware-laden USB devices. But in special cases we may want selected devices to be auto-attached to certain VMs. In this PoC example we attach any USB storage device, but we can add checks so that only selected devices match, or mark a few VMs to which no device may be attached (a rough sketch of the same idea appears after this list).

  • David Cantrell: rpminspect-0.9 released

    Very large packages (VLPs) are something I am working on with rpminspect. For example, the kernel package. A full build of the kernel source package generates a lot of files. I am working on improving rpminspect's speed and fixing issues found with individual inspections. These are only showing up when I do test runs comparing VLPs. The downside here is that it takes a little longer than with any other typical package.

  • Fedora pastebin and fpaste updates

    A pastebin lets you save text on a website for a length of time. This helps you exchange data easily with other users. For example, you can post error messages for help with a bug or other issue.

    The CentOS Pastebin is a community-maintained service that keeps pastes around for up to 24 hours. It also offers syntax highlighting for a large number of programming and markup languages.

  • ProcDump for Linux in Fedora

    ProcDump is a nifty debugging utility which is able to dump the core of a running application once a user-specified CPU or memory usage threshold is triggered. For instance, the invocation procdump -C 90 -p $MYPID instructs ProcDump to monitor the process with ID $MYPID, waiting for a 90% CPU usage spike. Once it hits, ProcDump creates the core dump and exits. This lets you later inspect the backtrace and memory state at the moment of the spike without having to attach a debugger to the process, helping you determine which parts of your code might be causing performance issues (a small demo script appears after this list).
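
The PoC itself lives in the linked post; as a rough illustration of the Qubes auto-attach idea, here is a hypothetical sketch (not the author's code) that polls sys-usb and attaches whatever it finds to a chosen VM using the qubesadmin Python API. The VM names, the polling approach, and the qubesadmin names used here (DeviceAssignment, devices["usb"], dev.ident) are assumptions to verify against your Qubes release; the original PoC is event-driven rather than polling.

    #!/usr/bin/env python3
    # Hypothetical sketch, NOT the article's PoC: poll sys-usb and attach any
    # USB device it exposes to a chosen VM via the qubesadmin Python API.
    # The VM names and the qubesadmin names used here are assumptions;
    # verify them against your Qubes release.
    import time

    import qubesadmin
    from qubesadmin.devices import DeviceAssignment

    TARGET_VM = "work"      # assumed name of the VM that should get the devices
    SOURCE_VM = "sys-usb"   # the default Qubes USB qube

    app = qubesadmin.Qubes()
    target = app.domains[TARGET_VM]
    backend = app.domains[SOURCE_VM]

    seen = set()
    while True:
        for dev in backend.devices["usb"]:
            if dev.ident in seen:
                continue            # already handled this device
            seen.add(dev.ident)
            # Add your own filtering here, e.g. match dev.description against
            # a whitelist so only selected devices are auto-attached.
            try:
                assignment = DeviceAssignment(backend, dev.ident, persistent=False)
                target.devices["usb"].attach(assignment)
                print(f"attached {dev.ident} ({dev.description}) to {TARGET_VM}")
            except Exception as exc:
                print(f"could not attach {dev.ident}: {exc}")
        time.sleep(2)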
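
To try the ProcDump invocation quoted above, you need a process that actually spikes; the tiny script below simply burns one core for a minute, which should be enough to cross a 90% threshold under the defaults described in the excerpt. The file name and duration are arbitrary.

    #!/usr/bin/env python3
    # spin.py: generate a sustained CPU spike so that a watching
    #   procdump -C 90 -p <pid>
    # (as in the excerpt above) has something to catch.
    import os
    import time

    print(f"PID {os.getpid()} - point ProcDump at this process, e.g.:")
    print(f"    procdump -C 90 -p {os.getpid()}")

    deadline = time.time() + 60      # spin for one minute
    counter = 0
    while time.time() < deadline:    # busy loop keeps one core near 100%
        counter += 1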

Red Hat Leftovers

Filed under
Red Hat
  • Red Hat Adds AI Capabilities to Process Automation Suite
  • Department of Defense Enlists Red Hat to Help Improve Squadron Operations and Flight Training

    Red Hat, Inc., the world's leading provider of open source solutions, today announced that the Department of Defense (DoD) worked with Red Hat to help improve aircraft and pilot scheduling for United States Marine Corps (USMC), United States Navy (USN) and United States Air Force (USAF) aircrews. Using modern development practices and processes from Red Hat Open Innovation Labs that prioritized end user needs, the project team identified unaddressed roadblocks and gained new skills to build the right solution, a digital "Puckboard" application, for their unique scheduling challenge.

    [...]

    The problem facing squadrons was seemingly straightforward: how to improve and digitize the management of flight training operations. The existing process was entirely manual, with each entry representing pertinent information such as a pilot’s name, associated training syllabus, and the location and time of flights. Simple at a glance, the number of cognitive variables involved made the undertaking stressful for operators and difficult to scale across squadrons and bases.

    For more than a decade, various project teams within the DoD had tried to improve the system via custom built applications, aircraft scheduling software and hybrid solutions. None of these deployments withstood the test of time or could be replicated if the operator took a new role elsewhere. The Defense Innovation Unit (DIU), an organization tasked with accelerating commercial technologies into the military, took on this challenge.

  • It's RedHat, And Everyone Else

    As time passes, it appears that corporations are primarily considering one distribution when installing Linux, and that distro is clearly RedHat. That probably does not come as any major surprise, but it appears RedHat's dominance continues to get stronger. What used to be a landscape littered with a multitude of choices has nearly been rendered down to one. Wow! That didn't take long. The open source software dynamic seemed to be formed on the premise that users were never again going to be pigeon-holed into using one piece of software. Or, perhaps better stated, that was a byproduct of making the source code readily available. And, that is still true to this day. However, as a corporate citizen in today's business climate, one finds oneself with limited possibilities.

    It was a mere 20 years ago when the buzz of Linux was starting to hit its stride. Everywhere you looked, there was a different flavor of Linux. There were nearly too many to count. And, these were not just hobbyist distros. Instead, they were corporations rising like corn stalks all over the place. Sure, there were more dominant players, but one had the ability to analyze at least 10 different fully corporate supported distributions when making a decision. With that amount of possibilities, the environment was ripe for consolidation or elimination. And, we have all watched that take place. But, did we ever think we were going to find ourselves in the current predicament?

    The data that has been collected over the past five years paints a concerning picture. Even a mere five years ago, it seemed likely that at a minimum RedHat would always have Suse as a legitimate competitor. After all, those were the two distros that seemed to win the consolidation and elimination war. At least in the corporate space. As was widely reported during that time, RedHat had somewhere in the neighborhood of 70% marketshare. It was always the gorilla in the room. But, Suse was always looked upon as an eager and willing participant, no matter its stature, and tended to garner most of the remaining marketshare. That is the way it appeared for a length of time prior to this decline over the past few years.

  • Scale testing the Red Hat OpenStack Edge with OpenShift

    Red Hat Openstack offers an Edge computing architecture called Distributed Compute Nodes (DCN), which allows for many hundreds or thousands of Edge sites by deploying hundreds or thousands of compute nodes remotely, all interacting with a central control plane over a routed (L3) network. Distributed compute nodes allow compute node sites to be deployed closer to where they are used, and are generally deployed in greater numbers than would occur in a central datacenter.

    With all the advantages that this architecture brings, there are also several scale challenges due to the large number of compute nodes that are managed by the OpenStack controllers. A previous post details deploying, running and testing a large scale environment using Red Hat OpenStack Director on real hardware, but this post is about how we can simulate far greater scale and load on the OpenStack control plane for testing using containers running on OpenShift without needing nearly as much hardware.

    In order to prove the effectiveness of Red Hat's DCN architecture, we'd like to be able to get quantitative benchmarks on Red Hat Openstack's performance when many hundreds or thousands of compute nodes are deployed.

Red Hat: Fedora BoF at Ohio LinuxFest 2019, Technical Projects and More

Filed under
Red Hat
  • Fedora BoF at Ohio LinuxFest 2019

    I held a Fedora Birds-of-a-Feather (BoF) session at Ohio LinuxFest in Columbus, Ohio on November 1. Ohio LinuxFest is a regional conference for free and open source software professionals and enthusiasts. Since it’s just a few hours’ drive from my house, it seemed like an obvious event for me to attend. We had a great turnout and a lively conversation over the course of an hour.

    The session started a little slowly as many people were still in the keynote. But a few minutes later, the room was nearly full. I didn’t take a count, but at the peak, we probably had about two dozen attendees. Some were existing Fedora users and some were there to learn more about Fedora.

    I didn’t plan any particular content, since I wanted to let the group drive the discussion based on what was interesting to them. We ended up talking about documentation a fair amount. Two of the attendees created a FAS account that weekend so they can start contributing to the docs! Several more claimed the OLF BoF badge, and I sent them all a follow-up email directing them to the Join SIG’s Welcome page.

    In addition to docs, we talked about the general Fedora release process—how we determine our schedule and how we decide when to release. I brought some USB sticks with Fedora 31 Workstation for people to try. And of course I had stickers, pens, and pins to give away.

  • Java Applications Go Cloud-Native with Project Quarkus

    Getting Java applications to run well in a cloud-native environment hasn't always been easy, but that could soon change thanks to the open-source Quarkus framework.

  • Open Liberty Java runtime now available to Red Hat Runtimes subscribers

    Open Liberty is a lightweight, production-ready Java runtime for containerizing and deploying microservices to the cloud, and is now available as part of a Red Hat Runtimes subscription. If you are a Red Hat Runtimes subscriber, you can write your Eclipse MicroProfile and Jakarta EE apps on Open Liberty and then run them in containers on Red Hat OpenShift, with commercial support from Red Hat and IBM.

  • Tracing Kubernetes applications with Jaeger and Eclipse Che

    Developing distributed applications is complicated. You can wait to monitor for performance issues until you launch the application on your test or staging servers, or in production if you’re feeling lucky, but why not track performance as you develop? This allows you to identify improvement opportunities before rolling out changes to a test or production environment. This article demonstrates how two tools, Eclipse Che and Jaeger, can work together to integrate performance monitoring into your development environment (a minimal tracing sketch appears after this list).

  • 3 key strategies for becoming a diversity and inclusion leader

    Issues of inclusivity also make workplaces challenging for underrepresented developers. Women are 45% more likely to leave their jobs in the first year than men. While some point to factors outside the workplace to account for this, we know that women tend to leave their roles because of feelings of isolation and poor sponsorship. This exacerbates the $16 billion-a-year problem the tech industry faces in hiring and retraining costs.

  • How to contribute to Kubernetes if you have a fulltime job

    I started contributing to Kubernetes (K8s) in October 2018, when I was working on the Product Security Incident Response Team at IBM. I was drawn to distributed systems, but I couldn't work with them in my day job, so my mentor, Lin Sun, suggested I contribute to open source distributed systems in my spare time. I became interested in K8s and have never looked back!
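
Returning to the Jaeger and Eclipse Che item above: the article covers the tooling integration, but as a taste of what emitting a trace looks like from application code, here is a minimal sketch using the jaeger_client Python library. The service and span names are made up, and it assumes a Jaeger agent listening on its default localhost ports.

    #!/usr/bin/env python3
    # Minimal tracing sketch using the jaeger_client Python library.
    # Assumes a Jaeger agent is reachable on its default localhost ports;
    # the service and span names are made up for illustration.
    import time

    from jaeger_client import Config

    config = Config(
        config={
            "sampler": {"type": "const", "param": 1},  # sample everything (dev only)
            "logging": True,
        },
        service_name="hello-tracing",
        validate=True,
    )
    tracer = config.initialize_tracer()

    with tracer.start_span("say-hello") as span:
        span.set_tag("example", True)
        time.sleep(0.1)                      # pretend to do some work
        span.log_kv({"event": "work-done"})

    time.sleep(2)     # give the reporter a moment to flush spans to the agent
    tracer.close()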

Red Hat: Project Quay, DoD, IBM Shares, Prometheus, DPDK/vDPA

Filed under
Red Hat
  • Red Hat Introduces open source Project Quay container registry

    Today Red Hat is introducing the open sourcing of Project Quay, the upstream project representing the code that powers Red Hat Quay and Quay.io. Newly open sourced, as per Red Hat’s open source commitment, Project Quay represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat.

    Quay was the first private hosted registry on the market, having been launched in late 2013. It grew in users and interest with its focus on developer experience and highly responsive support, and capabilities such as image rollback and zero-downtime garbage collection. Quay was acquired in 2014 by CoreOS to bolster its mission to secure the internet through automated operations, and shortly after the CoreOS acquisition, the on-premise offering of Quay was released. This product is now known as Red Hat Quay.

  • DoD Taps Red Hat To Improve Squadron Operations

    The United States Department of Defense (DoD) partnered with Red Hat to help improve aircraft and pilot scheduling for United States Marine Corps (USMC), United States Navy (USN) and United States Air Force (USAF) aircrews.

  • There’s a Reason IBM Stock Is Dirt Cheap as Tech Stocks Soar

    Red Hat has a strong moat in the Unix operating system space. It is bringing innovation to the market by leveraging Linux, containers, and Kubernetes. And it is standardizing on the Red Hat OpenShift platform and bringing it together with IBM’s enterprise. This will position IBM to lead in the hybrid cloud market.

  • Federated Prometheus with Thanos Receive

    OpenShift Container Platform 4 comes with a Prometheus monitoring stack preconfigured. This stack is in charge of getting cluster metrics to ensure everything is working seamlessly, so cool, isn’t it?

    But what happens if we have more than one OpenShift cluster and we want to consume those metrics from a single tool? Let me introduce you to Thanos.

    In the words of its creators, Thanos is a set of components that can be composed into a highly available metrics system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments. (A small query example appears after this list.)

  • Making high performance networking applications work on hybrid clouds

    In the previous post we covered the details of a vDPA-related proof-of-concept (PoC) showing how Containerized Network Functions (CNFs) could be accelerated using a combination of vDPA interfaces and DPDK libraries. This was accomplished by using the Multus CNI plugin to add vDPA interfaces as secondary interfaces to Kubernetes containers.

    We now turn our attention from NFV and accelerating CNFs to the general topic of accelerating containerized applications over different types of clouds. Similar to the previous PoC our focus remains on providing accelerated L2 interfaces to containers leveraging kubernetes to orchestrate the overall solution. We also continue using DPDK libraries to consume the packet efficiently within the application.

    In a nutshell, the goal of the second PoC is to have a single container image with a secondary accelerated interface that can run over multiple clouds without changes to the container image. This implies that the image will be certified only once, decoupled from the cloud it’s running on.

    As will be explained, in some cases we can provide wirespeed/wirelatency performance (vDPA and full virtio HW offloading), and in other cases reduced performance where translations are needed, such as on AWS when connecting to its Elastic Network Adapter (ENA) interface. Still, as will be seen, it’s the same image running on all clouds.

  • Pod Lifecycle Event Generator: Understanding the “PLEG is not healthy” issue in Kubernetes
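
Returning to the Thanos item above: once a Thanos Querier (or any Prometheus-compatible endpoint) is in place, consuming the federated metrics is just an HTTP call against the standard Prometheus query API. The sketch below is a minimal example; the endpoint URL and the cluster label are placeholders.

    #!/usr/bin/env python3
    # Query a Thanos Querier (or any Prometheus-compatible endpoint) for the
    # `up` metric across federated clusters. The URL below is a placeholder.
    import requests

    THANOS_QUERY_URL = "http://thanos-querier.example.com:9090"   # placeholder

    resp = requests.get(
        f"{THANOS_QUERY_URL}/api/v1/query",
        params={"query": "up"},          # any PromQL expression works here
        timeout=10,
    )
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        labels = result["metric"]
        value = result["value"][1]
        print(f'{labels.get("instance", "?")} '
              f'(cluster={labels.get("cluster", "n/a")}): up={value}')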

Fedora: Updates, Upgrade and Fedora Women’s Day in Peru

Filed under
Red Hat
  • Fedora status updates: October 2019

    The Fedora Silverblue team was not able to get the necessary changes into Fedora 31 to support having Flatpak pre-installed. They are looking at the possibility of re-spinning the Silverblue ISO to incorporate the changes. But they did update the Fedora 31 Flatpak runtime. The team updated the Flatpak’ed GNOME applications to GNOME 3.34 and built them against the Fedora 31 runtime.

  • Upgrade Fedora 30 to Fedora 31
  • Fedora Women’s Day (FWD) 2019

Red Hat Leftovers

Filed under
Red Hat
  • GitHub report surprises, serverless hotness, and more industry trends

    Now, let's discuss how developers can use Quarkus to bring Java into serverless, a place where previously, it was unable to go. Quarkus introduces a comprehensive and seamless approach to generating an operating system specific (aka native) executable from your Java code, as you do with languages like Go and C/C++. Environments such as event-driven and serverless, where you need to start a service to react to an event, require a low time-to-first-response, and traditional Java stacks simply cannot provide this. Knative enables developers to run cloud-native applications as serverless containers in seconds and the containers will go down to zero on demand.

    In addition to compiling Java to Knative, Quarkus aims to improve developer productivity. Quarkus works out of the box with popular Java standards, frameworks and libraries like Eclipse MicroProfile, Apache Kafka, RESTEasy, Hibernate, Spring, and many more. Developers familiar with these will feel at home with Quarkus, which should streamline code for the majority of common use cases while providing the flexibility to cover others that come up.

  • When Quarkus Meets Knative Serverless Workloads

    Daniel Oh is a principal technical product marketing manager at Red Hat and serves as a CNCF ambassador as well. He is well recognized for his work in cloud-native application development and DevOps practices across many open source projects and international conferences.

  • Making things Go: Command Line Heroes draws infrastructure

    Most of our episodes feature languages that have clear arcs. "The Infrastructure Effect" was different. By all accounts, COBOL is a language heading the way of Latin. There are only a few specialists who are proficient COBOL coders. But it’s still vital to many long-lasting institutions that affect millions: the banking industry, the IRS, and manufacturing. And the world of tech infrastructure is moving on—to Go. Where does that leave COBOL in the next few years? And how do you tease all of that in an image?

    We had to decide what visual themes could we use to depict each language—and then, how to combine them into a single, coherent frame. COBOL and Go have a similar function, so we wanted to make sure each language had clear, distinct imagery. We decided to rely on some of their real-world applications: the bank and subways for COBOL, and the cloud-based applications for Go.

  • Using the Red Hat OpenShift tuned Operator for Elasticsearch

    I recently assisted a client to deploy Elastic Cloud on Kubernetes (ECK) on Red Hat OpenShift 4.x. They had run into an issue where Elasticsearch would throw an error similar to:

    Max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

    According to the official documentation, Elasticsearch uses a mmapfs directory by default to store its indices. The default operating system limits on mmap counts are likely to be too low, which may result in out of memory exceptions. Usually, administrators would just increase the limits by running:

    sysctl -w vm.max_map_count=262144

    However, OpenShift uses Red Hat CoreOS for its worker nodes and, because it is an automatically updating, minimal operating system for running containerized workloads, you shouldn’t manually log on to worker nodes and make changes. This approach is unscalable and results in a worker node becoming tainted. Instead, OpenShift provides an elegant and scalable method to achieve the same result via its Node Tuning Operator (a small check script appears after this list).

  • bcc-tools brings dynamic kernel tracing to Red Hat Enterprise Linux 8.1

    In Red Hat Enterprise Linux 8.1, Red Hat ships a set of dynamic kernel tracing tools called bcc-tools, fully supported on x86_64, that make use of a kernel technology called extended Berkeley Packet Filter (eBPF). With these tools, you can quickly gain insight into certain aspects of system performance that would previously have required more time and effort from the system and operator. (A minimal BCC example appears after this list.)

    The eBPF technology allows dynamic kernel tracing without requiring kernel modules (like systemtap) or rebooting of the kernel (as with debug kernels). eBPF accomplishes this while maintaining minimal overhead for each trace point, making these tools an ideal way to instrument running kernels in production.

  • What open communities teach us about empowering customers

    When it comes to digital transformation, businesses seem to be on the right track improving their customers' experiences through the use of technologies. Today, so much digital transformation literature describes the benefits of "delivering new value to customers" or "delivering value to customers in new ways."
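
Returning to the Elasticsearch tuning item above: whichever way the sysctl is applied, it is worth verifying what a node actually has in effect. The small script below is a sketch that reads the value from /proc and compares it against the 262144 minimum from the error message; run it on the node itself or anywhere the host /proc is visible.

    #!/usr/bin/env python3
    # Check whether vm.max_map_count meets Elasticsearch's documented minimum
    # (262144, per the error message quoted above).
    import sys

    REQUIRED = 262144

    with open("/proc/sys/vm/max_map_count") as f:
        current = int(f.read().strip())

    if current < REQUIRED:
        print(f"vm.max_map_count={current} is below {REQUIRED}; tuning still needed")
        sys.exit(1)
    print(f"vm.max_map_count={current} - OK")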
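
Returning to the bcc-tools item above: the package ships ready-made tools, but the underlying BCC framework is also scriptable from Python. The following is the classic minimal example, tracing entries to the clone() syscall; it is a sketch that assumes root privileges and the Python bcc bindings installed alongside bcc-tools.

    #!/usr/bin/env python3
    # Classic minimal BCC example: print a line each time the clone() syscall
    # is entered. Requires root and the Python bcc bindings from bcc-tools.
    from bcc import BPF

    program = r"""
    int do_trace(void *ctx) {
        bpf_trace_printk("clone() called\n");
        return 0;
    }
    """

    b = BPF(text=program)   # compile and load the eBPF program
    # Resolve the kernel symbol for the clone syscall (handles arch prefixes).
    b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="do_trace")
    print("Tracing clone() ... hit Ctrl-C to end")
    b.trace_print()         # stream lines from the kernel trace pipe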

Red Hat and SUSE Servers: Boston Children’s Hospital, IBM and SUSE in High-Performance Computing (HPC)

Filed under
Red Hat
SUSE
  • How Boston Children’s Hospital Augments Doctors’ Cognition with Red Hat OpenShift

    Software can be an enabler for healers. At Red Hat, we’ve seen this first hand from customers like Boston Children’s Hospital. That venerable infirmary is using Red Hat OpenShift and Linux containers to enhance their medical capabilities, and to augment their doctors’ cognitive capacity.

  • Entry Server Bang For The Buck, IBM i Versus Red Hat Linux

    In last week’s issue, we did a competitive analysis of the entry, single-socket Power S914 machines running IBM i against Dell PowerEdge servers using various Intel Xeon processors as well as an AMD Epyc chip running a Windows Server and SQL Server stack from Microsoft. This week, and particularly in the wake of IBM’s recent acquisition of Red Hat, we are looking at how entry IBM i platforms rate in terms of cost and performance against X86 machines running a Linux stack and an appropriate open source relational database that has enterprise support.

    Just as a recap from last week’s story, the IBM i matchup against Windows Server systems were encouraging in that very small configurations of the Power Systems machine running IBM i were less expensive per unit of online transaction processing performance as well as per user. However, on slightly larger configurations of single socket machines, thanks mostly to the very high cost per core of the IBM i operating system and its integrated middleware and database as you move from the P05 to P10 software tiers on the Power S914, the capital outlay can get very large at list price for the Power iron, and the software gets very pricey, too. The only thing that keeps the IBM i platform in the running is the substantially higher performance per core that the Power9 chip offers on machines with four, six, or eight cores.

    Such processors are fairly modest by 2019 standards, by the way, when a high-end chip has 24, 28, 32, or now 64 cores, and even mainstream ones have 12, 16, or 18 cores. If you want to see the rationale of the hardware configurations that we ginned up for the comparisons, we suggest that you review the story from last week. Suffice it to say, we tried to get machines with roughly the same core counts and configuration across the Power and X86 machines, and generally, the X86 cores for these classes of single socket servers do a lot less work.

  • Rise of the Chameleon – SUSE at SC19

    The impact of High-Performance Computing (HPC) goes beyond traditional research boundaries to enhance our daily lives. SC19 is the international conference for High Performance Computing, networking, storage and analysis, taking place in Denver November 17-22. SUSE will once again have a strong presence at SC19 – and if you are attending we would love to talk to you! Our SUSE booth (#1917) will include our popular Partner Theater as well as a VR light saber game with a Star Wars-themed backdrop. We will showcase SUSE’s HPC core solutions (OS, tools and services) as well as AI/ML, storage and cloud open source products. Plus, during the gala opening reception we will premiere our new mini-movie “Sam the IT Manager in The Way of the Chameleon: The Quest for HPC” which you don’t want to miss (we’ll provide the popcorn)!

Fedora 31 and Control Group v2

Filed under
Red Hat

Over the last few years, I have seen the Linux kernel team working on Control Group (cgroup) v2, adding new features and fixing lots of issues with cgroup v1. The kernel team announced that cgroup v2 was stable back in 2016.

Last year at the All Systems Go conference, I met a lot of the engineers who are working on cgroup v2, most of them from Facebook, as well as the systemd team. We talked about the issues and problems with cgroup v1 and the deep desire to get Linux distributions to use cgroup v2 by default. The last few versions of Fedora have supported cgroup v2, but it was not enabled as the default. Almost no one will modify the defaults for something as fundamental as the default resource-constraint system in Fedora, causing cgroup v2 to languish in obscurity.

Read more
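
A quick way to tell which cgroup hierarchy a running system uses is to look for the unified hierarchy's control file under /sys/fs/cgroup. The check below is a small sketch of that test; on a default Fedora 31 install it should report v2 unless the default has been overridden on the kernel command line.

    #!/usr/bin/env python3
    # Report whether this system runs the unified cgroup v2 hierarchy.
    # On cgroup v2, /sys/fs/cgroup/cgroup.controllers exists and lists the
    # available controllers; on v1/hybrid it does not exist at that path.
    from pathlib import Path

    controllers_file = Path("/sys/fs/cgroup/cgroup.controllers")

    if controllers_file.exists():
        controllers = controllers_file.read_text().split()
        print("cgroup v2 (unified hierarchy) is in use")
        print("available controllers:", ", ".join(controllers))
    else:
        print("cgroup v1 (legacy or hybrid hierarchy) is in use")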

Also: Linking Linux system automation to the bottom line
