
“What Librem 5 batch am I in?”

Previously we’ve indicated that we would contact people as their particular batch is being prepared for shipping. For instance, we have started sending out emails to backers who will receive Birch in the coming days and weeks. As we mentioned in our post Supplying the Demand, we were surprised at the demand for our early batches. We also expect that some customers will change their minds one (or more) times about which batch they’d prefer as each batch comes out and more videos, pictures, and articles are posted. For these and other reasons, we’ve been reluctant to notify people which batch they are likely to be in, as it could change as people change their minds and slots open up. Read more

Graphics: Adreno, Vulkan, Khronos, and Radeon Pro Software for Enterprise

  • Qualcomm's Adreno 640 GPU Is Working Easily With The Freedreno OpenGL/Vulkan Drivers

    The Adreno 640 GPU used by Qualcomm's Snapdragon 855/855+ SoCs is now working with the open-source Freedreno Gallium3D OpenGL and "TURNIP" Vulkan drivers with the newest Mesa 20.0 development code. Aside from the forthcoming Adreno 680/685 GPUs for Snapdragon-powered Windows laptops, the Adreno 640 sits at the top of the Adreno 600 series line-up. The Adreno 640 is built on a 7nm process and offers more ALUs than the Adreno 630 and older parts, an 899~1037 GFLOPS rating, and other improvements.

  • Intel's Vulkan Linux Driver Lands Timeline Semaphore Support

    A change to look forward to with Mesa 20.0, due out next quarter, is Vulkan timeline semaphore support (VK_KHR_timeline_semaphore) for Intel's "ANV" open-source driver. Vulkan timeline semaphore support is the latest synchronization model for the Vulkan graphics API, building upon earlier primitives. The Vulkan timeline semaphore extends VkSemaphore and supports signal/wait from host threads, better platform support, a monotonically increasing counter that can be used for more descriptive purposes, and other design improvements.
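    The core idea of a timeline semaphore is a monotonically increasing 64-bit counter that waiters block on until it reaches a target value, with both signal and wait available from host threads. The following is a minimal Python sketch of those semantics only — it is not Vulkan code, and the `TimelineSemaphore` class is a hypothetical stand-in for the real `VkSemaphore` object:

```python
import threading

class TimelineSemaphore:
    """Toy model of a Vulkan-style timeline semaphore: a monotonically
    increasing counter that waiters block on until a target value."""
    def __init__(self, initial=0):
        self._value = initial
        self._cond = threading.Condition()

    def signal(self, value):
        # Timeline values may only move forward, mirroring the
        # monotonic counter the Vulkan extension specifies.
        with self._cond:
            if value < self._value:
                raise ValueError("timeline values must not decrease")
            self._value = value
            self._cond.notify_all()

    def wait(self, target, timeout=None):
        # Block until the counter reaches the requested value.
        with self._cond:
            return self._cond.wait_for(lambda: self._value >= target, timeout)

# One host thread waits for step 2 while another signals steps 1 and 2.
sem = TimelineSemaphore()
done = []

def worker():
    sem.wait(2)            # blocks until the counter reaches 2
    done.append(sem._value)

t = threading.Thread(target=worker)
t.start()
sem.signal(1)              # waiter keeps blocking
sem.signal(2)              # waiter wakes up
t.join()
print(done)  # [2]
```

    Because a single counter can describe many points on a timeline, one semaphore can replace chains of binary semaphores between queues and host threads.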

  • Khronos Next Pursuing An Analytic Rendering API

    The Khronos Group has been expanding into a lot of new areas in recent times, from OpenXR to 3D Commerce to NNEF, and is now forming an exploratory group for creating an analytic rendering API. The Khronos Analytic Rendering API would be an industry-standard API around data visualizations. This API would sit a step above graphics APIs like Vulkan and be geared toward data presentation purposes. The API has yet to be formalized as it's still in the early stages, but it would likely be akin to a vendor-neutral equivalent of NVIDIA VisRTX or Intel OSPRay.

  • Radeon Pro Software for Enterprise 19.Q4 for Linux Released

    AMD on Tuesday released their Radeon Pro Software for Enterprise 19.Q4 for Linux package as their newest quarterly driver release intended for their professional graphics card offerings. Radeon Pro Software for Enterprise 19.Q4 for Linux is arriving as scheduled and continues to provide both the AMDGPU-PRO and AMDGPU-Open driver stacks depending upon your preferences.

Red Hat: Project Quay, DoD, IBM Shares, Prometheus, DPDK/vDPA

  • Red Hat Introduces open source Project Quay container registry

    Today Red Hat is introducing the open sourcing of Project Quay, the upstream project representing the code that powers Red Hat Quay and Quay.io. Newly open sourced, as per Red Hat’s open source commitment, Project Quay represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat. Quay was the first private hosted registry on the market, having been launched in late 2013. It grew in users and interest with its focus on developer experience and highly responsive support, and capabilities such as image rollback and zero-downtime garbage collection. Quay was acquired in 2014 by CoreOS to bolster its mission to secure the internet through automated operations, and shortly after the CoreOS acquisition, the on-premise offering of Quay was released. This product is now known as Red Hat Quay.

  • DoD Taps Red Hat To Improve Squadron Operations

    The United States Department of Defense (DoD) partnered with Red Hat to help improve aircraft and pilot scheduling for United States Marine Corps (USMC), United States Navy (USN) and United States Air Force (USAF) aircrews.

  • There’s a Reason IBM Stock Is Dirt Cheap as Tech Stocks Soar

    Red Hat has a strong moat in the Unix operating system space. It is bringing innovation to the market by leveraging Linux, containers, and Kubernetes. And it is standardizing on the Red Hat OpenShift platform and bringing it together with IBM’s enterprise. This will position IBM to lead in the hybrid cloud market.

  • Federated Prometheus with Thanos Receive

    OpenShift Container Platform 4 comes with a Prometheus monitoring stack preconfigured. This stack is in charge of getting cluster metrics to ensure everything is working seamlessly, so cool, isn’t it? But what happens if we have more than one OpenShift cluster and we want to consume those metrics from a single tool? Let me introduce you to Thanos. In the words of its creators, Thanos is a set of components that can be composed into a highly available metrics system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments.
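    What a federated query layer such as Thanos Query does, at its core, is fan a query out over every cluster's Prometheus and deduplicate the replicated series (Thanos drops a configurable replica label, e.g. via its --query.replica-label flag). The sketch below illustrates just that merge-and-dedup idea in plain Python; the in-memory "clusters" are hypothetical stand-ins for real Prometheus endpoints, which Thanos actually reaches over its Store API:

```python
# Hypothetical in-memory stand-ins for two clusters' Prometheus data;
# a real deployment fans out over the network instead.
cluster_a = [
    {"labels": {"__name__": "up", "job": "node", "replica": "a"}, "value": 1.0},
]
cluster_b = [
    {"labels": {"__name__": "up", "job": "node", "replica": "b"}, "value": 1.0},
    {"labels": {"__name__": "up", "job": "api",  "replica": "b"}, "value": 0.0},
]

def global_query(sources, drop_label="replica"):
    """Merge series from every source, deduplicating replicated series
    by ignoring the replica label (the Thanos-style dedup idea)."""
    seen = {}
    for source in sources:
        for series in source:
            # Identity of a series = its label set minus the replica label.
            key = tuple(sorted(
                (k, v) for k, v in series["labels"].items() if k != drop_label))
            seen.setdefault(key, series["value"])  # keep first replica seen
    return seen

merged = global_query([cluster_a, cluster_b])
print(len(merged))  # 2 unique series instead of 3 raw ones
```

    The same label-set-minus-replica identity is what lets two Prometheus replicas scrape the same targets for high availability without the global view double-counting them.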

  • Making high performance networking applications work on hybrid clouds

    In the previous post we covered the details of a vDPA-related proof-of-concept (PoC) showing how Containerized Network Functions (CNFs) could be accelerated using a combination of vDPA interfaces and DPDK libraries. This was accomplished by using the Multus CNI plugin to add vDPA secondary interfaces to Kubernetes containers. We now turn our attention from NFV and accelerating CNFs to the general topic of accelerating containerized applications over different types of clouds. Similar to the previous PoC, our focus remains on providing accelerated L2 interfaces to containers, leveraging Kubernetes to orchestrate the overall solution. We also continue using DPDK libraries to consume packets efficiently within the application.

    In a nutshell, the goal of the second PoC is to have a single container image with a secondary accelerated interface that can run over multiple clouds without changes in the container image. This implies that the image will be certified only once, decoupled from the cloud it’s running on. As will be explained, in some cases we can provide wirespeed/wire-latency performance (vDPA and full virtio HW offloading), and in other cases reduced performance if translations are needed, such as on AWS when connecting to its Elastic Network Adapter (ENA) interface. Still, as will be seen, it’s the same image running on all clouds.

  • Pod Lifecycle Event Generator: Understanding the “PLEG is not healthy” issue in Kubernetes