Red Hat: Kubernetes, RHEL Impact and Halloween Release

Filed under
Red Hat
  • Why you don't have to be afraid of Kubernetes

    It was fun to work at a large web property in the late 1990s and early 2000s. My experience takes me back to American Greetings Interactive, where on Valentine's Day, we had one of the top 10 sites on the internet (measured by web traffic). We delivered e-cards for AmericanGreetings.com, BlueMountain.com, and others, as well as providing e-cards for partners like MSN and AOL. Veterans of the organization fondly remember epic stories of doing great battle with other e-card sites like Hallmark. As an aside, I also ran large web properties for Holly Hobbie, Care Bears, and Strawberry Shortcake.

    I remember like it was yesterday the first time we had a real problem. Normally, we had about 200Mbps of traffic coming in our front doors (routers, firewalls, and load balancers). But, suddenly, out of nowhere, the Multi Router Traffic Grapher (MRTG) graphs spiked to 2Gbps in a few minutes. I was running around, scrambling like crazy. I understood our entire technology stack, from the routers, switches, firewalls, and load balancers, to the Linux/Apache web servers, to our Python stack (a meta version of FastCGI), and the Network File System (NFS) servers. I knew where all of the config files were, I had access to all of the admin interfaces, and I was a seasoned, battle-hardened sysadmin with years of experience troubleshooting complex problems.

    But, I couldn't figure out what was happening...

    Five minutes feels like an eternity when you are frantically typing commands across a thousand Linux servers. I knew the site was going to go down any second because it's fairly easy to overwhelm a thousand-node cluster when it's divided up and compartmentalized into smaller clusters.

  • The economic impact of Red Hat Enterprise Linux: How IT professionals benefit

    It’s no overstatement to say that the IT landscape completely changed with the introduction of Red Hat Enterprise Linux more than a decade and a half ago. For 2019, IDC estimated global business revenue of $188 trillion. Of this, they estimate that at least 40% is touched by software, putting the IT footprint at an estimated $81 trillion. Yes, you read that right: $81 trillion. Since all of the software forming this global business IT footprint has to run on an operating system, IDC estimates that over 50% of it runs on Linux, with Red Hat Enterprise Linux accounting for 25% of that.

    That’s a lot of big numbers, but what does it all mean? It means that Red Hat Enterprise Linux has changed the experience of many IT professionals around the globe. In a software-centric world, we have seen ongoing growth in demand for support and IT services, which in turn helps fuel the global IT ecosystem.

    When IDC asked IT organizations how Red Hat Enterprise Linux benefitted them, they discovered a 12% savings in IT staff productivity. This means that IT professionals spend less time managing servers, doing routine IT tasks, resolving support calls, deploying new business apps and upgrading mission-critical apps. But that’s not all.

  • The spooktacular tale of Red Hat's Halloween release

    In many stories and myths, naming is important. Knowing the proper name of something gives you power over it. Likewise, naming has been important for Red Hat Linux over the years.

    The Halloween release was actually a paid beta and not a 1.0. The Halloween release was dubbed Red Hat Software Linux 0.9, and started a tradition of having a codename for the release that lasted through the final Red Hat Linux release (9.0.93, "Severn"), and carried over to Fedora for many years.

    The tradition was to have a name for a release that was somewhat related to the previous release name. For example, the 1.0 release was "Mother's Day," and "Rembrandt" followed "Picasso," and "Colgate" followed it. (For the record, the best release name was a Fedora release dubbed "Zod," which allowed many fun headlines playing off the Superman II villain.)

Red Hat Storage and Fedora

Filed under
Red Hat
  • Achieving maximum performance from a fixed size Ceph object storage cluster

    We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small and large object workloads. As detailed in the first post, the Ceph cluster was built using a single OSD (Object Storage Device) per HDD, for a total of 112 OSDs in the cluster. In this post, we look at the top-line performance for different object sizes and workloads.

    Note: The terms "read" and HTTP GET are used interchangeably throughout this post, as are the terms HTTP PUT and "write." (A client-side sketch of these operations appears after this list.)

  • File systems unfit as distributed storage backends: lessons from 10 years of Ceph evolution

    For a decade, the Ceph distributed file system followed the conventional wisdom of building its storage backend on top of local file systems. This is a preferred choice for most distributed file systems today because it allows them to benefit from the convenience and maturity of battle-tested code. Ceph's experience, however, shows that this comes at a high price. First, developing a zero-overhead transaction mechanism is challenging. Second, metadata performance at the local level can significantly affect performance at the distributed level. Third, supporting emerging storage hardware is painstakingly slow.

    Ceph addressed these issues with BlueStore, a new backend designed to run directly on raw storage devices. In only two years since its inception, BlueStore outperformed previously established backends and has been adopted by 70% of users in production. By running in user space and fully controlling the I/O stack, it has enabled space-efficient metadata and data checksums, fast overwrites of erasure-coded data, inline compression, and decreased performance variability, while avoiding a series of performance pitfalls of local file systems. Finally, it makes the adoption of backwards-incompatible storage hardware possible, an important trait in a changing storage landscape that is learning to embrace hardware diversity.

  • podman-compose: Review Request

    Want to use docker-compose.yaml files with podman on Fedora?
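
The Ceph post above counts throughput in HTTP GETs and PUTs against the object gateway. As a rough client-side illustration of one such write/read pair, here is a minimal Python sketch using boto3 against a Ceph RADOS Gateway S3 endpoint; the endpoint URL, credentials, bucket name, and object size are placeholder assumptions, not values from the post.

    # Minimal sketch: one S3-style write (HTTP PUT) and read (HTTP GET)
    # against a Ceph RADOS Gateway. All connection details are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="bench")

    payload = b"x" * 64 * 1024  # a 64 KiB "small object"
    s3.put_object(Bucket="bench", Key="obj-0001", Body=payload)          # "write"
    body = s3.get_object(Bucket="bench", Key="obj-0001")["Body"].read()  # "read"
    assert body == payload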

Firefox tips for Fedora 31

Filed under
Red Hat
Moz/FF

Fedora 31 Workstation ships with Firefox using a Wayland backend instead of X11 by default. That’s another step in the ongoing effort of moving to Wayland. This affects GNOME on Wayland only; a firefox-wayland package is available to activate the Wayland backend on other desktop environments (KDE, Sway).

The Wayland architecture is completely different from X11’s. The team ported various aspects of Firefox internals to the new protocol where possible. However, some X11 features are missing entirely. For such cases you can install and run the firefox-x11 package as a fallback.
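
For what it's worth, the switch is controlled by the MOZ_ENABLE_WAYLAND environment variable, which is what the firefox-wayland wrapper sets. A minimal sketch of launching Firefox with the Wayland backend explicitly requested:

    # Minimal sketch: ask Firefox for its Wayland backend via
    # MOZ_ENABLE_WAYLAND (the variable the firefox-wayland wrapper sets).
    import os
    import subprocess

    env = os.environ.copy()
    env["MOZ_ENABLE_WAYLAND"] = "1"   # request the Wayland backend
    subprocess.run(["firefox"], env=env)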

Read more

Three New Container Capabilities in Red Hat Enterprise Linux 7.7

Filed under
Red Hat
Server

We are proud to announce that users of RHEL 7.7 can now use Podman 1.4.4 to find, run, build and share containers as regular users (also called rootless). This builds on the work we did in RHEL 7.6 (A preview of running containers without root in RHEL 7.6).

The new rootless feature can be tested with a fresh installation of RHEL 7.7 or by upgrading from RHEL 7.6. When doing a fresh install, just add a new user ID and the new version of the shadow-utils package will take care of everything (/etc/subuid and /etc/subgid entries). With an upgrade from RHEL 7.6, you will need to add the UID/GID mappings for existing users. For more detailed information, follow the Managing Containers guide in the RHEL 7 documentation.
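
Each line in /etc/subuid and /etc/subgid has the form user:start:count. As a quick illustration, here is a small Python sketch that checks whether a given user has the mappings rootless Podman needs; the default user name is a placeholder.

    # Sketch: verify a user has subordinate UID/GID ranges configured.
    # /etc/subuid and /etc/subgid lines look like "user:100000:65536".
    import sys

    def has_mapping(path: str, user: str) -> bool:
        try:
            with open(path) as f:
                return any(line.split(":")[0] == user
                           for line in f if line.strip())
        except FileNotFoundError:
            return False

    user = sys.argv[1] if len(sys.argv) > 1 else "rhel-user"  # placeholder
    for path in ("/etc/subuid", "/etc/subgid"):
        print(f"{path}: {'ok' if has_mapping(path, user) else 'missing'}")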

The tech preview of rootless containers offers only the VFS storage driver (no fuse-overlay support). This trades disk space for runtime performance: the VFS driver does not use copy-on-write, so when a container is started it copies all of the data from the lower layers of the container image.

Runtime performance improves because there is no copy-on-write cost, but startup is slower and disk consumption can be quite a bit higher. We are currently working on backporting the fuse-overlay capabilities to the 3.10 kernel with an eye toward full fuse-overlay support during the RHEL 7 life cycle.

Read more

Also: What Service Meshes Are, and Why Istio Leads the Pack

Trimming systemd Halved The Boot Time On A PocketBeagle ARM Linux Board

Filed under
Linux
Red Hat

Happening this week in Lyon, France, are the Embedded Linux Conference Europe and Open Source Summit Europe events. Developer Chris Simmonds spoke today about systemd and boot-time optimizations around it.

Besides going over the basics of systemd, which all Phoronix readers should be well familiar with, much of his talk was about reducing boot time with systemd. For reference, he presented his optimizations using a PocketBeagle ARM board running Debian Stretch.

Debian on this low-power ARM board took 66 seconds to boot: some 18 seconds for the kernel and over 47 seconds for the user-space bits. With some basic tuning, he was able to chop that roughly in half, to around 30 seconds.
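
Readers who want to take similar measurements can use systemd's own tooling, which reports the kernel/user-space split and per-unit startup cost. A short sketch follows (systemd-analyze time and systemd-analyze blame are standard systemd commands; the Python wrapper is only for illustration):

    # Sketch: where did boot time go? Requires a systemd-based system.
    import subprocess

    def run(*args: str) -> str:
        return subprocess.run(args, capture_output=True, text=True).stdout

    # Overall split between firmware, loader, kernel, and userspace.
    print(run("systemd-analyze", "time"))

    # Per-unit startup cost, worst offenders first (top 10 shown).
    print("\n".join(run("systemd-analyze", "blame").splitlines()[:10]))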

Read more

Red Hat: OpenShift, RHEL and More

Filed under
Red Hat
  • A PodPreset Based Webhook Admission Controller

    One of the fundamental principles of cloud native applications is the ability to consume assets that are externalized from the application itself at runtime. This affords portability across deployment targets, as properties may differ from environment to environment. The pattern is also one of the principles of the Twelve-Factor App and is supported through a variety of mechanisms within Kubernetes: Secrets and ConfigMaps are implementations in which assets can be stored, while the injection points within an application include environment variables and volume mounts. As Kubernetes and cloud native technologies have matured, there has been an increasing need to configure applications dynamically at runtime, even though Kubernetes uses a declarative configuration model. Fortunately, Kubernetes contains a pluggable model, known as admission controllers, that enables the validation and modification of applications submitted to the platform as pods. These controllers can accept, reject, or accept with modifications a pod that is being created.

    The ability to modify pods at creation time gives both application developers and platform managers a way to offer capabilities that surpass the limitations imposed by strictly declarative configurations. One implementation of this idea is PodPresets, which inject ConfigMaps, Secrets, volumes, volume mounts, and environment variables at creation time into pods matching a set of labels. Kubernetes has supported this feature since version 1.6, and the OpenShift Container Platform (OCP) made it available in the 3.6 release. However, due to a perceived direction change for dynamically injecting these types of resources into pods, the feature was deprecated in version 3.7 and removed in 3.11, leaving a void for users who had taken advantage of its capabilities. (A sketch of such a webhook appears after this list.)

  • Verifying signatures of Red Hat container images

    Security-conscious organizations are accustomed to using digital signatures to validate application content from the Internet. A common example is RPM package signing. Red Hat Enterprise Linux (RHEL) validates signatures of RPM packages by default.

    In the container world, a similar paradigm should be adhered to. In fact, all container images from Red Hat have been digitally signed for several years. Many users are not aware of this because early container tooling was not designed to support digital signatures.

    In this article, I’ll demonstrate how to configure a container engine to validate signatures of container images from the Red Hat registries for increased security of your containerized applications.

    In the absence of widely accepted standards, Red Hat designed a simple approach to provide this security to its customers, based on detached signatures served by a standard HTTP server. The Linux container tools (Podman, Skopeo, and Buildah) have built-in support for detached signatures, as does the CRI-O container engine used by Kubernetes and the Red Hat OpenShift Container Platform. (A configuration sketch appears after this list.)

  • Advanced telco services and better customer experience need modern support systems

    It seems nearly everything we do these days involves the internet – communication, commerce, entertainment, banking, filing taxes, home security, even monitoring our health – creating a wealth of opportunity for communications service providers (CSPs) to deliver innovative and advanced services, increasing and expanding their revenue streams. But it’s a significant challenge to do so using the traditional, proprietary and monolithic infrastructures in place for decades. To achieve success, it’s critical to modernize business and network systems with open source, cloud-native solutions, and move operations support systems (OSS) and business support systems (BSS) to microservices-based architectures.

    Red Hat believes that by transforming OSS/BSS to a more modern architecture, service providers will be in a better position to improve customer experience, create new revenue and business models, and operate more efficiently. But moving to a modern OSS/BSS architecture isn’t without challenges.

  • Red Hat Customer Success Stories: Automating management and improving communications security

    Datacom is an IT service provider in Asia Pacific with more than 5,000 staff and a vision of designing, building, and running IT systems and processes that are aligned to its clients’ business goals. As a Red Hat Advanced Business Partner, Datacom provides solutions to its market across Red Hat's product lines.

    Because Ansible was getting the attention of many Datacom customers, the company chose to focus on Ansible as the orchestration glue for automation. Datacom constructed the platform to be easily consumable while letting customers leverage its automation elements, and it is now seeing application developers use the infrastructure stack to deploy apps on different technologies.

    Joseph Tejal is Datacom’s Red Hat Certified Specialist in Ansible Automation based in Wellington. Tejal explained that it wasn’t by chance that Datacom standardized on Red Hat Ansible Automation.
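
To make the PodPreset item above concrete: a webhook admission controller is, at its core, an HTTPS endpoint that receives an AdmissionReview for a pod and answers with an optional patch. Below is a minimal, hypothetical Python/Flask sketch of a mutating webhook that injects an environment variable, roughly what a PodPreset used to do. It is not the implementation the article describes; the route, port, and injected variable are illustrative choices, and a real webhook must serve TLS with a certificate the API server trusts.

    # Hypothetical sketch, not the article's implementation.
    import base64
    import json

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/mutate", methods=["POST"])
    def mutate():
        review = request.get_json()
        uid = review["request"]["uid"]
        # JSONPatch that sets an env list on the first container.
        # (Adding the whole list keeps the sketch simple; it would
        # overwrite an existing `env` section.)
        patch = [{
            "op": "add",
            "path": "/spec/containers/0/env",
            "value": [{"name": "INJECTED_BY_WEBHOOK", "value": "true"}],
        }]
        return jsonify({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": uid,  # must echo the request uid
                "allowed": True,
                "patchType": "JSONPatch",
                "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
            },
        })

    if __name__ == "__main__":
        # Real admission webhooks must serve HTTPS; plain HTTP keeps
        # the sketch short.
        app.run(port=8443)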
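
Likewise, for the signature-verification item: on a RHEL host the container engine's trust policy lives in /etc/containers/policy.json, with a registries.d file pointing the tools at the detached-signature store. The following snippet just prints a plausible policy structure; it is a hedged sketch based on Red Hat's published documentation, and the article remains the authoritative reference for the exact paths and URLs.

    # A hedged sketch of a /etc/containers/policy.json that requires
    # images from registry.redhat.io to carry a valid GPG signature.
    import json

    policy = {
        "default": [{"type": "insecureAcceptAnything"}],
        "transports": {
            "docker": {
                "registry.redhat.io": [{
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
                }]
            }
        },
    }
    print(json.dumps(policy, indent=2))

    # A registries.d entry (YAML) would then point at the signature
    # store, e.g. /etc/containers/registries.d/registry.redhat.io.yaml:
    #   docker:
    #     registry.redhat.io:
    #       sigstore: https://access.redhat.com/webassets/docker/content/sigstore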

Fedora 31: Peering into Red Hat Enterprise Linux's future

Filed under
Linux
Red Hat

After a brief delay, while last-minute bugs were fixed, Fedora 31 has just rolled out the door, and besides being a worthy Linux distribution in its own right, it's even more interesting for what it tells us about parent company Red Hat's future plans for Red Hat Enterprise Linux (RHEL).

We tend to think of Fedora as a desktop operating system, but while it's great at that role, it's far more than that. Besides golden oldies such as the self-explanatory Fedora Workstation and Fedora Server, we also now have Fedora CoreOS, Fedora IoT, and Fedora Silverblue.

Read more

Servers: Kubernetes, Linode and Red Hat

Filed under
Red Hat
Server
  • Flavors of Data Protection in Kubernetes

    As containerized applications go through an accelerated pace of adoption, Day 2 services have become a here and now problem.

  • Doing the cloud differently

    Jeff Dike, one of the contributors to Linux, had developed a technology called User-mode Linux. UML, as it was known, allowed developers to create virtual Linux machines within a Linux computer. This Matrix-like technology was groundbreaking and opened the door for the virtualized cloud we know today.

    One of the developers Dike’s technology enabled was a young technologist named Christopher Aker. He saw an opportunity to use this technology not to build the next Salesforce or Amazon, but to make cloud computing less complicated, less expensive, and more accessible to every developer regardless of where they were located, what their financial resources were or who they worked for. The company he built — Linode — helped pioneer modern cloud computing.

  • A day in the life of a quality engineering sysadmin

    Let me begin by saying that I was neither hired nor trained to be a sysadmin. But I was interested in the systems side of things, such as virtualization, cloud, and other technologies, even before I started working at Red Hat. I am a Senior Software Engineer in Test (Software Quality Engineering), but Red Hat is uniquely positioned: its products are primarily used by sysadmins (or people with similar job responsibilities), and most of them focus on backend, systems-level work rather than user-level applications. Our testing efforts include routine interaction with Red Hat Virtualization, OpenStack, Ansible Tower, and Hyperconverged Infrastructure.

    When I was hired, I was purely focused on testing Red Hat CloudForms, which is management software for the aforementioned environments. But when one of our senior software engineers departed to take on another role within Red Hat, I saw an opportunity that interested me. I was already helping him and learning sysadmin tasks by then, so given my progress and interest, I was the natural successor for the work from my team’s perspective. And hence I ended up becoming a sysadmin who also works partly as a software engineer in testing.

  • Getting to know Jae-Hyung Jin, Red Hat general manager for Korea

    We’re delighted to welcome Jae-Hyung Jin to Red Hat as general manager for Korea. In this new role, he will be responsible for Red Hat’s business operations in Korea.

    Prior to joining Red Hat, Jae-Hyung Jin served as head of the enterprise sales and marketing group and as a vice president at Samsung Electronics. He has held several key leadership positions at leading technology and trading companies, including Cisco Systems, LG Electronics and Daewoo International. Jae-Hyung brings nearly 25 years of experience in various industries, including telecommunications, manufacturing, finance and the public sector.

  • Enterprise JavaBeans, infrastructure predictions, and more industry trends

    As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

IBM/Red Hat Buzzwords Admission/Confession and 'Extreme Performance For Ansible'

Filed under
Red Hat
  • Scaling Red Hat OpenStack Platform to more than 500 Overcloud Nodes

    At Red Hat, performance and scale are treated as first-class citizens, and a lot of time and effort is put into making sure our products scale. We have a dedicated team of performance and scale engineers who work closely with product management, developers, and quality engineering to identify performance regressions, provide product performance data and guidance to customers, develop tuning recommendations, and test Red Hat OpenStack Platform deployments at scale. As much as possible, we test our products to match real-world use cases and scale.

    In the past, we had scale-tested director-based Red Hat OpenStack Platform deployments to about 300 bare-metal overcloud nodes in external labs with the help of customers and partners. While those tests helped surface issues, we were limited more by hardware availability than by product scale when trying to go past the 300-node count.

  • Introducing the Red Hat Global Transformation Office

    Change is the brutal truth for global enterprises. True business transformation requires fundamental shifts in behavior. Software, systems, networking and storage all need to be aligned with the people working on these technologies and with the associated processes, in order to increase the number of strategic opportunities an organization can respond to effectively. To help drive this evolution, today we’re pleased to announce the launch of the Global Transformation Office, which will focus on accelerating our customers’ digital visions while bringing holistic change across their technological AND social systems.

    Market conditions, competitive pressures, the technology landscape and even customer requirements change on a weekly, if not daily, basis. Since the launch of Red Hat Enterprise Linux more than 15 years ago, Red Hat has provided the technologies and expertise helping to fuel the next generation of business innovation, from the world’s leading enterprise Linux platform to the industry’s most comprehensive enterprise Kubernetes platform in Red Hat OpenShift. The Global Transformation Office formalizes and matures our commitment to delivering the software, the skills and the expertise required to serve as a change catalyst for global IT organizations, founded on the success and demand for our regional Transformation practices.

  • Operon: Extreme Performance For Ansible

    I'm very excited to unveil Operon, a high performance replacement for Ansible® Engine, tailored for large installations and offered by subscription. Operon runs your existing playbooks, modules, plug-ins and third party tools without modification using an upgraded engine, dramatically increasing the practical number of nodes addressable in a single run, and potentially saving hours on every invocation.

    Operon can be installed independently or side-by-side with Ansible Engine, enabling it to be gradually introduced to your existing projects or employed on a per-run basis.

IBM and Red Hat: OpenStack, Ceph, Ansible, RHEL and Power/Microbenchmarks

Filed under
Red Hat
  • [Older] Red Hat Collaborates with Vodafone Idea to Build Network as a Platform

    Red Hat, Inc., the world's leading provider of open source solutions, today announced that Vodafone Idea Limited (VIL), India’s leading telecom service provider, is leveraging Red Hat OpenStack Platform, Red Hat Ceph Storage, Red Hat Ansible Automation Platform and Red Hat Enterprise Linux to transform its distributed network data centers into a ‘Universal Cloud’ based on open standards and open interfaces. The cloud will also be extended to serve third-party workloads.

  • Wanted: A Real ROI Study For Midrange Platforms

    There is no shortage of IBM i shops that are sitting on back releases of the operating system and related systems software, or older Power Systems iron, or both. Sometimes, it takes a little convincing to get upper management to listen about how IT operations could be improved and extended if the company would only make some investments in upgrading the hardware and systems software. Sometimes it takes a lot of convincing, particularly when many small and medium businesses are run by their owners and in a certain sense any money that would be allocated for an upgrade is their own.

    But in other cases, the very idea of being on the IBM i platform has become suspect, particularly with the rise of public clouds, which by and large are designed to run Linux and Windows Server workloads on virtualized X86 servers with clever networking stitching it together to storage that keeps the system fed. So sometimes, even before you can make the case for investing in the IBM i platform, you have to make the case for why the company should not be investing in some other, supposedly more modern platform.

    That is why IBM has commissioned the consultants at IDC to put together all of the arguments about getting modern with hardware and systems software in a new whitepaper entitled, For Many Businesses, It’s Time to Upgrade Their Best-Kept Secret: IBM i. You can go to the IBM i portion of Big Blue’s site, which seems allergic to talking about systems even though this is where, one way or another, IBM gets its money. The IBM i area on IBM’s site is at this link, and you have to be pretty tenacious to find it from the homepage, and we give that to you just in case IBM moves the IDC whitepaper around someday. The direct link to the whitepaper is here.

  • Microbenchmarks for AI applications using Red Hat OpenShift on PSI in project Thoth

    Project Thoth is an artificial intelligence (AI) R&D project at Red Hat, part of the Office of the CTO and the AI Center of Excellence (CoE). The project aims to build a knowledge graph and a recommendation system for application stacks based on the collected knowledge, such as machine learning (ML) applications that rely on popular open source ML frameworks and libraries (TensorFlow, PyTorch, MXNet, etc.). In this article, we examine the potential of project Thoth’s infrastructure running in Red Hat OpenShift and explore how it can collect performance observations.

    Several types of observations are gathered from various domains (like build time, run time and performance, and application binary interfaces (ABI)). These observations are collected through the Thoth system and enrich the knowledge graph automatically. The knowledge graph is then used to learn from the observations. Project Thoth architecture requires multi-namespace deployment in an OpenShift environment, which is run on PnT DevOps Shared Infrastructure (PSI), a shared multi-tenant OpenShift cluster.

More in Tux Machines

RedisInsight Revealed and WordPress 5.2.4 Released

  • Redis Labs eases database management with RedisInsight

    The robust market of tools that help users of the Redis database manage their systems just got a new entrant. Redis Labs disclosed the availability of its RedisInsight tool, a graphical user interface (GUI) for database management and operations. Redis is a popular open source NoSQL database that is increasingly being used in cloud-native Kubernetes deployments as users move workloads to the cloud. Open source database use is growing quickly, according to recent reports, as the need for flexible, open systems to meet different needs has become a common requirement. Among the challenges often associated with databases of any type is ease of management, which Redis Labs is trying to address with RedisInsight.

  • WordPress 5.2.4 Update

    Late-breaking news on the 5.2.4 short-cycle security release that landed October 14. When we released the news post, I inadvertently missed giving props to Simon Scannell of RIPS Technologies for finding and disclosing an issue where path traversal can lead to remote code execution. Simon has done a great deal of work on the WordPress project, and failing to mention his contributions is a huge oversight on our end. Thank you to all of the reporters for privately disclosing vulnerabilities, which gave us time to fix them before WordPress sites could be attacked.

Desktop GNU/Linux: Rick and Morty, Georges Basile Stavracas Neto on GNOME and Linux Format on Eoan Ermine

  • We know where Rick (from Rick and Morty) stands on Intel vs AMD debate

    For one, it appears Rick is running a version of Debian with a very old Linux kernel (3.2.0) — one dating back to 2012. He badly needs to install some frickin’ updates. “Also his partitions are real weird. It’s all Microsoft based partitions,” a Redditor says. “A Linux user would never do [this] unless they were insane since NTFS/Exfat drivers on Linux are not great.”

  • Georges Basile Stavracas Neto: Every shell has a story

    … a wise someone once muttered while walking on a beach, as they picked up a shell lying on the sand. Indeed, every shell began somewhere, crossed a unique path with different goals and driven by different motivations. Some shells were created to optimize for mobility; some, for lightness; some, for speed; some were created to just fit whoever is using it and do their jobs efficiently. It’s statistically close to impossible to not find a suitable shell, one could argue. So, is this a blog about muttered shell wisdom? In some way, it actually is. It is, indeed, about Shell, and about Mutter. And even though “wisdom” is perhaps a bit of an overstatement, it is expected that whoever reads this blog doesn’t leave it less wise, so the word applies to a certain degree. Evidently, the Shell in question is composed of bits and bytes; its protection is more about the complexities of a kernel and command lines than sea predators, and the Mutter is actually more about compositing the desktop than barely audible uttering.

  • Adieu, 32

    The tenth month of the year arrives and so does a new Ubuntu 19.10 (Eoan Ermine) update. Is it a portent that this is the 31st release of Ubuntu and with the 32nd release next year, 32-bit x86 Ubuntu builds will end?

Linux Kernel and Linux Foundation

  • Linux's Crypto API Is Adopting Some Aspects Of Zinc, Opening Door To Mainline WireGuard

    Mainlining of the WireGuard secure VPN tunnel was being held up by its use of the new "Zinc" crypto API developed in conjunction with this network tech. But with obstacles to getting Zinc merged, WireGuard was going to resort to targeting the existing kernel crypto interfaces. Instead, however, it turns out the upstream Linux crypto developers were interested in and willing to incorporate some elements of Zinc into the existing kernel crypto implementation. Back in September, Jason Donenfeld decided porting WireGuard to the existing Linux crypto API was the best path forward for getting this secure networking functionality into the mainline kernel in a timely manner. But since then, other upstream kernel developers working on the crypto subsystem have ended up with patches incorporating some elements of Zinc's design.

  • zswap: use B-tree for search
    The current zswap implementation uses red-black trees to store
    entries and to perform lookups. Although this algorithm obviously
    has complexity of O(log N) it still takes a while to complete
    lookup (or, even more for replacement) of an entry, when the amount
    of entries is huge (100K+).
    
    B-trees are known to handle such cases more efficiently (i. e. also
    with O(log N) complexity but with way lower coefficient) so trying
    zswap with B-trees was worth a shot.
    
    The implementation of B-trees that is currently present in Linux
    kernel isn't really doing things in the best possible way (i. e. it
    has recursion) but the testing I've run still shows a very
    significant performance increase.
    
    The usage pattern of B-tree here is not exactly following the
    guidelines but it is due to the fact that pgoff_t may be both 32
    and 64 bits long.
    
    
  • Zswap Could See Better Performance Thanks To A B-Tree Search Implementation

    For those using Zswap as a compressed RAM cache for swapping on Linux systems, performance could soon see a measurable improvement. Developer Vitaly Wool has posted a patch that switches the Zswap code from red-black trees to a B-tree for searching. Particularly when searching a large number of entries, the B-tree implementation should do so much more efficiently.

  • AT&T Finally Opens Up dNOS "DANOS" Network Operating System Code

    One and a half years late, the "DANOS" (formerly "dNOS") network operating system is now open source under the Linux Foundation. AT&T and the Linux Foundation originally announced their plan in early 2018, pushing for this network operating system to be used on more mobile infrastructure. At the time they expected it to happen in H2'2018, but the goal finally came to fruition on 15 November 2019.

Security Patches and FUD/Drama