News For Open Source Professionals

Top Sysadmin content February 2021

Tuesday 2nd of March 2021 04:46:54 AM

Be sure to catch up on all of our best content from the last month.



Even though it was a short month, February 2021 was another great one for the Enable Sysadmin community. We published 29 articles from 20 different authors, generating more than 475k pageviews and drawing in more than 325k unique visitors. It was also a great month for some of our older content.

In this month’s top content, you’ll find topics ranging from Ansible automation and reboot modules to cryptography and career advice. No matter your role or skill level, there is sure to be something of interest to you, so enjoy.

Read More at Enable Sysadmin


Linux sysadmins: What’s your favorite IDE?

Sunday 28th of February 2021 04:50:00 AM

If you program or script in Linux, what’s your favorite IDE? The old standby vi or something a little newer?



When you think of the tools a sysadmin relies on every day, an IDE isn’t necessarily the first thing that comes to mind. IDEs are for developers; it’s literally in the name: Integrated Development Environment.

Read More at Enable Sysadmin


Here Is How To Create A Clean, Resilient Electrical Grid (Forbes)

Thursday 25th of February 2021 09:36:41 PM

Erik Kobayashi-Solomon writes at Forbes:

One leading thinker in the Grid Evolution space, Dr. Shuli Goodman, believes that the success of Linux to transform the tech world can and should be applied to next-generation electrical grids.

Dr. Shuli Goodman, Executive Director of LF Energy


Dr. Goodman is the executive director of LF Energy, a young offshoot of the Linux Foundation (“LF”) that partners with prominent organizations to develop open-source software that helps utilities and grid operators instantaneously understand and manage various new pools of energy supply (e.g., renewables and batteries). This software offers a single, common reference code base that all organizations can use as a foundation for their own customized solutions. The advantage of the LF Energy approach is standardization and, more crucially, speed of implementation.

At this point, you may be asking the same question I asked Dr. Goodman: “Why do utilities and grid operators need software to run things anyway?”

The fact is that they never did. Back in the “good ole days” utilities were “communicating” with their customers in the same way someone with a megaphone communicates with an audience – shouting unidirectionally all the time. In this model, there is no room for complex multidirectional signals or need for software to manage the communication process.

Contrast that with the model that LF Energy is pioneering which, in our communication analogy, would be more similar to an Internet chat room than the old megaphone model. In an evolved, modern system, all parties are able to communicate bidirectionally in real-time with every other party.

Read more at Forbes


Tips for using tmux

Thursday 25th of February 2021 11:02:03 AM

The tmux command is a modern replacement for screen. It allows you to reconnect to dropped sessions, helping you maintain long-running applications or processes.
Peter Gervase


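Before clicking through, here is a minimal sketch of the reconnect workflow the article covers. The session name "work" and the task are placeholders of ours, not taken from the article:

```shell
$ tmux new -s work          # start a new session named "work"
$ # ...run a long task inside the session...
$ # the connection drops, or you detach on purpose with Ctrl-b d
$ tmux ls                   # back on the box: list surviving sessions
$ tmux attach -t work       # reattach and pick up where you left off
```

Anything that was running inside the session keeps running while you are detached, which is what makes tmux so useful over flaky SSH links.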

Read More at Enable Sysadmin


Tips for using screen

Thursday 25th of February 2021 08:53:58 AM

The screen command allows you to reconnect to dropped sessions, helping you maintain long-running applications or processes.
Read More at Enable Sysadmin


State of FinOps 2021 Report Shows Massive Growth in Cloud Financial Management

Thursday 25th of February 2021 08:00:33 AM

Teams working with FinOps, the field of cloud financial management, are expected to grow 40% in 2021 according to a new report from the FinOps Foundation, a Linux Foundation non-profit trade association focused on codifying and promoting cloud financial management best practices and standards. The survey of over 800 FinOps practitioners – with a collective $30+ billion in annual cloud spend – underscores the need for more education around how to manage cloud finances.

Key survey findings include:

  • Nearly half of survey respondents (49%) had little or no automation for managing cloud spend—one of the core disciplines of a FinOps practice. 
  • Of those with some automation, almost one-third rely only on automated notifications (31%) and tagging hygiene (29%); only 13% automated rightsizing and just 9% automated spot use, indicating that companies are likely missing opportunities to optimize cloud spend. 
  • Half of compute spend on public cloud was for on-demand, the highest-price service, and 49% for reserved, savings or committed use coverage, the next costliest option. Only 13% was for spot use, the least expensive service, even though respondents identified 28% as being an “excellent” target for that option.
  • Getting engineers to act on cost optimization was cited by 40% of respondents as the biggest challenge, followed by dealing with shared costs (33%) and accurately forecasting spend (26%).
  • Just 15% of respondents said their FinOps practice was in the “run” phase of maturity, meaning they can continually improve a built-out practice. Four in 10 firms are in the “walk” phase, with core processes running but much maturing still to do, and 44% are in the “crawl” phase, just getting the basics in place.

There are resources to help. Those who are directly involved with or responsible for cloud spend should also consider advanced training and certification. The FinOps Certified Practitioner exam allows individuals in a wide variety of cloud, finance, and technology roles to validate their FinOps knowledge and enhance their professional credibility; it tests FinOps fundamentals and an overview of key concepts in each of the three phases of the FinOps lifecycle: Inform, Optimize, and Operate. Instructor-led and online training options are available to help you gain the skills necessary to succeed in a role managing cloud finances, and to prepare you to pass the exam.

For total newbies – whether they be technical professionals (IT, DevOps, engineers, architects), finance, procurement, and accounting professionals, business unit or product managers, or executives – the FinOps Foundation partnered with Linux Foundation Training & Certification to offer a free Introduction to FinOps self-paced, online training course. This is a great resource for your whole organization to learn the benefits of implementing FinOps best practices, and the dangers of ignoring cloud spend.

As cloud usage continues to accelerate and costs increase, skills in managing these costs are paramount. Gaining the necessary education can help your organization manage cloud spend more efficiently, and also give you an in-demand skill set that will benefit your career well into the future.

The post State of FinOps 2021 Report Shows Massive Growth in Cloud Financial Management appeared first on Linux Foundation – Training.


Building a Linux container by hand using namespaces

Tuesday 23rd of February 2021 10:11:59 PM

How user namespaces relate to container security.
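As a quick, unprivileged taste of what the article digs into (this sketch is ours, not from the article), every Linux process's namespace membership is visible under /proc, one symlink per namespace:

```shell
# List the namespaces the current process belongs to; no root required.
# Each entry (cgroup, mnt, net, pid, user, uts, ...) is a handle to one
# kernel namespace, identified by an inode number.
ls -l /proc/self/ns

# Creating new namespaces is the job of unshare(1) / clone(2). For example,
# a new UTS namespace lets you change the hostname without affecting the
# host (this typically needs root or an unprivileged user namespace):
#   unshare --uts --fork sh -c 'hostname demo; hostname'
```

Two processes are in the same namespace exactly when the corresponding symlinks point at the same inode, which is how tools decide whether they share a view of mounts, networking, PIDs, and so on.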
Read More at Enable Sysadmin


KubeEdge: Reliable Connectivity Between The Cloud & Edge

Tuesday 23rd of February 2021 04:02:00 PM

KubeEdge is an open source project that originated at Huawei and was contributed to the CNCF. The project was created to extend containerized application orchestration capabilities to hosts at the edge. It is built on top of Kubernetes and provides infrastructure support for networking, application deployment, and metadata synchronization between the cloud and the edge. We sat down with Zefeng Wang (Kevin), Lead of the Cloud Native Open Source Team at Huawei, to learn more about the project.


Review of Three Hyperledger Tools – Caliper, Cello and Avalon

Monday 22nd of February 2021 11:00:58 PM

By Matt Zand


In our previous article (Review of Five Popular Hyperledger DLTs – Fabric, Besu, Sawtooth, Iroha and Indy), we discussed the following Hyperledger Distributed Ledger Technologies (DLTs):

  1. Hyperledger Indy
  2. Hyperledger Fabric
  3. Hyperledger Iroha
  4. Hyperledger Sawtooth
  5. Hyperledger Besu

To continue our journey, in this article we discuss three Hyperledger tools (Hyperledger Caliper, Cello and Avalon) that serve as great accessories for any of the Hyperledger DLTs. It is worth mentioning that, as of this writing, all three tools discussed in this article are at the incubation stage.

Hyperledger Caliper

Caliper is a benchmarking tool for measuring blockchain performance and is written in JavaScript. It utilizes four performance indicators: success rate, transactions per second (transaction throughput), transaction latency, and resource utilization. Specifically, it is designed to run benchmarks against a deployed smart contract, enabling analysis of those four indicators on a blockchain network while the smart contract is in use.

Caliper is a unique general tool and has become a useful reference for enterprises to measure the performance of their distributed ledgers. The Caliper project will be one of the most important tools to use along with other Hyperledger projects (even in Quorum or Ethereum projects since it also supports those types of blockchains). It offers different connectors to various blockchains, which gives it greater power and usability. Likewise, based on its documentation, Caliper is ideal for:

  • Application developers interested in running performance tests for their smart contracts
  • System architects interested in investigating resource constraints during test loads

To better understand how Caliper works, one should start with its architecture. Specifically, to use it, a user starts by defining the following configuration files:

  • A benchmark file defining the arguments of the benchmark workload
  • A blockchain (network) file specifying the information needed to interact with the system under test
  • Smart contract files defining which contracts are going to be deployed

The above configuration files act as inputs for the Caliper CLI, which creates an admin client (acting as a superuser) and a factory (responsible for running test loads). Based on the chosen benchmark file, a client transacts with the system by adding or querying assets.

While testing is in progress, all transactions are saved. The statistics of these transactions are logged and stored, and a resource monitor logs the consumption of resources. All of this data is eventually aggregated into a single report. For a more detailed discussion of its implementation, visit the link provided in the References section.
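To make the workflow concrete, a benchmark run is launched by handing those configuration files to the Caliper CLI. The sketch below is illustrative only: the workspace layout and file names are hypothetical, and the exact flags can differ between Caliper versions, so check the official documentation for your release:

```shell
$ npx caliper launch manager \
    --caliper-workspace ./workspace \
    --caliper-benchconfig benchmarks/config.yaml \
    --caliper-networkconfig networks/fabric-network.yaml
```

Caliper then drives the workload against the network described in the network config and emits the aggregated performance report at the end of the run.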

Hyperledger Cello

As blockchain applications were eventually deployed at the enterprise level, developers had to do a lot of manual work when deploying and managing a blockchain. The job does not get any easier if multiple tenants need to access separate chains simultaneously. For instance, interacting with Hyperledger Fabric requires manual installation of each peer node on a different server, as well as setting up scripts (e.g., Docker Compose) to start a Fabric network. Thus, to address these challenges while automating the process for developers, Hyperledger Cello was incubated. Cello brings the on-demand deployment model to blockchains and is written in the Go language. It is an automated application for deploying and managing blockchains in plug-and-play fashion, particularly for enterprises looking to integrate distributed ledger technologies.

Cello also provides a real-time dashboard showing blockchain statuses, system utilization, chaincode performance, and the configuration of the blockchains. It currently supports Hyperledger Fabric. According to its documentation, Cello allows for:

  • Provisioning customized blockchains instantly
  • Keeping a pool of running blockchains healthy with no need for manual operation
  • Checking the system’s status, scaling the chain numbers, changing resources, etc. through a dashboard

Likewise, according to its documentation, Cello’s major features are:

  • Management of multiple blockchains (e.g., create, delete, and maintain health automatically)
  • Almost instant response, even with hundreds of chains or nodes
  • Support for customized blockchain requests (e.g., size, consensus) — currently, there is support for Hyperledger Fabric
  • Support for a native Docker host or a Swarm host as the compute nodes
  • Support for heterogeneous architecture (e.g., z Systems, Power Systems, and x86) from bare-metal servers to virtual machines
  • Extensible with monitoring, logging, and health features through employing additional components

According to its developers, Cello’s architecture follows the principles of microservices, fault resilience, and scalability. In particular, Cello has three functional layers:

  • The access layer, which also includes web UI dashboards operated by users
  • The orchestration layer, which, on receiving a request from the access layer, calls the agents to operate the blockchain resources
  • The agent layer, which embodies real workers that interact with underlying infrastructures like Docker, Swarm, or Kubernetes

According to its documentation, each layer should maintain stable APIs for the layers above it, to achieve pluggability without changing the upper-layer code. For a more detailed discussion of its implementation, visit the link provided in the References section.

Hyperledger Avalon

To boost the performance of blockchain networks, developers decided to store non-essential data in off-chain databases. While this approach improved blockchain scalability, it led to some confidentiality issues. The community was therefore in search of an approach that could achieve the scalability and confidentiality goals at once, and that search led to the incubation of Avalon. Hyperledger Avalon (formerly the Trusted Compute Framework) enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested oracles.

The Trusted Compute Specification was designed to help developers gain the benefits of computational trust while overcoming its drawbacks. In the case of Avalon, a blockchain is used to enforce execution policies and ensure transaction auditability, while associated off-chain trusted computational resources execute the transactions. By utilizing trusted off-chain computational resources, a developer can accelerate throughput and improve data privacy. By using Hyperledger Avalon in a distributed ledger, we can:

  • Maintain a registry of the trusted workers (including their attestation info)
  • Provide a mechanism for submitting work orders from a client to a worker
  • Preserve a log of work order receipts and acknowledgments

To put it simply, the off-chain parts related to the main network execute the transactions with the help of trusted compute resources. What guarantees the enforcement of confidentiality, along with the integrity of execution, is the Trusted Compute option, with the following features:

  • Trusted Execution Environment (TEE)
  • Multi-Party Computation (MPC)
  • Zero-Knowledge Proofs (ZKP)

By means of Trusted Execution Environments, a developer can strengthen the integrity of the link between off-chain and on-chain execution. Intel’s SGX is a well-known example of a TEE; its capabilities, such as code verification, attestation verification, and execution isolation, allow the creation of a trustworthy link between main-chain and off-chain compute resources. For a more detailed discussion of its implementation, visit the link provided in the References section.

Note: Hyperledger Explorer Tool (deprecated)

Hyperledger Explorer, written primarily in JavaScript, provides a dashboard for peering into block details. It is familiar to developers and system admins who have worked with Hyperledger in the past few years. In spite of its great features and popularity, the Hyperledger community announced last year that the tool is no longer maintained, so it is now deprecated.

Next Article

In our upcoming article, we will move on to cover the following four Hyperledger libraries:

  1. Hyperledger Aries
  2. Hyperledger Quilt
  3. Hyperledger Ursa
  4. Hyperledger Transact


To recap, we covered three Hyperledger tools (Caliper, Cello and Avalon) in this article. We started off by explaining that Hyperledger Caliper is designed to run benchmarks against a deployed smart contract, enabling the analysis of four indicators (such as success rate and transaction throughput) on a blockchain network while the smart contract is in use. Next, we learned that Hyperledger Cello is an automated application for deploying and managing blockchains in plug-and-play fashion, particularly for enterprises looking to integrate distributed ledger technologies. Finally, Hyperledger Avalon enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested oracles.


For more references on all Hyperledger projects, libraries and tools, visit the documentation links below:

  1. Hyperledger Indy Project
  2. Hyperledger Fabric Project
  3. Hyperledger Aries Library
  4. Hyperledger Iroha Project
  5. Hyperledger Sawtooth Project
  6. Hyperledger Besu Project
  7. Hyperledger Quilt Library
  8. Hyperledger Ursa Library
  9. Hyperledger Transact Library
  10. Hyperledger Cactus Project
  11. Hyperledger Caliper Tool
  12. Hyperledger Cello Tool
  13. Hyperledger Explorer Tool
  14. Hyperledger Grid (Domain Specific)
  15. Hyperledger Burrow Project
  16. Hyperledger Avalon Tool


About the Author

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps and High School Technology Services. He is a leading author of the Hands-on Smart Contract Development with Hyperledger Fabric book from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars to many Hyperledger communities across the USA and Europe. At DC Web Makers, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LI:

The post Review of Three Hyperledger Tools – Caliper, Cello and Avalon appeared first on Linux Foundation – Training.


New Open Source Projects to Confront Racial Justice

Friday 19th of February 2021 08:00:00 AM

Today the Linux Foundation announced that it would be hosting seven projects that originated at Call for Code for Racial Justice, an initiative driven by IBM to urge the global developer ecosystem and open source community to contribute to solutions that confront racial inequalities.

Launched by IBM in October 2020, Call for Code for Racial Justice facilitates the adoption and innovation of open source projects by developers, ecosystem partners, and communities across the world to promote racial justice across three distinct focus areas: Police & Judicial Reform and Accountability; Diverse Representation; and Policy & Legislation Reform.

The initiative builds upon Call for Code, which IBM created in 2018 and which has grown to include over 400,000 developers and problem solvers in 179 countries.

As part of today’s announcement, the Linux Foundation and IBM unveiled two new solution starters, Fair Change and TakeTwo:

Fair Change is a platform to help record, catalog, and access evidence of potentially racially charged incidents to enable transparency, reeducation, and reform as a matter of public interest and safety. For example, real-world video footage related to routine traffic stops, stop and search, or other scenarios may be recorded and accessed by the involved parties and authorities to determine whether the incidents were handled in a biased manner. Fair Change consists of a mobile application for iOS and Android built using React Native, and an API for capturing data from various sources built using Node.js. It also includes a website with a geospatial map view of incidents built using Google Maps and React. Data can be stored in a cloud-hosted database and object store. Visit the tutorial or project page to learn more.

TakeTwo aims to help mitigate digital content bias, whether overt or subtle, focusing on text across news articles, headlines, web pages, blogs, and even code. The solution is designed to leverage directories of inclusive terms compiled by trusted sources like the Inclusive Naming Initiative, which the Linux Foundation and CNCF co-founded. The terminology is categorized to train an AI model to enhance its accuracy over time. TakeTwo is built using open source technologies, including Python, FastAPI, and Docker. The API can be run locally with a CouchDB backend database or IBM Cloudant database. IBM has already deployed TakeTwo within its existing IBM Developer tools that are used to publish new content produced by hundreds of IBMers each week. IBM is trialing TakeTwo for IBM Developer website content. Visit the tutorial or project page to learn more.

In addition to the two new solution starters, the Linux Foundation will now host five existing and evolving open source projects from Call for Code for Racial Justice:

  • Five-Fifths Voter: This web app empowers minorities to exercise their right to vote and ensures their voice is heard by determining optimal voting strategies and limiting suppression issues.
  • Legit-Info: Local legislation can significantly impact areas as far-reaching as jobs, the environment, and safety. Legit-Info helps individuals understand the legislation that shapes their lives.
  • Incident Accuracy Reporting System: This platform allows witnesses and victims to corroborate evidence or provide additional information from multiple sources against an official police report.
  • Open Sentencing: To help public defenders better serve their clients and make a stronger case, Open Sentencing shows racial bias in data such as demographics.
  • Truth Loop: This app helps communities simply understand the policies, regulations, and legislation that will impact them the most.

These projects were built using open source technologies that include Red Hat OpenShift, IBM Cloud, IBM Watson, blockchain ledgers, Node.js, Vue.js, Docker, Kubernetes, and Tekton. The Linux Foundation and IBM ask developers and ecosystem partners to contribute to these solutions by testing, extending, and implementing them, and by adding their own diverse perspectives and expertise to make them even stronger.

For more information and to begin contributing, please visit:

The post New Open Source Projects to Confront Racial Justice appeared first on Linux Foundation.


Interview with KubeCF project leaders Dieu Cao and Paul Warren

Thursday 18th of February 2021 04:26:28 PM

KubeCF is a distribution of Cloud Foundry Application Runtime (CFAR) for Kubernetes. Originated at SUSE, the project is a bridge between Cloud Foundry and Kubernetes. KubeCF gives developers the productivity they love from Cloud Foundry and allows platform operators to manage the infrastructure abstraction with Kubernetes tools and APIs. To learn more about the project, we hosted a discussion with Dieu Cao, CF Open Source Product Lead at VMware, and Paul Warren, Product Manager of cf-for-k8s at VMware.


Free Introduction to Node.js Online Training Now Available

Thursday 18th of February 2021 08:00:23 AM

Node.js is the extremely popular open source JavaScript runtime, used by some of the biggest names in technology, including Bloomberg, LinkedIn, Netflix, NASA, and more. Node.js is prized for its speed, lightweight footprint, and ability to easily scale, making it a top choice for microservices architectures. With no sign of Node.js use and uptake slowing, there is a continual need for more individuals with knowledge and skills in using this technology.

For those wanting to start learning Node.js, the path has not always been clear. While there are many free resources and forums available to help, they require individual planning, research, and organization, which can make it difficult for some to learn these skills. That’s why The Linux Foundation and OpenJS Foundation have released a new, free, online training course, Introduction to Node.js. This course is designed for frontend or backend developers who would like to become more familiar with the fundamentals of Node.js and its most common use cases. Topics covered include how to rapidly build command line tools, mock RESTful JSON APIs, and prototype real-time services. You will also discover and use various ecosystem and Node.js core libraries, and come away understanding common use cases for Node.js.

By immersing yourself in a full-stack development experience, this course helps bring context to Node.js as it relates to the web platform, while providing a pragmatic foundation in building various types of real-world Node.js applications. At the same time, the general principles and key understandings introduced by this course can prepare you for further study towards the OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD) certifications.

Introduction to Node.js was developed by David Mark Clements, Principal Architect, technical author, public speaker and OSS creator specializing in Node.js and browser JavaScript. David has been writing JavaScript since 1996 and has been working with, speaking and writing about Node.js since Node 0.4 (2011), including authoring the first three editions of “Node Cookbook”. He is the author of various open source projects including Pino, the fastest Node.js JSON logger available and 0x, a powerful profiling tool for Node.js. David also is the technical lead and primary author of the JSNAD and JSNSD certification exams, as well as the Node.js Application Development (LFW211) and Node.js Services Development (LFW212) courses.

Enrollment is now open for Introduction to Node.js. Auditing the course through edX is free for seven weeks, or you can opt for a paid verified certificate of completion, which provides ongoing access.

The post Free Introduction to Node.js Online Training Now Available appeared first on Linux Foundation – Training.


The Linux Foundation Announces the Election of Renesas’ Hisao Munakata and GitLab’s Eric Johnson to the Board of Directors

Wednesday 17th of February 2021 04:57:19 AM

Today, the Linux Foundation announced that Renesas’ Hisao Munakata has been re-elected to its board, representing the Gold Member community, and that GitLab’s Eric Johnson has been elected to represent the Silver Member community. Elected Linux Foundation board directors serve two-year terms.

Directors elected to the Linux Foundation’s board are committed to building sustainable ecosystems around open collaboration to accelerate technology development and industry adoption. The Linux Foundation expands the open collaboration communities it supports with community efforts focused on building open standards, open hardware, and open data. It is dedicated to improving diversity in open source communities and working on processes, tools, and best security practices in open development communities. 

Hisao Munakata, Renesas (Gold Member)

Renesas is a global semiconductor manufacturer that provides cutting-edge SoC (system-on-chip) devices for the automotive, industrial, and infrastructure markets. As open source support became essential for the company, Munakata-san encouraged Renesas developers to follow an “upstream-first” approach to minimize gaps from the mainline community codebase. The industry has since accepted this as standard practice, following Renesas’ direction and pioneering work. 


Munakata-san has served as an LF board director since 2019, representing the voice of the embedded industry. 

Renesas, which joined the Linux Foundation in 2011, has ranked among the top twelve kernel development contributor firms over the past 14 years. Munakata-san serves in pivotal roles in various LF projects, such as the AGL (Automotive Grade Linux) Advisory Board, the Yocto Project Advisory Board, the Core Embedded Linux Project, and OpenSSF. In these roles, Munakata-san has helped many industry participants in these projects work in harmony. 

As cloud-native trends break barriers between enterprise and embedded systems, Munakata-san seeks to improve close collaboration across the industry and increase contribution from participants in the embedded systems space, focusing on safety in a post-COVID world.

Eric Johnson, GitLab (Silver Member)

Eric Johnson is the Chief Technology Officer at GitLab, Inc., maker of the first single application for the DevSecOps lifecycle. GitLab is free, open-core software used by more than 30 million registered users to collaborate, author, test, secure, and release software quickly and efficiently. 


At GitLab, Eric is responsible for the organization that integrates the work of over a hundred external open source contributors into GitLab’s codebase every month. During his tenure, Eric has contributed to a 10x+ increase in annual recurring revenue and has scaled Engineering from 100 to more than 550 people while dramatically increasing team diversity in gender, ethnicity, and country of residence. He has also helped make GitLab, Inc. one of the most productive engineering organizations in the world, as evidenced by its substantial monthly on-premises releases.

Eric is also a veteran of four previous enterprise technology startups in fields as varied as marketing technology, localization software, streaming video, and commercial drone hardware/software. He currently advises two startups in the medical trial software and recycling robotics industries. 

Eric brings his open source and Linux background to the Foundation. In his professional work, he has spent 17 years hands-on or managing teams that develop software that runs on Linux systems, administrating server clusters, orchestrating containers, open-sourcing privately built software, and contributing back to open source projects. Personally, he’s also administered a Linux home server for the past ten years.

As a Linux Foundation board member, Eric looks forward to using his execution-focused executive experience to turn ideas into results. Collaboration with the Linux Foundation has already begun with Distributed Developer ID and Digital Bill of Materials (DBoM). As a remote work expert with years of experience developing best practices, Eric will use his expertise to help the board, the Foundation, and its partners become more efficient in a remote, asynchronous, and geographically distributed way.

The post The Linux Foundation Announces the Election of Renesas’ Hisao Munakata and GitLab’s Eric Johnson to the Board of Directors appeared first on Linux Foundation.


Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy

Monday 15th of February 2021 11:00:39 PM

by Matt Zand

As companies catch up in adopting blockchain technology, the choice of a private blockchain platform becomes vital. Hyperledger, whose open source projects support more enterprise blockchain use cases than any other ecosystem, is currently leading the race in private Distributed Ledger Technology (DLT) implementation. Assuming you know how blockchain works and understand the design philosophy behind Hyperledger’s ecosystem, in this article we will briefly review five active Hyperledger DLTs. In addition to the DLTs discussed here, the Hyperledger ecosystem includes more supporting tools and libraries, which I will cover in more detail in future articles.

This article mainly targets readers who are relatively new to Hyperledger. It should be a useful resource for those interested in providing blockchain solution architecture services or doing enterprise blockchain consulting and development. The material included here will help you understand the Hyperledger DLTs as a whole and use this high-level overview as a guideline for making the best of each project.

Since Hyperledger is supported by a robust open source community, new projects are added to the Hyperledger ecosystem regularly. As of this writing (February 2021), it consists of six active projects and ten others at the incubation stage. Each project has unique features and advantages.

1- Hyperledger Fabric

Hyperledger Fabric is the most popular Hyperledger framework. Smart contracts (known as chaincode) are written in Golang or JavaScript and run in Docker containers. Fabric is known for its extensibility and allows enterprises to build distributed ledger networks on top of an established and successful architecture. A permissioned blockchain initially contributed by IBM and Digital Asset, Fabric is designed to be a foundation for developing applications or solutions with a modular architecture. It uses pluggable components to provide functionality such as consensus and membership services. A Fabric network consists of peer nodes, which execute chaincode, query ledger data, validate transactions, and interact with applications. User-submitted transactions are channeled to an ordering service component, which serves as the consensus mechanism for Fabric. Special nodes called orderer nodes validate the transactions, ensure the consistency of the blockchain, and send the validated transactions to the peers of the network as well as to membership service provider (MSP) services.

Two major highlights of Hyperledger Fabric versus Ethereum are:

  • Multi-ledger: Each node on Ethereum holds a replica of a single, network-wide ledger. Fabric peers, by contrast, can host multiple ledgers on each node, which is a great feature for enterprise applications.
  • Private data: In addition to private channels, and unlike Ethereum, Fabric members within a consortium can exchange private data among themselves without disseminating it through a channel, which is very useful for enterprise applications.

Here is a good article reviewing all of the Hyperledger Fabric components, such as peers, channels, and chaincode, that are essential for building blockchain applications. In short, a thorough understanding of all Fabric components is highly recommended before building, deploying, and managing enterprise-level Fabric applications.
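The execute-order-validate flow described above can be sketched in a few lines of Python. This is a toy model only, not the Fabric SDK: real chaincode runs in Docker containers, endorsements are signed, and the validation rules are richer. The function and variable names here are invented for illustration.

```python
# Toy model of Fabric's execute-order-validate flow (illustration only).

def chaincode_transfer(state, frm, to, amount):
    """Simulate chaincode execution: read current state, emit a read/write set."""
    read_set = {frm: state[frm], to: state[to]}
    write_set = {frm: state[frm] - amount, to: state[to] + amount}
    return read_set, write_set

def commit(state, read_set, write_set):
    """Peers validate the read set against current state, then apply the writes."""
    if any(state[k] != v for k, v in read_set.items()):
        return False          # stale read: transaction is invalidated
    state.update(write_set)
    return True

world_state = {"alice": 100, "bob": 50}
rs, ws = chaincode_transfer(world_state, "alice", "bob", 30)   # endorsement phase
ordered = [(rs, ws)]                                           # ordering service
results = [commit(world_state, r, w) for r, w in ordered]      # validation/commit
print(world_state)   # {'alice': 70, 'bob': 80}
```

The stale-read check is the key detail: Fabric peers reject a transaction at commit time if the keys it read have changed since endorsement.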

2- Hyperledger Besu

Hyperledger Besu is an open source Ethereum client developed under the Apache 2.0 license and written in Java. It can run on the Ethereum public network, on private permissioned networks, and on test networks such as Rinkeby, Ropsten, and Goerli. Hyperledger Besu supports several consensus algorithms, including PoW, PoA, and IBFT, and has comprehensive permissioning schemes designed specifically for use in consortium environments.

Hyperledger Besu implements the Enterprise Ethereum Alliance (EEA) specification. The EEA specification was established to create common interfaces amongst the various open and closed source projects within Ethereum, to ensure users do not have vendor lock-in, and to create standard interfaces for teams building applications. Besu implements enterprise features in alignment with the EEA client specification.

As a basic Ethereum Client, Besu has the following features:

  • It connects to the blockchain network to synchronize blockchain transaction data or emit events to the network.
  • It processes transactions through smart contracts in an Ethereum Virtual Machine (EVM) environment.
  • It maintains data storage for the network’s blocks and state.
  • It publishes client API interfaces for developers to interact with the blockchain network.
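The client API in the last bullet is the standard Ethereum JSON-RPC interface. A minimal sketch of what a request and response look like follows; the port in the comment is the conventional Ethereum default, and the response value is made up for illustration:

```python
import json

# A standard Ethereum JSON-RPC request, as an Ethereum client's HTTP API expects it.
# In practice you would POST this payload to the node's HTTP endpoint,
# e.g. http://localhost:8545 (8545 is the conventional default port).
request = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
payload = json.dumps(request)

# A sample response body; block numbers come back as hex-encoded quantity strings.
response = json.loads('{"jsonrpc":"2.0","id":1,"result":"0x4b7"}')
block_number = int(response["result"], 16)
print(block_number)  # 1207
```

Because Besu implements the standard interface, the same request works against a public mainnet node or a private consortium node.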

Besu implements Proof of Work and Proof of Authority (PoA) consensus mechanisms. Further, Hyperledger Besu implements several PoA protocols, including Clique and IBFT 2.0.

Clique is a proof-of-authority blockchain consensus protocol. A blockchain running the Clique protocol maintains a list of authorized signers. These approved signers seal all blocks directly, without proof-of-work mining, so the task is computationally light. When creating a block, a signer collects and executes transactions, updates the network state, computes the block hash, and signs the block with their private key. Because blocks are created at a defined time interval, Clique can also limit the number of processed transactions.

IBFT 2.0 (Istanbul BFT 2.0) is a PoA Byzantine-Fault-Tolerant (BFT) blockchain consensus protocol. Transactions and blocks in the network are validated by authorized accounts, known as validators. Validators collect, validate, and execute transactions and create the next block. Existing validators can propose and vote to add or remove validators, maintaining a dynamic validator set. The consensus ensures immediate finality. As the name suggests, IBFT 2.0 builds upon the IBFT protocol with improved safety and liveness; in an IBFT 2.0 blockchain, all valid blocks are added directly to the main chain and there are no forks.
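A toy sketch of the Clique-style sealing just described. Real Clique uses ECDSA signatures and an in-turn/out-of-turn signer schedule; here an HMAC stands in for the signature, and the signer names and keys are invented, purely to illustrate the "authorized signers seal blocks in rotation" idea:

```python
import hashlib
import hmac

# Hypothetical signer keys; real Clique signers hold ECDSA private keys.
SIGNERS = {"validator-a": b"key-a", "validator-b": b"key-b"}
ORDER = ["validator-a", "validator-b"]   # round-robin sealing order

def seal(height, parent_hash, txs, signer):
    """Build a block header hash and 'sign' it with the signer's key."""
    header = hashlib.sha256(parent_hash + str(txs).encode()).digest()
    return {"height": height, "hash": header, "signer": signer,
            "sig": hmac.new(SIGNERS[signer], header, "sha256").hexdigest()}

def verify(block):
    """Accept only blocks sealed by the in-turn authorized signer."""
    expected = ORDER[block["height"] % len(ORDER)]
    if block["signer"] != expected:
        return False
    good = hmac.new(SIGNERS[block["signer"]], block["hash"], "sha256").hexdigest()
    return hmac.compare_digest(good, block["sig"])

b0 = seal(0, b"genesis", ["tx1"], "validator-a")
print(verify(b0))                                            # True
print(verify(seal(1, b0["hash"], ["tx2"], "validator-a")))   # False: out of turn
```

The out-of-turn rejection in the last line is the point: no proof-of-work race is needed because authority, not computation, decides who seals the next block.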

3- Hyperledger Sawtooth

Sawtooth was the second Hyperledger project to reach 1.0 release maturity. Sawtooth-core is written in Python, while Sawtooth Raft and Sawtooth Sabre are written in Rust; it also has JavaScript and Golang components. Sawtooth supports both permissioned and permissionless deployments, and it supports the EVM through a collaboration with Hyperledger Burrow. By design, Hyperledger Sawtooth addresses issues of performance. As such, one of its distinct features compared to other Hyperledger DLTs is that each node in Sawtooth can act as an orderer by validating and approving transactions. Other notable features are:

  • Parallel transaction execution: While many blockchains use serial transaction execution to ensure consistent ordering at every node on the network, Sawtooth uses an advanced parallel scheduler that classifies transactions into parallel flows, which boosts transaction processing performance.
  • Separation of application from core: Sawtooth simplifies the development and deployment of an application by separating the application level from the core system level. It offers smart contract abstraction to allow developers to write contract logic in the programming language of their choice.
  • Custom transaction processors: In Sawtooth, each application can define custom transaction processors to meet its unique requirements. It provides transaction families for low-level functions, such as storing on-chain permissions and managing chain-wide settings, and for particular applications, such as saving block information for performance analysis.
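The parallel-scheduling idea in the first bullet can be illustrated with a small sketch: each transaction declares which state addresses it touches, and transactions with disjoint address sets can execute concurrently. This mimics the concept only; Sawtooth's real scheduler works from the declared inputs and outputs in each transaction header.

```python
def schedule(transactions):
    """Greedily group transactions whose address sets don't overlap,
    so every transaction within a batch could run in parallel."""
    batches = []
    for tx_id, addresses in transactions:
        for batch in batches:
            # Safe to co-schedule only if no shared state addresses.
            if all(addresses.isdisjoint(a) for _, a in batch):
                batch.append((tx_id, addresses))
                break
        else:
            batches.append([(tx_id, addresses)])
    return batches

txs = [("t1", {"addr1"}), ("t2", {"addr2"}), ("t3", {"addr1", "addr3"})]
batches = schedule(txs)
print([[tx for tx, _ in b] for b in batches])  # [['t1', 't2'], ['t3']]
```

Here t1 and t2 touch different addresses and land in the same batch, while t3 conflicts with t1 on addr1 and must wait, which is exactly the consistency constraint serial execution enforces globally.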

4- Hyperledger Iroha

Hyperledger Iroha is designed for the creation and management of complex digital assets and identities. It is written in C++ and is user friendly. Iroha has a powerful role-based model for access control and supports complex analytics. When using Iroha for identity management, querying and executing commands are limited to participants who have access to the Iroha network. A robust permissions system ensures that all transactions are secure and controlled. Some of its highlights are:

  • Ease of use: You can easily create and manage simple, as well as complex, digital assets (e.g., cryptocurrency or personal medical data).
  • Built-in smart contracts: You can easily integrate blockchain into a business process using built-in smart contracts called “commands.” As such, developers need not write complicated smart contracts, because the common operations are already available as commands.
  • BFT: Iroha uses a BFT consensus algorithm, which makes it suitable for businesses that require verifiable data consistency at a low cost.

5- Hyperledger Indy

As a self-sovereign identity management platform, Hyperledger Indy is built explicitly for decentralized identity management. The server portion, Indy node, is built in Python, while the Indy SDK is written in Rust. It offers tools and reusable components to manage digital identities on blockchains or other distributed ledgers. Hyperledger Indy’s architecture is well-suited to any application that requires heavy identity management, since Indy is easily interoperable across multiple domains, organizational silos, and applications. Identities are securely stored and shared with all parties involved. Some notable highlights of Hyperledger Indy are:

●        Identity correlation-resistant: According to the Hyperledger Indy documentation, Indy is completely identity correlation-resistant, so you do not need to worry about one identifier being connected to or mixed with another. You cannot link two identifiers, or find two similar identifiers, in the ledger.

●        Decentralized Identifiers (DIDs): According to the Hyperledger Indy documentation, all decentralized identifiers are globally resolvable and unique without needing any central party in the mix. Every decentralized identity on the Indy platform has a unique identifier that belongs solely to its owner. As a result, no one else can claim or use your identity on your behalf, which eliminates a major avenue for identity theft.

●        Zero-Knowledge Proofs: With help from zero-knowledge proofs, you can disclose only the information necessary and nothing else. When you have to prove your credentials, you can release only the information needed by the party requesting it. For instance, you may choose to share your date of birth with one party while releasing your driver’s license and financial documents to another. In short, Indy gives users great flexibility in sharing their private data whenever and wherever needed.
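The selective-disclosure idea behind that last bullet can be sketched with salted hash commitments. This is an intuition builder only: Indy's actual zero-knowledge proofs are built on anonymous-credential cryptography and can prove predicates (e.g., "over 18") without revealing values at all, whereas this toy reveals the chosen attribute values. All names below are invented.

```python
import hashlib
import secrets

def commit(attrs):
    """Issuer commits to each attribute with a salted hash."""
    salts = {k: secrets.token_hex(16) for k in attrs}
    commitments = {k: hashlib.sha256((salts[k] + v).encode()).hexdigest()
                   for k, v in attrs.items()}
    return salts, commitments

attrs = {"date_of_birth": "1990-01-01", "license_no": "D123", "income": "80000"}
salts, commitments = commit(attrs)

# The holder discloses only date_of_birth to one verifier, nothing else.
disclosed = {"date_of_birth": (attrs["date_of_birth"], salts["date_of_birth"])}

def verify(disclosed, commitments):
    """Verifier checks each revealed value against the published commitment."""
    return all(hashlib.sha256((salt + value).encode()).hexdigest() == commitments[k]
               for k, (value, salt) in disclosed.items())

print(verify(disclosed, commitments))  # True
```

The verifier learns the date of birth and nothing about the license number or income, even though all three were committed to at issuance time.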


In this article, we briefly reviewed five popular Hyperledger DLTs. We started off with Hyperledger Fabric, its main components, and some of its highlights compared to public blockchain platforms like Ethereum. Even though Fabric is currently used heavily for supply chain management, if you are doing a lot of work in the supply chain domain, you should also explore Hyperledger Grid. Then we moved on to Hyperledger Besu and how it can be used to build public or consortium blockchain applications that support multiple consensus algorithms, along with its EVM support. Next, we covered some highlights of Hyperledger Sawtooth, such as how it is designed for high performance; for instance, a single node in Sawtooth can act as an orderer by approving and validating transactions in the network. The last two DLTs, Hyperledger Iroha and Indy, are specifically geared toward digital asset and identity management, so if you are working on a project that heavily uses identity management, you should explore either Iroha or Indy instead of Fabric.

I have included reference and resource links for those interested in exploring topics discussed in this article in depth.

For more references on all Hyperledger projects, libraries, and tools, visit the documentation links below:

  1. Hyperledger Indy Project
  2. Hyperledger Fabric Project
  3. Hyperledger Aries Library
  4. Hyperledger Iroha Project
  5. Hyperledger Sawtooth Project
  6. Hyperledger Besu Project
  7. Hyperledger Quilt Library
  8. Hyperledger Ursa Library
  9. Hyperledger Transact Library
  10. Hyperledger Cactus Project
  11. Hyperledger Caliper Tool
  12. Hyperledger Cello Tool
  13. Hyperledger Explorer Tool
  14. Hyperledger Grid (Domain Specific)
  15. Hyperledger Burrow Project
  16. Hyperledger Avalon Tool


About the Author

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps, and High School Technology Services. He is a lead author of the book Hands-on Smart Contract Development with Hyperledger Fabric from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and R3 Corda platforms. At DC Web Makers, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, an angel investor, and a business advisor for several startup companies. You can connect with him on LI:

The post Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy appeared first on Linux Foundation – Training.


How to create a TLS/SSL certificate with a Cert-Manager Operator on OpenShift

Friday 12th of February 2021 12:10:23 AM

How to create a TLS/SSL certificate with a Cert-Manager Operator on OpenShift

Use cert-manager to deploy certificates to your OpenShift or Kubernetes environment.
Bryant Son
Thu, 2/11/2021 at 4:10pm


Photo by Tea Oebel from Pexels

cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This feature makes it possible to provide Certificates as a Service to developers working within your Kubernetes cluster.

cert-manager is an open source project, licensed under the Apache License 2.0 and created by Jetstack, and it is developed in the open on its own GitHub page.
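As a concrete illustration, a minimal cert-manager setup pairs an Issuer with a Certificate resource; cert-manager then creates the TLS key pair in the named secret and renews it automatically. The resource names and hostname below are placeholders, and the full article covers the OpenShift-specific Operator details:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer          # placeholder name
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls                # placeholder name
spec:
  secretName: example-tls-secret   # cert-manager stores the signed key pair here
  dnsNames:
    - app.example.com              # placeholder hostname
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```

Applying both resources with kubectl (or oc on OpenShift) is enough for cert-manager to issue the certificate; your workload then mounts the resulting secret.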

Read More at Enable Sysadmin


Unikraft: Pushing Unikernels into the Mainstream

Thursday 11th of February 2021 06:00:15 PM

Unikernels have been around for many years and are famous for providing excellent performance in boot times, throughput, and memory consumption, to name a few metrics [1]. Despite their apparent potential, unikernels have not yet seen a broad level of deployment due to three main drawbacks:

  • Hard to build: Putting a unikernel image together typically requires expert, manual work that needs redoing for each application. Also, many unikernel projects are not, and don’t aim to be, POSIX compliant, and so significant porting effort is required to have standard applications and frameworks run on them.
  • Hard to extract high performance: Unikernel projects don’t typically expose high-performance APIs; extracting high performance often requires expert knowledge and modifications to the code.
  • Little or no tool ecosystem: Assuming you have an image to run, deploying it and managing it is often a manual operation. There is little integration with major DevOps or orchestration frameworks.

While not all unikernel projects suffer from all of these issues (e.g., some provide a level of POSIX compliance but lack performance, while others target a single programming language and so are relatively easy to build but limited in applicability), we argue that no single project has been able to successfully address all of them, hindering any significant level of deployment. For the past three years, Unikraft, a Linux Foundation project under the Xen Project’s auspices, has had the explicit aim of changing this state of affairs and bringing unikernels into the mainstream.

If you’re interested, read on, and please be sure to check out the talks and resources listed at the end of this article.

High Performance

To provide developers with the ability to obtain high performance easily, Unikraft exposes a set of composable, performance-oriented APIs. The figure below shows Unikraft’s architecture: all components are libraries with their own Makefile and Kconfig configuration files, and so can be added to the unikernel build independently of each other.

Figure 1. Unikraft’s fully modular architecture showing high-performance APIs

APIs are also micro-libraries that can be easily enabled or disabled via a Kconfig menu; users can compose exactly the APIs that best cater to an application’s needs. For example, an RPC-style application might turn off the uksched API (➃ in the figure) to implement a high-performance, run-to-completion event loop; similarly, an application developer can easily select an appropriate memory allocator (➅) to obtain maximum performance, or use multiple different allocators within the same unikernel (e.g., a simple, fast memory allocator for the boot code and a standard one for the application itself).
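In practice, deselecting the scheduler or picking an allocator is just a Kconfig choice in the unikernel's build configuration. The fragment below is illustrative only; the exact option names come from each micro-library's own Kconfig file and may differ:

```
# Illustrative .config fragment (option names are assumptions)
CONFIG_LIBUKSCHED=n          # drop uksched for a run-to-completion event loop
CONFIG_LIBUKALLOCBBUDDY=y    # choose the buddy allocator micro-library
```

Because every component is a library with its own Makefile and Kconfig, toggling these options adds or removes whole subsystems from the final image.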

Figure 2. Unikraft memory consumption vs. other unikernel projects and Linux
Figure 3. Unikraft NGINX throughput versus other unikernels, Docker, and Linux/KVM


These APIs, coupled with the fact that all of Unikraft’s components are fully modular, result in high performance. Figure 2, for instance, shows Unikraft having lower memory consumption than other unikernel projects (HermiTux, Rump, OSv) and Linux (Alpine); and Figure 3 shows that Unikraft outperforms them in terms of NGINX requests per second, reaching 90K on a single CPU core.

Further, we are working on (1) a performance profiler tool to be able to quickly identify potential bottlenecks in Unikraft images and (2) a performance test tool that can automatically run a large set of performance experiments, varying different configuration options to figure out optimal configurations.

Ease of Use, No Porting Required

Forcing users to port applications to a unikernel in order to obtain high performance is a showstopper. Arguably, a system is only as good as the applications (or programming languages, frameworks, etc.) it can run. Unikraft aims to achieve good POSIX compatibility; one way of doing so is supporting a libc (e.g., musl) along with a large set of Linux syscalls.

Figure 4. Only a certain percentage of syscalls are needed to support a wide range of applications

While there are over 300 such syscalls, many of them are not needed to run a large set of applications, as shown in Figure 4 (taken from [5]). Supporting around 145 syscalls, for instance, is enough to run 50% of all libraries and applications in a Ubuntu distribution (many of which are irrelevant to unikernels, such as desktop applications). As of this writing, Unikraft supports over 130 syscalls and a number of mainstream applications (e.g., SQLite, NGINX, Redis); programming languages and runtime environments such as C/C++, Go, Python, Ruby, WebAssembly, and Lua; several different hypervisors (KVM, Xen, and Solo5); and ARM64 bare metal.

Ecosystem and DevOps

Another apparent downside of unikernel projects is the almost total lack of integration with major DevOps and orchestration frameworks. Working towards the goal of integration, in the past year we created the kraft tool, which allows users to simply choose an application and a target platform (e.g., KVM on x86_64) and takes care of building the image and running it.

Beyond this, we have several sub-projects ongoing to support in the coming months:

  • Kubernetes: If you’re already using Kubernetes in your deployments, this work will allow you to transparently deploy much leaner, faster Unikraft images.
  • Cloud Foundry: Similarly, users relying on Cloud Foundry will be able to generate Unikraft images through it, once again transparently.
  • Prometheus: Unikernels are also notorious for having very primitive or no means for monitoring running instances. Unikraft is targeting Prometheus support to provide a wide range of monitoring capabilities. 

In all, we believe Unikraft is getting closer to bridging the gap between unikernel promise and actual deployment. We are very excited about this year’s upcoming features and developments, so please feel free to drop us a line if you have any comments, questions, or suggestions at

About the author: Dr. Felipe Huici is Chief Researcher, Systems and Machine Learning Group, NEC Laboratories Europe GmbH


[1] Unikernels Rethinking Cloud Infrastructure.

[2] Is the Time Ripe for Unikernels to Become Mainstream with Unikraft? FOSDEM 2021 Microkernel developer room.

[3] Severely Debloating Cloud Images with Unikraft. FOSDEM 2021 Virtualization and IaaS developer room.

[4] Welcome to the Unikraft Stand!

[5] A study of modern Linux API usage and compatibility: what to support when you’re supporting. Eurosys 2016.


Getting to Know the Cryptocurrency Open Patent Alliance (COPA)

Thursday 11th of February 2021 02:00:50 PM
Why is there a need for a patent protection alliance for cryptocurrency technologies?

With the recent surge in popularity of cryptocurrencies and related technologies, Square felt an industry group was needed to protect against litigation and other threats against core cryptocurrency technology and ensure the ecosystem remains vibrant and open for developers and companies.

In the same way that Open Invention Network (OIN) and LOT Network add a layer of patent protection to inter-company collaboration on open source technologies, COPA aims to protect open source cryptocurrency technology. Feeling safe from the threat of lawsuits is a precursor to good collaboration.

  • Locking up foundational cryptocurrency technologies in patents stifles innovation and adoption of cryptocurrency in novel and useful applications.
  • The offensive use of patents threatens the growth and free availability of cryptocurrency technologies. Many smaller companies and developers do not own patents and cannot adequately deter or defend against such threats.

By joining COPA, a member can feel secure that it can innovate in the cryptocurrency space without fear of litigation from other members.

What is Square’s involvement in COPA?

Square’s core purpose is economic empowerment, and they see cryptocurrency as a core technological pillar. Square helped start and fund COPA with the hope that by encouraging innovation in the cryptocurrency space, more useful ideas and products would get created. COPA management has now diversified to an independent board of technology and regulatory experts, and Square maintains a minority presence.

Do we need cryptocurrency patents to join COPA? 

No! Anyone can join and benefit from being a member of COPA, regardless of whether they have patents or not. There is no barrier to entry – members can be individuals, start-ups, small companies, or large corporations. Here is how COPA works:

  • First, COPA members pledge never to use their crypto-technology patents against anyone, except for defensive reasons, effectively making their patents freely available for all.
  • Second, members pool all of their crypto-technology patents together to form a shared patent library, which provides a forum to allow members to reasonably negotiate lending patents to one another for defensive purposes.
  • The patent pledge and the shared patent library work in tandem to help drive down the incidence and threat of patent litigation, benefiting the cryptocurrency community as a whole. 
  • Additionally, COPA monitors core technologies and entities that support cryptocurrency and does its best to research and help address litigation threats against community members.
What types of companies should join COPA?
  • Financial services companies and technology companies working in regulated industries that use distributed ledger or cryptocurrency technology
  • Companies or individuals who are interested in collaborating on developing cryptocurrency products or who hold substantial investments in cryptocurrency
What companies have joined COPA so far?
  • Square, Inc.
  • Blockchain Commons
  • Carnes Validadas
  • Request Network
  • Foundation Devices
  • ARK
  • SatoshiLabs
  • Transparent Systems
  • Horizontal Systems
  • VerifyChain
  • Blockstack
  • Protocol Labs
  • Cloudeya Ltd.
  • Mercury Cash
  • Bithyve
  • Coinbase
  • Blockstream
  • Stakenet
How to join

Please express interest and get access to our membership agreement here:


Understanding Open Governance Networks

Thursday 11th of February 2021 08:00:00 AM

Throughout the modern business era, industries and commercial operations have shifted substantially to digital processes. Whether you look at EDI (electronic data interchange) as a means to exchange invoices or at cloud-based billing and payment solutions today, businesses have steadily been moving toward increasingly digital operations. In the last few years, we’ve seen the promises of digital transformation come alive, particularly in industries that have shifted to software-defined models. The next step of this journey will involve enabling digital transactions through decentralized networks.

A fundamental adoption issue will be figuring out who controls and decides how a decentralized network is governed. It may seem oxymoronic at first, but decentralized networks still need governance. The future may hold autonomously self-governing decentralized networks, but that model is not accepted in industries today. The governance challenge with decentralized network technology lies in deciding who governs and how: establishing and maintaining policies, running network operations, onboarding and offboarding participants, setting fees, managing configurations, and making software changes are among the issues that must be decided to achieve a successful network. No company wants to participate in, or take a dependency on, a network that is controlled or run by a competitor, a potential competitor, or any single stakeholder at all, for that matter.

Earlier this year, we presented a solution for Open Governance Networks that enable an industry or ecosystem to govern itself in an open, inclusive, neutral, and participatory model. You may be surprised to learn that it’s based on best practices in open governance we’ve developed over decades of facilitating the world’s most successful and competitive open source projects.

The Challenge

For the last few years, a running technology joke has been “describe your problem, and someone will tell you blockchain is the solution.” There have been many other concerns raised and confusion created, as overnight headlines hyped cryptocurrency schemes. Despite all this, behind the scenes, and all along, sophisticated companies understood a distributed ledger technology would be a powerful enabler for tackling complex challenges in an industry, or even a section of an industry.

At the Linux Foundation, we focused on enabling those organizations to collaborate on open source enterprise blockchain technologies within our Hyperledger community. That community has driven collaboration on every aspect of enterprise blockchain technology, including identity, security, and transparency. Like other Linux Foundation projects, these enterprise blockchain communities are open, collaborative efforts. Many vertical industry participants, from retail, automotive, aerospace, banking, and other sectors, have engaged with real industry challenges they needed to solve. And in this subset of cases, enterprise blockchain is the answer.

The technology is ready. Enterprise blockchain has been through many proof-of-concept implementations, and we’ve already seen that many organizations have shifted to production deployments. A few notable examples are:

  • Trust Your Supplier Network: 25 major corporate members, from Anheuser-Busch InBev to UPS; in production since September 2019.
  • Foodtrust: launched in August 2017 with ten members; now being used by all major retailers.
  • Honeywell: 50 vendors with storefronts in its new GoDirect Trade marketplace; in its first year, GoDirect Trade processed more than $5 million in online transactions.

However, just because we have the technology doesn’t mean we have the appropriate conditions to solve adoption challenges. A certain set of challenges around network governance has become a “last mile” problem for industry adoption. While there are already many examples of successful production deployments and multi-stakeholder engagements for commercial enterprise blockchains, specific adoption scenarios have been halted over uncertainty, or mistrust, about who will govern a blockchain network and how.

To precisely state the issue, in many situations, company A does not want to be dependent on, or trust, company B to control a network. For specific solutions that require broad industry participation to succeed, you can name any industry, and there will be company A and company B.

We think the solution to this challenge will be Open Governance Networks.

The Linux Foundation vision of the Open Governance Network

An Open Governance Network is a distributed ledger service, composed of nodes, operated under the policies and directions of an inclusive set of industry stakeholders.

Open Governance Networks will set the policies and rules for participation in a decentralized ledger network that acts as an industry utility for transactions and data sharing among participants that have permissions on the network. The Open Governance Network model allows any organization to participate. Those organizations that want to be active in sharing the operational costs will benefit from having a representative say in the policies and rules for the network itself. The software underlying the Open Governance Network will be open source software, including the configurations and build tools so that anyone can validate whether a network node complies with the appropriate policies.

Many who have worked with the Linux Foundation will realize an open, neutral, and participatory governance model under a nonprofit structure that has already been thriving for decades in successful open source software communities. All we’re doing here is taking the same core principles of what makes open governance work for open source software, open standards, and open collaboration and applying those principles to managing a distributed ledger. This is a model that the Linux Foundation has used successfully in other communities, such as the Let’s Encrypt certificate authority.

Our ecosystem members trust the Linux Foundation to help solve this last mile problem using open governance under a neutral nonprofit entity. This is one solution to the concerns about neutrality and distributed control. In pan-industry use cases, it is generally not acceptable for one participant in the network to hold power that could be used as an advantage over someone else in the industry. The control of a ledger is a valuable asset, and competing organizations are generally wary of allowing one entity to control it. If not hosted in a neutral environment for the community’s benefit, network control can become a leverage point over network users.

We see this neutrality of control challenge as the primary reason why some privately held networks have struggled to gain widespread adoption. In order to encourage participation, industry leaders are looking for a neutral governance structure, and the Linux Foundation has proven the open governance models accomplish that exceptionally well.

This neutrality of control issue is very similar to the rationale for public utilities. Because the economic model mirrors that of a public utility, we debated calling these “industry utility networks.” In our conversations, we have learned that industry participants are open to sharing the cost burden to stand up and maintain a utility, but they want a low-cost model, not a profit-maximizing one. That is why our nonprofit model makes the most sense.

It’s also not a public utility in that each network we foresee today would be restricted in participation to those who have a stake in the network, not any random person in the world. There’s a layer of human trust that our communities have been enabling on top of distributed networks, which started with the Trust over IP Foundation.

Unlike public cryptocurrency networks where anyone can view the ledger or submit proposed transactions, industries have a natural need to limit access to legitimate parties in their industry. With minor adjustments to address the need for policies for transactions on the network, we believe a similar governance model applied to distributed ledger ecosystems can resolve concerns about the neutrality of control.
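To make that contrast concrete, here is a minimal Python sketch (a toy illustration of ours, with all names hypothetical, not the Linux Foundation's actual design) of a permissioned append-only ledger: only pre-approved member organizations may submit transactions, unlike a public chain where anyone can.

```python
import hashlib
import json


class PermissionedLedger:
    """Toy append-only hash-chained ledger restricted to approved members."""

    def __init__(self, members):
        self.members = set(members)  # organizations allowed to transact
        self.blocks = []             # chain of (payload, hash) entries

    def _hash(self, payload, prev_hash):
        # Each entry's hash covers its payload plus the previous hash,
        # so earlier entries cannot be altered without detection.
        data = json.dumps(payload, sort_keys=True) + prev_hash
        return hashlib.sha256(data.encode()).hexdigest()

    def submit(self, member, payload):
        if member not in self.members:  # permissioning: reject outsiders
            raise PermissionError(f"{member} is not a network participant")
        prev_hash = self.blocks[-1][1] if self.blocks else "0" * 64
        self.blocks.append((payload, self._hash(payload, prev_hash)))


ledger = PermissionedLedger(members={"company-a", "company-b"})
ledger.submit("company-a", {"shipment": 42})
try:
    ledger.submit("outsider", {"shipment": 99})
except PermissionError:
    print("rejected")  # outsiders cannot write to the ledger
```

The governance question the article raises is exactly who maintains the `members` set in a real network; the Open Governance Network model answers it with a neutral nonprofit rather than any single company.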

Understanding LF Open Governance Networks

Open Governance Networks can be reduced to the following building block components:

  • Business Governance: Networks need a decision-making body to establish core policies (e.g., network policies), make funding and budget decisions, contract with a network manager, and handle other business matters necessary for the network’s success. The Linux Foundation establishes a governing board to manage the business governance.
  • Technical Governance: Networks will require software. A technical open source community will openly maintain the software, specifications, or configuration decisions implemented by the network nodes. The Linux Foundation establishes a technical steering committee to oversee technical projects, configurations, working groups, etc.
  • Transaction Entity: Networks will require a transaction entity that will a) act as counterparty to agreements with parties transacting on the network, b) collect fees from participants, and c) execute contracts for operational support (e.g., hiring a network manager).

Of these building blocks, the Linux Foundation already offers its communities the Business and Technical Governance needed for Open Governance Networks. The final component is the new LF Open Governance Networks entity.

LF Open Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, a nonstock corporation that will maximize utility rather than profit. Through agreements with the Linux Foundation, LF Open Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at

The post Understanding Open Governance Networks appeared first on Linux Foundation.


What’s the next Linux workload that you plan to containerize?

Wednesday 10th of February 2021 04:18:28 AM

What’s the next Linux workload that you plan to containerize?

You’re convinced that containers are a good thing, but what’s the next workload to push to a container?
Tue, 2/9/2021 at 8:18pm


Image by Gerd Altmann from Pixabay

I’m sure many of my fellow sysadmins have been tasked with cutting costs, making infrastructure more usable, making services more accessible, enhancing security, and enabling developers to be more autonomous when working with their test, development, and staging environments. You might have started your virtualization efforts by moving some websites to containers. You might also have moved an application or two. This poll focuses on the next workload that you plan to containerize.

Linux Administration  
Read More at Enable Sysadmin

The post What’s the next Linux workload that you plan to containerize? appeared first on

More in Tux Machines

Today in Techrights

today's howtos

  • Hans de Goede: Changing hidden/locked BIOS settings under Linux

    This all started with a Mele PCG09. Before testing Linux on it, I took a quick look under Windows, where the device-manager showed an exclamation mark next to a Realtek 8723BS bluetooth device, so BT did not work. Under Linux I quickly found out why: the device actually uses a Broadcom Wifi/BT chipset attached over SDIO/a UART for the Wifi resp. BT parts. The UART-connected BT part was described in the ACPI tables with a HID (Hardware-ID) of "OBDA8723", not good. Now I could have easily fixed this with an extra initrd with a DSDT-override, but that did not feel right. There was an option in the BIOS, named "WIFI", which actually controls what HID gets advertised for the Wifi/BT; it was set to "RTL8723", which obviously is wrong, but that option was grayed out. So instead of going for the DSDT-override I really wanted to be able to change that BIOS option and set it to the right value. Some duckduckgo-ing found this blogpost on changing locked BIOS settings.

  • Test Day:2021-05-09 Kernel 5.12.2 on Fedora 34

    All logs report PASSED for each test done and were uploaded as prompted at the instruction page.

  • James Hunt: Can you handle an argument?

    This post explores some of the darker corners of command-line parsing that some may be unaware of. [...] No, I’m not questioning your debating skills, I’m referring to parsing command-lines! Parsing command-line options is something most programmers need to deal with at some point. Every language of note provides some sort of facility for handling command-line options. All a programmer needs to do is skim the docs or grab the sample code, tweak to taste, et voila! But is it that simple? Do you really understand what is going on? I would suggest that most programmers don’t think that much about it. Handling the parsing of command-line options is just something you bolt on to your codebase, and then you move on to the more interesting stuff. Yes, it really does tend to be that easy and everything just works… most of the time. Most? I hit an interesting issue recently which expanded in scope somewhat. It might raise an eyebrow for some or be a minor bombshell for others.

  • 10 Very Stupid Linux Commands [ Some Of Them Deadly ]

    If you are reading this page then, like all of us, you are a Linux fan who uses the command line every day and absolutely loves Linux. But even in love and marriage there are things that make you just a little bit annoyed. In this article we are going to show you some of the most stupid Linux commands a person can find.
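James Hunt's post above hints at the dark corners of command-line parsing. One classic corner (our own illustration, not an example from his post) is the conventional `--` end-of-options marker: without it, any argument beginning with a dash is treated as an option. A short Python sketch with a hypothetical `grep`-like tool:

```python
import argparse

# A hypothetical grep-like tool: a pattern plus zero or more files.
parser = argparse.ArgumentParser(prog="grep-ish")
parser.add_argument("pattern")
parser.add_argument("files", nargs="*")

# A pattern that starts with a dash looks like an option, so parsing fails
# (argparse reports an error and raises SystemExit).
try:
    parser.parse_args(["-foo", "a.txt"])
except SystemExit:
    print("parse failed: '-foo' was treated as an option")

# ...unless the conventional "--" marker ends option parsing first.
args = parser.parse_args(["--", "-foo", "a.txt"])
print(args.pattern, args.files)
```

Shell tools built on `getopt`-style parsers behave the same way, which is why advice like "always pass `--` before untrusted filenames" shows up in hardening guides.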

China Is Launching A New Alternative To Google Summer of Code, Outreachy

The Institute of Software Chinese Academy of Sciences (ISCAS), in cooperation with the Chinese openEuler Linux distribution, has been working on its own project akin to Google Summer of Code and Outreachy for paying university-aged students to become involved in open-source software development. "Summer 2021", as the initiative is simply called, or "Summer 2021 of Open Source Promotion Plan", is providing university-aged students around the world funding by the Institute of Software Chinese Academy of Sciences to work on community open-source projects. It's just like Google Summer of Code but offers different funding levels based upon the complexity of the project -- funding options are 12000 RMB, 9000 RMB, or 6000 RMB. That's roughly $932 to $1,865 USD for students to devote their summer to working on open-source. There are no gender/nationality restrictions with this initiative, but students must be at least eighteen years old. Read more

Kernel: Linux 5.10 and Linux 5.13

  • Linux 5.10 LTS Will Be Maintained Through End Of Year 2026 - Phoronix

    When announced as the latest Long Term Support release, Linux 5.10 was only going to be maintained until the end of 2022, but following enough companies stepping up to help with testing, Linux 5.10 LTS will now be maintained until the end of 2026. Its originally planned support window was shorter than that of prior kernels like Linux 5.4 LTS (maintained until 2024) or even Linux 4.19 LTS and 4.14 LTS (going into 2024), due to the limited number of developers/organizations helping to test new point release candidates and/or committing resources to using this kernel LTS series. But now there are enough participants committing to it that Greg Kroah-Hartman confirmed he, along with Sasha Levin, will maintain the kernel through December 2026.

  • Oracle Continues Working On The Maple Tree For The Linux Kernel

    Oracle engineers have continued working on the "Maple Tree" data structure for the Linux kernel, an RCU-safe, range-based B-tree designed to make efficient use of modern processor caches. Sent out last year was the RFC patch series introducing this new data structure to the Linux kernel and making initial use of it. Sent out last week were the latest 94 patches, now in a post-RFC state, for introducing this data structure.

  • Linux 5.13 Brings Simplified Retpolines Handling - Phoronix

    In addition to work like Linux 5.13 addressing some network overhead caused by Retpolines, this next kernel's return trampoline implementation itself is seeing a simplification. Merged as part of x86/core last week for the Linux 5.13 kernel were enabling PPIN support for Xeon Sapphire Rapids, KProbes improvements, and other minor changes plus simplifying the Retpolines implementation used by some CPUs as part of the Spectre V2 mitigations. The x86/core pull request for Linux 5.13 also re-sorts and better documents Intel's increasingly long list of different CPU cores/models.

  • Linux 5.13 Adds Support For SPI NOR One-Time Programmable Memory Regions - Phoronix

    The Linux 5.13 kernel has initial support for dealing with SPI one-time programmable (OTP) flash memory regions. Linux 5.13 adds new MTD OTP functions for accessing SPI one-time programmable data. OTP areas are memory regions intended to be programmed once and can be used for permanent secure identification, immutable properties, and similar purposes. In addition to adding the core infrastructure support for OTP to the MTD SPI-NOR code in Linux 5.13, the functionality is wired up for Winbond and similar flash memory chips. The MTD subsystem already supported OTP areas, but not for SPI-NOR flash memory.