Linux.com

News For Open Source Professionals

OpenPOWER Foundation Provides Microwatt for Fabrication on Skywater Open PDK Shuttle

Monday 22nd of March 2021 02:17:41 PM

The OpenPOWER-based Microwatt CPU core has been selected for inclusion in the Efabless Open MPW Shuttle Program. Microwatt’s inclusion in the program lowers the barrier to entry for chip manufacturing. It also demonstrates the ability to create fully designed, fabricated chips relying on a complete, end-to-end open source environment – including open governance, specifications, tooling, IP, hardware, software, and manufacturing.

Read more at OpenPOWER Foundation

The post OpenPOWER Foundation Provides Microwatt for Fabrication on Skywater Open PDK Shuttle appeared first on Linux.com.

Liquid Prep intelligent watering solution now hosted by the Linux Foundation as a Call for Code project

Monday 22nd of March 2021 01:00:00 PM

Over the past several decades, farmers have depended increasingly on groundwater to irrigate their crops because of climate change and reduced rainfall. Even in drought-prone areas, farmers continue to grow water-intensive crops because demand for these crops is steady.

In 2019, as part of Call for Code, a team of IBMers came together and brainstormed ideas they were passionate about – problems faced by farmers in developing countries due to more frequent drought conditions. The team designed an end-to-end solution focused on helping farmers gain insight into when to water their crops and on optimizing their water usage to grow healthy crops. This team, Liquid Prep, went on to win the IBM employee Call for Code Global Challenge.

Liquid Prep provides a mobile application that can obtain soil moisture data from a portable soil moisture sensor, fetch weather information from The Weather Company, and access crop data through a service deployed on the IBM Cloud. Their solution brings all this data together, analyzes it, and computes watering guidance to help the farmer decide whether to water their crops right now or conserve water for a better time.

To validate the Liquid Prep prototype, in December 2019, one of the team members traveled to India and interviewed several farmers in the village Nuggehalli, which is near the town Hirisave in the Hassan district of Karnataka, India. The interviews taught the team that the farmers did not have detailed information on when they should water their specific crops and by how much, as they didn’t know the specific needs on a plant-by-plant basis. They also just let the water run freely if the water was available from a nearby source, like a river or stream, and some were entirely dependent on rainfall. The farmers expressed a great interest in the described Liquid Prep solution as it could empower them to make more informed decisions that could improve yields.

A prototype is born

After winning the challenge, the Liquid Prep team took the opportunity to convert the concept into a more complete prototype through an IBM Service Corps engagement. The team was expanded with dedicated IBM volunteers from across the company, who were assigned to optimize Liquid Prep from August through October 2020. During this time the team developed the Minimum Viable Product (MVP) for the mobile solution.

The prototype consists of three primary components:

  • A hardware sensor to measure soil moisture,
  • A highly visual and easy-to-use mobile web application, and
  • A back-end data service to power the app.

It works like this: the mobile web application gets soil moisture data from the soil moisture sensor. The app requests environmental conditions from The Weather Company and crop data from the plant database via the backend service deployed on the IBM Cloud. The app then analyzes this data and computes a watering schedule to help the farmer decide whether to water their crops now or at a later time.
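For illustration only, here is a minimal sketch of the kind of decision logic such a guidance service might apply. The thresholds, field names, and the simple rule below are hypothetical and are not taken from the Liquid Prep codebase.

    # Hypothetical sketch of a watering-guidance rule, NOT Liquid Prep's actual algorithm.
    # Inputs mirror the data sources described above: a soil moisture reading, a weather
    # forecast, and per-crop thresholds fetched from a crop database.

    from dataclasses import dataclass


    @dataclass
    class CropProfile:
        name: str
        min_soil_moisture_pct: float  # below this, the crop is considered water-stressed


    def watering_advice(soil_moisture_pct: float,
                        rain_probability_pct: float,
                        crop: CropProfile) -> str:
        """Return a simple 'water now' / 'wait' recommendation."""
        if soil_moisture_pct >= crop.min_soil_moisture_pct:
            return f"Soil is moist enough for {crop.name}; no watering needed today."
        if rain_probability_pct >= 70:
            return f"Soil is dry, but rain is likely; consider waiting before watering {crop.name}."
        return f"Soil is dry and little rain is expected; water {crop.name} now."


    if __name__ == "__main__":
        maize = CropProfile(name="maize", min_soil_moisture_pct=35.0)  # hypothetical threshold
        print(watering_advice(soil_moisture_pct=22.0, rain_probability_pct=15.0, crop=maize))

The real application layers crop-specific data and richer forecasts on top of this basic idea, but the shape of the decision – sensor reading plus forecast versus a crop threshold – is the same.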

Partners

Liquid Prep has developed a great working relationship with its partners SmartCone Technologies, Inc., and Central New Mexico Community College. Students in the Deep Dive Coding Internet of Things (IoT) Bootcamp at CNM are designing, developing, and producing a robust IoT sensor, housed in a stick-shaped enclosure, that can be inserted into the soil and transfer soil moisture data to the Liquid Prep mobile app via Bluetooth. The collaboration gives students important real-world experience before they enter the workforce.

“SmartCone is honored to be part of this project. This is a perfect example of technology teams working together to help make the world a better place,” said Jason Lee, Founder & CEO, SmartCone Technologies Inc.

Additionally, Liquid Prep will work together with J&H Nixon Farms, which grows mainly soybeans and corn on about 2,800 acres of agricultural land in Ottawa, Canada. They have offered Liquid Prep the opportunity to pilot test the prototype on several plots of land with different soil conditions, which in turn can expand the breadth of recommendation options to a larger number of potential users.

Now available as open source

Liquid Prep is now available as an open source project hosted by the Linux Foundation. The goal of the project is to help farmers globally farm their crops with the least amount of water by taking advantage of real-time information that can help improve sustainability and build resiliency to climate change.

Participation is welcomed from software developers, designers, testers, agronomists/agri experts/soil experts, IoT engineers, researchers, students, farmers, and others who can help improve the quality and value of the solution for small farmers around the world. Key areas the team is interested in developing include localizing the mobile app, incorporating soil properties to improve the watering advice, updating project documentation, software and hardware testing, more in-depth research, and adding more crop data to the database.

Get involved in Liquid Prep now at Call For Code

The post Liquid Prep intelligent watering solution now hosted by the Linux Foundation as a Call for Code project appeared first on Linux.com.

SEAPATH: A Software Driven Open Source Project for the Energy Sector

Friday 19th of March 2021 03:55:03 PM

LF Energy recently announced a new project called SEAPATH, or Software Enabled Automation Platform and Artifacts (THerein). It’s the second project by the foundation in its Digital Substation Automation Systems (DSAS) initiative. SEAPATH will provide a reference design and a real-time, open-source platform for grid operators to run virtualized automation and protection applications. In this interview, Dr. Shuli Goodman, Executive Director of LF Energy, and Lucian Balea, R&D Program Director and open source manager at RTE, joined Swapnil Bhartiya to talk about the project.

The post SEAPATH: A Software Driven Open Source Project for the Energy Sector appeared first on Linux.com.

Linux Foundation Support for Asian Communities

Friday 19th of March 2021 07:00:00 AM

The Linux Foundation and its communities are deeply concerned about the rise in attacks against Asian Americans and condemn this violence. It is devastating to hear over and over again of the attacks and vitriol against Asian communities, which have increased substantially during the pandemic. 

We stand in support of all those who have experienced this hate, and with the families of those who have been killed as a result. Racism, intolerance and inequality have no place in the world, our country, the tech industry or in open source communities.

We firmly believe that we are all at our best when we work together, treat each other with respect and equality and without hate or vitriol.

The post Linux Foundation Support for Asian Communities appeared first on Linux.com.

Cloud Native Training & Certification from The Linux Foundation & CNCF

Thursday 18th of March 2021 09:59:40 PM

There’s no question that cloud computing skills are in demand, and knowing cloud can help you secure a lucrative career. In fact, the 2020 Open Source Jobs Report from The Linux Foundation and edX found that knowledge of cloud skills has the biggest impact on hiring decisions. LinkedIn also named cloud computing the second most in-demand hard skill of 2020. And a recent D2iQ study found “only 23% of organizations believe they have the talent required to successfully complete their cloud native journey”.

Closing this talent gap is why Linux Foundation Training & Certification has partnered with the Cloud Native Computing Foundation (CNCF) – the home of Kubernetes, Linkerd, Prometheus, Helm and other widely used cloud native technologies – on a variety of programs to make quality cloud native learning more accessible. The hope is that providing more educational opportunities will give those aspiring to work in the field a greater ability to do so.

This article examines the various training courses available to learn about cloud native technologies, as well as certification exams to demonstrate that knowledge. Sample learning paths are also included as a guide to getting started and proceeding on the cloud native learning journey.

Free Training Courses

Our catalog of free cloud native courses is offered on the non-profit edX platform, and provides the fundamentals necessary to move on to more advanced training. These courses are all taken online and are self-paced. Note you can audit each course at no cost for between six and fourteen weeks depending on the course, so we recommend completing each course you are interested in before moving on to the next to ensure you do not run out of time.

  • Introduction to Cloud Infrastructure Technologies (LFS151) – This class is designed for people who have little or no prior experience with cloud technologies. Upon completion, students will possess an understanding of cloud computing and the use of open source software to maximize development and operations.
  • Introduction to Kubernetes (LFS158) – This course is for teams considering or beginning to use Kubernetes for container orchestration who need guidelines on how to start transforming their organization with Kubernetes and cloud native patterns.
  • Introduction to Service Mesh with Linkerd (LFS143) – This course is designed for site reliability engineers, DevOps professionals, cluster administrators, and developers and teaches them to use the Linkerd CLI and UI to deploy and operate Linkerd, as well as to secure, observe, and add reliability to Kubernetes applications.
  • Introduction to Serverless on Kubernetes (LFS157) – The course is designed for developers and IT operators, and explains how a serverless approach works in tandem with a Kubernetes cluster, along with the potential of serverless functions.
  • Exploring GraphQL: A Query Language for APIs (LFS141) – This course is for both management and technical teams involved in the building and management of websites and provides the skills to get started using GraphQL for a small project or professionally in production.
  • Building Microservice Platforms with TARS (LFS153) – This course is designed for engineers working in microservices, as well as enterprise managers interested in exploring internal technical architectures, and helps them to understand microservices architecture and how to quickly build stable and reliable applications based on TARS.
  • Introduction to Cloud Foundry and Cloud Native Software Architecture (LFS132) – This course is for teams that either use or would like to use Cloud Foundry to deploy applications, preparing them to deliver business value quickly, without wasting time getting apps to the cloud.
  • Introduction to FinOps (LFS175) – This course is addressed to a wide audience including technical professionals, finance, procurement, and accounting professionals, business unit or product managers, and executives, helping them understand how to build a culture of accountability around cloud use that lets their organization make good, timely, data-backed decisions in the cloud, not just to save money, but to make money.

eLearning Courses

Self-paced, eLearning courses provide a way to learn intermediate to advanced skills in a low pressure environment. These courses are offered for a fee and include access for a full year. Some courses can also be bundled with a certification exam for a discounted price.

  • Containers Fundamentals (LFS253) – This course provides the knowledge needed to work with containers to bundle an application with all its dependencies and deploy it on the platform of your choice, be it bare metal, a VM, or the cloud.
  • Kubernetes Fundamentals (LFS258) – This course provides a strong operating knowledge of Kubernetes, including how to deploy a containerized application and manipulate resources via the API. It also serves as preparation for the Certified Kubernetes Administrator (CKA) exam.
  • Kubernetes for Developers (LFD259) – This course is ideal for developers who are looking to gain skills in Kubernetes application development. It also serves as preparation for the Certified Kubernetes Application Developer (CKAD) exam.
  • Monitoring Systems and Services with Prometheus (LFS241) – This course leads new Prometheus users through many of its major features, best practices, and use cases. It covers aspects including setting up and using Prometheus, monitoring components and services, querying, alerting, using Prometheus with Kubernetes and more.
  • Cloud Native Logging with Fluentd (LFS242) – This course explores the full range of Fluentd features, from installing Fluentd to running Fluentd in a container, and from using Fluentd as a simple log forwarder to using it as a sophisticated log aggregator and processor.
  • Service Mesh Fundamentals (LFS243) – This course introduces the challenges of distributed systems, strategies for managing these challenges, and the architecture of service meshes. It also covers key concepts such as data plane vs. control plane and the evolution of ingress.
  • Managing Kubernetes Applications with Helm (LFS244) – This course covers the history of the Helm project and its architecture, how to properly install the Helm client, the various components of a Helm chart and how to create one, the command-line actions used for managing an application’s lifecycle, and much more.
  • Containers for Developers and Quality Assurance (LFD254) – This course can help everyone involved in the application lifecycle, be it developers, quality assurance engineers, or operations engineers, by preparing them to deploy a containerized application in production from a workstation.
  • Kubernetes Security Essentials (LFS260) – This course provides the knowledge and skills needed to maintain security in dynamic, multi-project environments. It also serves as preparation for the Certified Kubernetes Security Specialist (CKS) exam.
  • Cloud Foundry for Developers (LFD232) – This course teaches developers how to use Cloud Foundry to build, deploy and manage a cloud native microservice solution. It also serves as preparation for the Cloud Foundry Certified Developer (CFCD) exam.
  • DevOps and SRE Fundamentals: Implementing Continuous Delivery (LFS261) – This course introduces the fundamentals of CI/CD within an open container ecosystem, explaining how to deliver features rapidly, while at the same time being able to achieve non-functional requirements such as availability, reliability, scalability, security, and more.

Instructor-Led Training Courses

Instructor-led training is ideal for teams or for those who need hands-on support to gain the skills needed for a job role. These courses take place either in person or virtually at a pre-scheduled time, with a live instructor leading. 

  • Kubernetes Administration (LFS458) – This course is ideal for those wishing to manage a containerized application infrastructure. This includes existing IT administrators, as well as those looking to start a cloud career. It also serves as preparation for the Certified Kubernetes Administrator (CKA) exam.
  • Kubernetes for App Developers (LFD459) – This course will teach you how to containerize, host, deploy, and configure an application in a multi-node cluster. It also serves as preparation for the Certified Kubernetes Application Developer (CKAD) exam.
  • Kubernetes Security Fundamentals (LFS460) – This course provides skills and knowledge across a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment, and runtime. It also serves as preparation for the Certified Kubernetes Security Specialist (CKS) exam.

Certification Exams

Our certification exams demonstrate that an individual possesses the skills to be effective in a given cloud-native role. Most exams are performance-based, presenting tasks to be completed in a real-world environment. Upon passing the exam, you receive a certificate and digital badge which can be independently verified at any time by current or potential employers using our online verification tool.

  • Certified Kubernetes Administrator (CKA) – A certified K8s administrator has demonstrated the ability to perform basic installation as well as configure and manage production-grade Kubernetes clusters. They will have an understanding of key concepts such as Kubernetes networking, storage, security, maintenance, logging and monitoring, application lifecycle, troubleshooting, and API object primitives, as well as the ability to establish basic use cases for end users.
  • Certified Kubernetes Application Developer (CKAD) – A k8s certified application developer can design, build, configure and expose cloud native applications for Kubernetes, as well as define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes.
  • Certified Kubernetes Security Specialist (CKS) – Obtaining a CKS demonstrates a candidate possesses the requisite abilities to secure container-based applications and Kubernetes platforms during build, deployment and runtime, and is qualified to perform these tasks in a professional setting.
  • Cloud Foundry Certified Developer (CFCD) – CFCD is ideal for candidates who want to validate their skill set using the Cloud Foundry platform to deploy and manage applications.
  • FinOps Certified Practitioner (FOCP) – A FOCP will bring a strong understanding of FinOps, its principles, capabilities and how to support and manage the FinOps lifecycle to manage cost and usage of cloud in an organization. 

Other Resources

Sample Learning Paths

The post Cloud Native Training & Certification from The Linux Foundation & CNCF appeared first on Linux.com.

KernelCI: Looking back, looking forward

Wednesday 17th of March 2021 06:02:25 PM

2020 was the first year of the KernelCI project under the Linux Foundation and has been an interesting one. Maybe slightly less “interesting” than the rest of the world-changing events of 2020, but it’s still been an adventure. This article aims to give a quick summary of the major milestones of the first year of the KernelCI project, and highlight our goals for the next year.

Read the original post on the KernelCI blog.

The post KernelCI: Looking back, looking forward appeared first on Linux.com.

Certified Hyperledger Fabric Developer (CHFD) Exam Has Relaunched

Tuesday 16th of March 2021 09:30:19 PM

The Certified Hyperledger Fabric Developer (CHFD) exam, which initially launched a year ago and enables candidates to demonstrate the knowledge to develop and maintain client applications and smart contracts using the latest Fabric programming model, is once again available for scheduling. The exam had been paused pending updates of the exam content to align with the most recent Long Term Support (LTS) version of Fabric, v2.2. 

Holding this certification provides confidence to supervisors and hiring managers that a team member or job candidate possesses the necessary skills to package and deploy Fabric applications and smart contracts, perform end-to-end Fabric application life-cycle and smart contract management, and more. The CHFD exam uses Node.js as the platform for both client applications and smart contracts.

As with other Linux Foundation certification exams, CHFD can be taken remotely from a candidate’s home or workplace. Candidates can choose an English- or Japanese-speaking proctor who will verify the candidate’s identity and monitor them during the exam via webcam – the exam questions are available in both English and Japanese within the exam environment. The two-hour exam is an online, performance-based test that consists of a set of tasks or problems to be solved in a Web IDE and on the command line.

CHFD and CHFD-JP are available for immediate registration. For those needing assistance preparing for the exam, the LFD272 – Hyperledger Fabric for Developers training course is also available and covers topics such as how to implement and test a chaincode in Golang for any use case, manage the chaincode life cycle, create Node.js client applications that interact with Hyperledger Fabric networks, control access to information based on user identity, set up and use private data collections, and much more.

The post Certified Hyperledger Fabric Developer (CHFD) Exam Has Relaunched appeared first on Linux.com.

Generating a Software Bill of Materials (SBOM) with Open Source Standards and Tooling

Tuesday 16th of March 2021 08:00:00 PM

Every month there seems to be a new software vulnerability showing up on social media, which causes open source program offices and security teams to start querying their inventories to see whether the FOSS components they use put their organizations at risk.

Frequently this information is not available in a consistent format within an organization for automatic querying and may result in a significant amount of email and manual effort. By exchanging software metadata in a standardized software bill of materials (SBOM) format between organizations, automation within an organization becomes simpler, accelerating the discovery process and uncovering risk so that mitigations can be considered quickly. 

In the last year, we’ve also seen standards like OpenChain (ISO/IEC 5230:2020) gain adoption in the supply chain. Customers have started asking for a bill of materials from their suppliers as part of negotiation and contract discussions to conform to the standard. OpenChain focuses on ensuring that there is sufficient information for license compliance, and as a result, expects metadata for the distributed components as well. A software bill of materials can be used to support the systematic review and approval of each component’s license terms, clarifying the obligations and restrictions that apply to the distribution of the supplied software and reducing risk.

Kate Stewart, VP, Dependable Embedded Systems, The Linux Foundation, will host a complimentary mentorship webinar entitled Generating Software Bill Of Materials on Thursday, March 25 at 7:30 am PST. This session will work through the minimum elements included in a software bill of materials and detail the reasoning behind why those elements are included. To register, please click here

There are many ways this software metadata can be shared. The common SBOM document format options (SPDX, SWID, and CycloneDX) will be reviewed so that participants, especially those just starting out, can better understand what is available.
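As a rough illustration of why a standardized format helps automation, the sketch below emits a minimal SPDX-style tag-value fragment for two components and then queries it for an affected package. The component data is invented, only a handful of fields are shown, and a real SBOM carries many more required fields – consult the SPDX specification for the full set.

    # Minimal, illustrative SPDX-style tag-value output and query.
    # The component list is made up; field coverage is intentionally incomplete.

    components = [
        {"name": "openssl", "version": "1.1.1j", "license": "Apache-2.0"},
        {"name": "zlib", "version": "1.2.11", "license": "Zlib"},
    ]


    def to_spdx_tag_value(pkgs):
        """Render a small subset of SPDX 2.x tag-value fields for each package."""
        lines = ["SPDXVersion: SPDX-2.2", "DataLicense: CC0-1.0", "DocumentName: example-sbom"]
        for p in pkgs:
            lines += [
                "",
                f"PackageName: {p['name']}",
                f"PackageVersion: {p['version']}",
                f"PackageLicenseDeclared: {p['license']}",
            ]
        return "\n".join(lines)


    def affected(pkgs, name, bad_versions):
        """Return packages matching a vulnerability advisory (name + affected versions)."""
        return [p for p in pkgs if p["name"] == name and p["version"] in bad_versions]


    if __name__ == "__main__":
        print(to_spdx_tag_value(components))
        print("\nAffected:", affected(components, "openssl", {"1.1.1j", "1.1.1k"}))

Because every supplier emits the same fields, the "are we affected?" query becomes a simple mechanical lookup instead of an email thread.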

This mentorship session will work through some simple examples and then guide where to find the next level of details and further references. 

At the end of this session, participants will be on a secure footing and a path towards the automated generation of SBOMs as part of their build and release processes in the future. 

The post Generating a Software Bill of Materials (SBOM) with Open Source Standards and Tooling appeared first on Linux.com.

The TARS Foundation Celebrates its First Anniversary

Friday 12th of March 2021 08:01:43 PM

The TARS Foundation, an open source microservices foundation under the Linux Foundation, celebrated its first anniversary on March 10, 2021. As we all know, 2020 was a strange year, and we are all adjusting to the new normal. Meanwhile, despite being unable to meet in person, the TARS Foundation community is connected, sharing and working together virtually toward our goals. 

This year, four new projects have joined the TARS Foundation, expanding our technical community. The TARS Foundation launched TARS Landscape in July 2020, presenting an ideal and complete microservice ecosystem, which is the vision that the TARS open source community works to achieve. Furthermore, we welcome more open source projects to join the TARS community and go through our incubation process.

A view of the TARS open source project landscape

In September 2020, The Linux Foundation and the TARS Foundation released a new, free training course, Building Microservice Platforms with TARS, on the edX platform. This course is designed for engineers working in microservices and for enterprise managers interested in exploring internal technical architectures for digital transformation in traditional industries. The course explains the functions, characteristics, and structure of the TARS microservices framework while demonstrating how to deploy and maintain services in different programming languages in the TARS Framework. In addition, anyone interested in software architecture will benefit from this course.

If you are interested in TARS training resources, please check out Building Microservice Platforms with TARS on edX.

Thanking our Members and Contributors

For more updates from the TARS Foundation, please read our Annual Report 2020.

We would like to thank all our projects and project contributors. Thank you for your trust in the TARS Foundation. Without you and the value you bring to our entire community, our foundation would not exist. 

We also want to thank our Governing Board, Technical Oversight Committee, Outreach Committee, and Community Advisor members! Every member has demonstrated their dedication and tireless efforts to ensure that the TARS Foundation is building a complete governance structure to push out a more comprehensive range of programs and make real progress. With the guidance of these passionate and wise leaders from our governing bodies, the TARS Foundation is well positioned to become a neutral home for additional projects that solve critical problems surrounding microservices.

Thank you to all our members, Arm, Tencent, AfterShip, Ampere, API7, Kong, Zenlayer, and Nanjing University, for investing in the future of open source microservices. The TARS Foundation welcomes more companies and organizations to join our mission by becoming members.

Thank you to our end users! The TARS Foundation End User Community Plan was released to allow more companies to get involved with the TARS community. The purpose of the plan is to enable an open and free platform for communication and discussion about microservices technology and collaboration opportunities. Currently, the TARS Foundation has eight end-user companies, and we welcome more companies to join us as End Users.

What is next?

The TARS Foundation will continue to add more members and end-user companies in the next year while growing our shared resource pool for the benefit of our community. We will also look to include and incubate more projects, aiding our open source microservices ecosystem to empower any industry to turn ideas into applications at scale quickly. As part of our plan for next year, we aim to hold recurring meetup events worldwide and large-scale summits, creating a space for global developers to learn and exchange their ideas about microservices. 

Words from our partners

Kevin Ryan, Senior Director, Arm

“Through our collaboration with the TARS Foundation and Tencent, we’ve leveraged a significant opportunity to build and develop the microservices ecosystem,” said Kevin Ryan, senior director of Ecosystem, Automotive and IoT Line of Business, Arm. “We look forward to future growth across the TARS community as contributions, members, and momentum continue to accelerate.”

Mark Shan, Open Source Alliance Chair, Tencent

As TARS Foundation turns one year old, Tencent will continue to collaborate with partners and build an open and free microservices ecosystem in open source. By consistently upgrading microservices technology and cultivating the TARS community, we look forward to creating more innovations and making social progress through technology.

Teddy Chan, CEO & Co-Founder, AfterShip

Best wishes to the TARS Foundation for turning one year old and continuing its positive influence on microservices. AfterShip will fully support the future development of the Foundation!

Mauri Whalen, VP of Software Engineering, Ampere

Ampere has been partnering with the TARS Foundation to drive innovation for microservices. Ampere understands the importance of this technology and is committed to providing Ampere/Arm64 Platform support and a performance testing framework for building the open microservices community. We are excited the TARS Foundation has reached its first birthday milestone. Their project is driving needed innovation for modern cloud workloads.

Ming Wen, Co-founder, API7

Congratulations to the first anniversary of the TARS Foundation! With the wave of enterprise digital transformation, microservices have become the infrastructure for connecting critical traffic. The TARS Foundation has gathered several well-known open source projects related to microservices, including the APISIX-based open source microservice gateway provided by api7.ai. We believe that under the TARS Foundation’s efforts, microservices and the TARS Foundation will play an increasingly important role in digital transformation.

Marco Palladino, CTO and Co-Founder, Kong

In this new era driven by digital transformation 2.0, organizations around the world are transforming their applications to microservices to grow their customer base faster, enter new markets, and ship products faster. None of this would be possible without agile, distributed, and decoupled architectures that drive innovation, efficiency, and reliability in our digital strategy: in one word, microservices. Kong supports the TARS Foundation to accelerate microservices adoption in both open source ecosystems and the enterprise landscape, and to provide a modern connectivity fabric for all our services, across every cloud and platform.

Jim Xu, Principal Engineer & Architect, Zenlayer

Microservices are the next big thing in the cloud as they enable fast development, scaling, and time-to-market of enterprise applications. TARS Foundation leads in building a strong ecosystem for open-source microservices, from the edge to the cloud. As a leading-edge cloud service provider, Zenlayer is committed to enabling microservices in multi-cloud and hybrid cloud scenarios in collaboration with the TARS Foundation community. As the TARS Foundation enters its second year, Zenlayer will continue to innovate in infrastructure, platforms, and labs to empower microservice implementation for enterprises of all kinds.

He Zhang, Professor, Nanjing University

We fully support the development of microservices and the mission to co-build a Cloud-native ecosystem. Embracing open source and community contribution, we believe the TARS Foundation is creating a future with endless possibilities ahead. 

About the TARS Foundation

The TARS Foundation is a nonprofit, open source microservice foundation under the Linux Foundation umbrella to support the rapid growth of contributions and membership for a community focused on building an open microservices platform. It focuses on open source technology that helps businesses to embrace the microservices architecture as they innovate into new areas and scale their applications. For more information, please visit tarscloud.org.

The post The TARS Foundation Celebrates its First Anniversary appeared first on Linux.com.

Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact

Thursday 11th of March 2021 05:00:31 PM

By Matt Zand

Recap

In our two previous articles, first we covered “Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy” where we discussed the following five Hyperledger Distributed Ledger Technologies (DLTs):

  1. Hyperledger Indy
  2. Hyperledger Fabric
  3. Hyperledger Iroha
  4. Hyperledger Sawtooth
  5. Hyperledger Besu

Then, we moved on to our second article (Review of three Hyperledger Tools- Caliper, Cello and Avalon) where we surveyed the following three Hyperledger tools:

  1. Hyperledger Caliper
  2. Hyperledger Cello
  3. Hyperledger Avalon

In this follow-up article, we review the four Hyperledger libraries listed below, which work very well with other Hyperledger DLTs. As of this writing, all of these libraries are at the incubation stage except for Hyperledger Aries, which has graduated to active status.

  1. Hyperledger Aries
  2. Hyperledger Quilt
  3. Hyperledger Ursa
  4. Hyperledger Transact

Hyperledger Aries

Identity has been adopted by the industry as one of the most promising use cases of DLTs. Solutions and initiatives around creating, storing, and transmitting verifiable digital credentials will result in a reusable, shared, interoperable toolkit. In response to such growing demand, Hyperledger has come up with three projects (Hyperledger Indy, Hyperledger Iroha and Hyperledger Aries) that are specifically focused on identity management.

Hyperledger Aries is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet (the secure storage tech, not a UI) for blockchain clients as well as a communications protocol for allowing off-ledger interactions between those clients. This project consumes the cryptographic support provided by Hyperledger Ursa to provide secure secret management and decentralized key management functionality.

According to Hyperledger Aries’ documentation, Aries includes the following features:

  • An encrypted messaging system for off-ledger interactions using multiple transport protocols between clients.
  • A blockchain interface layer, also called a resolver, which is used for creating and signing blockchain transactions.
  • A cryptographic wallet to enable secure storage of cryptographic secrets and other information that is used for building blockchain clients.
  • An implementation of ZKP-capable W3C verifiable credentials with the help of the ZKP primitives that are found in Hyperledger Ursa.
  • A mechanism to build API-like use cases and higher-level protocols based on secure messaging functionality.
  • An implementation of the specifications of the Decentralized Key Management System (DKMS) that are being currently incubated in Hyperledger Indy.
  • Initially, the generic interface of Hyperledger Aries will support the Hyperledger Indy resolver. But the interface is flexible in the sense that anyone can build a pluggable method using DID method resolvers such as Ethereum and Hyperledger Fabric, or any other DID method resolver they wish to use. These resolvers would support the resolving of transactions and other data on other ledgers.
  • Hyperledger Aries will additionally provide functionality and features beyond the scope of the Hyperledger Indy ledger, to be fully planned and supported by the community. Owing to these capabilities, the community can now build core message families to facilitate interoperable interactions across a wide range of use cases that involve blockchain-based identity.

For more detailed discussion on its implementation, visit the link provided in the References section.
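To make the pluggable resolver interface described in the feature list above more concrete, here is a small conceptual sketch. It is not Aries’ actual API; the DID methods, return values, and registry are purely illustrative stand-ins for resolvers that would, in practice, query a ledger.

    # Conceptual sketch of a pluggable DID-method resolver registry.
    # This is NOT the Hyperledger Aries API; it only illustrates the idea that
    # resolvers for different DID methods (Indy, Ethereum, Fabric, ...) can be
    # plugged into a single generic interface.

    from typing import Callable, Dict

    Resolver = Callable[[str], dict]
    _registry: Dict[str, Resolver] = {}


    def register_resolver(method: str, resolver: Resolver) -> None:
        _registry[method] = resolver


    def resolve(did: str) -> dict:
        """Dispatch 'did:<method>:<identifier>' to the registered resolver."""
        try:
            _, method, identifier = did.split(":", 2)
        except ValueError:
            raise ValueError(f"not a valid DID: {did}")
        if method not in _registry:
            raise KeyError(f"no resolver registered for did:{method}")
        return _registry[method](identifier)


    # Illustrative resolvers that would normally query a ledger.
    register_resolver("indy", lambda ident: {"id": f"did:indy:{ident}", "ledger": "Hyperledger Indy"})
    register_resolver("ethr", lambda ident: {"id": f"did:ethr:{ident}", "ledger": "Ethereum"})

    if __name__ == "__main__":
        print(resolve("did:indy:123abc"))
        print(resolve("did:ethr:0xdeadbeef"))

The point is the dispatch pattern: application code resolves any DID through one interface, while new DID methods are added by registering another resolver rather than by changing the caller.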

Hyperledger Quilt

The widespread adoption of blockchain technology by global businesses has coincided with the emergence of tons of isolated and disconnected networks or ledgers. While users can easily conduct transactions within their own network or ledger, they face technical difficulty (and in some cases impracticality) in transacting with parties residing on different networks or ledgers. At best, the process of cross-ledger (or cross-network) transactions is slow, expensive, or manual. However, with the advent and adoption of the Interledger Protocol (ILP), money and other forms of value can be routed, packetized, and delivered over ledgers and payment networks.

Hyperledger Quilt is a tool for interoperability between ledger systems; it is written in Java and implements the ILP for atomic swaps. While Interledger is a protocol for making transactions across ledgers, ILP is a payment protocol designed to transfer value across non-distributed and distributed ledgers. The standards and specifications of the Interledger protocol are governed by the open-source community under the World Wide Web Consortium umbrella. Quilt is an enterprise-grade implementation of the ILP and provides libraries and reference implementations for the core Interledger components used for payment networks. With the launch of Quilt, the JavaScript (Interledger.js) implementation of Interledger was maintained by the JS Foundation.
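The routing idea behind ILP can be pictured as a toy longest-prefix match over hierarchical, dot-separated addresses. The addresses and routing table below are invented for illustration; real Interledger connectors implement far more (packet formats, quoting, settlement), and this sketch is not Quilt’s Java API.

    # Toy illustration of connector routing over hierarchical ILP-style addresses.
    # Addresses and routes are invented; real ILP connectors do much more than this.

    routing_table = {
        "g.bank-a": "peer-1",
        "g.bank-a.retail": "peer-2",
        "g.bank-b": "peer-3",
    }


    def next_hop(destination: str) -> str:
        """Pick the peer with the longest matching address prefix."""
        best = None
        for prefix, peer in routing_table.items():
            if destination == prefix or destination.startswith(prefix + "."):
                if best is None or len(prefix) > len(best[0]):
                    best = (prefix, peer)
        if best is None:
            raise LookupError(f"no route to {destination}")
        return best[1]


    if __name__ == "__main__":
        print(next_hop("g.bank-a.retail.alice"))  # -> peer-2 (most specific prefix wins)
        print(next_hop("g.bank-b.corp.acme"))     # -> peer-3

Hierarchical addressing is what lets connectors forward a payment hop by hop without every ledger knowing about every other ledger.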

According to the Quilt documentation, as a result of ILP implementation, Quilt offers the following features:

  • A framework to design higher-level use-case specific protocols.
  • A set of rules to enable interoperability with basic escrow semantics.
  • A standard data packet format and a ledger-independent address format that enable connectors to route payments.

For more detailed discussion on its implementation, visit the link provided in the References section.

Hyperledger Ursa

Hyperledger Ursa is a shared cryptographic library that enables people (and projects) to avoid duplicating other cryptographic work and hopefully increase security in the process. The library is an opt-in repository for Hyperledger projects (and potentially others) to place and use crypto.

Inside Project Ursa, a complete library of modular signatures and symmetric-key primitives is at the disposal of developers, who can swap different cryptographic schemes in and out through configuration and without having to modify their code. On top of its base library, Ursa also includes newer cryptography, including pairing-based, threshold, and aggregate signatures. Furthermore, zero-knowledge primitives, including SNARKs, are also supported by Ursa.
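Since Ursa itself is consumed from Rust and other languages, the sketch below only illustrates the configuration-driven "swap a scheme without changing application code" idea, using standard-library HMAC constructions as stand-ins for real signature schemes. It is not Ursa’s API, and the scheme names and config shape are assumptions made for the example.

    # Conceptual illustration of selecting a crypto primitive via configuration,
    # so application code never hard-codes a specific scheme. Stdlib HMAC variants
    # stand in for real signature schemes here; this is NOT Hyperledger Ursa's API.

    import hashlib
    import hmac

    SCHEMES = {
        "hmac-sha256": hashlib.sha256,
        "hmac-sha512": hashlib.sha512,
    }


    def sign(config: dict, key: bytes, message: bytes) -> bytes:
        digest = SCHEMES[config["scheme"]]          # scheme chosen by configuration
        return hmac.new(key, message, digest).digest()


    def verify(config: dict, key: bytes, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(config, key, message), tag)


    if __name__ == "__main__":
        config = {"scheme": "hmac-sha256"}          # swap to "hmac-sha512" without code changes
        tag = sign(config, b"secret-key", b"transaction-payload")
        print(verify(config, b"secret-key", b"transaction-payload", tag))  # True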

According to Ursa’s documentation, Ursa offers the following benefits:

  • Preventing duplication of work in solving similar security requirements across different blockchains.
  • Simplifying security audits of cryptographic operations, since the code is consolidated in a single location. This reduces the maintenance effort for these libraries while improving the security footprint for developers with beginner-level knowledge of distributed ledger projects.
  • Reviewing all cryptographic code in a single place, which reduces the likelihood of dangerous security bugs.
  • Boosting cross-platform interoperability when multiple platforms that require cryptographic verification use the same security protocols.
  • Enhancing the architecture via modularity of common components, paving the way for future modular distributed ledger technology platforms.
  • Accelerating time to market for new projects, as an existing security paradigm can be plugged in without a project needing to build it itself.

For more detailed discussion on its implementation, visit the link provided in the References section.

Hyperledger Transact

Hyperledger Transact, in a nutshell, makes writing distributed ledger software easier by providing a shared software library that handles the execution of smart contracts, including all aspects of scheduling, transaction dispatch, and state management. With Transact, smart contracts can be executed irrespective of the DLT being used. Specifically, Transact achieves this by offering an extensible approach to implementing new smart contract languages called “smart contract engines.” Each smart contract engine implements a virtual machine or interpreter that processes smart contracts.

At its core, Transact is solely a transaction processing system for state transitions. State data is normally stored in a key-value or an SQL database. Given an initial state and a transaction, Transact executes the transaction to produce a new state. These state transitions are deemed “pure” because only the initial state and the transaction are used as input (in contrast to other systems, such as Ethereum, where state and block information are mixed to produce the new state). Therefore, Transact is agnostic about DLT framework features other than transaction execution and state.
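The "pure" state-transition model can be pictured in a few lines: given only a key-value state and a transaction, produce a new state. The transaction shapes and the trivial serial scheduler below are invented for illustration and are not Transact’s Rust API.

    # Illustration of a pure state transition: new_state = apply(state, transaction).
    # Only the initial state and the transaction are inputs; nothing else is consulted.
    # Transaction shapes are invented; this is not Hyperledger Transact's actual API.

    def apply_transaction(state: dict, txn: dict) -> dict:
        new_state = dict(state)                      # never mutate the input state
        if txn["type"] == "set":
            new_state[txn["key"]] = txn["value"]
        elif txn["type"] == "increment":
            new_state[txn["key"]] = new_state.get(txn["key"], 0) + txn["amount"]
        else:
            raise ValueError(f"unknown transaction type: {txn['type']}")
        return new_state


    def serial_schedule(state: dict, txns: list) -> dict:
        """A trivial serial 'scheduler': fold transactions over the state in order."""
        for txn in txns:
            state = apply_transaction(state, txn)
        return state


    if __name__ == "__main__":
        genesis = {}
        final = serial_schedule(genesis, [
            {"type": "set", "key": "owner", "value": "alice"},
            {"type": "increment", "key": "balance", "amount": 10},
        ])
        print(final)  # {'owner': 'alice', 'balance': 10}

Because each transition depends only on its inputs, independent transactions can also be scheduled in parallel, which is exactly the role of the scheduler and executor components described next.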

According to Hyperledger Transact’s documentation, Transact comes with the following components:

  • State. The Transact state implementation provides get, set, and delete operations against a database. For the Merkle-Radix tree state implementation, the tree structure is implemented on top of LMDB or an in-memory database.
  • Context manager. In Transact, state reads and writes are scoped (sandboxed) to a specific “context” that contains a reference to a state ID (such as a Merkle-Radix state root hash) and one or more previous contexts. The context manager implements the context lifecycle and services the calls that read, write, and delete data from state.
  • Scheduler. This component controls the order of transactions to be executed. Concrete implementations include a serial scheduler and a parallel scheduler. Parallel transaction execution is an important innovation for increasing network throughput.
  • Executor. The Transact executor obtains transactions from the scheduler and executes them against a specific context. Execution is handled by sending the transaction to specific execution adapters (such as ZMQ or a static in-process adapter) which, in turn, send the transaction to a specific smart contract.
  • Smart Contract Engines. These components provide the virtual machine implementations and interpreters that run the smart contracts. Examples of engines include WebAssembly, the Ethereum Virtual Machine, Sawtooth Transaction Processors, and Fabric Chaincode.

For more detailed discussion on its implementation, visit the link provided in the References section.

Summary

In this article, we reviewed four Hyperledger libraries that are great resources for managing Hyperledger DLTs. We started by explaining Hyperledger Aries, which is infrastructure for blockchain-rooted, peer-to-peer interactions and includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interactions between those clients. Then, we learned that Hyperledger Quilt is the interoperability tool between ledger systems and is written in Java by implementing the ILP for atomic swaps. While the Interledger is a protocol for making transactions across ledgers, ILP is a payment protocol designed to transfer value across non-distributed and distributed ledgers. We also discussed that Hyperledger Ursa is a shared cryptographic library that would enable people (and projects) to avoid duplicating other cryptographic work and hopefully increase security in the process. The library would be an opt-in repository for Hyperledger projects (and, potentially others) to place and use crypto. We concluded our article by reviewing Hyperledger Transact by which smart contracts can be executed irrespective of DLTs being used. Specifically, Transact achieves that by offering an extensible approach to implementing new smart contract languages called “smart contract engines.”

References

For more references on all Hyperledger projects, libraries and tools, visit the below documentation links:

  1. Hyperledger Indy Project
  2. Hyperledger Fabric Project
  3. Hyperledger Aries Library
  4. Hyperledger Iroha Project
  5. Hyperledger Sawtooth Project
  6. Hyperledger Besu Project
  7. Hyperledger Quilt Library
  8. Hyperledger Ursa Library
  9. Hyperledger Transact Library
  10. Hyperledger Cactus Project
  11. Hyperledger Caliper Tool
  12. Hyperledger Cello Tool
  13. Hyperledger Explorer Tool
  14. Hyperledger Grid (Domain Specific)
  15. Hyperledger Burrow Project
  16. Hyperledger Avalon Tool

Resources

About Author

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps and High School Technology Services. He is a leading author of the book Hands-on Smart Contract Development with Hyperledger Fabric from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars to many Hyperledger communities across the USA and Europe. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LinkedIn.

The post Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact appeared first on Linux.com.

New open source project helps musicians jam together even when they’re not together

Thursday 11th of March 2021 05:00:00 PM

Today, the Linux Foundation announced that it would be adding Rend-o-matic to the list of Call for Code open source projects that it hosts. The Rend-o-matic technology was originally developed as part of the Choirless project during a Call for Code challenge as a way to enable musicians to jam together regardless of where they are. Initially developed to help musicians socially distance because of COVID-19, the application has many other benefits, including bringing together musicians from different parts of the world and allowing for multiple versions of a piece of music featuring various artist collaborations. The artificial intelligence powering Choirless ensures that the consolidated recording stays accurately synchronized even through long compositions, and this is just one of the pieces of software being released under the new Rend-o-matic project.

Developer Diaries – Uniting musicians with AI and IBM Cloud Functions

Created by a team of musically-inclined IBM developers, the Rend-o-matic project features a web-based interface that allows artists to record their individual segments via a laptop or phone. The individual segments are processed using acoustic analysis and AI to identify common patterns across multiple segments which are then automatically synced and output as a single track. Each musician can record on their own time in their own place with each new version of the song available as a fresh MP3 track. In order to scale the compute needed by the AI, the application uses IBM Cloud Functions in a serverless environment that can effortlessly scale up or down to meet demand without the need for additional infrastructure updates. Rend-o-matic is itself built upon open source technology, using Apache OpenWhisk, Apache CouchDB, Cloud Foundry, Docker, Python, Node.js, and FFmpeg. 
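Choirless’ production pipeline uses acoustic analysis and AI, but the core idea of estimating the offset between two takes of the same passage can be sketched with a plain cross-correlation. The synthetic signals below exist only to show the alignment idea; this is not the Rend-o-matic code.

    # Toy illustration of estimating the offset between two recordings of the same
    # passage using cross-correlation. Rend-o-matic's real pipeline is more involved;
    # the synthetic signals here exist only to show the alignment idea.

    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.standard_normal(4000)          # stand-in for a reference track
    delay = 250                                    # samples of "late start" to recover
    take = np.concatenate([np.zeros(delay), reference]) + 0.05 * rng.standard_normal(4000 + delay)


    def estimate_offset(ref: np.ndarray, other: np.ndarray) -> int:
        """Return how many samples 'other' lags behind 'ref' (negative = it leads)."""
        corr = np.correlate(other, ref, mode="full")
        return int(np.argmax(corr) - (len(ref) - 1))


    offset = estimate_offset(reference, take)
    aligned = take[offset:] if offset > 0 else np.concatenate([np.zeros(-offset), take])
    print(f"estimated offset: {offset} samples")   # ~250

Once each part has been shifted onto a common timeline like this, the individual segments can be mixed into a single track.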

Since its creation, Choirless has been incubated and improved as a Call for Code project, with an enhanced algorithm, increased availability, real-time audio-level visualizations, and more. The solution has been released for testing, and as of January, users of the hosted Choirless service built upon the Rend-o-matic project – including school choirs, professional musicians, and bands – have recorded 2,740 individual parts forming 745 distinct performances.

Call for Code invites developers and problem-solvers around the world to build and contribute to sustainable, open source technology projects that address social and humanitarian issues while ensuring the top solutions are deployed to make a demonstrable difference.  Learn more about Call for Code. You can learn more about Rend-o-matic, sample the technology, and contribute back to the project at https://choirless.github.io/ 

The post New open source project helps musicians jam together even when they’re not together appeared first on Linux.com.

How open source communities are driving 5G’s future, even within a large government like the US

Thursday 11th of March 2021 05:00:00 PM

In mid-February, the Linux Foundation announced it had signed a collaboration agreement with the Defense Advanced Research Projects Agency (DARPA), enabling US Government suppliers to collaborate on a common open source platform that will enable the adoption of 5G wireless and edge technologies by the government. Governments face similar issues to enterprise end-users — if all their suppliers deliver incompatible solutions, the integration burden escalates exponentially.

The first collaboration, Open Programmable Secure 5G (OPS-5G), currently in the formative stages, will be used to create open source software and systems enabling end-to-end 5G and follow-on mobile networks.

The road to open source influencing 5G: The First, Second, and Third Waves of Open Source

If we examine the history of open source, it is informative to observe it from the perspective of evolutionary waves. Many open-source projects began as single technical projects, with specific objectives, such as building an operating system kernel or an application. This isolated, single project approach can be viewed as the first wave of open source.

We can view the second wave of open source as creating platforms seeking to address a broad horizontal solution, such as a cloud or networking stack or a machine learning and data platform.

The third wave of open source collaboration goes beyond isolated projects and integrates them for a common platform for a specific industry vertical. Additionally, the third wave often focuses on reducing fragmentation — you commonly will see a conformance program or a specification or standard that anyone in the industry can cite in procurement contracts.

Industry conformance becomes important as specific solutions are taken to market and cross-industry solutions are built — especially now that we have technologies requiring cross-industry interaction, such as end-to-end 5G, the edge, or even cloud-native applications and environments that span any industry vertical.

The third wave of open source also seeks to provide comprehensive end-to-end solutions for enterprises and verticals, large institutional organizations, and government agencies. In this case, the community of government suppliers will be building an open source 5G stack used in enterprise networking applications. The end-to-end open source integration and collaboration supported by commercial investment with innovative products, services, and solutions accelerate the technology adoption and transformation.

Why DARPA chose to partner with the Linux Foundation

DARPA at the US Department of Defense has tens of thousands of contractors supplying networking solutions for government facilities and remote locations. However, it doesn’t want dozens, hundreds, or thousands of unique and incompatible hardware and software solutions originating from its large contractor and supplier ecosystem. Instead, it wants a portable, open-access standard that provides transparency and enables advanced software tools and systems to be applied to a common code base that various groups in the government could build on. The goal is to have a common framework that decouples hardware and software requirements, enabling adoption by more groups within the government.

Naturally, as a large end-user, the government wants its suppliers to focus on delivering secure solutions. A common framework can ideally decrease the security complexity versus having disparate, fragmented systems.

The Linux Foundation is also the home of nearly all the important open source projects in the 5G and networking space. Of the $54B in Linux Foundation community software projects that have been valued using the COCOMO2 model, the open source projects assisting with building a 5G stack are estimated to be worth about $25B in shared technology investment. The LF Networking projects have been valued at $7.4B just by themselves.

The support programs at Linux Foundation provide the key foundations for a shared community innovations pool. These programs include IP structure and legal frameworks, an open and transparent development process, neutral governance, conformance, and DevOps infrastructure for end-to-end project lifecycle and code management. Therefore, it is uniquely suited to be the home for a community-driven effort to define an open source 5G end-to-end architecture, create and run the open source projects that embody that architecture, and support its integration for scaling-out and accelerating adoption.

The foundations of a complete open source 5G stack

The Linux Foundation worked in the telecommunications industry early on in its existence, starting with the Carrier Grade Linux initiatives to identify requirements and building features to enable the Linux kernel to address telco requirements. In 2013, The Linux Foundation’s open source networking platform started with bespoke projects such as OpenDaylight, the software-defined networking controller. OPNFV (now Anuket), the network function virtualization stack, was introduced in 2014-2015, followed by the first release of Tungsten Fabric, the automated software-defined networking stack. FD.io, the secure networking data plane, was announced in 2016, a sister project of the Data Plane Development Kit (DPDK) released into open source in 2010.

Linux Foundation & Other Open Source Component Projects for 5G

At the time, the telecom/network and wireless carrier industry sought to commoditize and accelerate innovation across a specific piece of the stack as software-defined networking became part of their digital transformation. Since the introduction of these projects at LFN, the industry has seen heavy adoption and significant community contribution by the largest telecom carriers and service providers worldwide. This history is chronicled in detail in our whitepaper, Software-Defined Vertical Industries: Transformation Through Open Source.

The work that the member companies will focus on will require robust frameworks for ensuring changes to these projects are contributed back upstream into the source projects. Upstreaming, which is a key benefit to open source collaboration, allows the contributions specific to this 5G effort to roll back into their originating projects, thus improving the software for every end-user and effort that uses them.

The Linux Foundation networking stack continues to evolve and expand into additional projects due to an increased desire to innovate and commoditize across key technology areas through shared investments among its members. In February of 2021, Facebook contributed the Magma project, which transcends platform infrastructure such as the others listed above. Instead, it is a network function application that is core to 5G network operations.

The E2E 5G Super Blueprint is being developed by the LFN Demo working group. This is an open collaboration, and we encourage you to join us. Learn more here.

Building through organic growth and cross-pollination of the open source networking and cloud community

Tier 2 operators, rural operators, and governments worldwide want to reap the benefits of economic innovation as well as potential cost-savings from 5G. How is this accomplished?

With this joint announcement and its DARPA supplier community collaboration, the Linux Foundation’s existing projects can help serve the requirements of other large end-users. Open source communities are advancing and innovating some of the most important and exciting technologies of our time. It’s always interesting to have an opportunity to apply the results of these communities to new use cases.

The Linux Foundation understands the critical dynamic of cross-pollination between community-driven open source projects needed to help make an ecosystem successful. Its proven governance model has demonstrated the ability to maintain and mature open source projects over time and make them all work together in one single, cohesive ecosystem.

As a broad set of contributors work on components of an open source stack for 5G, there will be cross-community interactions. For example, Project EVE, the cloud-native edge computing platform, may work with Project Zephyr, the scalable real-time operating system (RTOS) kernel, so that EVE can orchestrate Zephyr devices. It is all based on contributors’ self-interest and motivation to contribute functionality that enables these projects to work together. Similarly, ONAP, the network automation/orchestration platform, is tightly integrated with Akraino so that it has architectural deployment templates built around network edge clouds and multi-edge clouds.

An open source platform has implications not just for new business opportunities for government suppliers but also for other institutions. The projects within an open source platform have open interfaces that can be integrated and used with other software, so that other large end-users, like the World Bank, can have validated and tested architectural blueprints with which they can deploy effective 5G solutions in many host countries, providing them a turnkey stack. This will enable them to encourage providers, through competition or challenges native to their in-country commercial ecosystem, to implement those networks.

This is a true solutions-oriented open source 5G stack for enterprises, governments, and the world.

The post How open source communities are driving 5G’s future, even within a large government like the US appeared first on Linux Foundation.

The post How open source communities are driving 5G’s future, even within a large government like the US appeared first on Linux.com.

LF Edge’s State of the Edge 2021 Report Predicts Global Edge Computing Infrastructure Market to be Worth Up to $800 Billion by 2028

Wednesday 10th of March 2021 02:00:00 PM
  • COVID-19 highlighted that expertise in legacy data centers could be obsolete in the next few years as the pandemic forced the development of new tools enabled by edge computing for remote monitoring, provisioning, repair and management.
  • Open source hardware and software projects are driving innovation at the edge by accelerating the adoption and deployment of applications for cloud-native, containerized and distributed applications.
  • The LF Edge taxonomy, which offers terminology standardization with a balanced view of the edge landscape and is based on inherent technical and logistical trade-offs spanning the edge-to-cloud continuum, is gaining widespread industry adoption.
  • Seven out of 10 areas of edge computing experienced growth in 2020 with a number of new use cases that are driven by 5G. 

SAN FRANCISCO – March 10, 2021 – State of the Edge, a project under the LF Edge umbrella organization that established an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the release of the 4th annual State of the Edge 2021 Report. The market and ecosystem report for edge computing shares insight and predictions on how the COVID-19 pandemic disrupted the status quo, how new types of critical infrastructure have emerged to service next-level requirements, and why open source collaboration is the only way to efficiently scale edge infrastructure.

Tolaga Research, which led the market forecasting research for this report, predicts that between 2019 and 2028, cumulative capital expenditures of up to $800 billion USD will be spent on new and replacement IT server equipment and edge computing facilities. These expenditures will be relatively evenly split between equipment for the device and infrastructure edges.

“Our 2021 analysis shows demand for edge infrastructure accelerating in a post COVID-19 world,” said Matt Trifiro, co-chair of State of the Edge and CMO of edge infrastructure company Vapor IO. “We’ve been observing this trend unfold in real-time as companies re-prioritize their digital transformation efforts to account for a more distributed workforce and a heightened need for automation. The new digital norms created in response to the pandemic will be permanent. This will intensify the deployment of new technologies like wireless 5G and autonomous vehicles, but will also impact nearly every sector of the economy, from industrial manufacturing to healthcare.”

The pandemic is accelerating digital transformation and service adoption.

Government lockdowns, social distancing and fragile supply chains pushed both consumers and enterprises to adopt digital solutions last year in ways that will permanently change use cases across the spectrum. Expertise in legacy data centers could be obsolete in the next few years, as the pandemic has forced the development of tools for remote monitoring, provisioning, repair and management, which will reduce the cost of edge computing. Some of the areas experiencing growth in the Global Infrastructure Edge Power are automotive, smart grid and enterprise technology. As businesses began spending more on edge computing, specific use cases increased, including:

  • Manufacturing increased from 3.9 to 6.2 percent, as companies bolster their supply chain and inventory management capabilities and capitalize on automation technologies and autonomous systems.
  • Healthcare, which increased from 6.8 to 8.6 percent, was buoyed by increased expectations for remote healthcare, digital data management and assisted living.
  • Smart cities increased from 5.0 to 6.1 percent in anticipation of increased expenditures in digital infrastructure in the areas such as surveillance, public safety, city services and autonomous systems.

“In our individual lock-down environments, each of us is an edge node of the Internet and all our computing is, mostly, edge computing,” said Wenjing Chu, senior director of Open Source and Research at Futurewei Technologies, Inc. and LF Edge Governing Board member. “The edge is the center of everything.”

Open Source is driving innovation at the edge by accelerating the adoption and deployment of edge applications.

Open source has always been a foundation of innovation, and this became even more evident during the pandemic as individuals continued to turn to these communities for normalcy and collaboration. LF Edge, which hosts nine projects including State of the Edge, is an important driver of standards for the telecommunications, cloud and IoT edge. Each project collaborates individually and together to create an open infrastructure that creates an ecosystem of support. LF Edge’s projects (Akraino Edge Stack, Baetyl, EdgeX Foundry, Fledge, Home Edge, Open Horizon, Project EVE, and Secure Device Onboard) support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing and mobility.

“State of the Edge is shaping the future of all facets of edge computing and the ecosystem that surrounds it,” said Arpit Joshipura, General Manager of Networking, IoT and Edge at the Linux Foundation. “The insights in the report reflect the entire LF Edge community and our mission to unify edge computing and support a more robust solution at the IoT, Enterprise, Cloud and Telco edge. We look forward to sharing the ongoing work of State of the Edge that amplifies innovations across the entire landscape.”

Other report highlights and methodology

For the report, researchers modeled the growth of edge infrastructure from the bottom up, starting with the sector-by-sector use cases likely to drive demand. The forecast considers 43 use cases spanning 11 verticals in calculating the growth, including those represented by smart grids, telecom, manufacturing, retail, healthcare, automotive and mobile consumer services. The vendor-neutral report was edited by Charlie Ashton, Senior Director of Business Development at Napatech, with contributions from Phil Marshall, Chief Research Officer at Tolaga Research; Phil Shih, Founder and Managing Director of Structure Research; technology journalists Mary Branscombe and Simon Bisson; and Fay Arjomandi, Founder and CEO of mimik. Other highlights from the State of the Edge 2021 Report include:

  • Off-the-shelf services and applications are emerging that accelerate and de-risk the rapid deployment of edge in these segments. The variety of emerging use cases is in turn driving a diversity in edge-focused processor platforms, which now include Arm-based solutions, SmartNICs with FPGA-based workload acceleration and GPUs.
  • Edge facilities will also create new types of interconnection. Similar to how data centers became meeting points for networks, the micro data centers at wireless towers and cable headends that will power edge computing often sit at the crossroads of terrestrial connectivity paths. These locations will become centers of gravity for local interconnection and edge exchange, creating new and newly efficient paths for data.
  • 5G, next-generation SD-WAN and SASE have been standardized. They are well suited to address the multitude of edge computing use cases that are being adopted and are contemplated for the future. As digital services proliferate and drive demand for edge computing, the diversity of network performance requirements will continue to increase.

“The State of the Edge report is an important industry and community resource. This year’s report features the analysis of diverse experts, mirroring the collaborative approach that we see thriving in the edge computing ecosystem,” said Jacob Smith, co-chair of State of the Edge and Vice President of Bare Metal at Equinix. “The 2020 findings underscore the tremendous acceleration of digital transformation efforts in response to the pandemic, and the critical interplay of hardware, software and networks for servicing use cases at the edge.”

Download Report

Download the report here.

State of the Edge Co-Chairs Matt Trifiro and Jacob Smith, VP Bare Metal Strategy & Marketing of Equinix, will present highlights from the report in a keynote presentation at Open Networking & Edge Executive Forum, a virtual conference on March 10-12. Register here ($50 US) to watch the live presentation on March 12 at 7 am PT or access the video on-demand.

Trifiro and Smith will also host an LF Edge webinar to showcase the key findings on March 18 at 8 am PT. Register here.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact:

Maemalynn Meanor

maemalynn@linuxfoundation.org

The post LF Edge’s State of the Edge 2021 Report Predicts Global Edge Computing Infrastructure Market to be Worth Up to $800 Billion by 2028 appeared first on Linux Foundation.

The post LF Edge’s State of the Edge 2021 Report Predicts Global Edge Computing Infrastructure Market to be Worth Up to $800 Billion by 2028 appeared first on Linux.com.

Industry-Wide Initiative to Support Open Source Security Gains New Commitments

Wednesday 10th of March 2021 01:00:59 AM

Open Source Security Foundation adds new members, Citi, Comcast, DevSamurai, HPE, Mirantis and Snyk

SAN FRANCISCO, Calif., March 9, 2021 – OpenSSF, a cross-industry collaboration to secure the open source ecosystem, today announced new membership commitments to advance open source security education and best practices. New members include Citi, Comcast, DevSamurai, Hewlett Packard Enterprise (HPE), Mirantis, and Snyk.

Open source software (OSS) has become pervasive in data centers, consumer devices and services, reflecting its value among technologists and businesses alike. Because of its development process, open source passes through a chain of contributors and dependencies before it ultimately reaches its end users. It is important that those responsible for their user’s or organization’s security are able to understand and verify the security of this dependency supply chain.

“Open source software is embedded in the world’s technology infrastructure and warrants our dedication to ensuring its security,” said Kay Williams, Governing Board Chair, OpenSSF, and Supply Chain Security Lead, Azure Office of the CTO, Microsoft. “We welcome the latest OpenSSF new members and applaud their commitment to advancing supply chain security for open source software and its technology and business ecosystem.”

The OpenSSF is a cross-industry collaboration that brings together technology leaders to improve the security of OSS. Its vision is to create a future where participants in the open source ecosystem use and share high quality software, with security handled proactively, by default, and as a matter of course. Its working groups include Securing Critical Projects, Security Tooling, Identifying Security Threats, Vulnerability Disclosures, Digital Identity Attestation, and Best Practices. 

OpenSSF has more than 35 members and associate members contributing to working groups, technical initiatives and governing board and helping to advance open source security best practices. For more information on founding and new members, please visit: https://openssf.org/about/members/

Membership is not required to participate in the OpenSSF. For more information and to learn how to get involved, including information about participating in working groups and advisory forums, please visit https://openssf.org/getinvolved.

New Member Comments

Citi

“Working with the open source community is a key component in our security strategy, and we look forward to supporting the OpenSSF in its commitment to collaboration,” said Jonathan Meadows, Citi’s Managing Director for Cloud Security Engineering.

Comcast

“Open source software is a valuable resource in our ongoing work to create and continuously evolve great products and experiences for our customers, and we know how important it is to build security at every stage of development. We’re honored to be part of this effort and look forward to collaborating,” said Nithya Ruff, head of Comcast Open Source Program Office.

DevSamurai

“We are living in an interesting era, in which new IT technologies are changing all aspects of our lives every day. Benefits come with risks; that can’t be truer with open source software. Being a part of OpenSSF, we expect to learn from and contribute to the community; together we strengthen security and eliminate risks throughout the software supply chain,” said Tam Nguyen, head of DevSecOps at DevSamurai.

Mirantis

“As open source practitioners from our very founding, Mirantis has demonstrated its commitment to the values of transparency and collaboration in the open source community,” said Chase Pettet, lead product security architect, Mirantis. “As members of the OpenSSF, we recognize the need for cross-industry security stakeholders to strengthen each other. Our customers will continue to rely on open source for their safety and assurance, and we will continue to support the development of secure open solutions.”

Snyk

“As the number of digital transformation projects has exploded the world over, the mission of the Open Source Security Foundation has never been more critical than it is today,” said Geva Solomonovich, CTO, Global Alliances, Snyk. “Snyk is thrilled to become an official Foundation member, and we look forward to working with the entire community to together push the industry to make all digital environments safer.”

About the Open Source Security Foundation (OpenSSF)

Hosted by the Linux Foundation, the OpenSSF (launched in August 2020) is a cross-industry organization that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. It combines the Linux Foundation’s Core Infrastructure Initiative (CII), founded in response to the 2014 Heartbleed bug, and the Open Source Security Coalition, founded by the GitHub Security Lab, to build a community to support open source security for decades to come. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer

for the Linux Foundation

503-867-2304

jennifer@storychangesculture.com

The post Industry-Wide Initiative to Support Open Source Security Gains New Commitments appeared first on Linux Foundation.

The post Industry-Wide Initiative to Support Open Source Security Gains New Commitments appeared first on Linux.com.

Linux Foundation Announces Free sigstore Signing Service to Confirm Origin and Authenticity of Software

Wednesday 10th of March 2021 01:00:40 AM

Red Hat, Google and Purdue University lead efforts to ensure software maintainers, distributors and consumers have full confidence in their code, artifacts and tooling

SAN FRANCISCO, Calif., March 9, 2021 –  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the sigstore project. sigstore improves the security of the software supply chain by enabling the easy adoption of cryptographic software signing backed by transparency log technologies.

sigstore will empower software developers to securely sign software artifacts such as release files, container images and binaries. Signing materials are then stored in a tamper-proof public log. The service will be free to use for all developers and software providers, with the sigstore code and operation tooling developed by the sigstore community. Founding members include Red Hat, Google and Purdue University.

“sigstore enables all open source communities to sign their software and combines provenance, integrity and discoverability to create a transparent and auditable software supply chain,” said Luke Hinds, Security Engineering Lead, Red Hat office of the CTO. “By hosting this collaboration at the Linux Foundation, we can accelerate our work in sigstore and support the ongoing adoption and impact of open source software and development.”

Understanding and confirming the origin and authenticity of software relies on an often disparate set of approaches and data formats. The solutions that do exist often rely on digests stored on insecure systems that are susceptible to tampering, which can lead to attacks such as digests being swapped out or users falling prey to targeted attacks.

“Securing a software deployment ought to start with making sure we’re running the software we think we are. Sigstore represents a great opportunity to bring more confidence and transparency to the open source software supply chain,” said Josh Aas, executive director, ISRG | Let’s Encrypt.

Very few open source projects cryptographically sign software release artifacts. This is largely due to the challenges software maintainers face with key management, key compromise and revocation, and the distribution of public keys and artifact digests. In turn, users are left to work out which keys to trust and to learn the steps needed to validate signing. Further problems exist in how digests and public keys are distributed: they are often stored on websites susceptible to hacks or in a README file sitting in a public git repository. sigstore seeks to solve these issues through the use of short-lived ephemeral keys with a trust root leveraged from open and auditable public transparency logs.
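
To make the ephemeral-key pattern concrete, here is a minimal, purely illustrative TypeScript (Node.js) sketch of the flow described above. It is not sigstore's actual tooling or API; the artifact path, curve choice and log-entry shape are assumptions made for the example. An artifact digest is signed with a short-lived key that is then thrown away, and the material a transparency log would record is assembled.

    import { createHash, generateKeyPairSync, sign } from "crypto";
    import { readFileSync } from "fs";

    // Hash the release artifact (the file name is a placeholder).
    const artifact = readFileSync("release.tar.gz");
    const digest = createHash("sha256").update(artifact).digest();

    // Generate an ephemeral P-256 key pair; in the pattern sigstore describes,
    // the private key is discarded after signing and trust is anchored in the
    // public, auditable transparency log rather than in long-lived keys that
    // maintainers must protect.
    const { publicKey, privateKey } = generateKeyPairSync("ec", {
      namedCurve: "prime256v1",
    });

    // Sign the digest with the short-lived key.
    const signature = sign("sha256", digest, privateKey);

    // The kind of record a public transparency log would store and make auditable.
    const logEntry = {
      digestSha256: digest.toString("hex"),
      signature: signature.toString("base64"),
      publicKeyPem: publicKey.export({ type: "spki", format: "pem" }).toString(),
    };
    console.log(JSON.stringify(logEntry, null, 2));

In the real service, certificate issuance and log inclusion proofs replace the hand-rolled record above; that plumbing is exactly what maintainers no longer have to build and operate themselves.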

“I am very excited about the prospects of a system like sigstore. The software ecosystem is in dire need of something like it to report the state of the supply chain. I envision that, with sigstore answering all the questions about software sources and ownership, we can start asking the questions regarding software destinations, consumers, compliance (legal and otherwise), to identify criminal networks and secure critical software infrastructure. This will set a new tone in the software supply chain security conversation,” said Santiago Torres-Arias, Assistant Professor of Electrical and Computer Engineering, Purdue University / in-toto project founder.

“sigstore is poised to advance the state of the art in open source development,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “We are happy to host and contribute to work that enables software maintainers and consumers alike to more easily manage their open source software and security.”

“sigstore aims to make all releases of open source software verifiable, and easy for users to actually verify them. I’m hoping we can make this as easy as exiting vim,” said Dan Lorenc, Google Open Source Security Team. “Watching this take shape in the open has been fun. It’s great to see sigstore in a stable home.”

For more information and to contribute, please visit: https://sigstore.dev

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer

for Linux Foundation

503-867-2304

jennifer@storychangesculture.com

The post Linux Foundation Announces Free sigstore Signing Service to Confirm Origin and Authenticity of Software appeared first on Linux Foundation.

The post Linux Foundation Announces Free sigstore Signing Service to Confirm Origin and Authenticity of Software appeared first on Linux.com.

Overview of the Kubernetes Security Essentials Training Course

Monday 8th of March 2021 11:00:11 PM

We recently launched the LFS260 – Kubernetes Security Essentials eLearning course in partnership with the Cloud Native Computing Foundation (CNCF), the home of Kubernetes. This course provides the skills and knowledge on a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment and runtime. It also gets you ready to sit for the Certified Kubernetes Security Specialist (CKS) exam.

In this new video, Linux Foundation Training & Certification instructor Tim Serewicz, who created the eLearning course and was instrumental in creating the CKS exam, provides an overview of what you can expect during this training, with topics including:

  • Cloud security overview
  • Preparing to install
  • Installing the cluster
  • Securing the kube-apiserver
  • Networking
  • Workload considerations
  • Issue detection
  • And more…

Watch Tim’s video to learn more about this exciting course and how it can help you improve the security of your cloud native applications!

The post Overview of the Kubernetes Security Essentials Training Course appeared first on Linux Foundation – Training.

The post Overview of the Kubernetes Security Essentials Training Course appeared first on Linux.com.

An Introduction to WebAssembly

Thursday 4th of March 2021 11:00:14 PM

By Marco Fioretti

What on Earth is WebAssembly?

WebAssembly, also called Wasm, is a Web-optimized code format and API (Application Programming Interface) that can greatly improve the performance and capabilities of websites. Version 1.0 of WebAssembly was released in 2017 and became an official W3C standard in 2019.

The standard is actively supported by all major browser suppliers, for obvious reasons: the official list of “inside the browser” use cases mentions, among other things, video editing, 3D games, virtual and augmented reality, p2p services, and scientific simulations. Besides making browsers much more powerful than JavaScript alone could, this standard may even extend the lifespan of websites: for example, it is WebAssembly that powers the continued support of Flash animations and games at the Internet Archive.

WebAssembly isn’t just for browsers, though; it is currently being used in mobile and edge-based environments with products such as Cloudflare Workers.

How WebAssembly works

Files in .wasm format contain low-level binary instructions (bytecode), executable at “near CPU-native speed” by a virtual machine that uses a common stack. The code is packaged in modules – that is, objects that are directly executable by a browser – and each module can be instantiated multiple times by a web page. The functions defined inside modules are listed in a dedicated array, or Table, and the corresponding data are contained in another structure, called an ArrayBuffer. Developers can explicitly allocate memory for .wasm code by creating a JavaScript WebAssembly.Memory object.
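
As a rough illustration of those pieces, the following TypeScript sketch allocates a WebAssembly.Memory object, passes it to a module as an import, and calls an exported function. The module name "example.wasm", the "env.memory" import name and the exported "add" function are assumptions for the example; the real names depend entirely on how the module was compiled.

    async function run(): Promise<void> {
      // One 64 KiB page of linear memory, allocated explicitly from JavaScript.
      const memory = new WebAssembly.Memory({ initial: 1 });

      // Fetch the module bytes and instantiate them, supplying the memory as an import.
      const response = await fetch("example.wasm");
      const bytes = await response.arrayBuffer();
      const { instance } = await WebAssembly.instantiate(bytes, {
        env: { memory },
      });

      // Call a function the module exports; its name and signature are assumed here.
      const add = instance.exports.add as (a: number, b: number) => number;
      console.log(add(2, 3));
    }

    run();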

A plain-text version of the .wasm format (the WebAssembly text format, .wat) – which can greatly simplify learning and debugging – is also available. WebAssembly, however, is not really intended for direct human use. Technically speaking, .wasm is just a browser-compatible compilation target: a format into which software compilers can automatically translate code written in high-level programming languages.

This choice is exactly what allows developers to program directly for the preferred user interface of billions of people, in languages they already know (C/C++, Python, Go, Rust and others) that could not be used efficiently by browsers before. Even better, programmers get this – at least in theory – without ever looking directly at WebAssembly code or worrying (since the target is a virtual machine) about which physical CPUs will actually run their code.

But we already have JavaScript. Do we really need WebAssembly?

Yes, for several reasons. To begin with, being binary instructions, .wasm files can be much smaller – and therefore much faster to download – than JavaScript files of equivalent functionality. Above all, JavaScript files must be fully parsed and verified before a browser can convert them to bytecode usable by its internal virtual machine.

.wasm files, instead, can be verified and compiled in a single pass, making “streaming compilation” possible: a browser can start to compile and execute them the moment it starts downloading them, much as happens with streaming movies.
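
A hypothetical loader built on the standard WebAssembly.instantiateStreaming() call shows what this looks like in practice: compilation begins while the response body is still arriving. The file name and the empty import object are assumptions for the sketch.

    async function loadStreaming(): Promise<WebAssembly.Instance> {
      // The browser starts compiling as soon as bytes arrive, instead of
      // waiting for the full download to finish.
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("example.wasm"),
        {} // import object; empty because this assumed module imports nothing
      );
      return instance;
    }

Note that instantiateStreaming() requires the server to deliver the file with the application/wasm MIME type; otherwise the non-streaming WebAssembly.instantiate() path shown earlier still works.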

That said, not every conceivable WebAssembly application will necessarily be faster – or smaller – than an equivalent JavaScript one that has been hand-optimized by expert programmers. This may happen, for example, if a .wasm module needs to include libraries that are not needed with JavaScript.

Does WebAssembly make JavaScript obsolete?

In a word: no. Certainly not for a while, at least inside browsers. WebAssembly modules still need JavaScript because, by design, they cannot access the Document Object Model (DOM), the main API used to modify web pages. Besides, .wasm code cannot make system calls or read the browser’s memory. WebAssembly only runs in a sandbox and, in general, can interact with the outside world even less than JavaScript can, and only through JavaScript interfaces.

Therefore – at least in the near future – .wasm modules will just provide, through JavaScript, the parts that would consume much more bandwidth, memory or CPU time if they were written in JavaScript itself.

How web browsers run WebAssembly

In general, a browser needs at least two pieces to handle dynamic applications: a virtual machine (VM) that runs the app code, and standard APIs that such code can use to modify both the behaviour of the browser and the content of the web page it displays.

The VMs inside modern browsers support both JavaScript and WebAssembly in the following way:

  1. The browser downloads a web page written in the HTML markup language, and renders it
  2. if that HTML calls JavaScript code, the browser’s VM executes it. But…
  3. if that JavaScript code contains an instance of a WebAssembly module, that module is fetched as explained above, and then used as needed by JavaScript, via the WebAssembly APIs
  4. and when the WebAssembly code produces something that would alter the DOM – that is, the structure of the “host” web page – the JavaScript code receives it and performs the actual alteration (a minimal sketch of this glue pattern follows).
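
The sketch below, again in TypeScript, follows that flow under stated assumptions (a module named "stats.wasm" that exports a computeTotal function, and a page element with id "total"): the Wasm code only computes a value, while the DOM update stays entirely on the JavaScript side.

    async function render(): Promise<void> {
      // Steps 2-3: JavaScript fetches and instantiates the WebAssembly module.
      const { instance } = await WebAssembly.instantiateStreaming(fetch("stats.wasm"));
      const computeTotal = instance.exports.computeTotal as () => number;

      // The computation itself runs inside the Wasm sandbox...
      const total = computeTotal();

      // ...and, as in step 4, JavaScript receives the result and performs the DOM alteration.
      const el = document.getElementById("total");
      if (el) {
        el.textContent = String(total);
      }
    }

    render();
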
How can I create usable WebAssembly code?

More and more programming language communities support compiling to Wasm directly; we recommend the introductory guides from webassembly.org as a starting point, depending on which language you work with. Note that not all programming languages have the same level of Wasm support, so your mileage may vary.

We plan to release a series of articles in the coming months providing more information about WebAssembly. To get started using it yourself, you can enroll in The Linux Foundation’s free Introduction to WebAssembly online training course.

The post An Introduction to WebAssembly appeared first on Linux Foundation – Training.

The post An Introduction to WebAssembly appeared first on Linux.com.

The Linux Foundation Continues to Expand Japanese Language Training & Certification

Thursday 4th of March 2021 09:00:00 AM

Japan is one of the world’s biggest markets for open source software, which means there is a constant need for upskilling of existing talent and to bring new individuals into the community to meet hiring demand. The Linux Foundation is committed to expanding access to quality open source training and certification opportunities, which is why we have developed a number of Japanese language offerings. 

The newest is LFS272-JP Hyperledger Fabric Administration, which became available this week. Hyperledger Fabric – a distributed ledger (blockchain) technology – is intended as a foundation for developing applications or solutions with a modular architecture. Hyperledger Fabric allows components, such as consensus and membership services, to be plug-and-play. Its modular and versatile design satisfies a broad range of industry use cases, and it offers a unique approach to consensus that enables performance at scale while preserving privacy. 

LFS272-JP provides a deep understanding of the Hyperledger Fabric network and how to administer and interact with chaincode, manage peers, and operate basic CA-level functions. Upon completion, participants will have a good understanding of the Hyperledger Fabric network topology, chaincode operations, administration of identities, permissions, how and where to configure component logging, and much more. The course also serves as preparation for the Certified Hyperledger Fabric Administrator (CHFA-JP) exam, which can be taken with a Japanese proctor (the exam itself is conducted in English).

While Hyperledger Fabric Administration is the newest Japanese course offered by Linux Foundation Training & Certification, it is far from alone. Our catalog of Japanese-language offerings includes:

System Administration/Engineering

Cloud & Containers

Blockchain

We also partnered with LPI-Japan recently to make certifications even more accessible in Japan, creating new stacked certifications leveraging LPI-Japan’s LinuC 1 and LinuC 2 with The Linux Foundation’s CKA and CKAD.

Linux Foundation Executive Director Jim Zemlin commented, “Japan is one of the top contributors to the open source community globally, in terms of code as well as financial support and end user adoption. We know how important it is to support the open source community in Japan, which is why The Linux Foundation is proud to offer Japanese language training and certification options for that community. Our team looks forward to continuing to expand these learning opportunities in the future.”

The post The Linux Foundation Continues to Expand Japanese Language Training & Certification appeared first on Linux Foundation – Training.

The post The Linux Foundation Continues to Expand Japanese Language Training & Certification appeared first on Linux.com.

New Mobile Native Foundation to Foster Development Collaboration

Wednesday 3rd of March 2021 12:00:00 AM

Linux Foundation hosts effort to improve processes and technologies for large-scale mobile Android and iOS applications; Lyft makes initial contributions

SAN FRANCISCO, Calif., March 2, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the Mobile Native Foundation (MNF). The MNF will bring developers together to improve processes and technologies that support large-scale Android and iOS applications. Organizations contributing to this effort include Airbnb, Capital One, Corellium, Elotl, Flare.build, GitHub, GogoApps, Haystack, Line, LinkedIn, Lyft, Microsoft, Peloton, Robinhood, Sauce Labs, Screenplay.dev, Slack, Solid Software, Spotify, Square and Uber.

“Like many of our industry peers, Lyft discovered that platform vendors did not solve all of the problems we faced as our mobile team grew from a dozen engineers to hundreds of active contributors,” said Keith Smiley, Staff Engineer, Lyft. “The Mobile Native Foundation will foster a diverse community that encourages collaboration and builds libraries and tools to move the industry forward.”

The MNF is a forum for collaboration on open source software, standards and best practices that can result in common UI frameworks, architectural patterns, build systems and networking stacks that can accelerate time to market and reduce duplicative work across companies.

“The mobile developer community is innovating and we know that open source and collaboration can ensure that continues,” said Mike Dolan, executive vice president and GM of Projects at the Linux Foundation. “The MNF will accelerate and smooth mobile app development and brings new contributions to the Linux Foundation ecosystem.”

Lyft is making early project contributions to the MNF that include Kronos, index-import and set-simulator-location. Matthew Edwards is also contributing Flank.

For more information and to begin contributing, please visit: https://mobilenativefoundation.org

Partner Statements

Elotl

“We are excited to pioneer the state-of-the-art Kubernetes stack to build, test, and run modern mobile applications at cloud scale. We appreciate the opportunity to collaborate with industry leaders on this vision!” said Madhuri Yechuri, Founder & CEO, Elotl.

Flare.build

“We look forward to collaborating with the community on many projects related to our core vision of decreasing friction and boosting productivity for teams creating applications at scale,” said Zach Gray, co-founder and CEO, Flare.build.

LinkedIn

“The Mobile Native Foundation will advance the state-of-the-art in mobile development by bringing together open source developers and leading tech companies in a place where we can collaborate and enable anyone to build and operate large scale mobile applications. We are excited to be part of the launch and look forward to what we can accomplish together,” said Oscar Bonilla, Engineer, LinkedIn.

Microsoft

“We see this as a great opportunity to more inclusively collaborate on challenges we face across the industry and we can’t wait to see the improvements to mobile development we can make when we all work together,” said Mike Borysenko, distinguished engineer, Microsoft.

Robinhood

“Robinhood’s award-winning mobile apps wouldn’t be possible without the open source tools we rely on and contribute back to. We look forward to working together with the open source community as we continue to scale and address shared technical challenges,” said Lee Byron, Engineering Manager, Robinhood.

Screenplay.dev

“We could not be more humbled or more excited to have the opportunity to work with industry leaders to push the state of mobile development forward,” said Tomas Reimers, Co-founder, Screenplay.

Slack

“Slack’s mobile engineering has benefited tremendously from the open source community. We’re excited to see the energy and experience behind MNF and look forward to participating in shaping the future of mobile development at scale,” said Valera Zakharov, Tech Lead of the Mobile Developer Experience Team.

Spotify

“We are excited to join forces with the community in the mission of solving issues and providing better technologies to ship mobile apps at scale,” said Patrick Balestra, iOS Infrastructure Engineer, Spotify.

Uber

“Uber mobile apps have scaled with the help of a thriving open source community and we are now proud to collaborate with other organizations on the Mobile Native Foundation to further give back,” said Ty Smith, Android Tech Lead, Uber.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the Linux Foundation
503-867-2304
jennifer@storychangesculture.com

The post New Mobile Native Foundation to Foster Development Collaboration appeared first on Linux Foundation.

The post New Mobile Native Foundation to Foster Development Collaboration appeared first on Linux.com.

Learn About the RISC-V ISA with Two Free Training Courses from The Linux Foundation and RISC-V International

Tuesday 2nd of March 2021 10:00:04 PM

The online courses are offered on edX.org and will make RISC-V training more accessible

SAN FRANCISCO – EMBEDDED WORLD – March 2, 2021 – The Linux Foundation, the non-profit organization enabling mass innovation through open source, and RISC-V International, a non-profit corporation controlled by its members to drive the adoption and implementation of the free and open RISC-V instruction set architecture (ISA), have announced the release of two new free online training courses to help individuals get started with the RISC-V ISA. The courses are available on edX.org, the online learning platform founded by Harvard and MIT.

“RISC-V International is committed to providing opportunities for people to gain a deeper understanding of the RISC-V ISA and expand their skills,” shared Calista Redmond, CEO, RISC-V International. “These courses will allow everyone to build deeper technical insight, learn more about the benefits of open collaboration, and engage with RISC-V for design freedom.”

With the recent market momentum of RISC-V cores, systems-on-chips (SoCs), developer boards, and software and tools across computing from embedded to enterprise, there is a strong community need for individuals who understand how to implement and utilize RISC-V. To help meet that demand, The Linux Foundation and RISC-V International designed these free online courses to significantly reduce the barrier to entry for those interested in gaining RISC-V skills.

The first course, Introduction to RISC-V (LFD110x), guides participants through the various aspects of understanding the RISC-V ecosystem, RISC-V International, the RISC-V specifications, how to curate and develop RISC-V specifications, and the technical aspects of working with RISC-V both as a developer and end-user. The course provides the foundational knowledge needed to effectively engage in the RISC-V community, contribute to the ISA specifications, and develop a wide range of RISC-V software and hardware projects. Introduction to RISC-V was developed by Jeffrey “Jefro” Osier-Mixon, program manager for RISC-V International, and Stephano Cetola, technical program manager for RISC-V International. 

The second course, Building a RISC-V CPU Core (LFD111x), focuses on digital logic design and basic central processing unit (CPU) microarchitecture. Using the Makerchip online integrated development environment (IDE), participants will implement technologies ranging from logic gates to a simple and complete RISC-V CPU core. The class will allow participants to familiarize themselves with a variety of emerging technologies supporting an open source hardware ecosystem, including RISC-V, Transaction-Level Verilog, and the online Makerchip IDE. Building a RISC-V CPU Core was developed by Steve Hoover, founder of Redwood EDA.

Enrollment is now open for Introduction to RISC-V and Building a RISC-V CPU Core. Auditing each course through edX is free for seven weeks, or you can opt for a paid verified certificate of completion, which provides access to the course for a full year as well as additional assessments and content to deepen your learning experience.

About  RISC-V International

RISC-V is a free and open ISA enabling a new era of processor innovation through open collaboration. Founded in 2015, RISC-V International is composed of more than 1,200 members building the first open, collaborative community of software and hardware innovators powering a new era of processor innovation. The RISC-V ISA delivers a new level of free, extensible software and hardware freedom on architecture, paving the way for the next 50 years of computing design and innovation.

RISC-V International, a non-profit organization controlled by its members, directs the future development and drives the adoption of the RISC-V ISA. Members of RISC-V International have access to and participate in the development of the RISC-V ISA specifications and related HW / SW ecosystem. 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #

The post Learn About the RISC-V ISA with Two Free Training Courses from The Linux Foundation and RISC-V International appeared first on Linux Foundation – Training.

The post Learn About the RISC-V ISA with Two Free Training Courses from The Linux Foundation and RISC-V International appeared first on Linux.com.

More in Tux Machines

Today in Techrights

today's howtos

  • Hans de Goede: Changing hidden/locked BIOS settings under Linux

    This all started with a Mele PCG09. Before testing Linux on this I took a quick look under Windows, and the device manager there showed an exclamation mark next to a Realtek 8723BS bluetooth device, so BT did not work. Under Linux I quickly found out why: the device actually uses a Broadcom Wifi/BT chipset attached over SDIO (for Wifi) and a UART (for BT). The UART-connected BT part was described in the ACPI tables with a HID (Hardware-ID) of "OBDA8723", not good. Now I could have easily fixed this with an extra initrd with a DSDT-override, but that did not feel right. There was an option in the BIOS which actually controls what HID gets advertised for the Wifi/BT, named "WIFI", which was set to "RTL8723" – obviously wrong – but that option was grayed out. So instead of going for the DSDT-override I really wanted to be able to change that BIOS option and set it to the right value. Some duckduckgo-ing found this blogpost on changing locked BIOS settings.

  • Test Day: 2021-05-09 Kernel 5.12.2 on Fedora 34

    All logs report PASSED for each test done and uploaded as prompted at instruction page.

  • James Hunt: Can you handle an argument?

    This post explores some of the darker corners of command-line parsing that some may be unaware of. [...] No, I’m not questioning your debating skills, I’m referring to parsing command-lines! Parsing command-line options is something most programmers need to deal with at some point. Every language of note provides some sort of facility for handling command-line options. All a programmer needs to do is skim-read the docs or grab the sample code, tweak to taste, et voilà! But is it that simple? Do you really understand what is going on? I would suggest that most programmers really don’t think that much about it. Handling the parsing of command-line options is just something you bolt on to your codebase. And then you move on to the more interesting stuff. Yes, it really does tend to be that easy and everything just works… most of the time. Most? I hit an interesting issue recently which expanded in scope somewhat. It might raise an eyebrow for some or be a minor bombshell for others.

  • 10 Very Stupid Linux Commands [ Some Of Them Deadly ]

    If you are reading this page then you are, like all of us, a Linux fan: you use the command line every day and absolutely love Linux. But even in love and marriage there are things that can be just a little bit annoying. Here in this article we are going to show you some of the most stupid Linux commands that a person can find.

China Is Launching A New Alternative To Google Summer of Code, Outreachy

The Institute of Software Chinese Academy of Sciences (ISCAS), in cooperation with the Chinese openEuler Linux distribution, has been working on its own project akin to Google Summer of Code and Outreachy, paying university-aged students to become involved in open-source software development. "Summer 2021", as the initiative is simply called, or "Summer 2021 of Open Source Promotion Plan", provides university-aged students around the world funding from the Institute of Software Chinese Academy of Sciences to work on community open-source projects. It's just like Google Summer of Code but offers different funding levels based upon the complexity of the project – funding options are 12000 RMB, 9000 RMB, or 6000 RMB. That's roughly $932 to $1,865 USD for students to devote their summer to working on open source. There are no gender or nationality restrictions with this initiative, but students must be at least eighteen years old. Read more

Kernel: Linux 5.10 and Linux 5.13

  • Linux 5.10 LTS Will Be Maintained Through End Of Year 2026 - Phoronix

    Linux 5.10, the latest Long Term Support release, was only going to be maintained until the end of 2022 when it was announced, but following enough companies stepping up to help with testing, Linux 5.10 LTS will now be maintained until the end of 2026. Linux 5.10 LTS was originally just going to be maintained until the end of next year, while prior kernels like Linux 5.4 LTS are being maintained until 2024, and even Linux 4.19 LTS and 4.14 LTS run into 2024. Linux 5.10 LTS was short to begin with due to the limited number of developers/organizations helping to test new point release candidates and/or committing resources to using this kernel LTS series. But now there are enough participants committing to it that Greg Kroah-Hartman confirmed he, along with Sasha Levin, will maintain the kernel through December 2026.

  • Oracle Continues Working On The Maple Tree For The Linux Kernel

    Oracle engineers have continued working on the "Maple Tree" data structure for the Linux kernel, an RCU-safe, range-based B-tree designed to make efficient use of modern processor caches. Sent out last year was the RFC patch series of Maple Tree for the Linux kernel, introducing this new data structure and making initial use of it. Sent out last week were the latest 94 patches, in a post-RFC state, for introducing this data structure.

  • Linux 5.13 Brings Simplified Retpolines Handling - Phoronix

    In addition to work like Linux 5.13 addressing some network overhead caused by Retpolines, this next kernel's return trampoline implementation itself is seeing a simplification. Merged as part of x86/core last week for the Linux 5.13 kernel were PPIN support for Xeon Sapphire Rapids, KProbes improvements, other minor changes, and a simplification of the Retpolines implementation used by some CPUs as part of the Spectre V2 mitigations. The x86/core pull request for Linux 5.13 also re-sorts and better documents Intel's increasingly long list of different CPU cores/models.

  • Linux 5.13 Adds Support For SPI NOR One-Time Programmable Memory Regions - Phoronix

    The Linux 5.13 kernel has initial support for dealing with SPI one-time programmable (OTP) flash memory regions. Linux 5.13 adds new MTD OTP functions for accessing SPI one-time programmable data. OTP areas are memory regions intended to be programmed once and can be used for permanent secure identification, immutable properties, and similar purposes. In addition to adding the core infrastructure support for OTP to the MTD SPI-NOR code in Linux 5.13, the functionality is wired up for Winbond and similar flash memory chips. The MTD subsystem has already supported OTP areas, but not for SPI-NOR flash memory.