Linux.com
News For Open Source Professionals

How WebAssembly Modules Safely Exchange Data

Thursday 24th of June 2021 09:00:40 PM

By Marco Fioretti

The WebAssembly binary format (Wasm) has been developed to allow software written in any language to “compile once, run everywhere”, inside web browsers or stand-alone virtual machines (runtimes) available for any platform, almost as fast as code directly compiled for those platforms. Wasm modules can interact with any host environment in which they run in a really portable way, thanks to the WebAssembly System Interface (WASI).

That is not enough, though. To be actually usable without surprises in as many scenarios as possible, Wasm executable files need at least two more things. One is the capability to interact directly not just with the operating system, but with any other program of the same kind. The way to do this with Wasm is called “module linking”, and it will be the topic of the next article in this series. The other feature, which is a prerequisite for module linking to be useful, is the capability to exchange data structures of any kind, without misunderstandings or data loss.

What happens when Wasm modules exchange data?

Since it is only a compilation target, the WebAssembly format provides only low-level data types that aim to be as close to the underlying machine as possible. It is this choice that provides highly portable, high performing modules, while leaving programmers to write software in whatever language they want. The burden of mapping complex data structures in that language to native Wasm data types is left to software libraries, and to the compilers that use them.

The problem here is that, for the sake of efficiency, the first generation of the Wasm syntax and of WASI does not natively support strings and other equally basic data types. Therefore, there is no intrinsic guarantee that, for example, a Wasm module compiled from Python sources and another compiled from Rust will have exactly the same concept of “string” in every circumstance where strings may be used.

The consequence is that, if Wasm modules compiled from different languages want to exchange more complex data structures, something important may be, so to speak, “lost in translation” every time some data goes from one module to another. Concretely, this prevents both direct embedding of Wasm modules into generic applications and direct calls from Wasm modules to external software.

In order to understand the nature of the problem, it is useful to look at how such data are passed around in first-generation Wasm and WASI modules.

The original way for WebAssembly to communicate with JavaScript and C programs is to simulate things like strings by manually managing chunks of memory.

For example, in the function path_open, a string is passed as a pair of integer numbers (i32) that represent the offset and, respectively, the length of that string in the linear memory reserved to a Wasm module. This would already be bad enough when, to mention just the simplest and most frequent cases, different character encodings or Garbage Collection (GC) are used. To make things worse, WASI modules that exchange strings would be forced to access each other’s memory, making this way of working far from optimal for both performance and security reasons.
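To make that convention concrete, here is a minimal Python sketch of what effectively happens when a string crosses such a boundary. The bytearray standing in for a module’s linear memory, the offset of 1024, and the helper names are illustrative assumptions, not actual WASI or runtime APIs; the point is simply that the “string” travels as two integers plus some shared bytes, together with an unstated agreement about how those bytes are encoded.

    # A toy stand-in for a Wasm module's linear memory: just a flat byte array.
    linear_memory = bytearray(64 * 1024)

    def write_string(memory, offset, text):
        # The caller encodes the string, copies the bytes into linear memory,
        # and keeps only the (offset, length) pair of i32-sized values.
        data = text.encode("utf-8")
        memory[offset:offset + len(data)] = data
        return offset, len(data)

    def read_string(memory, offset, length):
        # The callee rebuilds the string from those same two integers; it has to
        # assume the bytes are valid UTF-8, which is exactly where mismatches
        # between languages with different string models can creep in.
        return bytes(memory[offset:offset + length]).decode("utf-8")

    ptr, size = write_string(linear_memory, 1024, "/etc/hostname")
    print(read_string(linear_memory, ptr, size))   # prints "/etc/hostname"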

Theoretically, Wasm modules that want to exchange data may also use traditional, JavaScript-compatible data-passing mechanisms like Web IDL. This is the Interface Definition Language used to describe all the components, including of course the data types, of any Web application programming interface (API).

In practice, however, this would not solve anything. First, because Web IDL functions can accept, and pass back to the Wasm module that called them, higher-level constructs than WebAssembly would understand. Second, because using Web IDL from WebAssembly means exchanging data not directly, but through ECMAScript bindings, which have their own complexities and performance penalties. In short, certain tricks work today, but not in all cases, and they are by no means future-proof.

The solution: Interface and Reference Types

The real solution to all the problems mentioned above is to extend both the Wasm binary format and WASI in ways that:

directly support more complex data structures like strings or lists
allow Wasm modules to statically type-check the corresponding variables, and exchange them directly, but without having to share their internal linear memory.

There are two specifications being deployed just for this purpose. The main one is simply called Interface Types; its companion is Reference Types.

Both proposals rely on lower-level features already added to the original Wasm core, namely “multi-value” and multi-memory support. The first extension allows Wasm functions to return an arbitrary number of values, instead of just one as before, and Wasm instruction sequences to handle an arbitrary number of stack values. The other lets a whole Wasm module, or single functions, use multiple memories at the same time, which is good for a whole lot of reasons besides exchanging variables.

Building on these features, Interface Types define strings and other “high-level” data structures, in ways that any unmodified Wasm runtime can use. Reference Types complete the picture, specifying how Wasm applications must actually exchange those data structures with external applications.

The specifications are not fully completed yet. Interface Types can exchange values, but not handles to resources and buffers, which would be required, for example, to “read a file and write directly into a buffer”.

Working together, however, all the features described here already enable Wasm modules and WASI interfaces to handle and exchange even the most complex data structures efficiently, without corrupting them, and regardless of what language they were written in before being compiled to Wasm.


Sigstore: A New Tool Wants to Save Open Source From Supply Chain Attacks (WIRED)

Wednesday 23rd of June 2021 05:16:18 PM

“The founders of Sigstore hope that their platform will spur adoption of code signing, an important protection for software supply chains but one that popular and widely used open source software often overlooks. Open source developers don’t always have the resources, time, expertise, or wherewithal to fully implement code signing on top of all the other nonnegotiable components they need to build for their code to function.”

Read more at WIRED


Linux Foundation Launches GitOps Training

Tuesday 22nd of June 2021 10:45:10 PM

The two new courses were created in partnership with the Cloud Native Computing Foundation and Continuous Delivery Foundation

SAN FRANCISCO – GITOPS SUMMIT – June 22, 2021 – Today at GitOps Summit, The Linux Foundation, the nonprofit organization enabling mass innovation through open source, the Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, and Continuous Delivery Foundation (CDF), the open-source software foundation that seeks to improve the world’s capacity to deliver software with security and speed, have announced the immediate availability of two new, online training courses focused on GitOps, or operation by pull request, a powerful developer workflow that enables organizations to unlock the promise of cloud native continuous delivery. 

Cloud native technologies enable organizations to scale rapidly and deliver software faster than ever before. GitOps is the set of practices that enable developers to carry out tasks that traditionally fell to operations personnel. As development practices evolve, GitOps is becoming an essential skill for many job roles. These two new online, self-paced training courses are designed to teach the skills necessary to begin implementing GitOps practices:

Introduction to GitOps (LFS169)

LFS169 is a free introductory course providing foundational knowledge about key GitOps principles, tools and practices, to help build an operational framework for cloud native applications primarily running on Kubernetes. The course explains how to set up and automate a continuous delivery pipeline to Kubernetes, leading to increased productivity and efficiency for tech roles.

This course walks through a series of demonstrations with a fully functional GitOps environment, which explains the true power of GitOps and how it can help build infrastructures, deploy applications, and even do progressive releases, all via pull requests and git-based workflows. By the end of this course, participants will be familiar with the need for GitOps, and understand the different reconciliation patterns and implementation options available, helping them make the right technological choices for their particular needs.

GitOps: Continuous Delivery on Kubernetes with Flux (LFS269)

LFS269 will benefit software developers interested in learning how to deploy their cloud native applications using familiar GitHub-based workflows and GitOps practices; quality assurance engineers interested in setting up continuous delivery pipelines, and implementing canary analysis, A/B testing, etc. on Kubernetes; site reliability engineers interested in automating deployment workflows and setting up multi-tenant, multi-cluster GitOps-based Continuous Delivery workflows and incorporating them with existing Continuous Integration and monitoring setups; and anyone looking to understand the landscape of GitOps and learn how to choose and implement the right tools.

This course provides a deep dive into GitOps principles and practices, and how to implement them using Flux CD, a CNCF project. Flux CD uses a reconciliation approach to keep Kubernetes clusters in sync using Git repositories as the source of truth. This course helps build essential Git and Kubernetes knowledge for a GitOps practitioner by setting up Flux v2 on an existing Kubernetes cluster, automating the deployment of Kubernetes manifests with Flux, and incorporating Kustomize and Helm to create customizable deployments. It explains how to set up notifications and monitoring with Prometheus, Grafana and Slack; integrate Flux with Tekton-based workflows to set up CI/CD pipelines; build release strategies, including canary, A/B testing, and blue/green; deploy to multi-cluster and multi-tenant environments; integrate GitOps with service meshes such as Linkerd and Istio; secure GitOps workflows with Flux; and much more.

“GitOps is an essential methodology for shifting left and using cloud native effectively. We are already seeing the demand for it with the adoption of CNCF projects like Argo and Flux,” said Priyanka Sharma, General Manager of the Cloud Native Computing Foundation. “I am thrilled that we now offer two GitOps courses so developers of all levels can build a foundation and learn how to integrate GitOps with Kubernetes. I encourage every practitioner to check it out!”

“Our partnership with Cloud Native Computing Foundation (CNCF) has resulted in the creation of this high quality course for software developers who want a better understanding of the GitOps landscape. It includes information on integrating Flux CD with Tekton-based workflows, a great example of CNCF and CDF projects closely working together. By taking the course, you will be able to evaluate and implement GitOps to meet your development needs,” said Tracy Miranda, Executive Director of the Continuous Delivery Foundation. “The launch of these courses is a result of the strong increase in demand for cloud-native applications. This program will directly benefit those interested in expanding their Git and Kubernetes knowledge and following best practices for GitOps techniques.”

Introduction to GitOps consists of 3-4 hours of course material including video lessons. It is available at no cost for up to a year.

GitOps: Continuous Delivery on Kubernetes with Flux consists of 30-40 hours of course material, including video lessons, hands-on labs, and more. The $299 course fee includes a full year of access to all materials.

About Cloud Native Computing Foundation

Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry’s top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 500 members, including the world’s largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit www.cncf.io

About the Continuous Delivery Foundation

The Continuous Delivery Foundation (CDF) seeks to improve the world’s capacity to deliver software with security and speed. The CDF is a vendor-neutral organization that is establishing best practices of software delivery automation, propelling education and adoption of CD tools, and facilitating cross-pollination across emerging technologies. The CDF is home to many of the fastest-growing projects for CD, including Jenkins, Jenkins X, Tekton, and Spinnaker. The CDF is part of the Linux Foundation, a nonprofit organization. For more information about the CDF, please visit https://cd.foundation.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 # # #


The Linux Foundation Appoints Industry Veteran as Chief Marketing Officer

Tuesday 22nd of June 2021 10:05:00 PM

Derek Weeks brings proven leadership to accelerate growth in open source innovation and security

SAN FRANCISCO, June 22, 2021 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the appointment of a key addition to its executive management team, Derek Weeks, who joins the organization in the newly created role of SVP and Chief Marketing Officer. Weeks will oversee all developer and product marketing, brand strategy, direct and digital marketing, acquisition and retention marketing, analytics and marketing operations, and communications for The Linux Foundation.

As an accomplished enterprise software executive, Weeks brings over 25 years of experience in product, corporate, brand, and community marketing to the Linux Foundation. Weeks most recently helped create new market categories in open source software development and security as a marketing executive at Sonatype, while he also built massive online developer communities as a co-founder of All Day DevOps. Prior to Sonatype, Weeks held global marketing leadership positions for software portfolios in private and public companies, including: Global 360 (acquired by OpenText), Systar (acquired by Axway), Hyperformix (acquired by CA, Inc.), and Hewlett-Packard in the US and Germany.

Weeks has received wide recognition for his achievements in the industry: he has been named to the DevOps 100 by TechBeacon, distinguished as the DevOps Evangelist of the Year by DevOps.com, and received the Industry Executive of the Year award from the Advanced Technology Academic Research Center (ATARC). Weeks, who spent his adolescent years growing up and working in Silicon Valley, holds a degree in International Business from San Jose State University.

Jim Zemlin, Executive Director of the Linux Foundation, said, “Derek is a natural addition to our leadership team as he is a highly experienced, much-admired marketing and community leader. Finding the right CMO, with a proven track record of helping open source professionals innovate faster while also recognizing the need for improved security across software supply chains they rely upon, was critical. I am delighted to have Derek join us. He has an impressive track record of growing organizations to massive scale, championing open source security research, and building some of the industry’s largest communities for education and collaboration.”

“Open source software is fueling a massive tidal wave of transformation across all industries, and I couldn’t think of a more exciting time to come onboard this exceptional team at The Linux Foundation,” said Weeks. “With its respected position in the community, I look forward to driving growth and further elevating the organization’s contributions to open software, hardware, data, and standards that serve millions of developers worldwide.”

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the Linux Foundation
503-867-2304
jennifer@storychangesculture.com


Linux Foundation Introduces Open Voice Network to Prioritize Trust and Interoperability in a Voice-Based Digital Future

Tuesday 22nd of June 2021 10:00:00 PM

Target, Schwarz Gruppe, Wegmans, Microsoft, Veritone and Deutsche Telekom lead standards effort to advance voice assistance

SAN FRANCISCO, June 22, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the Open Voice Network, an open source association dedicated to advancing open standards that support the adoption of AI-enabled voice assistance systems. Founding members include Target, Schwarz Gruppe, Wegmans Food Markets, Microsoft, Veritone, and Deutsche Telekom.

Organizations are beginning to develop, design and manage their own voice assistant systems that are independent of today’s general-purpose voice platforms. This transition is being driven by the desire to manage the entirety of the user experience – from the sound of the voice, the sonic branding, and the content – to integrating voice assistance into multiple business processes and brand environments, from the call center to the branch office and the store. Perhaps most importantly, organizations know they must protect the consumer and the proprietary data that flows through voice. The Open Voice Network will support this evolution by delivering standards and usage guidelines for voice assistant systems that are trustworthy, inclusive and open.

“At Target, we’re continuously exploring and embracing new technologies that can help provide joyful, easy and convenient experiences for our guests. We look forward to working with the Open Voice Network community to create global standards and share best practices that will enable businesses to accelerate innovation in this space and better serve consumers,” said Joel Crabb, Vice President, Architecture, Target Corporation. “The Linux Foundation, with its role in advancing open source for all, is the perfect home for this initiative.”

Voice is expected to be a primary digital interface going forward and will result in a hybrid ecosystem of general-purpose platforms and independent voice assistants that demand interoperability between conversational agents of different platforms and voice assistants. Open Voice Network is dedicated to supporting this transformation with industry guidance on the voice-specific protection of user privacy and data security.

“Voice is expected to be a primary interface to the digital world, connecting users to billions of sites, smart environments and AI bots. It is already increasingly being used beyond smart speakers to include applications in automobiles, smartphones and home electronics devices of all types. Key to enabling enterprise adoption of these capabilities and consumer comfort and familiarity is the implementation of open standards,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “The potential impact of voice on industries including commerce, transportation, healthcare and entertainment is staggering and we’re excited to bring it under the open governance model of the Linux foundation to grow the community and pave a way forward.”

“To speak is human, and voice is rapidly becoming the primary interaction modality between users and their devices and services at home and work. The more devices and services can interact openly and safely with one another, the more value we unlock for consumers and businesses across a wide spectrum of use cases, such as Conversational AI for customer service and commerce,” said Ali Dalloul, General Manager, Microsoft Azure AI, Strategy & Commercialization.

Much as open standards in the earliest days of the Internet brought a uniform way to exchange information and connect with any site anywhere, the Open Voice Network will bring the same standardized ease of development and use to voice assistant systems and conversational agents, leading to huge growth and value for businesses and consumers alike.  Voice assistance depends upon technologies like Automatic Speech Recognition (ASR), Natural Language Processing (NLP), Advanced Dialog Management (ADM) and Machine Learning (ML).  The Open Voice Network will initially be focused on the following areas:

● Standards development: research and recommendations toward the global standards that will enable user choice, inclusivity, and trust.

● Industry value and awareness: identification and sharing of conversational AI best practices that are both horizontal and specific to vertical industries, serving as the source of insight and value for voice assistance.

● Advocacy: working with and through existing industry associations on relevant regulatory and legislative issues, including those of data privacy.

“Voice is transforming the relationships between brands and consumers,” said Rolf Schumann, Chief Digital Officer, Schwarz Gruppe. “Voice is changing the way we are interacting with our digital devices. For instance, when shopping through our smart home appliances. However, voice includes more information than a fingerprint and can entail data about the emotional state or mental health of a user. Therefore, it is of utmost importance to put data protection standards in place to protect the user’s privacy. This is the only way we will contribute to the future of voice.”

“Self-regulation of synthetic voice content creation and use, to protect the voice owner as well as establishing trust with the consumer is foundational,” said Ryan Steelberg, president and cofounder of Veritone. “Having an open network through Open Voice Network for education and global standards is the only way to keep pace with the rate of innovation and demand for influencer marketing. Veritone’s MARVEL.ai, a Voice as a Service solution, is proud to partner with Open Voice Network on building the best practices to protect the voice brands we work with across sports, media and entertainment.”

Membership in the Open Voice Network includes a commitment of resources in support of its research, awareness, and advocacy activities, and active participation in its symposia and workshops. The Linux Foundation open governance model will allow for community-wide contributions that will accelerate conversational AI standards rollout and adoption.

For more information, please visit: https://openvoicenetwork.org/

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contact

Jennifer Cloer
for Linux Foundation and Open Voice Network
jennifer@storychangesculture.com
503-867-2304


A study of the Linux kernel PCI subsystem with QEMU

Tuesday 22nd of June 2021 10:00:00 PM

Using QEMU to analyze the Linux kernel PCI subsystem.
Click to Read More at Oracle Linux Kernel Development


Enabling Easier Collaboration on Open Data for AI and ML with CDLA-Permissive-2.0

Tuesday 22nd of June 2021 10:00:00 PM

The Linux Foundation is pleased to announce the release of the CDLA-Permissive-2.0 license agreement, which is now available on the CDLA website at https://cdla.dev/permissive-2-0/. We believe that CDLA-Permissive-2.0 will meet a genuine need for a short, simple, and broadly permissive license agreement to enable wider sharing and usage of open data, particularly to bring clarity to the use of open data for artificial intelligence and machine learning models. 

We’re happy to announce that IBM and Microsoft are making data sets available today using CDLA-Permissive-2.0.

In this blog post, we’ll share some background about the original versions of the Community Data License Agreement (CDLA), why we worked with the community to develop the new CDLA-Permissive-2.0 agreement, and why we think it will benefit producers, users, and redistributors of open data sets.

Background: Why would you need an open data license agreement?

Licenses and license agreements are legal documents that define how content can be used, modified, and shared. They operate within the legal frameworks for copyrights, patents, and other rights that are established by laws and regulations around the world. These laws and regulations are not always clear and are not always in sync with one another.

Decades of practice have established a collection of open source software licenses and open content licenses that are widely used. These licenses typically work within the frameworks established by laws and regulations mentioned above to permit broad use, modification, and sharing of software and other copyrightable content in exchange for following the license requirements.

Open data is different. Various laws and regulations treat data differently from software or other creative content. Depending on what the data is and which country’s laws you’re looking at, the data often may not be subject to copyright protection, or it might be subject to different laws specific to databases, i.e., sui generis database rights in the European Union. 

Additionally, data may be consumed, transformed, and incorporated into Artificial Intelligence (AI) and Machine Learning (ML) models in ways that are different from how software and other creative content are used. Because of all of this, assumptions made in commonly-used licenses for software and creative content might not apply in expected ways to open data.

Choice is often a good thing, but too many choices can be problematic. To be clear, there are other licenses in use today for open data use cases. In particular, licenses and instruments from Creative Commons (such as CC-BY-4.0 and CC0-1.0) are used to share data sets and creative content. It was also important in drafting the CDLA agreements to enable collaboration with similar licenses. The CDLA agreements are in no way meant as a criticism of those alternatives, but rather the CDLA agreements are focused on addressing newer concerns born out of AI and ML use cases. AI and ML models generated from open data are the primary use case organizations have struggled with — CDLA was designed to address those concerns. Our goal was to strike a balance between updated choices and too many options.

First steps: CDLA version 1.0

Several years ago, in talking with members of the Linux Foundation member counsel community, we began collaborating to develop a license agreement that would clearly enable use, modification, and open data sharing, with a particular eye to AI and ML applications.

In October 2017, The Linux Foundation launched version 1.0 of the CDLA. The CDLA was intended to provide clear and explicit rights for recipients of data under CDLA to use, share and modify the data for any purpose. Importantly, it also explicitly permitted using the results from analyzed data to create AI and ML models, without any of the obligations that apply under the CDLA to sharing the data itself. It was launched with two initial types: a Permissive variant, with attribution-style obligations, and a Sharing variant, with a “copyleft”-style reciprocal commitment when resharing the raw data.

The CDLA-Permissive-1.0 agreement saw some amount of uptake and use. However, subsequent feedback revealed that some potential licensors and users of data under the CDLA-Permissive-1.0 agreement found it to be overly complex for non-lawyers to use. Many of its provisions were targeted at addressing specific and nuanced considerations for open data under various legal frameworks. While these considerations were worthwhile, we saw that communities may balance that specificity and clarity against the value of a concise set of easily comprehensible terms to lawyers and non-lawyers alike.

Partly in response to this, in 2019, Microsoft launched the Open Use of Data Agreement (O-UDA-1.0) to provide a more concise and simplified set of terms around the sharing and use of data for similar purposes. Microsoft graciously contributed stewardship of the O-UDA-1.0 to the CDLA effort. Given the overlapping scope of the O-UDA-1.0 and the CDLA-Permissive-1.0, we saw an opportunity to converge on a new draft for a CDLA-Permissive-2.0. 

Moving to version 2.0: Simplifying, clarifying, and making it easier

Following conversations with various stakeholders and after a review and feedback period with the Linux Foundation Member Counsel community, we have prepared and released CDLA-Permissive-2.0

In response to perceptions of CDLA-Permissive-1.0 as overly complex, CDLA-Permissive-2.0 is short and uses plain language to express the grant of permissions and requirements. Like version 1.0, the version 2.0 agreement maintains the clear rights to use, share and modify the data, as well as to use without restriction any “Results” generated through computational analysis of the data.

Unlike version 1.0, the new CDLA-Permissive-2.0 is less than a page in length.

The only obligation it imposes when sharing data is to “make available the text of this agreement with the shared Data,” including the disclaimer of warranties and liability. 

In a sense, you might compare its general “character” to that of the simpler permissive open source licenses, such as the MIT or BSD-2-Clause licenses, albeit specific to data (and with even more limited obligations).

One key point of feedback from users of the license, and from lawyers at organizations involved in Open Data, was the challenge of associating attribution information with data (or with versions of data sets).

Although “attribution-style” provisions may be common in permissive open source software licenses, there was feedback that:

As data technologies continue to evolve beyond what the CDLA drafters might anticipate today, it is unclear whether typical ways of sharing attributions for open source software will fit well with open data sharing. 

Removing this as a mandated requirement was seen as preferable.

Recipients of Data under CDLA-Permissive-2.0 may still choose to provide attribution about the data sources. Attribution will often be important for appropriate norms in communities, and understanding a data set’s origin is often a key aspect of why it has value. The CDLA-Permissive-2.0 simply does not make it a condition of sharing data.

CDLA-Permissive-2.0 also removes some of the more confusing terms that we’ve learned were just simply unnecessary or not useful in the context of an open data collaboration. Removing these terms enables the CDLA-Permissive-2.0 to present the terms in a concise, easy to read format that we believe will be appreciated by data scientists, AI/ML users, lawyers, and users around the world where English is not a first language.

We hope and anticipate that open data communities will find it easy to adopt it for releases of their own data sets.

Voices from the Community

“The open source licensing and collaboration model has made AI accessible to everyone, and formalized a two-way street for organizations to use and contribute to projects with others helping accelerate applied AI research. CDLA-Permissive-2.0 is a major milestone in achieving that type of success in the Data domain, providing an open source license specific to data that enables access, sharing and using data among individuals and organizations. The LF AI & Data community appreciates the clarity and simplicity CDLA-Permissive-2.0 provides.” Dr. Ibrahim Haddad, Executive Director of LF AI & Data 

“We appreciate the simplicity of the CDLA-Permissive-2.0, and we appreciate the community ensuring compatibility with Creative Commons licensed data sets.” Catherine Stihler, CEO of Creative Commons

“IBM has been at the forefront of innovation in open data sets for some time and as a founding member of the Community Data License Agreement. We have created a rich collection of open data sets on our Data Asset eXchange that will now utilize the new CDLAv2, including the recent addition of CodeNet – a 14-million-sample dataset to develop machine learning models that can help in programming tasks.” Ruchir Puri, IBM Fellow, Chief Scientist, IBM Research

“Sharing and collaborating with open data should be painless – and sharing agreements should be easy to understand and apply. We applaud the clear and understandable approach in the new CDLA-Permissive-2.0 agreement.” Jennifer Yokoyama, Vice President and Chief IP Counsel, Microsoft

“It’s exciting to see communities of legal and AI/ML experts come together to work on cross-organizational challenges to develop a framework to support data collaboration and sharing.” Nithya Ruff, Chair of the Board, The Linux Foundation and Executive Director, Open Source Program Office, Comcast

“Data is an essential component of how companies build their operations today, particularly around Open Data sets that are available for public use. At OpenUK, we welcome the CDLA-Permissive-2.0 license as a tool to make Open Data more available and more manageable over time, which will be key to addressing the challenges that organisations have coming up. This new approach will make it easier to collaborate around Open Data and we hope to use it in our upcoming work in this space.” Amanda Brock, CEO of OpenUK

“Verizon supports community efforts to develop clear and scalable solutions to legal issues around building artificial intelligence and machine learning, and we welcome the CDLA-Permissive-2.0 as a mechanism for data providers and software developers to work together in building new technology.” Meghna Sinha, VP – AI Center, Verizon

“Sony believes that the spread of clear and simple Open Data licenses like CDLA-2.0 activates Open Data ecosystem and contributes to innovation with AI. We support CDLA’s effort and hope CDLA will be used widely.” Hisashi Tamai, SVP, Sony Group Corporation

Data Sets Available under CDLA-Permissive-2.0

With today’s release of CDLA-Permissive-2.0, we are also pleased to announce several data sets that are now available under the new agreement. 

The IBM Center for Open Source Data and AI Technologies (CODAIT) will begin to re-license its public datasets hosted here using the CDLA-Permissive 2.0, starting with Project CodeNet, a large-scale dataset with 14 million code samples developed to drive algorithmic innovations in AI for code tasks like code translation, code similarity, code classification, and code search.

Microsoft Research is announcing that the following data sets are now being made available under CDLA-Permissive-2.0:

● The Hippocorpus dataset, which comprises diary-like short stories about recalled and imagined events to help examine the cognitive processes of remembering and imagining and their traces in language;
● The Public Perception of Artificial Intelligence data set, comprising analyses of text corpora over time to reveal trends in beliefs, interest, and sentiment about a topic;
● The Xbox Avatars Descriptions data set, a corpus of descriptions of Xbox avatars created by actual gamers;
● A Dual Word Embeddings data set, trained on Bing queries, to facilitate information retrieval about documents; and
● A GPS Trajectory data set, containing 17,621 trajectories with a total distance of about 1.2 million kilometers and a total duration of 48,000+ hours.

Next Steps and Resources

If you’re interested in learning more, please check out the following resources:

● The text of CDLA-Permissive-2.0 on the CDLA website
● The updated CDLA FAQ
● The CDLA repositories on GitHub


What Is OpenIDL, the Open Insurance Data Link platform?

Tuesday 22nd of June 2021 01:56:27 PM

OpenIDL is an open-source project created by the American Association of Insurance Services (AAIS) to reduce the cost of regulatory reporting for insurance carriers, provide a standardized data repository for analytics, and a connection point for third parties to deliver new applications to members. To learn more about the project, we sat down with Brian Behlendorf, General Manager for Blockchain, Healthcare and Identity at Linux Foundation, Joan Zerkovich, Senior Vice President, Operations at American Association of Insurance Services (AAIS) and Truman Esmond, Vice President, Membership & Solutions at AAIS.


Determining the Source of Truth for Software Components

Friday 18th of June 2021 01:00:26 PM

Abstract: Having access to a list of software components and their respective meta-data is critical to performing various DevOps tasks successfully. After considering the varying requirements of the different tasks, we determined that representing a software component as a “collection of files” provided an optimal representation. Conversely, when file-level information is missing, most tasks become more costly or outright impossible to complete.

Introduction

Having access to the list of software components that comprise a software solution, sometimes referred to as the Software Bill of Materials (SBOM), is a requirement for the successful execution of the following DevOps tasks:

  • Open Source and Third-party license compliance
  • Security Vulnerability Management
  • Malware Protection
  • Export Compliance
  • Functionally Safe Certification

A community effort, led by the National Telecommunications and Information Administration (NTIA) [1], is underway to create an SBOM exchange format driven mainly by the Security Vulnerability Management task. The holy grail of an effective SBOM design is twofold:

  1. Define a commonly agreed-upon data structure that best represents a software component; and
  2. Devise a method that uniquely and effectively identifies each software component.

A component must represent a broad spectrum of software types including (but not limited to): a single source file, a library, an application executable, a container, a Linux runtime, or a more complex system that is composed of some combination of these types. For instance, a collection of source files (e.g., busybox 1.27.2) or a collection of three containers, a half dozen scripts, and documentation are two examples.

Because we must handle an eclectic range of component types, finding the right granular level of representation is critical. If it is too large, we will not be able to represent all the component types and the corresponding meta-information required to support the various DevOps tasks. On the other hand, it may add unnecessary complexity, cost, and friction to adoption if it is too small.

Traditionally, components have been represented at the software package or archive level, where the name and version are the primary means of identifying the component. This has several challenges, with the two biggest ones being:

  1. The fact that two different software components can have the same name yet be different, and
  2. Conversely, two copies of software with different names could be identical.

Another traditional method is to rely on the hash of the software using one of several methods – e.g., SHA1, SHA256, or MD5. This works well when your software component represents a single file, but it presents a problem when describing more complex components composed of multiple files. For example, the same collection of source files (e.g., busybox-1.27.2 [2]) could be packaged using different archive methods (e.g., .zip, .gz, .bz2), resulting in the same set of files having different hashes due to the different archive methods used. 
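A short Python sketch illustrates the effect (the file names and contents below are made up for illustration): packaging the same two files as a gzipped tar archive and as a zip archive yields two different archive-level hashes, while the hashes of the individual files are identical in both cases.

    import hashlib, io, tarfile, zipfile

    # Two made-up files standing in for a component's sources.
    files = {"math.c": b"int add(int a, int b) { return a + b; }\n",
             "README": b"example sources\n"}

    # Package the same files two different ways.
    tar_buf = io.BytesIO()
    with tarfile.open(fileobj=tar_buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

    zip_buf = io.BytesIO()
    with zipfile.ZipFile(zip_buf, mode="w") as zf:
        for name, data in files.items():
            zf.writestr(name, data)

    # The archive-level hashes differ even though the contents are identical...
    print(hashlib.sha256(tar_buf.getvalue()).hexdigest())
    print(hashlib.sha256(zip_buf.getvalue()).hexdigest())

    # ...while the per-file hashes are the same no matter how the files are packaged.
    for name, data in files.items():
        print(name, hashlib.sha256(data).hexdigest())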

After considering the different requirements for the various DevOps tasks listed above, and given the broad range of software component types, we concluded that representing a software component as a “collection of files” where the “file” serves as the atomic unit provides an optimal representation. 

This granular level enables access to metadata at the file level, leading to a higher quality outcome when performing the various DevOps tasks (e.g., file-level licensing for license compliance, file-level vulnerability data for security management, and file-level cryptography info for export compliance).  To compute the unique identifier for a given component we recommend taking the “hash” of all the “file hashes” of the files that comprise a component. This enables unique identification independent of how the files are packaged. We discuss this approach in more detail in the sections that follow.

Why the File Level Matters

To obtain the most accurate information to support the various DevOps tasks sufficiently, one would need access to metadata at the atomic file level. This should not be surprising given that files serve as the building blocks from which software is built. If we represented software at any higher level (e.g., just name and version), pertinent information would be lost. 

License Compliance

If you want to understand all the licenses that impose obligations and restrictions on a program or library, you will need to know all the licenses of the files from which it was built (derived). Although an open source software component’s top-level license may be declared to be a single license, it is common to find a half dozen or more other licenses within the codebase, which typically impose additional obligations. Popular open source projects usually borrow from other projects with different licenses. The open sharing of code is the force behind the success of the Open Source movement. For this reason, we must accept license diversity as the rule rather than the exception.

This means that a project is often subject to the obligations of multiple licenses. Consider the impact of this on the use of busybox, which provides a lot of latitude regarding the features included in a build. How one configures busybox will determine which files are used. Knowing which files are used is the only way to know which licenses are applicable. For instance, although the top-level license is GPL-2.0, the source file math.c [3] has three licenses governing it (GPL-2.0, MIT, and BSD) because it was derived from three different projects.  

If one distributed a solution that includes an instance of busybox derived from math.c and provided a written offer for source code, one would need to reproduce their respective license notices in the documentation to comply with the MIT and BSD licenses. Furthermore, we have recently seen an open source component with Apache as the top-level license, yet deep within the bowels of the source code lies a set of proprietary files. These examples illustrate why having file level information is mission-critical.

Security Vulnerability Management

The Heartbleed vulnerability was identified within the OpenSSL component in 2014. Many web servers used OpenSSL to provide secure communication between a browser and a website. If left unpatched, it would allow attackers unprecedented access to sensitive information such as login and password credentials [4]. This vulnerability could be isolated to a single line within a single file. Therefore, the easiest and most definitive way to understand whether one was exposed was to determine whether their instance of OpenSSL was built using that file.

The Amnesia:33 vulnerability announcement [5], reported in November 2020, suggested that any software solution that included the FNET component was affected. With only the name and version of the FNET component to go on, one would have incorrectly concluded the Zephyr LTS 1.14 operating system was vulnerable.  However, by examining the file level source, one could have quickly determined the impacted files were not part of the Zephyr build, making it definitively clear that Zephyr was not in fact vulnerable [6].  Having to conduct a product recall when a product is not affected would be highly unproductive and costly. However, in the absence of file-level information, the analysis would not be possible and would have likely caused unnecessary worry, work, and cost. These examples further illustrate why having access to file-level information is mission-critical.

Export Compliance

The output quality of an Export Compliance program also depends on having access to file-level data. Although different governments have different rules and requirements concerning software export license compliance, most policies center around the use of cryptography methods and algorithms. To understand which cryptography libraries and algorithms are implemented, one needs to inspect the file-level source code. Depending on how a given software solution is built and which cryptography-based files are used (or not used), one should classify the software concerning the different jurisdiction policies. Having access to file-level data would also enable one to determine the classification for any given jurisdiction dynamically. The requirements of the export compliance task also mean that knowing what is at the file level is mission-critical.

Functional Safety

The objective of the functional safety software certification process is to mitigate the unacceptable risk of physical injury or of damage to the health of people and/or property. The standards that govern functional safety (e.g., ISO/IEC 61508, ISO 26262, …) require that the full system context is known to assess and mitigate risk successfully. Therefore, full system transparency requires verification and validation at the source file level, which includes understanding all the source code and build tools used, and how it was configured. The inclusion of components of unknown content and provenance would increase risk and prohibit most certifications. Thus, functionally safe certification represents yet another task where having file-level information becomes mission-critical.

Component Identification

One of the biggest challenges in managing software components is the ability to identify each one uniquely. A high-confidence method must ensure that two copies of a component are identified as the same when their content is identical, and as different when it is not. Furthermore, we want to avoid creating a dependency on a central component registry as a requirement for determining a component’s identifier. Therefore, an additional requirement is to be able to compute a unique identifier simply by examining the component’s contents.

Understanding a component’s file-level composition can play a critical role in designing such a method. Recall that our goal is to allow a software component to represent a wide spectrum of component types ranging from a single source file to a collection of containers and other files. Each component could therefore be broken down into a collection of files. This representation enables the construction of a method that can uniquely identify any given component. 

File hash methods such as SHA1, SHA256, and MD5 are effective at uniquely identifying a single file. However, when representing a component as a collection of files, we can uniquely represent it by creating a meta-hash – i.e., by taking the hash of “all file hashes” of the files that comprise a component. That is, i) generate a hash for each file (e.g., using SHA256), ii) sort the list of hashes, and iii) take the hash of the sorted list. Thus, the meta-hash approach would enable us to identify a component based solely on its content uniquely, and no registry or repository of truth is required.  
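As a sketch of the meta-hash idea (the directory layout and function names below are illustrative assumptions, not part of any specification), the following Python code hashes every file in a component directory, sorts the resulting digests, and hashes the sorted list to produce a single content-derived identifier:

    import hashlib
    from pathlib import Path

    def file_hashes(component_dir):
        # i) generate a SHA-256 hash for every file that makes up the component
        return [hashlib.sha256(p.read_bytes()).hexdigest()
                for p in Path(component_dir).rglob("*") if p.is_file()]

    def meta_hash(component_dir):
        # ii) sort the list of file hashes, then iii) hash the sorted list
        combined = "".join(sorted(file_hashes(component_dir))).encode("ascii")
        return hashlib.sha256(combined).hexdigest()

    # The identifier depends only on file contents, so it is the same whether the
    # component arrived as a .zip, a .tar.gz, or a checked-out directory tree.
    print(meta_hash("busybox-1.27.2"))   # assumes the sources are unpacked locally

In practice one would also need to agree on details such as the file hash algorithm and how the hash list is normalized, but the principle stays the same: no registry is required, because the identifier can always be recomputed from the content alone.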

Conclusion

Having access to software components and their respective metadata is mission-critical to executing various DevOps tasks. Therefore, it is vital to establish the right level of granularity to ensure we can capture all the required data. This challenge is further complicated by the need to handle an eclectic range of component types. Therefore, finding the right granular level of representation is critical. If it is too large, we will not represent all the component types and the meta-information needed to support the DevOps function. If it is too small, we could add unnecessary complexity, cost, and friction to adoption. We have determined that a file-level representation is optimal for representing the various component types, capturing all the necessary information, and providing an effective method to identify components uniquely.

References

[1] NTIA: Software Bill of Materials web page, https://www.ntia.gov/SBOM

[2] Busybox Project, https://busybox.net/

[3] Software Heritage archive: math.c, https://archive.softwareheritage.org/api/1/content/sha1:695d7abcac1da03e484bcb0defbee53d4652c347/raw/ 

[4] Wikipedia: Heartbleed, https://en.wikipedia.org/wiki/Heartbleed

[5] “AMNESIA:33: Researchers Disclose 33 Vulnerabilities Across Four Open Source TCP/IP Libraries”, https://www.tenable.com/blog/amnesia33-researchers-disclose-33-vulnerabilities-tcpip-libraries-uip-fnet-picotcp-nutnet

[6] Zephyr Security Update on Amnesia:33, https://www.zephyrproject.org/zephyr-security-update-on-amnesia33/


Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices

Thursday 17th of June 2021 10:00:31 PM

The Linux Foundation responds to increasing demand for SBOMs that can improve supply chain security

SAN FRANCISCO, June 17, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced new industry research, training, and tools – backed by the SPDX industry standard – to accelerate the use of a Software Bill of Materials (SBOM) in secure software development.

The Linux Foundation is accelerating the adoption of SBOM practices to secure software supply chains with:

● SBOM standard: stewarding SPDX, the de-facto standard for requirements and data sharing
● SBOM survey: highlighting the current state of industry practices to establish benchmarks and best practices
● SBOM training: delivering a new course on Generating a Software Bill of Materials to accelerate adoption
● SBOM tools: enabling development teams to create SBOMs for their applications

“As the architects of today’s digital infrastructure, the open source community is in a position to advance the understanding and adoption of SBOMs across the public and private sectors,” said Mike Dolan, Senior Vice President and General Manager Linux Foundation Projects. “The rise in cybersecurity threats is driving a necessity that the open source community anticipated many years ago to standardize on how we share what is in our software. The time has never been more pressing to surface new data and offer additional resources that help increase understanding about how to adopt and generate SBOMs, and then act on the information.” 

Ninety percent (90%) of a modern application is assembled from open source software components. An SBOM accounts for the open source software components contained in an application that details their quality, license, and security attributes. SBOMs are used to ensure developers understand what components are flowing throughout their software supply chains, proactively identify issues and risks, and establish a starting point for their remediation.

The recent presidential Executive Order on Improving the Nation’s Cybersecurity referenced the importance of SBOMs in protecting and securing the software supply chain. The National Telecommunications and Information Administration (NTIA) followed the issuance of this order by asking for wide-ranging feedback to define a minimum SBOM. The Linux Foundation has responded to the NTIA’s SBOM inquiry here, and to the presidential Executive Order here.

SPDX: The De-Facto SBOM Open Industry Standard

SPDX, a Linux Foundation project, is the de-facto open standard for communicating SBOM information, including open source software components, licenses, and known security vulnerabilities. SPDX evolved organically over the last ten years through collaboration with hundreds of companies, including the leading Software Composition Analysis (SCA) vendors – making it the most robust, mature, and adopted SBOM standard in the market.

SBOM Readiness Survey

Linux Foundation Research is conducting the SBOM Readiness Survey. It will be deployed next week and will examine obstacles to adoption for SBOMs and future actions required to overcome them related to the security of software supply chains. The recent US Executive Order on Cybersecurity emphasizes SBOMs, and this survey will help identify industry gaps in SBOM applications. Survey questions address tooling, security measures, and industries leading in producing and consuming SBOMs, among other topics.

New Course: Generating a Software Bill of Materials

The Linux Foundation is also announcing a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, how they can be assembled, and an understanding of some of the open source tooling available to support the generation and consumption of an SBOM. 

New Tools: SBOM Generator

Also announced today is the availability of the SPDX SBOM generator, which uses a command-line interface (CLI) to generate SBOM information, including components, licenses, copyrights, and security references of your application, using the SPDX v2.2 specification and aligning with the current known minimum elements from the NTIA. Currently, the CLI supports GoMod (Go), Cargo (Rust), Composer (PHP), DotNet (.NET), Maven (Java), NPM (Node.js), Yarn (Node.js), PIP (Python), Pipenv (Python), and Gems (Ruby). It is easily embeddable in automated processes such as continuous integration (CI) pipelines and is available for Windows, macOS, and Linux. 
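
As a rough illustration of how such a generator can be embedded in an automated pipeline, the Python sketch below shells out to the generator and fails the build step if no SPDX document is produced. The binary name spdx-sbom-generator and the -p/-o flags are assumptions used only for illustration and may differ between releases, so check the tool’s own help output before relying on them.

    # ci_sbom_step.py -- minimal sketch of an SBOM-generation step in a CI pipeline.
    # Assumptions (hypothetical): the generator binary is on PATH as
    # "spdx-sbom-generator" and accepts -p (project path) and -o (output directory);
    # verify the exact flags against the tool's own help, as they may differ.

    import pathlib
    import subprocess
    import sys

    def generate_sbom(project_dir: str, output_dir: str) -> pathlib.Path:
        out = pathlib.Path(output_dir)
        out.mkdir(parents=True, exist_ok=True)
        # Run the generator; a non-zero exit status aborts this CI step.
        subprocess.run(
            ["spdx-sbom-generator", "-p", project_dir, "-o", str(out)],
            check=True,
        )
        # The generator is expected to drop one or more SPDX documents here.
        spdx_files = sorted(out.glob("*.spdx"))
        if not spdx_files:
            sys.exit("No SPDX document was produced; failing the pipeline step.")
        return spdx_files[0]

    if __name__ == "__main__":
        sbom = generate_sbom(project_dir=".", output_dir="./sbom")
        print(f"SBOM written to {sbom}")

In a CI pipeline, a step like this would typically run right after dependency installation, so that the lock files the generator reads are already in place.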

Additional Resources

What is an SBOM?
Build an SBOM training course
Free SBOM tool and APIs

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contacts

Jennifer Cloer

for Linux Foundation

jennifer@storychangesculture.com

503-867-2304

The post Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices appeared first on Linux Foundation.

The post Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices appeared first on Linux.com.

Free Training Course Explores Software Bill of Materials

Thursday 17th of June 2021 08:00:49 PM

At the most basic level, a Software Bill of Materials (SBOM) is a list of components contained in a piece of software. It can be used to support the systematic review and approval of each component’s license terms, clarifying the obligations and restrictions as they apply to the distribution of the supplied software. This is important for reducing risk for organizations building software that uses open source components.

There is often confusion concerning the minimum data elements required for an SBOM and the reasoning behind why those elements are included. Understanding how components interact in a product is key for providing support for security processes, compliance processes, and other software supply chain use cases. 

This is why The Linux Foundation has taken the step of creating a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs, and will help with understanding the benefits of adopting SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, how they can be assembled, and an understanding of some of the open source tooling available to support the generation and consumption of an SBOM. 

The course takes around 90 minutes to complete. It features video content from Kate Stewart, VP, Dependable Embedded Systems at The Linux Foundation, who works with the safety, security, and license compliance communities to advance the adoption of best practices into embedded open source projects. A quiz is included to help confirm learnings.

Enroll today to start improving your development practices.

The post Free Training Course Explores Software Bill of Materials appeared first on Linux Foundation – Training.

The post Free Training Course Explores Software Bill of Materials appeared first on Linux.com.

What is an SBOM?

Tuesday 15th of June 2021 11:00:00 PM

The National Telecommunications and Information Administration (NTIA) recently asked for wide-ranging feedback to define a minimum Software Bill of Materials (SBOM). It was framed with a single, simple question (“What is an SBOM?”), and constituted an incredibly important step towards software security and a significant moment for open standards.

From the NTIA’s SBOM FAQ: “A Software Bill of Materials (SBOM) is a complete, formally structured list of components, libraries, and modules that are required to build (i.e. compile and link) a given piece of software and the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access.” SBOMs that can be shared without friction between teams and companies are a core part of software management for critical industries and digital infrastructure in the coming decades.

The ISO International Standard for open source license compliance (ISO/IEC 5230:2020 – Information technology — OpenChain Specification) requires a process for managing a bill of materials for supplied software. This aligns with the NTIA goals for increased software transparency and illustrates how the global industry is addressing challenges in this space. For example, it has become a best practice to include an SBOM for all components in supplied software, rather than isolating these materials to open source.

The open source community identified the need for and began to address the challenge of SBOM “list of ingredients” over a decade ago. The de-facto industry standard, and most widely used approach today, is called Software Package Data Exchange (SPDX). All of the elements in the NTIA proposed minimum SBOM definition can be addressed by SPDX today, as well as broader use-cases.
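
To make the “list of ingredients” idea concrete, here is a minimal, hand-written sketch, not an official SPDX project example, that assembles a tiny SPDX 2.2 tag-value document for a hypothetical application and one of its components. The field names follow the SPDX 2.2 specification, while the package data and namespace URL are invented for illustration.

    # Minimal, illustrative SPDX 2.2 tag-value document: one hypothetical
    # application ("demo-app") plus one open source component (zlib).
    # Field names follow the SPDX 2.2 specification; the concrete values are invented.

    def minimal_spdx_document() -> str:
        lines = [
            "SPDXVersion: SPDX-2.2",
            "DataLicense: CC0-1.0",
            "SPDXID: SPDXRef-DOCUMENT",
            "DocumentName: demo-app-sbom",
            "DocumentNamespace: https://example.com/spdxdocs/demo-app-1.0",  # hypothetical
            "Creator: Tool: hand-written-example",
            "Created: 2021-06-17T00:00:00Z",
            "",
            "PackageName: demo-app",                      # hypothetical application
            "SPDXID: SPDXRef-Package-demo-app",
            "PackageVersion: 1.0.0",
            "PackageDownloadLocation: NOASSERTION",
            "FilesAnalyzed: false",
            "PackageLicenseConcluded: Apache-2.0",
            "PackageLicenseDeclared: Apache-2.0",
            "PackageCopyrightText: NOASSERTION",
            "",
            "PackageName: zlib",                          # one open source component
            "SPDXID: SPDXRef-Package-zlib",
            "PackageVersion: 1.2.11",
            "PackageDownloadLocation: https://zlib.net/",
            "FilesAnalyzed: false",
            "PackageLicenseConcluded: Zlib",
            "PackageLicenseDeclared: Zlib",
            "PackageCopyrightText: NOASSERTION",
            "",
            # A supply chain relationship between the application and its component.
            "Relationship: SPDXRef-Package-demo-app DEPENDS_ON SPDXRef-Package-zlib",
        ]
        return "\n".join(lines)

    if __name__ == "__main__":
        print(minimal_spdx_document())

Even this toy document shows the kinds of minimum elements discussed above: component names, versions, license data, and the relationship between the pieces.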

SPDX evolved organically over the last decade to suit the software industry, covering issues like license compliance, security, and more. The community consists of hundreds of people from hundreds of companies, and the standard itself is the most robust, mature, and adopted SBOM standard in the market today. 

The full SPDX specification is only one part of the picture. Optional components such as SPDX Lite, developed by Pioneer, Sony, Hitachi, Renesas, and Fujitsu, among others, provide a focused SBOM subset for smaller supplier use. The nature of the community approach behind SPDX allows practical use-cases to be addressed as they arise.

In 2020, SPDX was submitted to ISO via the PAS Transposition process of Joint Technical Committee 1 (JTC1) in collaboration with the Joint Development Foundation. It is currently in the approval phase of the transposition process and can be reviewed on the ISO website as ISO/IEC PRF 5962.

The Linux Foundation has prepared a submission for NTIA highlighting knowledge and experience gained from practical deployment and usage of SBOM in the SPDX and OpenChain communities. These include isolating the utility of specific actions such as tracking timestamps and including data licenses in metadata. With the backing of many parties across the worldwide technology industry, the SPDX and OpenChain specifications are constantly evolving to support all stakeholders.

Industry Comments

The Sony team uses various approaches to managing open source compliance and governance… An example is using an OSS management template sheet based on SPDX Lite, a compact subset of the SPDX standard. Teams need to be able to review the type, version, and requirements of software quickly, and using a clear standard is a key part of this process.

Hisashi Tamai, SVP, Sony Group Corporation, Representative of the Software Strategy Committee

“Intel has been an early participant in the development of the SPDX specification and utilizes SPDX, as well as other approaches, both internally and externally for a number of open source software use-cases.”

Melissa Evers, Vice President – Intel Architecture, Graphics, Software / General Manager – Software Business Strategy

Scania corporate standard 4589 (STD 4589) was just made available to our suppliers and defines the expectations we have when Open Source is part of a delivery to Scania. So what is it we ask for in a relationship with our suppliers when it comes to Open Source? 

1) That suppliers conform to ISO/IEC 5230:2020 (OpenChain). If a supplier conforms to this specification, we feel confident that they have a professional management program for Open Source.  

2) If in the process of developing a solution for Scania, a supplier makes modifications to Open Source components, we would like to see those modifications contributed to the Open Source project. 

3) Supply a Bill of materials in ISO/IEC DIS 5962 (SPDX) format, plus the source code where there’s an obligation to offer the source code directly, so we don’t need to ask for it.

Jonas Öberg, Open Source Officer – Scania (Volkswagen Group)

The SPDX format greatly facilitates the sharing of software component data across the supply chain. Wind River has provided a Software Bill of Materials (SBOM) to its customers using the SPDX format for the past eight years. Often customers will request SBOM data in a custom format. Standardizing on SPDX has enabled us to deliver a higher quality SBOM at a lower cost.

Mark Gisi, Wind River Open Source Program Office Director and OpenChain Specification Chair

The Black Duck team from Synopsys has been involved with SPDX since its inception, and I had the pleasure of coordinating the activities of the project’s leadership for more than a decade. In addition, representatives from scores of companies have contributed to the important work of developing a standard way of describing and communicating the content of a software package.

Phil Odence, General Manager, Black Duck Audits, Synopsys

With the rapidly increasing interest in the types of supply chain risk that a Software Bill of Materials helps address, SPDX is gaining broader attention and urgency. FossID (now part of Snyk) has been using SPDX from the start as part of both software component analysis and for open source license audits. Snyk is stepping up its involvement too, already contributing to efforts to expand the use cases for SPDX by building tools to test out the draft work on vulnerability profiles in SPDX v3.0.

Gareth Rushgrove, Vice President of Products, Snyk

For more information on OpenChain: https://www.openchainproject.org/

For more information on SPDX: https://spdx.dev/

References:
https://www.ntia.gov/files/ntia/publications/frn-sbom-rfc-06022021.pdf
https://www.ntia.doc.gov/files/ntia/publications/ntia_sbom_faq_-_april_15_draft.pdf
Section 3.1.1 “Bill of Materials” in https://github.com/OpenChain-Project/Specification/raw/master/Official/en/2.1/openchainspec-2.1.pdf
https://www.openchainproject.org/news/2020/02/24/openchain-spdx-lite-credit-where-credit-is-due

The post What is an SBOM? appeared first on Linux Foundation.

The post What is an SBOM? appeared first on Linux.com.

When will my instance be ready? — understanding cloud launch time performance metrics

Tuesday 15th of June 2021 10:00:00 PM

Understanding cloud launch time performance metrics
Click to Read More at Oracle Linux Kernel Development

The post When will my instance be ready? — understanding cloud launch time performance metrics appeared first on Linux.com.

Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up

Tuesday 15th of June 2021 07:00:00 PM

After careful consideration, we have decided that the safest course of action for returning to in-person events this fall is to take a “COVID-19 vaccine required” approach to participating in-person. Events that will be taking this approach include:

Open Source Summit + Embedded Linux Conference (and co-located events), Sept 27-30, Seattle, WA
OSPOCon, Sept 27-29, Seattle, WA
Linux Security Summit, Sept 27-29, Seattle, WA
Open Source Strategy Forum, Oct 4-5, London, UK
OSPOCon Europe, Oct 6, London, UK
Open Networking & Edge Summit + Kubernetes on Edge Day, Oct 11-12, Los Angeles, CA
KubeCon + CloudNativeCon (and co-located events), Oct 11-15, Los Angeles, CA
The Linux Foundation Member Summit, Nov 2-4, Napa, CA
Open Source Strategy Forum, Nov 9-10, New York, NY

We are still evaluating whether to keep this requirement in place for events in December and beyond. We will share more information once we have an update.

Proof of full COVID-19 vaccination will be required to attend any of the events listed above. A person is considered fully vaccinated 2 weeks after the second dose of a two-dose series, or two weeks after a single dose of a one-dose vaccine.

Vaccination proof will be collected via a digitally secure vaccine verification application that will protect attendee data in accordance with EU GDPR, California CCPA, and US HIPAA regulations. Further details on the app we will be using, health and safety protocols that will be in place onsite at the events, and a full list of accepted vaccines will be added to individual event websites in the coming months. 

While this has been a difficult decision to make, the health and safety of our community and our attendees are of the utmost importance to us. Mandating vaccines will help infuse confidence and alleviate concerns that some may still have about attending an event in person. Additionally, it helps us keep our community members safe who have not yet been able to get vaccinated or who are unable to get vaccinated. 

This decision also allows us to be more flexible in pivoting with potential changes in guidelines that venues and municipalities may make as organizations and attendees return to in person events. Finally, it will allow for a more comprehensive event experience onsite by offering more flexibility in the structure of the event.

For those that are unable to attend in-person, all of our Fall 2021 events will have a digital component that anyone can participate in virtually. Please visit individual event websites for more information on the virtual aspect of each event.

We hope everyone continues to stay safe, and we look forward to seeing you, either in person or virtually, this fall. 

The Linux Foundation

FAQ

Q: If I’ve already tested positive for COVID-19, do I still need to show proof of COVID-19 vaccination to attend in person? 

A: Yes, you will still need to show proof of COVID-19 vaccination to attend in-person.

Q: Are there any special circumstances in which you will accept a negative COVID-19 test instead of proof of a COVID-19 vaccination? 

A: Unfortunately, no. For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: I cannot get vaccinated for medical, religious, or other reasons. Does this mean I cannot attend?

A: For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 – even due to medical, religious or other reasons – will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: Will I need to wear a mask and socially distance at these events if everyone is vaccinated? 

A: Mask and social distancing requirements for each event will be determined closer to event dates, taking into consideration venue and municipality guidelines.

Q: Can I bring family members to any portion of an event (such as an evening reception) if they have not provided COVID-19 vaccination verification in the app? 

A: No. Anyone that attends any portion of an event in-person will need to register for the event, and upload COVID vaccine verification into our application.

Q: Will you provide childcare onsite at events again this year?

A: Due to COVID-19 restrictions, we unfortunately cannot offer child care services onsite at events at this time. We can, however, provide a list of local childcare providers. We apologize for this disruption to our normal event plans. We will be making this service available as soon as we can for future events.

Q: Will international attendees (from outside the US) be able to attend? Will you accept international vaccinations?

A: Absolutely. As mentioned above, a full list of accepted vaccines will be added to individual event websites in the coming months. 

The post Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up appeared first on Linux Foundation.

The post Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up appeared first on Linux.com.

How Linux Has Impacted Your Lives – Celebrating 30 Years of Open Source

Monday 14th of June 2021 11:50:11 PM

In April, The Linux Foundation asked the open source community: How has Linux impacted your life? Needless to say, responses poured in from across the globe sharing memories, sentiments and important moments that changed your lives forever. We are grateful you took the time to tell us your stories.

We’re thrilled to share 30 of the responses we received, randomly selected from all submissions. As a thank you to these 30 folks for sharing their stories, and in celebration of the 30th Anniversary of Linux, 30 penguins were adopted* from the Southern African Foundation for the Conservation of Coastal Birds in their honor, and each of our submitters got to name their adopted penguin. 

Check out the slides below to read these stories, get a glimpse of their newly adopted penguins and their new names!

Thank you to all who contributed for inspiring us and the community for the next 30 years of innovation and beyond. 

*Each of the adopted wild African penguins has been rescued and is being rehabilitated, with the goal of being released back into the wild, by the wonderful and dedicated staff at SANCCOB.

The post How Linux Has Impacted Your Lives – Celebrating 30 Years of Open Source appeared first on Linux Foundation.

The post How Linux Has Impacted Your Lives – Celebrating 30 Years of Open Source appeared first on Linux.com.

Interview with Stephen Hendrick, Vice President of Research, Linux Foundation Research

Monday 14th of June 2021 10:07:50 PM
Jason Perlow, Director of Project Insights and Editorial Content, spoke with Stephen Hendrick about Linux Foundation Research and how it will promote a greater understanding of the work being done by open source projects, their communities, and the Linux Foundation.

JP: It’s great to have you here today, and also, welcome to the Linux Foundation. First, can you tell me a bit about yourself, where you are from, and your interests outside work?

SH: I’m from the northeastern US.  I started as a kid in upstate NY and then came to the greater Boston area when I was 8.  I grew up in the Boston area, went to college back in upstate NY, and got a graduate degree in Boston.  I’ve worked in the greater Boston area since I was out of school and have really had two careers.  My first career was as a programmer, which evolved into project and product management doing global cash management for JPMC.  When I was in banking, IT was approached very conservatively, with a tagline like yesterday’s technology, tomorrow.  The best thing about JPMC was that it was where I met my wife.  Yes, I know, you’re never supposed to date anybody from work.  But it was the best decision I ever made.  After JPMC, my second career began as an industry analyst working for IDC, specializing in application development and deployment tools and technologies.  This was a long-lived 25+ year career followed by time with a couple of boutique analyst firms and cut short by my transition to the Linux Foundation.

Until recently, interests outside of work mainly included vertical pursuits — rock climbing during the warm months and ice climbing in the winter.  The day I got engaged, my wife (to be) and I had been climbing in the morning, and she jokes that if she didn’t make it up that last 5.10, I wouldn’t have offered her the ring.  However, having just moved to a house overlooking Mt. Hope bay in Rhode Island, our outdoor pursuits will become more nautically focused.

JP: And from what organization are you joining us?

SH: I was lead analyst at Enterprise Management Associates, a boutique industry analyst firm.  I initially focused my practice area on DevOps, but in reality, since I was the only person with application development and deployment experience, I also covered adjacent markets that included primary research into NoSQL, Software Quality, PaaS, and decisioning.  

JP: Tell me a bit more about your academic and quantitative analysis background; I see you went to Boston University, which was my mom’s alma mater as well. 

SH:  I went to BU for an MBA.  In the process, I concentrated in quantitative methods, including decisioning, Bayesian methods, and mathematical optimization.  This built on my undergraduate math and economics focus and was a kind of predecessor to today’s data science focus.  The regression work that I did served me well as an analyst and was the foundation for much of the forecasting work I did and industry models that I built.  My qualitative and quantitative empirical experience was primarily gained through experience in the more than 100 surveys and in-depth interviews I have fielded.  

JP: What disciplines do you feel most influence your analytic methodology? 

SH: We now live in a data-driven world, and math enables us to gain insight into the data.  So math and statistics are the foundation that analysis is built on.  So, math is most important, but so is the ability to ask the right questions.  Asking the right questions provides you with the data (raw materials) shaped into insights using math.  So analysis ends up being a combination of both art and science.

JP: What are some of the most enlightening research projects you’ve worked on in your career? 

SH:  One of the most exciting projects I cooked up was to figure out how many professional developers there were in the world, by country, with five years of history and a 5-year forecast.  I developed a parameterized logistics curve tuned to each country using the CIA, WHO, UN, and selected country-level data.  It was a landmark project at the time and used by the world’s leading software and hardware manufacturers. I was flattered to find out six years later that another analyst firm had copied it (since I provided the generalized equation in the report).

I was also interested in finding that an up-and-coming SaaS company had used some of my published matrix data on language use, which showed huge growth in Ruby.  This company used my findings and other evidence to help drive its acquisition of a successful Ruby cloud application platform.

JP: I see that you have a lot of experience working at enterprise research firms, such as IDC, covering enterprise software development. What lessons do you think we can learn from the enterprise and how to approach FOSS in organizations adopting open source technologies?

SH: The analyst community has struggled at times to understand the impact of OSS. Part of this stems from the economic foundation of the supply side research that gets done.  However, this has changed radically over the past eight years due to the success of Linux and the availability of a wide variety of curated open source products that have helped transform and accelerate the IT industry.  Enterprises today are less concerned about whether a product/service is open or closed source.  Primarily they want tools that are best able to address their needs. I think of this as a huge win for OSS because it validates the open innovation model that is characteristic of OSS. 

JP: So you are joining the Linux Foundation at a time when we have just gotten our research division off the ground. What are the kind of methodologies and practices that you would like to take from your years at firms like IDC and EMA and see applied to our new LF Research?

SH: LF is in the enviable position of having close relationships with IT luminaries, academics, hundreds of OSS projects, and a significant portion of the IT community.  The LF has an excellent opportunity to develop world-class research that helps the IT community, industry, and governments better understand OSS’s pivotal role in shaping IT going forward.

I anticipate that we will use a combination of quantitative and qualitative research to tell this story.  Quantitative research can deliver statistically significant findings, but qualitative interview-based research can provide examples, sound bites, and perspectives that help communicate a far more nuanced understanding of OSS’s relationship with IT.

JP: How might these approaches contrast with other forms of primary research, specifically human interviews? What are the strengths and weaknesses of the interview process?

SH: Interviews help fill in the gaps around discrete survey questions in ways that can be insightful, personal, entertaining, and unexpected.  Interviews can also provide context for understanding the detailed findings from surveys and provide confirmation or adjustments to models based on underlying data.

JP: What are you most looking forward to learning through the research process into open source ecosystems?

SH: The transformative impact that OSS is having on the digital economy and helping enterprises better understand when to collaborate and when to compete.

JP: What insights do you feel we can uncover with the quantitative analysis we will perform in our upcoming surveys? Are there things that we can learn about the use of FOSS in organizations?

SH: A key capability of empirical research is that it can be structured to highlight how enterprises are leveraging people, policy, processes, and products to address market needs.  Since enterprises are widely distributed in their approach and best/worst practices to a particular market, data can help us build maturity models that provide advice on how enterprises can shape strategy and decision based on the experience and best practices of others.

JP: Trust in technology (and other facets of society) is arguably at an all-time low right now. Do you see a role for LF Research to help improve levels of trust in not only software but in open source as an approach to building secure technologies? What are the opportunities for this department?

SH: I’m reminded by the old saying that there are “lies, damned lies, and then there are statistics.” If trust in technology is at an all-time low, it’s because there are people in this world with a certain moral flexibility, and the IT industry has not yet found effective ways to prevent the few from exploiting the many.  LF Research is in the unique position to help educate and persuade through factual data and analysis on accelerating improvements in IT security.

JP: Thanks, Steve. It’s been great talking to you today!

The post Interview with Stephen Hendrick, Vice President of Research, Linux Foundation Research appeared first on Linux Foundation.

The post Interview with Stephen Hendrick, Vice President of Research, Linux Foundation Research appeared first on Linux.com.

Podman is gaining rootless overlay support

Saturday 12th of June 2021 08:31:58 PM

What does a native overlayfs mean to you and your container workloads?
Dan Walsh
Sat, 6/12/2021 at 1:31pm

Podman can use the native overlay file system with Linux kernel version 5.13 and later. Up until now, we have been using fuse-overlayfs. The kernel gained rootless overlay support in version 5.11, but a bug prevented SELinux use with the file system; this bug was fixed in 5.13.
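
As a quick way to see whether a particular machine is ready for this, the hedged Python sketch below compares the running kernel against 5.13 and prints which graph driver Podman reports. It assumes podman is installed and that podman info --format json exposes store.graphDriverName and store.graphOptions; those key names may vary between Podman versions, so treat this as a sketch rather than a definitive check.

    # Rough readiness check for rootless native overlay with Podman.
    # Assumptions: "podman info --format json" is available and its JSON exposes
    # store.graphDriverName / store.graphOptions; key names may vary by version.

    import json
    import platform
    import subprocess

    def kernel_at_least(major: int, minor: int) -> bool:
        release = platform.release()              # e.g. "5.13.0-19-generic"
        parts = release.split(".")
        k_major = int(parts[0])
        k_minor = int(parts[1].split("-")[0])
        return (k_major, k_minor) >= (major, minor)

    def podman_storage_info() -> dict:
        result = subprocess.run(
            ["podman", "info", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout).get("store", {})

    if __name__ == "__main__":
        print("Kernel >= 5.13:", kernel_at_least(5, 13))
        store = podman_storage_info()
        print("Graph driver:", store.get("graphDriverName"))
        # If a fuse-overlayfs mount program shows up in the graph options,
        # Podman is still using FUSE rather than the kernel's native overlayfs.
        print("Graph options:", store.get("graphOptions"))

If the kernel check passes but a fuse-overlayfs mount program is still configured, removing that option from the storage configuration is the usual next step, subject to the distribution’s defaults.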

Topics: Containers, Linux, Podman
Read More at Enable Sysadmin

The post Podman is gaining rootless overlay support appeared first on Linux.com.

FINOS Announces 2021 State of Open Source in Financial Services Survey

Thursday 10th of June 2021 11:00:00 PM

FINOS, the fintech open source foundation, and its research partners, Linux Foundation Research, Scott Logic, WIPRO, and GitHub, are conducting a survey as part of a research project on the state of open source adoption, contribution, and readiness in the financial services industry. 

The increased prevalence, importance, and value of open source is well understood and widely reported by many industry surveys and studies. However, the rate at which different industries are acknowledging this shift and adapting their own working practices to capitalize on the new world of open source-first differs considerably.

The financial services industry has been a long-time consumer of open source software; however, many organizations struggle to contribute to and publish open source software and standards, and to adopt open source methodologies. A lack of understanding of how to build and deploy efficient tooling and governance models is often seen as a limiting factor.

This survey and report seek to explore open source within the context of financial services organizations, including banks, asset managers, and hedge funds, but are designed as a resource to be used by all financial services organizations, with the goal of making this an annual survey with year-on-year tracking of metrics. 

Please participate now; we intend to close the survey in early July. Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be published in the final results.

To take the 2021 FINOS Survey, click the button below:

Take Survey (EN)

BONUS

As a thank-you for completing this survey, you will receive a 75% discount code on enrollment in the Linux Foundation’s Open Source Management & Strategy training program, a $375 savings. This seven-course online training series is designed to help executives, managers, and software developers understand and articulate the basic concepts for building effective open source practices within their organization.

PRIVACY

Your name and company name will not be published. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that survey partners who are not Linux Foundation employees will be involved in reviewing the survey results. If you do not want them to have access to your name or email address, please do not provide this information.

VISIBILITY

We will summarize the survey data and share the findings during Open Source Strategy Forum, 2021. The summary report will be published on the FINOS and Linux Foundation websites. 

QUESTIONS

If you have questions regarding this survey, please email us at info@finos.org

The post FINOS Announces 2021 State of Open Source in Financial Services Survey appeared first on Linux Foundation.

The post FINOS Announces 2021 State of Open Source in Financial Services Survey appeared first on Linux.com.

More in Tux Machines

Kernel: Graphics and Linux M1 Support

  • AMD + Valve Focusing On P-State / CPPC Driver With Schedutil For Better Linux Efficiency - Phoronix

    As reported at the start of August, AMD and Valve have been working on Linux CPU performance/frequency scaling improvements, with the Steam Deck being one of the leading motivators. As speculated at that time, their work would likely revolve around use of ACPI CPPC found with Zen 2 CPUs and newer. Published last week was the AMD P-State driver for Linux systems, which indeed now leverages CPPC information. AMD formally presented this new driver yesterday at XDC2021.

  • DRM Driver Posted For AI Processing Unit - Initially Focused On Mediatek SoCs - Phoronix

    BayLibre developer Alexandre Bailon has posted a "request for comments" for a new open-source Direct Rendering Manager (DRM) driver for AI Processing Unit (APU) functionality. Initially the driver caters to Mediatek SoCs with an AI co-processor, but this DRM "APU" driver could be adapted to other hardware too. Alexandre Bailon sums up the driver as "a DRM driver that implements communication between the CPU and an APU. This uses VirtIO buffer to exchange messages. For the data, we allocate a GEM object and map it using IOMMU to make it available to the APU. The driver is relatively generic, and should work with any SoC implementing hardware accelerator for AI if they use support remoteproc and VirtIO."

  • Apple M1 USB Type-C Linux Support Code Sent Out For Testing - Phoronix

    The latest patches sent out for review/testing on the long mission of enabling Apple M1 support on Linux cover USB Type-C connectivity. Sven Peter has sent out the initial USB Type-C enablement work for the Apple ACE1/2 chips used by Apple M1 systems. In turn, this Apple design is based on the TI TPS6598x IP, but with various differences. The Linux kernel support is being added onto the existing TIPD driver.

Proprietary Security Issues

Audiocasts/Videos: GNU World Order, Sioyek, LUTs

today's howtos

  • How to Install VirtualBox on Debian 11 (Bullseye)

    As we all know, VirtualBox is a free virtualization tool that allows us to install and run multiple virtual machines of different distributions at the same time. VirtualBox is generally used at the desktop level, where geeks create test environments inside virtual machines. Recently, Debian 11 (Bullseye) was released with the latest updates and improved features. In this post, we will cover how to install VirtualBox and its extension pack on a Debian 11 system.

  • How To Install Opera Browser on Debian 11 - idroot

    In this tutorial, we will show you how to install the Opera browser on Debian 11. For those of you who didn’t know, Opera is one of the most popular cross-platform web browsers in the world. Opera offers many useful features, such as a free VPN, an ad blocker, integrated messengers, and a private mode, to help you browse securely and smoothly. Share files instantly between your desktop and mobile browsers and experience web 3.0 with a free crypto wallet. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will walk you through the step-by-step installation of the Opera web browser on Debian 11 (Bullseye).

  • Get your Own URL Shortening Service With YOURLS and Raspberry PI

    Online URL shorteners are services able to transform a long, hard-to-manage URL into a shorter one, usually composed of a domain and a short random string (the most famous being Bitly, Adfly, and Shortest). With YOURLS and a Raspberry PI you can create your own private shortening service. In this tutorial, I’m going to show you how to install and configure YOURLS on a Raspberry PI computer board and publish it. Please note that this can’t be done with a Raspberry PI Pico, as it is a microcontroller and not a Linux computer. YOURLS stands for Your Own URL Shortener. It is open source software, running on a LAMP server and using a small set of PHP scripts that allow you to run your own URL shortening service.

  • How to play Orcs Must Die! 2 on Linux

  • Configure External RAID on Ubuntu/Centos/RedHat - Unixcop

    RAID stands for Redundant Array of Independent Disks (hardware RAID) or Redundant Array of Inexpensive Disks (software RAID). It is a technology that keeps data redundant to avoid data loss if any disk fails or is corrupted.

  • Don’t like Visual Studio Code? Try these 5 Alternatives Apps - itsfoss.net [Ed: Some of the 'alternatives' are also Microsoft and also proprietary software. Rather awful list...]

    When it comes to programming, we need a plain text editor that allows us to easily modify files or take notes. One of the most complete and professional tools is Visual Studio Code. This Microsoft program is not aimed at users with little experience, so, if that is our case, we will surely want to know what the best alternatives are. Anyone can download Visual Studio Code, since it is completely free, but without a doubt it has been designed to be used by programmers. In this field we find many other good options for professional work, especially if we would rather not use a program developed by Microsoft.

  • How to Access BBSes in Linux Using Telnet

    In the '80s and early '90s, the most popular way to get online in the US was through Bulletin Board Systems or BBSes. While they're nowhere near as numerous as they were during their mid-90s heyday, there are still hobbyists operating these systems scattered around the world. And you can access them from Linux, without a dial-up modem.

  • How to solve the undefined variable/index/offset PHP error - Anto ./ Online

    This guide will show you how to solve the "undefined variable", "undefined index", or "undefined offset" notice that you are experiencing in PHP. This error is easy to spot in the warning and error message logs. Consequently, you will typically see a descriptive error message like this...