News For Open Source Professionals

2020 Open Source Jobs Report Reveals Spike in Demand for DevOps Talent

Monday 26th of October 2020 04:37:02 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and edX, the trusted platform for learning, have released the 2020 Open Source Jobs Report, examining demand for open source talent and trends among open source professionals.

Despite the pandemic, demand for open source technology skills continues to be strong. Companies and organizations continue to increase their recruitment of open source technology talent while offering increased educational opportunities for existing staff to fill skills gaps. 93% of hiring managers report difficulty finding open source talent, and 63% say their organizations have begun to support open source projects with code or other resources for the explicit reason of recruiting individuals with those software skills, a significant jump from the 48% who stated this in 2018. DevOps has also become the top role hiring managers are looking to fill (65% are looking to hire DevOps talent), moving demand for developers to second (59%) for the first time in this report’s history. 74% of employers are now offering to pay for employee certifications, up from 55% in 2018, 47% in 2017, and only 34% in 2016.

Read More: Linux Foundation Training


Role of Training and Certification at the Linux Foundation

Friday 23rd of October 2020 04:59:34 PM

Open source allows anyone to dip their toes in the code, read up on the documentation, and learn everything on their own. That’s how most of us did it, but that’s just the first step. Those who want successful careers building, maintaining, and managing the IT infrastructure of companies need more structured, hands-on learning with real-life experience. That’s where the Linux Foundation’s Training and Certification unit enters the picture. It helps not only greenhorn developers but also members of the ecosystem who seek highly trained and certified engineers to manage their infrastructure. Swapnil Bhartiya sat down with Clyde Seepersad, SVP and GM of Training and Certification at the Linux Foundation, to learn more about the Foundation’s efforts to create a generation of qualified professionals.

Swapnil Bhartiya: Can you tell us a bit about the primary goal of Training and Certification at the Linux Foundation?

Clyde Seepersad: If you look at the history of open source, the first wave of folks was very DIY. They would jump in, they would read the docs, and they would get on IRC channels. That was a sort of true way in which you get into open source — by figuring it out yourself. But of course, that’s never been true for software in general. People always get trained on commercial products. Companies have whole market enablement arms.

One of the things that we identified a few years ago is that there was this gap between amazing quality open source products that were changing how computing gets done, and the talent development side of things. How do we create on-ramps for talent in an age of open source where you don’t necessarily have the same go-to-market commercial organizations making sure that people get trained and up-to-speed technically on the software products? And so that’s the gap we’re really trying to fill — that entry-level talent gap.

Swapnil Bhartiya: I have seen that there is no shortage of training options, but mostly they are specific to a vendor and its product. So, when it comes to vendor-neutral core technologies, that is where there is a huge void. I think that’s the void the LF is trying to fill?

Clyde Seepersad: Correct. It’s for the core technologies. And we always say you need a starting point that’s most useful to most people, and that is a vendor-neutral understanding of the core technologies. We really try to focus on entry-level talent because we recognize that the commercial ecosystems are really valuable. When you get up into the intermediate and advanced layers, by definition, you’re working with specific tool sets and it’s appropriate for you to move into that more specific type of training. But when you’re getting started, you really need the broadest possible foundation because you don’t know if you’re going to be working in an Azure shop. You don’t know if you’re going to be in a GCP shop. You don’t know which distro you’re going to be using. And so the broader the footprint that we can give people to start with, the better. That’s kind of where we focus — entry level, vendor-neutral — so people are best prepared for the maximum number of career opportunities.

Swapnil Bhartiya: The way we learned was we just learned everything ourselves: find it on the Internet, read a lot of books, download stuff. But in today’s world where everybody’s connected, what kind of demand is there for this very basic, entry-level training in the open source space?

Clyde Seepersad: Yeah. I’ll give you a good example, Swapnil. Just this past week, we actually announced the 1,000,000th enrollment in our free Intro to Linux course on edX, which kind of blows my mind. We were able to get out on the internet and find one million people from 222 different countries who wanted to learn the fundamentals of Linux, what it is and what it can do. I think that really shines a good light on just how broad the base is and, more importantly, how global the base is, right? There are a lot of on-ramps into a technology career if you’re in North America or Western Europe, as they are more mature ecosystems with different entry points. When you look globally, there are a lot fewer of those. So part of our mission is, “Hey, this is not an isolated technical challenge for the US or for France or Germany. This is a global technical challenge.” And we’re seeing that demand.

The second highest number of enrollments for free Linux courses is from India, and there’s a broad, deep move toward this. The example I’ve taken to giving recently is that with the pandemic of 2020, my favorite local Chinese restaurant, which is a small mom-and-pop operation, shut down. When they came back online, they came back online with a website and an online ordering system. And I asked, “How did you guys get that set up?” And she said, “Oh, we had to go hire somebody. We had to go figure out how to make a mom-and-pop strip-mall Chinese food business into a web-enabled business.” It’s an example of the breadth that is increasingly true: every business is now a technology business.

Swapnil Bhartiya: One more thing people do not give enough credit for is how much open source technologies have democratized things, because building your own stack is so hard and so expensive. If you want to start a business, you no longer need your own data centers; you have the cloud and open source. You gave an example, and the same is true of my local Indian store. Because of social distancing, or because we did not want to go out, suddenly everything was available online. They had never done that before, but now you can just go online, place the order, and get it delivered to your home. What enabled them to move quickly was all this democratization, and at the same time there is enough of a talent pool, which you guys help create, to actually handle that kind of work. Because of that, suddenly, there is a surge. But when we look at all these technologies, we hear buzzwords like Kubernetes, and they are intimidating. For somebody who is new and wants to get into this, but has no experience in any of these technologies or industries, how should they get started?

Clyde Seepersad: That’s a great point, Swapnil. Actually, that exact challenge is why we recently announced the creation of a new entry-level exam for what we call “IT associates”. It’s a recognition that for those of us in tech, getting started seems like a fairly obvious thing, right? You learn the basic operating system, you get familiar with the cloud technologies, and you start thinking about the problems of stability, scale, and security. But if you’re on the outside looking in, you have never learned this stuff, and you don’t know anybody in your community or your family who does it, it is a very tall ask to say, “Hey, go start by getting certified in Linux” or “Go start by getting the Azure certification or an AWS certification.” It’s just too much to ask of folks. You need some intermediate step to help people build confidence that this is something they can do, even if they don’t have a support system and a network and a set of role models around them.

So, we developed this program to see if we can create a pre-professional certification exam that demonstrates that somebody has understood the fundamental concepts of the new cloud infrastructure, the microservice infrastructure, the cloud native infrastructure, without forcing them to get to the finish line of “Hey, I’m a competent cloud administrator,” right? It’s too much to ask folks to get there in one go. It’s too much in terms of the time, it’s too much in terms of the level of effort, without giving them some midpoint to see, “Okay, I feel confident that I can do this. I have the aptitude. I’ve been able to demonstrate that I can learn some of the basics.” And that really is the audience that we’re targeting. These are folks who are coming from the outside, new to IT, who understand the potential and can see themselves doing it, but we have to give them somewhere to hang their hat, to see, “Okay, it’s going to be fine. It’s a lot to learn, but I’ve shown that I can do it. I’ve shown competence. I’ve shown the aptitude. And potentially, I’ve shown enough to start getting a look from a potential employer or for a potential internship. It’s some entry rung on the ladder.” That’s really what we’re going after: the recognition that it can be a daunting task to try to get somebody all the way up to technical competence. A pre-professional stepping stone could really help make IT seem like a more realistic career option for a lot of folks.

Swapnil Bhartiya: If you look at open source, we all know a lot of core developer maintainers, they have no formal training. Somebody was a doctor and suddenly became a maintainer of a major open source project. But when we look at this whole “serving the enterprise space”, why do we need formal training when you can just go online and learn everything on your own?

Clyde Seepersad: That’s true. It reminds me of the last time I went to the doctor and he had a cartoon printed out on the wall that said, “Your Google search is not as good as my medical degree.” This is not a technology problem. The explosion of information on the Internet has made it possible to access a lot of knowledge and a lot of information. What it doesn’t do is make it easy and structured. So there are always going to be folks, just like there have been historically, who can go between the documentation and the discussion boards and the YouTube videos. They can figure it out for themselves. And our perspective is that’s great. Those people probably don’t need our help, but they’re probably in the single digits if you think of the percentages of people.

Most folks need more structure. They need more guidance. They need labs they can work through, with a solution available so that if they get stuck, they can go say, “Oh, that’s it, I forgot to open that port.” It’s not that training brings any dramatic new content to the table. What it does is create a structured path through a structured set of exercises, with help available if you get stuck. We have discussion boards and different forums for providing that help. It’s not that you couldn’t do it by scouring the web. It’s that for the vast majority of people, that turns an already daunting topic into an impossible one, right? We’ve got to put down the breadcrumbs to help people find the path. That’s where we focus. We’re saying this information exists, but it doesn’t exist in a way that most people can digest, wrap their heads around, and stay committed to a path of getting from here to there. That’s what the training program does. It helps people find the path to get to where they want to go without having to invent the path by themselves.

Swapnil Bhartiya: Right. Another reason you need this structured training is that you’re going to serve a particular industry, not just learn something. There’s a big difference between learning about something and serving a specific industry, with all its challenges and sets of procedures. So yes, it does play a very big role. You can learn everything yourself, but you should go through that specific training to prepare yourself for the job. Now, the Linux Foundation does a lot of work in this training space. Can you give a few examples of the work you’re doing to help close that talent gap? The Linux Foundation also puts out a report every year showing such a huge gap between the supply of and demand for talent.

Clyde Seepersad: Correct. And we’re actually going to publish our newest version of that report, the 2020 Open Source Jobs Report, shortly. I’ll give you a sneak preview. Even with the pandemic going on, more than 50% of the respondents said that they’re going to be hiring entry-level talent. And it’s really because there are only so many times you can go to LinkedIn and try to poach somebody, right? Companies have realized that it’s a zero-sum game. You’re going to have to build and grow talent in-house, especially if you’re taking legacy workloads and trying to make them cloud-native and move them into the cloud, right? Getting brand-new people is not necessarily going to be the best way to make that happen. At the LF, what we’re doing is trying to say, “You need a portfolio of solutions to try to help fill that gap in the market.”

So, we do things like the Intro to Linux course I was talking about, which is available for free on the web. Anybody can sign up for it without paying a dime. We have new exams like this entry-level certification exam. We’ve got instructor-led training for folks who want that. We’ve got affordable e-learning options for folks who want that. We recently put together some bootcamp programs to train people with that extra layer of instructor support. We recognize that there is no one silver bullet. It’s a portfolio of different actions to serve different people who are in different places. How do we create solutions for them to find a path to get where they want to go with the right level of intensity, the right level of support, and, importantly, the right availability and affordability? Because affordability, in reality, is a barrier for a lot of folks. Not everybody can drop $10,000 on a coding bootcamp.

Swapnil Bhartiya: That also makes me think: how do you help individuals meet their own educational goals? As you said, sometimes you need so many resources.

Clyde Seepersad: Yeah. The structured training programs help because they let folks see that there is a sequence in which they can learn and grow. It’s also helpful for them to get into the discussion boards that we provide and engage, not just with the instructors, but with the other people in the programs, to figure out, “These are the challenges we’re all facing; I’m not alone in this. Other people are stuck in similar places.” Just like we were talking about with the new certified IT associate, letting folks see that they’re not alone, and making it easy for them to access help, is an important part of making this accessible. I mean, ultimately, what we want is to create a pathway where people can succeed, where the barriers to entry come down.

A lot of that is around building the community, the affordability, the accessibility, and coming from a place where we are fortunate in the Foundation that we’re a nonprofit. Folks get that we’re not trying to appease shareholders. We really are a mission-driven organization and I think that also helps give people the confidence that the agenda here really is to expand the talent pool. It really is to try to help folks. I think the mantra for my team has been, “Great code alone can’t change the world.” You still need people in there implementing systems, implementing solutions, providing support. So, the open source revolution does need a talent revolution to help sustain it.

Swapnil Bhartiya: Now, we did touch upon this at different points, but do you need a specific qualification, or to be in a specific location, or to be of a certain age, to join these training programs?

Clyde Seepersad: No. We really do make this, as my training director likes to joke about it, we try to go down to what is the file level, right? If you look at our Intro to Linux course, for instance, it really starts by saying, “What’s an OS? What’s a file? How do you install it?” And the beauty of doing this stuff as self-paced learning is it allows people to skip ahead. Usually, you would look at the outline and you can figure out, “Oh, okay, Chapter 7 is where my journey needs to start.” So, it allows people to opt into a training program and find their level, but it also allows people who truly are new to this to find an accessible path in.

Swapnil Bhartiya: Awesome. Clyde, thank you so much for taking time out today to talk about training and certification. I look forward to talking to you again. Thank you.

Clyde Seepersad: Same here. I really appreciate you having me, Swapnil.





New Training Course Provides a Deep Dive Into Node.js Services Development

Tuesday 20th of October 2020 05:29:31 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFW212 – Node.js Services Development.

LFW212, developed in conjunction with the OpenJS Foundation, is geared toward developers on their way to senior level who wish to master and demonstrate their Node.js knowledge and skills, in particular how to use Node with frameworks to rapidly and securely compose servers and services. This course provides a deep dive into Node core HTTP clients and servers, web servers, RESTful services, and web security essentials.

Source: Linux Foundation Training


Goldman Sachs Open Sources its Data Modeling Platform through FINOS

Monday 19th of October 2020 10:36:51 PM

The Linux Foundation has announced that FINOS has started a new open source project, Legend, a data management and governance platform contributed by Goldman Sachs:

The Fintech Open Source Foundation (“FINOS“), together with platinum member Goldman Sachs (GS), today announced the launch of Legend, Goldman’s flagship data management and data governance platform. Developed internally and used by both engineers and non-engineers alike across all divisions of the bank, the source code for five of the platform’s modules has today been made available as open source within FINOS.

Today’s launch comes on the heels of the completion of a six-month pilot in which other leading investment banks, such as Deutsche Bank, Morgan Stanley and RBC Capital Markets, used a shared version of Legend, hosted on FINOS infrastructure in the public cloud, to prototype interbank collaborative data modeling and standardization, in particular to build extensions to the Common Domain Model (CDM), developed by the International Swaps and Derivatives Association (ISDA). This shared environment is now generally available for industry participants to use and build models collaboratively. With the Legend code now available as open source, organizations may also launch and operate their own instances. The components open-sourced today allow any individual and organization across any industry to harness the power of Goldman Sachs’ internal data platform for their own data management and governance needs as well as contribute to the open code base.

“Legend provides both engineers and non-engineers a single platform that allows everyone at Goldman Sachs to develop data-centric applications and data-driven insights,” said Atte Lahtiranta, chief technology officer at Goldman Sachs. “The platform allows us to serve our clients better, automate some of the most difficult data governance challenges, as well as provide self-service tools to democratize data and analytics. We anticipate that the broad adoption of Legend will bring real, tangible value for our clients as well as greater standardization and efficiency across the entire financial services ecosystem.”

Read more at the Linux Foundation


YAML for beginners

Friday 16th of October 2020 07:05:11 AM

Click to Read More at Enable Sysadmin


Introducing the Open Governance Network Model

Thursday 15th of October 2020 01:00:34 PM

The Linux Foundation has long served as the home for many of the world’s most important open source software projects. We act as the vendor-neutral steward of the collaborative processes that developers engage in to create high quality and trustworthy code. We also work to build the developer and commercial communities around that code and to support each project’s members. We’ve learned that finding ways for all sorts of companies to benefit from using and contributing back to open source software development is key to a project’s sustainability.

Over the last few years, we have also added a series of projects focused on lightweight open standards efforts — recognizing the critical complementary role that standards play in building the open technology landscape. Linux would not have been relevant if not for POSIX, nor would the Apache HTTPD server have mattered were it not for the HTTP specification. And just as with our open source software projects, commercial participants’ involvement has been critical to driving adoption and sustainability.

On the horizon, we envision another category of collaboration, one which does not have a well-established term to define it, but which we are today calling “Open Governance Networks.” Before describing it, let’s talk about an example.

Consider ICANN, the agency that arose from demands to evolve the global domain name system (DNS) away from single-vendor control by Network Solutions. With ICANN, DNS became something more vendor-neutral, international, and accountable to the Internet community. It evolved to develop and manage the “root” of the domain name system, independent of any company or nation. ICANN’s control over the DNS comes primarily through its operating agreement with domain name registrars, which establishes rules for registrations, guarantees that your domain names are portable, and provides a uniform dispute resolution policy (the UDRP) for times when a domain name conflicts with an established trademark or causes other issues.

ICANN is not a standards body; they happily use the standards for DNS developed at the IETF. They also do not create software other than software incidental to their mission, perhaps they also fund some DNS software development, but that’s not their core. ICANN is not where all DNS requests go to get resolved to IP addresses, nor even where everyone goes to register their domain name — that is all pushed to registrars and distributed name servers. In this way, ICANN is not fully decentralized but practices something you might call “minimum viable centralization.” Its management of the DNS has not been without critics, but by pushing as much of the hard work to the edge and focusing on being a neutral core, they’ve helped the DNS and the Internet achieve a degree of consistency, operational success, and trust that would have been hard to imagine building any other way.

There are similar organizations that interface with open standards and software but perform governance functions. A prime example is the CA/Browser Forum, which governs the policies behind the root certificates for the SSL/TLS web security infrastructure.

Do we need such organizations? Can’t we go completely decentralized? While some cryptocurrency networks claim not to need formal human governance, it’s clear that there are governance roles performed by individuals and organizations within those communities. Quite a bit of governance can be automated via smart contracts, but repairing damage from exploits of those contracts, promoting the platform’s adoption to new users, onboarding new organizations, and even coordinating hard fork upgrades still require humans in the mix. This is especially important in environments where competitors need to participate in the network to succeed, but do not trust any one competitor to make the decisions.

Network governance is not a solved problem

Network governance is not just an issue for the technical layers. As one moves up the stack into more domain-specific applications, it turns out that there are network governance challenges up here as well, which look very familiar.

Consider a typical distributed application pattern: supply chain traceability, where participants in the network can view, on a distributed database or ledger, the history of the movement of an object from source to destination, and update the network when they receive or send an object. You might be a raw materials supplier, or a manufacturer, or distributor, or retailer. In any case, you have a vested interest in not only being able to trust this distributed ledger to be an accurate and faithful representation of the truth. You also want the version you see to be the same ledger everyone else sees, be able to write to it fairly, and understand what happens if things go wrong. Achieving all of these desired characteristics requires network governance!

You may be thinking that none of this is strictly needed if only everyone agreed to use one organization’s centralized database to serve as the system of record. Perhaps that is a company like eBay, or Amazon, Airbnb, or Uber. Or perhaps, a non-profit charity or government agency can run this database for us. There are some great examples of shared databases managed by non-profits, such as Wikipedia, run by the Wikimedia Foundation. This scenario might work for a distributed crowdsourced encyclopedia, but would it work for a supply chain?

This participation model requires everyone engaging in the application ecosystem to trust that singular institution to perform a very critical role — and not be hacked, or corrupted, or otherwise use that position of power to unfair ends. There is also a need to trust that the entity will not become insolvent or otherwise unable to meet the community’s needs. How many Wikipedia entries have been hijacked or subjected to “edit wars” that go on forever? Could a company trust such an approach for its supply chain? Probably not.

Over the last ten years, we’ve seen the development of new tools that allow us to build better distributed data networks without that critical need for a centralized database or institution holding all the keys and trust. Most of these new tools use distributed ledger technology (“DLT”, or “blockchain”) to build a single source of truth across a network of cooperating peers, and embed programmatic functionality as “smart contracts” or “chaincode” across the network.

The Linux Foundation has been very active in DLT, beginning with the launch of Hyperledger in December of 2015. The Trust Over IP Foundation, launched earlier this year, focuses on the application of self-sovereign identity, which in many deployments uses a DLT as the underlying utility network.

As these efforts have focused on software, they have left the development, deployment, and management of these DLT networks to others. Hundreds of such networks built on top of Hyperledger’s family of different protocol frameworks have launched, some of which (like the Food Trust Network) have grown to hundreds of participating organizations. Many of these networks were never intended to extend beyond an initial set of stakeholders, and they are seeing very successful outcomes.

However, many of these networks need a critical mass of industry participants and have faced difficulty achieving their goal. A frequently cited reason is the lack of clear or vendor-neutral governance of the network. No business wants to place its data, or the data it depends upon, in the hands of a competitor; and many are wary even of non-competitors if it locks down competition or creates a dependency on a market participant. For example, what if the company doesn’t do well and decides to exit this business segment? And at the same time, for most applications, you need a large percentage of any given market to make it worthwhile, so addressing these kinds of business, risk, or political objections to the network structure is just as important as ensuring the software works as advertised.

In many ways, this resembles the evolution of successful open source projects, where developers working at a particular company realize that just posting their source code to a public repository isn’t sufficient. Nor even is putting their development processes online and saying “patches welcome.”

To take an open source project to the point where it becomes the reference solution for the problem being solved and can be trusted for mission-critical purposes, you need to show how its governance and sustainability are not dependent upon a single vendor, corporate largess, or charity. That usually means a project looks for a neutral home at a place like the Linux Foundation, to provide not just that neutrality, but also competent stewarding of the community and commercial ecosystem.

Announcing LF Open Governance Networks

To address this need, today, we are announcing that the Linux Foundation is adding “Open Governance Networks” to the types of projects we host. We have several such projects in development that will be announced before the end of the year. These projects will operate very similarly to the Linux Foundation’s open source software projects, but with some additional key functions. Their core activities will include:

  • Hosting a technical steering committee to specify the software and standards used to build the network, monitor the network’s health, and coordinate upgrades, configurations, and critical bug fixes
  • Hosting a policy and legal committee to specify a network operating agreement that organizations must agree to before connecting their nodes to the network
  • Running an identity system for the network, so participants can trust that other participants are who they say they are, monitor the network’s health, and take corrective action if required
  • Building out a set of vendors who can be hired to deploy peers-as-a-service on behalf of members, in addition to allowing members’ technical staff to run their own if preferred
  • Convening a Governing Board composed of sponsoring members who oversee the budget and priorities
  • Advocating for the network’s adoption by the relevant industry, including engaging relevant regulators and secondary users who don’t run their own peers
  • Potentially managing an open “app store” approach to offering vetted, reusable, deployable smart contracts or add-on apps for network users

These projects will be sustained through membership dues set by the Governing Board on each project, which will be kept to what’s needed for self-sufficiency. Some may also choose to establish transaction fees to compensate operators of peers if usage patterns suggest that would be beneficial. Projects will have complete autonomy regarding technical and software choices – there are no requirements to use other Linux Foundation technologies.

To ensure that these efforts live up to the word "open" and the Linux Foundation's pedigree, the vast majority of technical activity on these projects, including development of all the code and configuration required to run the software that is core to the network, will be done publicly. The source code and documentation will be published under suitable open source licenses, allowing for public engagement in the development process and leading to better long-term trust among participants, better code quality, and more successful outcomes. Hopefully, this will also result in less "bike-shedding" and thrash, better visibility into progress and activity, and an exit strategy should the cooperation efforts hit a snag.

Depending on the industry it serves, the ledger itself might or might not be public. It may contain information authorized for sharing only between the parties involved on the network, or need to account for GDPR or other regulatory compliance. However, we will certainly encourage long-term approaches that do not treat the ledger data as sensitive. Also, an organization must be a member of the network to run peers on it, which is required to see the ledger and, in particular, to write to it or participate in consensus.

Across these Open Governance Network projects, shared operational, project management, marketing, and other logistical support will be provided by Linux Foundation personnel who are well-versed in the platform issues and in the unique legal and operational issues that arise, no matter which specific technology is chosen.

These networks will create substantial commercial opportunity:

  • For software companies building DLT-based applications, this will help you focus on the truly value-delivering apps on top of such a shared network, rather than the mechanics of forming these networks.
  • For systems integrators, DLT integration with back-office databases and ERP is expected to grow to be billions of dollars in annual activity.
  • For end-user organizations, the benefits of automating thankless, non-differentiating, perhaps even regulatorily-required functions could result in huge cost savings and resource optimization.

For those organizations acting as governing bodies on such networks today, we can help you evolve those projects to reach an even wider audience while taking the low-margin, often politically challenging grunt work of managing such networks off your hands.

And for those developers who were concerned about whether such "private" permissioned networks would lead to software cul-de-sacs, wasted effort, or lost opportunity, having the Linux Foundation's bedrock of open source principles and collaboration techniques behind the development of these networks should help ensure success.

We also recognize that not all networks should be under this model. We expect a diversity of approaches that will be long term sustainable, and encourage these networks to find a model that works for them. Let’s talk to see if it would be appropriate.

LF Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit nonstock corporation that will maximize utility, not profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at



The post Introducing the Open Governance Network Model appeared first on The Linux Foundation.


Why Congress should invest in open-source software (Brookings)

Wednesday 14th of October 2020 04:30:36 PM

Frank Nagle at Brookings writes:

As the pandemic has highlighted, our economy is increasingly reliant on digital infrastructure. As more and more in-person interactions have moved online, products like Zoom have become critical infrastructure supporting business meetings, classroom education, and even congressional hearings. Such communication technologies build on FOSS and rely on the FOSS that is deeply ingrained in the core of the internet. Even grocery shopping, one of the strongholds of brick and mortar retail, has seen an increased reliance on digital technology that allows higher-risk shoppers to pay someone to shop for them via apps like InstaCart (which itself relies on, and contributes to, FOSS).


Read more at Brookings

The post Why Congress should invest in open-source software (Brookings) appeared first on

Sysadmin careers: the correlation between mentors and success

Tuesday 13th of October 2020 01:30:00 PM

Click to Read More at Enable Sysadmin

The post Sysadmin careers: the correlation between mentors and success appeared first on

Open Source Processes Driving Software-Defined Everything (LinuxInsider)

Monday 12th of October 2020 10:13:07 PM

Jack Germain writes at LinuxInsider:

The Linux Foundation (LF) has been quietly nudging an industrial revolution. It is instigating a unique change towards software-defined everything that represents a fundamental shift for vertical industries.

LF on Sept. 24 published an extensive report on how software-defined everything and open-source software are digitally transforming essential vertical industries worldwide.

“Software-defined vertical industries: transformation through open source” delves into the major vertical industry initiatives served by the Linux Foundation. It highlights the most notable open-source projects and why the foundation believes these key industry verticals, some over 100 years old, have transformed themselves using open source software.

Digital transformation refers to a process that turns all businesses into tech businesses driven by software. This change towards software-defined everything is a fundamental shift for vertical industry organizations, many of which typically have small software development teams relative to most software vendors.

Read more at LinuxInsider

The post Open Source Processes Driving Software-Defined Everything (LinuxInsider) appeared first on

Linux interface analytics on-demand with iftop

Saturday 10th of October 2020 07:13:56 AM

Click to Read More at Enable Sysadmin

The post Linux interface analytics on-demand with iftop appeared first on

Deconstructing an Ansible playbook

Saturday 10th of October 2020 06:20:58 AM

Click to Read More at Enable Sysadmin

The post Deconstructing an Ansible playbook appeared first on

Kubernetes basics for sysadmins

Friday 9th of October 2020 05:19:32 AM

Click to Read More at Enable Sysadmin

The post Kubernetes basics for sysadmins appeared first on

Amundsen: one year later (Lyft Engineering)

Thursday 8th of October 2020 08:09:55 PM

On October 30, 2019, we officially open sourced Amundsen, our solution to metadata catalog and data discovery challenges. Ten months later, Amundsen joined Linux Foundation AI (LF AI) as an incubation project.

In almost every modern data-driven company, each interaction with the platform is powered by data. As data resources are constantly growing, it becomes increasingly difficult to understand what data resources exist, how to access them, and what information is available in those sources without tribal knowledge. Poor understanding of data leads to bad data quality, low productivity, duplication of work, and most importantly, a lack of trust in the data. The complexity of managing a fragmented data landscape is not just a problem unique to Lyft, but a common one that exists throughout the industry.

In a nutshell, Amundsen is a data discovery and metadata platform for improving the productivity of data analysts, data scientists, and engineers when interacting with data. By indexing data resources (tables, dashboards, users, etc.) and powering a page-rank-style search based on usage patterns (e.g., highly queried tables show up earlier than less-queried tables), it helps users address their data needs faster.
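The usage-weighted ranking idea described above can be sketched in a few lines. This is a hypothetical illustration (the function name and data shape are invented here), not Amundsen's actual code or API:

```javascript
// Illustrative sketch of usage-based ranking: data tables with higher
// query counts surface before less-queried ones, with an alphabetical
// tiebreak for stable ordering.
function rankByUsage(tables) {
  // Copy first so the caller's array is not mutated, then sort
  // descending by query count.
  return [...tables].sort(
    (a, b) => b.queryCount - a.queryCount || a.name.localeCompare(b.name)
  );
}

const results = rankByUsage([
  { name: "events_raw", queryCount: 12 },
  { name: "rides_daily", queryCount: 981 },
  { name: "drivers", queryCount: 344 },
]);
console.log(results.map((t) => t.name));
// → [ 'rides_daily', 'drivers', 'events_raw' ]
```

In a real deployment the query counts would come from query-log metadata rather than being hard-coded, but the ranking principle is the same.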

Read more at Lyft Engineering

The post Amundsen: one year later (Lyft Engineering) appeared first on

How to install and set up SeedDMS

Wednesday 7th of October 2020 08:19:45 PM

Click to Read More at Enable Sysadmin

The post How to install and set up SeedDMS appeared first on

Telcos Move from Black boxes to Open Source

Wednesday 7th of October 2020 02:24:06 PM

Linux Foundation Networking (LFN) organized its first virtual event last week and we sat down with Arpit Joshipura, the General Manager of Networking, IoT and Edge at the Linux Foundation, to talk about the key points of the event and how LFN is leading the adoption of open source within the telco space. 

Swapnil Bhartiya: Today, we have with us Arpit Joshipura, General Manager of Networking, IoT and Edge, at the Linux Foundation. Arpit, what were some of the highlights of this event? Some big announcements that you can talk about?

Arpit Joshipura: This was a global event with more than 80 sessions, attended by people from over 75 countries. The sessions were very diverse. A lot of the sessions were end-user and operator driven, as well as from our vendors and partners. If you take LF Networking and LFH as the two umbrellas that are leading the Networking and Edge implementations here, we had very significant announcements. I would group them into 5 main things:

Number one, we released a white paper at the Linux Foundation level about a bunch of vertical industries that have transformed themselves using open source. These are over-100-year-old industries like telecom, automotive, finance, energy, healthcare, etc. So that's one big announcement: vertical industries taking advantage of open source.

The second announcement was easy enough: Google Cloud joins Linux Foundation Networking as a partner. That announcement comes on the basis of the telecom and cloud markets converging and building on each other.

The third major announcement was a project under LF Networking. If you remember, two years ago, a collaboration with GSMA was started. It was called CNTT, which really defined and narrowed the scope of interoperability and compliance. And we have OPNFV under LFN. What we announced at Open Networking and Edge Summit is that the two projects are going to come together. This is fantastic for the global community of operators who are simplifying the deployment and interoperability of NFVI, VNFs, and CNFs.

The next announcement was around a research study that we released on open source code that was created by Linux Foundation Networking, using LFN analytics and COCOMO estimation. We’re talking $7.2 billion worth of IP investment, right? This is the power of shared technology.

And finally, we released a survey of the Edge community asking them, “Why are you contributing to open source?” And the answers were fascinating. It was all about innovation, speed to deployment, and market creation. Yes, cost was important, but not initially.

So those were the 5 big highlights of the show from an LFN and LFH perspective.

Swapnil Bhartiya: There are two things I'm interested in. One is the consolidation you talked about, and the second is the survey. The fact is that everybody is using open source; there is no doubt about it. But since everybody's using it, there seems to be a gap in awareness of how to be a good open source citizen as well. What have you seen in the telco space?

Arpit Joshipura: First of all, 5 years ago, they were all using black-box and proprietary technologies. Then, we launched a project called OpenDaylight, which, of course, announced its 13th release today, around its 6-year anniversary. From being proprietary then to today, in one of the more active projects, ONAP, the telcos are 4 of the top 10 contributors of source code, right? Who would have imagined that AT&T, Verizon, Amdocs, DT, Vodafone, China Mobile, and China Telecom, you name it, are all actively contributing code? So that's a paradigm shift: not only consuming open source, but also contributing to it.

Swapnil Bhartiya: And since you mentioned ONAP, if I'm not wrong, I think AT&T released its own work as ECOMP. And then the projects within the Foundation were merged to create ONAP. And then you mentioned CNTT. So, what I want to understand from you is how many projects are there within the Foundation? The Linux Foundation and all those other foundations are open, so it's a very good place for those projects to come in. It's obvious that there will be some projects that overlap. So what is the situation right now? Where do you see some overlap happening and, at the same time, are there still gaps that you need to fill?

Arpit Joshipura: So that’s a question of the philosophies of a foundation, right? I’ll start off with the most loose situation, which is GitHub. Millions and millions of projects on GitHub. Any PhD student can throw his code on GitHub and say that’s open source and at the end of the day, if there’s no community around it, that project is dead. Okay. That’s the most extreme scenario. Then, there are foundations like CNCF who have a process of accepting projects that could have competing solutions. May the best project win.

From an LF Networking and LFH perspective, the process is a little bit more restrictive: there is a formal project life cycle document and a process available on the Wiki that looks at the complementary nature of the project, that looks at the ecosystem, that looks at how it will enable and foster innovation. Then based on that, the governing board and the neutral governance that we have set up under the Linux Foundation, they would approve it.

Overall, it depends on the philosophy. For LFN and LFH, we have 8 projects under each umbrella, and most of these projects are quite complementary when it comes to solving different use cases in different parts of the network.

Swapnil Bhartiya: Awesome. Now, I want to talk about 5G a bit. I did not hear any announcements, but can you talk a bit about what work is going on to help the further deployment of 5G technologies?

Arpit Joshipura: Yeah. I'm happy and sad to say that 5G is old news, right? The reality is, all of the infrastructure work on 5G was already released earlier this year. The ONAP Frankfurt release, for example, has a blueprint on 5G slicing, right? All the work has been done, with lots of blueprints in Akraino using 5G and MEC. So, that work is done. The cities are getting lit up by the carriers. You see announcements from global carriers on 5G deployments. I think there are 2 missing pieces of work remaining for 5G.

One is obviously the O-RAN support, right? The O-RAN software community, which we host at the Linux Foundation also is coming up with a second release. And, all the support for 5G is in there.

The second part of 5G is really the compliance and verification testing. A lot of work is going into CNTT and OPNFV. Remember that merged project we talked about, where 5G is in the context of not just OpenStack but also Kubernetes? So the cloud-native aspects of 5G are all being worked on this year. I think we'll see a lot more cloud-native 5G deployments next year, primarily because projects like ONAP are cloud native and integrate with projects like Anthos or Azure Stack and things like that.

Swapnil Bhartiya: What are some of the biggest challenges that the telco industry is facing? I mean, technical challenges like virtualization and all those things were there, but the foundations have solved those problems. What rough spots are still there that you're trying to resolve for them?

Arpit Joshipura: Yeah. I think the recent pandemic caused a significant change in the telcos' thinking, right? Fortunately, because they had already started on a virtualization and open-source route, you heard from Android, you heard from Deutsche Telekom, and you heard from Achronix, all of the operators were able to handle the change in network traffic, the change in traffic direction, SLA workloads, etc., right? All because of the softwarization, as we call it, of the network.

Given the pandemic, I think the first challenge for them was: can the network hold up? And the answer is yes, right? All the work-from-home, all the video conferencing and hanging out on the web, the network held up. That was number one.

Number two is it’s good to hold up the network, but did I end up spending millions and millions of dollars for operational expenditures? And the answer to that is no, especially for the telcos who have embraced an open-source ecosystem, right? So people who have deployed projects like SDN or ONAP or automation and orchestration or closed-loop controls, they automatically configure and reconfigure based on workloads and services and traffic, right? And that does not require manual labor, right? Tremendous amounts of costs were saved from an opex perspective, right?

Operators who are still in the old mindset have significantly increased their opex, and what that has caused is a real strain on their budget sheets.

So those were the 2 big things that we felt were challenges, but have been solved. Going forward, now it’s just a quick rollout/build-out of 5G, expanding 5G to Edge, and then partnering with the public cloud providers, at least, here in the US to bring the cloud-native solutions to market.

Swapnil Bhartiya: Awesome. Now, Arpit, if I'm not wrong, LF Edge is, I think, going to celebrate its second anniversary in January. What do you feel the project has achieved so far? What are its accomplishments? And what are some challenges that it still has to tackle?

Arpit Joshipura: Let me start off with the most important accomplishment as a community and that is terminology. We have a project called State of the Edge and we just issued a white paper, which outlines terminology, terms and definitions of what Edge is because, historically, people use terms like thin edge, thick edge, cloud edge, far edge, near edge and blah, blah, blah. They’re all relative terms. Okay. It’s an edge in relation to who I am.

Instead of that, the paper now defines absolute terms. If I give you a quick example, there are really 2 kinds of edges. There’s a device edge, and then there is a service provider edge. A device edge is really controlled by the operator, by the end user, I should say. Service provider edge is really shared as a service and the last mile typically separates them.

Now, if you double-click on each of these categories, you have several incarnations of an edge. You can have an extremely constrained edge: microcontrollers, etc., mostly manufacturing and IIoT type. You could have a smart device edge, like gateways, etc. Or you could have an on-prem server-type device edge. Either way, an end user controls that edge. The other edge, whether it's on the radio base stations or in a smart central office, the operator controls. So that's kind of the first accomplishment, right? Standardizing terminology.

The second big Edge accomplishment is around 2 projects: Akraino and EdgeX Foundry. These are stage 3 mature projects. They have come out with significant [results]. Akraino, for example, has come out with 20 plus blueprints. These are blueprints that actually can be deployed today, right? Just to refresh, a blueprint is a declarative configuration that has everything from end to end to solve a particular use case. So things like connected classrooms, AR/VR, connected cars, right? Network cloud, smart factories, smart cities, etc. So all these are available today.

EdgeX is the IoT framework for industrial setups, and it is the most downloaded. Those 2 projects, along with Fledge, EVE, Baetyl, Home Edge, Open Horizon, Secure Device Onboard, NSoT, right? Very, very strong growth: over 200% growth in terms of contributions. Huge growth in membership, huge growth in new projects, and in the community overall. We're seeing that Edge is really picking up. Remember, I told you Edge is 4 times the size of the cloud. So, everybody is in it.

Swapnil Bhartiya: Now, the second part of the question was also some of the challenges that are still there. You talked about accomplishment. What are the problems that you see that you still think that the project has to solve for the industry and the community?

Arpit Joshipura: The fundamental challenge that remains is that we're still working as a community in different markets. I think the vendor ecosystem is trying to figure out who is the customer and who is the provider, right? Think of it this way: a carrier, for example AT&T, could be a provider to a manufacturing factory, which could in turn consume something from another provider and then ship it to an end user. So there's a value shift, if you will, in the business world, over who gets the cut. That's still a challenge people are trying to figure out. I think the people who are quick to define, solve, and implement solutions using open technology will probably turn out to be winners.

People who just do analysis paralysis will be left behind, like in any other industry. I think that is fundamentally number one. And number two is the speed at which we want to solve things. The pandemic has just accelerated the need for Edge and 5G. I think people are eager to get gaming with low latency, manufacturing and predictive maintenance with low latency, home surveillance with low latency, connected cars, autonomous driving, all the classroom use cases. They would have been done next year, but because of the pandemic, it all got accelerated.

The post Telcos Move from Black boxes to Open Source appeared first on

New Training Course from Continuous Delivery Foundation Helps Gain Expertise with Jenkins CI/CD

Tuesday 6th of October 2020 04:19:58 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFS267 – Jenkins Essentials.

LFS267, developed in conjunction with the Continuous Delivery Foundation, is designed for DevOps engineers, Quality Assurance personnel, SREs as well as software developers and architects who want to gain expertise with Jenkins for their continuous integration (CI) and continuous delivery (CD) activities.

Source: Linux Foundation Training

The post New Training Course from Continuous Delivery Foundation Helps Gain Expertise with Jenkins CI/CD appeared first on

Quantum networks: The next generation of secure computing

Tuesday 6th of October 2020 12:43:08 PM

Click to Read More at Enable Sysadmin

The post Quantum networks: The next generation of secure computing appeared first on

Setting up a webserver to use HTTPS

Saturday 3rd of October 2020 10:12:21 PM

Click to Read More at Enable Sysadmin

The post Setting up a webserver to use HTTPS appeared first on

Top five Vim plugins for sysadmins

Saturday 3rd of October 2020 07:34:00 PM

Click to Read More at Enable Sysadmin

The post Top five Vim plugins for sysadmins appeared first on

More in Tux Machines

Kernel: XFS and WiMAX in Linux

  • Prepare To Re-Format If You Are Using An Older XFS Filesystem - LinuxReviews

    Linux 5.10 brings several new features to the XFS filesystem: it solves the year 2038 problem, it supports metadata checksumming, and it has better built-in metadata verification. There's also a new configuration option, CONFIG_XFS_SUPPORT_V4. Older XFS filesystems using the v4 layout are now deprecated, and there is no upgrade path beyond "backup and re-format". The Linux kernel will support older XFS v4 filesystems by default until 2025, and optional support will remain available until 2030. We previously reported that the XFS patches for Linux 5.10 delay the 2038 problem to 2486. When Linux 5.10 is released in early December, make sure you don't accidentally say N to CONFIG_XFS_SUPPORT_V4 if you have an older XFS filesystem you'd like to keep using when you upgrade your kernel.

  • The Linux Kernel Looks To Eventually Drop Support For WiMAX

    With the WiMAX 802.16 standard not being widely used outside of the Aeronautical Mobile Airport Communication System (AeroMACS) and some developing nations, the Linux kernel may end up dropping its support for WiMAX, but first there is a proposal to demote it to staging while seeing if any users remain. Longtime kernel developer Arnd Bergmann is proposing that the WiMAX Linux kernel infrastructure and the lone Intel 2400m driver be demoted from the networking subsystem to staging. In a future kernel release, the WiMAX support would be removed entirely if no active users come forward. The Linux kernel WiMAX infrastructure is used only by the Intel 2400m driver for hardware from Sandy Bridge and prior, and is thus of limited relevance these days. That Intel WiMAX implementation doesn't support the frequencies that AeroMACS operates at, and there are no other large known WiMAX deployments around the world making use of the frequencies supported by the 2400m implementation, or other users of this Linux kernel code.

  • Linux Is Dropping WiMAX Support - LinuxReviews

    It's no loss. There is a reason why you have probably never seen a WiMAX device or heard of it: WiMAX was a wireless last-mile Internet solution mostly used in a few rural areas in a limited number of countries between 2005 and 2010. There is very little use for it today, so it is almost natural that Linux is phasing out support for WiMAX and the one WiMAX device it supports. WiMAX is a wireless protocol, much like IP by Avian Carriers except that it has less bandwidth and significantly lower latency. WiMAX (Worldwide Interoperability for Microwave Access) is a set of wireless standards that were used to provide last-mile Internet connectivity where DSL and other solutions were unavailable. WiMAX can work over long distances (up to 50 km), something WiFi can't. The initial design could provide around 25 megabit/s downstream, which was competitive when WiMAX base stations and modems became widely available around 2005. That changed around 2010 when 4G/LTE became widely available. The WiMAX Forum, which maintains the WiMAX standard, tried staying relevant with an updated standard called WiMAX 2 in 2011. Some equipment for it was made, but it never became a thing; WiMAX was pretty much dead by the time WiMAX 2 arrived. The standard NetworkManager utility that GNU/Linux distributions come with supported WiMAX until 2015. The Linux kernel still supports it, and exactly one WiMAX device from Intel, as of Linux 5.9, but that's about to change.

Fedora Elections and IBM/Red Hat Leftovers

  • Fedora 33 elections nominations now open

    Candidates may self-nominate. If you nominate someone else, please check with them to ensure that they are willing to be nominated before submitting their name. The steering bodies are currently selecting interview questions for the candidates. Nominees submit their questionnaire answers via a private Pagure issue. The Election Wrangler or their backup will publish the interviews to the Community Blog before the start of the voting period. Fedora Podcast episodes will be recorded and published as well. Please note that the interview is mandatory for all nominees. Nominees not having their interview ready by end of the Interview period (2020-11-19) will be disqualified and removed from the election.

  • 12 Tips for a migration and modernization project

    Sometimes migration/modernization projects are hard to execute because there are many technical challenges, like the structure of legacy code, the customer environment, customer bureaucracy, network issues, and the most feared of all: production bugs. In this post I'm going to explain the 12-step migration/modernization procedure I follow as a consultant, using a tip-based approach. I have some experience with this kind of situation because I've been through many different projects with several kinds of problems. Over time you start to recognize patterns and get used to solving the hard problems. So I thought: wouldn't it be cool to create a procedure based on my experience, so that I can organize my daily work and give the transparency that customers and managers want? To test this out, I did it for one customer in my hometown who was facing a Red Hat JBoss EAP migration/modernization project. The results of the project were outstanding. The customer said they were even more satisfied because of the transparency, and the project manager seemed really comfortable knowing all the details throughout the project and pleased with the reduced risk of unexpected news.

  • Awards roll call: June 2020 to October 2020

    We are nearly at the end of 2020 and while the pace continues to increase, we want to take a moment to acknowledge and celebrate some of the successes of Red Hat's people and their work. In the last four months, several Red Hatters and Red Hat products are being recognized by leading industry publications and organizations for efforts in driving innovation.

  • How developers can build the next generation of AI advertising technology – IBM Developer

    As we look across the most rapidly transforming industries like financial services, healthcare, retail – and now advertising, developers are putting open source technologies to work to deliver next-generation features. Our enterprise clients are looking for AI solutions that will scale with trust and transparency to solve business problems. At IBM®, I have the pleasure of focusing on equipping you, the developers, with the capabilities you need to meet the heightened expectations you face at work each day. We’re empowering open source developers to drive the critical transformation to AI in advertising. For instance, at the IBM Center for Open source Data and AI Technologies (CODAIT), enterprise developers can find open source starting points to tackle some of your thorniest challenges. We’re making it easy for developers to use and create open source AI models that can ultimately help brand marketers go deeper with AI to reach consumers more effectively.

Programming: Qt, PHP, JS and Bash

  • Qt 6 To Ship With Package Manager For Extra Libraries - Phoronix

    Adding to the list of changes coming with the Qt 6 toolkit, The Qt Company has now outlined their initial implementation of a package manager to provide additional Qt6 modules.

  • Qt for MCUs 1.5 released

    A new release of Qt for MCUs is now available in the Qt Installer. If you are new to Qt for MCUs, you can try it out here. Version 1.5 introduces new platform APIs for easy integration of Qt for MCUs on any microcontroller, along with an in-depth porting guide to get you going. Additionally, it includes a set of C++ APIs to load new images at runtime into your QML GUI. As with every release, 1.5 also includes API improvements and bug fixes, enhancing usability and stability.

  • KDDockWidgets v1.1 has been released! - KDAB - KDAB on Qt

    KDDockWidgets v1.1 is now available! Although I just wrote about v1.0 last month, the 1.1 release still managed to get a few big features.

  • KDAB TV celebrates its first year - KDAB

    A year ago KDAB started a YouTube channel dedicated to software development with Qt, C++ and 3D technologies like OpenGL. We talked to Sabine Faure, who is in charge of the program, about how it worked out so far and what we can expect in the future.

  • How to build a responsive contact form with PHP – Linux Hint

    Contact forms are commonly used in web applications because they allow the visitors of the website to communicate with the owner of the website. For most websites, responsive contact forms can be easily accessed from various types of devices such as desktops, laptops, tablets, and mobile phones. In this tutorial, a responsive contact form is implemented, and the submitted data is sent as an email using PHP.

  • Applying JavaScript’s setTimeout Method

    With the evolution of the internet, JavaScript has grown in popularity as a programming language due to its many useful methods. For example, many websites use JavaScript’s built-in setTimeout method to delay tasks. The setTimeout method has many use cases, and it can be used for animations, notifications, and delayed function execution. Because JavaScript is a single-threaded, interpreted language, it can perform only one task at a time. However, through the event loop and callback queue, the setTimeout method lets us defer the execution of code. In this article, we are going to introduce the setTimeout method and discuss how we can use it to improve our code.
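A minimal sketch of the deferral described above (runnable in Node.js or a browser console; the values are just examples): a callback scheduled with setTimeout runs only after all synchronous code on the call stack has finished, even with a delay of 0 milliseconds.

```javascript
// setTimeout defers its callback until the current call stack is empty,
// so synchronous statements always run first, even with a 0 ms delay.
const order = [];

order.push("first");                // synchronous: runs immediately

setTimeout(() => {
  order.push("third");              // queued: runs after all synchronous code
  console.log(order.join(", "));
}, 0);

order.push("second");               // synchronous: still runs before the callback
```

The callback is placed on the task queue and only executed once the event loop finds the call stack empty, which is why "second" is logged before "third" despite the 0 ms timeout.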

  • Removing Characters from String in Bash – Linux Hint

    At times, you may need to remove characters from a string. Whatever the reason is, Linux provides you with various built-in, handy tools that allow you to remove characters from a string in Bash. This article shows you how to use those tools to remove characters from a string. [...] Sed is a powerful and handy utility used for editing streams of text. It is a non-interactive text editor that allows you to perform basic text manipulations on input streams. You can also use sed to remove unwanted characters from strings. For demonstration purposes, we will use a sample string and then pipe it to the sed command.
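As a quick illustration of the sed approach the excerpt mentions, the sketch below pipes a sample string to sed and deletes characters with the s (substitute) command; the sample string and patterns are just examples:

```shell
#!/bin/sh
# Removing characters from a string by substituting them with nothing.
sample="hello world"

echo "$sample" | sed 's/o//g'      # removes all "o" characters: hell wrld
echo "$sample" | sed 's/l//'       # removes only the first "l": helo world
echo "$sample" | sed 's/[lo]//g'   # removes every "l" and "o": he wrd
```

The trailing g flag makes the substitution global; without it, sed replaces only the first match on each line.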

Python Programming

  • Dissecting a Web stack - The Digital Cat

    Having recently worked with young web developers who were exposed for the first time to proper production infrastructure, I received many questions about the various components that one can find in the architecture of a "Web service". These questions clearly expressed the confusion (and sometimes the frustration) of developers who understand how to create endpoints in a high-level language such as Node.js or Python, but were never introduced to the complexity of what happens between the user's browser and their framework of choice. Most of the time, they don't know why the framework itself is there in the first place. The challenge is clear if we just list, in random order, some of the words we use when we discuss (Python) Web development: HTTP, cookies, web server, Websockets, FTP, multi-threaded, reverse proxy, Django, nginx, static files, POST, certificates, framework, Flask, SSL, GET, WSGI, session management, TLS, load balancing, Apache. In this post, I want to review all the words mentioned above (and a couple more), trying to build a production-ready web service from the ground up. I hope this might help young developers get the whole picture and make sense of these "obscure" names that senior developers like me tend to drop in everyday conversations (sometimes arguably out of turn). As the focus of the post is the global architecture and the reasons behind the presence of specific components, the example service I will use will be a basic HTML web page. The reference language will be Python, but the overall discussion applies to any language or framework. My approach will be to first state the rationale and then implement a possible solution. After this, I will point out missing pieces or unresolved issues and move on to the next layer. At the end of the process, the reader should have a clear picture of why each component has been added to the system.
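To make one of those "obscure" names concrete: WSGI, the interface between a Python framework and the web server, is small enough to sketch by hand. A minimal, framework-free example (the names here are illustrative, not taken from the post):

```python
# A minimal WSGI application: the callable a WSGI server (e.g. Gunicorn,
# uWSGI) invokes for every HTTP request. Frameworks like Flask and Django
# ultimately expose an object with this same signature.
def app(environ, start_response):
    # environ carries the request data (method, path, headers) as a dict.
    path = environ.get("PATH_INFO", "/")
    body = f"<html><body><h1>Hello from {path}</h1></body></html>".encode()

    # start_response sends the status line and the response headers.
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    # The return value is an iterable of bytes chunks.
    return [body]
```

The standard library's wsgiref.simple_server can serve such an app locally for development; in production, a WSGI server like Gunicorn sits behind a reverse proxy such as nginx, which is exactly the layering the post unpacks.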

  • Introducing AutoScraper: A Smart, Fast and Lightweight Web Scraper For Python | Codementor

    In the last few years, web scraping has been one of my frequent day-to-day tasks. I wondered whether I could make it smart and automatic to save lots of time. So I made AutoScraper!

  • django-render-block 0.8 (and 0.8.1) released!

    A couple of weeks ago I released version 0.8 of django-render-block; this was followed up with 0.8.1 to fix a regression. django-render-block is a small library that allows you to render a specific block from a Django (or Jinja) template. This is frequently used for emails, where you want to keep multiple pieces of an email together in a single template (e.g. the subject, HTML body, and text body) but need to render them separately before sending.

  • Pyston v2: 20% faster Python | The Pyston Blog

    We’re very excited to release Pyston v2, a faster and highly compatible implementation of the Python programming language. Version 2 is 20% faster than stock Python 3.8 on our macrobenchmarks. More importantly, it is likely to be faster on your code. Pyston v2 can reduce server costs, reduce user latencies, and improve developer productivity. Pyston v2 is easy to deploy, so if you’re looking for better Python performance, we encourage you to take five minutes and try Pyston. Doing so is one of the easiest ways to speed up your project.

  • Pyston v2 Released As ~20% Faster Than Python 3.8 - Phoronix

    Version 2.0 of Pyston is now available. Pyston is the Python implementation originally started by Dropbox that builds on an LLVM JIT to offer faster Python performance. Pyston's developers believe the new release is about 20% faster than standard Python 3.8 and should be faster for most Python code-bases.

  • Python int to string – Linux Hint

    Python is a universal language that supports various data types, such as integers, floating-point numbers, strings, and complex numbers. We can convert one data type to another in Python; this conversion process is called typecasting. In Python, an integer value can easily be converted into a string by using the str() function, which takes the integer value as a parameter and converts it into a string. However, the conversion of int to string is not limited to the str() function; there are various other ways to do it. This article explains int-to-string conversion using several methods.
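The alternatives alluded to above can be sketched in a few lines: besides str(), f-strings, str.format(), and repr() all produce a string from an integer (the value 42 is just an example):

```python
number = 42

# str() is the canonical typecast from int to string.
as_str = str(number)             # "42"

# f-strings and str.format() interpolate the int into a string.
as_fstring = f"{number}"         # "42"
as_format = "{}".format(number)  # "42"

# repr() also yields a string; for ints it matches str().
as_repr = repr(number)           # "42"

print(as_str, type(as_str).__name__)  # 42 str
```

All four results compare equal; which one to use is mostly a matter of context, with f-strings being the idiomatic choice when the integer is embedded in larger text.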

  • Python isinstance() Function – Linux Hint

    Python is one of the most efficient high-level programming languages. It has a very straightforward and simple syntax, and many built-in modules and functions that help us perform basic tasks efficiently. The Python isinstance() function evaluates whether the given object is an instance of the specified class or not.
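A short sketch of the behavior described above (the classes here are illustrative): isinstance() respects inheritance and also accepts a tuple of classes.

```python
class Animal:
    pass

class Dog(Animal):
    pass

rex = Dog()

# True: rex is a direct instance of Dog.
print(isinstance(rex, Dog))              # True

# True: isinstance() also considers base classes.
print(isinstance(rex, Animal))           # True

# Passing a tuple checks against any of several classes.
print(isinstance(3.14, (int, float)))    # True
print(isinstance("3.14", (int, float)))  # False
```

This base-class awareness is what distinguishes isinstance(obj, C) from the stricter check type(obj) is C.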