Chef Liberated

Chef Goes All-In on Open Source

  • DevOps Chat: Chef Goes All-In on Open Source

    Ahead of next month’s ChefConf in Seattle, Chef has announced a major refinement to its business model. Affirming and clarifying its commitment to open source, Chef will release all of its software as open source, now and in the future.

LWN and original from Chef

  • Chef becomes 100% free software

    Chef, the purveyor of a popular configuration-management system, has announced a move away from the open-core business model and the open-sourcing of all of its software.

  • Introducing the New Chef: 100% Open, Always

    Today, Chef is announcing meaningful changes to the way that we build and distribute our software. Chef has always believed in the power of open source. This philosophy is core to the way that we think about software innovation. There is no better way to build software than in the open in partnership with individuals and companies who use our stack in the real world. And for enterprises and other organizations facing complex challenges, Chef backs up our software by building and supporting distributions for our projects with the resources necessary for these organizations to succeed.

    Going forward, we are doubling down on our commitment to OSS development as we extend our support for the needs of enterprise-class transformation. Starting today, we will expand the scope of our open source licensing to include 100% of our software under the Apache 2.0 license (consistent with our existing Chef Infra, Chef InSpec, and Chef Habitat license terms) without any restrictions on the use, distribution or monetization of our source code as long as our trademark policy is respected. We welcome anyone to use and extend our software for any purpose in alignment with the four essential freedoms of Free Software.

Chef-paid 'analyst' comments

  • Chef’s Different Recipe

    Over the course of the past three plus years, the market has seen a growing number of commercial open source organizations – with encouragement from some investors – drifting away from traditional open source licensing and norms. While there have been significant differences in the precise mechanics of their respective approaches, the common thread between the likes of Confluent, Elastic, MongoDB, Redis Labs and TimeScale has been their willingness to violate open source norms long considered sacrosanct.

    Contrary to internet opinions, however, little if any of this was done with malicious intent, or absent due consideration for the grave implications of the various moves. In the majority of cases, the difficult decisions were made reluctantly, under duress. Whether that duress was real or more theoretical in nature is a matter of some debate, but there can be no doubt that the companies involved embarked on these courses because they felt they had to, not because they particularly wanted to.

    There have been many contributing factors to this drift away from open source, including the intrinsic problems of compelling payment for assets otherwise available for free, but by far the biggest driver has been the once cold war that recently escalated into a full-scale hot war between cloud vendors and commercial open source providers. The relationship between these archetypes is fraught, and has evolved from relatively benign indifference on the part of open source providers to existential, unmanageable dread.

By Sean Michael Kerner

  • Chef Opens Up DevOps Platform With Enterprise Automation Stack

    Chef has been at the forefront of the DevOps movement with its namesake open-source Chef project. Not all of Chef's platforms, however, have been open-source, with some available under commercial proprietary licenses.

    On April 2, Chef announced a major shift in its company alignment, making all of its products available under the Apache 2.0 open-source license and revealing a new supported platform called the Enterprise Automation Stack. The move to being 100 percent open-source is an effort to provide more transparency and encourage broader collaboration. Rather than moving the projects to an independent, third-party open-source foundation and governance model, however, Chef will continue to lead and operate the projects.

    "When looking at foundations and the shape of open-source, a big issue is deciding who controls and builds the upstream asset. As soon as you put software into a foundation, the foundation controls the asset," Adam Jacob, co-founder and CTO of Chef, told eWEEK. "One of the things that we're doing is aligning our own commercial interests with our interest in being the upstream that provides the project."

Goodbye Open Core — Good Riddance to Bad Rubbish

  • Goodbye Open Core — Good Riddance to Bad Rubbish

    This morning, Chef Software announced that it will be releasing 100% of its software as Open Source, under the Apache License. Going forward, all of its product development will be done in the open, with the community, and released as Open Source Software. Chef is done with being Open Core, and is now a Free Software Product company. Good riddance to bad rubbish.

    As a Co-Founder of Chef, a board member, and a community member, I couldn’t be more thrilled. For me, it eliminates the longest-running source of friction and frustration from my time at Chef. On the one hand, we have a community that cares about the software, and about each other, where we develop the software in concert with our users and customers. On the other, we produced a proprietary software stack, which we use to make money. Deciding what’s in, and what’s out, or where to focus, was the hardest part of the job at Chef. I’m stoked nobody has to do it anymore. I’m stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I’m stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on.

    As an insider, I got to witness first-hand the boldness and deep thought put into this transition by the team at Chef. Our incredible CEO, Barry Crist; Corey Scobie, the SVP of Product and Technology; Brian Goldfarb, CMO; Katie Long, our VP of Legal; and so many others: thank you.

Yet more coverage

Chef Goes All Open Source

  • Chef Goes All Open Source

    The Chef automation tool, a popular solution for DevOps IT management scenarios, has announced that it will become a 100% open source platform. In the past, the basic Chef application was available in open source form, but the company also provided several enhancements and add-on tools with proprietary licenses. Rather than building proprietary tools around an open source core, Chef will now open source all of its software under an Apache 2.0 license.

    According to Chef CEO Barry Crist, “Over the years we have experimented with and learned from a variety of different open source, community, and commercial models, in search of the right balance. We believe that this change, and the way we have made it, best aligns the objectives of our communities with our own business objectives. Now we can focus all of our investment and energy on building the best possible products in the best possible way for our community without having to choose between what is ‘proprietary’ and what is ‘in the commons.’”

Configuration Management Tool Chef Announces to go 100% OSS

  • Configuration Management Tool Chef Announces to go 100% Open Source

    In case you did not know, Chef is among the most popular automation software services out there.

    Recently, it announced some changes to its business model and its software. Everyone here believes in the power of open source, and Chef supports that idea too; it has now decided to go 100% open source.

    All of its software will be licensed under the Apache 2.0 license. You can use, modify, distribute, and monetize the source code as long as you respect the trademark policy.

    They have also introduced a new service for enterprises; we’ll take a look at that as you read on.

"Chef Plates All Software as Open Source"

  • Chef Plates All Software as Open Source

    The IT automation and DevOps software vendor is embracing the open source model to clarify its product line and enhance its value proposition.

  • Chef says it’s going 100% open source, forks optional…

    Chef has lifted a page from Red Hat’s recipe book, and is making all of its software 100 per cent open source, under the Apache 2 license.

    But if you’re planning to use the automation and configuration specialist’s software in production, you’re still going to be expected to cough up for a subscription.

    Chef CEO Barry Crist said in a blog post today that the company “has always believed in the power of open source”.

  • Leading DevOps program Chef goes all in with open source [Ed: CBS repeats Microsoft talking points: "Some companies are wriggling out of open-source to maximize their profits." And proprietary software companies never do?]

    Some companies are wriggling out of open-source to maximize their profits.

  • “Chef” DevOps leading program goes all-in on open source

    Chef, one of the leading DevOps companies, announced that from here on out it would develop all of its software as open source under the Apache 2.0 license, and revealed a new supported platform called the Enterprise Automation Stack.

Some belated coverage

  • Chef open sources 100% under Apache 2.0 license

    “Chef Enterprise Automation Stack lets teams establish and maintain a consistent path to production for any application, in order to increase velocity and improve efficiency, so deployment and updates of mission-critical software become easier, move faster and work flawlessly.”

More in Tux Machines

Fedora Workstation 31, AAC Support

  • Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

    On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post that outlined the various improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, continued improvements around Flatpak, Fleet Commander, and more.

  • Fedora's AAC Support Finally Seeing Audio Quality Improvements

    Fedora's version of the FDK-AAC library, which it began shipping in 2017 to finally provide AAC audio support, strips out patent-encumbered functionality. That gutting of the code caused some problems, like audio playback glitches, that are now being addressed. Fortunately, better AAC support is on the way to Fedora: there is an F30 update pending that provides an updated AAC implementation with quality enhancements.

Mozilla: Firefox's Gecko Media Plugin & EME Architecture, Accessibility, Firefox 68 Beta 10 Testday Results

  • Chris Pearce: Firefox's Gecko Media Plugin & EME Architecture

    For rendering audio and video Firefox typically uses either the operating system's audio/video codecs or bundled software codec libraries, but for DRM video playback (like Netflix, Amazon Prime Video, and the like) and WebRTC video calls using baseline H.264 video, Firefox relies on Gecko Media Plugins, or GMPs for short. This blog post describes the architecture of the Gecko Media Plugin system in Firefox, and the major class/objects involved, as it looked in June 2019.

    For DRM video Firefox relies upon Google's Widevine Content Decryption Module, a dynamic shared library downloaded at runtime. Although this plugin doesn't conform to the GMP ABI, we provide an adapter to allow it to be run through the GMP system. We use the same Widevine CDM plugin that Chrome uses.

    For decode and encode of H.264 streams for WebRTC, Firefox uses OpenH264, which is provided by Cisco. This plugin implements the GMP ABI.

  • Hacks.Mozilla.Org: How accessibility trees inform assistive tech

    The web is accessible by default. It was designed with features to make accessibility possible, and these have been part of the platform pretty much from the beginning. In recent times, inspectable accessibility trees have made it easier to see how things work in practice. In this post we’ll look at how “good” client-side code (HTML, CSS and JavaScript) improves the experience of users of assistive technologies, and how we can use accessibility trees to help verify our work on the user experience.

  • QMO: Firefox 68 Beta 10 Testday Results

    As you may already know, on Friday, June 14th, we held a new Testday event for Firefox 68 Beta 10.

Security Leftovers/FUD

  • New Linux Worm Attacks IoT Devices [Ed: How to blame "Linux" for default passwords in devices (and some now also blame "Iran", citing a CIA 'proxy' Recorded Future in relation to this because they want war)]

    Silex has 'bricked' more than 2000 Linux-based IoT devices so far.

  • Your server remote login isn't root:password, right? Cool. You can keep your data. Oh sh... your IoT gear, though? [Ed: All this "Silex" 'news' tries to blame Iran for cracking by guessing default passwords; but this is attempted every day by dozens of nations, every minute in a lot of cases. Any political motivation behind this Iran angle?]

    Earlier this week, infosec outfit Recorded Future claimed a Tehran-backed group known as Elfin, or APT33, has been increasingly active in recent months, largely targeting industrial facilities and companies within Saudi Arabia that do business with the US and other Western countries.

  • 'Silex' Malware Renders Internet-of-Things Devices Useless. Here's How to Prevent It [Ed: War lovers' media, e.g. Fortune (see parent) and CBS (through ZDNet) push this whole "Iran" angle, manufactured in part by Recorded Future, which works with the CIA. This is the source of all these "Iran is cracking your gear" stories (every large nation does it all the time, so why the focus on Iran all of a sudden?)]

  • Silex malware targeting IoT devices spotted by security researchers

  • Daily News Roundup: Hackers Broke into Ten Telecom Networks [Ed: Definitely sounds like they used Windows, which executes malware without obstructing the users (who might just open an E-mail or click on a link)]

    Security researchers have revealed hackers spent years burrowing into ten different telecoms. Using a common method of an email with a link leading to malware, the hackers then used sophisticated techniques to target specific individuals.

    Security researchers at Cybereason revealed details of years-long attempts to break into telecom services (cell phone carriers). Starting in 2017, and possibly before, hackers sent emails to unsuspecting telecom employees with malicious links. The initial payload gave the hackers access to the telecom networks. Once in, the hackers ultimately compromised the network, gaining administrative privileges, and even creating a VPN on the system that let hackers access large amounts of data and empowered them even to shut down the telecom network entirely. The hackers had so much power that Amit Serper, Principal Security Researcher at Cybereason, described them as essentially a “de facto shadow IT department of the company.”

Kernel: LWN's Latest (SACK etc.) and Phoronix on Saitek R440 Force Racing Wheel Support Coming to Linux

  • The TCP SACK panic

    Selective acknowledgment (SACK) is a technique used by TCP to help alleviate congestion that can arise due to the retransmission of dropped packets. It allows the endpoints to describe which pieces of the data they have received, so that only the missing pieces need to be retransmitted. However, a bug was recently found in the Linux implementation of SACK that allows remote attackers to panic the system by sending crafted SACK information.

    Data sent via TCP is broken up into multiple segments based on the maximum segment size (MSS) specified by the other endpoint—or some other network hardware in the path it traversed. Those segments are transmitted to that endpoint, which acknowledges that it has received them. Originally, those acknowledgments (ACKs) could only indicate that it had received segments up to the first gap; so if one early segment was lost (e.g. dropped due to congestion), the endpoint could only ACK those up to the lost one. The originating endpoint would have to retransmit many segments that had actually been received in order to ensure the data gets there; the status of the later segments is unknown, so they have to be resent.

    In simplified form, sender A might send segments 20-50, with segments 23 and 37 getting dropped along the way. Receiver B can only ACK segments 20-22, so A must send 23-50 again. As might be guessed, if the link is congested such that segments are being dropped, sending a bunch of potentially redundant traffic is not going to help things.
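
    To make the excerpt's 20-50 example concrete, here is a minimal C sketch of the bookkeeping SACK enables. This is plain illustrative user-space code, not the kernel's actual SACK implementation; the struct sack_block type and the segment numbers are invented for this example. The receiver reports a cumulative ACK plus the later ranges it already holds, and the sender resends only the gaps.

        /* sack_demo.c -- illustrative sketch of SACK-style retransmission logic.
         * Receiver got segments 20-22 (cumulative ACK) plus SACK blocks
         * 24-36 and 38-50; segments 23 and 37 were lost in transit. */
        #include <stdio.h>

        struct sack_block { int start; int end; };  /* inclusive segment range */

        int main(void)
        {
            int send_end = 50;                      /* sender sent segments 20..50 */
            int cum_ack = 22;                       /* everything <= 22 received   */
            struct sack_block sacks[] = { { 24, 36 }, { 38, 50 } };
            int nsacks = sizeof(sacks) / sizeof(sacks[0]);

            printf("retransmit:");
            for (int seg = cum_ack + 1; seg <= send_end; seg++) {
                int held = 0;                       /* covered by a SACK block?  */
                for (int i = 0; i < nsacks; i++)
                    if (seg >= sacks[i].start && seg <= sacks[i].end)
                        held = 1;
                if (!held)
                    printf(" %d", seg);             /* prints: 23 37 */
            }
            printf("\n");
            return 0;
        }

    Without SACK, the sender in this scenario would have to resend all of segments 23-50; with the two SACK blocks it resends only 23 and 37.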

  • Short waits with umwait

    If a user-space process needs to wait for some event to happen, there is a whole range of mechanisms provided by the kernel to make that easy. But calling into the kernel tends not to work well for the shortest of waits — those measured in small numbers of microseconds. For delays of this magnitude, developers often resort to busy loops, which have a much smaller potential for turning a small delay into a larger one. Needless to say, busy waiting has its own disadvantages, so Intel has come up with a set of instructions to support short delays. A patch set from Fenghua Yu to support these instructions is currently working its way through the review process.

    The problem with busy waiting, of course, is that it occupies the processor with work that is even more useless than cryptocoin mining. It generates heat and uses power to no useful end. On hyperthreaded CPUs, a busy-waiting process could prevent the sibling thread from running and doing something of actual value. For all of these reasons, it would be a lot nicer to ask the CPU to simply wait for a brief period until something interesting happens.

    To that end, Intel is providing three new instructions. umonitor provides an address and a size to the CPU, informing it that the currently running application is interested in any writes to that range of memory. A umwait instruction tells the processor to stop executing until such a write occurs; the CPU is free to go into a low-power state or switch to a hyperthreaded sibling during that time. This instruction provides a timeout value in a pair of registers; the CPU will only wait until the timestamp counter (TSC) value exceeds the given timeout value. For code that is only interested in the timeout aspect, the tpause instruction will stop execution without monitoring any addresses.
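
    As a rough illustration of how user space might employ these instructions, here is a hedged C sketch using the compiler intrinsics _umonitor(), _umwait(), and _tpause(); it assumes GCC or Clang with -mwaitpkg and a WAITPKG-capable CPU. The function names wait_for_flag() and short_pause() are invented for this example.

        /* Sketch: brief user-space waits via the WAITPKG intrinsics.
         * Build with: gcc -mwaitpkg -c waitpkg_demo.c (WAITPKG CPU required). */
        #include <stdint.h>
        #include <immintrin.h>  /* _umonitor(), _umwait(), _tpause() */
        #include <x86intrin.h>  /* __rdtsc() */

        #define WAIT_C02 0U     /* control bit 0 = 0 requests the deeper C0.2 state */

        /* Wait until *flag becomes nonzero (written by another thread) or
         * until roughly tsc_delta timestamp-counter ticks have elapsed. */
        static void wait_for_flag(volatile uint32_t *flag, uint64_t tsc_delta)
        {
            uint64_t deadline = __rdtsc() + tsc_delta;

            while (*flag == 0 && __rdtsc() < deadline) {
                _umonitor((void *)flag);      /* arm monitoring of this address */
                if (*flag != 0)               /* re-check to close the race     */
                    break;
                _umwait(WAIT_C02, deadline);  /* sleep until write or deadline  */
            }
        }

        /* Timeout-only variant: stop executing for ~tsc_delta ticks. */
        static void short_pause(uint64_t tsc_delta)
        {
            _tpause(WAIT_C02, __rdtsc() + tsc_delta);
        }

    The control argument selects between the two low-power states the instructions offer: 0 requests the deeper C0.2 state (more power savings, slower wakeup), 1 the lighter C0.1 state.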

  • Dueling memory-management performance regressions

    The 2019 Linux Storage, Filesystem, and Memory-Management Summit included a detailed discussion about a memory-management fix that addressed one performance regression while causing another. That fix, which was promptly reverted, is still believed by most memory-management developers to implement the correct behavior, so a patch posted by Andrea Arcangeli in early May has relatively broad support. That patch remains unapplied as of this writing, but the discussion surrounding it has continued at a slow pace over the last month. Memory-management subsystem maintainer Andrew Morton is faced with a choice: which performance regression is more important?

    The behavior in question relates to the intersection of transparent huge pages and NUMA policy. Ever since this commit from Aneesh Kumar in 2015, the kernel will, for memory areas where madvise(MADV_HUGEPAGE) has been called, attempt to allocate huge pages exclusively on the current NUMA node. It turns out that the kernel will try so hard that it will go into aggressive reclaim and compaction on that node, forcing out other pages, even if free memory exists on other nodes in the system. In essence, enabling transparent huge pages for a range of memory has become an equivalent to binding that memory to a single NUMA node. The result, as observed by many, can be severe swap storms and a dramatic loss of performance.

    In an attempt to fix this problem, Arcangeli applied a patch in November 2018 that loosened the tight binding to the current node. But, it turned out, some workloads want that binding behavior. Local huge pages will perform better than huge pages on a remote node; even local small pages tend to be better than remote huge pages. For some tasks, the performance penalty for using remote pages is high enough that it is worth going to great lengths — even enduring a swap storm at application startup — to avoid it. No such workload has been publicly posted, but the patch was reverted by David Rientjes in December after a huge discussion.
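
    The madvise() call at the center of the dispute is ordinary user-space API; here is a minimal sketch of how an application opts a mapping into transparent huge pages, assuming a Linux system with THP enabled. The node-local reclaim behavior under discussion happens inside the kernel in response to this request, not in this code.

        /* Sketch: opt an anonymous mapping into transparent huge pages.
         * The NUMA-binding side effect debated above is kernel behavior
         * triggered by this same MADV_HUGEPAGE request. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64UL << 20;   /* 64 MiB, well above the 2 MiB huge-page size */

            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return EXIT_FAILURE;
            }

            /* Ask the kernel to back this range with huge pages. */
            if (madvise(p, len, MADV_HUGEPAGE) != 0)
                perror("madvise(MADV_HUGEPAGE)");

            /* Touch each page so the memory is actually faulted in. */
            for (size_t i = 0; i < len; i += 4096)
                ((char *)p)[i] = 1;

            munmap(p, len);
            return EXIT_SUCCESS;
        }

    On the kernel versions discussed above, it is the fault-in of a region marked this way that could trigger aggressive node-local reclaim and compaction rather than falling back to free memory on other nodes.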

  • Rebasing and merging in kernel repositories

    What follows is a kernel document I have been working on for the last month in the hope of reducing the number of subsystem maintainers who run into trouble during the merge window. If all goes according to plan, this text will show up in 5.3 as Documentation/maintainer/rebasing-and-merging.txt. On the off chance that some potentially interested readers might not be monitoring additions to the nascent kernel maintainer's handbook, I'm publishing the text here as well.

    Maintaining a subsystem, as a general rule, requires a familiarity with the Git source-code management system. Git is a powerful tool with a lot of features; as is often the case with such tools, there are right and wrong ways to use those features. This document looks in particular at the use of rebasing and merging. Maintainers often get in trouble when they use those tools incorrectly, but avoiding problems is not actually all that hard.

    One thing to be aware of in general is that, unlike many other projects, the kernel community is not scared by seeing merge commits in its development history. Indeed, given the scale of the project, avoiding merges would be nearly impossible. Some problems encountered by maintainers result from a desire to avoid merges, while others come from merging a little too often.

  • Years Late But Saitek R440 Force Racing Wheel Support Is On The Way For Linux

    If you happen to have a Saitek R440 Force Wheel, or are looking to purchase a cheap used racing wheel for enjoying the various Linux racing game ports or the growing number of games working under Steam Play like F1 2018 and DiRT Rally 2.0, Linux support is on the way. The Saitek R440 Force Wheel can still be found from the likes of eBay for those wanting a cheap/used PC gaming racing wheel. Now coming soon to the Linux kernel is support for this once-popular gaming wheel, which was originally released back in 2004. The Linux kernel patch adding the Saitek R440 was originally sent last year, only to be resent recently in an attempt at mainline acceptance.