6 Best Log Management Tools For Linux in 2019

Filed under
Server
Software

Before we can talk about log management, let’s define what a log is. Simply defined, a log is the automatically-produced and time-stamped documentation of an event relevant to a particular system. In other words, whenever an event takes place on a system, a log is generated. Systems and devices will generate logs for different types of events and many systems give administrators some degree of control over which event generates a log and which doesn’t.

As for log management, it simply refers to the processes and policies used to administer and facilitate the generation, transmission, analysis, storage, archiving and eventual disposal of large volumes of log data. Although not always stated explicitly, log management implies a centralized system where logs from multiple sources are collected. Log management is not just log collection, though; the management part is the most important. Log management systems often have multiple functionalities, collecting logs being just one of them.

Once logs are received by the log management system, they need to be standardized into a common format, as different systems format logs differently and include different data. Some start a log with the date and time, some start it with an event number. Some only include an event ID while others include a full-text description of the event. One of the purposes of log management systems is to ensure that all collected log entries are stored in a uniform format. This will make event correlation and eventual searching much easier down the line.
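As a concrete sketch of that normalization step, here is a minimal Python example. The two input formats and all field names are invented for illustration, not taken from any particular tool:

```python
import re
from datetime import datetime, timezone

# Two hypothetical source formats (invented for this sketch):
#   syslog-style:  "Aug 19 14:02:07 web01 sshd[412]: Accepted password for alice"
#   event-style:   "4625|2019-08-19T14:02:07Z|web01|Logon failure"
SYSLOG = re.compile(r"(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)")

def normalize(line, year=2019):
    """Coerce either format into one common record shape."""
    if "|" in line:  # event-style: id|timestamp|host|description
        event_id, ts, host, msg = line.split("|", 3)
        when = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        return {"time": when, "host": host, "event_id": event_id, "message": msg}
    # syslog-style lines omit the year, so the caller supplies it
    m = SYSLOG.match(line)
    when = datetime.strptime(f"{year} {m.group('ts')}",
                             "%Y %b %d %H:%M:%S").replace(tzinfo=timezone.utc)
    return {"time": when, "host": m.group("host"),
            "event_id": None, "message": m.group("msg")}
```

Once both shapes share one schema, correlating events from different sources becomes a matter of grouping on common fields such as `time` and `host`.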

Event correlation and searching are two additional major functions of many log management systems. The best of them feature a powerful search engine that allows administrators to zero in on precisely what they need. Correlation functions will automatically group related events, even if they are from different sources. How—and how successfully—different log management systems accomplish that is a major differentiating factor.

Read more

Announcing Oracle Solaris 11.4 SRU12

Filed under
OS
Server

Today we are releasing SRU 12 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

Read more

Also: Oracle Solaris 11.4 SRU12 Released - Adds GCC 9.1 Compiler & Python 3.7

Replicating Particle Collisions at CERN with Kubeflow

Filed under
Server
OSS
Ubuntu

This is where Kubeflow comes in. They started by training their 3DGAN on an on-prem OpenStack cluster with 4 GPUs. To verify that they were not introducing overhead by using Kubeflow, they ran training first with native containers, then on Kubernetes, and finally on Kubeflow using the MPI operator. They then moved to an Exoscale cluster with 32 GPUs and ran the same experiments, recording only negligible performance overhead. This was enough to convince them that they had discovered a flexible, versatile means of deploying their models to a wide variety of physical environments.

Beyond the portability that they gained from Kubeflow, they were especially pleased with how straightforward it was to run their code. As part of the infrastructure team, Ricardo plugged Sofia’s existing Docker image into Kubeflow’s MPI operator. Ricardo gave Sofia all the credit for building a scalable model, whereas Sofia credited Ricardo for scaling her team’s model. Thanks to components like the MPI operator, Sofia’s team can focus on building better models and Ricardo can empower other physicists to scale their own models.

Read more

Also: Issue #2019.08.19 – Kubeflow at CERN

Fedora and Red Hat: New F30 Builds, Flock Report, Servers and Package Management Domain Model

Filed under
Red Hat
Server
  • Ben Williams: F30-20190818 updated isos released.

    The Fedora Respins SIG is pleased to announce the latest release of Updated F30-20190816 Live ISOs, carrying the 5.2.8-200 kernel.

    This set of updated ISOs will save new installs a considerable amount of post-install updating (new installs of Workstation alone have 1.2 GB of updates).

    A huge thank you goes out to IRC nicks dowdle, satellite, and Southern-Gentlem for testing these ISOs.

  • Flock to Fedora 2019 Conference report

    Last week I attended the “Flock to Fedora” conference in Budapest, Hungary. It was a Fedora contributors conference where I met developers, project leaders, and GSoC interns. Below is a brief report of my attendance.

  • What salary can a sysadmin expect to earn?

    The path to reliable salary data is sometimes paved with frustration. That’s because the honest answer to a reasonable question—what should I be paid for this job?—is usually: "It depends."

    Location, experience, skill set, industry, and other factors all impact someone’s actual compensation. As a result, there’s rarely a single, agreed-upon salary for a particular job title or role.

    All of the above applies to system administrators. It’s a common, long-established IT job that spans many industries, company sizes, and other variables. While sysadmins may share some common fundamentals, it’s certainly not a one-size-fits-all position, and that’s all the more true as some sysadmin roles evolve to take on cloud, DevOps, and other responsibilities.

    What salary can you expect to earn as a sysadmin? Yeah, it depends. However, that doesn’t mean you can’t get a clear picture of what sysadmin compensation looks like, including specific numbers. This is information worth having handy if you’re a sysadmin on the job market or seeking a promotion.

    Let’s start with some good news from a compensation standpoint. Sysadmins—like other IT pros these days—are in demand.

    "In today’s business environment, companies are innovating and moving faster than ever before, and they need systems that can keep up with the pace of their projects and communications, as well as help everything run smoothly," says Robert Sutton, district president for the recruiting firm Robert Half Technology. "That’s why systems administrators are among the IT professionals who can expect to see a growing salary over the next year or so."

  • Run Mixed IT Efficiently, The Adient – SUSE Way.

    When you have multiple distributions, such as Red Hat and SUSE, you can reduce administration complexity and save administration time and resources with a common management tool. Adient had applications running on both SUSE Linux Enterprise Server and Red Hat Enterprise Linux. Adient deployed SUSE Manager to manage their Mixed IT environment involving both distributions.

  • Package Management Domain Model

    When I wrote this model, we were trying to unify a few different sorts of packages. Coming from Spacewalk, part of the team was used to working on RPMs, with the RPM database for storage and Yum as the mechanism for fetching them. The other part of the team was coming from the JBoss side, working with JAR, WAR, EAR and associated files, with Ivy or Maven for building and fetching them.

    We were working within the context of the Red Hat Network (as it was then called) for delivering content to subscribers. Thus, we had the concept of Errata, Channels, and Entitlements which are somewhat different from what other organizations call these things, but the concepts should be general enough to cover a range of systems.

    There are many gaps in this diagram. It does not discuss the building of packages, nor the relationship between source and binary packages. It also does not provide a way to distinguish between the package storage system and the package fetch mechanism.

    But the bones are solid. I’ve used this diagram for a few years, and it is useful.

Apache: Self Assessment and Security

Filed under
Server
OSS
  • The Apache® Software Foundation Announces Annual Report for 2019 Fiscal Year

    The Apache® Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of the annual report for its 2019 fiscal year, which ended 30 April 2019.

  • Open Source at the ASF: A Year in Numbers

    332 active projects, 71 million lines of code changed, 7,000+ committers…

    The Apache Software Foundation has published its annual report for fiscal 2019. The hub of a sprawling, influential open source community, the ASF remains in rude good health, despite challenges this year including the need for “an outsized amount of effort” dealing with trademark infringements, and “some in the tech industry trying to exploit the goodwill earned by the larger Open Source community.”

    [...]

    The ASF names 10 “platinum” sponsors: AWS, Cloudera, Comcast, Facebook, Google, LeaseWeb, Microsoft, the Pineapple Fund, Tencent Cloud, and Verizon Media.

  • Apache Software Foundation Is Worth $20 Billion

    Yes, Apache is worth $20 billion by its own valuation of the software it offers for free. But what price can you realistically put on open source code?

    If you only know the name Apache in connection with the web server, then you are missing out on some interesting software. The Apache Software Foundation (ASF) grew out of the Apache HTTP Server project in 1999 with the aim of furthering open source software. It provides a licence (the Apache licence) and decentralized governance, and requires projects to license their code to the ASF so that it can protect the intellectual property rights.

  • Apache Security Advisories Red Flag Wrong Versions in Patching Gaffe

    Researchers have pinpointed errors in two dozen Apache Struts security advisories, which warn users of vulnerabilities in the popular open-source web app development framework. They say that the security advisories listed incorrect versions impacted by the vulnerabilities.

    The concern from this research is that security administrators in companies using the actual impacted versions would incorrectly think that their versions weren’t affected – and would thus refrain from applying patches, said researchers with Synopsys who made the discovery, Thursday.

    “The real question here from this research is whether there remain unpatched versions of the newly disclosed versions in production scenarios,” Tim Mackey, principal security strategist for the Cybersecurity Research Center at Synopsys, told Threatpost. “In all cases, the Struts community had already issued patches for the vulnerabilities so the patches exist, it’s just a question of applying them.”

Cockpit and the evolution of the Web User Interface

Filed under
Server

This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the cockpit-ostree module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).

Read more

Server: Managing GNU/Linux Servers and Cost of Micro-services Complexity

Filed under
Server
  • Keeping track of Linux users: When do they log in and for how long?

    The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the /var/log/wtmp file, which maintains details on user logins, can be time-consuming, but with a couple of easy commands you can extract a lot of useful information on user logins.
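The wtmp file is binary, so admins usually reach for `last` or `ac`, which read it directly. As an illustration of what those commands are parsing, here is a minimal Python sketch that decodes wtmp records; it assumes the glibc x86_64 record layout (384 bytes per entry), which differs on other platforms:

```python
import struct
from datetime import datetime, timezone

# One wtmp record as laid out by glibc on x86_64 Linux (384 bytes).
# This layout is an assumption for illustration -- it varies by platform,
# which is why `last` and `ac` are the portable tools for this job.
WTMP_FMT = "<hxxi32s4s32s256shhiii4i20x"
WTMP_SIZE = struct.calcsize(WTMP_FMT)  # 384
USER_PROCESS = 7                       # ut_type value for an interactive login

def logins(raw):
    """Yield (user, tty, host, login_time) for each login record in raw bytes."""
    for off in range(0, len(raw) - WTMP_SIZE + 1, WTMP_SIZE):
        (ut_type, _pid, line, _id, user, host, _term, _exit,
         _session, tv_sec, _tv_usec, *_addr) = struct.unpack_from(WTMP_FMT, raw, off)
        if ut_type == USER_PROCESS:
            yield (user.rstrip(b"\0").decode(),
                   line.rstrip(b"\0").decode(),
                   host.rstrip(b"\0").decode(),
                   datetime.fromtimestamp(tv_sec, tz=timezone.utc))

# Usage against the real file:
#   with open("/var/log/wtmp", "rb") as f:
#       for user, tty, host, when in logins(f.read()):
#           print(user, tty, host, when)
```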

  • Daily user management tasks made easy for every Linux administrator

    In this article, we will be going over some tasks that a Linux administrator may need to perform daily related to user management.

  • The cost of micro-services complexity

    It has long been recognized by the security industry that complex systems are impossible to secure, and that pushing for simplicity helps increase trust by reducing assumptions and increasing our ability to audit. This is often captured under the acronym KISS, for "keep it simple, stupid", a design principle popularized by the US Navy back in the 60s. For a long time, we thought the enemy were application monoliths that burden our infrastructure with years of unpatched vulnerabilities.

    So we split them up. We took them apart. We created micro-services where each function, each logical component, is its own individual service, designed, developed, operated and monitored in complete isolation from the rest of the infrastructure. And we composed them ad vitam æternam. Want to send an email? Call the rest API of micro-service X. Want to run a batch job? Invoke lambda function Y. Want to update a database entry? Post it to A which sends an event to B consumed by C stored in D transformed by E and inserted by F. We all love micro-services architecture. It’s like watching dominoes fall down. When it works, it’s visceral. It’s when it doesn’t that things get interesting. After nearly a decade of operating them, let me share some downsides and caveats encountered in large-scale production environments.

    [...]

    And finally, there’s security. We sure love auditing micro-services, with their tiny codebases that are always neat and clean. We love reviewing their infrastructure too, with those dynamic security groups and clean dataflows and dedicated databases and IAM controlled permissions. There’s a lot of security benefits to micro-services, so we’ve been heavily advocating for them for several years now.

    And then, one day, someone gets fed up with having to manage API keys for three dozen services in flat YAML files and suggests to use oauth for service-to-service authentication. Or perhaps Jean-Kevin drank the mTLS Kool-Aid at the FoolNix conference and made a PKI prototype on the flight back (side note: do you know how hard it is to securely run a PKI over 5 or 10 years? It’s hard). Or perhaps compliance mandates that every server, no matter how small, must run a security agent on them.

Announcing Oracle Linux 7 Update 7

Filed under
GNU
Linux
Red Hat
Server

Oracle is pleased to announce the general availability of Oracle Linux 7 Update 7. Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images will soon be available for download from the Oracle Software Delivery Cloud and Docker images will soon be available via Oracle Container Registry and Docker Hub.

Read more

Also: Oracle Linux 7 Update 7 Released

Server: Kata Containers in Tumbleweed, Ubuntu on 'Multi' 'Cloud', and Containers 101

Filed under
Server
  • Kubic Project: Kata Containers now available in Tumbleweed

    Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.

    We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.

    It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.

  • Why multi-cloud has become a must-have for enterprises: six experts weigh in

    Remember the one-size-fits-all approach to cloud computing? That was five years ago. Today, multi-cloud architectures that use two, three, or more providers, across a mix of public and private platforms, are quickly becoming the preferred strategy at most companies.

    Despite the momentum, pockets of hesitation remain. Some sceptics are under the impression that deploying cloud platforms and services from multiple vendors can be a complex process. Others worry about security, regulatory, and performance issues.

  • Containers 101: Containers vs. Virtual Machines (And Why Containers Are the Future of IT Infrastructure)

    What exactly is a container and what makes it different -- and in some cases better -- than a virtual machine?

Server: Surveillance Computing, Kubernetes Ingress, MongoDB 4.2, Linux Foundation on 'DevOps'

Filed under
Server
  • Linux and Cloud Computing: Can Pigs Fly? Linux now Dominates Microsoft Azure Servers [Ed: This is not about "Linux" dominating Microsoft but Microsoft trying to dominate GNU/Linux]

    Over the last five years things have changed dramatically at Microsoft. Microsoft has embraced Linux. Earlier in the year, Sasha Levin, Microsoft Linux kernel developer, said that now more than half of the servers in Microsoft Azure are running Linux.

  • Google Cloud Adds Compute, Memory-Intensive VMs

    Google added virtual machine (VM) types on Google Compute Engine including second-generation Intel Xeon scalable processor machines and new VMs for compute- and memory-heavy applications.

  • Kubernetes Ingress

    On a similar note, if your application doesn’t serve a purpose outside the Kubernetes cluster, does it really matter whether or not your cluster is well built? Probably not.

    To give you a concrete example, let’s say we have a classic web app composed of a frontend written in Node.js and a backend written in Python which uses a MySQL database. You deploy two corresponding services on your Kubernetes cluster.

    You make a Dockerfile specifying how to package the frontend software into a container, and similarly you package your backend. Next in your Kubernetes cluster, you will deploy two services each running a set of pods behind it. The web service can talk to the database cluster and vice versa.
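To expose such an app outside the cluster, you would typically put an Ingress resource in front of those services. A minimal sketch using the networking.k8s.io/v1beta1 API current at the time; the service names, ports, and host are hypothetical, not from the article:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /              # routes to the Node.js frontend service
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api           # routes to the Python backend service
        backend:
          serviceName: backend-api
          servicePort: 8000
```

An Ingress controller (such as ingress-nginx) must be running in the cluster for this object to take effect.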

  • MongoDB 4.2 materialises with $merge operator and indexing help for unstructured data messes

    Document-oriented database MongoDB is now generally available in version 4.2 which introduces enhancements such as on-demand materialised views and wildcard indexing.

    Wildcard indexing can be useful in scenarios where unstructured, heterogeneous datasets make creating appropriate indexes hard. Admins can use the function to create a filter of sorts that matches fields, arrays, or sub-documents in a collection, and adds the hits to a sparse index.
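In the mongo shell, a wildcard index looks like the following session sketch; the collection name and projected field are hypothetical:

```javascript
// Index every field of every document in the collection (MongoDB 4.2+)
db.userData.createIndex({ "$**": 1 })

// Or limit the wildcard to one subtree via wildcardProjection
db.userData.createIndex(
  { "$**": 1 },
  { wildcardProjection: { "attributes": 1 } }
)
```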

    [...]

    Speaking of cloud, last year MongoDB decided to step away from using the GNU Affero General Public License for the Community Edition of its database and switched to an altered version. The Server-Side Public License is meant to place a condition – namely, to open source the code used to serve the software from the cloud – on offering MongoDB as a service to clients.

  • Announcing New Course: DevOps and SRE Fundamentals-Implementing Continuous Delivery

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced today that enrollment is now open for the new DevOps and SRE Fundamentals – Implementing Continuous Delivery eLearning course. The course will help an organization be more agile and deliver features rapidly, while at the same time achieving non-functional requirements such as availability, reliability, scalability, and security.

    According to Chris Aniszczyk, CTO of the Cloud Native Computing Foundation, “The rise of cloud native computing and site reliability engineering are changing the way applications are built, tested, and deployed. The past few years have seen a shift towards having Site Reliability Engineers (SREs) on staff instead of just plain old sysadmins; building familiarity with SRE principles and continuous delivery open source projects are an excellent career investment.”

More in Tux Machines

today's howtos

Databases: MariaDB, ScyllaDB, Percona, Cassandra

  • MariaDB opens US headquarters in California

    MariaDB Corporation, the database company born as a result of forking the well-known open-source MySQL database...

  • ScyllaDB takes on Amazon with new DynamoDB migration tool

    There are a lot of open-source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

  • ScyllaDB Announces Alternator, an Open Source Amazon DynamoDB-Compatible API

    ScyllaDB today announced the Alternator project, open-source software that will enable application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB. Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.

  • ScyllaDB Secures $25 Million to Open Source Amazon DynamoDB-compatible API

    Fast-growing NoSQL database company raises funds to extend operations and bring new deployment flexibility to users of Amazon DynamoDB.

  • ScyllaDB powers up Alternator: an open Amazon DynamoDB API

    Companies normally keep things pretty quiet in the run up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going. Not so with ScyllaDB; the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way. [...] Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its ‘shared-nothing’ approach (a distributed-computing architecture in which each update request is satisfied by a single node, i.e. a processor/memory/storage unit, to increase throughput and storage capacity).

  • Percona Announces Full Conference Schedule for Percona Live Open Source Database Conference Europe 2019

    The Percona Live Open Source Database Conference Europe 2019 is the premier open source database event. Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

  • Thwarting Digital Ad Fraud at Scale: An Open Source Experiment with Anomaly Detection

    Our experiment assembles Kafka, Cassandra, and our anomaly detection application in a Lambda architecture, in which Kafka and our streaming data pipeline are the speed layer, and Cassandra acts as the batch and serving layer. In this configuration, Kafka makes it possible to ingest streaming digital ad data in a fast and scalable manner, while taking a “store and forward” approach so that Kafka can serve as a buffer to protect the Cassandra database from being overwhelmed by major data surges. Cassandra’s strength is in storing high-velocity streams of ad metric data in its linearly scalable, write-optimized database. In order to handle automation for provisioning, deploying, and scaling the application, the anomaly detection experiment relies on Kubernetes on AWS EKS.

Today in Techrights

OSS Leftovers

  • Workarea Commerce Goes Open-source

    Workarea, the enterprise commerce platform, is releasing its software to the open-source community. In case you don’t already know, Workarea was built to unify commerce, content management, merchant insights, and search. It has been developed on open-source technologies such as Elasticsearch, MongoDB, and Ruby on Rails since its inception. Workarea aims to provide unparalleled services in terms of scalability and flexibility in modern cloud environments. Its platform source code and demo instructions are available on GitHub.

  • Wyoming CV Pilot develops open-source RSU monitoring system

    The team working on the US Department of Transportation’s (USDOT) Connected Vehicle Pilot Deployment Program in Wyoming have developed open-source applications for the operation and maintenance of Roadside Units (RSUs) that can be viewed by all stakeholders. The Wyoming Department of Transportation (WYDOT) Connected Vehicle Pilot implementation includes the deployment of 75 RSUs along 400 miles (644km) of I-80. With long drive times and tough winters in the state, WYDOT needed an efficient way to monitor the performance of and manage and update these units to maintain peak performance. With no suitable product readily available, the WYDOT Connected Vehicle team developed an open-source application that allows authorized transportation management center (TMC) operators to monitor and manage each RSU at the roadside. The WYDOT team found that the application can also be used as a public-facing tool that shows a high-level status report of the pilot’s equipment. [...] For other state or local agencies and departments of transportation (DOTs) wishing to deploy a similar capability to monitor and manage RSUs, the application code has been made available on the USDOT’s Open Source Application Development Portal (OSADP). The code is downloadable and can be used and customized by other agencies free of charge. WYDOT developed this capability using USDOT funds under the CV Pilot program as open-source software and associated documentation. The application represents one of six that the program will be providing during its three phases.

  • You Too Can Make These Fun Games (No Experience Necessary)

    Making a videogame remained a bucket list item until I stumbled on an incredibly simple open source web app called Bitsy. I started playing around with it, just to see how it worked. Before I knew it, I had something playable. I made my game in a couple of hours.

  • From maverick to mainstream: why open source software is now indispensable for modern business

    Free and open source software has a long and intriguing history. Some of its roots go all the way back to the 1980s when Richard Stallman first launched the GNU project.

  • Analyst Watch: Is open source the great equalizer?

    If you had told me 25 years ago that open source would be the predominant force in software development, I would’ve laughed. Back then, at my industrial software gig, we were encouraged to patent as much IP as possible, even processes that seemed like common-sense business practices, or generally useful capabilities for any software developer. If you didn’t, your nearest competitor would surely come out with their own patent claims, or inevitable patent trolls would show up demanding fees for any uncovered bit of code. We did have this one developer who was constantly talking about fiddling with his Linux kernel at home, on his personal time. Interesting hobby.

  • Scientists Create World’s First Open Source Tool for 3D Analysis of Advanced Biomaterials

    Materials scientists and programmers from the Tomsk Polytechnic University in Russia and Germany's Karlsruhe Institute of Technology have created the world’s first open source software for the 2D and 3D visualization and analysis of biomaterials used for research into tissue regeneration. [...] Scientists have already tested the software on a variety of X-ray tomography data. “The results have shown that the software we’ve created can help other scientists conducting similar studies in the analysis of the fibrous structure of any polymer scaffolds, including hybrid ones,” Surmenev emphasised.

  • Making Collaborative Data Projects Easier: Our New Tool, Collaborate, Is Here

    On Wednesday, we’re launching a beta test of a new software tool. It’s called Collaborate, and it makes it possible for multiple newsrooms to work together on data projects. Collaborations are a major part of ProPublica’s approach to journalism, and in the past few years we’ve run several large-scale collaborative projects, including Electionland and Documenting Hate. Along the way, we’ve created software to manage and share the large pools of data used by our hundreds of newsroom partners. As part of a Google News Initiative grant this year, we’ve beefed up that software and made it open source so that anybody can use it.

  • Should open-source software be the gold standard for nonprofits?

    Prior to its relaunch, nonprofit organization Cadasta had become so focused on the technology side of its work that it distracted from the needs of partners in the field. “When you’re building out a new platform, it really is all consuming,” said Cadasta CEO Amy Coughenour, reflecting on some of the decisions that were made prior to her joining the team in 2018.

  • Artificial intelligence: an open source future

    At the same time, we’re seeing an increasing number of technology companies invest in AI development. However, what’s really interesting is that these companies - including the likes of Microsoft, Salesforce and Uber - are open sourcing their AI research. This move is already enabling developers worldwide to create and improve AI & Machine Learning (ML) algorithms faster. As such, open source software has become a fundamental part of enabling fast, reliable, and also secure development in the AI space. So, why all the hype around open source AI? Why are businesses of all sizes, from industry behemoths to startups, embracing open source? And where does the future lie for AI and ML as a result?

  • How open source is accelerating innovation in AI

    By eradicating barriers like high licensing fees and talent scarcity, open source is accelerating the pace of AI innovation, writes Carmine Rimi. No other technology has captured the world’s imagination quite like AI, and there is perhaps no other that has been so disruptive. AI has already transformed the lives of people and businesses and will continue to do so in endless ways as more startups uncover its potential. According to a recent study, venture capital funding for AI startups in the UK increased by more than 200 percent last year, while a Stanford University study observed a 14-times increase in the number of AI startups worldwide in the last two years.

  • Adam Jacob Advocates for Building Healthy OSS Communities in “The War for the Soul of Open Source”

    Chef co-founder and former CTO Adam Jacob gave a short presentation at O’Reilly Open Source Software Conference (OSCON) 2019 titled “The War for the Soul of Open Source.” In his search for meaning in open source software today, Jacob confronts the notion of open source business models. “We often talk about open source business models,” he said. “There isn’t an open source business model. That’s not a thing and the reason is open source is a channel. Open source is a way that you, in a business sense, get the software out to the people, the people use the software, and then they become a channel, which [companies] eventually try to turn into money.” [...] In December 2018, Jacob launched the Sustainable Free and Open Source Communities (SFOSC) project to advocate for these ideas. Instead of focusing on protecting revenue models of OSS companies, the project’s contributors work together to collaborate on writing core principles, social contracts, and business models as guidelines for healthy OSS communities.

  • New Open Source Startups Emerge After Acquisition, IPO Flurry

    After a flurry of mega-acquisitions and initial public offerings of open source companies, a new batch of entrepreneurs are trying their hands at startups based on free software projects.

  • TC9 selected by NIST to develop Open Source Software for Transactive Energy Markets

    TC9, Inc. was selected by National Institute of Standards and Technology (NIST) to develop open source software for Transactive Energy Bilateral Markets based on the NIST Common Transactive Services. Under the contract, TC9 will develop open source software (OSS) for agents for a transactive energy market. The software will be used to model the use of transactive energy to manage power distribution within a neighborhood. Transactive Energy is a means to balance volatile supply and consumption in real time. Experts anticipate the use of Transactive Energy to support wide deployment of distributed energy resources (DER) across the power grid.

  • Open Source Software Allows Auterion to Move Drone Workflows into the Cloud

    “Until today, customizing operations in the MAVLink protocol required a deep understanding of complex subjects such as embedded systems, drone dynamics, and the C++ programming language,” said Kevin Sartori, co-founder of Auterion. “With MAVSDK, any qualified mobile developer can write high-level code for complex operations, meaning more developers will be able to build custom applications and contribute to the community.”

  • ApacheCon 2019 Keynote: James Gosling's Journey to Open Source

    At the recent ApacheCon North America 2019 in Las Vegas, James Gosling delivered a keynote talk on his personal journey to open-source. Gosling's main takeaways were: open source allows programmers to learn by reading source code, developers must pay attention to intellectual property rights to prevent abuse, and projects can take on a life of their own.

  • 20 Years of the Apache Software Foundation: ApacheCon 2019 Opening Keynote

    At the recent ApacheCon North America 2019 in Las Vegas, the opening keynote session celebrated the 20th anniversary of the Apache Software Foundation (ASF), with key themes being: the history of the ASF, a strong commitment to community and collaboration, and efforts to increase contributions from the public. The session also featured a talk by astrophysicist David Brin on the potential dangers of AI.