
Databases and Data With FOSS (Ish)

Filed under
Server
OSS
  • Redgate acquires (but commits to widening) open source Flyway

    Database development company Redgate has been to the shops.

    The Cambridge, UK-based firm has bought eggs, fresh bloomers (no, the bread kind) and, direct from the meat counter, a US$10 million portion (i.e. all of it) of cross-platform database migrations tool, Flyway.

    Redgate’s mission in life is to enable the database to be included in DevOps, whatever database its customers are working on.

  • NuoDB 4.0 beats drum for cloud-native cloud-agnosticism

    Distributed SQL database company NuoDB has reached its version 4.0 iteration… and aligned further to core open source cloud platform technologies.

  • Neo4j charts tighter grip on graph data-at-rest

    A graph database is a database designed to treat the relationships between data as being just as important as the data itself — it is intended to hold data without constricting it to a pre-defined model… instead, the data is stored showing how each individual entity connects with or is related to others.
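
    To make the idea concrete, here is a toy sketch of an edge-centric store in Python, where relationships are records in their own right rather than foreign keys bolted onto a fixed schema. The names are hypothetical; this is not Neo4j's API or storage model.

    ```python
    # Toy graph store: relationships are first-class records, not columns
    # in a pre-defined table schema.
    from collections import defaultdict

    class TinyGraph:
        def __init__(self):
            self.nodes = {}                 # node id -> properties
            self.edges = defaultdict(list)  # node id -> [(relation, other id)]

        def add_node(self, node_id, **props):
            self.nodes[node_id] = props

        def relate(self, a, relation, b):
            self.edges[a].append((relation, b))

        def neighbours(self, node_id, relation=None):
            return [b for rel, b in self.edges[node_id]
                    if relation is None or rel == relation]

    g = TinyGraph()
    g.add_node("alice", kind="person")
    g.add_node("acme", kind="company")
    g.relate("alice", "WORKS_FOR", "acme")
    print(g.neighbours("alice", "WORKS_FOR"))  # ['acme']
    ```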

  • Open source databases are not just about the licensing dollar

    The majority of organizations now use technology in ways that are quite different from ten years ago. The concept of using pre-built solutions or platforms hosted remotely is nothing new: mainframes and thin terminals dominated the enterprise from the 1970s until the arrival of the trusty desktop PC.

    The cloud has also shifted our notions of how and why we pay for technology solutions. Commercial and proprietary platforms designed to be installed on-premises were, just a few years ago, accompanied by a hefty up-front bill for licenses. Today, paying per seat, per gigabyte throughput, or even per processor cycle, is becoming standard.

  • Cloudera Update: Open source route seeks to keep big data alive

Servers: Capsule8, SUSE and More

Filed under
Server
  • Capsule8 Announces New Investigations Capability
  • Capsule8 'Investigations' To Provide More Proactive Prevention for Linux-Based Environments

    Brooklyn, N.Y.-based Capsule8 today announced new "full endpoint detection and response (EDR)-like investigations functionality for cloud workloads"...

  • A Pen Plotter Powered by Artificial Intelligence

    As you can see, processing a picture of this size, which contains only one short mathematical question, takes around 11 minutes. It is very likely that the total time could be cut by 50 percent if the code were changed to send the text detection job to the 'cloud' instead of to the native Raspberry Pi 3, or if you used a Raspberry Pi 3 with Neural Compute Stick(s) to accelerate the inference. But this assumption would still have to be proven.

  • From 30 to 230 docker containers per host

    In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

    Initial orchestration was done with Swarm; later on we moved to Nomad. Access was initially fronted by nginx, with consul-template generating the config. When that did not scale anymore, nginx was replaced by Traefik. Service discovery is managed by Consul. Log shipping was initially handled by logspout in a container; later on we switched to Filebeat. Log transformation is handled by Logstash. All of this is running on Debian GNU/Linux with docker-ce.

    At some point it did not make sense anymore to use VMs. We have no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled on HPE DL360 Gen10 servers with 24 physical cores and 128GB of RAM.

Red Hat/IBM: EPEL, Ceph, OpenShift and Call for Code Challenge

Filed under
Red Hat
Server
  • Kevin Fenzi: epel8-playground

    We have been working away at getting epel8 ready (short status: we have builds, we are building fedpkg and Bodhi and all the other tools maintainers need to deal with packages, and we hope to have some composes next week), and I would like to introduce a new thing we are trying with epel8: the epel8-playground.

    epel8-playground is another branch for all epel8 packages. By default, when a package is set up for epel8, both branches are made, and when maintainers do builds in the epel8 branch, fedpkg will build for _both_ epel8 and epel8-playground. epel8 will use the Bodhi updates system with an updates-testing and a stable repo. epel8-playground will compose every night and use only one repo.

  • Red Hat OpenStack Platform with Red Hat Ceph Storage: MySQL Database Performance on Ceph RBD

    In Part 1 of this series, we detailed the hardware and software architecture of our testing lab, as well as benchmarking methodology and Ceph cluster baseline performance. In this post, we'll take our benchmarking to the next level by drilling down into the performance evaluation of MySQL database workloads running on top of Red Hat OpenStack Platform backed by persistent block storage using Red Hat Ceph Storage.

  • OpenShift Persistent Storage with a Spring Boot Example

    One of the great things about Red Hat OpenShift is the ability to develop both Cloud Native and traditional applications. Oftentimes, when thinking about traditional applications, the first thing that comes to mind is the ability to store things on the file system. This could be media, metadata, or any type of content that your application relies on but isn't stored in a database or other system.

    To illustrate the concept of persistent storage (i.e. storage that will persist even when a container is stopped or recreated), I created a sample application for tracking the electronic books that I have in PDF format. The library of PDF files can be stored on the file system, and the application relies on this media directory to present the titles to the user. The application is written in Java using the Spring Boot framework and scans the media directory for PDF files. Once a suitable title is found, the application generates a thumbnail image of the book and also determines how many pages it contains.
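
    The scanning logic described can be sketched in a few lines; here it is in Python using the pypdf library for brevity (the actual application is Java/Spring Boot, and the media path below is a placeholder for the mounted persistent volume):

    ```python
    # Sketch: scan a media directory for PDFs and report page counts,
    # mirroring the library-scanning behaviour described above.
    from pathlib import Path

    from pypdf import PdfReader  # third-party: pip install pypdf

    MEDIA_DIR = Path("/var/media/books")  # placeholder for the persistent volume mount

    def scan_library(media_dir):
        books = []
        for pdf_path in sorted(media_dir.glob("*.pdf")):
            reader = PdfReader(str(pdf_path))
            books.append({"title": pdf_path.stem, "pages": len(reader.pages)})
        return books

    if __name__ == "__main__":
        for book in scan_library(MEDIA_DIR):
            print(f"{book['title']}: {book['pages']} pages")
    ```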

  • IBM and Linux Foundation Call on Developers to Make Natural Disasters Less Deadly

    On a stormy Tuesday in July, a group of 30 young programmers gathered in New York City to take on natural disasters. The attendees—most of whom were current college students and alumnae of the nonprofit Girls Who Code—had signed up for a six-hour hackathon in the middle of summer break.

    Flash floods broke out across the city, but the atmosphere in the conference room remained upbeat. The hackathon was hosted in the downtown office of IBM as one of the final events in this year’s Call for Code challenge, a global competition sponsored by IBM and the Linux Foundation. The challenge focuses on using technology to assist survivors of catastrophes including tropical storms, fires, and earthquakes.

    Recent satellite hackathon events in the 2019 competition have recruited developers in Cairo to address Egypt’s national water shortage; in Paris to brainstorm AI solutions for rebuilding the Notre Dame cathedral; and in Bayamón, Puerto Rico, to improve resilience in the face of future hurricanes.

    Those whose proposals follow Call for Code’s guidelines are encouraged to submit to the annual international contest for a chance to win IBM membership and Linux tech support, meetings with potential mentors and investors, and a cash prize of US $200,000. But anyone who attends one of these optional satellite events also earns another reward: the chance to poke around inside the most prized software of the Call for Code program’s corporate partners.

SUSE and IBM/Red Hat Leftovers

Filed under
Red Hat
Server
SUSE
  • No More Sleepless Nights and Long Weekends Doing Maintenance

    Datacenter maintenance – you dread it, right? Staying up all night to make sure everything runs smoothly and nothing crashes, or possibly losing an entire weekend to maintenance if something goes wrong. Managing your datacenter can be a real drag. But it doesn’t have to be that way.

    At SUSECON 2019, Raine and Stephen discussed how SUSE can help ease your pain with SUSE Manager, a little Salt and a few best practices for datacenter management and automation.

  • Fedora Has Formed A Minimization Team To Work On Shrinking Packaged Software

    The newest initiative within the Fedora camp is a "Minimization Team" seeking to reduce the size of packaged applications, run-times, and other software available on Fedora Linux.

    The hope of the Fedora Minimization Team is that its work can lead to smaller containers, eliminate package dependencies where they are not necessary, and reduce the patching footprint.

  • DevNation Live: Easily secure your cloud-native microservices with Keycloak

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, you'll learn about Keycloak from Sébastien Blanc, Principal Software Engineer at Red Hat.

    This tutorial will demonstrate how Keycloak can help you secure your microservices. Regardless of whether it’s a Node.js REST Endpoint, a PHP app, or a Quarkus service, Keycloak is completely agnostic of the technology being used by your services. Learn how to obtain a JWT token and how to propagate this token between your different secured services. We will also explain how to add fine-grained authorizations to these services.
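
    For a feel for that flow, here is a minimal Python sketch of obtaining a token from Keycloak and propagating it to a secured service. The host, realm, client, and credentials are placeholders; recent Keycloak versions serve tokens at /realms/<realm>/protocol/openid-connect/token, while older versions prefix the path with /auth.

    ```python
    # Sketch: obtain a JWT from Keycloak's token endpoint and propagate it
    # to a downstream service as a Bearer header. All names are placeholders.
    import requests

    KEYCLOAK = "https://keycloak.example.com"  # placeholder host
    TOKEN_URL = f"{KEYCLOAK}/realms/demo/protocol/openid-connect/token"

    resp = requests.post(TOKEN_URL, data={
        "grant_type": "password",  # direct access grant, for demo purposes only
        "client_id": "my-app",     # placeholder client
        "username": "jdoe",
        "password": "secret",
    })
    resp.raise_for_status()
    access_token = resp.json()["access_token"]  # the JWT

    # Propagate the same token to a secured downstream service.
    svc = requests.get("https://api.example.com/orders",
                       headers={"Authorization": f"Bearer {access_token}"})
    print(svc.status_code)
    ```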

Server: 'Cloud', virtualisation and IBM/Red Hat

Filed under
Server
  • Cloud Native Applications in AWS supporting Hybrid Cloud – Part 1

    Let us first talk about what cloud native means and about the benefits of SUSE Cloud Application Platform and AWS when building cloud native applications.

  • Cloud Native Applications in AWS supporting Hybrid Cloud – Part 2

    In my previous post, I wrote about using SUSE Cloud Application Platform on AWS for cloud native application delivery. In this follow-up, I'll discuss two ways to get SUSE Cloud Application Platform installed on AWS and configure the service broker:

  • 10 Top Data Virtualization Tools

    With the continuing expansion of data mining by enterprises, it's no longer possible or advisable for an organization to keep all data in a single location or silo. Yet having disparate data analytics stores of both structured and unstructured data, as well as Big Data, can be complex and seemingly chaotic.

    Data virtualization is one increasingly common approach for dealing with the challenge of ever-expanding data. Data virtualization integrates data from disparate big data software and data warehouses, among other sources, without copying or moving the data. Most helpfully, it provides users with a single virtual layer that spans multiple applications, formats, and physical locations, making data more useful and easier to manage.

  • Running MongoDB with OCS3 and using different types of AWS storage options (part 3)

    In the previous post I explained how to performance test MongoDB pods on Red Hat OpenShift with OpenShift Container Storage 3 volumes as the persistent storage layer and Yahoo! Cloud System Benchmark (YCSB) as the workload generator.

    The cluster I've used in the prior posts was based on the AWS EC2 m5 instance series and used EBS storage of type gp2. In this blog I will compare these results with a similar cluster that is based on the AWS EC2 i3 instance family, which uses locally attached storage (sometimes referred to as "instance storage" or "local instance store").

  • OpenShift 4.1 Bare Metal Install Quickstart

    In this blog we will go over how to get you up and running with a Red Hat OpenShift 4.1 Bare Metal install on pre-existing infrastructure. Although this quickstart focuses on the bare metal installer, this can also be seen as a "manual" way to install OpenShift 4.1. Moreover, this is also applicable to installing on any platform which doesn't have the ability to provide ignition pre-boot. For more information about using this generic approach to install on untested platforms, please see this knowledge base article.

Continuous Integration/Continuous Deployment with FOSS Tools

Filed under
Development
Server

One of the hottest topics within the DevOps space is Continuous Integration and Continuous Deployment (CI/CD). This attention has drawn lots of investment dollars, and a vast array of proprietary Software as a Service (SaaS) tools has been created in the CI/CD space, which traditionally has been dominated by free open-source software (FOSS) tools. Is FOSS still the right choice with the low cost of many of these SaaS options?

It depends. In many cases, the cost of self-hosting these FOSS tools will be greater than the cost to use a non-FOSS SaaS option. However, even in today's cloud-centric and SaaS-saturated world, you may have good reasons to self-host FOSS. Whatever those reasons may be, just don't forget that "Free" isn't free when it comes to keeping a service running reliably 24/7/365. If you're looking at FOSS as a means to save money, make sure you account for those costs.

Even with those costs accounted for, FOSS still delivers a lot of value, especially to small and medium-sized organizations that are taking their first steps into DevOps and CI/CD. Starting with a commercialized FOSS product is a great middle ground. It gives a smooth growth path into the more advanced proprietary features, allowing you to pay for those only once you need them. Often called Open Core, this approach isn't universally loved, but when applied well, it has allowed for a lot of value to be created for everyone involved.


Servers ('Cloud'), IBM, and Fedora

Filed under
Red Hat
Server
  • Is the cloud right for you?

    Corey Quinn opened his lightning talk at the 17th annual Southern California Linux Expo (SCaLE 17x) with an apology. Corey is a cloud economist at The Duckbill Group, writes Last Week in AWS, and hosts the Screaming in the Cloud podcast. He's also a funny and engaging speaker. Enjoy his video, "The cloud is a scam," to learn why he wants to apologize and how to find out if the cloud is right for you.

  • Google Cloud to offer VMware data-center tools natively

    Google this week said it would for the first time natively support VMware workloads in its Cloud service, giving customers more options for deploying enterprise applications.

    The hybrid cloud service, called Google Cloud VMware Solution by CloudSimple, will use VMware software-defined data center (SDDC) technologies including VMware vSphere, NSX and vSAN software deployed on a platform administered by CloudSimple for GCP.

  • Get started with reactive programming with creative Coderland tutorials

    The Reactica roller coaster is the latest addition to Coderland, our fictitious amusement park for developers. It illustrates the power of reactive computing, an important architecture for working with groups of microservices that exchange asynchronous data with each other.

    In this scenario, we need to build a web app to display the constantly updated wait time for the coaster.
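
    The reactive idea is to recompute the derived value whenever new data arrives, rather than polling on a timer. A minimal asyncio sketch of that pattern follows; the ride parameters are invented, and this is illustrative only, not the Coderland application's code.

    ```python
    # Sketch: the estimated wait time is recomputed reactively as queue
    # events stream in, instead of being polled on a fixed schedule.
    import asyncio
    import random

    RIDE_DURATION_S = 90  # assumed seconds per coaster run
    RIDERS_PER_RUN = 4    # assumed riders per run

    async def queue_events(n=10):
        """Simulated asynchronous stream of queue-length updates."""
        length = 0
        for _ in range(n):
            length = max(0, length + random.randint(-2, 5))
            yield length
            await asyncio.sleep(0.2)

    async def wait_time_display():
        # React to each event as it arrives; no polling loop needed.
        async for queue_len in queue_events():
            runs_needed = -(-queue_len // RIDERS_PER_RUN)  # ceiling division
            print(f"queue={queue_len:3d}  estimated wait ~{runs_needed * RIDE_DURATION_S}s")

    asyncio.run(wait_time_display())
    ```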

  • Fedora Has Deferred Its Decision On Stopping Modular/Everything i686 Repositories

    The recent proposal to drop Fedora's Modular and Everything repositories for the upcoming Fedora 31 release is yet to be decided after it was deferred at this week's Fedora Engineering and Steering Committee (FESCo) meeting.

    The proposal is about ending the i686 Modular and Everything repositories beginning with the Fedora 31 cycle later this year. This isn't about ending multi-lib support, though: 32-bit packages will continue to work on Fedora x86_64 installations. But as is the trend now, if you are still running a pure i686 (32-bit x86) Linux distribution, your days are numbered. Separately, Fedora is already looking to drop its i686 kernels moving forward, and it is not the only Linux distribution pushing for the long overdue retirement of x86 32-bit operating system support.

Servers: Twitter Moves to Kubernetes, Red Hat/IBM News and Tips

Filed under
Red Hat
Server
  • Twitter Announced Switch from Mesos to Kubernetes

    On the 2nd of May at 7:00 PM (PST), Twitter held a technical release conference and meetup at its headquarters in San Francisco. At the conference, David McLaughlin, Product and Technical Head of Twitter Computing Platform, announced that Twitter's infrastructure would completely switch from Mesos to Kubernetes.

    For a bit of background history, Mesos was released in 2009, and Twitter was one of the early companies to support and use it. As one of the most successful social media giants in the world, Twitter has received much attention due to its large production cluster scale (having tens of thousands of nodes). In 2010, Twitter started to develop the Aurora project on top of Mesos to make it more convenient to manage both its online and offline business as it gradually adopted Mesos.

  • Linux Ending Support for the Floppy Drive, Unity 2019.2 Launches Today, Purism Unveils Final Librem 5 Smartphone Specs, First Kernel Security Update for Debian 10 "Buster" Is Out, and Twitter Is Switching from Mesos to Kubernetes

    Twitter is switching from Mesos to Kubernetes. Zhang Lei, Senior Technical Expert on Alibaba Cloud Container Platform and Co-maintainer of Kubernetes Project, writes "with the popularity of cloud computing and the rise of cloud-based containerized infrastructure projects like Kubernetes, this traditional Internet infrastructure starts to show its age—being a much less efficient solution compared with that of Kubernetes". See Zhang's post for some background history and more details on the move.

  • Three ways automation can help service providers digitally transform

    As telecommunication service providers (SPs) look to stave off competitive threats from over the top (OTT) providers, they are digitally transforming their operations to greatly enhance customer experience and relevance by automating their networks, applying security, and leveraging infrastructure management. According to EY’s "Digital transformation for 2020 and beyond" study, process automation can help smooth the path for SP IT teams to reach their goals, with 71 percent of respondents citing process automation as "most important to [their] organization’s long-term operational excellence."

    There are thousands of virtual and physical devices that comprise business, consumer, and mobile services in an SP’s environment, and automation can help facilitate and accelerate the delivery of those services.

    [...]

    Some SPs are turning to Ansible and other tools to embark on their automation journey. Red Hat Ansible Automation, including Red Hat Ansible Engine and Red Hat Ansible Tower, simplifies software-defined infrastructure deployment and management, operations, and business processes to help SPs more effectively deliver consumer, business, and mobile services.

    Red Hat Process Automation Manager (formerly Red Hat JBoss BPM Suite) combines business process management, business rules management, business resource optimization, and complex event processing technologies in a platform that also includes tools for creating user interfaces and decision services. 

  • Deploy your API from a Jenkins Pipeline

    In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we discovered the main steps required to deploy your API from a CI/CD pipeline, and this can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.

  • How to set up Red Hat CodeReady Studio 12: Process automation tooling

    The release of the latest Red Hat developer suite version 12 included a name change from Red Hat JBoss Developer Studio to Red Hat CodeReady Studio. The focus here is not on the Red Hat CodeReady Workspaces, a cloud and container development experience, but on the locally installed developers studio. Given that, you might have questions about how to get started with the various Red Hat integration, data, and process automation product toolsets that are not installed out of the box.

    In this series of articles, we’ll show how to install each set of tools and explain the various products they support. We hope these tips will help you make informed decisions about the tooling you might want to use on your next development project.

Kubernetes News/Views

Filed under
Server
  • Cloud Foundry and Kubernetes – The Blending Continues [Ed: Cloud Foundry Foundation dominated by proprietary software firms]

    At the recent Cloud Foundry Summit in Philadelphia, Troy Topnik of SUSE participated in the latest iteration of a panel discussing how the community continues to blend Cloud Foundry and Kubernetes. There is some interesting and insightful discussion between members of the panel from Google, IBM, Microsoft, Pivotal, SAP, and Swarna Podila of the Cloud Foundry Foundation.

    Cloud Foundry Foundation has posted all recorded talks from CF Summit on YouTube.

  • Don’t Throw Your Kubernetes Away

    The adoption of Kubernetes is growing at an unprecedented rate. Companies of all sizes are running it in production. Almost all of these companies were early adopters of Kubernetes, with different dev teams bringing it into the organization.

    Kubernetes is a very engineer-driven technology. Unlike virtualization and other infrastructure components, which are managed by a central IT team that offers them to different development groups, Kubernetes is something that developers bring into the organization.

  • Issue #2019.07.29 – Kubeflow Releases so far (0.5, 0.4, 0.3)

    Kubeflow 0.5 simplifies model development with enhanced UI and Fairing library – The 2019 Q1 release of Kubeflow goes broader and deeper with release 0.5. Give your Jupyter notebooks a boost with the redesigned notebook app. Get nerdy with the new kfctl command line tool. Power to the people – use your favourite python IDE and send your model to a Kubeflow cluster using the Fairing python library. More training tools added as well, with an example of XGBoost and Fairing.

Server: IBM, Amazon, Elastic, Cloudera and YugaByte

Filed under
Server
  • IBM CTO: ‘Open Tech Is Our Cloud Strategy’

    IBM may not be as splashy as some of the other tech giants that make big code contributions to open source. But as Chris Ferris, CTO for open technology at IBM says, “we’ve been involved in open source before open source was cool.”

    By Ferris' estimation, IBM ranks among the top three contributors in terms of code commits to open source projects and contributors to the various open source communities. "It's really significant," he said. "We don't run around with the vanity metrics the way some others do, but it's really important to us."

  • TurboSched Is A New Linux Scheduler Focused On Maximizing Turbo Frequency Usage

    TurboSched is a new Linux kernel scheduler that's been in development by IBM for maximizing use of turbo frequencies for the longest possible periods of time. Rather than this scheduler trying to balance the load across all available CPU cores, it tries to keep the priority tasks on a select group of cores while aiming to keep the other cores idle in order to allow for the power allowance to be used by those few turbo-capable cores with the high priority work.

    TurboSched aims to keep low-utilization tasks on already active cores, as opposed to waking up new cores from their idle/power-saving states. This allows the busiest CPU cores to stay in their turbo state for longer, while saving power by not waking up extra cores for brief periods of time when handling various background/jitter tasks.
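
    The packing idea can be illustrated with a toy placement policy (a simplification for intuition only, not TurboSched's kernel implementation):

    ```python
    # Toy illustration of task packing: place low-utilization tasks on
    # already-busy cores so idle cores stay in deep power-saving states,
    # preserving power/thermal headroom for turbo on the busy cores.
    CORE_CAPACITY = 100  # arbitrary utilization units per core

    def pack_small_tasks(tasks, n_cores):
        cores = [0] * n_cores
        for util in tasks:
            fitting = [i for i, load in enumerate(cores)
                       if 0 < load and load + util <= CORE_CAPACITY]
            if fitting:
                # Pack onto the busiest core that still has room.
                target = max(fitting, key=lambda i: cores[i])
            elif 0 in cores:
                target = cores.index(0)  # no busy core fits: wake an idle one
            else:
                target = min(range(n_cores), key=lambda i: cores[i])
            cores[target] += util
        return cores

    # Eight small background tasks on four cores: packing leaves three
    # cores fully idle instead of spreading load across all of them.
    print(pack_small_tasks([10, 5, 15, 10, 5, 10, 5, 10], n_cores=4))  # [70, 0, 0, 0]
    ```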

  • AWS Turbocharges new Linux Kernel Releases in its Extras Catalogue

    Amazon says it has added AWS-optimised variants of new Linux Kernel releases to its extras catalogue in Amazon Linux 2 – a Linux server operating system (OS) – saying the boost results in higher bandwidth with lower latency on smaller instance types.

    Amazon Linux is an OS distribution supported and updated by AWS and made available for use with Elastic Compute Cloud (EC2) instances. Amazon Linux users will now be able to update the operating system to Linux Kernel 4.19, as released in October 2018.

  • Elastic Cloud Enterprise 2.3 turns admins into bouncers

    Version 2.3 of Elastic Cloud Enterprise (ECE) is now available for download, finally bringing role-based access control (RBAC) to its general user base and letting admins decide who gets to see what. ECE allows the deployment of Elastic’s search-based software as a service offerings on a company’s infrastructure of choice (public cloud, private cloud, virtual machines, bare metal).

    The new version is the first to come with four pre-configured roles to help admins control deployment access and management privileges. This is only the first step in the product’s RBAC journey, though. Customisable deployment-level permissions and greater abilities to separate users by teams are on the ECE roadmap.

  • Cloudera open source route seeks to keep big data alive

    Cloudera has had a busy 2019. The vendor started off the year by merging with its primary rival Hortonworks to create a new Hadoop big data juggernaut. However, in the ensuing months, the newly merged company has faced challenges as revenue has come under pressure and the Hadoop market overall has shown signs of weakness.

    Against that backdrop, Cloudera said July 10 that it would be changing its licensing model, taking a fully open source approach. The Cloudera open source route is a new strategy for the vendor. In the past, Cloudera had supported and contributed to open source projects as part of the larger Hadoop ecosystem but had kept its high-end product portfolio under commercial licenses.

    The new open source approach is an attempt to emulate the success that enterprise Linux vendor Red Hat has achieved with its open source model. Red Hat was acquired by IBM for $34 billion in a deal that closed in July. In the Red Hat model, the code is all free and organizations pay a subscription fee for support services.

  • YugaByte goes 100% open under Apache

    Open source distributed SQL database company YugaByte has confirmed that its eponymously named YugaByte DB is now 100 percent open source under the Apache 2.0 license.

    The additional homage to open source-ness means that previously commercial features now move into the open source core.

    YugaByte says it hopes that this will directly create more opportunities for open collaboration between users, who will have their hands on 100% open tools.


More in Tux Machines

today's howtos

Databases: MariaDB, ScyllaDB, Percona, Cassandra

  • MariaDB opens US headquarters in California

    MariaDB Corporation, the database company born as a result of forking the well-known open-source MySQL database...

  • ScyllaDB takes on Amazon with new DynamoDB migration tool

    There are a lot of open-source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

  • ScyllaDB Announces Alternator, an Open Source Amazon DynamoDB-Compatible API

    ScyllaDB today announced the Alternator project, open-source software that will enable application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB. Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.
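
    API-level compatibility means a stock DynamoDB client can simply be pointed at a Scylla cluster instead of AWS. A hedged Python sketch with boto3 follows; the endpoint host and port are placeholders, and the table (with ad_id as its partition key) is assumed to already exist:

    ```python
    # Sketch: use the standard DynamoDB SDK against a Scylla Alternator
    # endpoint. Endpoint, credentials, and table name are placeholders.
    import boto3

    dynamodb = boto3.resource(
        "dynamodb",
        endpoint_url="http://scylla-node:8000",  # placeholder Alternator endpoint
        region_name="none",                      # required by boto3, ignored here
        aws_access_key_id="fake",
        aws_secret_access_key="fake",
    )

    table = dynamodb.Table("ads")                      # assumed existing table
    table.put_item(Item={"ad_id": "42", "clicks": 7})
    print(table.get_item(Key={"ad_id": "42"})["Item"])  # prints the stored item
    ```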

  • ScyllaDB Secures $25 Million to Open Source Amazon DynamoDB-compatible API

    Fast-growing NoSQL database company raises funds to extend operations and bring new deployment flexibility to users of Amazon DynamoDB.

  • ScyllaDB powers up Alternator: an open Amazon DynamoDB API

    Companies normally keep things pretty quiet in the run up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going. Not so with ScyllaDB: the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way. [...] Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its 'shared-nothing' approach (a distributed-computing architecture in which each update request is satisfied by a single node, i.e. a processor/memory/storage unit, to increase throughput and storage capacity).
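
    The shared-nothing routing can be sketched in a few lines: each key maps to exactly one owning node, so an update touches a single processor/memory/storage unit. This is a toy hash placement, not Scylla's actual token-ring implementation.

    ```python
    # Toy shared-nothing routing: every key has exactly one owner node.
    import hashlib

    NODES = ["node-a", "node-b", "node-c"]

    def owner(key):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return NODES[digest % len(NODES)]

    # Every request for this key is served by the same single node.
    print(owner("user:42"))
    ```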

  • Percona Announces Full Conference Schedule for Percona Live Open Source Database Conference Europe 2019

    The Percona Live Open Source Database Conference Europe 2019 is the premier open source database event. Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

  • Thwarting Digital Ad Fraud at Scale: An Open Source Experiment with Anomaly Detection

    Our experiment assembles Kafka, Cassandra, and our anomaly detection application in a Lambda architecture, in which Kafka and our streaming data pipeline are the speed layer, and Cassandra acts as the batch and serving layer. In this configuration, Kafka makes it possible to ingest streaming digital ad data in a fast and scalable manner, while taking a “store and forward” approach so that Kafka can serve as a buffer to protect the Cassandra database from being overwhelmed by major data surges. Cassandra’s strength is in storing high-velocity streams of ad metric data in its linearly scalable, write-optimized database. In order to handle automation for provisioning, deploying, and scaling the application, the anomaly detection experiment relies on Kubernetes on AWS EKS.
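
    A stripped-down Python sketch of that speed-layer wiring, assuming the kafka-python and cassandra-driver packages; the topic, keyspace, table, and threshold rule are placeholders standing in for the real detector:

    ```python
    # Sketch of the speed layer: consume ad-metric events from Kafka, flag
    # simple anomalies, and persist the rows into Cassandra.
    import json

    from cassandra.cluster import Cluster  # pip install cassandra-driver
    from kafka import KafkaConsumer        # pip install kafka-python

    consumer = KafkaConsumer(
        "ad-metrics",                       # placeholder topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    session = Cluster(["127.0.0.1"]).connect("ads")  # placeholder keyspace
    insert = session.prepare(
        "INSERT INTO impressions (ad_id, ts, clicks, anomalous) VALUES (?, ?, ?, ?)"
    )

    CLICK_THRESHOLD = 1000  # naive rule standing in for the real anomaly detector

    for msg in consumer:  # Kafka buffers surges so Cassandra is not overwhelmed
        event = msg.value
        anomalous = event["clicks"] > CLICK_THRESHOLD
        session.execute(insert, (event["ad_id"], event["ts"], event["clicks"], anomalous))
    ```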

Today in Techrights

OSS Leftovers

  • Workarea Commerce Goes Open-source

    The enterprise commerce platform Workarea is releasing its software to the open-source community. In case you don't already know, Workarea was built to unify commerce, content management, merchant insights, and search. It has been developed on open-source technologies like Elasticsearch, MongoDB, and Ruby on Rails since its inception. Workarea aims to provide unparalleled services in terms of scalability and flexibility in modern cloud environments. Its platform source code and demo instructions are available on GitHub.

  • Wyoming CV Pilot develops open-source RSU monitoring system

    The team working on the US Department of Transportation's (USDOT) Connected Vehicle Pilot Deployment Program in Wyoming have developed open-source applications for the operation and maintenance of Roadside Units (RSUs) that can be viewed by all stakeholders. The Wyoming Department of Transportation (WYDOT) Connected Vehicle Pilot implementation includes the deployment of 75 RSUs along 400 miles (644km) of I-80.

    With long drive times and tough winters in the state, WYDOT needed an efficient way to monitor the performance of and manage and update these units to maintain peak performance. With no suitable product readily available, the WYDOT Connected Vehicle team developed an open-source application that allows authorized transportation management center (TMC) operators to monitor and manage each RSU at the roadside. The WYDOT team found that the application can also be used as a public-facing tool that shows a high-level status report of the pilot's equipment.

    [...]

    For other state or local agencies and departments of transportation (DOTs) wishing to deploy a similar capability to monitor and manage RSUs, the application code has been made available on the USDOT's Open Source Application Development Portal (OSADP). The code is downloadable and can be used and customized by other agencies free of charge. WYDOT developed this capability using USDOT funds under the CV Pilot program as open-source software and associated documentation. The application represents one of six that the program will be providing during its three phases.

  • You Too Can Make These Fun Games (No Experience Necessary)

    Making a videogame remained a bucket list item until I stumbled on an incredibly simple open source web app called Bitsy. I started playing around with it, just to see how it worked. Before I knew it, I had something playable. I made my game in a couple of hours.

  • From maverick to mainstream: why open source software is now indispensable for modern business

    Free and open source software has a long and intriguing history. Some of its roots go all the way back to the 1980s when Richard Stallman first launched the GNU project.

  • Analyst Watch: Is open source the great equalizer?

    If you had told me 25 years ago that open source would be the predominant force in software development, I would’ve laughed. Back then, at my industrial software gig, we were encouraged to patent as much IP as possible, even processes that seemed like common-sense business practices, or generally useful capabilities for any software developer. If you didn’t, your nearest competitor would surely come out with their own patent claims, or inevitable patent trolls would show up demanding fees for any uncovered bit of code. We did have this one developer who was constantly talking about fiddling with his Linux kernel at home, on his personal time. Interesting hobby.

  • Scientists Create World’s First Open Source Tool for 3D Analysis of Advanced Biomaterials

    Materials scientists and programmers from the Tomsk Polytechnic University in Russia and Germany's Karlsruhe Institute of Technology have created the world's first open source software for the 2D and 3D visualization and analysis of biomaterials used for research into tissue regeneration. [...] Scientists have already tested the software on a variety of X-ray tomography data. "The results have shown that the software we've created can help other scientists conducting similar studies in the analysis of the fibrous structure of any polymer scaffolds, including hybrid ones," Surmenev emphasised.

  • Making Collaborative Data Projects Easier: Our New Tool, Collaborate, Is Here

    On Wednesday, we're launching a beta test of a new software tool. It's called Collaborate, and it makes it possible for multiple newsrooms to work together on data projects. Collaborations are a major part of ProPublica's approach to journalism, and in the past few years we've run several large-scale collaborative projects, including Electionland and Documenting Hate. Along the way, we've created software to manage and share the large pools of data used by our hundreds of newsroom partners. As part of a Google News Initiative grant this year, we've beefed up that software and made it open source so that anybody can use it.

  • Should open-source software be the gold standard for nonprofits?

    Prior to its relaunch, nonprofit organization Cadasta had become so focused on the technology side of its work that it distracted from the needs of partners in the field. “When you’re building out a new platform, it really is all consuming,” said Cadasta CEO Amy Coughenour, reflecting on some of the decisions that were made prior to her joining the team in 2018.

  • Artificial intelligence: an open source future

    At the same time, we're seeing an increasing number of technology companies invest in AI development. However, what's really interesting is that these companies, including the likes of Microsoft, Salesforce and Uber, are open sourcing their AI research. This move is already enabling developers worldwide to create and improve AI & Machine Learning (ML) algorithms faster. As such, open source software has become a fundamental part of enabling fast, reliable, and also secure development in the AI space. So, why all the hype around open source AI? Why are businesses of all sizes, from industry behemoths to startups, embracing open source? And where does the future lie for AI and ML as a result?

  • How open source is accelerating innovation in AI

    By eradicating barriers like high licensing fees and talent scarcity, open source is accelerating the pace of AI innovation, writes Carmine Rimi. No other technology has captured the world's imagination quite like AI, and there is perhaps no other that has been so disruptive. AI has already transformed the lives of people and businesses and will continue to do so in endless ways as more startups uncover its potential. According to a recent study, venture capital funding for AI startups in the UK increased by more than 200 percent last year, while a Stanford University study observed a 14-times increase in the number of AI startups worldwide in the last two years.

  • Adam Jacob Advocates for Building Healthy OSS Communities in “The War for the Soul of Open Source”

    Chef co-founder and former CTO Adam Jacob gave a short presentation at O’Reilly Open Source Software Conference (OSCON) 2019 titled “The War for the Soul of Open Source.” In his search for meaning in open source software today, Jacob confronts the notion of open source business models. “We often talk about open source business models,” he said. “There isn’t an open source business model. That’s not a thing and the reason is open source is a channel. Open source is a way that you, in a business sense, get the software out to the people, the people use the software, and then they become a channel, which [companies] eventually try to turn into money.” [...] In December 2018, Jacob launched the Sustainable Free and Open Source Communities (SFOSC) project to advocate for these ideas. Instead of focusing on protecting revenue models of OSS companies, the project’s contributors work together to collaborate on writing core principles, social contracts, and business models as guidelines for healthy OSS communities.

  • New Open Source Startups Emerge After Acquisition, IPO Flurry

    After a flurry of mega-acquisitions and initial public offerings of open source companies, a new batch of entrepreneurs are trying their hands at startups based on free software projects.

  • TC9 selected by NIST to develop Open Source Software for Transactive Energy Markets

    TC9, Inc. was selected by the National Institute of Standards and Technology (NIST) to develop open source software for Transactive Energy Bilateral Markets based on the NIST Common Transactive Services. Under the contract, TC9 will develop open source software (OSS) agents for a transactive energy market. The software will be used to model the use of transactive energy to manage power distribution within a neighborhood. Transactive Energy is a means to balance volatile supply and consumption in real time. Experts anticipate the use of Transactive Energy to support wide deployment of distributed energy resources (DER) across the power grid.

  • Open Source Software Allows Auterion to Move Drone Workflows into the Cloud

    “Until today, customizing operations in the MAVLink protocol required a deep understanding of complex subjects such as embedded systems, drone dynamics, and the C++ programming language,” said Kevin Sartori, co-founder of Auterion. “With MAVSDK, any qualified mobile developer can write high-level code for complex operations, meaning more developers will be able to build custom applications and contribute to the community.”
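
    For a sense of what that high-level code looks like, here is a minimal MAVSDK-Python sketch. The connection address is the typical SITL simulator default; treat the exact values as illustrative and consult the MAVSDK documentation for specifics.

    ```python
    # Minimal MAVSDK-Python sketch: connect to a (simulated) drone, arm it,
    # take off, then land. Address and timings are illustrative.
    import asyncio

    from mavsdk import System  # pip install mavsdk

    async def run():
        drone = System()
        await drone.connect(system_address="udp://:14540")  # typical SITL address

        # Wait until the drone is reachable before sending commands.
        async for state in drone.core.connection_state():
            if state.is_connected:
                break

        await drone.action.arm()
        await drone.action.takeoff()
        await asyncio.sleep(10)  # hover briefly
        await drone.action.land()

    asyncio.run(run())
    ```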

  • ApacheCon 2019 Keynote: James Gosling's Journey to Open Source

    At the recent ApacheCon North America 2019 in Las Vegas, James Gosling delivered a keynote talk on his personal journey to open-source. Gosling's main takeaways were: open source allows programmers to learn by reading source code, developers must pay attention to intellectual property rights to prevent abuse, and projects can take on a life of their own.

  • 20 Years of the Apache Software Foundation: ApacheCon 2019 Opening Keynote

    At the recent ApacheCon North America 2019 in Las Vegas, the opening keynote session celebrated the 20th anniversary of the Apache Software Foundation (ASF), with key themes being: the history of the ASF, a strong commitment to community and collaboration, and efforts to increase contributions from the public. The session also featured a talk by astrophysicist David Brin on the potential dangers of AI.