
Servers: CNCF, IBM/Red Hat and More

Filed under
Server
  • Bringing Linux containers to small-footprint industrial-edge applications [Ed: Wind River trying to sell its proprietary stuff]

    Standards-compliant, cloud-native implementations based on open-source projects are possible thanks to the Cloud Native Computing Foundation (CNCF), which serves as the vendor-neutral home for many of the fastest-growing container-related projects. The foundation fosters collaboration between the industry’s top developers, end users and vendors.

    While initially deployed in enterprise IT environments, cloud-native architectures and containers provide benefits that are equally desirable for industrial, energy and medical embedded systems located at a factory, hospital or remote site. Code reusability, efficient maintenance, platform independence and optimized resource utilization are just as important for devices and applications developed by small teams working to meet aggressive schedules and deployed across multiple hardware platforms based on a variety of processor architectures.

  • Kubernetes Ecosystem Grows With Apple Support, New Release

    Organizations of all sizes are increasingly running their enterprise applications on top of cloud native infrastructure and more often than not, that infrastructure is Kubernetes.

    Kubernetes is an open source effort originally started by Google five years ago and now operated as a multi-stakeholder effort under the direction of the Cloud Native Computing Foundation (CNCF).

    The CNCF gained a major new supporter in June, with the addition of Apple as a Platinum End User Member. The platinum tier comes with a $370,000 annual fee that Apple will now fork over to the CNCF.

    Apple isn't just a consumer and user of CNCF projects such as Kubernetes; it's also an active contributor. Apple has made code contributions to the Kubernetes container orchestration project, the Envoy proxy, Helm and gRPC. Apple joining the CNCF and being an active participant is a big deal, not just for the brand recognition it brings, but also because Apple was previously a very visible user of the rival Mesos orchestration system.

  • IBM: What Red Hat Brings To The Table

    I have been bearish on IBM (NYSE:IBM) for several years. As more information is disseminated and stored over the Internet, cloud computing has become all the rage. The company has been transitioning from mainframe computing to cloud computing, and that transformation has been a long, arduous process. Its revenue has been stagnant to declining, making it difficult to recommend the stock. Its acquisition of Red Hat (RHT) for $34 billion was expensive, but it could pay dividends down the line.

  • Red Hat Reports First Quarter Results for Fiscal Year 2020
  • How Red Hat Enterprise Linux 8 can help agencies accelerate innovation in the hybrid cloud

    Almost 10 years after the Office of Management and Budget first directed agencies to begin moving to the cloud, those agencies no longer need to be told. They’ve seen the benefits, and cloud-first policies and the need to reduce datacenters are no longer the primary drivers. Experts say flexibility in application development and deployment, a better place to innovate in digital services, and enhanced security rank among the most common reasons.

    But it can be a slow process, partly due to strict government requirements. Agencies need to meet requirements for security, uptime, disaster recovery, and workload portability with their hybrid cloud infrastructures.

  • Awards roll call: Red Hat awards, March 2019 - June 2019

    As we kick off summer, we wanted to share some of the latest awards and recognition that Red Hat has received over the last few months. Since our last award roundup, Red Hat has been honored with more than 17 new accolades across our organization.

    Since our founding 26 years ago, Red Hat has grown from a single product company to the world’s leading provider of enterprise open source software solutions and the first public open source company to generate more than $3 billion in revenue. Open source has revolutionized the software industry and it continues to be the driving force behind much of the technology innovation happening today. We see the following awards as a recognition of that journey.

  • Outcome of the CPE’s team’s face-to-face

    You may remember that we recently spoke about the Community Platform Engineering (CPE) team and the problem it is facing — our workload is growing faster than the team can scale to meet it. From June 10th to June 14th, most of the CPE team members met face to face at the Red Hat office in Waterford, Ireland.

  • RPM packages explained

    Perhaps the best known way the Fedora community pursues its mission of promoting free and open source software and content is by developing the Fedora software distribution. So it’s not a surprise at all that a very large proportion of our community resources are spent on this task. This post summarizes how this software is “packaged” and the underlying tools such as rpm that make it all possible.

    [...]

    From a quick look, dnfdragora appears to provide all of dnf's main functions.

    There are other tools in Fedora that also manage packages. GNOME Software and Discover are two examples. GNOME Software is focused on graphical applications only; you can't use it to install command line or terminal tools such as htop or weechat. However, GNOME Software does support the installation of Flatpaks and Snap applications, which dnf does not. They are different tools with different target audiences, and so provide different functions.

    This post only touches the tip of the iceberg that is the life cycle of software in Fedora. This article explained what RPM packages are, and the main differences between using rpm and using dnf.
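
    To make the rpm/dnf distinction above concrete, here is a brief, illustrative command-line comparison (htop is simply the example package named earlier):

      # Query with rpm: inspect a local .rpm file or an installed package directly
      $ rpm -qip htop-*.rpm      # show metadata from a package file
      $ rpm -ql htop             # list the files owned by an installed package

      # Install with dnf, which also resolves and fetches dependencies
      # from the enabled repositories
      $ sudo dnf install htop
      $ dnf repoquery --requires htop   # show what the package depends on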

  • An Introduction to Docker and Why It’s Useful in IoT

    But that is not the complete story. Netflix did not seek the help of AWS simply because AWS has unlimited servers and data centers to offer. In fact, the huge cost of running its own physical data centers would make it rather expensive to keep monthly plans affordable for 150 million end users worldwide.

  • What +1's taught us about organizational pain at Red Hat Summit

    Open tools work best in the hands of open people. At Red Hat, we understand that tackling bold challenges—like digital transformation or application modernization—requires more than technology alone. It requires new ways of thinking, working, and problem-solving. That's why we're committed to helping organizations understand how they can embrace open principles to reshape their organizational cultures in productive and innovative ways.

    So for Red Hat Summit in Boston, MA this year, members of the Open Organization community wanted to craft a unique experience that would help attendees understand the power of working openly. We wanted to create opportunities for visitors to share their culture- and process-focused challenges with us—not only so we could learn more about what's keeping people from being as innovative, agile and engaged as they'd like to be, but also so we could connect them with resources they could use to begin tackling those challenges.

    This is the story of what we did, how we did it, and what we learned.

  • Cloudflare blames widespread internet borkage on Verizon and Noction

    Noction provides a service which it claims can increase BGP efficiency by 30-50 per cent by splitting announced IP address blocks into smaller, more specific chunks (overly simplified, but that'll get you through the explanation).

    When that went wrong, it started misdirecting traffic. A lot of it was caught by failsafes at other carriers, but Verizon, it appears, didn't have the necessary safeguards (a route-validation system called RPKI) in place and let the erroneous routes propagate all over the internet.

CNCF outlines its technical oversight goals

Filed under
Linux
Server

At KubeCon + CloudNativeCon Europe 2019 there was a public meeting of the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC); its members outlined the current state of the CNCF and where things are headed. What emerged was a picture of how the CNCF's governance is evolving as it brings in more projects, launches a new special interest group mechanism, and contemplates what to do with projects that go dormant.

The CNCF has several levels in its organizational structure with the Governing Board handling the overall operation, budget, and finances, while the TOC handles the technical vision and direction, as well as approving new project additions. Though the TOC currently acts as a sort of gatekeeper for admitting projects into the CNCF, there is more that TOC member Joe Beda, the developer who made the first commit to Kubernetes, said can be done. "The TOC helps to decide which projects come in, but I think we could do an expanded role to actually make sure that we're serving those projects better and that we're creating a great value proposition for projects, so that it's a really great two-way street between the CNCF and the projects to really build some sustainability," he said.

Jeff Brewer had a different perspective on how the TOC can help projects, based on his role, which is as an end user of CNCF projects. He is excited about the fact that end users of Kubernetes are talking with one another and helping to bring a customer focus to the TOC. By having that focus, the TOC can help to ensure that the projects it takes in aren't just cool projects that nobody actually uses, but rather are efforts that have practical utility. "We have over 80 end-user organization members and we look for them to really help us lead the way with the technical direction of the CNCF," he said.

Read more

Servers: SUSE, Ubuntu, Red Hat, OpenStack and Raspberry Digital Signage

Filed under
Red Hat
Server
SUSE
Ubuntu
  • A Native Kubernetes Operator Tailored for Cloud Foundry

    At the recent Cloud Foundry Summit in Philadelphia, Troy Topnik of SUSE and Enrique Encalada of IBM discussed the progress being made on cf-operator, a project that’s part of the CF Containerization proposal. They showed what the operator can do and how Cloud Foundry deployments can be managed with it. They also delved deeper into implementation techniques, Kubernetes Controllers and Custom Resources. This is a great opportunity to learn about how Cloud Foundry can work flawlessly on top of Kubernetes.

    The Cloud Foundry Foundation has posted all recorded talks from CF Summit on YouTube. Check them out if you want to learn more about what is happening in the Cloud Foundry world! I’ll be posting more SUSE Cloud Application Platform talks here over the coming days. Watch Troy and Enrique’s talk below:

  • Ubuntu Server development summary – 26 June 2019

    The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

  • Redefining RHEL: Introduction to Red Hat Insights

    At Red Hat Summit we redefined what is included in a Red Hat Enterprise Linux (RHEL) subscription, and part of that is announcing that every RHEL subscription will include Red Hat Insights. The Insights team is very excited about this, and we wanted to take an opportunity to expand on what this means to you, and to share some of the basics of Red Hat Insights.

    We wanted to make RHEL easier than ever to adopt, and give our customers the control, confidence and freedom to help scale their environments through intelligent management. Insights is an important component in giving organizations the ability to predict, prevent, and remediate problems before they occur.

  • Red Hat Shares ― Special edition: Red Hat Summit recap
  • OpenShift Commons Briefing: OKD4 Release and Road Map Update with Clayton Coleman (Red Hat)

    In this briefing, Red Hat’s Clayton Coleman, Lead Architect, Containerized Application Infrastructure (OpenShift, Atomic, and Kubernetes), leads a discussion about the current development efforts for OKD4, Fedora CoreOS and Kubernetes in general, as well as the philosophy guiding OKD4 development efforts. The briefing includes a discussion of shared community goals for OKD4 and beyond, plus Q&A with some of the engineers currently working on OKD.
    The proposed goal and vision for OKD4 is to be the perfect Kubernetes distribution for those who want to stay continuously on the latest Kubernetes and ecosystem components. It combines an up-to-date OS, the Kubernetes control plane, and a large number of ecosystem operators to provide an easy-to-extend distribution of Kubernetes that is always on the latest released versions of ecosystem tools.

  • OpenStack Foundation Joins Open Source Initiative as Affiliate Member

    The Open Source Initiative® (OSI), steward of the Open Source Definition and internationally recognized body for approving Open Source Software licenses, today announces the affiliate membership of The OpenStack Foundation (OSF).

    Since 2012, the OSF has been the home for the OpenStack cloud software project, working to promote the global development, distribution and adoption of open infrastructure. Today, with five active projects and more than 100,000 community members from 187 countries, the OSF is recognized across industries as both a leader in open source development and an exemplar in open source practices.

    The affiliate membership provides both organizations a unique opportunity to work together to identify and share resources that foster community and facilitate collaboration to support the awareness and integration of open source technologies. While open source software is now embraced and often touted by organizations large and small, challenges remain for many who are just engaging with the community, and even for some longtime participants. Community-based support and resources remain vital, ensuring those new to the ecosystem understand the norms and expectations, while those seeking to differentiate themselves remain authentically engaged. The combined efforts of the OSI and the OSF will complement one another and contribute to these efforts.

  • Raspberry Digital Signage details

    The system starts in digital signage mode with the saved settings; the admin interface is always displayed after the machine bootstraps (the interface can be password-protected in the donors’ build) and, if not used for a few seconds, it auto-launches the kiosk mode; the web interface can also be used remotely.

    SSH remote management is available: you can log in as the pi or root user with the same password set for the admin interface. The operating system can be completely customized by the administrator using this feature (donors version only).
    The screen can be rotated via the graphical admin interface: normal, inverted, left, right (donors version only).

Servers: Skytap, Instaclustr, MariaDB, Quarkus, Kubernetes and More

Filed under
Server
  • Skytap Announces General Availability of IBM i in the Public Cloud, Leads Ecosystem to New Opportunities

    Skytap, a global, purpose-built cloud service, announces that its support for the IBM i operating system is now available in US-West, US-Central and EMEA-UK. Available for purchase in hourly, monthly and annual consumption models, this release broadens Skytap's support for IBM Power Systems-based applications that can be developed, tested and run in production.

    In the 2017/2018 Logicalis Global CIO Survey, most CIOs indicated they are focused on digital transformation, with 44 percent citing complex legacy infrastructure as a main barrier in this transformation.

  • Instaclustr Releases Service Broker to Seamlessly Integrate Customers’ Kubernetes Applications within the Instaclustr Open-Source-as-a-Service Platform
  • MariaDB 10.3 now available on Red Hat Enterprise Linux 7

    Red Hat Software Collections supplies the latest, stable versions of development tools and components for Red Hat Enterprise Linux via two release trains per year. As part of the Red Hat Software Collections 3.3 release, we are pleased to announce that MariaDB 10.3 is now available for Red Hat Enterprise Linux 7.
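
    As a sketch of how such a collection is typically enabled on RHEL 7 (the collection name rh-mariadb103 follows the Software Collections naming convention and should be treated as an assumption):

      # Install the MariaDB 10.3 collection from Red Hat Software Collections
      $ sudo yum install rh-mariadb103

      # Start its service and run the client inside the collection's environment
      $ sudo systemctl start rh-mariadb103-mariadb
      $ scl enable rh-mariadb103 -- mysql -u root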

  • Quarkus 0.17.0 now available

    Quarkus continues its cadence of delivering a release every 2-3 weeks. This latest release (0.17.0) contains 125+ changes that include new features, bug fixes, and documentation updates. 

  • Recap of Kubernetes Contributor Summit Barcelona 2019

    First of all, THANK YOU to everyone who made the Kubernetes Contributor Summit in Barcelona possible. We had an amazing team of volunteers tasked with planning and executing the event, and it was so much fun meeting and talking to all new and current contributors during the main event and the pre-event celebration.

    Contributor Summit in Barcelona kicked off KubeCon + CloudNativeCon in a big way as it was the largest contributor summit to date with 331 people signed up, and only 9 didn’t pick up their badges!

  • The innovation delusion

    If traditional planning is dead, then why do so many organizations still invest in planning techniques optimized for the Industrial Revolution?

    One reason might be that we trick ourselves into thinking innovation is the kind of thing we can accomplish with a structured, linear process. When we do this, I think we're confusing our stories about innovation with the process of innovation itself—and the two are very different.

  • Top Web Based Docker Monitoring Tools

    It is an open source platform that enables administrators to manage and run Docker in production. It offers the whole software stack needed to run containers in production, and it can be installed simply on any machine that can run Docker. After installation, all nodes can be easily configured and organized through the web UI. You get complex functions such as load balancing and management out of the box after a few clicks.

  • Hitting the Reset Button on Hadoop

    Hadoop has seen better days. The recent struggles of Cloudera and MapR – the two remaining independent distributors of Hadoop software – are proof of that.

Five Linux Server Administration Mistakes And How To Avoid Them

Filed under
GNU
Linux
Server

In 2017, an employee at GitLab, the version control hosting platform, was asked to replicate a database of production data. Because of a configuration error, the replication did not work as expected, so the employee decided to remove the data that had been transferred and try again. He ran a command to delete the unwanted data, only to realize with mounting horror that he had entered the command into an SSH session connected to a production server, deleting hundreds of gigabytes of user data. Every seasoned system administrator can tell you a similar story.

The Linux command line gives server admins control of their servers and the data stored on them, but it does little to stop them running destructive commands with consequences that can’t be undone. Accidental data deletion is just one type of mistake that new server administrators make.
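
A partial safeguard is to make destructive commands harder to run absent-mindedly and to make production shells visually unmistakable. A minimal sketch (the prompt convention is illustrative, not something GitLab prescribes):

    # Prompt once before removing more than a few files or recursing
    alias rm='rm -I'

    # Make production SSH sessions obvious: a red "[PROD]" tag in the bash prompt
    # (added to ~/.bashrc on the production host)
    export PS1='\[\e[1;31m\][PROD \h]\[\e[0m\] \u:\w\$ '

    # Prefer a dry run where the tool supports one, e.g. rsync
    rsync -av --dry-run --delete ./data/ backup:/srv/data/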

Read more

Horde vs Roundcube vs Squirrelmail - Which Works Best

Filed under
Server
Software
Web

Webmail is a great way to access your email from different devices and when you are away from home. Most web hosting companies now include email with their server plans, and all of them offer the same three webmail clients: RoundCube, Horde, and SquirrelMail. They are part of cPanel, the most popular hosting control panel.

Read more

Kubernetes 1.15

Filed under
Server
OSS
  • Kubernetes 1.15: Extensibility and Continuous Improvement

    The theme of the new developments around CustomResourceDefinitions is data consistency and native behaviour. A user should not notice whether the interaction is with a CustomResource or with a Golang-native resource. With big steps we are working towards a GA release of CRDs and GA of admission webhooks in one of the next releases.

    In this direction, we have rethought our OpenAPI based validation schemas in CRDs and from 1.15 on we check each schema against a restriction called “structural schema”. This basically enforces non-polymorphic and complete typing of each field in a CustomResource. We are going to require structural schemas in the future, especially for all new features including those listed below, and list violations in a NonStructural condition. Non-structural schemas keep working for the time being in the v1beta1 API group. But any serious CRD application is urged to migrate to structural schemas in the foreseeable future.

    Details about what makes a schema structural will be published in a blog post on kubernetes.io later this week, and it is of course documented in the Kubernetes documentation.
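
    As a minimal sketch of what a structural schema looks like in practice, the hypothetical CRD below gives the root and every specified field an explicit type in its openAPIV3Schema validation block, which is what the 1.15 check inspects:

      # crontab-crd.yaml -- illustrative CRD with a structural schema
      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      metadata:
        name: crontabs.example.com
      spec:
        group: example.com
        scope: Namespaced
        names:
          plural: crontabs
          singular: crontab
          kind: CronTab
        versions:
          - name: v1
            served: true
            storage: true
        validation:
          openAPIV3Schema:
            type: object                 # root must be a typed object
            properties:
              spec:
                type: object             # every specified field carries a type
                properties:
                  cronSpec:
                    type: string
                  replicas:
                    type: integer

      $ kubectl apply -f crontab-crd.yaml
      $ kubectl get crd crontabs.example.com -o jsonpath='{.status.conditions}'   # a NonStructural condition here would flag violations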

  • Kubernetes 1.15 now available from Canonical

    Canonical announces full enterprise support for Kubernetes 1.15 using kubeadm deployments, its Charmed Kubernetes, and MicroK8s, the popular single-node deployment of Kubernetes.

    The MicroK8s community continues to grow and contribute enhancements, with Knative and RBAC support now available through the simple microk8s.enable command (see the sketch after this item). Knative is a great way to experiment with serverless computing, and now you can experiment locally through MicroK8s. With MicroK8s 1.15 you can develop and deploy Kubernetes 1.15 on any Linux desktop, server or VM across 40 Linux distros. Mac and Windows are supported too, with Multipass.

    Existing Charmed Kubernetes users can upgrade smoothly to Kubernetes 1.15, regardless of the underlying hardware or machine virtualisation. Supported deployment targets include AWS, GCE, Azure, Oracle, VMware, OpenStack, LXD, and bare metal.
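
    A quick sketch of that MicroK8s workflow on a machine with snapd (the 1.15/stable channel name is an assumption; the addon names follow the announcement above):

      # Install MicroK8s from the Snap store
      $ sudo snap install microk8s --classic --channel=1.15/stable

      # Enable the addons mentioned above
      $ microk8s.enable rbac
      $ microk8s.enable knative

      # Confirm the single-node cluster is up
      $ microk8s.kubectl get nodes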

  • Kubernetes 1.15 Released

    The Kubernetes community has announced the release of Kubernetes 1.15, the second release of 2019. The release focuses on continuous improvement and extensibility. Making Kubernetes installation, upgrade and configuration even more robust has been a major focus for SIG Cluster Lifecycle this cycle. The release arrives just before KubeCon + CloudNativeCon Shanghai, which will bring the larger cloud-native community together in China. Read more about what's new in Kubernetes 1.15 here.

All Linux, all the time: Supercomputers Top 500

Filed under
GNU
Linux
Server

From the two IBM-built supercomputers at the top, Summit and Sierra, at the Department of Energy's Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, respectively, to the bottom -- a Lenovo Xeon-powered box in China -- all of them run Linux.

Linux supports more hardware architectures than any other operating system. In supercomputers, it supports both clusters, such as Summit and Sierra, the most common architecture, and Massively Parallel Processing (MPP), which is used by the number three computer Sunway TaihuLight.

When it comes to high-performance computing (HPC), Intel dominates the TOP500 by providing processing power to 95.6% of all systems included on the list. That said, IBM's POWER powers the fastest supercomputers. One supercomputer works its high-speed magic with Arm processors: Sandia Labs' Astra, an HPE design, which uses over 130,000 Cavium ThunderX2 cores.

And, what do all these processors run? Linux, of course.
Of the Top 500 supercomputers, 133 use either accelerator or co-processor setups, and most of these use Nvidia GPUs. And, once more, it's Linux conducting the hardware in a symphony of speed.

Read more

Red Hat welcomes Oracle to the oVirt community

Filed under
Red Hat
Server

On behalf of the oVirt community, its contributors and Red Hat, we welcome Oracle to the oVirt community. oVirt is the open source component that enables management of the Linux Kernel Virtual Machine (KVM), the hypervisor for virtualized environments running on the Linux kernel.

At Red Hat, we believe that upstream collaboration drives innovation, even among competitors. To this end, Red Hat has a 10+ year tenure of thought leadership, contributions and collaboration in the oVirt and KVM communities. Our development and release processes are designed to ensure that Red Hat contributions to these communities are pushed upstream so the benefits gained from our efforts are available to the community at large and available for any and all to draw from.

Read more

Also: IBM-Powered Supercomputers Lead Semi-Annual Rankings

Server: Red Hat, CentOS 8, Linux On ARM Servers and IBM

Filed under
Server
  • Why Chefs Collaborate in the Kitchen

    In a large commercial kitchen, for example in a hotel or cafeteria, chefs collaborate to create the recipes and meals. Sure, there is more than enough work for one person, and tasks are divided into chopping, mixing, cleaning and garnishing; but the recipe is collaboratively created.

    Suppose one chef broke away and created his or her own recipe? How would the kitchen maintain standards, tastes and reputation? Developing software using open source principles follows a similar theory.

    [...]

    Red Hat is the second largest corporate contributor to the Linux kernel. This means Red Hat engineers and support staff are well versed and able to resolve customer issues involving the Linux kernel. Every application container includes part of the Linux distribution and relies on the Linux kernel, which is the center of the Linux Operating System.

  • CentOS 8 Status 17-June-2019

    Since the release of Red Hat Enterprise Linux 8 (on 07-May) we've been looking into the tools that we use to build CentOS Linux. We've chosen to use the Koji buildsystem for RPMs, paired with the Module Build Service for modules, delivered through a distribution called Mbox.

    Mbox allows us to run the Koji Hub (the central job orchestrator), and the Module Build Service in an instance of OKD that we maintain specifically for our buildsystem work. We have 2 instances of mbox; one for the primary architectures (x86_64, ppc64le, and aarch64), and one for the secondary architecture (armhfp). OKD lets us run those instances on the same hardware but in separate namespaces. The builder machines are separate from the OKD cluster, and connect back to the individual buildsystems that they're assigned to.

  • CentOS 8.0 Is Looking Like It's Still Some Weeks Out

    For those eager to see CentOS 8.0 as the community open-source rebuild of Red Hat Enterprise Linux 8.0, progress is being made but it looks like the release is still some weeks out.

    There's already a Wiki page detailing the state of affairs for CentOS 8.0, and new today is a blog post summing up the current status. Progress is being made both on building the traditional RHEL8 RPM packages as well as the newer modules/streams. Koji is being used to build the RPMs, while the Module Build Service with Mbox is handling the modules.

  • NVIDIA Brings CUDA to Arm, Enabling New Path to Exascale Supercomputing

    International Supercomputing Conference -- NVIDIA today announced its support for Arm CPUs, providing the high performance computing industry a new path to build extremely energy-efficient, AI-enabled exascale supercomputers.

  • NVIDIA Delivering CUDA To Linux On Arm For HPC/Servers

    NVIDIA announced this morning for ISC 2019 that they are bringing CUDA to Arm beyond their work already for supporting GPU computing with lower-power Tegra SoCs.

  • Nvidia pushes ARM supercomputing

    Graphics chip maker Nvidia is best known for consumer computing, vying with AMD's Radeon line for framerates and eye candy. But the venerable giant hasn't ignored the rise of GPU-powered applications that have little or nothing to do with gaming. In the early 2000s, UNC researcher Mark Harris began work popularizing the term "GPGPU," referencing the use of Graphics Processing Units for non-graphics-related tasks. But most of us didn't really become aware of the non-graphics-related possibilities until GPU-powered bitcoin-mining code was released in 2010, and shortly thereafter, strange boxes packed nearly solid with high-end gaming cards started popping up everywhere.

  • At ISC: DDN Launches EXA5 for AI, Big Data, HPC Workloads
  • IBM Takes Another Big Step To Hybrid Computing

    Today, IBM announced the ability to leverage its unique turnkey operating environment, IBM i, and its AIX UNIX operating systems on IBM Cloud. Both OSs debuted in the 1980s and have a long history with many IBM customers. In addition, IBM i remains one of the most automated, fully integrated, and low-maintenance operating environments. Extending both OSs to IBM Cloud will allow customers to expand their resources on-demand, to migrate to the cloud, to leverage the latest Power9 servers, and to leverage IBM’s extensive resources. IBM is rolling out the service first in North America for customers using IBM i or AIX on Power servers. In conjunction with the extension of the hybrid cloud platform, IBM also announced a program to validate business partners with Power Systems expertise.

More in Tux Machines

today's howtos

Databases: MariaDB, ScyllaDB, Percona, Cassandra

  • MariaDB opens US headquarters in California

    MariaDB Corporation, the database company born as a result of forking the well-known open-source MySQL database...

  • ScyllaDB takes on Amazon with new DynamoDB migration tool

    There are a lot of open-source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

  • ScyllaDB Announces Alternator, an Open Source Amazon DynamoDB-Compatible API

    ScyllaDB today announced the Alternator project, open-source software that will enable application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB. Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.

  • ScyllaDB Secures $25 Million to Open Source Amazon DynamoDB-compatible API

    Fast-growing NoSQL database company raises funds to extend operations and bring new deployment flexibility to users of Amazon DynamoDB.

  • ScyllaDB powers up Alternator: an open Amazon DynamoDB API

    Companies normally keep things pretty quiet in the run up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going. Not so with ScyllaDB; the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way.

    [...]

    Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its ‘shared-nothing’ approach (a distributed-computing architecture in which each update request is satisfied by a single node, i.e. a processor/memory/storage unit, to increase throughput and storage capacity).

  • Percona Announces Full Conference Schedule for Percona Live Open Source Database Conference Europe 2019

    The Percona Live Open Source Database Conference Europe 2019 is the premier open source database event. Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

  • Thwarting Digital Ad Fraud at Scale: An Open Source Experiment with Anomaly Detection

    Our experiment assembles Kafka, Cassandra, and our anomaly detection application in a Lambda architecture, in which Kafka and our streaming data pipeline are the speed layer, and Cassandra acts as the batch and serving layer. In this configuration, Kafka makes it possible to ingest streaming digital ad data in a fast and scalable manner, while taking a “store and forward” approach so that Kafka can serve as a buffer to protect the Cassandra database from being overwhelmed by major data surges. Cassandra’s strength is in storing high-velocity streams of ad metric data in its linearly scalable, write-optimized database. In order to handle automation for provisioning, deploying, and scaling the application, the anomaly detection experiment relies on Kubernetes on AWS EKS.
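
    A hedged sketch of the two storage-side pieces described above, using hypothetical topic, keyspace and table names (kafka-topics.sh with --bootstrap-server assumes Kafka 2.2 or later):

      # Create the ingest topic for the streaming ad-metric data (sizing is illustrative)
      $ kafka-topics.sh --create --topic ad-metrics \
          --bootstrap-server localhost:9092 \
          --partitions 6 --replication-factor 1

      # Create a write-optimized Cassandra table for the batch/serving layer
      $ cqlsh -e "
        CREATE KEYSPACE IF NOT EXISTS adfraud
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
        CREATE TABLE IF NOT EXISTS adfraud.ad_events (
          ad_id       text,
          event_time  timestamp,
          impressions int,
          clicks      int,
          PRIMARY KEY (ad_id, event_time)
        ) WITH CLUSTERING ORDER BY (event_time DESC);"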

Today in Techrights

OSS Leftovers

  • Workarea Commerce Goes Open-source

    The enterprise commerce platform Workarea is releasing its software to the open-source community. In case you don’t already know, Workarea was built to unify commerce, content management, merchant insights, and search. It has been developed on open-source technologies such as Elasticsearch, MongoDB, and Ruby on Rails since its inception. Workarea aims to provide unparalleled scalability and flexibility in modern cloud environments. Its platform source code and demo instructions are available on GitHub here.

  • Wyoming CV Pilot develops open-source RSU monitoring system

    The team working on the US Department of Transportation’s (USDOT) Connected Vehicle Pilot Deployment Program in Wyoming has developed open-source applications for the operation and maintenance of Roadside Units (RSUs) that can be viewed by all stakeholders. The Wyoming Department of Transportation (WYDOT) Connected Vehicle Pilot implementation includes the deployment of 75 RSUs along 400 miles (644km) of I-80. With long drive times and tough winters in the state, WYDOT needed an efficient way to monitor the performance of, manage and update these units to maintain peak performance. With no suitable product readily available, the WYDOT Connected Vehicle team developed an open-source application that allows authorized transportation management center (TMC) operators to monitor and manage each RSU at the roadside. The WYDOT team found that the application can also be used as a public-facing tool that shows a high-level status report of the pilot’s equipment.

    [...]

    For other state or local agencies and departments of transportation (DOTs) wishing to deploy a similar capability to monitor and manage RSUs, the application code has been made available on the USDOT’s Open Source Application Development Portal (OSADP). The code is downloadable and can be used and customized by other agencies free of charge. WYDOT developed this capability using USDOT funds under the CV Pilot program as open-source software with associated documentation. The application is one of six that the program will provide during its three phases.

  • You Too Can Make These Fun Games (No Experience Necessary)

    Making a videogame remained a bucket list item until I stumbled on an incredibly simple open source web app called Bitsy. I started playing around with it, just to see how it worked. Before I knew it, I had something playable. I made my game in a couple of hours.

  • From maverick to mainstream: why open source software is now indispensable for modern business

    Free and open source software has a long and intriguing history. Some of its roots go all the way back to the 1980s when Richard Stallman first launched the GNU project.

  • Analyst Watch: Is open source the great equalizer?

    If you had told me 25 years ago that open source would be the predominant force in software development, I would’ve laughed. Back then, at my industrial software gig, we were encouraged to patent as much IP as possible, even processes that seemed like common-sense business practices, or generally useful capabilities for any software developer. If you didn’t, your nearest competitor would surely come out with their own patent claims, or inevitable patent trolls would show up demanding fees for any uncovered bit of code. We did have this one developer who was constantly talking about fiddling with his Linux kernel at home, on his personal time. Interesting hobby.

  • Scientists Create World’s First Open Source Tool for 3D Analysis of Advanced Biomaterials

    Materials scientists and programmers from the Tomsk Polytechnic University in Russia and Germany's Karlsruhe Institute of Technology have created the world’s first open source software for the 2D and 3D visualization and analysis of biomaterials used for research into tissue regeneration.

    [...]

    Scientists have already tested the software on a variety of X-ray tomography data. “The results have shown that the software we’ve created can help other scientists conducting similar studies in the analysis of the fibrous structure of any polymer scaffolds, including hybrid ones,” Surmenev emphasised.

  • Making Collaborative Data Projects Easier: Our New Tool, Collaborate, Is Here

    On Wednesday, we’re launching a beta test of a new software tool. It’s called Collaborate, and it makes it possible for multiple newsrooms to work together on data projects. Collaborations are a major part of ProPublica’s approach to journalism, and in the past few years we’ve run several large-scale collaborative projects, including Electionland and Documenting Hate. Along the way, we’ve created software to manage and share the large pools of data used by our hundreds of newsrooms partners. As part of a Google News Initiative grant this year, we’ve beefed up that software and made it open source so that anybody can use it.

  • Should open-source software be the gold standard for nonprofits?

    Prior to its relaunch, nonprofit organization Cadasta had become so focused on the technology side of its work that it distracted from the needs of partners in the field. “When you’re building out a new platform, it really is all consuming,” said Cadasta CEO Amy Coughenour, reflecting on some of the decisions that were made prior to her joining the team in 2018.

  • Artificial intelligence: an open source future

    At the same time, we’re seeing an increasing number of technology companies invest in AI development. However, what’s really interesting is that these companies - including the likes of Microsoft, Salesforce and Uber - are open sourcing their AI research. This move is already enabling developers worldwide to create and improve AI & Machine Learning (ML) algorithms faster. As such, open source software has become a fundamental part of enabling fast, reliable, and also secure development in the AI space. So, why all the hype around open source AI? Why are businesses of all sizes, from industry behemoths to startups, embracing open source? And where does the future lie for AI and ML as a result?

  • How open source is accelerating innovation in AI

    By eradicating barriers like high licensing fees and talent scarcity, open source is accelerating the pace of AI innovation, writes Carmine Rimi. No other technology has captured the world’s imagination quite like AI, and there is perhaps no other that has been so disruptive. AI has already transformed the lives of people and businesses and will continue to do so in endless ways as more startups uncover its potential. According to a recent study, venture capital funding for AI startups in the UK increased by more than 200 percent last year, while a Stanford University study observed a 14-fold increase in the number of AI startups worldwide in the last two years.

  • Adam Jacob Advocates for Building Healthy OSS Communities in “The War for the Soul of Open Source”

    Chef co-founder and former CTO Adam Jacob gave a short presentation at O’Reilly Open Source Software Conference (OSCON) 2019 titled “The War for the Soul of Open Source.” In his search for meaning in open source software today, Jacob confronts the notion of open source business models. “We often talk about open source business models,” he said. “There isn’t an open source business model. That’s not a thing and the reason is open source is a channel. Open source is a way that you, in a business sense, get the software out to the people, the people use the software, and then they become a channel, which [companies] eventually try to turn into money.”

    [...]

    In December 2018, Jacob launched the Sustainable Free and Open Source Communities (SFOSC) project to advocate for these ideas. Instead of focusing on protecting the revenue models of OSS companies, the project’s contributors collaborate on writing core principles, social contracts, and business models as guidelines for healthy OSS communities.

  • New Open Source Startups Emerge After Acquisition, IPO Flurry

    After a flurry of mega-acquisitions and initial public offerings of open source companies, a new batch of entrepreneurs are trying their hands at startups based on free software projects.

  • TC9 selected by NIST to develop Open Source Software for Transactive Energy Markets

    TC9, Inc. was selected by the National Institute of Standards and Technology (NIST) to develop open source software for Transactive Energy Bilateral Markets based on the NIST Common Transactive Services. Under the contract, TC9 will develop open source software (OSS) agents for a transactive energy market. The software will be used to model the use of transactive energy to manage power distribution within a neighborhood. Transactive Energy is a means to balance volatile supply and consumption in real time. Experts anticipate the use of Transactive Energy to support wide deployment of distributed energy resources (DER) across the power grid.

  • Open Source Software Allows Auterion to Move Drone Workflows into the Cloud

    “Until today, customizing operations in the MAVLink protocol required a deep understanding of complex subjects such as embedded systems, drone dynamics, and the C++ programming language,” said Kevin Sartori, co-founder of Auterion. “With MAVSDK, any qualified mobile developer can write high-level code for complex operations, meaning more developers will be able to build custom applications and contribute to the community.”

  • ApacheCon 2019 Keynote: James Gosling's Journey to Open Source

    At the recent ApacheCon North America 2019 in Las Vegas, James Gosling delivered a keynote talk on his personal journey to open-source. Gosling's main takeaways were: open source allows programmers to learn by reading source code, developers must pay attention to intellectual property rights to prevent abuse, and projects can take on a life of their own.

  • 20 Years of the Apache Software Foundation: ApacheCon 2019 Opening Keynote

    At the recent ApacheCon North America 2019 in Las Vegas, the opening keynote session celebrated the 20th anniversary of the Apache Software Foundation (ASF), with key themes being: the history of the ASF, a strong commitment to community and collaboration, and efforts to increase contributions from the public. The session also featured a talk by astrophysicist David Brin on the potential dangers of AI.