Servers: Hadoop, Amazon Rivals, Red Hat/IBM, Kubernetes, OpenStack and More

Filed under
Server
  • Breaking Out of the Hadoop Cocoon

    The announcement last fall that top Hadoop vendors Cloudera and Hortonworks were coming together in a $5.2 billion merger – and reports about the financial toll their competition took on each other in the quarters leading up to the deal – revived questions raised in recent years about the future of Hadoop in an era where more workloads are moving into public clouds like Amazon Web Services (AWS), which offer a growing array of services that do many of the jobs the open-source technology already does.

    Hadoop gained momentum over the past several years as an open-source platform to collect, store and analyze various types of data, arriving as data was becoming the coin of the realm in the IT industry, something that has only steadily grown since. As we’ve noted here at The Next Platform, Hadoop has evolved over the years, with such capabilities as Spark in-memory processing and machine learning being added. But in recent years more workloads and data have moved to the cloud, and the top cloud providers – AWS, Microsoft Azure and Google Cloud Platform – all offer their own managed services, such as AWS’ Elastic MapReduce (EMR). Being in the cloud, these services also offer lower storage costs and easier management, since the infrastructure is managed by the cloud provider itself.

  • A guide for database as a service providers: How to stand your ground against AWS – or any other cloud

    NoSQL database platform MongoDB followed suit in October 2018, announcing a Server Side Public License (SSPL) to protect “open source innovation” and stop “cloud vendors who have not developed the software to capture all of the value while contributing little back to the community.” Event streaming company Confluent issued its own Community License in December 2018 to make sure cloud providers could no longer “bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings.”

  • The CEO of DigitalOcean explains how its 'cult following' helped it grow a $225 million business even under the shadow of Amazon Web Services

    DigitalOcean CEO Mark Templeton first taught himself to code at a small hardwood business. He wanted to figure out how to use the lumber in the factory most efficiently, and spreadsheets only got him so far.

    "I taught myself to write code to write a shop floor control and optimization system," Templeton told Business Insider. "That allowed us to grow, to run the factory 24 hours a day, all these things that grow in small business is new. As a self-taught developer, that's what launched me into the software industry."

    And now, Templeton is learning to embrace these developer roots again at DigitalOcean, a New York-based cloud computing startup. It's a smaller, venture-backed alternative to mega-clouds like Amazon Web Services, but has found its niche with individual programmers and smaller teams.

  • IBM’s Big-Ticket Purchase of Red Hat Gets a Vote of Confidence From Wall Street
  • How Monzo built a bank with open infrastructure

    When challenger bank Monzo began building its platform, the team decided to get running with the container orchestration platform Kubernetes "the hard way". As a result, the team now has visibility into outages and other problems, and Miles Bryant, platform engineer at Monzo, shared some observations about the bank's experience at the recent Open Infrastructure Day event in London.

    Finance is, of course, a heavily regulated industry - and at the same time customer expectations are extremely exacting. If people can't access their money, they tend to get upset.

  • Kubernetes Automates Open-Source Deployment

    Whether for television broadcast and video content creation, delivery or transport of streamed media, they all share a common element: the technology supporting this industry is moving rapidly, consistently and definitively toward software and networking. The movement isn’t new by any means; in what now seems like ages ago, every implementation required customized software on a customized hardware platform. That has given way to open platforms running open-source solution sets, often developed for open architectures and collectively created using cloud-based services.

  • Using EBS and EFS as Persistent Volume in Kubernetes

    If your Kubernetes cluster is running in the cloud on Amazon Web Services (AWS), it can use Elastic Block Store (EBS) or Elastic File System (EFS) for storage.

    We know pods are ephemeral, and in most cases we need to persist the data in them. To facilitate this, we can mount folders into our pods that are backed by EBS volumes on AWS using awsElasticBlockStore, a volume plugin provided by Kubernetes.

    We can also use EFS for storage via efs-provisioner, which runs as a pod in the Kubernetes cluster with access to an AWS EFS resource.
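    As a minimal sketch of the EBS approach described above, a pod spec can reference a pre-created EBS volume directly through the awsElasticBlockStore plugin. The pod name, image and volume ID below are placeholders, and the volume must already exist in the same availability zone as the node:

    ```yaml
    # Hypothetical pod mounting an existing EBS volume (IDs are placeholders)
    apiVersion: v1
    kind: Pod
    metadata:
      name: ebs-backed-pod
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data            # path inside the pod backed by EBS
      volumes:
        - name: data
          awsElasticBlockStore:
            volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
            fsType: ext4
    ```

    Because the data lives on the EBS volume rather than in the container filesystem, it survives pod restarts, which is the point of using a persistent volume here.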

  • Everything You Want To Know About Anthos - Google's Hybrid And Multi-Cloud Platform

    Google's big bet on Anthos will benefit the industry, open source community, and the cloud native ecosystem in accelerating the adoption of Kubernetes.

  • Raise a Stein for OpenStack: Latest release brings faster containers, cloud resource management

    The latest OpenStack release is out in the wild. Codenamed Stein, the platform update is said to allow much faster Kubernetes deployments, adds new IP and bandwidth management features, and introduces a software module focused on cloud resource management – Placement.

    In keeping with the tradition, the 19th version of the platform was named Stein after Steinstraße or "Stein Street" in Berlin, where the OpenStack design summit for the corresponding release took place in 2018.

    OpenStack is not a single piece of software, but a framework consisting of an integration engine and nearly 50 interdependent modules or projects, each serving a narrowly defined purpose, like Nova for compute, Neutron for networking and Magnum for container orchestration, all linked together using APIs.

  • OpenStack Stein launches with improved Kubernetes support

    The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

    While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

  • Community pursues tighter Kubernetes integration in Openstack Stein

    The latest release of the open source infrastructure platform OpenStack, codenamed 'Stein', arrived today with updates to container functionality, edge computing and networking upgrades, as well as improved bare metal provisioning and tighter integration with the popular container orchestration platform Kubernetes – an effort led by super-user science facility CERN.

    It also marks roughly a year since the OpenStack Foundation pivoted towards creating a more all-encompassing brand that covers under-the-bonnet open source in general, with a new umbrella organisation called the Open Infrastructure Foundation. OpenStack itself had more than 65,000 code commits in 2018, with an average of 155 per day during the Stein cycle.

  • Why virtualisation remains a technology for today and tomorrow

    The world is moving from data centres to centres of data. In this distributed world, virtualisation empowers customers to secure business-critical applications and data regardless of where they sit, according to Andrew Haschka, Director, Cloud Platforms, Asia Pacific and Japan, VMware.

    “We think of server and network virtualisation as being able to enable three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. This serves as a secure, consistent foundation that drives businesses forward,” said Haschka in an email interview with Networks Asia. “We believe that virtualisation offers our customers the flexibility and control to bring things together and choose which way their workloads and applications need to go – this will ultimately benefit their businesses the most.”

  • Happy 55th birthday mainframe

    7 April marked the 55th birthday of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of the rest of the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.

  • Red Hat strategy validated as open hybrid cloud goes mainstream

    “Any products, anything that would release to the market, the first filter that we run through is: Will it help our customers with their open hybrid cloud journey?” said Ranga Rangachari (pictured), vice president and general manager of storage and hyperconverged infrastructure at Red Hat.

    Rangachari spoke with Dave Vellante (@dvellante) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Google Cloud Next event. They discussed adoption of open hybrid cloud and how working as an ecosystem is critical for success in solving storage and infrastructure problems (see the full interview with transcript here). (* Disclosure below.)

More in Tux Machines

Gustavo Silva: Disco Dingo Thoughts

Those already around me know I love Linux, and my favourite Linux distribution is Ubuntu. One of the reasons Ubuntu is my favourite is how simple and compatible it is with pretty much all devices I have tried installing it on. Except my laptop, but that’s due to the graphics card. But hey, I fondly received the news that we can now select the option to automatically set nomodeset and other convenient tools when running the setup. For me, this is a major win. I previously had to set nomodeset manually, and after installation I had to immediately modify some options in grub’s defaults (namely set acpi=force), but now, with this new option, the installation process, which was already smooth, becomes (melted) butter. Thank you, honestly, person who remembered to include this option. This seems like a feature that will stick around in Ubuntu, so I’m happy to know the 20.04 LTS version will become even simpler to install too, and that’s great. The UI and custom-Gnome experience has been improved as well, in this custom flavour of Gnome. We now have a few more options for customization, including dark variants of the themes, and I am definitely pleased to say that the Gnome shell in Ubuntu 19.04 really looks great. Read more
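The manual steps the post refers to – setting nomodeset and acpi=force permanently after installation – are normally done on Ubuntu by editing the GRUB defaults and regenerating its config. A sketch, using the standard Ubuntu file paths (these specifics are general Ubuntu convention, not taken from the post):

```
# /etc/default/grub (excerpt) – kernel parameters applied at every boot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset acpi=force"
```

After saving the file, running `sudo update-grub` regenerates /boot/grub/grub.cfg so the parameters take effect on the next boot.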

5 of the Best Linux Desktops for Touchscreen Monitors in 2019

The concept of using Linux on a touchscreen monitor or two-in-one computer has come a long way. Touchscreen support is now built into the Linux kernel, so theoretically any Linux distribution should run with a touchscreen. That said, not every distribution will be easy to use on a touchscreen, and this comes down to the desktop. For example, using a tiling window manager like Awesome or i3 isn’t going to do you much good on a touchscreen. Choose the right desktop (more precisely, desktop environment), and you’ll have a much better time using Linux with a touchscreen. Read more

NVIDIA Linux Drivers and Graphics News

  • NVIDIA 430.09 Linux Driver Brings GTX 1650 Support, Surprising VDPAU Improvements
    With today's GeForce GTX 1650 launch, NVIDIA has posted the 430.09 Linux driver as the first in this new driver series. The GeForce GTX 1650 is now supported by this new NVIDIA Linux driver, along with its Max-Q Design and the GTX 1660 Ti Max-Q Design. The NVIDIA 430 Linux driver also adds HEVC YUV 4:4:4 decode support to VDPAU along with various other VDPAU additions, raises the X.Org Server requirement to version 1.7, adds the GL_NV_vdpau_interop2 extension, and updates the NVIDIA installer to work better on the latest Linux distributions.
  • NVIDIA have two new drivers out with 430.09 and the Vulkan beta driver 418.52.05
    NVIDIA have just recently released two new drivers for Linux users, with the main series now being at 430.09 adding new GPU support and the Vulkan beta driver 418.52.05 giving ray-tracing to some older GPUs. Firstly, the Vulkan beta driver 418.52.05 was actually released last week, which adds support for the "VK_NV_ray_tracing" extension for certain older graphics cards including the TITAN Xp, TITAN X, 1080, 1070, 1060, TITAN V and 1660 (along with Ti models). It also adds support for the "VK_NV_coverage_reduction_mode" extension, which doesn't seem to have any documentation up just yet. They also cited "minor performance improvements" and two bug fixes.
  • NVIDIA Releases The GeForce GTX 1650 At $149 USD, Linux Benchmarks Incoming
    The TU117-based GeForce GTX 1650 starts out at $149 USD and aims to deliver double the performance over the GTX 950 Maxwell and doing so in only a 75 Watt TDP, meaning no external PCI Express power connector is required. There are 896 CUDA cores and 4GB of GDDR5 video memory with the GTX 1650.

Games: Killer Chambers, Elsewhere, Save Koch