IBM, Red Hat and Servers

Filed under
Red Hat
Server
  • Using KubeFed to Deploy Applications to OCP3 and OCP4 Clusters
  • IBM Announces Three New Open Source Projects for Developing Apps for Kubernetes and the Data Asset eXchange (DAX), the Linux Foundation Is Having a Sysadmin Day Sale, London Launches Open-Source Homebuilding App and Clonezilla Live 2.6.2-15 Released

    IBM this morning announces three new open-source projects that "make it faster and easier for you to develop and deploy applications for Kubernetes". Kabanero "integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain". Appsody "gives you pre-configured stacks and templates for a growing set of popular open source runtimes and frameworks, providing a foundation on which to build applications for Kubernetes and Knative deployments". And Codewind "provides extensions to popular integrated development environments (IDEs) like VS Code, Eclipse, and Eclipse Che (with more planned), so you can use the workflow and IDE you already know to build applications in containers."
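
    As a sketch of the workflow those tools enable, using Appsody's public CLI; the stack name is an example, and the commands assume the appsody CLI and a local container runtime are installed:

    ```shell
    # Scaffold, run and deploy an application from an Appsody stack.
    appsody list                  # show the available stacks and templates
    appsody init nodejs-express   # generate a project from the Node.js Express stack
    appsody run                   # build and run the app in a local container
    appsody deploy                # build an image and deploy it to Kubernetes
    ```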

    IBM also today announces the Data Asset eXchange (DAX), which is "an online hub for developers and data scientists to find carefully curated free and open datasets under open data licenses". The press release notes that whenever possible, "datasets posted on DAX will use the Linux Foundation's Community Data License Agreement (CDLA) open data licensing framework to enable data sharing and collaboration. Furthermore, DAX provides unique access to various IBM and IBM Research datasets. IBM plans to publish new datasets on the Data Asset eXchange regularly. The datasets on DAX will integrate with IBM Cloud and AI services as appropriate."

  • Data as the new oil: The danger behind the mantra

    Not a week goes by that I don’t hear a tech pundit, analyst, or CIO say “data is the new oil.” This overused mantra suggests that data is a commodity that can become extremely valuable once refined. Many technologists have used that phrase with little knowledge of where it originated – I know I wasn’t aware of its origin. 

    It turns out the phrase is attributed to Clive Humby, a British mathematician who helped create British retailer Tesco’s Clubcard loyalty program. Humby quipped, “Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc., to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.”

  • How to explain deep learning in plain English

    Understanding artificial intelligence sometimes isn’t a matter of technology so much as terminology. There’s plenty of it under the big AI umbrella – such as machine learning, natural language processing, computer vision, and more.

    Compounding this issue, some AI terms overlap. Being able to define key concepts clearly – and subsequently understand the relationships and differences between them – is foundational to your crafting a solid AI strategy. Plus, if the IT leaders in your organization can’t articulate terms like deep learning, how can they be expected to explain it (and other concepts) to the rest of the company?

  • How to make the case for service mesh: 5 benefits

    Service mesh is a trending technology, but that alone does not mean every organization needs it. As always, adopting a technology should be driven by the goals it helps you attain or, put another way, the problems it helps you solve.

    It’s certainly worth understanding what a service mesh does – in part so you can explain it to other people. Whether or not you actually need one really depends upon your applications and environments.

LXD 3.15 has been released

Filed under
Server
Software

The LXD team is very excited to announce the release of LXD 3.15!

This release includes both a number of major new features and some significant internal rework of various parts of LXD.

One big highlight is the transition to the dqlite 1.0 branch which will bring us more performance and reliability, both for our cluster users and for standalone installations. This rework moves a lot of the low-level database/replication logic to dedicated C libraries and significantly reduces the amount of back and forth going on between C and Go.

On the networking front, this release features a lot of improvements, adding support for IPv4/IPv6 filtering on bridges, MAC and VLAN filtering on SR-IOV devices and much improved DHCP server management.

We’re also debuting a new version of our resources API which will now provide details on network devices and storage disks on top of extending our existing CPU, memory and GPU reporting.

And that’s all before looking into the many other performance improvements, smaller features and bugfixes that went into this release.
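
A sketch of how those features surface on the command line; the instance and device names are placeholders, and the config keys follow the 3.15 feature list, so verify them against your installed version:

```shell
# Enable the new IPv4/IPv6 filtering on a container's bridged NIC.
lxc config device set mycontainer eth0 security.ipv4_filtering true
lxc config device set mycontainer eth0 security.ipv6_filtering true

# Query the extended resources API: CPU, memory and GPU reporting,
# plus the new network device and storage disk details.
lxc query /1.0/resources
```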

Read more

Cloudera's New Strategy

Filed under
Server
OSS
  • Cloudera is making all of its software open-source, one month after its CEO's abrupt resignation

    Palo Alto-based cloud data provider Cloudera, Inc. plans to open-source all of its software, and focus on providing value-added services on top of its platform...

  • Cloudera flips the open source switch in search of consistency, innovation

    A post-acquisition Cloudera, one that is looking to regain its footing after a disappointing earnings report and a CEO departure, sees open source as a silver bullet.

    Making its software products available through open source can help boost adoption amid a Hadoop market contraction, while helping retain customers in the aftermath of its acquisition.

    In turn, broader Cloudera adoption will mean a bigger pool of potential customers that can pay for additional services like dedicated support or consulting.

    This strategy has worked for Amazon Web Services, Microsoft and others. Most recently, IBM finalized a $34 billion move aimed at expanding its presence in the open source space: the acquisition of enterprise software company Red Hat.

  • Cloudera will open source all its software, run business like Red Hat

    In June, Cloudera CEO Tom Reilly announced he would resign. Now, Cloudera is announcing a plan it's worked on since its merger with Hortonworks.

13 Free Open-source Content Management Systems

Filed under
Server
OSS

WordPress launched in 2003 as a blogging platform. Today, WordPress is a sophisticated content management system, built on PHP and MySQL and powering a large share of websites worldwide, from hobby blogs to the biggest news portals. Over 54,000 plugins and themes help customize WordPress installations, including robust ecommerce functionality, galleries, mailing lists, forums, and analytics. Price: Free.

Read more

Also: WordPress vs. Wix vs. Squarespace for SEO: An Interview with Pam Aungst

The 10 Top GUI Tools for Linux System Administrators

Filed under
GNU
Linux
Server
HowTos

A Linux administrator's task is typically to install, upgrade, and monitor a company's software and hardware while maintaining essential applications and functions, including security tools, email, LANs, WANs, web servers, etc.
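
A few standard commands sit behind those day-to-day monitoring tasks; output formats vary by distribution, and these are only the quickest checks:

```shell
# Quick health checks a Linux admin runs constantly.
uptime      # system uptime and load averages
df -h /     # disk usage on the root filesystem
free -m     # memory usage in megabytes
who         # currently logged-in users
```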

Read Also: Top 26 Tools for VMware Administrators

Linux is undoubtedly a force to be reckoned with in computing technology, and most system administrators work on Linux machines. You might think you are doomed to using the command line to complete administrative tasks, but that is far from the truth.

Here are the 10 best GUI tools for Linux System Administrators.

Read more

Canonical and IBM/Red Hat Strategy on Servers

Filed under
Red Hat
Server
Ubuntu
  • Inside the Canonical Container Strategy

    Canonical continues to pursue a somewhat bifurcated approach to containers by announcing support for Kubernetes 1.15 while continuing to advance Snaps as an application container that enables software deployment via a single click.

    For example, Canonical recently announced, in collaboration with DJI, that Snaps will be supported on an instance of Ubuntu embedded in Manifold 2 drones manufactured by DJI. While that approach will make it easier to deploy containerized applications on a type of embedded system, Snaps—for the moment, at least—mostly run only on Ubuntu.

    Docker, in contrast, provides what Canonical describes as “process containers,” which typically are immutable and share some libraries across all containers in execution. Docker registries are optional and typically contain a loose collection of Docker images identifiable by hash or tags. That approach makes it possible to run containerized applications across multiple operating systems. However, within organizations that have standardized on Ubuntu, Canonical is making the case for an application container in the form of Snaps.

    Canonical is trying to drum up support for Snaps on multiple distributions of Linux with mixed success. Most recently, it made available Snapd, a service that individual developers can employ to run Snaps on other Linux distributions. Support for Snaps running on Linux distributions other than Ubuntu generally is limited to what’s provided by Canonical, which tends to limit enthusiasm. It’s also worth noting that alternative application packaging technologies in the form of AppImage and Flatpak have been around longer than Snaps.
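
    The contrast is easy to see from the command line; the package names below are the stock demo examples, and both Docker and snapd must already be installed:

    ```shell
    # A Docker "process container": pulled from a registry, immutable, run directly.
    docker run --rm hello-world

    # A Snap "application container": installed once, then run like a local app.
    sudo snap install hello-world
    hello-world
    ```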

  • CEO Ginni Rometty: Red Hat's open-source software 'is a play that helps all of IBM'

    IBM on Tuesday closed on its $34 billion cash acquisition of Red Hat.

Linux features beyond server management

Filed under
GNU
Linux
Server

With its text-based interface, Linux provides IT administrators an easy and quick way to navigate files, grant permissions, run containers and build data processing capabilities on an open source OS.
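
That text-based workflow in practice, with throwaway paths as examples:

```shell
# Navigate the filesystem and grant permissions from the command line.
workdir=$(mktemp -d)        # a scratch directory to play in
cd "$workdir"
touch app.log
chmod 640 app.log           # owner read/write, group read, others nothing
stat -c '%a' app.log        # prints 640
```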

Linux has traditionally stayed in on-premises architectures, but that's starting to change. With the development of containers and orchestration, organizations are using it beyond bare metal.

If you decide to use these newer Linux features and capabilities, however, you should still familiarize yourself with the kernel, as well as some useful commands and security protocols.

Read more

What Really IRCs Me: Mastodon

Filed under
Server
OSS
HowTos

When it comes to sending text between people, I've found IRC (in particular, a text-based IRC client) works best. I've been using it to chat for decades while other chat protocols and clients come and go. When my friends have picked other chat clients through the years, I've used the amazing IRC gateway Bitlbee to connect with them on their chat client using the same IRC interface I've always used. Bitlbee provides an IRC gateway to many different chat protocols, so you can connect to Bitlbee using your IRC client, and it will handle any translation necessary to connect you to the remote chat clients it supports. I've written about Bitlbee a number of times in the past, and I've used it to connect to other instant messengers, Twitter and Slack. In this article, I describe how I use it to connect to yet another service on the internet: Mastodon.

Like Twitter, Mastodon is a social network platform, but unlike Twitter, Mastodon runs on free software and is decentralized, much like IRC or email. Being decentralized means you can create your own instance, or create an account on any number of existing Mastodon instances, and then follow people either on the same instance or on any other, as long as you know the person's user name (which behaves much like an email address).

I've found Bitlbee to be a great interface for keeping track of social media on Twitter, because I treat reading Twitter as if I were the operator of a specific IRC room. The people I follow are like those I've invited and given voice to, and I can read what they say chronologically in my IRC room. Since I keep my IRC instance running at all times, I can reconnect to it and catch up with the backlog whenever I want. Since I'm reading Twitter over a purely text-based IRC client, this does mean that instead of animated GIFs, I just see URLs that point to the images, but honestly, I consider that a feature!

Since Mastodon behaves in many ways like Twitter, using it with Bitlbee works just as well. Like with Twitter over Bitlbee, it does mean you'll need to learn some extra commands so that you can perform Mastodon-specific functions, like boosting a post (Mastodon's version of retweet) or replying to a post so that your comment goes into the proper thread. I'll cover those commands in a bit.
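
To make that concrete, a session with the bitlbee-mastodon plugin looks roughly like the sketch below; the account and instance names are placeholders, and the exact command verbs depend on your plugin version, so consult its help output:

```
(in the &bitlbee control channel)
account add mastodon @alice@mastodon.example
account mastodon on

(in your timeline channel)
boost 3                  boost the post with local id 3
reply 3 Nice write-up!   reply so the comment lands in the proper thread
```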

Read more

Red Hat's Statements on Being Bought by IBM (Now Official)

Filed under
Red Hat
Server
  • Unlocking the true potential of hybrid cloud with Red Hat partners

    Today, we announced that IBM’s landmark acquisition of Red Hat has closed and shared our vision for how our two companies will move forward together.

    You’ve heard that IBM is committed to preserving Red Hat’s independence, neutrality, culture and industry partnerships, and that Red Hat’s unwavering commitment to open source remains unchanged.

    There is a key part of that statement I want to focus on—partnerships.

    IBM has made a significant investment to acquire Red Hat, and respects that Red Hat wouldn’t be Red Hat without our partner ecosystem. Partners open more doors for open source than we can alone and are vital to our success.

  • Red Hat and IBM: Accelerating the adoption of open source

    Today, IBM finalized its acquisition of Red Hat. Moving forward, Red Hat will operate as a distinct unit within IBM, and I couldn't be more excited—not only for what today represents in the history of two storied technology companies, but what it means for the future of the industry, for our customers, and for open source.

    Red Hat's acquisition by IBM represents an unparalleled milestone for open source itself. It signals validation of community-driven innovation and the value that open source brings to users.

  • IBM Closes Landmark Acquisition of Red Hat for $34 Billion; Defines Open, Hybrid Cloud Future

    IBM (NYSE:IBM) and Red Hat announced today that they have closed the transaction under which IBM acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total equity value of approximately $34 billion.

    The acquisition redefines the cloud market for business. Red Hat’s open hybrid cloud technologies are now paired with the unmatched scale and depth of IBM’s innovation and industry expertise, and sales leadership in more than 175 countries. Together, IBM and Red Hat will accelerate innovation by offering a next-generation hybrid multicloud platform. Based on open source technologies, such as Linux and Kubernetes, the platform will allow businesses to securely deploy, run and manage data and applications on-premises and on private and multiple public clouds.

  • Q&A: IBM’s Landmark Acquisition of Red Hat

    Paul: Red Hat is an enterprise software company with an open source development model. A fundamental tenet of that model is that everything we do, from new practices that we learn to new technologies that we develop, goes back to the upstream community. By joining forces with IBM, our reach into customers will dramatically increase so we’ll be in a position to drive open enterprise technology a lot further. As for IBM, we’ve been partners for quite some time, but now existing IBM customers will have even more direct access to next-generation open source-based technologies that are at the cornerstone of hybrid cloud innovation.

  • Jim Whitehurst email to Red Hatters on Red Hat + IBM acquisition closing

    Last October, we announced our intention to join forces with IBM, with the aim of becoming the world’s top hybrid cloud provider. Since then, the promise IBM chairman, president, and CEO Ginni Rometty and I made has not changed. In fact, our commitment to that vision has grown - Red Hat will remain a distinct unit in IBM as we work to help customers deliver any app, anywhere, realizing the true value of the hybrid cloud. This morning, we can share that the most significant tech acquisition of 2019 has officially closed and we can now begin moving forward.

    We will be hosting an all-hands company meeting today (Tuesday, July 9) where you will hear from me, Ginni, Paul Cormier and IBM senior vice president of Cloud and Cognitive Software, Arvind Krishna. Details on logistics to follow; I hope you will join us.

    Since we announced the acquisition, I’ve been having conversations with our customers, partners, open source community members and more Red Hatters than I can count (I’ve been following memo-list as well!). What struck me most from those conversations was the passion. It’s passion not just for a company, but for what we do and how we do it—the open source way. That’s not going to change.

  • IBM Acquires Red Hat For $34 Billion

    IBM today closed its acquisition of Red Hat for $34 billion, marking one of the biggest acquisitions of any open source company.

Server: SAP/SUSE, IBM/Red Hat, OpenStack, YottaDB and More

Filed under
Server
  • Happy 20th Birthday SAP Linux Lab!

    1999 marks the year SAP solutions were deployed on Linux for the first time. To ensure joint support between SAP, server vendors and Linux distributors like SUSE, SAP established the Linux Lab. Over the years many, many projects were successfully concluded, starting with porting SAP R/3 to IBM zSeries or IBM pSeries, to supporting SAP’s Next-Generation in-memory database HANA, to delivering Data Hub and HANA via Containers to customers.

  • Deliver open hybrid cloud’s value with simple, customer-centric strategies

    More and more enterprises are evaluating hybrid cloud architectures to support their operations, but they have questions about integrating public clouds with their existing private clouds. Ranga Rangachari, vice president and general manager of storage and hyperconverged infrastructure at Red Hat, spoke with SiliconANGLE’s show theCUBE at the recent Google Cloud Next ‘19 event to dig into what hybrid cloud means for customers, Red Hat, and the broader ecosystem. The interview covered open hybrid cloud adoption, today’s customer priorities, and the power of the ecosystem to solve customer problems today and into the future.

  • 10 tips for reviewing code you don’t like

    As a frequent contributor to open source projects (both within and beyond Red Hat), I find one of the most common time-wasters is dealing with code reviews of my submitted code that are negative or obstructive and yet essentially subjective or argumentative in nature. I see this most often when submitting to projects where the maintainer doesn’t like the change, for whatever reason. In the best case, this kind of code review strategy can lead to time wasted in pointless debates; at worst, it actively discourages contribution and diversity in a project and creates an environment that is hostile and elitist.

    A code review should be objective and concise and should deal in certainties whenever possible. It’s not a political or emotional argument; it’s a technical one, and the goal should always be to move forward and elevate the project and its participants. A change submission should always be evaluated on the merits of the submission, not on one’s opinion of the submitter.

  • The case for making the transition from sysadmin to DevOps engineer

    The year is 2019, and DevOps is the hot topic. The day of the system administrator (sysadmin) has gone the way of mainframes if you will—but really, has it? The landscape has shifted as it so often does in technology. There is now this thing called DevOps, which can’t exist without Ops.

    I considered myself on the Ops side of the aisle prior to the evolution of DevOps as we know it today. As a system administrator or engineer, it can feel like you are stuck in a time warp, with a small tinge of fear, because what you knew and what you must now learn differ greatly, and the learning is much more time-sensitive than you might have anticipated.

  • Data as a Service in a Hybrid, Multicloud World

    As it was emerging, cloud computing was seen as a fairly straightforward proposition for enterprises: find a cloud, put applications and data into it, and run and store it all on someone else's infrastructure.

    But over the past few years, it has become a complex mix of hybrid clouds and multiclouds, with some workloads and data staying on premises while others are pushed into the public cloud, and organizations using several public clouds at the same time. In the new world, where data is at the center of everything and yet housed and used in multiple sites, having access to data wherever it resides, and being able to move it quickly and easily between different clouds and between the cloud and the core datacenter, is crucial to an enterprise's business success.

    Containers like Docker and the Kubernetes container orchestration platform have come onto the scene in part to help ease the portability of applications across the expanded distributed landscape. Over the past couple of quarters, startup Hammerspace has begun selling its data-as-a-service platform, a product designed to make data as agile and easy to orchestrate across hybrid and multicloud environments as containers are.

  • Ambedded’s ARM-based, Ceph Storage Appliance Goes Green

    With deep knowledge in open source software, distributed storage, embedded Linux and ARM-based architecture, Ambedded burst on the scene in 2013 as an innovator of software-defined storage.

    Today, with an ARM micro-server that leverages Ceph Unified Virtual Storage Manager (UVM), Ambedded has teamed up with SUSE Embedded to introduce SUSE Enterprise Storage 6 (also based on Ceph) to its line of storage appliances. The result is unified software-defined storage that provides object storage, block storage and file system in a single cluster.

    The Ambedded appliance delivers a high-performing, low-power storage option that can scale with ease, while helping mid- and large-scale enterprises avoid a single point of failure by pairing a single-server node with a single-storage device.

  • OpenStack Networking

    If you’re familiar with OpenStack at all, you’ll know that it’s a collection of different components, or projects, rather than a single packaged piece of software. More than 30 different pieces of software make up OpenStack in its entirety, ranging from networking to compute, storage, bare metal, key management, orchestration, clustering and more. While OpenStack is widely recognized as being the leading open source cloud management platform, it’s not without its complexities. This can make it difficult to build if you don’t have the right skilled resources in-house, or if you need it up and running quickly so that you can use it for your business-critical systems and data.

  • Introducing Octo

    Octo is a YottaDB plugin for using SQL to query data that is persisted in YottaDB’s key-value tuples (global variables).

    Conforming to YottaDB’s standard for plugins, Octo is installed in the $ydb_dist/plugin sub-directory with no impact on YottaDB or existing applications. In addition to YottaDB itself, Octo requires the YottaDB POSIX plugin. The popularity of SQL has produced a vast ecosystem of tools for reporting, visualization, analysis, and more. Octo opens the door to using these tools with the databases of transactional applications that use YottaDB.

    [...]

    At present (early July, 2019), following an Alpha test with an intrepid user, Beta test releases of Octo are available, and YottaDB is working with a core set of Beta testers. Based on their feedback and on additional automated testing, we will follow up with a production release of Octo, which we anticipate in late 2019.

    Octo currently supports read-only access from SQL and is therefore useful in conjunction with imperatively programmed applications that update database state. As SQL supports all “CRUD” (Create, Read, Update, Delete) database operations, following the release of a production-grade version of Octo for reporting (i.e., read-only access), we intend to work toward versions that support read-write access as well.

More in Tux Machines

8 Top Ubuntu server Web GUI Management Panels

Running Ubuntu Server from the command-line interface can feel a bit strange to newbies with no prior exposure to it. So if you are new to an Ubuntu Linux server, running on your own hardware or on some cloud host, and are planning to install a desktop graphical environment (GUI) on top of it, I would recommend against it unless you have the hardware to support it. Instead, consider one of the free and open-source Ubuntu server web GUI management panels. A desktop environment may be tolerable on a local server, but if you have a Linux cloud hosting server, never do it. I say this because Ubuntu, like other Linux server operating systems, is built to run on modest hardware, so even old computer or server hardware can handle it easily, whereas a desktop GUI means more RAM and more hard disk storage space.

Read more
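
For example, Cockpit is one widely used free web management panel (whether it makes this particular list of eight is my assumption); on Ubuntu it installs from the standard archive:

```shell
# Install Cockpit and enable its web service (default port 9090).
sudo apt update
sudo apt install -y cockpit
sudo systemctl enable --now cockpit.socket
# Then browse to https://<server-address>:9090 and log in with a system account.
```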

Ubuntu 18.10 Cosmic Cuttlefish reaches end of life on Thursday, upgrade now

Canonical, earlier this month, announced that Ubuntu 18.10 Cosmic Cuttlefish will reach end-of-life status this Thursday, making now the ideal time to upgrade to a later version. As with all non-Long Term Support (LTS) releases, 18.10 had nine months of support following its release last October. When distributions reach their end-of-life stage, they no longer receive security updates. While you may be relatively safe at first, the longer you keep running an unpatched system, the more likely it is that your system will become compromised, putting your data at risk. If you’d like to move on from Ubuntu 18.10, you’ve got two options: you can either perform a clean install of a more up-to-date version of Ubuntu, or you can do an in-place upgrade.

Read more
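
For the in-place route, the usual sequence is short; run it on the server itself, with backups taken and a maintenance window scheduled:

```shell
# Upgrade Ubuntu 18.10 in place to the next release (19.04).
sudo apt update && sudo apt full-upgrade -y   # bring 18.10 fully up to date first
sudo do-release-upgrade                       # step up to the next release
lsb_release -a                                # confirm the new version afterwards
```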