
Red Hat

IBM, Red Hat and Fedora Leftovers

Filed under
Red Hat
  • OpenShift 4: Image Builds

    One of the key differentiators of Red Hat OpenShift as a Kubernetes distribution is the ability to build container images using the platform via first class APIs. This means no separate infrastructure or manual build process is required to create images that will be run on the platform. Instead, the same infrastructure can be used to produce the images and run them. For developers, this means one less barrier to getting their code deployed.

    With OpenShift 4, we have significantly redesigned how this build infrastructure works. Before that sets off alarm bells, I should emphasize that for a consumer of the build APIs and resulting images, the experience is nearly identical. What has changed is what happens under the covers when a build is executed and source code is turned into a runnable image. (A brief sketch of what those build APIs look like to a client follows at the end of this list.)

  • libinput's new thumb detection code

    The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

    libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad, a touch started in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection - where a finger was moving fast enough, a new touch would always default to being a thumb. On the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases. (A toy sketch of this old heuristic appears at the end of this list.)

  • 21 to 1: How Red Hat amplifies partner revenue

    At Red Hat Summit, we announced new research from IDC looking at the contributions of Red Hat Enterprise Linux (RHEL) to the global economy. The study, sponsored by Red Hat, found that the workloads running on Red Hat Enterprise Linux are expected to "touch" more than $10 trillion worth of global business revenues in 2019 - powering roughly 5% of the worldwide economy. While that statistic alone is eye-popping, these numbers, according to the report, are only expected to grow in the coming years, fueled by more organizations embracing hybrid cloud infrastructures. As a result, there is immense opportunity for Red Hat partners and potential partners to capitalize on the growth and power of RHEL.

  • Executing .NET Core functions in a separate process [Ed: IBM/Red Hat is pushing Microsoft patent traps again (and yes, Microsoft is still suing)]
  • DevNation Live: 17 million downloads of Visual Studio Code Java extension [Ed: Also celebrating Microsoft again (as if helping the proprietary MSVS 'ecosystem' is their goal now)]
  • The NeuroFedora Blog: NEURON in NeuroFedora needs testing

    We have been working on including the NEURON simulator in NeuroFedora for a while now. The build process that NEURON uses has certain peculiarities that make it a little harder to build.

    For those who are interested in the technical details: while the main NEURON core is built using the standard ./configure; make; make install process that cleanly differentiates the "build" and "install" phases, the Python bits are built as a "post-install hook". That is to say, they are built after the other bits, in the "install" step instead of the "build" step. This implies that the build is not quite straightforward and must be slightly tweaked to ensure that the Fedora packaging guidelines are met.
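
A note on the OpenShift build item above: "first class APIs" means that builds are ordinary API objects a client can create, list and watch like any other Kubernetes resource. As a rough, hedged illustration (not taken from the article), the Python sketch below lists Build objects in a project and prints their phase using the standard Kubernetes client; the project name "my-project" and the kubeconfig setup are assumptions.

    # Hypothetical sketch: inspect OpenShift Build objects through the
    # Kubernetes API, to illustrate that builds are plain API resources.
    # Assumes a reachable cluster, a valid kubeconfig, and an existing
    # project called "my-project" that already contains builds.
    from kubernetes import client, config

    config.load_kube_config()          # or load_incluster_config() inside a pod
    api = client.CustomObjectsApi()

    builds = api.list_namespaced_custom_object(
        group="build.openshift.io",
        version="v1",
        namespace="my-project",
        plural="builds",
    )

    for build in builds.get("items", []):
        name = build["metadata"]["name"]
        phase = build.get("status", {}).get("phase", "Unknown")
        print(f"{name}: {phase}")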
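
The libinput item above describes the old heuristic: a lower zone where a new touch is always a thumb, an upper zone where a timeout and movement thresholds decide, and a speed rule that treats any touch arriving during fast motion as a thumb. The toy Python below is only a sketch of that logic for readers who want to see it spelled out; the zone boundaries, timeout and speed threshold are invented values, not libinput's actual C code or defaults.

    # Toy re-implementation of the *old* libinput thumb heuristic described
    # above. All thresholds are made up for illustration; libinput itself is
    # written in C and tunes these values per device.

    LOWER_ZONE_Y = 0.85          # y >= this (normalized, bottom of the pad) is always a thumb
    UPPER_ZONE_Y = 0.70          # y >= this is the "maybe a thumb" zone
    THUMB_TIMEOUT_MS = 300       # resting this long in the zone looks like a thumb
    THUMB_MOVE_THRESHOLD = 0.02  # moving farther than this looks like a finger
    FAST_MOTION_SPEED = 1.5      # if another finger moves faster, new touches are thumbs

    def classify_touch(start_y, resting_ms, distance_moved, other_finger_speed):
        """Return 'YES', 'NO' or 'MAYBE' for a new touch, Schroedinger-style."""
        # Speed rule: no business dropping fingers mid-fast-interaction.
        if other_finger_speed >= FAST_MOTION_SPEED:
            return "YES"
        # Lower zone: always a thumb.
        if start_y >= LOWER_ZONE_Y:
            return "YES"
        # Upper thumb zone: wait for movement or a timeout before deciding.
        if start_y >= UPPER_ZONE_Y:
            if distance_moved > THUMB_MOVE_THRESHOLD:
                return "NO"       # it moved like a pointing finger
            if resting_ms >= THUMB_TIMEOUT_MS:
                return "YES"      # parked long enough to be a thumb
            return "MAYBE"        # not enough evidence yet
        # Outside both zones: an ordinary finger.
        return "NO"

    if __name__ == "__main__":
        print(classify_touch(0.90, 0, 0.0, 0.0))    # YES: lower zone
        print(classify_touch(0.75, 100, 0.0, 0.0))  # MAYBE: still undecided
        print(classify_touch(0.30, 50, 0.05, 2.0))  # YES: fast-motion rule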

From Linux to cloud, why Red Hat matters for every enterprise

Filed under
Linux
Red Hat

In 1994, if you wanted to make money from Linux, you were selling Linux CDs for $39.95. By 2016, Red Hat had become the first $2 billion Linux company. But, in the same year, Red Hat was shifting its long-term focus from Linux to the cloud.

Here's how Red Hat got from mail-order CDs to the top Linux company and a major cloud player, and where it may go from here now that it is owned by IBM.

Read more

IBM, Red Hat and Servers

Filed under
Red Hat
Server
  • Using KubeFed to Deploy Applications to OCP3 and OCP4 Clusters
  • IBM Announces Three New Open Source Projects for Developing Apps for Kubernetes and the Data Asset eXchange (DAX), the Linux Foundation Is Having a Sysadmin Day Sale, London Launches Open-Source Homebuilding App and Clonezilla Live 2.6.2-15 Released

    IBM this morning announces three new open-source projects that "make it faster and easier for you to develop and deploy applications for Kubernetes". Kabanero "integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain". Appsody "gives you pre-configured stacks and templates for a growing set of popular open source runtimes and frameworks, providing a foundation on which to build applications for Kubernetes and Knative deployments". And Codewind "provides extensions to popular integrated development environments (IDEs) like VS Code, Eclipse, and Eclipse Che (with more planned), so you can use the workflow and IDE you already know to build applications in containers."

    IBM also today announces the Data Asset eXchange (DAX), which is "an online hub for developers and data scientists to find carefully curated free and open datasets under open data licenses". The press release notes that whenever possible, "datasets posted on DAX will use the Linux Foundation's Community Data License Agreement (CDLA) open data licensing framework to enable data sharing and collaboration. Furthermore, DAX provides unique access to various IBM and IBM Research datasets. IBM plans to publish new datasets on the Data Asset eXchange regularly. The datasets on DAX will integrate with IBM Cloud and AI services as appropriate."

  • Data as the new oil: The danger behind the mantra

    Not a week goes by that I don’t hear a tech pundit, analyst, or CIO say “data is the new oil.” This overused mantra suggests that data is a commodity that can become extremely valuable once refined. Many technologists have used that phrase with little knowledge of where it originated – I know I wasn’t aware of its origin. 

    It turns out the phrase is attributed to Clive Humby, a British mathematician who helped create British retailer Tesco’s Clubcard loyalty program. Humby quipped, “Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc., to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.”

  • How to explain deep learning in plain English

    Understanding artificial intelligence sometimes isn’t a matter of technology so much as terminology. There’s plenty of it under the big AI umbrella – such as machine learning, natural language processing, computer vision, and more.

    Compounding this issue, some AI terms overlap. Being able to define key concepts clearly – and subsequently understand the relationships and differences between them – is foundational to crafting a solid AI strategy. Plus, if the IT leaders in your organization can't articulate terms like deep learning, how can they be expected to explain them (and other concepts) to the rest of the company?

  • How to make the case for service mesh: 5 benefits

    Service mesh is a trending technology, but that alone does not mean every organization needs it. As always, adopting a technology should be driven by the goals it helps you attain or, put another way, the problems it helps you solve.

    It’s certainly worth understanding what a service mesh does – in part so you can explain it to other people. Whether or not you actually need one really depends upon your applications and environments.

Network Security Toolkit 30-11210

Filed under
GNU
Linux
Red Hat
Security

We are pleased to announce the latest NST release: "NST 30 SVN:11210". This release is based on Fedora 30 using Linux Kernel: "kernel-5.1.17-300.fc30.x86_64". This release brings the NST distribution on par with Fedora 30.

Read more

Fedora Community Blog: Application service categories and community handoff

Filed under
Red Hat

The Community Platform Engineering (CPE) team recently wrote about our face-to-face meeting, where we developed a team mission statement and a framework for making our workload more manageable. Having more focus will allow us to progress higher-priority work for multiple stakeholders and move the needle on more initiatives more efficiently than we are able to right now.

During the F2F we walked through the process of how to gracefully remove ourselves from applications that do not fit our mission statement. The next couple of months will be a transition phase, as we want to ensure continuity and cause minimal disruption to the community. To assist in that strategy, we analysed our applications and came up with four classifications to which they could belong.

Read more

IBM, Red Hat and Fedora

Filed under
Red Hat
  • IBM Takes A Hands Off Approach With Red Hat

    IBM has been around long enough in the IT racket that it doesn't have any trouble maintaining distinct portfolios of products that have overlapping and often incompatible functions. The System/3, which debuted in 1969, is only five years younger than the System/360, which laid the foundation and set the pace for corporate computing when it launched in 1964. Both styles of machines continue to exist today as the IBM i on Power Systems platform and the System z.

    With the $34 billion acquisition of Red Hat, which closed last week, neither of those two legacy products is under threat, and IBM does not seem at all inclined to cease development of the legacy operating system and middleware stacks embodied in the IBM i and System z lines.

    As Arvind Krishna, senior vice president in charge of IBM's cloud and cognitive software products, put it bluntly in a call after the deal closed, IBM's customers expect Big Blue to maintain its own operating systems, middleware, storage, databases, and security software in the IBM i, AIX, and System z lines, and that is precisely what Big Blue is going to do. Krishna estimated that there is only about 5 percent overlap in products between Big Blue and Red Hat – something we talked about at length when the deal was announced last October – and added that in many enterprise accounts that use both Red Hat and IBM platforms, companies invest in both sets of software for different purposes – perhaps using JBoss in one case and WebSphere in another, for instance.

  • Tech cos go for Edtech tie-ups to get that ready workforce

    Companies like Wipro, Accenture, IBM and others are tying up with edtech partners like upGrad, Simplilearn and Udacity to have a ready-trained workforce they can deploy on projects. Additional benefits include minimal training costs after recruitment and less churn, as learners develop more ownership of their roles.

    The edtech firms provide campus recruits with the required platform, content, assignments and project work in their last semester of college to ensure they have the programming and emerging digital skills they need before they join.

  • Red Hat Enterprise Linux 8 improves performance for modern workloads

    Red Hat Enterprise Linux (RHEL) 8 can provide significant performance improvements over RHEL 7 across a range of modern workloads. To put this in context, we used RHEL 7.6 to execute multiple benchmarks on 2nd Generation Intel Xeon Scalable processors, and our hardware partners set 35 new world-record performance results using the same OS version. This post will highlight RHEL 8 performance gains over RHEL 7.

    How did we get here? The performance engineering team at Red Hat collaborates with software partners and hardware OEMs to measure and optimize performance across workloads that range from high-end databases and NoSQL databases packaged in RHEL to Java applications and third-party databases and applications from Oracle, Microsoft SQL Server, SAS, and SAP HANA.

    We run multiple benchmarks and measure the performance of CPU, memory, disk I/O and networking. Testing includes the filesystems we ship with Red Hat Enterprise Linux, such as XFS, Ext4, GFS2, Gluster and Ceph.

  • Federation V2 is now KubeFed

    Some time ago we talked about how Federation V2 on Red Hat OpenShift 3.11 enables users to spread their applications and services across multiple locales or clusters. KubeFed is a fast-moving project, and a lot has changed since our last blog post: among other things, Federation V2 has been renamed to KubeFed and we have released OpenShift 4.

    In today’s blog post we are going to look at KubeFed from an OpenShift 4 perspective, as well as show you a stateful demo application deployed across multiple clusters connected with KubeFed.

    There are still some unknowns around KubeFed, specifically in storage and networking. We are evaluating different solutions because we want to deliver a top-notch product to manage your clusters across multiple regions/clouds in a clear and user-friendly way. Stay tuned for more information to come! (A minimal example of what a federated deployment looks like follows after this list.)

  • Duplicity 0.8.01

    Duplicity 0.8.01 is now in rawhide. The big change here is that it now uses Python 3. I've tested it in my own environment, both on its own and with deja-dup, and both work.

    Please test and file bugs. I expect there will be more, but with Python 2 reaching EOL soon, it’s important to move everything we can to Python 3.
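
To make the KubeFed item above a little more concrete, here is a hedged sketch of spreading one application across clusters by creating a FederatedDeployment on the KubeFed host cluster with the Kubernetes Python client. The types.kubefed.io/v1beta1 group and the spec layout follow upstream KubeFed conventions, while the namespace, image and cluster names ("ocp3", "ocp4") are placeholders rather than details from the article.

    # Hypothetical sketch: push one Deployment to several member clusters by
    # creating a FederatedDeployment custom resource on the KubeFed host
    # cluster. Namespace, image and cluster names are illustrative only.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    federated_deployment = {
        "apiVersion": "types.kubefed.io/v1beta1",
        "kind": "FederatedDeployment",
        "metadata": {"name": "demo-app", "namespace": "demo"},
        "spec": {
            # The template is an ordinary Deployment spec...
            "template": {
                "metadata": {"labels": {"app": "demo-app"}},
                "spec": {
                    "replicas": 2,
                    "selector": {"matchLabels": {"app": "demo-app"}},
                    "template": {
                        "metadata": {"labels": {"app": "demo-app"}},
                        "spec": {"containers": [{"name": "web", "image": "nginx:1.17"}]},
                    },
                },
            },
            # ...and placement lists the member clusters that should receive it.
            "placement": {"clusters": [{"name": "ocp3"}, {"name": "ocp4"}]},
        },
    }

    api.create_namespaced_custom_object(
        group="types.kubefed.io",
        version="v1beta1",
        namespace="demo",
        plural="federateddeployments",
        body=federated_deployment,
    )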

Fedora vs. Ubuntu: Linux Distros Compared

Filed under
Red Hat
Ubuntu

Fedora is a free and open source Linux-based operating system that has been around since 2003. Red Hat, the world’s largest open source company prior to being bought by IBM, sponsors the project. Fedora serves as the foundation for Red Hat Enterprise Linux, a version of Linux intended for companies and servers rather than personal desktop use.

Ubuntu became the most popular Linux-based operating system not long after launching in 2004. Billionaire Mark Shuttleworth created a company called Canonical whose purpose was to create a version of Linux for general computer users. Ubuntu was that desktop.

Read more

OpenShift and Fedora Program Management Under IBM

Filed under
Red Hat
  • OpenShift Commons Briefing: Quay v3 Release Update and Road Map

    In this briefing, Dirk Herrmann, Red Hat’s Quay Product Manager walks through Quay v3.0’s features, and discusses the road map for future Quay releases, including a progress update on the open sourcing of Quay.

    Built for storing container images, Quay offers visibility over images themselves, and can be integrated into your CI/CD pipelines and existing workflows using its API and other automation features. Quay was first released in 2013, as the first enterprise hosted registry. Six years later, we’ve celebrated the first major release of the container registry since it joined the Red Hat portfolio of products through the acquisition of CoreOS in 2018.

  • FPgM report: 2019-28

    Here’s your report of what has happened in Fedora Program Management this week. I am on PTO the week of 15 July, so there will be no FPgM report or FPgM office hours next week.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Upcoming FSF Talks and Alex Oliva (Linux-libre/GNU) Leaves Red Hat

Filed under
GNU
Red Hat

Find lost files with Scalpel

Filed under
Red Hat
Software
HowTos

As a system administrator, part of your responsibility is to help users manage their data. One of the vital aspects of doing that is to ensure your organization has a good backup plan, and that your users either make their backups regularly, or else don’t have to because you’ve automated the process.

However, sometimes the worst happens. A file gets deleted by mistake, a filesystem becomes corrupt, or a partition gets lost, and for whatever reason, the backups don’t contain what you need.

As we discussed in How to prevent and recover from accidental file deletion in Linux, before trying to recover lost data, you must find out why the data is missing in the first place. It’s possible that a user has simply misplaced the file, or that there is a backup that the user isn’t aware of. But if a user has indeed removed a file with no backups, then you know you need to recover a deleted file. If a partition table has become scrambled, though, then the files aren’t really lost at all, and you might want to consider using TestDisk to recover the partition table, or the partition itself.

What happens if your file or partition recovery isn't successful, or is only partially successful? Then it's time for Scalpel. Scalpel performs file carving operations based on patterns describing unique file types. It looks for these patterns based on binary strings and regular expressions, and then extracts the file accordingly.
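
Scalpel itself is driven by the pattern definitions in its configuration file rather than by user code, but the carving idea is easy to sketch. The hedged Python example below scans a raw image for the well-known JPEG start and end markers and writes out each span it finds; the image path, output names and size cap are placeholders, and a real carver such as Scalpel handles many more file types and edge cases (and does not read the whole image into memory).

    # Minimal illustration of header/footer file carving, the technique the
    # article attributes to Scalpel. Only JPEG is handled; "disk.img" and the
    # output prefix are placeholders.
    HEADER = b"\xff\xd8\xff"      # JPEG start-of-image marker
    FOOTER = b"\xff\xd9"          # JPEG end-of-image marker
    MAX_SIZE = 20 * 1024 * 1024   # stop looking for a footer after 20 MB

    def carve_jpegs(image_path, out_prefix="carved"):
        """Scan a raw image and write out every header..footer span found."""
        with open(image_path, "rb") as f:
            data = f.read()       # fine for a sketch, not for a real disk image
        count = 0
        pos = data.find(HEADER)
        while pos != -1:
            end = data.find(FOOTER, pos + len(HEADER), pos + MAX_SIZE)
            if end != -1:
                with open(f"{out_prefix}-{count:04d}.jpg", "wb") as out:
                    out.write(data[pos:end + len(FOOTER)])
                count += 1
                pos = data.find(HEADER, end + len(FOOTER))
            else:
                pos = data.find(HEADER, pos + len(HEADER))
        return count

    if __name__ == "__main__":
        print(carve_jpegs("disk.img"), "file(s) carved")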

Read more

More in Tux Machines

Audiocasts/Shows: Linux in the Ham Shack, FLOSS Weekly, Test and Code

  • LHS Episode #292: Digital Operation Deep Dive

    Welcome to Episode 292 of Linux in the Ham Shack. In this episode, the hosts are joined by Rob, KA2PBT, for a deep discussion of digital mode operation on the amateur radio bands, including what modes are available and the technology behind the creation and operation of those modes. They even dive into the current controversy behind FCC rules regarding encryption, PACTOR-4 and much more. Thank you for tuning in and we hope you have a wonderful week.

  • FLOSS Weekly 538: Leo Laporte

    Randal Schwartz and Jonathan Bennett talk to Leo Laporte about FLOSS's history and the TWiT Network.

  • Test and Code: 81: TDD with flit

    In the last episode, we talked about going from script to supported package. I worked on a project called subark and did the packaging with flit. Today's episode is a continuation in which we add new features to a supported package and look at how to develop and test a flit-based package. (A minimal sketch of a flit-friendly module follows at the end of this list.)
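
A note on the Test and Code item above: flit can package a plain module as long as it carries a docstring (used as the description) and a __version__, which is what keeps the script-to-package step lightweight. The sketch below shows the shape of such a module plus two pytest-style tests in the TDD spirit of the episode; the helper function and its tests are invented for illustration and are not the project's real code.

    # A minimal flit-friendly module: flit takes the package description from
    # the module docstring and the version from __version__. The helper and
    # its tests are placeholders, not code from the project discussed above.
    """Tiny example utilities used to demonstrate packaging with flit."""

    __version__ = "0.1.0"

    def extract_between(text, start_marker, end_marker):
        """Return the text found between two markers, or '' if absent."""
        begin = text.find(start_marker)
        if begin == -1:
            return ""
        begin += len(start_marker)
        end = text.find(end_marker, begin)
        return text[begin:end] if end != -1 else ""

    # TDD-style checks, runnable with pytest; in a real flit project they
    # would live in a separate test module that imports the package.
    def test_extract_between_finds_span():
        assert extract_between("a [b] c", "[", "]") == "b"

    def test_extract_between_missing_marker():
        assert extract_between("no markers here", "[", "]") == ""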

Windows vs Ubuntu

Kubuntu is my favorite derivative of all the Ubuntu-based operating systems. I cannot point out any feature as a favorite because I like all of them. Everything mentioned above is part of my daily workflow. Now that you know all of this, it is worth trying them out. I was skeptical at first, but once I built my flow and learned how to utilize these features I could do everything faster, with fewer keystrokes, and, most importantly, I have a nicely organized desktop that helps me minimize brain fatigue while doing my job. Kubuntu is a great distro to switch to if you're coming from Windows. They have quite similar UIs, and Kubuntu has all the features Windows has, plus more.

Read more

KDE: KDevelop 5.3.3 Released, Latte Dock Update and Release of Kaidan 0.4.1

  • KDevelop 5.3.3 released

    Today we provide a stabilization and bugfix release with version 5.3.3. This is a bugfix-only release, which introduces no new features and as such is a safe and recommended update for everyone currently using a previous version of KDevelop 5.3. You can find a Linux AppImage as well as the source code archives on our download page. Windows installers are no longer offered; we are looking for someone interested in taking care of that.

  • Latte, Documentation and Reports...

    The first Latte beta release for v0.9.0 is getting ready and I am really happy about it :) . But today, instead of talking about the beta release, I am going to focus on two last-minute "arrivals" for v0.9: Layouts Reports and Documentation. If you want to read the previous article first, you can do so at Latte and "Flexible" settings...

  • Kaidan 0.4.1 released!

    After some problems were encountered in the previous release, we tried to fix the most urgent bugs.

Security: Linux 5.2 Dissection, New Patches, New ZDNet (CBS) FUD and Kali NetHunter App Store

  • Kees Cook: security things in Linux v5.2

    Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel. That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

  • Security updates for Wednesday

    Security updates have been issued by Debian (libreoffice), Red Hat (thunderbird), SUSE (ardana and crowbar, firefox, libgcrypt, and xrdp), and Ubuntu (nss, squid3, and wavpack).

  • Malicious Python libraries targeting Linux servers removed from PyPI [Ed: Python does not run only on Linux, but Microsoft-funded sites like ZDNet (CBS) look for ways to blame everything on "Linux", even malicious software that gets caught in the supply chain]
  • Malicious Python Libraries Discovered on PyPI, Offensive Security Launches the Kali NetHunter App Store, IBM Livestreaming a Panel with Original Apollo 11 Technicians Today, Azul Systems Announces OpenJSSE and Krita 4.2.3 Released

    Offensive Security, the creators of open-source Kali Linux, has launched the Kali NetHunter App Store, "a new one stop shop for security relevant Android applications. Designed as an alternative to the Google Play store for Android devices, the NetHunter store is an installable catalogue of Android apps for pentesting and forensics". The press release also notes that the NetHunter store is a slightly modified version of F-Droid: "While F-Droid installs its clients with telemetry disabled and asks for consent before submitting crash reports, the NetHunter store goes a step further by removing the entire code to ensure that privacy cannot be accidentally compromised". See the Kali.org blog post for more details.