Fedora, Red Hat and IBM

  • Fedora 30 Will Get Bash 5.0 But Yum's Death Sentence Postponed To F31

    Fedora's Engineering and Steering Committee (FESCo) has approved new work around the in-development Fedora 30. 

    Originally Fedora 29 was going to drop the old Yum package manager bits, now that the DNF package manager has been in good shape for years and is largely a drop-in replacement for Yum. That didn't happen for Fedora 29, and dropping Yum 3 was recently proposed for Fedora 30, but with that change coming in late and some tooling bits not ready in time, it has been deferred to Fedora 31. FESCo approves of dropping Yum 3 in Fedora 31 and hopes it will be removed right after Rawhide branches for F30, leaving plenty of time to fix any issues or other unexpected problems that may come up. 
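
    The tooling that wasn't ready typically means scripts and utilities built on Yum's Python API. As a rough, hypothetical illustration of where such code ends up (not from the article), here is a minimal sketch of a package query via DNF's Python API, assuming a Fedora host with python3-dnf installed; "bash" is just an example package name:

        # Sketch: list available versions of a package via the DNF Python API,
        # the kind of call yum-era tooling migrates to before Yum 3 goes away.
        import dnf

        base = dnf.Base()
        base.read_all_repos()   # load repo definitions from /etc/yum.repos.d
        base.fill_sack()        # populate the package sack from enabled repos

        # Mirrors `dnf list --available bash` on the command line
        for pkg in base.sack.query().available().filter(name="bash"):
            print(pkg.name, pkg.evr, pkg.reponame)

        base.close()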

  • Measuring user experience success with building blocks

    PatternFly is an open source design system used by Red Hat to maintain visual consistency and usability across the product portfolio. When the PatternFly team started work on PatternFly 4, the next major version of the system, they focused a large part of their effort on evolving the visual language. But how would users respond to the new look and feel?

    To get the raw and unfiltered feedback the team needed, Sara Chizari, a UXD user researcher, planned a reaction study with a fun twist and then headed to Red Hat Summit in San Francisco.

  • Backup partners target Red Hat Ceph Storage

    Red Hat Ceph Storage provides object, block and file data services for organizations modernizing their hybrid-cloud and data analytics infrastructures. With the release of Red Hat Ceph Storage 3.2, improved performance and functionality are driving new storage use cases in the modernized datacenter.

    In addition to data security and integrity, organizations must consider their strategy around data protection, backup and archiving. Whether you are backing up your enterprise application data as part of a disaster recovery strategy, or you are performing deep archives of sensitive records, rich media, or regulated data, Red Hat works with industry-leading backup, recovery and archiving partners to certify Ceph as a backup target for your most important data.
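
    As a concrete, hypothetical illustration of Ceph as a backup target: object-based backup tools talk to Ceph through its S3-compatible RADOS Gateway. The endpoint, credentials, and bucket name below are placeholders, and a real backup product layers cataloging and retention on top of this plumbing:

        # Sketch: push a backup archive to a Ceph cluster via the S3-compatible
        # RADOS Gateway, using the standard boto3 client. All names are placeholders.
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="http://rgw.example.com:7480",  # hypothetical RGW endpoint
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )

        s3.create_bucket(Bucket="nightly-backups")
        s3.upload_file("/var/backups/app.tar.gz", "nightly-backups", "app/app.tar.gz")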

  • Effortless API creation with full API lifecycle using Red Hat Integration (Part 1)

    Nowadays, API development with proper lifecycle management often takes days, if not weeks, to get a simple API service up and running. One of the main reasons is that far too many parties are involved in the process, on top of the hours of development and configuration required.

  • Announcing Kubernetes-native self-service messaging with Red Hat AMQ Online

    Microservices architecture is taking over software development discussions everywhere. More and more companies are adopting microservices as the core of their new systems. However, once you go beyond the “microservices 101” tutorials that a quick Google search turns up, the required service-to-service communication becomes more and more complex. Scalable, distributed systems, container-native microservices, and serverless functions benefit from decoupled communications to access other dependent services. Asynchronous (non-blocking) direct or brokered interaction is usually referred to as messaging.

    Managing and setting up messaging infrastructure components for development use has usually been a long prerequisite task, requiring several days on the project calendar. Need a queue or topic? Wait at least a couple of weeks: raise a ticket with your infrastructure operations team, grab a large cup of coffee, and pray that they have some time to provision it. When your development team is adopting an agile approach, waiting days for infrastructure is not acceptable.
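
    The decoupling being described is easy to see in miniature. The toy Python sketch below illustrates brokered messaging in general (it is not the AMQ Online API): the producer and consumer never call each other directly, only the queue sitting between them, which in AMQ Online would be a self-service address on a shared broker rather than an in-process object:

        # Toy illustration of brokered, asynchronous messaging using only the
        # standard library: producer and consumer know the queue, not each other.
        import queue
        import threading

        broker = queue.Queue()   # stands in for a broker-managed queue or topic
        SENTINEL = object()      # tells the consumer to stop

        def producer():
            for i in range(5):
                broker.put(f"order-{i}")   # fire-and-forget from the producer's view
            broker.put(SENTINEL)

        def consumer():
            while True:
                msg = broker.get()
                if msg is SENTINEL:
                    break
                print("processing", msg)

        threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()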

  • Settling In With IBM i For The Long Haul

    If nothing else, the IBM i platform has exhibited extraordinary longevity. One might even say legendary longevity, if you want to take its history all the way back to the System/3 minicomputer from 1969. This is the real starting point in the AS/400 family tree, and this is when Big Blue, for very sound legal, technical, and marketing reasons, decided to fork its products to address the unique needs of large enterprises (with the System/360 mainframe and its follow-ons) and small and medium businesses (starting with the System/3 and moving on through the System/34, System/32, System/38, and System/36 in the 1970s and early 1980s and passing through the AS/400, AS/400e, iSeries, System i, and then IBM i on Power Systems platforms).

    It has been a long run indeed. Many customers who have invested in the platform started way back then with the early versions of RPG, and they have moved their applications forward and changed them as their businesses evolved and the depth and breadth of corporate computing changed, moving on up through RPG II, RPG III, RPG IV, ILE RPG, and now RPG free form. Being on this platform for even three decades makes you a relative newcomer.

More on Fedora 31

  • Fedora 31 Should Be Out Around The End of November

    While there was once talk of Fedora 31 never happening, or being significantly delayed so the developers could focus on re-tooling the Linux distribution, they opted for the sane approach of not throwing off the release cadence while working on low-level changes around the platform. A draft of the release schedule for Fedora 31 has now been published, and it puts the release date at the end of November.

    Rather than delaying or cancelling the Fedora 31 release, it will go on like normal and the developers will need to fit their changes into the confines of their traditional six-month release cadence.

  • Draft Fedora 31 schedule available

    It’s almost time for me to submit the Fedora 31 schedule to FESCo for approval. Before I do that, I’m sharing it with the community for comment. After some discussion before the end of the year, we decided not to go with an extended development cycle for Fedora 31. After getting input from teams within Fedora, I have a draft schedule available.

    The basic structure of the Fedora 31 schedule is pretty similar to the Fedora 30 schedule. You may notice some minor formatting changes due to a change in the tooling, but the milestones are similar. I did incorporate changes from different teams. Some tasks that are no longer relevant were removed. I added tasks for the Mindshare teams. And I included several upstream milestones.

More in Tux Machines

Events: OpenStack, Open Source Day (OSD), and Intel

  • OpenStack Keeps One Eye on the Prize, One Over Its Shoulder
    The OpenStack Foundation (OSF) used its recent Open Infrastructure Show (OIS) to remind the open source community of its importance, maturity, and flexibility. But the event also showed that the group understands that the virtualized infrastructure environment is evolving rapidly. I must admit that heading into the OIS event I was not expecting much. Conversations I have had over the past year continued to show a strong core of OpenStack supporters, but it seemed that the platform’s innovative spirit was diminishing. And in such a rapidly evolving technology segment, any sort of diminishing momentum is the equivalent of going backwards.
  • Open Source Day 2019 focuses on the cloud, security and development
    The 12th edition of Open Source Day (OSD) will take place today at the Legia Warsaw Stadium in Poland’s capital city. The event will include presentations, forums and nine technical sessions spanning automation, containerization, cloud computing, virtualization, security, monitoring, CI/CD, software and app development and databases.
  • Inspur and Intel share Rocky testing data at premiere of OpenInfra Summit
  • Intel hosts Open Source Technology Summit - OSTS 19
  • Intel Pushes Open Source Hypervisor With Cloud Giants
    Intel, along with cloud giants Amazon and Google, is working on an open source hypervisor based on the rust-vmm project. The chipmaker discussed this and several other open source efforts at its Open Source Technology Summit, which kicked off yesterday. The company “is and has been one of the largest contributors to open source,” said Imad Sousou, Intel corporate vice president and general manager of system software products. “Intel is the No. 1 contributor to the Linux kernel. We write 10% to 12% of the Linux kernel code.” For the record: Red Hat is No. 2, and it contributes about 6%, according to Sousou.
  • Open Source to trickle into AI and Cloud
    Intel’s Clear Linux* Distribution is adding Clear Linux Developer Edition, which includes a new installer and store, bringing together toolkits to give developers an operating system with all Intel hardware features already enabled. Additionally, Clear Linux usages are…

Google: TensorFlow, Open Hardware and More on Collaboration

  • Beginner's guide for TensorFlow: The basics of Google's machine-learning library
    It is an open-source, accelerated-math library designed to help developers build and train machine-learning models using a wide range of hardware — CPUs, GPUs, and even specialized chips such as TPUs (Tensor Processing Units). While TensorFlow was originally designed for use with more powerful machines, it has evolved to be able to create models to run in all sorts of unlikely places, from browsers to low-power IoT devices. Today, TensorFlow can be used with a wide range of programming languages, including Python, Go, C++, Java, Swift, R, Julia, C#, Haskell, Rust, and JavaScript.
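
    For a taste of what building and training a model looks like, here is a minimal sketch assuming TensorFlow 2.x is installed as the tensorflow Python package; the toy data and single-layer network are made up purely for illustration:

        # Sketch: fit a one-neuron Keras model to the toy relationship y = 2x + 1.
        import numpy as np
        import tensorflow as tf

        x = np.arange(5, dtype=np.float32).reshape(-1, 1)  # toy inputs
        y = 2.0 * x + 1.0                                  # toy targets

        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
        model.compile(optimizer="sgd", loss="mse")
        model.fit(x, y, epochs=200, verbose=0)

        print(model.predict(np.array([[10.0]], dtype=np.float32)))  # close to 21
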
  • Google extends lowRISC FOSSi partnership
    Unlike proprietary processors, the design and instruction set architecture (ISA) for which are kept behind a typically expensive licence wall, free and open source silicon (FOSSi) does what it says on the tin: projects like RISC-V provide both the ISA and key implementations under permissive licences, allowing anyone to use, modify, distribute, and commercialise the technology without a single licence or royalty payment, including, in many cases, the ability to create a proprietary implementation, should they so choose. Following on from the news that it was a founding member of the Linux Foundation's CHIPS Alliance, an industry group set up to 'host and curate high-quality open source code relevant to the design of silicon devices', Google has now announced that it is extending its existing partnership with the lowRISC project to include additional funding, support, and the appointment of two Google staffers as board members on the project.
  • Google wants an open source silicon community for chip design
    As evidenced by Android and Chromium, Google has long been committed to open source software. The company now wants to foster a similar community for hardware and chip design, particularly open source silicon.
  • To Create Prosperity, Free Market Competition Isn’t Enough—You Need Collaboration Too
    What’s ironic is that all of this communal activity isn’t driven by beret-wearing revolutionaries plotting in coffee houses, but by many of today’s most powerful and profit-driven corporations, who act not out of altruism, but self-interest. The fact is that technology firms today who do not actively participate in open source communities are at a significant competitive disadvantage. For example, Chris DiBona, Director of Open Source at Google, once told me, “We released Android as an open source product because we knew that was the fastest way to grow adoption, which enabled us to preserve the relationships with customers for businesses like search, maps and Gmail.” That is the reality of today’s marketplace. You collaborate in order to compete effectively. Businesses that don’t accept that simple fact will find it difficult to survive.

    Science’s commitment to communal effort is not at all new, but is a thread running deep in America’s long history of technological dominance. And it’s not all about private companies competing with each other, either: it’s about how the market can benefit from public investment. When Vannevar Bush submitted his famous report, “Science, The Endless Frontier,” to President Truman at the end of World War II, he argued that scientific discovery should be considered a public good crucial to the competitiveness of the nation. The crux of his argument is that such efforts build capacity by creating what he called “scientific capital,” and he pointed out that “New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science.”

FOSS in Telco

  • The benefits of open source networking for enterprise IT
    Open source software has proved its benefits for various aspects of the IT community in terms of costs, agility and flexibility. Open source networking software is in its early stages of deployment among enterprises; meanwhile, hyperscale cloud providers and the largest service providers have made effective use of open source networking. It is standard in large IT organizations to consider open source software alongside packaged software and SaaS as part of their IT architecture. Enterprise IT shops frequently deploy open source software in test environments and when designing new applications, as in DevOps. IT organizations report a range of benefits from open source software, including innovative design, faster time to market and agility. While open source software helps reduce some costs, deployment in production environments is generally accompanied by a vendor-supplied support contract.
  • How “Lab as a Service” supports OPNFV and ONAP development
    The Interoperability Lab at the University of New Hampshire (UNH-IOL) is a community resource that gives developers and open source users access to resources that they might not have themselves. The “Lab as a Service” provides the necessary shared compute and networking resources for developers working on projects such as OPNFV. Access is remote via a VPN connection, so it acts like a remote server. Those new to OPNFV can create a virtual deployment on a single node and run small VNFs on top. It gives developers access to a bare-metal system for low-level checking and installs. The next step is to add better support for multi-node usage, integrating the CI work that's being done in the OPNFV project and making the system more compatible with ONAP development.
  • The modern data center and the rise in open-source IP routing suites

    Primarily, what the operators were looking for was a level of control in managing their networks which the network vendors couldn’t offer. That revolution blazed the path that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.

  • Tracking Telco Progress in Open Networking
    I recently attended the Open Infrastructure Summit (formerly the OpenStack Summit -- more on this later). While the conference has gotten smaller, I didn't hear any complaints. Instead, people attending recognized that the developers who showed up came to work on real problems, rather than to cheerlead for the latest technology. Despite some of the dire headlines of the past year, I did not sense a crisis or panic.
  • Exclusive: Google suspends some business with Huawei after Trump blacklist
  • 5G and #Huawei – Trade wars can be prevented by using Open Source
    Consumer Choice Center Managing Director Fred Roeder stressed that more openness and transparency of telephone and radio networks could lead to more trust in the soft- and hardware of infrastructure providers: “Outright bans by country of origin should only be the last resort for policy makers. Bans risk getting the global economy deeper into costly trade wars. Consumers benefit from competition and the fast rollout of new technologies such as 5G networks. At the same time, we are worried about vulnerabilities and potential backdoors in equipment and software. Closed systems have a much higher likelihood of hiding vulnerabilities. Hence more open systems and open source approaches can really help consumers, and governments, trust the security promises of 5G providers,” said Roeder.

    “Private efforts such as the Open Radio Access Network Alliance show that open source systems are an option for telecommunication infrastructure. It would be a win-win situation for consumers and industry if more companies would embrace open standards. An open source approach in telecommunications could revolutionize market access and rollout pace of new standards in the era of 5G, in the same way as blockchain does in the financial services and payment industry. Manufacturers that commit to open source systems show that they don’t have any vulnerabilities to hide, and at the same time have a compelling case not to be excluded on the basis of their country of origin,” he added.
  • Why 5G is a huge future threat to privacy
    The next generation of mobile communications, 5G, is currently a hot topic in two very different domains: technology – and politics. The latter is because of President Trump’s attempts to shut the Chinese telecoms giant Huawei out of Western procurement projects. That might work in the US, but the move is meeting a lot of resistance in Europe. There seem to be two primary concerns about allowing Chinese companies to build the new 5G infrastructure. One is a fear that it will help China consolidate its position as the leading nation in 5G technologies, and that it could come to be the dominant supplier of 5G hardware and software around the world. The other is more directly relevant to this blog: a worry that if Chinese companies install key elements of 5G systems, they will be able to spy on all the traffic passing through them. As the South China Morning Post reports, in an attempt to soothe those fears, Huawei has even promised to sign “no-spy agreements with governments”. That’s a rather ridiculous suggestion – as if signing an agreement would prevent Chinese intelligence agencies from using Huawei equipment for surveillance if they could.