better stability & security

A rolling release is good for one reason: you get the full security and bug-fix updates as intended by upstream.

No amount of backporting fixes is enough to keep a system secure and bug-free. It's as simple as that. If I backport fixes from the kernel git tree to a stable 2.6.2x kernel release, I'm most likely going to miss a lot of fixes. Cherry-picking fixes only for the popular bugs isn't a solution, and it is a weakness of static-release distributions.

The only requirement for a rolling release to work is to keep the base system as simple as possible. In theory, no downstream patching should be done in packages such as glibc, gcc, or the kernel unless the patch is one that is waiting to be merged into a future upstream release.

re: poll

For servers - Static release/repo.

The "theory" of rolling releases is great, but the real world application, not so much.

Servers MUST be stable and secure. With a rolling release, you rely too much on the upstream vendor not to fubar something your system must have (not that it can't be done - mainframes have been doing rolling upgrades for decades - it's just EXPENSIVE to do it right).

RHEL/CentOS has the right business model: forget the fluff (and/or the bleeding-edge stuff), put only well-tested software into the repos, backport security fixes as needed, and support the whole thing for 5 years (or longer for security patches).

Of course, it doesn't really matter which method the upstream vendor uses; you still need to run a parallel test environment alongside your production environment and test everything (and I mean EVERYTHING) in the former before rolling it out to the latter.
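
As a toy illustration of that gating discipline, here is a minimal Python sketch (mine, not the poster's): it promotes to production only the exact package versions that already passed the staging run. The manifest file names and the one-"name version"-pair-per-line format are invented for the example.

    #!/usr/bin/env python3
    # Illustrative sketch only: gate production updates on what has already
    # been validated in a parallel staging environment. File names and the
    # manifest format are hypothetical.

    STAGING_VALIDATED = "staging_validated.txt"    # updates that passed staging tests
    PRODUCTION_PENDING = "production_pending.txt"  # updates queued for production

    def load_manifest(path):
        """Return {package_name: version} from a one-pair-per-line text file."""
        packages = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#"):
                    name, version = line.split()
                    packages[name] = version
        return packages

    def main():
        validated = load_manifest(STAGING_VALIDATED)
        pending = load_manifest(PRODUCTION_PENDING)
        approved, held = [], []
        for name, version in sorted(pending.items()):
            # Promote only the exact version that survived the staging run.
            target = approved if validated.get(name) == version else held
            target.append(f"{name}-{version}")
        print("approved for production:", ", ".join(approved) or "none")
        print("held pending staging:   ", ", ".join(held) or "none")

    if __name__ == "__main__":
        main()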

It's just easier (for me, anyway) to plan your server environments (and their future) if you have static releases (though not on the ridiculously short six-month timeframe).

Which would you say is better for a Linux server?

I have heard this topic discussed in various forums, from various points of view.

Which would you say is the better choice for a Linux-based server?

Please give reasoning for your answers, and don't post "sux" or "rules" nonsense.

Big Bear

More in Tux Machines

Leftovers: OSS

OSS in the Back End

  • Open Source NFV Part Four: Open Source MANO
    Defined in the ETSI ISG NFV architecture, MANO (Management and Network Orchestration) is a layer, a combination of multiple functional entities, that manages and orchestrates the cloud infrastructure, resources, and services. It consists mainly of three entities: the NFV Orchestrator, the VNF Manager, and the Virtual Infrastructure Manager (VIM). Together these make up the MANO part of the ETSI NFV architecture diagram (a toy sketch of how the three interact appears after this list).
  • After the hype: Where containers make sense for IT organizations
    Container software and its related technologies are on fire, winning the hearts and minds of thousands of developers and catching the attention of hundreds of enterprises, as evidenced by the huge number of attendees at this week’s DockerCon 2016 event. The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is the popularity of containers that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.
  • Is Apstra SDN? Same idea, different angle
    The company’s product, called Apstra Operating System (AOS), takes policies based on the enterprise’s intent and automatically translates them into settings on network devices from multiple vendors. When the IT department wants to add a new component to the data center, AOS is designed to work out which configuration changes that addition requires and to carry them out (a much-simplified sketch of this intent-to-configuration idea appears after this list). The distributed OS is vendor-agnostic: it will work with devices from Cisco Systems, Hewlett Packard Enterprise, Juniper Networks, Cumulus Networks, the Open Compute Project, and others.
  • MapR Launches New Partner Program for Open Source Data Analytics
    Converged data vendor MapR has launched a new global partner program for resellers and distributors to leverage the company's integrated data storage, processing and analytics platform.
  • A Seamless Monitoring System for Apache Mesos Clusters
  • All Marathons Need a Runner. Introducing Pheidippides
    Activision Publishing, a computer games publisher, uses a Mesos-based platform to manage vast quantities of data collected from players to automate much of the gameplay behavior. To address a critical configuration management problem, James Humphrey and John Dennison built a rather elegant solution that puts all configurations in a single place, and named it Pheidippides.
  • New Tools and Techniques for Managing and Monitoring Mesos
    The platform includes a large number of tools, including Logstash, Elasticsearch, InfluxDB, and Kibana.
  • BlueData Can Run Hadoop on AWS, Leave Data on Premises
    We've been watching the Big Data space pick up momentum this year, and Big Data as a Service is one of the most interesting new branches of this trend to follow. In a new development in this space, BlueData, provider of a leading Big-Data-as-a-Service software platform, has announced that the enterprise edition of its BlueData EPIC software will run on Amazon Web Services (AWS) and other public clouds. Essentially, users can now run their cloud computing applications and services in an AWS instance while keeping data on-premises, which is required for some companies in the European Union.
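
To make the Open Source MANO item above a little more concrete, here is a toy Python sketch of how the three entities it names (NFV Orchestrator, VNF Manager, and VIM) relate to one another. The class and method names are my own illustrative inventions, not the ETSI-defined interfaces.

    # Toy model of the three MANO entities; not real ETSI NFV interfaces.

    class VirtualInfrastructureManager:
        """VIM: controls the virtualised compute/storage/network resources."""
        def allocate(self, cpus, memory_gb):
            print(f"VIM: allocating a VM with {cpus} vCPUs / {memory_gb} GB")
            return {"cpus": cpus, "memory_gb": memory_gb}

    class VNFManager:
        """VNFM: lifecycle management of individual VNFs."""
        def __init__(self, vim):
            self.vim = vim
        def instantiate(self, vnf_name, cpus, memory_gb):
            vm = self.vim.allocate(cpus, memory_gb)
            print(f"VNFM: instantiated VNF '{vnf_name}'")
            return {"name": vnf_name, "vm": vm}

    class NFVOrchestrator:
        """NFVO: end-to-end orchestration of services built from VNFs."""
        def __init__(self, vnfm):
            self.vnfm = vnfm
        def deploy_service(self, service_name, vnf_specs):
            print(f"NFVO: deploying network service '{service_name}'")
            return [self.vnfm.instantiate(*spec) for spec in vnf_specs]

    vim = VirtualInfrastructureManager()
    nfvo = NFVOrchestrator(VNFManager(vim))
    nfvo.deploy_service("example-service", [("firewall", 2, 4), ("router", 4, 8)])

The layering is the point of the sketch: the orchestrator never touches the infrastructure directly, it always goes through the VNF Manager and the VIM.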
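
Likewise, for the Apstra item, a much-simplified Python sketch of the intent-to-configuration idea: one abstract, vendor-neutral intent is rendered into per-vendor device settings. The vendor names, template strings, and command syntax below are invented for illustration and are not AOS code.

    # Invented vendor templates; real devices use richer, vendor-specific syntax.
    VENDOR_TEMPLATES = {
        "vendor_a": ["vlan {vlan}",
                     "interface {port}",
                     "  switchport access vlan {vlan}"],
        "vendor_b": ["set vlans v{vlan} vlan-id {vlan}",
                     "set interfaces {port} vlan members v{vlan}"],
    }

    def translate(intent, vendor):
        """Render one abstract intent into a vendor-flavoured list of config lines."""
        return [line.format(**intent) for line in VENDOR_TEMPLATES[vendor]]

    intent = {"vlan": 100, "port": "eth1"}   # the abstract, vendor-neutral intent
    for vendor in VENDOR_TEMPLATES:
        print(f"--- {vendor} ---")
        print("\n".join(translate(intent, vendor)))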

Industrial SBC builds on Raspberry Pi Compute Module

On Kickstarter, a “MyPi” industrial SBC using the RPi Compute Module offers a mini-PCIe slot, a serial port, wide-range power, and modular expansion. You might wonder why, in 2016, someone would introduce a sandwich-style single-board computer built around the aging, ARM11-based COM version of the original Raspberry Pi, the Raspberry Pi Compute Module. First, there are still plenty of industrial applications that don’t need much CPU horsepower, and second, the Compute Module is still the only COM based on Raspberry Pi hardware, although the cheaper, somewhat COM-like Raspberry Pi Zero, which has the same 700MHz processor, comes close.