
Better stability & security

Rolling release/repo
73% (183 votes)
Static release/repo
27% (67 votes)
Total votes: 250

Rolling release is good for one reason

Rolling release is good for one reason. You get the full security and bug fix updates as intended by upstream.

No amount of backporting fixes is enough to keep a system secure and bug-free. It's as simple as that. If I backport fixes from the kernel git tree to a stable kernel 2.6.2x release, I'm most likely going to miss a lot of fixes. Cherry-picking fixes only for popular bugs isn't a solution, and it leaves static-release distributions weaker.
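The failure mode described above can be sketched with a toy git repository (the file and commit names are invented for illustration; this is not the kernel tree itself): an upstream fix lands in two commits, the backporter cherry-picks only the well-publicized one, and the "stable" branch keeps shipping a half-fixed file.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

# Initial release: the stable branch forks off here.
printf 'buggy\n' > driver.c
git add driver.c && git commit -qm 'initial release'
git branch stable

# Upstream development: the fix lands in two commits.
printf 'half-fixed\n' > driver.c
git commit -qam 'fix: part 1 (the well-known CVE)'
fix1=$(git rev-parse HEAD)
printf 'fixed\n' > driver.c
git commit -qam 'fix: part 2 (follow-up nobody noticed)'

# The backporter cherry-picks only the famous commit onto stable.
git checkout -q stable
git cherry-pick -n "$fix1" && git commit -qm 'backport part 1'

cat driver.c   # prints "half-fixed" -- part 2 never made it
```

Scaled up from one toy file to the thousands of fixes flowing into an upstream tree per release cycle, this is the gap a rolling release avoids by shipping the upstream releases themselves.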

The only requirement for a rolling release to work is to keep the base system as simple as possible. Ideally, no downstream patching should be done in packages such as glibc, gcc, or the kernel unless a patch is waiting to be merged into a future upstream release.

re: poll

For servers - Static release/repo.

The "theory" of rolling releases is great, but the real-world application, not so much.

Servers MUST be stable and secure. With a rolling release, you rely too much on the upstream vendor not to fubar something your system must have (not that it can't be done - mainframes have been doing rolling upgrades for decades - it's just EXPENSIVE to do it right).

RHEL/CentOS has the right business model. Forget the fluff (and/or bleeding-edge stuff), only put well-tested software into their repos, backport security fixes as needed, and support the whole thing for 5 years (or longer for security patches).

Of course, it doesn't really matter what method the upstream vendor uses; you still need to run a parallel test environment alongside your production environment, and test everything (and I mean EVERYTHING) in the first before rolling it out to the second.

It's just easier (for me, anyway) to plan your server environments (and their future) if you have static releases (though not on the ridiculously short six-month timeframe).

Which would you say is better for a Linux server?

I have heard the topic discussed in various forums, from many points of view.

Which would you say is the better choice for a Linux-based server?

Please give reasoning for your answers, and don't post "sux" or "rules" nonsense.

Big Bear

More in Tux Machines

Openwashing Apple and Microsoft Proprietary Frameworks/Services

Viperr Linux Keeps Crunchbang Alive with a Fedora Flair

Do you remember Crunchbang Linux? Crunchbang (often referred to as #!) was a fan-favorite, Debian-based distribution that focused on using a bare minimum of resources. This was accomplished by discarding the standard desktop environment and using a modified version of the Openbox Window Manager. For some, Crunchbang was a lightweight Linux dream come true. It was lightning fast, easy to use, and hearkened back to the Linux of old. Read more

Openwashing Cars

  • Open source: sharing patents to speed up innovation
    Adjusting to climate change will require a lot of good ideas. The need to develop more sustainable forms of industry in the decades ahead demands vision and ingenuity. Elon Musk, chief executive of Tesla and SpaceX, believes he has found a way for companies to share their breakthroughs and speed up innovation. Fond of a bold gesture, the carmaker and space privateer announced back in 2014 that Tesla would make its patents on electric vehicle technology freely available, dropping the threat of lawsuits over its intellectual property (IP). Mr Musk argued the removal of pesky legal barriers would help “accelerate the advent of sustainable transport”. The stunning move has already had an impact. Toyota has followed Tesla by sharing more than 5,600 patents related to hydrogen fuel cell cars, making them available royalty free. Ford has also decided to allow competitors to use its own electric vehicle-related patents, provided they are willing to pay for licences. Could Tesla’s audacious strategy signal a more open approach to patents among leading innovators? And if more major companies should decide to adopt a carefree attitude to IP, what are the risks involved?
  • Autonomous car platform Apollo doesn't want you to reinvent the wheel
    Open source technologies are solving many of our most pressing problems, in part because the open source model of cooperation, collaboration, and almost endless iteration creates an environment where problems are more readily solved. As the adage goes, "given enough eyeballs, all bugs are shallow." However, self-driving vehicle technology is one rapidly growing area that hasn't been greatly influenced by open source. Most of today's autonomous vehicles, including those from Volkswagen, BMW, Volvo, Uber, and Google, ride on proprietary technology, as companies seek to be the first to deliver a successful solution. That changed recently with the launch of Baidu's Apollo.

today's leftovers

  • KDE Applications 18.04 Brings Dolphin Improvements, JuK Wayland Support
    The KDE community has announced the release today of KDE Applications 18.04 as the first major update to the open-source KDE application set for 2018.
  • Plasma Startup
    Startup is one of the rougher aspects of the Plasma experience and therefore something we’ve put some time into fixing [...] The most important part of any speed work is correctly analysing it. systemd-bootchart is nearly perfect for this job, but it’s filled with a lot of system noise.
  • Announcing Virtlyst – a web interface to manage virtual machines
    Virtlyst is a web tool that allows you to manage virtual machines. In essence it’s a clone of webvirtmgr, but using Cutelyst as the backend. The reasoning behind this was that my father-in-law needs a server for his ASP app on a Win2k server; the host has only 4 GiB of RAM, and after a week running webvirtmgr it was eating 300 MiB, close to 10% of all available RAM. To get a VNC or SPICE tunnel it spawns websockify, and each new instance uses around 20 MiB of RAM. I found this unacceptable for a tool that is only going to be used once in a while, such as when the Win2k box freezes or goes BSOD. CPU usage, while higher, didn’t play a role in this.
  • OPNFV: driving the network towards open source "Tip to Top"
    Heather provides an update on the current status of OPNFV. How is its work continuing, and how is it pursuing the overall mission? Heather says much of its work is really ‘devops’, and it's working on a continuous-integration basis with the other open source bodies. That work continues as more bodies join forces with the Linux Foundation. Most recently, OPNFV has signed a partnership agreement with the Open Compute Project. Heather says the overall OPNFV objective is to work towards open source ‘Tip to Top’, all built by the community in ‘open source’. “When we started, OPNFV was very VM oriented (virtual machine), but now the open source movement is looking more to cloud native and containerisation as the way forward,” she says. The body has also launched a C-RAN project to ensure that NFV will be ready to underpin 5G networks as they emerge.
  • Ubuntu Podcast from the UK LoCo: S11E07 – Seven Years in Tibet - Ubuntu Podcast
  • Failure to automate: 3 ways it costs you
    When I ask IT leaders what they see as the biggest benefit to automation, “savings” is often the first word out of their mouths. They’re under pressure to make their departments run as efficiently as possible and see automation as a way to help them do so. Cost savings are certainly a benefit of automation, but I’d argue that IT leaders who pursue automation for cost-savings alone are missing the bigger picture of how it can help their businesses. The true value of automation doesn’t lie in bringing down expenses, but rather in enabling IT teams to scale their businesses.
  • Docker Enterprise Edition 2.0 Launches With Secured Kubernetes
    After months of development effort, Kubernetes is now fully supported in the stable release of the Docker Enterprise Edition. Docker Inc. officially announced Docker EE 2.0 on April 17, adding features that have been in development in the Docker Community Edition (CE) as well as enhanced enterprise grade capabilities. Docker first announced its intention to support Kubernetes in October 2017. With Docker EE 2.0, Docker is providing a secured configuration of Kubernetes for container orchestration. "Docker EE 2.0 brings the promise of choice," Docker Chief Operating Officer Scott Johnston told eWEEK. "We have been investing heavily in security in the last few years, and you'll see that in our Kubernetes integration as well."