
Five common mistakes that Linux IT managers make

Filed under
OSS

After watching different IT managers repeat the same mistakes over the years, I've noticed a pattern. Here are the five most common mistakes, along with tips for avoiding them.

Mistake #1: Reactive, not proactive

The number one mistake made by IT managers of all stripes is to carry out their duties by reacting to problems only as they come up, rather than looking ahead to potential problems and having a solution in place ahead of time. This trait isn't unique to IT managers, but it can be a fatal flaw when the head of an IT department fails to take a proactive approach.

For example, proactive IT managers make sure they have disaster plans in place rather than trying to implement a disaster plan on the fly. A good IT manager will have plans to deal with hardware failure, natural disaster, compromised systems, and any other problem that might crop up. If you spend most of your time putting out fires, you're probably not being proactive enough -- or you're still cleaning up the previous manager's messes.

A proactive manager plans for the future in terms of capacity planning, upgrades, and support. What happens when an application outgrows a single server, for example? How difficult will it be to deploy on multiple servers, and is this even possible? When deploying open source software, you'll want to assess the health of the community that supports it. If it's a project without a long history and an active developer community, it may not be the best choice. Projects like Samba and Apache are safe bets, while a project that made its SourceForge debut three weeks ago is risky business.
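The article's rule of thumb for community health (a long history, an active developer community) can be sketched as a simple scoring function. This is an illustrative heuristic, not a standard metric; the thresholds and the function name `project_health` are assumptions chosen to match the Samba/Apache-versus-three-week-old-project contrast above.

```python
from datetime import date

def project_health(last_commit: date, today: date,
                   active_contributors: int, years_of_history: float) -> str:
    """Rough heuristic (not a standard metric) for judging whether an
    open source project is a safe dependency. Thresholds are illustrative."""
    days_stale = (today - last_commit).days
    if years_of_history >= 5 and active_contributors >= 10 and days_stale < 90:
        return "safe bet"          # e.g. Samba or Apache
    if years_of_history < 1 or active_contributors < 3:
        return "risky"             # e.g. a three-week-old SourceForge debut
    return "evaluate further"

# Established project, recently active:
print(project_health(date(2017, 9, 1), date(2017, 9, 25), 40, 20.0))   # safe bet
# Brand-new project with a single developer:
print(project_health(date(2017, 9, 24), date(2017, 9, 25), 1, 0.06))   # risky
```

In practice you would feed this from real data (commit logs, contributor counts), but even a crude check like this forces the health question to be asked before deployment rather than after.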

Many organizations experience costly growing pains because the IT infrastructure wasn't planned with an eye to the future. You may only need to support five or 50 users now, but what about three to five years down the road? If that seems like it's too far off to worry about, you've already failed.
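The three-to-five-year question above is answerable with back-of-the-envelope compound growth. The function below is a hypothetical sketch (the name `projected_users` and the 40% growth rate are assumptions for illustration, not figures from the article).

```python
def projected_users(current_users: int, annual_growth_rate: float, years: int) -> int:
    """Compound-growth projection: a back-of-the-envelope capacity check.
    annual_growth_rate is a fraction, e.g. 0.40 for 40% per year (assumed)."""
    return round(current_users * (1 + annual_growth_rate) ** years)

# 50 users today, growing 40% a year:
for years in (1, 3, 5):
    print(years, projected_users(50, 0.40, years))   # 70, 137, 269
```

Fifty users becoming roughly 270 in five years is exactly the kind of number that should shape server, storage, and licensing choices made today.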

When it comes to hardware, be picky: choose servers and equipment that will have a long life span and long-term support. Be cautious about choosing non-standard hardware that requires binary-only kernel modules. The vendor may provide support and upgrades now, but what happens if it stops supporting the hardware after a few years? That leaves you with the choice of ditching the hardware or being unable to upgrade the kernel.
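One quick way to spot binary-only modules on an existing system is the kernel's taint flags: loading a proprietary module sets bit 0 of /proc/sys/kernel/tainted. The sketch below decodes that bit; the helper name `has_proprietary_module` is an assumption for illustration, though the bit-0 meaning is documented kernel behavior.

```python
PROPRIETARY_MODULE = 1 << 0   # kernel taint bit 0: a non-GPL (proprietary) module is loaded

def has_proprietary_module(taint_value: int) -> bool:
    """True if the kernel's taint flags indicate a proprietary module.
    On a live Linux system, taint_value comes from /proc/sys/kernel/tainted."""
    return bool(taint_value & PROPRIETARY_MODULE)

# A pristine kernel reports 0; a kernel running a binary-only driver sets bit 0:
print(has_proprietary_module(0))      # False
print(has_proprietary_module(4097))   # True (bit 0 plus another taint flag set)
```

On a live machine you would read the integer with `int(open("/proc/sys/kernel/tainted").read())`; a nonzero bit 0 is a sign you are already tied to a vendor's upgrade schedule.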

Mistake #2: Failing to emphasize documentation and training

Full Article.

More in Tux Machines

Android Leftovers

Baidu puts open source deep learning into smartphones

A year after it open sourced its PaddlePaddle deep learning suite, Baidu has dropped another piece of AI tech into the public domain – a project to put AI on smartphones. Mobile Deep Learning (MDL) landed at GitHub under the MIT license a day ago, along with the exhortation "Be all eagerness to see it". MDL is a convolution-based neural network designed to fit on a mobile device. Baidu said it is suitable for applications such as recognising objects in an image using a smartphone's camera.

AMD and Linux Kernel

  • Ataribox runs Linux on AMD chip and will cost at least $250
    Atari released more details about its Ataribox game console today, disclosing for the first time that the machine will run Linux on an Advanced Micro Devices processor and cost $250 to $300. In an exclusive interview last week with GamesBeat, Ataribox creator and general manager Feargal Mac (short for Mac Conuladh) said Atari will begin a crowdfunding campaign on Indiegogo this fall and launch the Ataribox in the spring of 2018. The Ataribox will launch with a large back catalog of the publisher’s classic games. The idea is to create a box that makes people feel nostalgic about the past, but it’s also capable of running the independent games they want to play today, like Minecraft or Terraria.
  • Linux 4.14 + ROCm Might End Up Working Out For Kaveri & Carrizo APUs
    It looks like the upstream Linux 4.14 kernel may end up playing nicely with the ROCm OpenCL compute stack, if you are on a Kaveri or Carrizo system. While ROCm is promising as AMD's open-source compute stack complete with OpenCL 1.2+ support, its downside is that for now not all of the necessary changes to the Linux kernel drivers, LLVM Clang compiler infrastructure, and other components are yet living in their upstream repositories. So for now it can be a bit hairy to set up ROCm compute on your own system, especially if running a distribution without official ROCm packages. AMD developers are working to get all their changes upstreamed in each of the respective sources, but it's not something that will happen overnight and, given the nature of Linux kernel development, is something that will still take months to complete.
  • Latest Linux kernel release candidate was a sticky mess
    Linus Torvalds is not noted as having the most even of tempers, but after a weekend spent scuba diving, a glitch in the latest Linux kernel release candidate saw the Linux overlord merely label the mess "nasty". The release cycle was following its usual cadence when Torvalds announced Linux 4.14 release candidate 2, just after 5:00 PM on Sunday, September 24th.
  • Linus Torvalds Announces the Second Release Candidate of Linux Kernel 4.14 LTS
    Development of the Linux 4.14 kernel series continues with the second Release Candidate (RC) milestone, which Linus Torvalds himself announced this past weekend. The update brings more updated drivers and various improvements. Linus Torvalds kicked off the development of Linux kernel 4.14 last week when he announced the first Release Candidate, and now the second RC is available, packed full of goodies. These include updated networking, GPU, and RDMA drivers; improvements to the x86, ARM, PowerPC, PA-RISC, MIPS, and s390 hardware architectures; and various core networking, filesystem, and documentation changes.

Red Hat: ‘Hybrid Cloud’, University of Alabama, Red Hat Upgrades Ansible and Expectations