
Trimming the fat with Fluxbox

Filed under
Fluxbox

One of the oft-touted reasons to use openSUSE is its stellar support and packaging for a wide variety of desktop environments, which normally leads people to think of the four mainstays of the Linux desktop: GNOME, KDE, LXDE, and Xfce. While the "big four" certainly receive the lion's share of the attention, there is still a lot of attention paid to less popular window managers and desktop environments like Enlightenment, Openbox, Window Maker or Fluxbox.

Window managers, unlike desktop environments, strive to follow the Unix philosophy of "do one thing, do it well", so you won't see a lot of the bells and whistles you might be familiar with from KDE or GNOME. On the flip side, however, you'll see reduced memory and CPU usage.

rest here




More in Tux Machines

Fedora and Red Hat: Fedora's Modularity Initiative, Git, Servers, Buildah and Ansible

  • Fedora's modularity mess

    Fedora's Modularity initiative has been no stranger to controversy since its inception in 2016. Among other things, there were enough problems with the original design that Modularity went back to the drawing board in early 2018. Modularity has since been integrated with both the Fedora and Red Hat Enterprise Linux (RHEL) distributions, but the controversy continues, with some developers asking whether it's time for yet another redesign — or to abandon the idea altogether. Over the last month or so, several lengthy, detailed, and heated threads have explored this issue; read on for your editor's attempt to integrate what was said.

    The core idea behind Modularity is to split the distribution into multiple "streams", each of which allows a user to follow a specific project (or set of projects) at a pace that suits them. A Fedora user might appreciate getting toolchain updates as soon as they are released upstream while sticking with a long-term stable release of LibreOffice, for example. By installing the appropriate streams, this sort of behavior should be achievable, allowing a fair degree of customization.

    Much of the impetus — and development resources — behind Modularity come from the RHEL side of Red Hat, which has integrated Modularity into the RHEL 8 release as "Application Streams". This feature makes some sense in that setting; RHEL is famously slow-moving, to the point that RHEL 7 did not even support useful features like Python 3. Application Streams allow Red Hat (or others) to make additional options available with support periods that differ from that of the underlying distribution, making RHEL a bit less musty and old, but only for the applications a specific user cares about.

    The use case for Modularity in Fedora is arguably less clear. A given Fedora release has a support lifetime of 13 months, so there are limits to the level of stability that it can provide.
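
    The stream-selection workflow the article describes is exposed through DNF's module commands. A minimal sketch, assuming a package that actually ships multiple streams (the nodejs names below are illustrative, not taken from the article):

        # List the streams available for a package:
        dnf module list nodejs

        # Follow a specific stream instead of the distribution default:
        dnf module enable nodejs:12

        # Install content from the enabled stream:
        dnf module install nodejs:12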

  • Moving bugzilla overrides to dist-git

    A while ago Fedora had pkgdb to configure ACLs for each package repo and package-related admin actions. When we moved to 'pagure over dist-git', pagure already provided some of these capabilities. pkgdb would have needed a lot of effort to work with modern package branching (modularity) [1], with different lifecycles for each package that are unrelated to Fedora releases, so we decided to retire it and replace it with a different solution. One of the missing parts after retiring pkgdb was the ability to set different default bugzilla assignees for EPEL and Fedora. This was solved by creating a new repository called fedora-scm-requests [2]. A script would then parse the contents of the repository, merge that information with the main package admins and repo watchers from dist-git, and sync this information to bugzilla so that new bugs get assigned to the correct maintainers and all the interested parties get put on CC:. Each change required a pull request to this repo and someone from the infrastructure team to review and merge the patch. It is obvious that this doesn't scale with the huge number of packages that Fedora and EPEL have.

  • Red Hat customers want the hybrid cloud

    If you listen to some people, everyone and their corner office wants to move to the public cloud. Red Hat's global customers have a different take. Thirty-one percent of Red Hat's customers say "hybrid" describes their strategy best, 21% are leaning toward a private cloud approach, while only 4% see the public cloud as their first choice. There's only one little problem: Finding the staff with the right skills to make the jump from old-school IT to the cloud. Businesses prefer the hybrid cloud strategy for many different reasons -- but, overall, data security, cost benefits, and data integration led the pack. For years, the hybrid cloud wasn't that popular. With the rise of the Kubernetes-based hybrid cloud model and with Red Hat being one of the new-model hybrid cloud's leading proponents, customers are embracing the hybrid cloud.

  • Building with Buildah: Dockerfiles, command line, or scripts
  • How to write a multitask playbook in ansible

VirtualBox 6.1 Officially Released with Linux Kernel 5.4 Support, Improvements

Oracle today released the final version of VirtualBox 6.1, its open-source and cross-platform virtualization software for GNU/Linux, macOS, and Windows operating systems. VirtualBox 6.1 is the first major release in the VirtualBox 6 series of the popular virtualization platform and promises some exciting new features, such as support for the latest and greatest Linux 5.4 kernel series, the ability to import virtual machines from Oracle Cloud Infrastructure, and enhanced support for nested virtualization. "Support for nested virtualization enables you to install a hypervisor, such as Oracle VM VirtualBox or KVM, on an Oracle VM VirtualBox guest. You can then create and run virtual machines in the guest VM. Support for nested virtualization allows Oracle VM VirtualBox to create a more flexible and sophisticated development and testing environment," said Oracle. Read more
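
Readers wanting to try the nested-virtualization support can toggle it per VM from the command line. A minimal sketch, assuming a powered-off VM named "devbox" (the name is illustrative):

    # Enable nested hardware virtualization for the VM:
    VBoxManage modifyvm "devbox" --nested-hw-virt on

    # Confirm the setting took effect:
    VBoxManage showvminfo "devbox" | grep -i nested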

Programming Leftovers

  • A static-analysis framework for GCC

    One of the features of the Clang/LLVM compiler that has been rather lacking for GCC may finally be getting filled in. In a mid-November post to the gcc-patches mailing list, David Malcolm described a new static-analysis framework for GCC that he wrote. It could be the starting point for a whole range of code analysis for the compiler.

    According to the lengthy cover letter for the patch series, the analysis runs as an interprocedural analysis (IPA) pass on the GIMPLE static single assignment (SSA) intermediate representation. State machines are used to represent the code parsed, and the analysis looks for places where bad state transitions occur. Those state transitions represent constructs where warnings can be emitted to alert the user to potential problems in the code. There are two separate checkers included with the patch set: malloc() pointer tracking and checking for problems in using the FILE * API from stdio. There are also some other proof-of-concept state machines included: one to track sensitive data, such as passwords, that might be leaked into log files, and another to follow potentially tainted input data that is being used for array indexes and the like.

    The malloc() state machine, found in sm-malloc.cc (added by one of the patches), looks for typical problems that can occur with pointers returned from malloc(): double free, null dereference, passing a non-heap pointer to free(), and so on. Similarly, one of the patches adds sm-file.c for the FILE * checking; it looks for double calls to fclose() and for the failure to close a file.
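
    As a concrete illustration, the double free below is exactly the kind of bad state transition the malloc() state machine is meant to catch; the -fanalyzer option name comes from the patch posting, and details may change before any merge:

        #include <stdlib.h>

        void broken(void)
        {
            char *buf = malloc(64);
            if (!buf)
                return;
            free(buf);       /* buf transitions to the "freed" state */
            free(buf);       /* double free: a bad state transition */
        }

        /* Hypothetical invocation: gcc -fanalyzer -c example.c */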

  • RUST howto getting started – hello world

    If one is viewing this site using Firefox or another Gecko-engine browser… one is running Rust already. At the beginning, one was a big fan of Java – Java was (and still is) all the rage: theoretically write once, run anywhere on Linux, macOS and (thanks to Google) on mobile, even on the closed-source OS whose name shall not be mentioned. But nobody knows what the Java Virtual Machine does besides running bytecode, and Java on slow ARM CPUs is kind of a burden.
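
    For reference, the "hello world" such a howto builds toward is about as small as programs get; a minimal sketch:

        // hello.rs: compile with `rustc hello.rs` and run `./hello`,
        // or create a project with `cargo new hello && cargo run`.
        fn main() {
            println!("Hello, world!");
        }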

  • Async Interview #2: cramertj, part 3

    This blog post is continuing my conversation with cramertj. This will be the last post. In the first post, I covered what we said about Fuchsia, interoperability, and the organization of the futures crate. In the second post, I covered cramertj’s take on the Stream, AsyncRead, and AsyncWrite traits. We also discussed the idea of attached streams and the importance of GATs for modeling those.
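
    To give a flavor of the "attached stream" idea: such a stream yields items that borrow from the stream itself, which the ordinary Stream trait cannot express. A hedged sketch using a generic associated type (the trait and method names are made up for illustration, and GATs were still unstable at the time of the interview):

        use std::task::{Context, Poll};

        // The generic associated type lets each yielded item borrow
        // from the stream for the lifetime of the call:
        trait AttachedStream {
            type Item<'s> where Self: 's;

            fn poll_next<'s>(
                &'s mut self,
                cx: &mut Context<'_>,
            ) -> Poll<Option<Self::Item<'s>>>;
        }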

  • Python 3.7.6rc1 and 3.6.10rc1 are now available for testing

    Python 3.7.6rc1 and 3.6.10rc1 are now available. 3.7.6rc1 is the release preview of the next maintenance release of Python 3.7;  3.6.10rc1 is the release preview of the next security-fix release of Python 3.6. Assuming no critical problems are found prior to 2019-12-18, no code changes are planned between these release candidates and the final releases. These release candidates are intended to give you the opportunity to test the new security and bug fixes in 3.7.6 and security fixes in 3.6.10. While we strive to not introduce any incompatibilities in new maintenance and security releases, we encourage you to test your projects and report issues found to bugs.python.org as soon as possible. Please keep in mind that these are preview releases and, thus, their use is not recommended for production environments.

  • Print all git repos from a user (only curl and grep)
  • Linux Fu: Debugging Bash Scripts

    A recent post about debugging constructs surprised me. There were quite a few comments about how you didn't need a debugger, as long as you had printf. For that matter, we've all debugged systems where you had nothing but an LED to flash or otherwise turn on to communicate with the user. However, it is hard to deny that a debugger can help with complex code. To say you only need printf would be like saying you only need machine language. Technically accurate — you can do anything in machine language. But it sure makes things easier to have an assembler or some language to help you work out your problem. If you write a simple bash script, you can use the equivalent of printf — maybe that's the echo command, although there is usually a printf command on a typical system, if you want to use it. However, there are other things you can do with bash, including a pretty cool debugger, if you know how to find it. I assume you already know how to use echo and printf, but let's dig into how to trace execution line by line without the need for echo statements on every other line. Along the way, you'll learn how to get started with the bash debugger.
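
    As a taste of what the article covers, bash's built-in xtrace mode already gives line-by-line tracing without sprinkling echo statements everywhere; a minimal sketch:

        #!/bin/bash
        # Trace only the suspect region:
        set -x
        total=0
        for n in 1 2 3; do
            total=$((total + n))
        done
        set +x
        echo "total=$total"

        # Whole scripts can be traced without editing them:
        #   bash -x ./myscript.sh
        # PS4 controls the trace prefix; adding line numbers helps:
        #   PS4='+${LINENO}: ' bash -x ./myscript.sh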

Kernel: LWN Articles and Radeon Linux 5.6 Changes

  • Fixing SCHED_IDLE

    The scheduler implements many "scheduling classes", an extensible hierarchy of modules, and each class may further encapsulate "scheduling policies" that are handled by the scheduler core in a policy-independent way. The scheduling classes are described below in descending priority order; the Stop class has the highest priority, and the Idle class has the lowest.

    The Stop scheduling class is a special class used internally by the kernel. It doesn't implement any scheduling policy and no user task ever gets scheduled with it. The Stop class is, instead, a mechanism to force a CPU to stop running everything else and perform a specific task. As this is the highest-priority class, it can preempt everything else and nothing ever preempts it. It is used by one CPU to stop another in order to run a specific function, so it is only available on SMP systems. The Stop class creates a single, per-CPU kernel thread (or kthread) named migration/N, where N is the CPU number. This class is used by the kernel for task migration, CPU hotplug, RCU, ftrace, clock events, and more.

    The Deadline scheduling class implements a single scheduling policy, SCHED_DEADLINE, and it handles the highest-priority user tasks in the system. It is used for tasks with hard deadlines, like video encoding and decoding. The task with the earliest deadline is served first under this policy. The policy of a task can be set to SCHED_DEADLINE using the sched_setattr() system call by passing three parameters: the run time, deadline, and period. To ensure deadline-scheduling guarantees, the kernel must prevent situations where the current set of SCHED_DEADLINE threads is not schedulable within the given constraints. The kernel thus performs an admittance test when setting or changing SCHED_DEADLINE policy and attributes. This admission test calculates whether the change can be successfully scheduled; if not, sched_setattr() fails with the error EBUSY.

    The POSIX realtime (or RT) scheduling class comes after the deadline class and is used for short, latency-sensitive tasks, like IRQ threads. This is a fixed-priority class that schedules higher-priority tasks before lower-priority tasks. It implements two scheduling policies: SCHED_FIFO and SCHED_RR. In SCHED_FIFO, a task runs until it relinquishes the CPU, either because it blocks for a resource or it has completed its execution. In SCHED_RR (round-robin), a task will run for the maximum time slice; if the task doesn't block before the end of its time slice, the scheduler will put it at the end of the round-robin queue of tasks with the same priority and select the next task to run. The priority of tasks under the realtime policies ranges from 1 (low) to 99 (high).
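
    To make the admission-test discussion concrete, here is a hedged sketch of putting the current thread under SCHED_DEADLINE (10ms of run time per 100ms period). glibc has no wrapper for sched_setattr(), so the structure is declared by hand, as the sched_setattr(2) man page does, and the raw syscall is used; the call needs appropriate privileges, and an EBUSY failure means the admission test rejected the parameters:

        #define _GNU_SOURCE
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef SCHED_DEADLINE
        #define SCHED_DEADLINE 6
        #endif

        /* Declared by hand; glibc does not export struct sched_attr. */
        struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;     /* used by SCHED_OTHER/BATCH */
            uint32_t sched_priority; /* used by SCHED_FIFO/RR */
            uint64_t sched_runtime;  /* deadline parameters, in ns */
            uint64_t sched_deadline;
            uint64_t sched_period;
        };

        int main(void)
        {
            struct sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = SCHED_DEADLINE,
                .sched_runtime  =  10 * 1000 * 1000, /* 10 ms of CPU... */
                .sched_deadline = 100 * 1000 * 1000, /* ...due within 100 ms... */
                .sched_period   = 100 * 1000 * 1000, /* ...every 100 ms */
            };

            /* The admission test happens here; EBUSY means "not schedulable". */
            if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
                perror("sched_setattr");
                return 1;
            }
            puts("running under SCHED_DEADLINE");
            return 0;
        }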

  • Virtio without the "virt"

    One might ask why it makes sense to implement virtio devices in hardware. After all, they were originally designed for hypervisors and have been optimized for software rather than hardware implementation. Now that virtio support is widespread, the network effects allow hardware implementations to reuse the guest drivers and infrastructure. The virtio 1.1 specification defines ten device types, among them a network interface, SCSI host bus adapter, and console. Implementing a standards-compliant device interface lets hardware implementers focus on delivering the best device instead of designing a new device interface and writing guest drivers from scratch. Moreover, existing guests will work with the device out of the box, and applications utilizing user-space drivers, such as the DPDK packet processing toolkit, do not need to be relinked with new drivers — this is especially helpful when static linking is utilized.

    Implementing virtio in hardware also makes it easy to switch between hardware and software implementations. A software device can be substituted without changing guest drivers if the hardware device is acting up. Similarly, if the driver is acting up, it is possible to substitute a software device to make debugging the driver easier. It is possible to assign hardware devices to performance-critical guests while assigning software devices to the other guests; this decision can be changed in the future to balance resource needs. Finally, implementing virtio in hardware makes it possible to live-migrate virtual machines more easily. The destination host can have either software or hardware virtio devices.

  • 5.5 Merge window, part 1

    The 5.5 merge window got underway immediately after the release of the 5.4 kernel on November 24. The first week has been quite busy despite the US Thanksgiving holiday landing in the middle of it. Read on for a summary of what the first 6,300 changesets brought for the next major kernel release.

  • Radeon Linux 5.6 Changes Begin Queuing - Better Power Management, Adds DMCUB Controller

    The Linux 5.5 merge window closed less than a week ago, but AMD has already submitted its first batch of feature updates to DRM-Next: new graphics driver material aiming for Linux 5.6 early next year.