2nd New MakuluLinux Release Offers Flash and Substance

The MakuluLinux Flash distro is splashy and fast with a spiffy new look and new features. MakuluLinux developer Jacque Montague Raymer on Thursday announced the second of this year's three major releases in the Series 15 distro family. The Flash edition follows last month's LinDoz edition release. The much-awaited, innovative Core edition will debut between the end of November and mid-December.

MakuluLinux is a relatively new Linux OS whose positive reputation has been building since 2015. That three-year growth spurt involved a variety of desktop environments, and its small developer team has delivered a surprisingly efficient and productive desktop distribution in a short time. It is unusual to see a startup rise so quickly to offer an innovative and highly competitive computing platform.

Series 15 is not an update of last year's editions. This latest release introduces some radical changes that were under development for the last two years. The Series 15 releases of LinDoz and Flash are a complete rip-and-replace rebuild on top of an in-house developed computing base; both have been reworked from the ground up.

Read more

Kernel: LWN Linux Articles Now Outside the Paywall

  • What's a CPU to do when it has nothing to do?
    It would be reasonable to expect doing nothing to be an easy, simple task for a kernel, but it isn't. At Kernel Recipes 2018, Rafael Wysocki discussed what CPUs do when they don't have anything to do, how the kernel handles this, problems inherent in the current strategy, and how his recent rework of the kernel's idle loop has improved power consumption on systems that aren't doing anything. The idle loop, one of the kernel subsystems that Wysocki maintains, controls what a CPU does when it has no processes to run. Precise to a fault, Wysocki defined his terms: for the purposes of this discussion, a CPU is an entity that can take instructions from memory and execute them at the same time as any other entities in the same system are doing likewise. On a simple, single-core single-processor system, that core is the CPU. If the processor has multiple cores, each of those cores is a CPU. If each of those cores exposes multiple interfaces for simultaneous instruction execution, which Intel calls "hyperthreading", then each of those threads is a CPU.
  • New AT_ flags for restricting pathname lookup
    System calls like openat() have access to the entire filesystem — or, at least, that part of the filesystem that exists in the current mount namespace and which the caller has the permission to access. There are times, though, when it is desirable to reduce that access, usually for reasons of security; that has proved to be especially true in many container use cases. A new patch set from Aleksa Sarai has revived an old idea: provide a set of AT_ flags that can be used to control the scope of a given pathname lookup operation. There have been previous attempts at restricting pathname lookup, but none of them have been merged thus far. David Drysdale posted an O_BENEATH option to openat() in 2014 that would require the eventual target to be underneath the starting directory (as provided to openat()) in the filesystem hierarchy. More recently, Al Viro suggested AT_NO_JUMPS as a way of preventing lookups from venturing outside of the current directory hierarchy or the starting directory's mount point. Both ideas have attracted interest, but neither has yet been pushed long or hard enough to make it into the mainline.
  • Some numbers from the 4.19 development cycle
    The release of 4.19-rc6 on September 30 is an indication that the 4.19 development cycle is heading toward its conclusion. Naturally, that means it's time to have a look at where the contributions for this cycle came from. The upheavals currently playing out in the kernel community do not show at this level, but there are some new faces to be seen in the top contributors this time around. As of this writing, 13,657 non-merge changesets have found their way into the mainline for 4.19.
  • The modernization of PCIe hotplug in Linux
    PCI Express hotplug has been supported in Linux for fourteen years. The code, which is aging, is currently undergoing a transformation to fit the needs of contemporary applications such as hot-swappable flash drives in data centers and power-manageable Thunderbolt controllers in laptops. Time for a roundup. The initial PCI specification from 1992 had no provisions for the addition or removal of cards at runtime. In the late 1990s and early 2000s, various proprietary hotplug controllers, as well as the vendor-neutral standard hotplug controller, were conceived and became supported by Linux through drivers living in drivers/pci/hotplug. PCI Express (PCIe), instead, supported hotplug from the get-go in 2002, but its embodiments have changed over time. Originally intended to hot-swap PCIe cards in servers or ExpressCards in laptops, today it is commonly used in data centers (where NVMe flash drives need to be swapped at runtime) and by Thunderbolt (which tunnels PCIe through a hotpluggable chain of converged I/O switches, together with other protocols such as DisplayPort).

OSS Leftovers

  • Financial Services Embracing Open Source to Gain Edge in Innovation
By now, it’s pretty much a cliché to say that all companies should be technology companies. But in the case of banks and financial services these days, it's true. Many finance companies are early adopters of new technologies such as blockchain, AI, and Kubernetes, as well as leaders in open source development. And as they seek an edge to retain customers and win new ones, they are not afraid to try new things. At the Linux Foundation's inaugural Open FinTech Forum last week, attendees got a chance to discuss the latest state of open source adoption and the extent to which open source strategies are changing financial service businesses. The fact is, banks really do have tech businesses inside them. Capital One's DevExchange boasts several products that it developed for internal use and also made available as open source, including the Cloud Custodian DevOps engine and the Hydrograph big data ETL tool.
  • Why the Open Source Enterprise Search Trend Will Only Accelerate
    Enterprise search has been going through a dramatic shift as of late. We've watched as some of the leaders in search, those platforms usually found in the upper right quadrant on Gartner reports, have fallen off through acquisition or from simply not keeping up with the market. But behind the scenes an even bigger shift is taking place: from proprietary kernels to core technologies based on open source projects. Some, like Lucidworks, have always been based on the open source Apache Solr project. Others, like Coveo, have joined the open source movement by offering the choice of using its traditional proprietary kernel or licensing the Coveo user experience built on top of the Elastic kernel.
  • Bentley Systems Releases Open-Source Library: iModel.js
  • Bentley Releases iModel.js Open-Source Library
    Bentley Systems, Inc., the leading global provider of comprehensive software solutions for advancing the design, construction, and operations of infrastructure, today announced the initial release of its iModel.js library, an open-source initiative to improve the accessibility, for both visualization and analytical visibility, of infrastructure digital twins. iModel.js can be used by developers and IT professionals to quickly and easily create immersive applications that connect their infrastructure digital twins with the rest of their digital world. iModel.js is the cornerstone of Bentley’s just-announced iTwin Services that combine iModelHub, reality modeling, and web-enabling software technologies within a Connected Data Environment (CDE) for infrastructure engineering.
  • Software Heritage Foundation Update

I first wrote about the Software Heritage Foundation two years ago, and it is now four months since their Archive officially went live. Roberto di Cosmo and his collaborators have published an article (and a video), "Building the Universal Archive of Source Code," in Communications of the ACM, describing their three challenges of collection, preservation, and sharing, and setting out their current status: [...]