
Graphics: X.Org, Libdrm, VirGL

  • Understanding The X.Org Server's Complex Pointer Acceleration Code

    While Peter Hutterer has been involved with the X.Org Server's input code and related projects for the past decade, spearheading Multi Pointer X, X Input 2, and the libinput library used by both Wayland and X.Org, he has still had a tough time grasping the X.Org Server's pointer acceleration code.

    Prompted by recent user complaints about libinput, Peter Hutterer has made a fresh attempt at understanding the X.Org Server's pointer acceleration code. He has now made progress after devising a way to visualize the acceleration behavior.

  • Libdrm 2.4.92 Released With Meson Build Improvements, Icelake Support

    Libdrm 2.4.92 is now available as the newest version of the Mesa DRM library, which sits between the Mesa user-space drivers and the Linux kernel's Direct Rendering Manager (DRM) code.

    Arguably the most notable addition in Libdrm 2.4.92 is initial support for Intel Icelake "Gen 11" graphics. Those initial bits are now in place alongside Intel's ongoing bring-up of Icelake support in the DRM kernel driver and the Mesa ANV/i965 drivers. On the Intel side, this release also syncs against the latest Cannonlake PCI IDs, adds a Kabylake PCI ID that had been missing until now, and includes other basic Intel updates.

  • Collabora Working On VirGL OpenGL ES Improvements, OpenGL ES For QEMU

    Elie Tournier, the former GSoC student developer who worked on soft FP64 support and has since joined Collabora, has shared a status update on the consulting firm's work in the GPU virtualization space.

More in Tux Machines

Cloudgizer: An introduction to a new open source web development tool

Cloudgizer is a free and open source tool for building web applications. It combines the ease of scripting languages with the performance of C, helping manage the development effort and run-time resources for cloud applications. Cloudgizer works on Red Hat/CentOS Linux with the Apache web server and MariaDB database. It is licensed under Apache License version 2.

James Bottomley on Linux, Containers, and the Leading Edge

It’s no secret that Linux is basically the operating system of containers, and containers are the future of the cloud, says James Bottomley, Distinguished Engineer at IBM Research and Linux kernel developer. Bottomley, who can often be seen at open source events in his signature bow tie, is focused these days on security systems like the Trusted Platform Module and the fundamentals of container technology.

TransmogrifAI From Salesforce

  • Salesforce plans to open-source the technology behind its Einstein machine-learning services
    Salesforce is open-sourcing the method it has developed for using machine-learning techniques at scale — without mixing valuable customer data — in hopes that other companies struggling with data science problems can benefit from its work. The company plans to announce Thursday that TransmogrifAI, which is a key part of the Einstein machine-learning services that it believes are the future of its flagship Sales Cloud and related services, will be available for anyone to use in their software-as-a-service applications. Consisting of fewer than 10 lines of code written on top of the widely used Apache Spark open-source project, it is the result of years of work on training machine-learning models to predict customer behavior without dumping all of that data into a common training ground, said Shubha Nabar, senior director of data science for Salesforce Einstein.
  • Salesforce open-sources TransmogrifAI, the machine learning library that powers Einstein
    Machine learning models — artificial intelligence (AI) that identifies relationships among hundreds, thousands, or even millions of data points — are rarely easy to architect. Data scientists spend weeks and months not only preprocessing the data on which the models are to be trained, but extracting useful features (i.e., the data types) from that data, narrowing down algorithms, and ultimately building (or attempting to build) a system that performs well not just within the confines of a lab, but in the real world.
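The manual workflow described above — preprocessing raw data, then hand-extracting features before any training can start — can be sketched in a few lines. This is a hypothetical, made-up illustration of the drudgery that libraries like TransmogrifAI aim to automate; it is not TransmogrifAI's API (which is written in Scala on top of Apache Spark), and the column names and data are invented for the example.

```python
# Hypothetical sketch of manual data preparation for a churn model.
# Not TransmogrifAI code; illustrative data and column names only.

raw = [
    {"age": "34", "plan": "pro",  "churned": 0},
    {"age": "51", "plan": "free", "churned": 1},
    {"age": None, "plan": "pro",  "churned": 0},
]

def preprocess(rows):
    """Step 1: clean the data — parse numbers, impute missing ages
    with the mean of the known values."""
    known = [int(r["age"]) for r in rows if r["age"] is not None]
    mean_age = sum(known) / len(known)
    return [
        {**r, "age": int(r["age"]) if r["age"] is not None else mean_age}
        for r in rows
    ]

def extract_features(rows):
    """Step 2: turn each row into a numeric feature vector,
    one-hot encoding the categorical 'plan' column by hand."""
    return [[r["age"], 1.0 if r["plan"] == "pro" else 0.0] for r in rows]

X = extract_features(preprocess(raw))   # feature matrix for training
y = [r["churned"] for r in raw]         # labels
print(X)
print(y)
```

Every new dataset means rewriting steps like these before a single algorithm is even tried, which is the repetitive effort automated feature-engineering libraries try to eliminate.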