
Intel SoC, Mesa Driver, and Quad Core Cortex-A35

Filed under
Graphics/Benchmarks
Linux
  • Linux Begins Preparing For Intel's New "Lightning Mountain" SoC

    Linux kernel development activity has shed light on a new Intel SoC we haven't heard anything about to date... Lightning Mountain.

    We haven't seen Intel Lightning Mountain referenced elsewhere yet, but from our routine monitoring of the Linux kernel patch flow, this is a new Atom SoC on the way.

  • ARB_gl_spirv and ARB_spirv_extension support for i965 landed in Mesa master

    And something more visible thanks to that: the Intel Mesa driver now exposes OpenGL 4.6 support, the most recent version of OpenGL.

    As you may recall, the i965 Intel driver became 4.6 conformant last year. You can find more details about that, and about what being conformant means, in this blog post by Iago. In that post Iago mentioned that it was passing with an early version of the ARB_gl_spirv support, which we have been improving and iterating on since then so it could be included in Mesa master. At the same time, the CTS tests only cover the specifics of the extensions, and we wanted more detailed testing, so we have also been adding more tests to the piglit test suite, written manually for ARB_gl_spirv or translated from existing GLSL tests.
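
    For readers curious what "exposing OpenGL 4.6" looks like from the application side, below is a minimal stand-alone probe. It is a sketch written for this roundup, assuming GLFW 3 and Mesa's libGL are installed; it is not part of Mesa, the CTS, or piglit, and the file name is illustrative. It creates a core-profile context and reports the advertised version together with the two SPIR-V extensions mentioned above.

      /* gl46_probe.c: illustrative check of the GL version and SPIR-V
       * extensions a driver exposes (not Mesa, CTS or piglit test code).
       * Build:  gcc gl46_probe.c -o gl46_probe -lglfw -lGL            */
      #include <stdio.h>
      #include <string.h>
      #include <GL/gl.h>
      #include <GLFW/glfw3.h>

      #ifndef GL_NUM_EXTENSIONS
      #define GL_NUM_EXTENSIONS 0x821D
      #endif

      typedef const GLubyte *(*GetStringiFn)(GLenum, GLuint);

      int main(void)
      {
          if (!glfwInit())
              return 1;

          /* Ask for a core profile; the driver returns the highest core
           * version it supports (4.6 on a conformant i965/Mesa stack). */
          glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
          glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
          glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
          glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* no window on screen */

          GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
          if (!win) {
              glfwTerminate();
              return 1;
          }
          glfwMakeContextCurrent(win);

          printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
          printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

          /* Walk the extension list looking for the two SPIR-V extensions. */
          GetStringiFn get_stringi =
              (GetStringiFn)glfwGetProcAddress("glGetStringi");
          GLint n = 0, spirv = 0, spirv_ext = 0;
          glGetIntegerv(GL_NUM_EXTENSIONS, &n);
          for (GLint i = 0; get_stringi && i < n; i++) {
              const char *ext = (const char *)get_stringi(GL_EXTENSIONS, i);
              spirv     |= !strcmp(ext, "GL_ARB_gl_spirv");
              spirv_ext |= !strcmp(ext, "GL_ARB_spirv_extensions");
          }
          printf("GL_ARB_gl_spirv:         %s\n", spirv ? "yes" : "no");
          printf("GL_ARB_spirv_extensions: %s\n", spirv_ext ? "yes" : "no");

          glfwDestroyWindow(win);
          glfwTerminate();
          return 0;
      }

    On a Mesa build with the new support, the probe should report a 4.6 core context with both extensions present; on older stacks it simply reports whatever the driver exposes.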

  • Compulab CL-SOM-iMX8X SoM & SBC Feature NXP i.MX 8QuadXPlus Quad Core Cortex-A35 Processor

    The NXP i.MX 8X Cortex-A35 processor, designed for automotive infotainment and a variety of industrial applications, was officially announced in early 2017...

LWN: Spectre, Linux and Debian Development

Filed under
Linux
Debian
  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.
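
    For readers who have never poked at GS from user space, below is a minimal sketch of the arch_prctl() usage mentioned above, assuming an x86-64 Linux system and GCC; the file name and the per_thread structure are purely illustrative, and glibc provides no wrapper for arch_prctl(), so the sketch goes through syscall(2). It shows only the user-space half of the story; the kernel's SWAPGS handling is invisible from here.

      /* gs_demo.c: point the user-space GS base at a private structure and
       * read it back, both via arch_prctl() and via a %gs-relative load.
       * Build:  gcc gs_demo.c -o gs_demo   (x86-64 Linux only)          */
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <stdint.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <asm/prctl.h>          /* ARCH_SET_GS, ARCH_GET_GS */

      struct per_thread { uint64_t magic; } area = { 0xdeadbeefcafef00dULL };

      int main(void)
      {
          /* Ask the kernel to make GS point at our structure. */
          if (syscall(SYS_arch_prctl, ARCH_SET_GS, &area) != 0) {
              perror("arch_prctl(ARCH_SET_GS)");
              return 1;
          }

          /* Read back the base the kernel recorded for this thread. */
          uint64_t base = 0;
          syscall(SYS_arch_prctl, ARCH_GET_GS, &base);
          printf("GS base: %#lx (&area = %p)\n",
                 (unsigned long)base, (void *)&area);

          /* A %gs-relative load now reads area.magic (GCC extended asm). */
          uint64_t value;
          __asm__ volatile ("movq %%gs:0, %0" : "=r"(value));
          printf("value read via %%gs:0 = %#llx\n",
                 (unsigned long long)value);
          return 0;
      }

    The kernel's job, as described above, is to make sure its own per-CPU GS base (kept in a model-specific register) is swapped in via SWAPGS before any %gs-relative kernel access happens, regardless of what value user space installed here.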

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.
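
    To make the "revoke the mapping" option concrete, here is a small user-space sketch, written for this roundup rather than taken from the patch set. Ordinary, non-pinned mmap() mappings already get exactly this treatment on Linux today; the debate above is about whether pages pinned with long-term get_user_pages() should be handled the same way or should block the truncate() instead.

      /* truncate_sigbus.c: what revocation looks like to an application.
       * Touch a MAP_SHARED page after the file has been truncated away
       * underneath it and the process receives SIGBUS.                 */
      #include <stdio.h>
      #include <stdlib.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <signal.h>
      #include <sys/mman.h>

      static void on_sigbus(int sig)
      {
          (void)sig;
          /* write() and _exit() are async-signal-safe. */
          write(2, "got SIGBUS after truncate()\n", 28);
          _exit(0);
      }

      int main(void)
      {
          signal(SIGBUS, on_sigbus);

          int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
          if (fd < 0 || ftruncate(fd, 4096) != 0)
              return 1;

          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
          if (p == MAP_FAILED)
              return 1;
          p[0] = 'x';            /* fine: the page is backed by the file */

          ftruncate(fd, 0);      /* the filesystem drops the backing store */

          p[0] = 'y';            /* faults: delivered to us as SIGBUS */
          printf("unexpectedly survived the access\n");
          return 0;
      }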

  • Hardening the "file" utility for Debian

    In addition, Biedl had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; those system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses fakeroot to run commands in a way that makes it appear they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations.

    In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot environment. The libfakeroot library that is loaded with LD_PRELOAD then communicates with the daemon using either System V (sysv) interprocess communication (IPC) or TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.
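
    To make the interposition mechanism concrete, here is a minimal sketch of the LD_PRELOAD technique, written for this roundup rather than taken from fakeroot itself; the real libfakeroot forwards each recorded change to the faked daemon over SysV IPC or TCP/IP, whereas this toy merely prints it.

      /* fakechown.c: a toy LD_PRELOAD interposer in the spirit of
       * libfakeroot (NOT fakeroot's own code).
       * Build: gcc -shared -fPIC fakechown.c -o fakechown.so -ldl
       * Use:   LD_PRELOAD=$PWD/fakechown.so <program that calls chown()> */
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <dlfcn.h>
      #include <sys/types.h>
      #include <unistd.h>

      static int (*real_chown)(const char *, uid_t, gid_t);

      int chown(const char *path, uid_t owner, gid_t group)
      {
          /* Resolve glibc's real chown() once, in case we wanted it. */
          if (!real_chown)
              real_chown = (int (*)(const char *, uid_t, gid_t))
                               dlsym(RTLD_NEXT, "chown");

          /* An unprivileged caller would normally get EPERM here; instead
           * we pretend the change succeeded and record (well, print) it. */
          fprintf(stderr, "[fakechown] chown(\"%s\", uid=%d, gid=%d) faked\n",
                  path, (int)owner, (int)group);
          (void)real_chown;   /* deliberately not performing the real call */
          return 0;
      }

    Whatever extra work such a preloaded library does, be it fakeroot's IPC traffic or something else, shows up as additional system calls from the wrapped program, which is exactly what a tight seccomp() filter around file would not expect.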

IBM/Red Hat and Intel Leftovers

Filed under
GNU
Linux
Red Hat
Hardware
  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container?

    If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code?

    We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, and spin up.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at them. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race.

    To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome.

    [...]

    Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. In a DevOps world, everyone is expected to anticipate anomalies and to work at improving quality and minimizing their impact.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and to reshape infrastructure and even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development.

    From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack.

    Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego.

    Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.

11 Best Linux Distros for hacking and programming

Filed under
Development
Linux
Security

When it comes to choosing a Linux distribution for hacking or programming, there are a number of points that you should keep in mind. The operating system should run smoothly on your system, and if you are installing one on your primary computer, you should always go for the one that you know how to use properly.

But using an operating system for more specific purposes like cybersecurity, which I have discussed here, isn’t that straightforward.

Kali Linux is one of the best cybersecurity operating systems, but there are many others that offer more streamlined functionality. I recommend you try out at least a few of the most intriguing Kali Linux alternatives I have discussed here before you finally make your decision.

So that was my list of the top 10 Kali Linux alternatives that are worth your time. Do you have anything to add? Feel free to comment down below.

Read more

How To Share Files Anonymously And Securely: Linux Alternatives to Google Drive

Filed under
GNU
Linux
Software

The ability to share files regardless of the physical distance and almost instantaneously is one of the greatest characteristics of the Internet. With 4.3 billion Internet users at the beginning of 2019, the amount of data transferred over the Web is almost unimaginable.

But not all file-sharing services are created equal. In the era where personal data is the most valuable currency we can spend, it is important to ensure we send files over the Internet in a secure and anonymous way.

Read on to find out why mainstream file-sharing services are not your best bet and how to pick an alternative solution.

Read more

The Release of Raspberry Pi 4: What Does It Mean to You

Filed under
GNU
Linux
Hardware

The release of Raspberry Pi 4 has been met with much enthusiasm from developers and tech enthusiasts. Since it is a major upgrade from Raspberry Pi 3B+, some are already pronouncing the long-anticipated model as the greatest single-board computer (SBC) ever.

However, a lot of it has to do with hype, as the dazzling features of Raspberry Pi 4 have been available with other SBC boards for quite some time. Whether it is video HDMI, a powerful processor or USB 3.0 ports, it is clear that Raspberry Pi arrived pretty late on the scene.

Read more

How the Linux desktop has grown

Filed under
Linux

I first installed Linux in 1993. At that time, you really didn't have many options for installing the operating system. In those early days, many people simply copied a running image from someone else. Then someone had the neat idea to create a "distribution" of Linux that let you customize what software you wanted to install. That was the Softlanding Linux System (SLS) and my first introduction to Linux.

My '386 PC didn't have much memory, but it was enough. SLS 1.03 required 2MB of memory to run, or 4MB if you wanted to compile programs. If you wanted to run the X Window System, you needed a whopping 8MB of memory. And my PC had just enough memory to run X.

Read more

Linux-driven modules to showcase new MediaTek AIoT SoCs

Filed under
Android
Linux
Hardware

Innocomm is prepping an “SB30 SoM” with the new quad -A35 MediaTek i300 followed by an “SB50 SoM” with an AI-equipped, octa-core -A73 and -A53 MediaTek i500. Both modules ship with Linux/Android evaluation kits.

Innocomm, which has produced NXP-based compute modules such as the i.MX8M Mini driven WB15 and i.MX8M powered WB10, will soon try on some MediaTek SoCs for size. First up is an SB30 SoM due to launch in October that will run Linux or Android on MediaTek’s 1.5GHz, quad-core, Cortex-A35 based MediaTek i300 (MT8362) SoC. In November, the company plans to introduce an SB50 SoM based on the MediaTek i500 (MT8385).

Read more

Devices: Raspberry Pi and More

Filed under
GNU
Linux
Hardware

Samsung Galaxy Note 10 now links up with Windows and Mac PCs via supercharged DeX app

Filed under
Android
Linux
Ubuntu
Gadgets

And there’s a big bonus here in the form of being able to drag-and-drop files directly from your phone to your PC, and vice versa. So you could take a photo from your Note 10 and whip it onto the PC to tweak it up in a proper heavyweight image editor, for example.

Furthermore, as XDA Developers observes, Linux on DeX is available via the DeX app, allowing you to create a container and run an Ubuntu Linux image, giving you even more flexibility and options here.

It’s not clear what Samsung intends to do in terms of giving users with older Galaxy handsets backwards compatibility, but at the moment, this is strictly a Galaxy Note 10-only affair, as mentioned.

Finally, it’s worth noting that the app does warn that your phone might get hot running the DeX application, although exactly how hot likely depends on what you’ve got the hardware doing, of course.

Read more


More in Tux Machines

Android Leftovers

Red Hat/Fedora: Flock’19 Budapest, Cockpit 201 and Systemd 243 RC2

  • Flock’19 Budapest

    This was my first time attending the conference. It's an annual Fedora community gathering, held in a different European city every year; this time it was in Budapest, the capital of Hungary, while last year it was hosted in Dresden. It ran from 8th to 11th August 2019, and I also got the opportunity to present my proposal there: “Getting Started with Fedora QA”.

    Day 1 started with a keynote by Matthew Miller (mattdm), in which he spoke about where we as a community are and where we need to go next. It was an enlightening talk for a first-timer like me who had always been curious about the vision and mission of the Fedora community. There are people who have been with Fedora since its first release, and you get to meet them at this annual gathering. [...] Groups were formed and people decided for themselves where to go for the evening hangout on Day 1. Seven of us decided to hang out at the Atmosphere Klub near the V. Kerulet and walked over, leaving at around 9:00 pm.

    Day 2 started with a keynote by Denise Dumas, Vice President, Operating System Platform, Red Hat, who spoke on “Fedora, Red Hat and IBM”. I woke up just 20 minutes before the first session, as I had gone to bed late and had walked around 11 km the day before.

  • Fedora 30: Set up the Linux Malware Detect.
  • Cockpit 201

    It’s now again possible to stop a service, without disabling it. Reloading is now available only when the service allows it. Furthermore, disabling or masking a service removes any lingering “failed” state, reducing noise.

  • Systemd 243 RC2 Released

    The systemd 243 release candidate came out nearly one month ago, while the official release has yet to materialize. It looks, though, like it may be on the horizon, with a second release candidate posted today. Red Hat's Zbigniew Jędrzejewski-Szmek has just tagged systemd 243-RC2 as the newest test release of this de facto Linux init system. Over the past month there have been new hardware database (HWDB) additions, various fixes, new network settings, resolvectl zsh shell completion support, a change bumping timedated to always run at the highest priority, and other changes.

Announcing Qt for MCUs

  • Announcing Qt for MCUs

    Today we announce the launch of Qt for MCUs – a comprehensive toolkit to deliver smartphone-like user experience on displays powered by microcontrollers. What started as a research project is now in the final leg of its journey to being released as a product. Connected devices found in vehicles, wearables, smart home, industrial and healthcare often have requirements that include real-time processing capabilities, low power consumption, instant boot time and low bill of materials. These requirements can be fulfilled by a microcontroller architecture. However, as devices get smarter and offer more features and capabilities, users expect an enhanced and intuitive experience on par with today’s smartphones. Qt for MCUs delivers an immersive and enriching user interface by utilizing a new runtime specifically developed for ARM Cortex-M microcontrollers and leveraging on-chip 2D graphics accelerators such as PxP on NXP’s i.MX RT series, Chrom-Art Accelerator on STM32 series and RGL on Renesas RH850.

  • Qt for MCUs – Qt Announces support for Microcontrollers

    Qt, the well-known open-source toolkit for creating graphical interfaces, has announced a new release: Qt for MCUs, targeting microcontrollers.

  • The Qt Company Is Now Working On Qt For Microcontrollers

    There have been a lot of announcements pertaining to Qt as of late, most of which have been about forthcoming efforts around Qt 6 development. A new announcement out of The Qt Company catching us off-guard is their plans for the tool-kit on micro-controllers. Qt for MCUs is the company's newest commercial endeavour. In particular, they are working on the Qt tool-kit for displays powered by micro-controllers, aiming for smartphone-like user experiences. Qt for MCUs has been a research project at the company but is now being turned into a new commercial offering. Considering how well Qt works on mobile devices, it's only another step to cater it to low-power micro-controllers.