Programming: WebAssembly, Mozilla GFX, Qt and Python

  • WebAssembly for speed and code reuse

    Imagine translating a non-web application, written in a high-level language, into a binary module ready for the web. This translation could be done without any change whatsoever to the non-web application's source code. A browser can download the newly translated module efficiently and execute the module in the sandbox. The executing web module can interact seamlessly with other web technologies—with JavaScript (JS) in particular. Welcome to WebAssembly.

    As befits a language with assembly in the name, WebAssembly is low-level. But this low-level character encourages optimization: the just-in-time (JIT) compiler of the browser's virtual machine can translate portable WebAssembly code into fast, platform-specific machine code. A WebAssembly module thereby becomes an executable suited for compute-bound tasks such as number crunching.

    Which high-level languages compile into WebAssembly? The list is growing, but the original candidates were C, C++, and Rust. Let's call these three the systems languages, as they are meant for systems programming and high-performance applications programming. The systems languages share two features that suit them for compilation into WebAssembly. The next section gets into the details, setting up full code examples (in C and TypeScript) together with samples from WebAssembly's own text format.
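
    To make the idea concrete before those examples arrive, here is a minimal hedged sketch of my own (not the article's): a compute-bound function in C-flavoured C++ that an Emscripten toolchain can compile into a WebAssembly module, with EMSCRIPTEN_KEEPALIVE marking it for export so JavaScript can call it.

        // sketch: a compute-bound function compiled to WebAssembly via Emscripten
        // assumed build command: em++ -O2 sum_squares.cpp -o sum_squares.js
        #include <emscripten/emscripten.h>

        extern "C" EMSCRIPTEN_KEEPALIVE
        int sum_squares(int n) {
            // the kind of number crunching a JIT turns into fast machine code
            int total = 0;
            for (int i = 1; i <= n; ++i)
                total += i * i;
            return total;
        }

    Once the module is instantiated in the browser, the exported sum_squares would be callable from JS like any ordinary function.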

  • Mozilla GFX: moz://gfx newsletter #47

    Hi there! Time for another Mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I’ll try to give a short answer here.

    Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. Each, however, works with its own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, canvases — in short, for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or textures already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

    Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders and plain static images.
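
    As a rough illustration of the platform-specific sharing mentioned above, here is a hedged C++ sketch of the EGLImage route. The helper function is hypothetical; eglCreateImageKHR and glEGLImageTargetTexture2DOES are real extension entry points, but in practice they must be resolved via eglGetProcAddress and every step checked for errors.

        // sketch: aliasing one context's texture into another via EGLImage
        #include <cstdint>
        #include <EGL/egl.h>
        #include <EGL/eglext.h>
        #include <GLES2/gl2.h>
        #include <GLES2/gl2ext.h>

        // hypothetical helper; assumes both contexts live on the same EGLDisplay
        GLuint importProducerTexture(EGLDisplay dpy, EGLContext producerCtx,
                                     GLuint producerTex) {
            // wrap the producer context's texture in an EGLImage...
            EGLImageKHR image = eglCreateImageKHR(
                dpy, producerCtx, EGL_GL_TEXTURE_2D_KHR,
                (EGLClientBuffer)(uintptr_t)producerTex, nullptr);
            // ...then, with the consumer context current, bind it as a new texture
            GLuint consumerTex = 0;
            glGenTextures(1, &consumerTex);
            glBindTexture(GL_TEXTURE_2D, consumerTex);
            glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
            return consumerTex;
        }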

  • The Titler Revamp: QML Producer in the making

    At the beginning of this month, I started testing out the new producer, as I had a good, rough structure for the producer code and was only facing a few minor problems. Initially, I was unclear about how exactly the producer was going to be used by the titler, so I took a small step back and spent some time figuring out how kdenlivetitle, the producer currently in use, works.

    Initially, I faced integration problems (the ones you’d normally expect) when I tried to make use of the QmlRenderer library for rendering and loading QML templates, and most of them were resolved by a simple refactoring of the QmlRenderer library source code. To give an example, the producer traditionally stores the QML template in a global variable taken as a character pointer argument (which is, again, traditional C), whereas the QmlRenderer library takes a QUrl as the parameter for loading the QML file. To solve this mismatch, all I had to do was overload the loadQml() method with one that could accommodate the producer’s needs, which worked perfectly fine. As a consequence, I also had to compartmentalise the rendering process further, so now we have three methods which run sequentially when we want to render something using the library: initialiseRenderParams() -> prepareRenderer() -> renderQml(). A sketch of the resulting shape follows below.
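
    Here is that hedged C++ sketch; the method names follow the post, but the class shape and signatures are guesses, not the actual QmlRenderer API.

        // sketch: overloading loadQml() so the producer's char* template
        // path can reuse the existing QUrl-based loader
        #include <QUrl>
        #include <QString>

        class QmlRenderer {
        public:
            void loadQml(const QUrl &url) { /* load and compile the QML template */ }
            void loadQml(const char *path) {        // overload for the producer
                loadQml(QUrl::fromLocalFile(QString::fromUtf8(path)));
            }
            // the compartmentalised pipeline, invoked in sequence:
            void initialiseRenderParams() { /* frame size, fps, ... */ }
            void prepareRenderer()        { /* set up the render context */ }
            void renderQml()              { /* produce the frame */ }
        };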

    [...]

    The problem was finally resolved (thank you JB): it was not due to OpenGL but simply to the fact that I hadn’t created a QApplication for the producer (which is necessary for Qt producers). The whole month has been a steep learning curve, definitely not easy, but I enjoyed it!
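
    In code, the fix boils down to something like this hedged sketch (the guard function is hypothetical; qApp is Qt’s global application pointer).

        // sketch: make sure a QApplication exists before the producer renders,
        // since a plain QCoreApplication is not enough for Qt producers
        #include <QApplication>

        static void ensureQApplication() {
            if (!qApp) {                       // null until an application is created
                static int argc = 1;
                static char name[] = "qml_producer";
                static char *argv[] = { name, nullptr };
                new QApplication(argc, argv);  // deliberately lives for the whole process
            }
        }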

    Right now, I have a producer which is almost complete and which, with a little more tweaking, will hopefully be put to use. I’m still facing a few minor issues which I hope to resolve soon to get a working producer. Once we get that, I can start work on the Kdenlive side. Let’s hope for the best!

  • How to Make a Discord Bot in Python

    In a world where video games are so important to so many people, communication and community around games are vital. Discord offers both of those and more in one well-designed package. In this tutorial, you’ll learn how to make a Discord bot in Python so that you can make the most of this fantastic platform.

  • Qt Visual Studio Tools 2.4 RC Released

    The Visual Studio Project System is widely used as the build system of choice for C++ projects in VS. Under the hood, MSBuild provides the project file format and build framework. The Qt VS Tools make use of the extensibility of MSBuild to provide design-time and build-time integration of Qt in VS projects — toward the end of the post we have a closer look at how that integration works and what changed in the new release.

    Up to this point, the Qt VS Tools extension managed its own project settings in an isolated manner. This approach prevented the integration of Qt in Visual Studio from fully benefiting from the features of VS projects and MSBuild. Significantly, it was not possible to have Qt settings vary according to the build configuration (e.g. having a different list of selected Qt modules for different configurations). This applied to Qt itself: only one version/build of Qt could be selected, and it applied to all configurations, a significant drawback in the case of multi-platform projects.

    Another important limitation that users of the Qt VS Tools have reported is the lack of support for importing Qt-related settings from shared property sheet files. This feature allows settings in VS projects to be shared within a team or organization, thus providing a single source for that information. Up to now, this was not possible to do with settings managed by the Qt VS Tools.

More in Tux Machines

Games: CHOP, LeClue - Detectivu, Nantucket, MOTHERGUNSHIP

  • Brutal local co-op platform brawler CHOP has released

    CHOP, a brutal local co-op platform brawler, recently left Early Access on Steam. If you like fast-paced fighters with a great style and chaotic gameplay, this is for you. There are multiple game modes, up to four players in the standard modes, and there are bots as well if you don't often have people over. Speaking about the release, the developer told me they felt "many local multiplayer games fall into a major pitfall: they often lack impact and accuracy, they don't have this extra oomph that ensure players will really be into the game and hang their gamepad like their life depends on it" and that "CHOP stands out in this regard". I've actually quite enjoyed this one; the action in CHOP is really satisfying overall.

  • Mystery adventure game Jenny LeClue - Detectivu is releasing this week

    Developer Mografi has confirmed that their adventure game Jenny LeClue - Detectivu is officially releasing on September 19th. The game was funded on Kickstarter way back in 2014 thanks to the help of almost four thousand backers raising over one hundred thousand dollars.

  • Seafaring strategy game Nantucket just had a big patch and Masters of the Seven Seas DLC released

    Ahoy mateys! Are you ready to set sail? Anchors aweigh! Seafaring strategy game Nantucket is now full of even more content for you to play through. Picaresque Studio and Fish Eagle just released a big new patch adding "100+" new events, events that can be triggered by entering a city, a Resuscitation command that can now heal during combat even if someone isn't dead, the ability to rename crew to really make your play-through personal, better rewards from minor quests, and more. Quite a hefty free update!

  • MOTHERGUNSHIP, a bullet-hell FPS where you craft your guns, works great on Linux with Steam Play

    Need a fun new FPS to try? MOTHERGUNSHIP is absolutely nuts, and it appears to run very nicely on Linux thanks to Steam Play. There are a few reasons why I picked this one to test recently: the developers have moved on to other games so it's not too likely it will suddenly break, there aren't a lot of new and modern first-person shooters on Linux that I haven't finished, and it was in the recent Humble Monthly.

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC can be used on machines with many cores where GNU Make cannot provide enough parallelism. A parallel GCC can also be used as a design reference for building a parallel compiler from scratch.

today's leftovers

  • 3 Ways to disable USB storage devices on Linux
  • Fedora Community Blog: Fedocal and Nuancier are looking for new maintainers

    Recently the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started "Friday with Infra" to find maintainers for some of those applications. Unfortunately, the first few occurrences did not raise as much interest as we had hoped. As a result, we are still looking for new maintainers for Fedocal and Nuancier.

  • Artificial Intelligence Confronts a 'Reproducibility' Crisis

    Lo and behold, the system began performing as advertised. The lucky break was a symptom of a troubling trend, according to Pineau. Neural networks, the technique that’s given us Go-mastering bots and text generators that craft classical Chinese poetry, are often called black boxes because of the mysteries of how they work. Getting them to perform well can be like an art, involving subtle tweaks that go unreported in publications. The networks also are growing larger and more complex, with huge data sets and massive computing arrays that make replicating and studying those models expensive, if not impossible for all but the best-funded labs.

    “Is that even research anymore?” asks Anna Rogers, a machine-learning researcher at the University of Massachusetts. “It’s not clear if you’re demonstrating the superiority of your model or your budget.”

  • When Biology Becomes Software

    If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.

    [...]

    Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

  • Why you may have to wait longer to check out an e-book from your local library

    Gutierrez says the Seattle Public Library, which is one of the largest circulators of digital materials, loaned out around three million e-books and audiobooks last year and spent about $2.5 million to acquire those rights. “But that added 60,000 titles, about,” she said, “because the e-books cost so much more than their physical counterpart. The money doesn’t stretch nearly as far.”

  • Libraries are fighting to preserve your right to borrow e-books

    Libraries don't just pay full price for e-books -- we pay more than full price. We don't just buy one book -- in most cases, we buy a lot of books, trying to keep hold lists down to reasonable numbers. We accept renewable purchasing agreements and limits on e-book lending, specifically because we understand that publishing is a business, and that there is value in authors and publishers getting paid for their work. At the same time, most of us are constrained by budgeting rules and high levels of reporting transparency about where your money goes. So, we want the terms to be fair, and we'd prefer a system that wasn't convoluted.

    With print materials, book economics are simple. Once a library buys a book, it can do whatever it wants with it: lend it, sell it, give it away, loan it to another library so they can lend it. We're much more restricted when it comes to e-books. To a patron, an e-book and a print book feel like similar things, just in different formats; to a library they're very different products. There's no inter-library loan for e-books. When an e-book is no longer circulating, we can't sell it at a book sale. When you're spending the public's money, these differences matter.

  • Nintendo's ROM Site War Continues With Huge Lawsuit Against Site Despite Not Sending DMCA Notices

    Roughly a year ago, Nintendo launched a war between itself and ROM sites. Despite the insanely profitable NES Classic retro-console, the company decided that ROM sites, which until recently almost single-handedly preserved a great deal of console gaming history, needed to be slain. Nintendo extracted huge settlements out of some of the sites, which led most others to shut down voluntarily. While this was probably always Nintendo's strategy, some sites decided to stare down the company's legal threats and continue on.

  • The Grey Havens | Coder Radio 375

    We say goodbye to the show by taking a look back at a few of our favorite moments and reflect on how much has changed in the past seven years.

  • 09/16/2019 | Linux Headlines

    A new Linux kernel is out and we break down the new features. Plus PulseAudio goes pro, the credential-stealing LastPass flaw, the $100 million plan to rid the web of ads, and more.

  • Powering Docker App: Next Steps for Cloud Native Application Bundles (CNAB)

    Last year at DockerCon and Microsoft Connect, we announced the Cloud Native Application Bundle (CNAB) specification in partnership with Microsoft, HashiCorp, and Bitnami. Since then the CNAB community has grown to include Pivotal, Intel, DataDog, and others, and we are all happy to announce that the CNAB core specification has reached 1.0. We are also announcing the formation of the CNAB project under the Joint Development Foundation, a part of the Linux Foundation that’s chartered with driving adoption of open source and standards. The CNAB specification is available at cnab.io. Docker is working hard with our partners and friends in the open source community to improve software development and operations for everyone.

  • CNAB ready for prime time, says Docker

    Docker announced yesterday that CNAB, a specification for creating multi-container applications, has come of age. The spec has made it to version 1.0, and the Linux Foundation has officially accepted it into the Joint Development Foundation, which drives open-source development. The Cloud Native Application Bundle specification is a multi-company effort that defines how the different components of a distributed cloud-based application are bundled together. Docker announced it last December along with Microsoft, HashiCorp, and Bitnami. Since then, Intel has joined the party, along with Pivotal and DataDog.

    It solves a problem that DevOps folks have long grappled with: how do you bolt all these containers and other services together in a standard way? It's easy to create a Docker container with a Dockerfile, and you can pull lots of containers together to form an application using Docker Compose. But if you want to package other kinds of container or cloud artifacts into the application, such as Kubernetes YAML, Helm charts, or Azure Resource Manager templates, things become more difficult. That's where CNAB comes in.