
Mozilla: Rust, WebRender, AV1

  • Splash 2018 Mid-Week Report

    I really enjoyed this talk by Felienne Hermans entitled “Explicit Direct Instruction in Programming Education”. The basic gist of the talk was that, when we teach programming, we often phrase it in terms of “exploration” and “self-expression”, but that this winds up leaving a lot of folks in the cold and may be at least partly responsible for the lack of diversity in computer science today. She argued that this is like telling kids that they should just be able to play a guitar and create awesome songs without first practicing their chords – it kind of sets them up to fail.

    The thing that really got me excited about this was that it seemed very connected to mentoring and open source. If you watched the Rust Conf keynote this year, you’ll remember Aaron talking about “OSS by Serendipity” – this idea that we should just expect people to come and produce PRs. This is in contrast to the “OSS by Design” that we’ve been trying to practice and preach, where there are explicit in-roads for people to get involved in the project through mentoring, as well as explicit priorities and goals (created, of course, through open processes like the roadmap and so forth). It seems to me that things like working groups, intro bugs, quest issues, etc., are all ways for people to “practice the basics” of a project before they dive into creating major new features.

  • WebRender newsletter #29

    To introduce this week’s newsletter I’ll write about culling. Culling refers to discarding invisible content and is performed at several stages of the rendering pipeline. During frame building on the CPU we go through all primitives and discard the ones that are off-screen by computing simple rectangle intersections. As a result we avoid transferring a lot of data to the GPU and we can skip processing them as well.
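    To make that first culling pass concrete, here is a minimal sketch of off-screen rejection by rectangle intersection. The `Rect` and `Primitive` types and the `cull` function are illustrative only, not WebRender’s actual data structures:

```rust
// Minimal sketch of CPU-side culling: discard primitives whose
// axis-aligned bounds do not intersect the viewport at all.

#[derive(Clone, Copy, Debug)]
struct Rect {
    x0: f32,
    y0: f32,
    x1: f32,
    y1: f32,
}

impl Rect {
    /// True if the two rectangles overlap (strictly, so edge-touching
    /// rectangles with zero overlap area are treated as not intersecting).
    fn intersects(&self, other: &Rect) -> bool {
        self.x0 < other.x1 && other.x0 < self.x1 &&
        self.y0 < other.y1 && other.y0 < self.y1
    }
}

#[derive(Clone, Copy, Debug)]
struct Primitive {
    bounds: Rect,
}

/// Keep only the primitives whose bounds overlap the viewport; the rest
/// never get uploaded to the GPU or processed further.
fn cull(primitives: &[Primitive], viewport: &Rect) -> Vec<Primitive> {
    primitives
        .iter()
        .filter(|p| p.bounds.intersects(viewport))
        .copied()
        .collect()
}

fn main() {
    let viewport = Rect { x0: 0.0, y0: 0.0, x1: 800.0, y1: 600.0 };
    let prims = [
        // On-screen primitive: survives culling.
        Primitive { bounds: Rect { x0: 10.0, y0: 10.0, x1: 100.0, y1: 50.0 } },
        // Entirely off-screen primitive: discarded.
        Primitive { bounds: Rect { x0: 900.0, y0: 0.0, x1: 1000.0, y1: 50.0 } },
    ];
    let visible = cull(&prims, &viewport);
    println!("{} of {} primitives survive culling", visible.len(), prims.len());
}
```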

    Unfortunately this isn’t enough. Web pages are typically built upon layers and layers of elements stacked on top of one another. The traditional way to render web pages is to draw each element in back-to-front order, which means that for a given pixel on the screen we may have rendered many primitives. This is frustrating because there are a lot of opaque primitives that completely cover the work we did on that pixel for the elements beneath them, so a lot of shading work and memory bandwidth goes to waste, and memory bandwidth is a very common bottleneck, even on high-end hardware.

    Drawing on the same pixels multiple times is called overdraw, and overdraw is not our friend, so a lot of effort goes into reducing it.
    In its early days, to mitigate overdraw, WebRender divided the screen into tiles and assigned each primitive to the tiles it covered (a primitive that overlapped several tiles was split into one primitive per tile). When an opaque primitive covered an entire tile, we could simply discard everything that was below it. This tiling approach was good at reducing overdraw with large occluders and also made batching blended primitives easier (I’ll talk about batching in another episode). It worked quite well for axis-aligned rectangles, which make up the vast majority of what web pages are made of, but transformed primitives were hard to split.
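    A toy version of that tiling scheme might look like the following. The single-tile setup, the `opaque` flag, and the eviction rule are a simplified sketch of the idea described above, not WebRender’s real implementation:

```rust
// Hypothetical sketch of tile-based occlusion culling: primitives are
// assigned back-to-front to the tiles they cover, and an opaque primitive
// that covers a whole tile evicts everything previously assigned to it.

#[derive(Clone, Copy)]
struct Rect { x0: f32, y0: f32, x1: f32, y1: f32 }

impl Rect {
    /// True if `self` fully contains `other`.
    fn contains(&self, other: &Rect) -> bool {
        self.x0 <= other.x0 && self.y0 <= other.y0 &&
        self.x1 >= other.x1 && self.y1 >= other.y1
    }
    /// True if the two rectangles overlap.
    fn intersects(&self, other: &Rect) -> bool {
        self.x0 < other.x1 && other.x0 < self.x1 &&
        self.y0 < other.y1 && other.y0 < self.y1
    }
}

#[derive(Clone, Copy)]
struct Primitive { bounds: Rect, opaque: bool }

struct Tile { bounds: Rect, prims: Vec<Primitive> }

/// Assign primitives, in back-to-front order, to every tile they overlap.
fn assign(tiles: &mut [Tile], prims: &[Primitive]) {
    for prim in prims {
        for tile in tiles.iter_mut() {
            if prim.bounds.intersects(&tile.bounds) {
                // An opaque primitive covering the entire tile hides
                // everything drawn before it, so that work can be dropped.
                if prim.opaque && prim.bounds.contains(&tile.bounds) {
                    tile.prims.clear();
                }
                tile.prims.push(*prim);
            }
        }
    }
}

fn main() {
    let mut tiles = vec![Tile {
        bounds: Rect { x0: 0.0, y0: 0.0, x1: 256.0, y1: 256.0 },
        prims: Vec::new(),
    }];
    let prims = [
        // Background rect, later hidden by the opaque full-tile cover.
        Primitive { bounds: Rect { x0: 0.0, y0: 0.0, x1: 256.0, y1: 256.0 }, opaque: false },
        // Opaque primitive covering the whole tile: the background is evicted.
        Primitive { bounds: Rect { x0: 0.0, y0: 0.0, x1: 300.0, y1: 300.0 }, opaque: true },
    ];
    assign(&mut tiles, &prims);
    println!("tile draws {} primitive(s)", tiles[0].prims.len());
}
```

    With the opaque cover in place, only one primitive is shaded per pixel in that tile instead of two, which is exactly the overdraw saving described above.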

  • Into the Depths: The Technical Details Behind AV1

    Since AOMedia officially cemented the AV1 v1.0.0 specification earlier this year, we’ve seen increasing interest from the broadcasting industry. Starting with the NAB Show (National Association of Broadcasters) in Las Vegas earlier this year, and gaining momentum through IBC (International Broadcasting Convention) in Amsterdam, and more recently the NAB East Show in New York, AV1 keeps picking up steam. Each of these industry events attracts over 100,000 media professionals. Mozilla attended these shows to demonstrate AV1 playback in Firefox, and showed that AV1 is well on its way to being broadly adopted in web browsers.

More in Tux Machines

12 open source tools for natural language processing

Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application. For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons. The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.

Devices: Indigo Igloo, Raspberry Pi Projects and Ibase

  • AR-controlled robot could help people with motor disabilities with daily tasks
    Researchers employed the PR2 robot running Ubuntu 14.04 and the Indigo Igloo release of the open-source Robot Operating System (ROS) for the study. The team made adjustments to the robot, including padding the metal grippers and adding “fabric-based tactile sensing” in certain areas.
  • 5 IoT Projects You Can Do Yourself on a Raspberry Pi
    Are you new to the Internet of Things and wonder what IoT devices can do for you? Or do you just have a spare Raspberry Pi hanging around and are wondering what you can do with it? Either way, there are plenty of ways to put that cheap little board to work. Some of these projects are easy while others are much more involved. Some you can tackle in a day while others will take a while. No matter what, you’re bound to at least get some ideas looking at this list.
  • Retail-oriented 21.5-inch panel PCs run on Kaby Lake and Bay Trail
    Ibase’s 21.5-inch “UPC-7210” and “UPC-6210” panel PCs run Linux or Windows on 7th Gen Kaby Lake-U and Bay Trail CPUs, respectively. Highlights include 64GB SSDs, mini-PCIe, mSATA, and IP65 protection.

NexDock 2 Turns Your Android Phone or Raspberry Pi into a Laptop

Ever wished your Android smartphone or Raspberry Pi was a laptop? Well, with the NexDock 2 project, now live on Kickstarter, it can be! Both the name and the conceit should be familiar to long-time gadget fans. The original NexDock was a 14.1-inch laptop shell with no computer inside. It successfully crowdfunded back in 2016. The OG device made its way into the hands of thousands of backers. While competent enough, some reviews at the time were tepid about the dock’s build quality. After a brief stint fawning over Intel’s innovative (now scrapped) Compute Cards, the team behind the portable device is back with an updated, refined and hugely improved model.

Graphics: Libinput 1.13 RC2, NVIDIA and AMD

  • libinput 1.12.902
    The second RC for libinput 1.13 is now available.
    
    This is the last RC, expect the final within the next few days unless
    someone finds a particularly egregious bug.
    
    One user-visible change: multitap (doubletap or more) now resets the timer
    on release as well. This should improve tripletap detection as well as any
    tripletap-and-drag and similar gestures.
    
    valgrind is no longer a required dependency to build with tests. It was only
    used in a specific test run anyway (meson test --setup=valgrind) and not
    part of the regular build.
    
    As usual, the git shortlog is below.
    
    Benjamin Poirier (1):
          evdev: Rename button up and down states to mirror each other
    
    Feldwor (1):
          Set TouchPad Pressure Range for Toshiba L855
    
    Paolo Giangrandi (1):
          touchpad: multitap state transitions use the same timing used for taps
    
    Peter Hutterer (3):
          tools: flake8 fixes, typo fixes and missing exception handling
          meson.build: make valgrind optional
          libinput 1.12.902
  • Libinput 1.13 RC2 Better Detects Triple Taps
    Peter Hutterer of Red Hat announced the release of libinput 1.13 Release Candidate 2 on Thursday as the newest test release for this input handling library used by both X.Org and Wayland Linux systems. Libinput 1.13 will be released in the days ahead as the latest six-month update to this input library. Despite the time that has passed, it's not an especially exciting release: Logitech high-resolution scrolling support and Dell Totem input device support for the company's Canvas display were both delayed to the next release cycle. But libinput 1.13 does bring touch arbitration improvements for tablets, various new quirks, and other fixes and usability enhancements.
  • Open-Source NVIDIA PhysX 4.1 Released
    Software releases are aplenty for GDC week and NVIDIA's latest release is their newest post-4.0 PhysX SDK. NVIDIA released the open-source PhysX 4.0 SDK just before Christmas as part of the company re-approaching open-source for this widely used physics library. Now the latest available is PhysX 4.1 and the open-source code drop is out in tandem.
  • AMD have launched an update to their open source Radeon GPU Analyzer with better Vulkan support
    AMD are showing off a little here, with an update to the Radeon GPU Analyzer open source project and it sounds great.