Planet Mozilla - https://planet.mozilla.org/

Mozilla Performance Blog: Performance Sheriff Newsletter (February 2021)

7 hours 53 min ago

In February there were 201 alerts generated, resulting in 29 regression bugs being filed on average 4 days after the regressing change landed.

Welcome to the February 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some analysis on the data footprint of our performance metrics. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency
  • All alerts were triaged in an average of 1 day
  • 100% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 2.7 days
  • 88% of valid regressions were associated with bugs within 5 days

Now that we have six months of historical data on the sheriffing efficiency, I decided to add a second axis to see how the number of alerts correlates to the time it takes sheriffs to triage and raise regression bugs. As you can see there’s a close correlation.

This highlights that any improvements to our regression detection or test coverage will likely overwhelm our sheriffing team and lead to delays with regressions being reported and resolved. As increasing the team of sheriffs is not a scalable solution, we’re investing in automating as much of the sheriffing workflow as possible. I look forward to providing updates on this as they develop, and to seeing the time to regression bug correlate more closely with the time to triage than with the number of alerts.

Data Footprint Analysis

Recently we’ve faced some issues with the storage capacity demands of Perfherder (our tool for ingesting, monitoring, and sheriffing performance data), and so we’ve been reviewing our data retention policies, and looking at other ways we can reduce our data footprint. Through a recent review of the performance data, we determined we could reduce our footprint by up to 17% by stopping ingestion of version control (vcs) metrics. This data is not monitored through Perfherder, so there’s no real need for us to store it.

In November’s newsletter, I covered the performance test frameworks that we monitor for regressions, but that doesn’t cover everything that we ingest. Here’s a chart showing the distribution of datapoints across all frameworks:

In addition to the sheriffed frameworks, we have:

  • vcs – as covered above, this includes metrics covering the time taken to perform operations such as clone/update/pull on our version control system in CI. As this is not monitored within Perfherder we have stopped ingesting it.
  • platform_microbench – these are graphics micro benchmarks introduced in bug 1256408 but never considered stable enough to monitor for regressions. As it’s been a while since we’ve looked at this data it’s worth reviewing to see if it can provide additional coverage.
  • devtools – these are the performance tests for Firefox DevTools, which are monitored by the DevTools team.

If you’re interested in reading more about the vcs data that we’re no longer ingesting, it was introduced via bug 1448204 and Perfherder ingestion was disabled in bug 1692409.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Wladimir Palant: How Amazon Assistant lets Amazon track your every move on the web

10 hours 8 min ago

I recently noticed that Amazon is promoting their Amazon Assistant extension quite aggressively. With success: while not all browser vendors provide usable extension statistics, it would appear that this extension has more than 10 million users across Firefox, Chrome, Opera and Edge. Reason enough to look into what this extension is doing and how.

Here I must say that the privacy expectations for shopping assistants aren’t very high to start with. Still, I was astonished to discover that Amazon built the perfect machinery to let them track any Amazon Assistant user or all of them: what they view and for how long, what they search on the web, what accounts they are logged into and more. Amazon could also mess with the web experience at will and for example hijack competitors’ web shops.

Image credits: Amazon, nicubunu, OpenClipart

Mind you, I’m not saying that Amazon is currently doing any of this. While I’m not done analyzing the code, so far everything suggests that Amazon Assistant is only transferring domain names of the web pages you visit rather than full addresses. And all website manipulations seem in line with the extension’s purpose. But since all extension privileges are delegated to Amazon web services, it’s impossible to make sure that it always works like this. If for some Amazon Assistant users the “hoover up all data” mode is switched on, nobody will notice.

What is Amazon Assistant supposed to do?

At first glance, Amazon Assistant is just the panel showing up when you click the extension icon. It will show you current Amazon deals, let you track your orders and manage lists of items to buy. So far very much confined to Amazon itself.

What’s not quite obvious: “Add to list” will attempt to recognize what product is displayed in the current browser tab. And that will work not only on Amazon properties. Clicking this button while on some other web shop will embed an Amazon Assistant into that web page and offer to add this item to your Amazon wishlist.

But Amazon Assistant will become active on its own as well. Are you searching for “playstation” on Google? Amazon Assistant will show its message right on top of Google’s ads, because you might want to buy that on Amazon.

You will see similar messages when searching on eBay or other online shops.

So you can already guess that Amazon Assistant will ask Amazon web services what to do on any particular website: how to recognize searches, how to extract product information. There are just too many shops to keep all this information in the extension. As a side-effect that is certainly beneficial to Amazon’s business, Amazon will learn which websites you visit and what you search there. That’s your unavoidable privacy cost of this extension. But it doesn’t stop here.

The extension’s privileges

Let’s first take a look at what this extension is allowed to do. That’s the permissions entry in the extension’s manifest.json file:

"permissions": [ "tabs", "storage", "http://*/*", "https://*/*", "notifications", "management", "contextMenus", "cookies",

That’s really a lot of privileges. First note http://*/* and https://*/*: the extension has access to each and every website (I cut off the long list of Amazon properties here, which is redundant given the wildcards). This is necessary if it wants to inject its content there. The tabs permission then allows recognizing when tabs are created or removed, and when a new page loads into a tab.

The storage permission allows the extension to keep persistent settings. One of these settings is called ubpv2.Identity.installationId and contains (you guessed it) a unique identifier for this Amazon Assistant installation. Even if you log out of Amazon and clear your cookies, this identifier will persist and allow Amazon to connect your activity to your identity.
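As an illustration of the mechanism (a minimal sketch, not Amazon’s actual code; only the storage key name comes from the extension), this is roughly how a WebExtension with the storage permission can persist such an identifier:

// Hypothetical sketch: persist a per-installation identifier in extension storage.
// Unlike cookies, this value survives clearing site data and logging out.
const KEY = "ubpv2.Identity.installationId";

chrome.storage.local.get(KEY, result => {
  let installationId = result[KEY];
  if (!installationId) {
    // First run: generate a random identifier and store it permanently.
    installationId = Math.random().toString(36).slice(2) + Date.now().toString(36);
    chrome.storage.local.set({ [KEY]: installationId });
  }
  // Any request to the vendor's servers could now carry this identifier,
  // tying browsing activity back to a single installation.
  console.log("installation id:", installationId);
});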

Two other permissions are also unsurprising. The notifications permission presumably lets the extension display a desktop notification to keep you updated about your order status. The contextMenus permission lets it add an “Add to Amazon Lists” item to the browser’s context menu.

The cookies permission is unusual however. In principle, it allows the extension to access cookies on any website. Yet it is currently only used to access Amazon cookies in order to recognize when the user logs in. The same could be achieved without this privilege, merely by accessing document.cookie on an Amazon website (which is how the extension in fact does it in one case).
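For comparison, here is a minimal sketch (not the extension’s actual code, and the cookie name is a made-up placeholder) of how a content script could perform the same login check via document.cookie, without any cookies permission:

// Sketch: a content script running on an Amazon page can inspect cookies
// directly. "session-token" is a hypothetical cookie name for illustration.
if (location.hostname.endsWith("amazon.com")) {
  const hasSession = document.cookie
    .split("; ")
    .some(entry => entry.startsWith("session-token="));
  console.log("looks logged in:", hasSession);
}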

Even weirder is the management permission which is only requested by the Firefox extension but not the Chrome one. This permission gives an extension access to other browser extensions and even allows uninstalling them. Requesting it is highly unusual and raises suspicions. Yet there is only code to call management.uninstallSelf() and management.getSelf(), the two functions that don’t require this permission! And even this code appears to be unused.

Now it’s not unusual for extensions to request wide-reaching privileges. It’s not even unusual to request privileges that aren’t currently used, prompting Google to explicitly forbid this in their Chrome Web Store policy. The unusual part here is how almost all of these capabilities are transferred to Amazon web properties.

The unusual setup

When you start looking into how the extension uses its privileges, it’s hard to overlook the fact that it appears to be an empty shell. Yes, there is a fair amount of code. But all of it is just glue code. Neither the extension’s user interface nor any of its logic is to be found anywhere. What’s going on? It gets clearer if you inspect the extension’s background page in Developer Tools:

Yes, that’s eight remote frames loaded into the extension’s background page, all pointing to Amazon domains. A ninth remote frame loads when you click the extension icon; it contains the user interface of the panel shown above. All these frames communicate with each other and with the extension via Amazon’s internal UBP protocol, exchanging messages via window.postMessage().

How does the extension know what page to load in the frames and what these should be allowed to do? It doesn’t; this information is downloaded as FeatureManifest.js from an Amazon server. This file defines a number of “processes,” each with its list of provided and consumed APIs and events. And while the extension code makes sure that processes only access what they are allowed to access, it is this file on an Amazon web service that sets the rules.

Here is what this file currently has to say about AAWishlistProcess, a particularly powerful process:

"AAWishlistProcess" : {
  "manifestVersion" : "2015-03-26",
  "manifest" : {
    "name" : "AAWishlistProcess",
    "version" : {"major" : 1, "minor" : 1, "build" : 1, "revision" : 1},
    "enabled" : true,
    "processType" : "Remote",
    "configuration" : {
      "url" : "https://horizonte.browserapps.amazon.com/wishlist/aa-wishlist-process.html",
      "assetTag" : "window.eTag = \"e19e28ac-784e-4e22-8e2b-6d36a9d3aaf2\"; window.lastUpdated= \"2021-01-14T22:57:46.422Z\";"
    },
    "consumedAPIs" : {
      "Identity" : [ "getAllWeblabTreatments", "getCustomerPreferences" ],
      "Dossier" : [ "buildURLs" ],
      "Platform" : [ "getPlatformInfo", "getUWLItem", "getActiveTabInfo", "createElement", "createSandbox", "createSandboxById", "createLocalSandbox", "modifySandbox", "showSandbox", "sendMessageToSandbox", "destroySandbox", "scrape", "listenerSpecificationScrape", "applyStyle", "resetStyle", "registerAction", "deregisterAction", "createContextMenuItem", "deleteAllContextMenuItems", "deleteContextMenuItemById", "getCookieInfo", "bulkGetCookieInfo", "getStorageValue", "putStorageValue", "deleteStorageValue", "publish" ],
      "Reporter" : [ "appendMetricData" ],
      "Storage" : [ "get", "put", "putIfAbsent", "delete" ]
    },
    "consumedEvents" : [ "Tabs.PageTurn", "Tabs.onRemoved", "Sandbox.Message.UBPSandboxMessage", "Action.Message", "Platform.PlatformDataUpdate", "Contextmenu.ItemClicked.AAWishlistProcess", "Identity.CustomerPreferencesUpdate", "Gateway.AddToListClick" ],
    "providedAPIs" : { },
    "providedEvents" : [ "Wishlist.update", "Storage.onChange.*", "Storage.onChange.*.*", "Storage.onChange.*.*.*", "Storage.onChange.*.*.*.*", "Storage.onChange.*.*.*.*.*", "Storage.onDelete.*", "Storage.onDelete.*.*", "Storage.onDelete.*.*.*", "Storage.onDelete.*.*.*.*", "Storage.onDelete.*.*.*.*.*" ],
    "CTI" : {
      "Category" : "AmazonAssistant",
      "Type" : "Engagement",
      "Item" : "Wishlist"
    }
  }
},

The interesting consumed APIs are the ones belonging to Platform: that “process” is provided by the extension. So the extension lets this website among other things request information on the active tab, create context menu items, retrieve cookies and access extension’s storage.

Let’s try it out!

We don’t have to speculate; it’s easy to try out the things this website is allowed to do. For this, change to the Console tab in Developer Tools and make sure aa-wishlist-process.html is selected as the context rather than top. Now enter the following command so that incoming messages are logged:

window.onmessage = event => console.log(JSON.stringify(event.data, undefined, 2));

Note: For me, console.log() didn’t work inside a background page’s frame on Firefox, so I had to do this on Chrome.

Now let’s subscribe to the Tabs.PageTurn event:

parent.postMessage({
  mType: 0,
  source: "AAWishlistProcess",
  payload: {
    msgId: "test",
    mType: "rpcSendAndReceive",
    payload: {
      header: { messageType: 2, name: "subscribe", namespace: "PlatformHub" },
      data: { args: { eventName: "Tabs.PageTurn" } }
    }
  }
}, "*");

A message from PlatformHub comes in indicating that the call was successful ("error": null). Good. If we now open https://example.com/ in a new tab, three messages come in: the first indicating that the page is loading, the second that its title is now known, and finally the third indicating that the page has loaded:

{
  "mType": 0,
  "source": "PlatformHub",
  "payload": {
    "msgId": "3eee7d9b-ee2b-4f1d-be92-693119b5654c",
    "mType": "rpcSend",
    "payload": {
      "header": {
        "messageType": 2,
        "name": "publish",
        "namespace": "PlatformHub",
        "sourceProcessName": "Platform",
        "extensionStage": "prod"
      },
      "data": {
        "args": {
          "eventName": "Tabs.PageTurn",
          "eventArgs": {
            "tabId": "31",
            "url": "http://example.com/",
            "status": "complete",
            "title": "Example Domain"
          }
        }
      }
    }
  }
}

Yes, that’s essentially the tabs.onUpdated extension API exposed to a web page. The Tabs.onRemoved event works similarly; that’s the tabs.onRemoved extension API exposed.
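For comparison, here is a rough sketch (my own illustration, not code from Amazon Assistant) of the direct extension API usage that these messages mirror:

// What an extension with the tabs permission would normally do directly.
chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  // Fires for loading status, title changes and completed loads,
  // just like the Tabs.PageTurn messages above.
  console.log(tabId, changeInfo.status, tab.url, tab.title);
});

chrome.tabs.onRemoved.addListener(tabId => {
  // Corresponds to the Tabs.onRemoved event.
  console.log("tab closed:", tabId);
});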

Now let’s try calling Platform.getCookieInfo:

parent.postMessage({
  mType: 0,
  source: "AAWishlistProcess",
  payload: {
    msgId: "test",
    mType: "rpcSendAndReceive",
    payload: {
      header: { messageType: 1, name: "getCookieInfo", namespace: "Platform" },
      data: { args: { url: "https://www.google.com/", cookieName: "CONSENT" } }
    }
  }
}, "*");

A response comes in:

{
  "mType": 0,
  "source": "PlatformHub",
  "payload": {
    "msgId": "fefc2939-70c3-4138-8bb6-a6120b57e563",
    "mType": "rpcReply",
    "payload": {
      "cookieFound": true,
      "cookieInfo": {
        "name": "CONSENT",
        "domain": ".google.com",
        "value": "PENDING+376",
        "path": "/",
        "session": false,
        "expirationDate": 2145916800.121322
      }
    },
    "t": 1615035509370,
    "rMsgId": "test",
    "error": null
  }
}

Yes, that’s the CONSENT cookie I have on google.com. So that’s pretty much the cookies.get() extension API made available to this page.
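For comparison, the direct call behind this message presumably looks something like the following sketch (an assumption on my part, not Amazon Assistant’s code):

// Equivalent use of the cookies permission from extension code.
chrome.cookies.get(
  { url: "https://www.google.com/", name: "CONSENT" },
  cookie => {
    // cookie.name, cookie.domain, cookie.value, cookie.expirationDate, ...
    console.log(cookie);
  }
);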

Overview of functionality exposed to Amazon web services

Here are the Platform APIs that the extension allows Amazon web services to call:

  • getPlatformInfo, getFeatureList: Retrieves information about the extension and supported functionality
  • openNewTab: Opens a page in a new tab, not subject to the pop-up blocker
  • removeTab: Closes a given tab
  • getCookieInfo, bulkGetCookieInfo: Retrieves cookies for any website
  • createDesktopNotification: Displays a desktop notification
  • createContextMenuItem, deleteAllContextMenuItems, deleteContextMenuItemById: Manages extension’s context menu items
  • renderButtonText: Displays a “badge” on the extension’s icon (typically a number indicating unread messages)
  • getStorageValue, putStorageValue, deleteStorageValue, setPlatformCoreInfo, clearPlatformInfoCache, updatePlatformLocale, isTOUAccepted, acceptTermsOfUse, setSmileMode, setLocale, handleLegacyExternalMessage: Accesses extension storage/settings
  • getActiveTabInfo: Retrieves information about the current tab (tab ID, title, address)
  • createSandbox, createLocalSandbox, createSandboxById, modifySandbox, showSandbox, sendMessageToSandbox, instrumentSandbox, getSandboxAttribute, destroySandbox: Injects a frame (any address) into any tab and communicates with it
  • scrape, listenerSpecificationScrape, getPageReferrer, getPagePerformanceTimingData, getPageLocationData, getPageDimensionData, getUWLItem: Extracts data from any tab using various methods
  • registerAction, deregisterAction: Listens to an event on a particular element in any tab
  • applyStyle, resetStyle: Sets CSS styles on a particular element in any tab
  • instrumentWebpage: Queries information about the page in any tab, clicks elements, sends input and keydown events
  • createElement: Creates an element in any tab with given ID, class and styles
  • closePanel: Closes the extension’s drop-down panel
  • reloadExtension: Reloads the extension, installing any pending updates

And here are the interesting events it provides:

  • Tabs.PageTurn: Triggered on tab changes, contains tab ID, address, loading status, title
  • Tabs.onRemoved: Triggered when a tab is closed, contains tab ID
  • WebRequest.onBeforeRequest, WebRequest.onBeforeSendHeaders, WebRequest.onCompleted: Correspond to webRequest API listeners (this functionality is currently inactive, the extension has no webRequest permission)

Given the extension’s privileges, not much is missing here. The management permission is unused, as I mentioned before, so listing installed extensions isn’t possible. Cookie access is read-only, so setting cookies isn’t possible. And general webpage access appears to stop short of arbitrary code execution. But does it?

The createSandbox call can be used with any frame address; no checks are performed. This means that a javascript: address is possible as well. So if we run the following code in the context of aa-wishlist-process.html:

parent.postMessage({
  mType: 0,
  source: "AAWishlistProcess",
  payload: {
    msgId: "test",
    mType: "rpcSendAndReceive",
    payload: {
      header: { messageType: 1, name: "createSandbox", namespace: "Platform" },
      data: {
        args: {
          tabId: 31,
          sandboxSpecification: {
            proxy: "javascript:alert(document.domain)//",
            url: "test",
            sandboxCSSSpecification: "none"
          }
        }
      }
    }
  }
}, "*");

Yes, a message pops up indicating that this successfully executed JavaScript code in the context of the example.com domain. So there is at least one way for Amazon services to do anything with the web pages you visit. This particular attack worked only on Chrome however, not on Firefox.

Is there even another way?

As I already pointed out in a previous article, it’s hard to build a shopping assistant that wouldn’t receive all its configuration from some server. This makes shopping assistants generally a privacy hazard. So maybe this privacy and security disaster was unavoidable?

No, for the most part this isn’t the case. Amazon’s remote “processes” aren’t some server-side magic. They are merely static JavaScript files running in a frame. Putting these JavaScript files into the extension would have been possible with almost no code changes. And it would open up considerable potential for code simplification and performance improvements if Amazon is interested.

This design was probably justified with “we need this to deploy changes faster.” But is it really necessary? The FeatureManifest.js file mentioned above happens to contain update times of the components. Out of nine components, five had their last update five or six months ago. One was updated two months ago, another a month ago. Only two were updated recently (four and twelve days ago).

It seems that these components are maintained by different teams who work on different release schedules. But even if Amazon cannot align the release schedules here, this doesn’t look like packaging all the code with the extension would result in unreasonably frequent releases.

What’s the big deal?

Why does it make a difference where this code is located? It’s the same code doing the same things, whether it is immediately bundled with the extension or whether the extension merely downloads it from the web and gives it access to the necessary APIs, right?

Except: there is no way of knowing that it is always the same code. For example, there isn’t actually a single FeatureManifest.js file on the web but rather 15 of them, depending on your language. Similarly, there are 15 versions of the JavaScript files it references. Presumably, this is merely about adjusting download servers to the ones closer to you. The logic in all these files should be exactly identical. But I don’t have the resources to verify this, and maybe Amazon is extracting way more data for users in Brazil for example.

And this is merely what’s visible from the outside. What if some US government agency asks Amazon for the data of a particular user? Theoretically, Amazon can serve up a modified FeatureManifest.js file for that user only, one that gives them way more access. And this attack wouldn’t leave any traces whatsoever. No extension release where malicious code could theoretically be discovered. Nothing.

That’s the issue here: Amazon Assistant is an extension with very extensive privileges. How are these being used? If all logic were contained in the extension, we could analyze it. As things are right now however, all we can do is assume that everybody gets the same logic. But that’s really at Amazon’s sole discretion.

There is another aspect here. Even the regular functionality of Amazon Assistant is rather invasive, with the extension letting Amazon know of every website you visit as well as some of your search queries. In theory, the extension has settings to disable this functionality. In practice, it’s impossible to verify that the extension will always respect these settings.

Is this allowed?

If we are talking about legal boundaries such as GDPR, Amazon provides a privacy policy for Amazon Assistant. I’m no expert, but my understanding is that this meets the legal requirements, as long as what Amazon does matches this policy. For the law, it doesn’t matter what Amazon could do.

That’s different for browser vendors however who have an interest in keeping their extensions platform secure. Things are most straightforward for Mozilla, their add-on policies state:

Add-ons must be self-contained and not load remote code for execution

While, technically speaking, no remote code is being executed in extension context here, delegating all extension privileges to remote code makes no difference in practice. So Amazon Assistant clearly violates Mozilla’s policies, and we can expect Mozilla to enforce their policies here. With Honey, another shopping assistant violating this rule, the enforcement process is already in its fifth month, and the extension is still available on Mozilla Add-ons without any changes. Well, maybe at some point…

With Chrome Web Store things are rather fuzzy. The recently added policy states:

Your extension should avoid using remote code except where absolutely necessary. Extensions that use remote code will need extra scrutiny, resulting in longer review times. Extensions that call remote code and do not declare and justify it using the field shown above will be rejected.

This isn’t a real ban on remote code. Rather, remote code can be used where “absolutely necessary.” Extension authors then need to declare and justify remote code. So in the case of Amazon Assistant there are two possibilities: either the developers declared this usage of remote code and Google accepted it, or they didn’t declare it and Google didn’t notice remote code being loaded here. There is no way for us to know which is true, and so no way of knowing whether Google’s policies are being violated. This in turn means that there is no policy violation to be reported; we can only hope for Google to detect a policy violation on their own, something that couldn’t really be relied upon in the past.

Opera again is very clear in their Acceptance Criteria:

No external JavaScript is allowed. All JavaScript code must be contained in the extension. External APIs are ok.

Arguably, what we have here is way more than “external APIs.” So Amazon Assistant violates Opera’s policies as well and we can expect enforcement action here.

Finally, there is Microsoft Edge. The only related statement I could find in their policies reads:

For example, your extension should not download a remote script and subsequently run that script in a manner that is not consistent with the described functionality.

What exactly is consistent with the described functionality? Is Amazon Assistant delegating its privileges to remote scripts consistent with its description? I have really no idea. Not that there is a working way of reporting policy violations to Microsoft, so this is largely a theoretical discussion.

Conclusions

The Amazon Assistant extension requests a wide range of privileges in your browser. This in itself is neither untypical nor unjustified (for the most part). However, it then provides access to these privileges to several Amazon web services. In the worst case, this allows Amazon to get full information on the user’s browsing behavior, extract information about accounts they are logged into and even manipulate websites in an almost arbitrary way.

Amazon doesn’t appear to make use of these possibilities beyond what’s necessary for the extension functionality and covered by their privacy policy. With web content being dynamic, there is however no way of ensuring this. If Amazon is spying on a subgroup of their users (be it of their own accord or on behalf of some government agency), this attack would be almost impossible to detect.

That’s the reason why the rules for Mozilla Add-ons and Opera Add-ons websites explicitly prohibit such extension design. It’s possible that Chrome Web Store and Microsoft Store policies are violated as well here. We’ll have to see which browser vendors take action.

Tantek Çelik: One Year Since The #IndieWeb Homebrew Website Club Met In Person And Other Last Times

Saturday 6th of March 2021 02:43:00 AM

March 2021 is the second March in a row where so many of us are still in countries & cities doing our best to avoid getting sick (or worse), slow the spread, and otherwise living very different lives than we did in the before times. Every day here forward will be an anniversary of sorts for an unprecedented event, experience, change, or loss. Or the last time we did something. Rather than ignore them, it’s worth remembering what we had, what we used to do, both appreciating what we have lost (allowing ourselves to mourn), and considering potential upsides of adaptations we have made.

A year ago yesterday (2020-03-04) we hosted the last in-person Homebrew Website Club meetups in Nottingham (by Jamie Tanna in a café) and San Francisco (by me at Mozilla).

Normally I go into the office on Wednesdays but I had worked from home that morning. I took the bus (#5736) inbound to work in the afternoon, the last time I rode a bus. I set up a laptop on the podium in the main community room to show demos on the displays as usual.

Around 17:34 we kicked off our local Homebrew Website Club meetup with four of us which grew to seven before we took a photo. As usual we took turns taking notes in IRC during the meetup as participants demonstrated their websites, something new they had gotten working, ideas being developed, or inspiring independent websites they’d found.

Can you see the joy (maybe with a little goofiness, a little seriousness) in our faces?

We wrapped up the meeting, and as usual a few (or in this case two) of us decided to grab a bite and keep chatting. I did not even consider the possibility that it would be the last time I would see my office for over a year (still haven’t been back), and left my desk upstairs in whatever condition it happened to be. I remember thinking I’d likely be back in a couple days.

We walked a few blocks to Super Duper Burgers on Mission near Spear. That would be the last time I went to that Super Duper Burgers. Glad I decided to indulge in a chocolate milkshake.

Afterwards Katherine and I went to the Embarcadero MUNI station and took the outbound MUNI N-Judah light rail. I distinctly remember noticing people were quieter than usual on the train. There was a palpable sense of increased anxiety.

Instinctually I felt compelled to put on my mask, despite only two cases of Covid having been reported in San Francisco (of course now we know that it was already spreading, especially by the asymptomatic, undetected in the community). Later that night the total reported would be 6.

Yes I was carrying a mask in March of 2020. Since the previous 2+ years of seasonal fires and subsequent unpredictable days of unbreathable smoke in the Bay Area, I’ve traveled with a compact N-95 respirator in my backpack.

Side note: the CDC had yet to recommend that people wear masks. However I had been reading and watching enough global media to know that the accepted practice and recommendation in the East was quite different. It seemed people in Taiwan, China, and Hong Kong were already regularly wearing masks (including N95 respirators) in close public quarters such as transit. Since SARS had hit those regions much harder than the U.S. I figured they had learned from the experience and thus it made sense to follow their lead, not the CDC (which was already under pressure from a criminally incompetent neglectful administration to not scare people). Turned out my instinct (and analysis and conclusions based on watching & reading global behaviors) was more correct than the U.S. CDC at the time (they eventually got there).

Shortly after the train doors closed I donned my mask and checked the seals. The other useful advantage of a properly fitted N95 is that it won’t (shouldn’t) let in any funky public transit smells (perfume, patchouli, or worse), like none of it. No one blinked at seeing someone put on a mask.

We reached our disembarkation stop and stepped off. I put my mask away. We hugged and said our goodbyes. Didn’t think it would be the last time I’d ride MUNI light rail. Or hug a friend without a second thought.

Also posted on IndieNews.

The Firefox Frontier: Firefox B!tch to Boss extension takes the sting out of hostile comments directed at women online

Friday 5th of March 2021 04:00:10 PM

A great swathe of the internet is positive, a place where people come together to collaborate on ideas, discuss news and share moments of levity and sorrow, too. But there’s … Read more

The post Firefox B!tch to Boss extension takes the sting out of hostile comments directed at women online appeared first on The Firefox Frontier.

Robert Kaiser: Mozilla History Talk @ FOSDEM

Thursday 4th of March 2021 10:41:17 PM
The FOSDEM conference in Brussels has become a bit of a ritual for me. Ever since 2002, there has only been a single year of the conference that I missed, and any time I was there, I did take part in the Mozilla devroom - most years also with a talk, as you can see on my slides page.

This year, things were a bit different as for obvious reasons the conference couldn't bring together thousands of developers in Brussels but almost a month ago, in its usual spot, the conference took place in a virtual setting instead. The team did an incredibly good job of hosting this huge conference in a setting completely run on Free and Open Source Software, backed by Matrix (as explained in a great talk by Matthew Hodgson) and Jitsi (see talk by Saúl Ibarra Corretgé).

On short notice, I also added my bit to the conference - this time not talking about all the shiny new software, but diving into the past with "Mozilla History: 20+ Years And Counting". After that long a time that the project exists, I figured many people may not realize its origins and especially early history, so I tried to bring that to the audience, together with important milestones and projects on the way up to today.



The video of the talk has been available for a short time now, and if you are interested yourself in Mozilla's history, then it's surely worth a watch. Of course, my slides are online as well.
If you want to watch more videos to dig deeper into Mozilla history, I heavily recommend the Code Rush documentary from when Netscape initially open-sourced Mozilla (also an awesome time capsule of late-90s Silicon Valley) and a talk on early Mozilla history from Mitchell Baker that she gave at an all-hands in 2012.
The Firefox part of the history is also where my song "Rock Me Firefox" (demo recording on YouTube) starts off, for anyone who wants some music to go along with all this!

While my day-to-day work is in bleeding-edge Blockchain technology (like right now figuring out Ethereum Layer 2 technologies, like Optimism), it's sometimes nice to dig into the past and make sure history never forgets the name - Mozilla.

And, as I said in the talk, I hope Mozilla and its mission have at least another successful 20 years to go into the future!

This Week In Rust: This Week in Rust 380

Wednesday 3rd of March 2021 05:00:00 AM

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters this week.

Official Project/Tooling Updates Observations/Thoughts Rust Walkthroughs Miscellaneous Crate of the Week

This week's crate is camino, a library with UTF-8 coded paths mimicking std::path's API.

Thanks to piegames for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation this week

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

402 pull requests were merged in the last week

Rust Compiler Performance Triage

Quiet week, a couple regressions and several nice improvements.

Triage done by @simulacrum. Revision range: 301ad8..edeee

2 Regressions, 3 Improvements, 0 Mixed

0 of them in rollups

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs Tracking Issues & PRs New RFCs Upcoming Events Online North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Crown

Polymath

Tweede golf

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

It's a great example of the different attitudes of C/C++ and Rust: In C/C++ something is correct when someone can use it correctly, but in Rust something is correct when someone can't use it incorrectly.

/u/Janohard on /r/rust

Thanks to Vlad Frolov for the suggestion.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Andrew Halberstadt: DevOps at Mozilla

Tuesday 2nd of March 2021 04:00:00 PM

I first joined Mozilla as an intern in 2010 for the “Tools and Automation Team” (colloquially called the “A-Team”). I always had a bit of difficulty describing our role. We work on tests. But not the tests themselves, rather the thing that runs the tests. Also we make sure the tests run when code lands. Also we have this dashboard to view results, oh and also we do a bunch of miscellaneous developer productivity kind of things. Oh and sometimes we have to do other operational type things as well, but it varies.

Over the years the team grew to a peak of around 25 people and the A-Team’s responsibilities expanded to include things like the build system, version control, review tools and more. Combined with Release Engineering (RelEng), this covered almost all of the software development pipeline. The A-Team was eventually split up into many smaller teams. Over time those smaller teams were re-org’ed, split up further, merged and renamed over and over again. Many labels were applied to the departments that tended to contain those teams. Labels like “Developer Productivity”, “Platform Operations”, “Product Integrity” and “Engineering Effectiveness”.

Interestingly, from 2010 to present, one label that has never been applied to any of these teams is “DevOps”.

Andrew Halberstadt: A Better Terminal for Mozilla Build

Tuesday 2nd of March 2021 02:00:00 PM

If you’re working with mozilla-central on Windows and followed the official documentation, there’s a good chance the MozillaBuild shell is running in the default cmd.exe console. If you’ve spent any amount of time in this console you’ve also likely noticed it leaves a bit to be desired. Standard terminal features such as tabs, splits and themes are missing. More importantly, it doesn’t render unicode characters (at least out of the box).

Luckily Microsoft has developed a modern terminal that can replace cmd.exe, and getting it set up with MozillaBuild shell is simple.

Mozilla Privacy Blog: India’s new intermediary liability and digital media regulations will harm the open internet

Tuesday 2nd of March 2021 12:15:09 PM

Last week, in a sudden move that will have disastrous consequences for the open internet, the Indian government notified a new regime for intermediary liability and digital media regulation. Intermediary liability (or “safe harbor”) protections have been fundamental to growth and innovation on the internet as an open and secure medium of communication and commerce. By expanding the “due diligence” obligations that intermediaries will have to follow to avail safe harbor, these rules will harm end to end encryption, substantially increase surveillance, promote automated filtering and prompt a fragmentation of the internet that would harm users while failing to empower Indians. While many of the most onerous provisions only apply to “significant social media intermediaries” (a new classification scheme), the ripple effects of these provisions will have a devastating impact on freedom of expression, privacy and security.

As we explain below, the current rules are not fit-for-purpose and will have a series of unintended consequences on the health of the internet as a whole:

  • Traceability of Encrypted Content: Under the new rules, law enforcement agencies can demand that companies trace the ‘first originator’ of any message. Many popular services today deploy end-to-end encryption and do not store source information so as to enhance the security of their systems and the privacy they guarantee users. When the first originator is from outside India, the significant intermediary must identify the first originator within the country, making an already impossible task more difficult. This would essentially be a mandate requiring encrypted services to either store additional sensitive information and/or break end-to-end encryption, which would weaken overall security, harm privacy and contradict the principles of data minimization endorsed in the Ministry of Electronics and Information Technology’s (MeitY) draft of the data protection bill.
  • Harsh Content Take Down and Data Sharing Timelines: Short timelines of 36 hours for content take downs and 72 hours for the sharing of user data for all intermediaries pose significant implementation and freedom of expression challenges. Intermediaries, especially small and medium service providers, would not have sufficient time to analyze the requests or seek any further clarifications or other remedies under the current rules. This would likely create a perverse incentive to take down content and share user data without sufficient due process safeguards, with the fundamental right to privacy and freedom of expression (as we’ve said before) suffering as a result.
  • User Directed Take Downs of Non-Consensual Sexually Explicit Content and Morphed/Impersonated Content: All intermediaries have to remove or disable access to information within 24 hours of being notified by users or their representatives (not necessarily government agencies or courts) when it comes to non-consensual sexually explicit content (revenge pornography, etc.) and impersonation in an electronic form (deep fakes, etc.). While it attempts to solve for a legitimate and concerning issue, this solution is overbroad and goes against the landmark Shreya Singhal judgment, by the Indian Supreme Court, which had clarified in 2015 that companies would only be expected to remove content when directed by a court order or a government agency to do so.
  • Social Media User Verification: In a move that could be dangerous for the privacy and anonymity of internet users, the law contains a provision requiring significant intermediaries to provide the option for users to voluntarily verify their identities. This would likely entail users sharing phone numbers or sending photos of government issued IDs to the companies. This provision will incentivize the collection of sensitive personal data that are submitted for this verification, which can then be also used to profile and target users (the law does seem to require explicit consent to do so). This is not hypothetical conjecture – we have already seen phone numbers collected for security purposes being used for profiling. This provision will also increase the risk from data breaches and entrench power in the hands of large players in the social media and messaging space who can afford to build and maintain such verification systems. There is no evidence to prove that this measure will help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistle blowing and protection from stalkers.
  • Automated Filtering: While improved from its earlier iteration in the 2018 draft, the provisions to “endeavor” to carry out automated filtering for child sexual abuse materials (CSAM), non-consensual sexual acts and previously removed content apply to all significant social media intermediaries (including end to end encrypted messaging applications). These are likely fundamentally incompatible with end to end encryption and will weaken protections that millions of users have come to rely on in their daily lives by requiring companies to embed monitoring infrastructure in order to continuously surveil the activities of users with disastrous implications for freedom of expression and privacy.
  • Digital Media Regulation: In a surprising expansion of scope, the new rules also contain government registration and content take down provisions for online news websites, online news aggregators and curated audio-visual platforms. After some self regulatory stages, it essentially gives government agencies the ability to order the take down of news and current affairs content online by publishers (which are not intermediaries), with very few meaningful checks and balances against over reach.

The final rules do contain some improvements from the 2011 original law and the 2018 draft such as limiting the scope of some provisions to significant social media intermediaries, user and public transparency requirements, due process checks and balances around traceability requests, limiting the automated filtering provision and an explicit recognition of the “good samaritan” principle for voluntary enforcement of platform guidelines. In their overall scope, however, they are a dangerous precedent for internet regulation and need urgent reform.

Ultimately, illegal and harmful content on the web, the lack of sufficient accountability  and substandard responses to it undermine the overall health of the internet and as such, are a core concern for Mozilla. We have been at the forefront of these conversations globally (such as the UK, EU and even the 2018 version of this draft in India), pushing for approaches that manage the harms of illegal content online within a rights-protective framework. The regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, right to privacy, due process, etc), as well as crucial technical considerations (‘does the architecture of the internet render this type of measure possible or not’, etc). This is a delicate and critical balance, and not one that should be approached with blunt policy proposals.

These rules are already binding law, with the provisions for significant social media intermediaries coming into force 3 months from now (approximately late May 2021). Given the many new provisions in these rules, we recommend that they should be withdrawn and be accompanied by wide ranging and participatory consultations with all relevant stakeholders prior to notification.

The post India’s new intermediary liability and digital media regulations will harm the open internet appeared first on Open Policy & Advocacy.

Karl Dubost: Capping User Agent String - followup meeting

Tuesday 2nd of March 2021 02:27:00 AM

Web compatibility is about dealing with a constantly evolving biotope where things die slowly. And even when they disappear, they have contributed to the balance of the ecosystem and modified it in a way they keep their existence.

A couple of weeks ago, I mentioned the steps which have been taken about capping the User Agent String on macOS 11 for Web compatibility issues. Since then, Mozilla and Google organized a meeting to discuss the status and the issues related to this effort. We invited Apple, but probably too late to find someone who could participate in the meeting (my bad). The minutes of the meeting are publicly accessible.

Meeting Summary
  • Apple and Mozilla have both already shipped the macOS 11 UA capping
  • There is an intent to ship from Google, and Ken Russell is double-checking that they can move forward with the release that would align Chrome with Firefox and Safari.
  • Mozilla has not seen any obvious breakage since the change on the UA string. This is only deployed in nightly right now. Tantek: "My general philosophy is that the UA string has been abused for so long, freezing any part of it is a win."
  • Mozilla and Google agreed to find a venue for discussing more general public plans for UA reduction/freezing
Some additional news since the meeting
  • In Google’s intent to ship, some big queries on HTTP Archive are being run to check how widespread the issue is. There is an interesting comment from Yoav saying that "79.4% of Unity sites out there are broken in Chrome".
  • We are very close to having a place for working with other browser vendors on UA reduction and freezing. More news soon (hopefully).


Archived copy of the minutes

This is to preserve a copy of the minutes in case they are being defaced or changed.

Capping UA string
====
(Minutes will be public)

Present: Mike Taylor (Google), Karl Dubost (Mozilla), Chris Peterson (Mozilla), Aaron Tagliaboschi (Mozilla), Kenneth Russell (Google), Avi Drissman (Google), Tantek Çelik (Mozilla)

### Background

* Karl’s summary/history of the issue so far on https://www.otsukare.info/2021/02/15/capping-macos-user-agent
* What Apple/Safari currently does
  Safari caps the UA string to 10.15.7.
* What is Mozilla status so far
  Capped UA’s macOS version at 10.15 in Firefox 87 and soon ESR 78: https://bugzilla.mozilla.org/show_bug.cgi?id=1679929
  Capped Windows version to 10 (so we can control when and how we bump Firefox's Windows OS version if Microsoft ever bumps Windows's version): https://bugzilla.mozilla.org/show_bug.cgi?id=1693295

### What is Google status so far

Ken: We have 3 LGTMs on blink-dev, but some folks had concerns. We know there's broad breakage because of this issue. It's not just Unity, and it's spread across a lot of other sites. I think we should land this. Apple has already made this change. Our CL is ready to land.
Avi: I have no specific concerns. It aligns with our future goals. It is unfortunate it brings us ahead of our understood schedule.
Mike: Moz is on board. Google seems to be on board.
Kenneth: If there are any objections from the Chromium side, there has been plenty of time to react.
Mike: Have there any breakage reports for Mozilla after landing?
Karl: Not yet. I've seen a lot of reports related to Cloudinary, etc, which are a larger concern for Apple. For Firefox, there was no breakage with regards to the thing that was released last week. It's not yet in release. There's still plenty of time to back it out if needed.
Chris: Was there an issue for Duo Mobile?
Karl: We didn't have any reports like this. But we saw a mention online... from what I understood. Apple had modified the UA string to be 10.15. Then the OS evolved to 10.16. Duo had an issue with the disparity between 10.15.7 in the OS and 10.15.6 in the browser. Since then, they modifed and there's no other issue.
Karl: On the Firefox side, if we have breakage, we still have possibility to do site-specific UA interventions.
Ken: did you (tantek) have concerns about this change? The review sat there for a while.
Tantek: I didn't have any problems with the freezing MacOS version per the comments in our bugzilla on that. My general philosophy is that the UA string has been abused for so long, freezing any part of it is a win. I don't even think we need a 1:1 replacement for everything that's in there today.
Chris: The long review time was unrelated to reservations, we were sorting out ownership of the module.

### macOS 11.0 compat issues:

Unity’s UA parsing issue
Cloudinary’s Safari WebP issue
Firefox and Chrome send “Accept: image/webp” header.

### Recurring sync on UA Reduction efforts

Which public arena should we have this discussion?
Mike: we do have public plans for UA reduction/freezing. These might evolve. It would be cool to be able to meet in the future with other vendors and discuss about the options.
Chris & Karl: Usual standards forums would be good. People have opinions on venues.

Otsukare!

Karl Dubost: Capping macOS User Agent String on macOS 11

Tuesday 2nd of March 2021 02:27:00 AM

Update on 2021-03-02: Capping User Agent String - followup meeting

This is to keep track of and document the sequence of events related to macOS 11 and another cascade of breakages related to the change of user agent strings. There is no good solution. Once more, it shows how sniffing User Agent strings is both dangerous (future fail) and a source of issues.
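To illustrate the failure mode, here is a minimal sketch (my own example, not code from Unity or any affected site) of the kind of version sniffing that silently breaks once the reported macOS version no longer starts with "10":

// Fragile pattern: assumes the macOS version in the UA always begins with "10".
const ua = navigator.userAgent; // e.g. "... Intel Mac OS X 11.0 ..." on Big Sur
const match = ua.match(/Mac OS X 10[._](\d+)/);
if (match) {
  const minor = parseInt(match[1], 10); // 15 on Catalina
  // feature decisions based on `minor` ...
} else {
  // On macOS 11 the regular expression no longer matches, so this branch
  // (often an error path) runs instead and the page breaks.
}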

Brace for impact!

Capping macOS 11 version in User Agent History
  • 2020-06-25 OPENED WebKit 213622 - Safari 14 - User Agent string shows incorrect OS version

    A reporter claims it breaks many websites but without giving details about which websites. There's a mention of VP9:

    browser supports vp9

    I left a comment there to get more details.

  • 2020-09-15 OPENED WebKit 216593 - [macOS] Limit reported macOS release to 10.15 series.

    if (!osVersion.startsWith("10")) osVersion = "10_15_6"_s;

    With some comments in the review:

    preserve original OS version on older macOS at Charles's request

    I suspect this is Charles, the proxy app.

    2020-09-16 FIXED

  • 2020-10-05 OPENED WebKit 217364 - [macOS] Bump reported current shipping release UA to 10_15_7

    On macOS Catalina 10.15.7, Safari reports platform user agent with OS version 10_15_7. On macOS Big Sur 11.0, Safari reports platform user agent with OS version 10_15_6. It's a bit odd to have Big Sur report an older OS version than Catalina. Bump the reported current shipping release UA from 10_15_6 to 10_15_7.

    The issue here is that macOS 11 (Big Sur) reports an older version number than macOS 10.15 (Catalina), because the previous bug hardcoded the version string.

    if (!osVersion.startsWith("10")) osVersion = "10_15_7"_s;

    This is still hardcoded because of this comment:

    Catalina quality updates are done, so 10.15.7 is the last patch version. Security SUs from this point on won’t increment the patch version, and does not affect the user agent.

    2020-10-06 FIXED

  • 2020-10-11 Unity [WebGL][macOS] Builds do not run when using Big Sur

    UnityLoader.js is the culprit.

    They fixed it in January 2021(?). But there is a lot of legacy code running out there which could not be updated.

    Ironically, there’s no easy way to detect the Unity library in order to create a site intervention that would apply to all games with the issue. Capping the UA string will fix that.

  • 2020-11-30 OPENED Webkit 219346 - User-agent on macOS 11.0.1 reports as 10_15_6 which is older than latest Catalina release.

    It was closed as a duplicate of 217364, but there's an interesting description:

    Regression from 216593. That rev hard codes the User-Agent header to report MacOS X 10_15_6 on macOS 11.0+ which breaks Duo Security UA sniffing OS version check. Duo security check fails because latest version of macOS Catalina is 10.15.7 but 10.15.6 is being reported.

  • 2020-11-30 OPENED Gecko 1679929 - Cap the User-Agent string's reported macOS version at 10.15

    There is a patch for Gecko to cap the user agent string the same way that Apple does for Safari. This will solve the issue with Unity games which have been unable to adjust their source code to the new version of Unity.

    // Cap the reported macOS version at 10.15 (like Safari) to avoid breaking
    // sites that assume the UA's macOS version always begins with "10.".
    int uaVersion = (majorVersion >= 11 || minorVersion > 15) ? 15 : minorVersion;

    // Always return an "Intel" UA string, even on ARM64 macOS like Safari does.
    mOscpu = nsPrintfCString("Intel Mac OS X 10.%d", uaVersion);

    It should land very soon, this week (week 8, February 2021), on Firefox Nightly 87. We can then monitor if anything is breaking with this change.

  • 2020-12-04 OPENED Gecko 1680516 - [Apple Chip - ARM64 M1] Game is not loaded on Gamearter.com

    Older versions of Unity JS used to run games are broken when the macOS version is 10_11_0 in the user agent string of the browser.

    The Mozilla webcompat team proposed to fix this with a Site Intervention for gamearter specifically. This doesn't solve the other games breaking.

  • 2020-12-14 OPENED Gecko 1682238 - Override navigator.userAgent for gamearter.com on macOS 11.0

    A quick way to fix the issue on Firefox for gamearter was to release a site intervention by the Mozilla webcompat team:

    "use strict";

    /*
     * Bug 1682238 - Override navigator.userAgent for gamearter.com on macOS 11.0
     * Bug 1680516 - Game is not loaded on gamearter.com
     *
     * Unity < 2021.1.0a2 is unable to correctly parse User Agents with
     * "Mac OS X 11.0" in them, so let's override to "Mac OS X 10.16" instead
     * for now.
     */

    /* globals exportFunction */

    if (navigator.userAgent.includes("Mac OS X 11.")) {
      console.info(
        "The user agent has been overridden for compatibility reasons. See https://bugzilla.mozilla.org/show_bug.cgi?id=1680516 for details."
      );
      let originalUA = navigator.userAgent;
      Object.defineProperty(window.navigator.wrappedJSObject, "userAgent", {
        get: exportFunction(function() {
          return originalUA.replace(/Mac OS X 11\.(\d)+;/, "Mac OS X 10.16;");
        }, window),
        set: exportFunction(function() {}, window),
      });
    }
  • 2020-12-16 OPENED WebKit 219977 - WebP loading error in Safari on iOS 14.3

    In this comment, Cloudinary explains that they try to work around the system bug by means of UA detection.

    Cloudinary is attempting to work around this issue by turning off WebP support to affected clients.

    If this is indeed about the underlying OS frameworks, rather than the browser version, as far as we can tell it appeared sometime after MacOS 11.0.1 and before or in 11.1.0. All we have been able to narrow down on the iOS side is ≥14.0.

    If you have additional guidance on which versions of the OSes are affected, so that we can prevent Safari users from receiving broken images, it would be much appreciated!

    Eric Portis (Cloudinary) created some tests:
      • WebPs that break in iOS ≥ 14.3 & MacOS ≥ 11.1
      • Tiny WebP

    The issue seems to affect CloudFlare

  • 2021-01-05 OPENED WebKit WebP failures [ Big Sur ] fast/images/webp-as-image.html is failing

  • 2021-01-29 OPENED Blink 1171998 - Nearly all Unity WebGL games fail to run in Chrome on macOS 11 because of userAgent

  • 2021-02-06 OPENED Blink 1175225 - Cap the reported macOS version in the user-agent string at 10_15_7

    Colleagues at Mozilla, on the Firefox team, and Apple, on the Safari team, report that there is a long tail of websites broken by reporting the current macOS Big Sur version, e.g. 11_0_0, in the user agent string:

    Mac OS X 11_0_0

    and for this reason, as well as slightly improving user privacy, have decided to cap the reported OS version in the user agent string at 10.15.7:

    Mac OS X 10_15_7

  • 2021-02-09 Blink Intent to Ship: User Agent string: cap macOS version number to 10_15_7

    Ken Russell sent an intent to cap the macOS version in the Chrome (Blink) user agent string at 10_15_7, following in the footsteps of Apple and Mozilla. In the intent to ship there is a discussion about solving the issue with Client Hints. Sec-CH-UA-Platform-Version would be a possibility, but Client Hints is not yet deployed across browsers and there is not yet full consensus about it. It is a specification pushed by Google and partially implemented in Chrome.

    Masataka Yakura shared with me (thanks!) two threads on the webkit-dev mailing list: one from May 2020 and another from November 2020.

    In May, Maciej said:

    I think there’s a number of things in the spec that should be cleaned up before an implementation ships enabled by default, specifically around interop, privacy, and protection against UA lockouts. I know there are PRs in flight for some of these issues. I think it would be good to get more of the open issues to resolution before actually shipping this.

    And in November, Maciej did another round of spec review with one decisive issue.

    Note that Google released an Intent to Ship: Client Hints infrastructure and UA Client Hints on January 14, 2020, and that this was enabled a couple of days ago, on February 11, 2021.

And I'm pretty sure the story is not over. There will probably be more breakage and more unknown bugs.

Otsukare!

Aaron Klotz: 2019 Roundup: Part 1

Monday 1st of March 2021 07:50:00 PM

In my continuing efforts to get caught up on discussing my work, I am now commencing a roundup for 2019. I think I am going to structure this one slightly differently from the last one: I am going to try to segment this roundup by project.

Here is an index of all the entries in this series:

Porting the DLL Interceptor to AArch64

During early 2019, Mozilla was working to port Firefox to run on the new AArch64 builds of Windows. At our December 2018 all-hands, I brought up the necessity of including the DLL Interceptor in our porting efforts. Since no deed goes unpunished, I was put in charge of doing the work! [I’m actually kidding here; this project was right up my alley and I was happy to do it! – Aaron]

Before continuing, you might want to review my previous entry describing the Great Interceptor Refactoring of 2018, as this post revisits some of the concepts introduced there.

Let us review some DLL Interceptor terminology:

  • The target function is the function we want to hook (Note that this is a distinct concept from a branch target, which is also discussed in this post);
  • The hook function is our function that we want the intercepted target function to invoke;
  • The trampoline is a small chunk of executable code generated by the DLL interceptor that facilitates calling the target function’s original implementation.

On more than one occasion I had to field questions about why this work was even necessary for AArch64: there aren’t going to be many injected DLLs in a Win32 ecosystem running on a shiny new processor architecture! In fact, the DLL Interceptor is used for more than just facilitating the blocking of injected DLLs; we also use it for other purposes.

Not all of this work was done in one bug: some tasks were more urgent than others. I began this project by enumerating our extant uses of the interceptor to determine which instances were relevant to the new AArch64 port. I threw a record of each instance into a colour-coded spreadsheet, which proved to be very useful for tracking progress: Reds were “must fix” instances, yellows were “nice to have” instances, and greens were “fixed” instances. Coordinating with the milestones laid out by program management, I was able to assign each instance to a bucket which would help determine a total ordering for the various fixes. I landed the first set of changes in bug 1526383, and the second set in bug 1532470.

It was now time to sit down, download some AArch64 programming manuals, and take a look at what I was dealing with. While I have been messing around with x86 assembly since I was a teenager, my first exposure to RISC architectures was via the DLX architecture introduced by Hennessy and Patterson in their textbooks. While DLX was crafted specifically for educational purposes, it served for me as a great point of reference. When I was a student taking CS 241 at the University of Waterloo, we had to write a toy compiler that generated DLX code. That experience ended up saving me a lot of time when looking into AArch64! While the latter is definitely more sophisticated, I could clearly recognize analogs between the two architectures.

In some ways, targeting a RISC architecture greatly simplifies things: The DLL Interceptor only needs to concern itself with a small subset of the AArch64 instruction set: loads and branches. In fact, the DLL Interceptor’s AArch64 disassembler only looks for nine distinct instructions! As a bonus, since the instruction length is fixed, we can easily copy over verbatim any instructions that are not loads or branches!

On the other hand, one thing that increased the complexity of the port is that some branch instructions to relative addresses have maximum offsets. If we must branch farther than that maximum, we must take alternate measures. For example, in AArch64, an unconditional branch with an immediate offset must land in the range of ±128 MiB from the current program counter (the B instruction encodes a signed 26-bit word offset, i.e. ±2^27 bytes).

Why is this a problem, you ask? Well, Detours-style interception must overwrite the first several instructions of the target function. To write an absolute jump, we require at least 16 bytes: 4 for an LDR instruction, 4 for a BR instruction, and another 8 for the 64-bit absolute branch target address.
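To make that concrete, here is a minimal sketch (not the actual interceptor code; it assumes X16 is free to clobber, which is discussed below, and it ignores atomicity and instruction-cache flushing) of what such a 16-byte patch looks like:

    #include <cstdint>
    #include <cstring>

    // Illustrative sketch only: write a Detours-style absolute jump at the
    // start of an AArch64 target function. The encodings are "LDR X16, <pc+8>"
    // and "BR X16"; the 64-bit branch target follows as a literal.
    void WriteAbsoluteJumpPatch(uint8_t* aTarget, uintptr_t aHookAddress) {
      const uint32_t kLdrX16Literal = 0x58000050;  // LDR X16, <pc+8>
      const uint32_t kBrX16 = 0xD61F0200;          // BR X16
      std::memcpy(aTarget, &kLdrX16Literal, 4);
      std::memcpy(aTarget + 4, &kBrX16, 4);
      std::memcpy(aTarget + 8, &aHookAddress, 8);  // absolute branch target
    }

Four bytes of LDR, four of BR, and eight of literal: 16 bytes in total.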

Unfortunately, target functions may be really short! Some of the target functions that we need to patch consist only of a single 4-byte instruction!

In this case, our only option for patching the target is to use an immediate B instruction, but that only works if our hook function falls within that ±128MiB limit. If it does not, we need to construct a veneer. A veneer is a special trampoline whose location falls within the target range of a branch instruction. Its sole purpose is to provide an unconditional jump to the “real” desired branch target that lies outside of the range of the original branch. Using veneers, we can successfully hook a target function even if it is only one instruction (ie, 4 bytes) in length, and the hook function lies more than 128MiB away from it. The AArch64 Procedure Call Standard specifies X16 as a volatile register that is explicitly intended for use by veneers: veneers load an absolute target address into X16 (without needing to worry about whether or not they’re clobbering anything), and then unconditionally jump to it.
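Sketching the veneer path in the same illustrative spirit (again, this is not the real implementation; alignment checks, the ±128 MiB range check, atomicity, and cache flushing are all omitted):

    #include <cstdint>
    #include <cstring>

    // Illustrative sketch only: patch a short target with a single "B <veneer>"
    // and have the veneer forward to the hook function via X16.
    void PatchViaVeneer(uint8_t* aTarget, uint8_t* aVeneer,
                        uintptr_t aHookAddress) {
      // Veneer body: LDR X16, <pc+8>; BR X16; then the 64-bit hook address.
      const uint32_t kLdrX16Literal = 0x58000050;
      const uint32_t kBrX16 = 0xD61F0200;
      std::memcpy(aVeneer, &kLdrX16Literal, 4);
      std::memcpy(aVeneer + 4, &kBrX16, 4);
      std::memcpy(aVeneer + 8, &aHookAddress, 8);

      // The target itself needs only one instruction: an unconditional B whose
      // signed 26-bit word offset reaches the veneer (hence the ±128 MiB limit).
      const intptr_t offset = aVeneer - aTarget;
      const uint32_t branch =
          0x14000000 | (static_cast<uint32_t>(offset >> 2) & 0x03FFFFFF);
      std::memcpy(aTarget, &branch, 4);
    }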

Measuring Target Function Instruction Length

To determine how many instructions the target function has for us to work with, we make two passes over the target function’s code. The first pass simply counts how many instructions are available for patching (up to the 4 instruction maximum needed for absolute branches; we don’t really care beyond that).

The second pass actually populates the trampoline, builds the veneer (if necessary), and patches the target function.

Veneer Support

Since the DLL interceptor is already well-equipped to build trampolines, it did not take much effort to add support for constructing veneers. However, where to write out a veneer is just as important as what to write to a veneer.

Recall that we need our veneer to reside within ±128 MiB of an immediate branch. Therefore, we need to be able to exercise some control over where the trampoline memory for veneers is allocated. Until this point, our trampoline allocator had no need to care about this; I had to add this capability.

Adding Range-Aware VM Allocation

Firstly, I needed to make the MMPolicy classes range-aware: we need to be able to allocate trampoline space within acceptable distances from branch instructions.

Consider that, as described above, a branch instruction may have limits on the extents of its target. As data, this is easily formatted as a pivot (i.e., the PC at the location where the branch instruction is encountered), and a maximum distance in either direction from that pivot.

On the other hand, range-constrained memory allocation tends to work in terms of lower and upper bounds. I wrote a conversion method, MMPolicyBase::SpanFromPivotAndDistance, to convert between the two formats. In addition to format conversion, this method also constrains the resulting bounds such that they are above the 1 MiB mark of the process’ address space (to avoid reserving memory in VM regions that are sensitive to compatibility concerns), as well as below the maximum allowable user-mode VM address.
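A minimal sketch of that conversion, with hard-coded limits standing in for the values the real method derives from the OS, might look like this:

    #include <algorithm>
    #include <cstdint>

    struct Bounds {
      uintptr_t mLower;
      uintptr_t mUpper;
    };

    // Illustrative sketch only; the real MMPolicyBase::SpanFromPivotAndDistance
    // obtains its limits from the OS rather than hard-coding them.
    Bounds SpanFromPivotAndDistance(uintptr_t aPivot, uintptr_t aMaxDistance) {
      const uintptr_t kFloor = 0x100000;             // stay above the first 1 MiB
      const uintptr_t kCeiling = 0x7FFFFFFEFFFFull;  // assumed user-mode maximum
      const uintptr_t lower = aPivot > aMaxDistance ? aPivot - aMaxDistance : 0;
      const uintptr_t upper = aPivot + aMaxDistance; // overflow ignored here
      return {std::max(lower, kFloor), std::min(upper, kCeiling)};
    }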

Another issue with range-aware VM allocation is determining the location, within the allowable range, for the actual VM reservation. Ideally we would like the kernel’s memory manager to choose the best location for us: its holistic view of existing VM layout (not to mention ASLR) across all processes will provide superior VM reservations. On the other hand, the Win32 APIs that facilitate this are specific to Windows 10. When available, MMPolicyInProcess uses VirtualAlloc2 and MMPolicyOutOfProcess uses MapViewOfFile3. When we’re running on Windows versions where those APIs are not yet available, we need to fall back to finding and reserving our own range. The MMPolicyBase::FindRegion method handles this for us.

All of this logic is wrapped up in the MMPolicyBase::Reserve method. In addition to the desired VM size and range, the method also accepts two functors that wrap the OS APIs for reserving VM. Reserve uses those functors when available, otherwise it falls back to FindRegion to manually locate a suitable reservation.
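In outline, the flow looks something like the following simplified sketch; the real Reserve method's signature and the policy plumbing around it differ:

    #include <cstddef>
    #include <cstdint>
    #include <functional>

    // The functor wraps whichever OS API applies: VirtualAlloc2 for
    // MMPolicyInProcess, MapViewOfFile3 for MMPolicyOutOfProcess. An empty
    // functor models running on a Windows version without those APIs.
    using OsReserveFn =
        std::function<void*(size_t aSize, uintptr_t aLower, uintptr_t aUpper)>;

    // Stand-in for MMPolicyBase::FindRegion, stubbed out for the sketch.
    static void* FindRegionAndReserve(size_t, uintptr_t, uintptr_t) {
      return nullptr;
    }

    // Illustrative sketch only.
    void* Reserve(size_t aSize, uintptr_t aLower, uintptr_t aUpper,
                  const OsReserveFn& aOsReserve) {
      if (aOsReserve) {
        // Windows 10+: let the kernel pick the best spot within the bounds.
        return aOsReserve(aSize, aLower, aUpper);
      }
      // Older Windows: find and reserve a suitable range ourselves.
      return FindRegionAndReserve(aSize, aLower, aUpper);
    }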

Now that our memory management primitives were range-aware, I needed to shift my focus over to our VM sharing policies.

One impetus for the Great Interceptor Refactoring was to enable separate Interceptor instances to share a unified pool of VM for trampoline memory. To make this range-aware, I needed to make some additional changes to VMSharingPolicyShared. It would no longer be sufficient to assume that we could just share a single block of trampoline VM — we now needed to make the shared VM policy capable of potentially allocating multiple blocks of VM.

VMSharingPolicyShared now contains a mapping of ranges to VM blocks. If we request a reservation which an existing block satisfies, we re-use that block. On the other hand, if we require a range that is yet unsatisfied, then we need to allocate a new one. I admit that I kind of half-assed the implementation of the data structure we use for the mapping; I was too lazy to implement a fully-fledged interval tree. The current implementation is probably “good enough,” however it’s probably worth fixing at some point.
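Conceptually, the shared policy now behaves something like the sketch below (simplified, and not the actual VMSharingPolicyShared code; as noted, a proper interval tree would do this job better than a linear scan):

    #include <cstdint>
    #include <vector>

    // Illustrative sketch only: a flat list of reserved VM blocks keyed by the
    // range they were reserved to satisfy.
    class SharedTrampolineVM {
     public:
      void* GetOrReserve(uintptr_t aLower, uintptr_t aUpper) {
        for (const Block& block : mBlocks) {
          if (block.mLower >= aLower && block.mUpper <= aUpper) {
            return block.mBase;  // an existing block already satisfies the range
          }
        }
        void* base = ReserveBlockWithin(aLower, aUpper);
        mBlocks.push_back(Block{aLower, aUpper, base});
        return base;
      }

     private:
      struct Block {
        uintptr_t mLower;
        uintptr_t mUpper;
        void* mBase;
      };

      // Hypothetical helper standing in for the range-aware reservation above.
      static void* ReserveBlockWithin(uintptr_t, uintptr_t) { return nullptr; }

      std::vector<Block> mBlocks;
    };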

Finally, I added a new generic class, TrampolinePool, that acts as an abstraction of a reserved block of VM address space. The main interceptor code requests a pool by calling the VM sharing policy’s Reserve method, then it uses the pool to retrieve new Trampoline instances to be populated.

AArch64 Trampolines

It is much simpler to generate trampolines for AArch64 than it is for x86(-64). The most noteworthy addition to the Trampoline class is the WriteLoadLiteral method, which writes an absolute address into the trampoline’s literal pool, followed by writing an LDR instruction referencing that literal into the trampoline.

Thanks for reading! Coming up next time: My Untrusted Modules Opus.

Henri Sivonen: Rust Target Names Aren’t Passed to LLVM

Monday 1st of March 2021 10:06:49 AM

TL;DR: Rust’s i686-unknown-linux-gnu target requires SSE2 and, therefore, does not mean the same as GCC’s -march=i686. It is the responsibility of Linux distributions to use a target configuration that matches what they intend to support.

From time to time, claims that Rust is “not portable” flare up. “Not portable” generally means “LLVM does not support my retrocomputing hobby target.” This is mostly about dead ISAs like DEC Alpha. There is a side track about x86, though: the complaint that Rust’s default 32-bit x86 (glibc) Linux target does not support all x86 CPUs that are still supported by a given Linux distribution.

Upstream Rust ships with two preconfigured 32-bit x86 glibc Linux targets: The primary one has the kind of floating-point math that other ISAs have and requires SSE2. “Primary” here means that the Rust project considers this “guaranteed to work”. The secondary one does not require SSE2 and, therefore, works on even older CPUs but has floating-point math that differs from other ISAs. “Secondary” here means that the Rust project considers this only “guaranteed to build”. Conceptually, this is simple: x86 with SSE2 and x86 without SSE2. Pick the former if you can and the latter if you must.

The problem is that the x86 with SSE2 target is called i686-unknown-linux-gnu, but i686 is supposed to mean the P6 microarchitecture introduced in Pentium Pro. Pentium Pro didn’t have SSE2. Rust uses i686 as the first component of a target name to mean “default (for Rust) 32-bit x86”—not the P6 microarchitecture specifically. Notably, in the cases of macOS and Android, the baseline CPU is even higher despite the target name starting with i686, because those systems were introduced to x86 later. On the other hand, the x86 without SSE2 glibc Linux target is called i586-unknown-linux-gnu and it does target Pentium.

The key thing here is that i686-unknown-linux-gnu is not trying to capture the concept of i686 but is trying to capture the concept of x86 with SSE2 or x86 with the kind of floating-point math other ISAs have. If the target name started with e.g. i786, it would still not be the target that Linux distributions that target non-SSE2 x86 CPUs should use.

The second important thing to note is that the target name is an opaque string and the first component is not parsed and passed to LLVM. Instead, you need to look inside the target specification file to see what is passed to LLVM. There we see base.cpu = "pentium4".to_string(); for i686-unknown-linux-gnu. Notably, the target name existed first and the base CPU was revised subsequently in 2015.

But isn’t Rust still bad and misleading Linux distributions into believing that i686-unknown-linux-gnu targets pentiumpro instead of pentium4? Perhaps, but if distributions did minimal smoke testing of their binaries on the actual intended baseline hardware, it would be apparent that i686-unknown-linux-gnu is not the Rust target you want if you intend to support non-SSE2 CPUs. If no one bothers to test non-SSE2 CPUs as part of the release process, might that be a sign of it being the time to stop pretending to support non-SSE2 CPUs?

What if a distribution is unhappy with both pentium4 and pentium and really wants pentiumpro? Is it appropriate to blame it on Rust? No. A distribution has the freedom to mint e.g. a target like i686-distro-linux-gnu by copying and pasting the i586-unknown-linux-gnu target specification and changing the base CPU to pentiumpro and then build all its own packages with that target. What if that still generates an unwanted instruction somewhere? Chances would be that there would then be an LLVM bug, in which case it should be up to the retrocomputing community to fix it. At least it’s likely a smaller task than adding an LLVM back end for an entire ISA.

Should the upstream i686-unknown-linux-gnu target be renamed to something like i786-unknown-linux-gnu? Probably not. There are probably enough build configurations out there depending on it meaning what it means now that renaming would be disruptive. Also, baking the ISA extension level into the preconfigured target is not that nice in general, and a better solution is moving towards building the Rust standard library with the same compiler options as everything else, which makes the concept of preconfigured targets less necessary.

Jan-Erik Rediger: Three-year Moziversary

Monday 1st of March 2021 10:00:00 AM

Has it really been 3 years? I guess it has. I joined Mozilla as a Firefox Telemetry Engineer in March 2018, and I blogged about it twice already: 2019, 2020.

And now it's 2021. 2020 was nothing like I thought it would be, and yet still a lot like I said it would be last year at this point. It's been Glean all over the year, but instead of working from the office and occasionally meeting my team in person, it's been working from home for 12 months now.

In September of last year I officially became the Glean SDK tech lead and thus I'm now responsible for the technical direction and organisation of the whole project, but really this is a team effort. Throughout the year we made Project FOG happen. At least the code is there and we can start migrating now. It's far from finished of course.

I blogged about Glean in 2021 and we're already seeing the first pieces of that plan being implemented. After some reorganization the team I'm in grew again, so now we have Daniel, Anthony and Frank there as well (we're still Team "Browser Measurement III"). This brings two parts of the Glean project closer together (not that they were far apart).

There's more work ahead, interesting challenges to tackle and projects to support in migrating to Glean. To the next year and beyond!

Thank you

Thanks to my team: Bea, Chris, Travis, Alessio, Mike, Daniel, Anthony and Frank and also thanks to the bigger data engineering team within Mozilla. And thanks to all the other people at Mozilla I work with.

The Mozilla Blog: Notes on Addressing Supply Chain Vulnerabilities

Saturday 27th of February 2021 10:41:11 PM

Addressing Supply Chain Vulnerabilities

One of the unsung achievements of modern software development is the degree to which it has become componentized: not that long ago, when you wanted to write a piece of software you had to write pretty much the whole thing using whatever tools were provided by the language you were writing in, maybe with a few specialized libraries like OpenSSL. No longer. The combination of newer languages, Open Source development and easy-to-use package management systems like JavaScript’s npm or Rust’s Cargo/crates.io has revolutionized how people write software, making it standard practice to pull in third party libraries even for the simplest tasks; it’s not at all uncommon for programs to depend on hundreds or thousands of third party packages.

Supply Chain Attacks

While this new paradigm has revolutionized software development, it has also greatly increased the risk of supply chain attacks, in which an attacker compromises one of your dependencies and through that your software.[1] A famous example of this is provided by the 2018 compromise of the event-stream package to steal Bitcoin from people’s computers. The Register’s brief history provides a sense of the scale of the problem:

Ayrton Sparling, a computer science student at California State University, Fullerton (FallingSnow on GitHub), flagged the problem last week in a GitHub issues post. According to Sparling, a commit to the event-stream module added flatmap-stream as a dependency, which then included injection code targeting another package, ps-tree.

There are a number of ways in which an attacker might manage to inject malware into a package. In this case, what seems to have happened is that the original maintainer of event-stream was no longer working on it and someone else volunteered to take it over. Normally, that would be great, but here it seems that volunteer was malicious, so it’s not great.

Standards for Critical Packages

Recently, Eric Brewer, Rob Pike, Abhishek Arya, Anne Bertucio and Kim Lewandowski posted a proposal on the Google security blog for addressing vulnerabilities in Open Source software. They cover a number of issues including vulnerability management and security of compilation, and there’s a lot of good stuff here, but the part that has received the most attention is the suggestion that certain packages should be designated “critical”[2]:

For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.

These are good development practices, and ones we follow here at Mozilla, so I certainly encourage people to adopt them. However, trying to require them for critical software seems like it will have some problems.

It creates friction for the package developer

One of the real benefits of this new model of software development is that it’s low friction: it’s easy to develop a library and make it available — you just write it and put it up on a package repository like crates.io — and it’s easy to use those packages — you just add them to your build configuration. But then you’re successful and suddenly your package is widely used and gets deemed “critical” and now you have to put in place all kinds of new practices. It probably would be better if you did this, but what if you don’t? At this point your package is widely used — or it wouldn’t be critical — so what now?

It’s not enough

Even packages which are well maintained and have good development practices routinely have vulnerabilities. For example, Firefox recently released a new version that fixed a vulnerability in the popular ANGLE graphics engine, which is maintained by Google. Both Mozilla and Google follow the practices that this blog post recommends, but it’s just the case that people make mistakes. To (possibly mis)quote Steve Bellovin, “Software has bugs. Security-relevant software has security-relevant bugs”. So, while these practices are important to reduce the risk of vulnerabilities, we know they can’t eliminate them.

Of course this applies to inadvertent vulnerabilities, but what about malicious actors (though note that Brewer et al. observe that “Taking a step back, although supply-chain attacks are a risk, the vast majority of vulnerabilities are mundane and unintentional—honest errors made by well-intentioned developers.”)? It’s possible that some of their proposed changes (in particular forbidding anonymous authors) might have an impact here, but it’s really hard to see how this is actionable. What’s the standard for not being anonymous? That you have an e-mail address? A Web page? A DUNS number?[3] None of these seem particularly difficult for a dedicated attacker to fake and of course the more strict you make the requirements the more it’s a burden for the (vast majority) of legitimate developers.

I do want to acknowledge at this point that Brewer et al. clearly state that multiple layers of protection are needed and that it’s necessary to have robust mechanisms for handling vulnerability defenses. I agree with all that, I’m just less certain about this particular piece.

Redefining Critical

Part of the difficulty here is that there are two ways in which a piece of software can be “critical”:

  • It can do something which is inherently security sensitive (e.g., the OpenSSL SSL/TLS stack which is responsible for securing a huge fraction of Internet traffic).
  • It can be widely used (e.g., the Rust log crate), but not inherently that sensitive.

The vast majority of packages — widely used or not — fall into the second category: they do something important but that isn’t security critical. Unfortunately, because of the way that software is generally built, this doesn’t matter: even when software is built out of a pile of small components, when they’re packaged up into a single program, each component has all the privileges that that program has. So, for instance, suppose you include a component for doing statistical calculations: if that component is compromised nothing stops it from opening up files on your disk and stealing your passwords or Bitcoins or whatever. This is true whether the compromise is due to an inadvertent vulnerability or malware injected into the package: a problem in any component compromises the whole system.[4] Indeed, minor non-security components make attractive targets because they may not have had as much scrutiny as high profile security components.

Least Privilege in Practice: Better Sandboxing

When looked at from this perspective, it’s clear that we have a technology problem: There’s no good reason for individual components to have this much power. Rather, they should only have the capabilities they need to do the job they are intended to do (the technical term is least privilege); it’s just that the software tools we have don’t do a good job of providing this property. This is a situation which has long been recognized in complicated pieces of software like Web browsers, which employ a technique called “process sandboxing” (pioneered by Chrome) in which the code that interacts with the Web site is run in its own “sandbox” and has limited abilities to interact with your computer. When it wants to do something that it’s not allowed to do, it talks to the main Web browser code and asks it to do it for it, thus allowing that code to enforce the rules without being exposed to vulnerabilities in the rest of the browser.

Process sandboxing is an important and powerful tool, but it’s a heavyweight one; it’s not practical to separate out every subcomponent of a large program into its own process. The good news is that there are several recent technologies which do allow this kind of fine-grained sandboxing, both based on WebAssembly. For WebAssembly programs, nanoprocesses allow individual components to run in their own sandbox with component-specific access control lists. More recently, we have been experimenting with a technology called RLBox developed by researchers at UCSD, UT Austin, and Stanford which allows regular programs such as Firefox to run sandboxed components. The basic idea behind both of these is the same: use static compilation techniques to ensure that the component is memory-safe (i.e., cannot reach outside of itself to touch other parts of the program) and then give it only the capabilities it needs to do its job.
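To give a sense of what this looks like from the developer's side, here is a rough sketch of a call into a sandboxed library in the RLBox style, loosely following the public RLBox tutorial; the headers, macros, and the parse_header function are assumptions for illustration, and Firefox's actual integration differs:

    // Rough sketch loosely following the public RLBox tutorial; not Firefox code.
    #define RLBOX_SINGLE_THREADED_INVOCATIONS
    #define RLBOX_USE_STATIC_CALLS() rlbox_noop_sandbox_lookup_symbol
    #include "rlbox.hpp"
    #include "rlbox_noop_sandbox.hpp"

    using namespace rlbox;

    // A function exported by the (hypothetical) third-party library we sandbox.
    extern "C" int parse_header(int aLength);

    int CallSandboxedParser(int aLength) {
      rlbox_sandbox<rlbox_noop_sandbox> sandbox;
      sandbox.create_sandbox();

      // The result comes back "tainted": it cannot be used until it is verified.
      auto tainted = sandbox.invoke_sandbox_function(parse_header, aLength);
      int verified = tainted.copy_and_verify([](int aValue) {
        // Enforce our own invariants before trusting anything from the sandbox.
        return aValue >= 0 ? aValue : 0;
      });

      sandbox.destroy_sandbox();
      return verified;
    }

The point is less the specific API than the shape of it: the sandboxed code cannot reach out into the rest of the program, and everything it returns has to pass through an explicit verification step.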

Techniques like this point the way to a scalable technical approach for protecting yourself from third party components: each component is isolated in its own sandbox and comes with a list of the capabilities that it needs (often called a manifest) with the compiler enforcing that it has no other capabilities (this is not too dissimilar from — but much more granular than — the permissions that mobile applications request). This makes the problem of including a new component much simpler because you can just look at the capabilities it requests, without needing to verify that the code itself is behaving correctly.

Making Auditing Easier

While powerful, sandboxing itself — whether of the traditional process or WebAssembly variety — isn’t enough, for two reasons. First, the APIs that we have to work with aren’t sufficiently fine-grained. Consider the case of a component which is designed to let you open and process files on the disk; this necessarily needs to be able to open files, but what stops it from reading your Bitcoins instead of the files that the programmer wanted it to read? It might be possible to create a capability list that includes just reading certain files, but that’s not the API the operating system gives you, so now we need to invent something. There are a lot of cases like this, so things get complicated.

The second reason is that some components are critical because they perform critical functions. For instance, no matter how much you sandbox OpenSSL, you still have to worry about the fact that it’s handling your sensitive data, and so if compromised it might leak that. Fortunately, this class of critical components is smaller, but it’s non-zero.

This isn’t to say that sandboxing isn’t useful, merely that it’s insufficient. What we need is multiple layers of protection[5], with the first layer being procedural mechanisms to defend against code being compromised and the second layer being fine-grained sandboxing to contain the impact of compromise. As noted earlier, it seems problematic to put the burden of better processes on the developer of the component, especially when there are a large number of dependent projects, many of them very well funded.

Something we have been looking at internally at Mozilla is a way for those projects to tag the dependencies they use and depend on. The way that this would work is that each project would then be tagged with a set of other projects which used it (e.g., “Firefox uses this crate”). Then when you are considering using a component you could look to see who else uses it, which gives you some measure of confidence. Of course, you don’t know what sort of auditing those organizations do, but if you know that Project X is very security conscious and they use component Y, that should give you some level of confidence. This is really just automating something that already happens informally: people judge components by who else uses them. There are some obvious extensions here, for instance labelling specific versions, having indications of what kind of auditing the depending project did, allowing people to configure their build systems to automatically trust projects vouched for by some set of other projects and refuse to include unvouched projects, or maintaining a database of insecure versions (this is something the Brewer et al. proposal suggests too). The advantage of this kind of approach is that it puts the burden on the people benefitting from a project, rather than having some widely used project suddenly subject to a whole pile of new requirements which they may not be interested in meeting. This work is still in the exploratory stages, so reach out to me if you’re interested.

Obviously, this only works if people actually do some kind of due diligence prior to depending on a component. Here at Mozilla, we do that to some extent, though it’s not really practical to review every line of code in a giant package like WebRTC. There is some hope here as well: because modern languages such as Rust or Go are memory safe, it’s much easier to convince yourself that certain behaviors are impossible — even if the program has a defect — which makes it easier to audit.[6] Here too it’s possible to have clear manifests that describe what capabilities the program needs and verify (after some work) that those are accurate.

Summary

As I said at the beginning, Brewer et al. are definitely right to be worried about this kind of attack. It’s very convenient to be able to build on other people’s work, but the difficulty of ascertaining the quality of that work is an enormous problem[7]. Fortunately, we’re seeing a whole series of technological advancements that point the way to a solution without having to go back to the bad old days of writing everything yourself.

  1. Supply chain attacks can be mounted via a number of other mechanisms, but in this post, we are going to focus on this threat vector. ↩︎
  2. Where “critical” is defined by a somewhat complicated formula based roughly on the age of the project, how actively maintained it seems to be, how many other projects seem to use it, etc. It’s actually not clear to me that this metric is that good a predictor of criticality; it seems mostly to have the advantage that it’s possible to evaluate purely by looking at the code repository, but presumably one could develop a metric that would be good. ↩︎
  3. Experience with TLS Extended Validation certificates, which attempt to verify company identity, suggests that this level of identity is straightforward to fake. ↩︎
  4. Allan Schiffman used to call this phenomenon a “distributed single point of failure”. ↩︎
  5. The technical term here is defense in depth. ↩︎
  6. Even better are verifiable systems such as the HaCl* cryptographic library that Firefox depends on. HaCl* comes with a machine-checkable proof of correctness, which significantly reduces the need to audit all the code. Right now it’s only practical to do this kind of verification for relatively small programs, in large part because describing the specification that you are proving the program conforms to is hard, but the technology is rapidly getting better. ↩︎
  7. This is true even for basic quality reasons. Which of the two thousand ORMs for node is the best one to use? ↩︎

The post Notes on Addressing Supply Chain Vulnerabilities appeared first on The Mozilla Blog.

Mozilla Accessibility: 2021 Firefox Accessibility Roadmap Update

Friday 26th of February 2021 11:03:16 PM

We’ve spent the last couple of months finalizing our plans for 2021 and we’re ready to share them. What follows is a copy of the Firefox Accessibility Roadmap taken from the Mozilla Accessibility wiki.

Mozilla’s mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.

People with disabilities can experience huge benefits from technology but can also find it frustrating or worse, downright unusable. Mozilla’s Firefox accessibility team is committed to delivering products and services that are not just usable for people with disabilities, but a delight to use.

The Firefox accessibility (a11y) team will be spending much of 2021 re-building major pieces of our accessibility engine, the part of Firefox that powers screen readers and other assistive technologies.

While the current Firefox a11y engine has served us well for many years, new directions in browser architectures and operating systems coupled with the increasing complexity of the modern web means that some of Firefox’s venerable a11y engine needs a rebuild.

Browsers, including Firefox, once simple single-process applications, have become complex multi-process systems that have to move lots of data between processes, which can cause performance slowdowns. In order to ensure the best performance and stability, and to enable support for a growing, wider variety of accessibility tools in the future (such as Windows Narrator, Speech Recognition and Text Cursor Indicator), Firefox’s accessibility engine needs to be more robust and versatile. And where ATs used to spend significant resources ensuring a great experience across browsers, the dominance of one particular browser means fewer resources are being committed to ensuring that ATs work well with Firefox. This changing landscape means that Firefox too must evolve significantly, and that’s what we’re going to be doing in 2021.

The most important part of this rebuild of the Firefox accessibility engine is what we’re calling “cache the world”. Today, when an accessibility client wants to access web content, Firefox often has to send a request from its UI process to the web content process. Only a small amount of information is maintained in the UI process for faster response. Aside from the overhead of these requests, this can cause significant responsiveness problems, particularly if an accessibility client accesses many elements in the accessibility tree. The architecture we’re implementing this year will ameliorate these problems by sending the entire accessibility tree from the web content process to the UI process and keeping it up to date, ensuring that accessibility clients have the fastest possible response to their requests regardless of their complexity.

So that’s the biggest lift we’re planning for 2021 but that’s not all we’ll be doing. Firefox is always adding new features and adjusting existing features and the accessibility team will be spending significant effort ensuring that all of the Firefox changes are accessible. And we know we’re not perfect today so we’ll also be working through our backlog of defects, prioritizing and fixing the issues that cause the most severe problems for users with disabilities.

Firefox has a long history of providing great experiences for disabled people. To continue that legacy, we’re spending most of our resources this year on rebuilding core pieces of technology supporting those experiences. That means we won’t have the resources to tackle some issues we’d like to, but another piece of Firefox’s long history is that, through open source and open participation, you can help. This year, we can especially use your help identifying any new issues that take away from your experience as a disabled Firefox user, fixing high priority bugs that affect large numbers of disabled Firefox users, and spreading the word about the areas where Firefox excels as a browser for disabled users. Together, we can make 2021 a great year for Firefox accessibility.

The post 2021 Firefox Accessibility Roadmap Update appeared first on Mozilla Accessibility.

Firefox Nightly: These Weeks in Firefox: Issue 88

Friday 26th of February 2021 08:28:34 PM
Highlights
  • Firefox 86 went out on February 23rd! Here are the release notes.
  • The new, modern printing UI is rolling out to 100% of our release users in Firefox 86!
  • Devtools has a new UI for color scheme simulation (bug). This change was contributed by Sebo. The following media query can be used on the page to follow the current scheme value:
@media (prefers-color-scheme: dark) {
  body {
    background-color: #333;
    color: white;
  }
}

  • The new Devtools Performance panel is enabled by default in Nightly (bug)

  • About:welcome now supports a “Make Firefox My Primary Browser” button (bug). This button pins Firefox to the taskbar and sets it as the default browser.

Friends of the Firefox team

For contributions made from January 26, 2021 to February 23, 2021, inclusive.

Introductions/Shout-Outs

Resolved bugs (excluding employees)

Fixed more than one bug
  • Anurag Kalia :anuragkalia
  • msirringhaus
  • Itiel
  • Andrei Petcu
  • Sebastian Zartner [:sebo]
  • Sunday Mgbogu  [:digitalsimboja]
  • Tim Nguyen :ntim
  • Tom Schuster [:evilpie]
New contributors (

Hacks.Mozilla.Org: Here’s what’s happening with the Firefox Nightly logo

Friday 26th of February 2021 07:43:01 PM
Fox Gate

The internet was set on fire (pun intended) this week by what I’m calling ‘fox gate’, and chances are you might have seen a meme or two about the Firefox logo. Many people were pulling up for a battle royale because they thought we had scrubbed fox imagery from our browser.

This is definitely not happening.

The logo causing all the stir is one we created a while ago with input from our users. Back in 2019, we updated the Firefox browser logo and added the parent brand logo. 

What we learned throughout this is that many of our users aren’t actually using the browser, because then they’d know (no shade) that the beloved fox icon is alive and well in Firefox on your desktop.

Shameless plug – you can download the browser here

You can read more about how all this spiralled in the mini-case study on how the ‘fox gate’ misinformation spread online here.

Long story short, the fox is here to stay and for our Firefox Nightly users out there, we’re bringing back a very special version of an older logo, as a treat.

Our commitment to privacy and a safe and open web remains the same. We hope you enjoy the nightly version of the logo and take some time to read up on spotting misinformation and fake news.

 

The post Here’s what’s happening with the Firefox Nightly logo appeared first on Mozilla Hacks - the Web developer blog.

The Firefox Frontier: Remain Calm: the fox is still in the Firefox logo

Friday 26th of February 2021 06:00:27 PM

If you’ve been on the internet this week, chances are you might have seen a meme or two about the Firefox logo. And listen, that’s great news for us. Sure, … Read more

The post Remain Calm: the fox is still in the Firefox logo appeared first on The Firefox Frontier.

Patrick Cloke: django-querysetsequence 0.14 released!

Friday 26th of February 2021 05:31:00 PM

django-querysetsequence 0.14 has been released with support for Django 3.2 (and Python 3.9). django-querysetsequence is a Django package for treating multiple QuerySet instances as a single QuerySet; this can be useful for treating similar models as a single model. The QuerySetSequence class supports much of the API …

    Eight years ago, Samsung set out on a mission to build the most trusted and secure mobile devices in the world. With the introduction of our Samsung Knox platform at MWC in 2013, we put in place the key elements of hardware-based security that would help defend Samsung mobile devices and our customers’ data against increasingly sophisticated cyber threats. Samsung Knox has since evolved into more than a built-in security platform, now encompassing a full suite of mobile management tools for enterprise IT administrators. But our mobile product planners, developers and security engineers have remained laser-focused on answering the primary question: how do we remain a step ahead of hackers and keep our users safe at all times? [...] In the first days of Android, the main focus was building a more open and flexible mobile operating system. Security was state-of-the-art for the time, inherited from the world of Unix and mainframe computers. But from the start, it became clear that smartphones were different; they were the most personal computers anyone had ever built.