Planet Mozilla

Karl Dubost: Browser developer tools timeline


I was reading In a Land Before Dev Tools by Amber, and I thought: oh, missing from this history are the beautifully chiseled Opera Dragonfly and F12 for Internet Explorer. So let's see which of these things I myself didn't know.

(This is slightly larger than just browser devtools, but it is close to what web developers needed in order to work. If you feel there is a glaring omission in this list, send me an email: kdubost is working at

Note: without WebArchive, it would not have been possible to retrace all this history. The rust of the Web is still ongoing.


Daniel Stenberg: Using fixed port numbers for curl tests is now history!

Test suite basics

The curl test suite fires up a whole bunch of test servers for the various supported protocols, and then command lines using curl, or dedicated test apps using libcurl, are run against those servers to make sure curl is acting exactly as it is supposed to.

The curl test suite was introduced back in the year 2000 and has of course grown and been developed substantially since then.

Each test server is invoked and handles a specific protocol or set of protocols, so to make sure curl’s 24 transfer protocols are tested, a lot of test server instances are spun up. The test servers normally don’t shut down after each individual test but instead keep running in case there are more tests for that protocol for speedier operations. When all tests are completed, all the test servers are shut down again.

Port numbers

The protocols curl speaks all run over TCP or UDP, so each test server needs to listen on a dedicated port so that the tests can be invoked and curl can use the protocols correctly.

We originally set up the test suite to use port number 8990 as a default “base port”, with each subsequent server it invokes getting the next port number. A full round can use up to 30 or so ports.

The script needed to know which port numbers were in use so that it could invoke the tests against the correct ports. But port numbers are both a scarce resource and a resource that can only be used by one server at a time, so if you invoked the curl test suite a second time simultaneously on the same machine, it would cause havoc by attempting to use the same port numbers again.

Not to mention that when various users run the test suite on their machines, on servers or in CI jobs, just assuming or hoping that we could use a range of 30 fixed port numbers is error-prone and it would occasionally cause us trouble.

Dynamic port numbers

A few months ago, in April 2020, I started work on changing how the curl test suite uses port numbers for its test servers. Each test server would instead be fired up on a totally random port number and feed that number back to the test script, which would then run the tests against the actually used port.
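The curl test servers themselves are not written in Python; purely as an illustration of the general technique (not curl's actual code), here is a minimal Python sketch of how a server can be started on an OS-assigned port and report back the port it actually got:

```python
import socket

def start_test_server():
    """Bind to port 0 so the OS picks any free port, then report
    the actually assigned port back to the caller (the test script)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0 = "give me any free port"
    srv.listen()
    return srv, srv.getsockname()[1]  # the port the OS actually chose

srv, port = start_test_server()
print(f"test server listening on port {port}")
srv.close()
```

Because every invocation gets whatever port happens to be free, two test runs on the same machine can no longer collide.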

It took some rearranging of internal logic to make sure things still worked and that the comparison of the generated protocol data would still work. Then it was “just” a matter of converting all the test servers over to this new dynamic concept and making sure no old-style assumptions lingered in the test cases.

Some of the trickier changes for this new paradigm were the protocol parts that use the port number in data that curl base64-encodes and sends to the server.

Reaching the end of that journey

The final fixed port number in the curl test suite was removed when we merged PR 5783 on August 7, 2020. Now, every test in curl that needs a port number uses a dynamic port number.

Now we can run “make test” from multiple shells on the same machine in parallel without problems.

It allows us to improve the test suite further and maybe, for example, run multiple tests in parallel to reduce the total execution time. It will avoid future port collisions. A small but very good step forward.




Mozilla Privacy Blog: By embracing blockchain, a California bill takes the wrong step forward.

Thursday 6th of August 2020 09:29:24 PM

The California legislature is currently considering a bill directing a public board to pilot the use of blockchain-type tools to communicate Covid-19 test results and other medical records. We believe the bill unduly dictates one particular technical approach, and does so without considering the privacy, security, and equity risks it poses. We urge the California Senate to reconsider.

The bill in question is A.B. 2004, which would direct the Medical Board of California to create a pilot program using verifiable digital credentials as electronic patient records to communicate COVID-19 test results and other medical information. The bill seems like a well-intentioned attempt to use modern technology to address an important societal problem, the ongoing pandemic. However, by assuming the suitability of cryptography-based verifiable credential models for this purpose, rather than setting out technology-neutral principles and guidelines for the proposed pilot program, the bill would set a dangerous precedent by effectively legislating particular technology outcomes. Furthermore, the chosen direction risks exacerbating the potential for discrimination and exclusion, a lesson Mozilla has learned in our work on digital identity models being proposed around the world. While we appreciate the safeguards that have been introduced into the legislation in its current form, such as its limitations on law enforcement use, they are insufficient. A new approach is needed, one that maximizes public good while minimizing harms to privacy and risks of exclusion.

A.B. 2004 is grounded in large part on legislative findings that the verifiable credential models being explored by the World Wide Web Consortium (W3C) “show great promise” (in the bill’s words) as a technology for communicating sensitive health information. However, W3C’s standards should not be misconstrued as endorsement of any particular use-case. Mozilla is an active member of and participant in W3C, but does not support the W3C’s verifiable credentials work. From our perspective, this bill over-relies on the potential of verifiable credentials without unpacking the tradeoffs involved in applying them to the sensitive public health problems at hand. The bill also fails to appreciate the many limitations of blockchain technology in this context, as others have articulated.

Fortunately, this bill is designed as the start of a process, establishing a pilot program rather than committing to a long-term direction. However, a start in the wrong direction should nevertheless be avoided; it would spend time and resources we can’t spare. Tying digital real-world identities (almost certainly implicated in electronic patient records) to contact tracing solutions and, in time, vaccination and “other medical test results” is categorically concerning. Such a move risks creating new avenues for the discrimination and exclusion of vulnerable communities, who are already being disproportionately impacted by COVID-19. It sets a poor example for the rest of the United States and for the world.

At Mozilla, our view is that digital identity systems — for which verifiable credentials for medical status, the subject at issue here, are a stepping stone and test case — are a key real-world implementation challenge for central policy values of privacy, security, competition, and social inclusion. As lessons from India and Kenya have shown us, attempting to fix digital ID systems retroactively is a convoluted process that often lets real harms continue unabated for years. It’s therefore critical to embrace openness as a core methodology in system design. We published a white paper earlier this year to identify recommendations and guardrails to make an “open” ID system work in reality.

A better approach to developing the pilot program envisioned in this bill would establish design principles, guardrails, and outcome goals up front. It would not embrace any specific technical models in advance, but would treat feasible technology solutions equally, and set up a diverse working group to evaluate a broad range of approaches and paradigms. Importantly, the process should build in the possibility that no technical solution is suitable, even if this outcome forces policymakers back to the drawing board.

We stand with the Electronic Frontier Foundation and the ACLU of California in asking the California Senate to send A.B. 2004 back to the drawing board.

The post By embracing blockchain, a California bill takes the wrong step forward. appeared first on Open Policy & Advocacy.

Mozilla VR Blog: What's new in ECSY v0.4 and ECSY-THREE v0.1

Thursday 6th of August 2020 08:04:50 PM

We just released ECSY v0.4 and ECSY-THREE v0.1!

Since the initial release of ECSY we have been focusing on API stability and bug fixing as well as providing some features (such as components’ schemas) to improve the developer experience and provide better validation and descriptive errors when working in development mode.

The API layer hasn't changed that much since its original release but we introduced some new concepts:

Components schemas: Components are now required to have a schema (unless you are defining a TagComponent).
Defining a component schema is really simple: you just need to define the properties of your component, their types and, optionally, their default values:

class Velocity extends Component {}

Velocity.schema = {
  x: { type: Types.Number, default: 0 },
  y: { type: Types.Number, default: 0 }
};

We provide type definitions for the basic JavaScript types, and also, in ecsy-three, for most of the three.js data types, but you can create your own custom types too.

Defining a schema for a component will give you some benefits for free like:

  • Default copy, clone and reset implementations for the component, although you can define your own implementations of these functions for maximum performance.
  • Components pool to reuse instances to improve performance and GC stability.
  • Tooling such as validators or editors which could show specific widgets depending on the data type and value.
  • Serializers for components so we can store them in JSON or ArrayBuffers, useful for networking, tooling or multithreading.

Registering components: One of the requirements we introduced recently is that you must always register your components before using them, and that includes registering them before registering systems that use these components too. This was initially done to support an optimization to Queries, but we feel making this an explicit rule will also allow us to add further optimizations down the road, as well as just generally being easier to reason about.

Benchmarks: When building a framework like ECSY that is expected to be doing a lot of operations per frame, it’s hard to infer how a change in the implementation will impact the overall performance. So we introduced a benchmark command to run a set of tests and measure how long they take.

We can use this data to compare across different branches and get an estimate of how much specific changes affect performance.


We created ecsy-three to facilitate developing applications using ECSY and three.js, by providing a set of components and systems for interacting with three.js from ECSY. In this release it has gotten a major refactor: we removed all the “extras” from the main repository to focus on the “core” API, making it easy to use with three.js without adding unneeded abstractions.

We introduce extended implementations of ECSY's World and Entity classes (ECSYThreeWorld and ECSYThreeEntity), which provide some helpers for interacting with three.js' Object3Ds.
The main helper you will be using is Entity.addObject3DComponent(Object3D, parent), which adds the Object3DComponent to the entity holding a reference to a three.js Object3D, as well as adding extra tag components to identify the type of Object3D being added.

Please note that we have added tag components for almost every type of Object3D in three.js (Mesh, Light, Camera, Scene, …) but you can still add any others you may need.
For example:

// Create a three.js Mesh
let mesh = new THREE.Mesh(
  new THREE.BoxBufferGeometry(),
  new THREE.MeshBasicMaterial()
);

// Attach the mesh to a new ECSY entity
let entity = world.createEntity().addObject3DComponent(mesh);

If we inspect the entity, it will have the following components:

  • Object3DComponent with { value: mesh }
  • MeshTagComponent. Automatically added by addObject3DComponent

Then you can query for each one of these components in your systems and grab the Object3D by using entity.getObject3D(), which is an alias for entity.getComponent(Object3DComponent).value.

import { MeshTagComponent, Object3DComponent } from "ecsy-three";
import { System } from "ecsy";

class RandomColorSystem extends System {
  execute(delta) {
    this.queries.entities.results.forEach(entity => {
      // This will always return a Mesh since
      // we are querying for the MeshTagComponent
      const mesh = entity.getObject3D();
      mesh.material.color.setHex(Math.random() * 0xffffff);
    });
  }
}

RandomColorSystem.queries = {
  entities: { components: [MeshTagComponent, Object3DComponent] }
};

For removing Object3D components we also introduced entity.removeObject3DComponent(unparent), which gets rid of the Object3DComponent as well as all the tag components that were added automatically (for example, the MeshTagComponent in the previous example).

One important difference between adding or removing the Object3D components yourself or using these new helpers is that they can handle parenting/unparenting too.

For example the following code will attach the mesh to the scene (Internally it will be doing sceneEntity.getObject3D().add(mesh)):

let entity = world.createEntity().addObject3DComponent(mesh, sceneEntity);

When removing the Object3D component we can pass true as parameter to indicate that we want to unparent this Object3D from its current parent:

entity.removeObject3DComponent(true);

// The previous line is equivalent to:
entity.getObject3D().parent.remove(entity.getObject3D());

"Initialize" helper

Most ecsy & three.js applications will need a set of common components:

  • World
  • Camera
  • Scene
  • WebGLRenderer
  • Render loop

So we added a helper function initialize in ecsy-three that will create all these for you. It is completely optional, but is a great way to quickly bootstrap an application.

import { initialize } from "ecsy-three";

const { world, scene, camera, renderer } = initialize();

You can find a complete example using all these methods in the following glitch:

Developer tools

We updated the developer tools extension (Read about them) to support the newest version of ECSY core, but it should still be backward compatible with previous versions.
We fixed the remote debugging mode that allows you to run the extension in one browser while debugging an application in another browser or device (for example an Oculus Quest).
You can grab them on your favourite browser extension store:

The community

We are so happy with how the community has been helping us build ECSY, and with the projects that have emerged out of it.

We have a bunch of cool experiments around physics, networking, games, and boilerplates and bindings for pixijs, phaser, babylon, three.js, react, and many more!

It’s been especially useful for us to open a Discord where a lot of interesting discussions, about both ECSY and ECS as a paradigm, have happened.

What's next?

There is still a long road ahead, and we have a lot of features and projects in mind to keep improving the ECSY ecosystem, for example:

  • Revisit the current reactive queries implementation and design, especially the deferred removal step.
  • Continue to experiment with using ECSY in projects internally at Mozilla, like Hubs and Spoke.
  • Improving the sandbox and API examples
  • Keep adding new systems and components for higher level functionality: physics, teleport, networking, hands & controllers, …
  • Keep building new demos to showcase the features we are releasing

Please feel free to use our github repositories (ecsy, ecsy-three, ecsy-devtools) to follow the development, request new features or file issues on bugs you find. Also come participate in community discussions on our discourse forum and discord server.

Support.Mozilla.Org: Review of the year so far, and looking forward to the next 6 months.

Thursday 6th of August 2020 04:21:03 PM

In 2019 we started looking into our experiences and 2020 saw us release the new responsive redesign, a new AAQ flow, a finalized Firefox Accounts migration, and a few other minor tweaks. We have also performed a Python and Django upgrade carrying on with the foundational work that will allow us to grow and expand our support platform. This was a huge win for our team and the first time we have improved our experience in years! The team is working on tracking the impact and improvement to our overall user experience.

We also know that contributors in Support have had to deal with an old, sometimes very broken, toolset, and so we wanted to work on that this year. You may have already heard the updates from Kiki and Giulia through their monthly strategy updates. The research and opportunity identification the team did was hugely valuable, and the team identified onboarding as an immediate area for improvement. We are currently working through an improved onboarding process and look forward to implementing and launching ongoing work.

Apart from that, we’ve done quite a chunk of work on the Social Support side with the transition from Buffer Reply to Conversocial. The change had been planned since the beginning of this year, and we worked together with the Pocket team on the implementation. We’ve also collaborated closely with the marketing team to kick off the @FirefoxSupport Twitter account that we’ll be using to focus our Social Support community effort.

Now, the community managers are focusing on supporting the Fennec to Fenix migration. A community campaign to promote the Respond Tool is lined up in parallel with the migration rollout this week and will run until the end of August as we complete the rollout.

We plan to continue implementing the information architecture we developed late last year, which will improve our navigation and clean up a lot of the old categories that are clogging up our knowledge base editing tools. We’re also looking into redesigning our internal search architecture, re-implementing it from scratch, and expanding our search UI.

2020 is also the year we have decided to focus more on data. Roland and JR have been busy building out our product dashboards, all internal for now, and we are working on how to make some of this data publicly available. It is still a work in progress, but we hope to make this possible sometime in early 2021.

In the meantime, we welcome feedback, ideas, and suggestions. You can also fill out this form or reach out to Kiki/Giulia for questions. We hope you are all as excited about all the new things happening as we are!

Patrick, on behalf of the SUMO team

The Mozilla Blog: Virtual Tours of the Museum of the Fossilized Internet

Thursday 6th of August 2020 11:46:57 AM
Let’s brainstorm a sustainable future together.

Imagine: We are in the year 2050 and we’re opening the Museum of the Fossilized Internet, which commemorates two decades of a sustainable internet. The exhibition can now be viewed in social VR. Join an online tour and experience what the coal and oil-powered internet of the past was like.


Visit the Museum from home

In March 2020, Michelle Thorne and I announced office tours of the Museum of the Fossilized Internet as part of our new Sustainability programme. Then the pandemic hit, and we teamed up with the Mozilla Mixed Reality team to make it more accessible while also demonstrating the capabilities of social VR with Hubs.

We now welcome visitors to explore the museum at home through their browsers.

The museum was created to be a playful source of inspiration and an invitation to imagine more positive, sustainable futures. Here’s a demo tour to help you get started on your visit.

Video Production: Dan Fernie-Harper; Spoke scene: Liv Erickson and Christian Van Meurs; Tour support: Elgin-Skye McLaren.


Foresight workshops

But that’s not all. We are also building on the museum with a series of foresight workshops. Once we know what preferable, sustainable alternatives look like, we can start building towards them so that in a few years, this museum is no longer just a thought experiment, but real.

Our first foresight workshop will focus on policy with an emphasis on trustworthy AI. In a pilot, facilitators Michelle Thorne and Fieke Jansen will specifically focus on the strategic opportunity that arises because the European Commission is in the process of defining its AI strategy, climate agenda and COVID-19 recovery plans. Taken together, these allow the workshop to develop options to advance both trustworthy AI and climate justice.

More foresight workshops should and will follow. We are currently looking at businesses, technology, and the funders community as additional audiences. Updates will be available on the wiki.

You are also invited to join the sustainability team as well as our environmental champions on our Matrix instance to continue brainstorming sustainable futures. More updates on Mozilla’s journey towards sustainability will be shared here on the Mozilla Blog.

The post Virtual Tours of the Museum of the Fossilized Internet appeared first on The Mozilla Blog.

Data@Mozilla: Experimental integration Glean with Unity applications

Thursday 6th of August 2020 10:45:40 AM

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member Daosheng Mu!


You might have noticed that Firefox Reality PC Preview has been released in HTC’s Viveport store. It is a VR web browser that provides 2D overlay browsing alongside immersive content and supports web-based immersive experiences for PC-connected VR headsets. In order to easily deploy our product into the Viveport store, we take advantage of Unity to make our application launcher. That also brings another challenge: how to use Mozilla’s existing telemetry system.

As we know, the Glean SDK provides language bindings for several programming languages, including Kotlin, Swift, and Python. However, when it comes to supporting applications that use Unity as their development toolkit, no existing binding is available to help us. Unity does let users embed Python scripts in a Unity project through a Python interpreter; however, because Unity’s technology is based on the Mono framework, that is not the same as the familiar Python runtime for running Python scripts. So what we needed to find out was how to run Python on the .NET Framework, or more precisely on the Mono framework. For running Python scripts in the main process, IronPython is the only solution; however, it is only available for Python 2.7, and the Glean SDK Python language binding needs Python 3.6. Hence, we started our plan to develop a new Glean binding for C#.


Getting started

The Glean team and I started discussions about the requirements for running Glean in Unity and implementing a C# binding for Glean. We followed a minimum viable product strategy and defined very simple goals to evaluate whether the plan was workable. Technically, we only needed to send built-in and custom pings, as with the current Glean Python binding mechanism, and we were able to use just StringMetricType as the first metric in this experimental Unity project. Besides, we also noticed that the .NET Framework has various versions, and it was essential to consider compatibility with the Mono framework in Unity. Therefore, we decided the Glean C# binding would be based on .NET Standard 2.0. Thanks to these efficient MVP goals and the Glean team’s rapid development, we got our first alpha version of the C# binding very quickly. I really appreciate Alessio, Mike, Travis, and the other Glean team members. Their hard work made it happen so quickly, and they were patient with my concerns and requirements.


How it works

To begin, it is worth explaining how to integrate Glean into a Hello World C# application. We can either import the C# bindings source code from glean-core/csharp, or build csharp.sln from the Glean repository and then copy the generated Glean.dll into our own project. Then, add Glean.dll in your C# project’s Dependencies settings. Aside from this, we also need to copy the glean_ffi.dll that is produced after pulling Glean and running `cargo build`. Lastly, add the Serilog library to your project via NuGet. We can install it through the NuGet Package Manager Console as below:

PM> Install-Package Serilog

PM> Install-Package Serilog.Sinks.Console

Defining pings and metrics

Before we start to write our program, let’s design our metrics first. Based on the current capabilities of the Glean SDK C# language binding, we can create a custom ping and set a string-type metric on it. Then, at the end of the program, we will submit this custom ping, and the string metric will be collected and uploaded to our data server. The ping and metric description files are as below:
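As an illustrative sketch only (the ping name, metric name, URLs, and email below are hypothetical placeholders, not the values from our actual project), a pings.yaml describing one custom ping and a metrics.yaml with one string metric sent in it could look roughly like this:

```yaml
# pings.yaml -- registers a custom ping (the name "sample" is hypothetical)
$schema: moz://mozilla.org/schemas/glean/pings/1-0-0

sample:
  description: A custom ping submitted manually at the end of the run.
  include_client_id: true
  bugs:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=0  # placeholder
  data_reviews:
    - https://example.org/data-review                 # placeholder
  notification_emails:
    - nobody@example.org                              # placeholder

# metrics.yaml -- one string metric sent in the "sample" ping
$schema: moz://mozilla.org/schemas/glean/metrics/1-0-0

test:
  greeting:
    type: string
    description: A sample string metric.
    send_in_pings:
      - sample
    lifetime: ping
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0  # placeholder
    data_reviews:
      - https://example.org/data-review                 # placeholder
    notification_emails:
      - nobody@example.org                              # placeholder
    expires: never
```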

Testing and verifying it

Now, it is time for us to write our HelloWorld program.

As we can see, the code above is very straightforward. Although the Glean parser does not yet support C# bindings to help us generate metrics from the description files, we are still able to create these metrics manually in our program. One thing you might notice is Thread.Sleep(1000); at the bottom of main(). It pauses the current thread for 1000 milliseconds. Because this HelloWorld program is a console application, it will quit once there are no other operations, and we need this line to make sure the Glean ping uploader has enough time to finish uploading before the main process quits. We could also replace it with Console.ReadKey(); to let the user quit the application instead of having it close directly. The better solution would be to launch a separate process for the Glean uploader; this is a current limitation of our C# binding implementation, and we will work on it in the future. For the extended code you can go to glean/samples/csharp to see the other pieces; we will also use the same code in the Unity section.

After that, we would like to see whether our pings are uploaded to the data server successfully. We can set the Glean debug view environment variable by entering set GLEAN_DEBUG_VIEW_TAG=samplepings in our command-line tool; samplepings can be anything you would like to use as a tag for your pings. Then, run your executable file in the same command-line window. Finally, you can see the result in the Glean Debug Ping Viewer; it looks as below:

It looks great now. But you might notice that os_version does not look right if you are running it on a Windows 10 platform. This is because we use Environment.OSVersion to get the OS version, and it is not reliable. For the follow-up discussion, please see Bug 1653897. The solution is to add a manifest file and uncomment the lines for Windows 10 compatibility.


Your first Glean program in Unity

Now that we have covered the general C# parts, let’s move on to the main point of this article: using Glean in Unity. It is a little bit different, but not too complicated. First of all, open the C# solution that your Unity project creates for you. Because Unity needs more dependencies when using Glean, we let the Glean NuGet package install them for us in advance. As the Glean NuGet package instructions mention, install the package with the command below:

Install-Package Mozilla.Telemetry.Glean -Version 0.0.2-alpha

Then check your Packages folder; you should see that some packages have already been downloaded into it.

However, the Unity project builder is not able to recognize this Packages folder, so we need to move the libraries from these packages into the Unity Assets\Scripts\Plugins folder. You might notice that these libraries come in different runtime versions. The basic idea is that Unity is .NET Standard 2.0 compatible, so we can just grab them from the lib\netstandard2.0 folders. In addition, Unity allows users to distribute their builds to x86 and x86_64 platforms. Therefore, when moving the Glean FFI library into this Plugins folder, we have to put the x86 and x86_64 builds of glean_ffi.dll into Assets\Scripts\Plugins\x86 and Assets\Scripts\Plugins\x86_64 respectively. Fortunately, in Unity we don’t need to worry about the Windows compatibility manifest; Unity already handles it for us.

Now, we can start to copy the same code to its main C# script.

You might notice we didn’t add Thread.Sleep(1000); at the end of the start() function. That is because Unity applications are, in general, windowed applications, and the main way to exit one is by closing its window; hence, the Glean ping uploader has enough time to finish its task. However, if your application needs to use the Unity-specific API Application.Quit() to close the program, please make sure the main thread waits a couple of seconds in order to give the Glean ping uploader time to finish its job. For more details about this example code, please go to GleanUnity to see its source code.


Next steps

The C# binding is still at an initial stage. Although we already support a few metric types, many metric types and features are still unavailable compared to the other bindings. We also look forward to providing a better solution for off-main-process ping upload, so that the main thread no longer needs to wait for the Glean ping uploader before the main process can quit. We will keep working on it!

Cameron Kaiser: Google, nobody asked for a new Blogger interface

Thursday 6th of August 2020 04:13:40 AM

I'm writing this post in what Google is euphemistically referring to as an improvement. I don't understand this. I managed to ignore New Blogger for a few weeks but Google's ability to fark stuff up has the same air of inevitability as rotting corpses. Perhaps on mobile devices it's better, and even that is a matter of preference, but it's space-inefficient on desktop due to larger buttons and fonts, it's noticeably slower, it's buggy, and very soon it's going to be your only choice.

My biggest objection, however, is what they've done to the HTML editor. I'm probably the last person on earth to do so, but I write my posts in raw HTML. This was fine in the old Blogger interface which was basically a big freeform textbox you typed tags into manually. There was some means to intercept tags you didn't close, which was handy, and when you added elements from the toolbar you saw the HTML as it went in. Otherwise, WYTIWYG (what you typed is what you got). Since I personally use fairly limited markup and rely on the stylesheet for most everything, this worked well.

The new one is a line editor ... with indenting. Blogger has always really, really wanted you to use <p> as a container, even though a closing tag has never been required. But now, thanks to the indenter, if you insert a new paragraph then it starts indenting everything, including lines you've already typed, and there's no way to turn this off! Either you close every <p> tag immediately to defeat this behaviour, or you start using a lot of <br>s, which specifically defeats any means of semantic markup. (More about this in a moment.) First world problem? Absolutely. But I didn't ask for this "assistance" either, nor to require me to type additional unnecessary content to get around a dubious feature.

But wait, there's less! By switching into HTML view, you lose ($#@%!, stop indenting that line when I type emphasis tags!) the ability to insert hyperlinks, images or other media by any means other than manually typing them out. You can't even upload an image, let alone automatically insert the HTML boilerplate and edit it.

So switch into Compose view to actually do any of those things, and what happens? Like before, Blogger rewrites your document, but now this happens all the time because of what you can't do in HTML view. Certain arbitrarily-determined naughtytags(tm) like <em> become <i> (my screen-reader friends will be disappointed). All those container close tags that are unnecessary bloat suddenly appear. Oh, and watch out for that dubiously-named "Format HTML" button, the only special feature to appear in the HTML view, as opposed to anything actually useful. To defeat the HTML autocorrupt while I was checking things for this article, I actually copied and repasted my entire text multiple times so that Blogger would stop the hell messing with it. Who asked for this?? Clearly the designers of this travesty, assuming it isn't some cruel joke perpetrated by a sadistic UI anti-expert or a covert means to make people really cheesed off at Blogger so Google can claim no one uses it and shut it down, now intend HTML view to be strictly touch-up only, if that, and not a primary means of entering a post. Heaven forbid people should learn HTML anymore and try to write something efficient.

Oh, what else? It's slower, because of all the additional overhead (remember, it used to be just a big ol' box o' text that you just typed into, and a selection of mostly static elements making up the UI otherwise). Old Blogger was smart enough (or perhaps it was a happy accident) to know you already had a preview tab open and would send your preview there. New Blogger opens a new, unnecessary tab every time. The fonts and the buttons are bigger, but the icons are of similar size, defeating any reasonable argument about accessibility, and it just looks stupid on the G5 or the Talos II. There's lots of wasted empty space, too. This may reflect the contents of the crania of the people who worked on it, and apparently they don't care (I complained plenty of times before switching back, I expect no reply because they owe me nothing), so I feel no shame in abusing them.

Most of all, however, there is no added functionality. There is no workflow I know of that this makes better, and by removing stuff that used to work, demonstrably makes at least my own workflow worse.

So am I going to rage-quit Blogger? Well, no, at least not for the blogs I have that presently exist (feel free to visit, linked in the blogroll). I have years of documents here going back to TenFourFox's earliest inception in 2010, many of which are still very useful to vintage Power Mac users, and people know where to find them. It was the lowest effort move at the time to start a blog here and while Blogger wasn't futzing around with their own secret sauce it worked out well.

So, for future posts, my anticipated Rube Goldbergian nightmare is to use Compose view to load my images, copy the generated HTML off, type the rest of the tags manually in a text editor as God and Sir Tim intended and cut and paste it into a blank HTML view before New Blogger has a chance to mess with it. Hopefully they don't close the hole with paste not auto-indenting, for all that's holy. And if this is the future of Blogger, then if I have any future projects in mind, I think it's time for me to start self-hosting them and take a hike. Maybe this really is Google's way of getting this place to shut down.

(I actually liked New Coke, by the way.)

Mozilla VR Blog: Introducing Firefox Reality PC Preview

Wednesday 5th of August 2020 09:01:00 PM

Have you ever played a VR game and needed a tip for beating the game... but you didn’t want to take off your headset to find that solution? Or, have you wanted to watch videos while you played your game? Or, how about wanting to immerse yourself in a 360 video on Youtube?

Released today, Firefox Reality PC Preview enables you to do these things and more. This is the newest addition to the Firefox Reality family of products. Built upon the latest version of the well-known and trusted Firefox browser, Firefox Reality PC Preview works with tethered headsets as well as wireless headsets streaming from a PC.

<figcaption>Firefox Reality PC Preview in Viveport Origin environment</figcaption>

Firefox Reality PC Preview is a new VR web browser that provides 2D overlay browsing alongside immersive apps and supports web-based immersive experiences for PC-connected VR headsets. You can access the conventional web, floating in your virtual environment as you use the controller to interact with the web content. You can also watch 360 Videos from sites such as Vimeo or KaiXR, with many showcases of immersive experiences from around the world. With Firefox Reality PC Preview, you can explore 3D web content created with WebVR or WebXR, such as Mozilla Hubs, Sketchfab, and Hello WebXR!. And, of course, Firefox Reality PC Preview contains the same privacy and security that underpin regular Firefox on the desktop.

<figcaption>Browsing the web and playing 360 videos while inside Viveport Origin, and watching a gameplay video of Pirate Space Trainer on YouTube while playing it at the same time.</figcaption>

Firefox Reality PC Preview is available for download right now in HTC’s Viveport store and runs on any device accessible from the Viveport Store, including the HTC Vive Cosmos. Our Preview release is intended to give users the chance to test the features shared above. From this initial release, we plan to deliver updates that add more functionality and stability as well as expand to other VR platforms. We would love to get feedback about your experience with this Preview. Please install it and take it for a test drive!

<figcaption>Screen shot of Firefox Reality PC Preview in HTC Viveport Store</figcaption>

Daniel Stenberg: Upcoming Webinar: curl: How to Make Your First Code Contribution

Wednesday 5th of August 2020 04:36:36 PM

When: Aug 13, 2020 10:00 AM Pacific Time (US and Canada) (17:00 UTC)
Length: 30 minutes

Abstract: curl is a wildly popular and well-used open source tool and library, and is the result of more than 2,200 named contributors helping out. Over 800 individuals wrote at least one commit so far.

In this presentation, curl’s lead developer Daniel Stenberg talks about how any developer can proceed in order to get their first code contribution submitted and ultimately landed in the curl git repository: the approach to code and commits, style, editing, pull requests, using GitHub, etc. After you’ve seen this, you’ll know how to easily submit your improvement to curl and potentially end up running in ten billion installations world-wide.

Register in advance for this webinar:

Mozilla Attack & Defense: Understanding Web Security Checks in Firefox (Part 2)

Wednesday 5th of August 2020 10:53:35 AM


This is the second and final part of a blog post series that explains how Firefox implements Web Security fundamentals, like the Same-Origin Policy and Content-Security-Policy. While the first post explained Firefox security terminology and theoretical foundations, this second post covers how to log internal security information to the console in a human readable format. Ultimately, we hope to inspire new security research in the area of web security checks and to empower participants in our bug bounty program to do better, deeper work.

Generally, we encourage everyone to do their security testing in Firefox Nightly. That being said, the logging mechanisms described in this post work in all versions of Firefox – from self-built versions to the Nightly, Beta, Developer Edition, Release and ESR builds you may have installed locally already.

Logging Security Internals to the console

All Firefox versions ship with a logging framework that allows one to inspect the internals of Gecko (Firefox's rendering engine). In general, logging Firefox internals is as easy as defining an environment variable called MOZ_LOG. When testing on mobile, the document Configuring GeckoView describes and guides you through defining environment variables in GeckoView’s runtime environment. For printing web security checks we simply have to enable the logger named "CSMLog". You can also set MOZ_LOG_FILE if you want to write into a log file.

Defining MOZ_LOG="CSMLog:5" as an environment variable will print all security checks to the console, where the digit 5 indicates the highest log-level (Verbose) which naturally prints a lot of debugging information but in turn allows debugging program flow.
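
For example, on Linux or macOS the logger can be enabled from a shell before starting Firefox (paths and values here are illustrative):

```shell
export MOZ_LOG="CSMLog:5,raw"       # Verbose level 5; ",raw" drops the process/thread prefixes
export MOZ_LOG_FILE="/tmp/csm.log"  # optional: write the log to a file instead of stderr
firefox                             # start Firefox from this shell so it inherits the variables
```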

Understanding Security Load Information

Here is a log example when defining CSMLog:5 and opening in a new tab. For the sake of simplicity we removed line prefixes like [Parent 361301: Main Thread] which provide additional information around process type (parent/child), process id (pid) and the thread in which the logging occurred. These prefixes can also be omitted by appending ",raw" to the MOZ_LOG value.

V/CSMLog doContentSecurityCheck:
V/CSMLog   - channelURI:
V/CSMLog   - httpMethod: GET
D/CSMLog   - loadingPrincipal: nullptr
D/CSMLog   - triggeringPrincipal: SystemPrincipal
D/CSMLog   - principalToInherit: NullPrincipal
V/CSMLog   - redirectChain:
V/CSMLog   - internalContentPolicyType: TYPE_DOCUMENT
V/CSMLog   - externalContentPolicyType: TYPE_DOCUMENT
V/CSMLog   - upgradeInsecureRequests: false
V/CSMLog   - initialSecurityChecksDone: false
V/CSMLog   - allowDeprecatedSystemRequests: false
D/CSMLog   - CSP:
V/CSMLog   - securityFlags:
V/CSMLog     - SEC_ALLOW_CROSS_ORIGIN_SEC_CONTEXT_IS_NULL

Listing 1: Firefox Security Log (CSMLog:5) when opening in a new tab.

As of Firefox 80, every logged Web Security Check appears in the format from Listing 1. At the beginning of each line you see CSMLog which is the name of our logger and D/V are short for Debug/Verbose, which indicate the log level (4 is Debug, 5 is Verbose). The output format after this prefix is valid YAML and should be easy to parse automatically. These blocks of attributes are surrounded by #DebugDoContentSecurityCheck Begin and End for each and every outgoing request, which corresponds to the function name doContentSecurityCheck in the SecurityManager. This function contains all available information of the current security context, before a request is sent over the wire. Naturally, this does not include the security checks that can only happen after the response is received (e.g., X-Frame-Options, X-Content-Type-Options, etc.).
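
Because each line after the prefix is valid YAML, such a log can be post-processed with a few lines of script. The sketch below (hypothetical helper names; it only handles flat `- key: value` pairs, not nested lists such as securityFlags) strips the CSMLog prefix and collects the attributes into a dictionary:

```python
import re

def parse_csm_log(lines):
    """Collect the flat '- key: value' attributes of one security check into a dict."""
    info = {}
    for line in lines:
        # Drop the "V/CSMLog" / "D/CSMLog" logger prefix.
        stripped = re.sub(r"^[DV]/CSMLog\s*", "", line).strip()
        match = re.match(r"-\s*(\w+):\s*(.*)$", stripped)
        if match:
            info[match.group(1)] = match.group(2)
    return info

sample = [
    "V/CSMLog doContentSecurityCheck:",
    "V/CSMLog   - httpMethod: GET",
    "D/CSMLog   - triggeringPrincipal: SystemPrincipal",
]
print(parse_csm_log(sample))
```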

Most of the information below is readily available through the nsILoadInfo object that corresponds to the request:

channelURI
The URI about to be loaded and evaluated within this security check.

httpMethod (optional)
Indicates the HTTP request methods like e.g., GET, HEAD or POST and hence is only available for HTTP requests.

loadingPrincipal
In general, the loadingPrincipal reflects the security context (the Principal) where the result of this resource load will be used. In case of an image load for example, this would be the security context (ContentPrincipal) of the loading document. Since this is a top-level load, the loadingPrincipal is null because the result of the load is a new document.

triggeringPrincipal
The security context (reflected through a Principal) which triggered this load. For almost all subresource loads, the triggeringPrincipal and the loadingPrincipal are identical. Since the user entered the address in the URL bar, it is a user-triggered action which in Firefox is equivalent to a load triggered by the System and hence is using a SystemPrincipal as the triggeringPrincipal. For the sake of completeness, imagine a cross-origin CSS file which loads a background image. Then the triggeringPrincipal for that image load would be the security context (ContentPrincipal) of the CSS, but the loadingPrincipal would be the security context (ContentPrincipal) of the document where the image load will be used.

principalToInherit
The principalToInherit is only relevant to loads of type document (top-level loads) and type subdocument (e.g., iframe loads) and indicates the principal that will be inherited, e.g., when loading a data URI.

redirectChain
In case a load encounters a redirect (e.g. a 302 server side redirect) then the redirectChain contains all Principals (serialized into origin URI strings) of all the redirects this load went through.

internalContentPolicyType / externalContentPolicyType
Indicates the (internal and external) content policy type of the load. In that particular case, the load was of TYPE_DOCUMENT. The reason there is an internal and an external type is because Firefox needs to enforce different security mappings. For example, the separation of different script types allows precise mapping to e.g. CSP directives.

upgradeInsecureRequests
Indicates whether the request needs to be upgraded from HTTP to HTTPS before it hits the network due to the CSP directive upgrade-insecure-requests.

initialSecurityChecksDone
Whenever the ContentSecurityManager performs security checks the first time for a channel, then this bit gets flipped indicating that initial security checks on that channel have been completed. In turn, this flag causes certain security checks, like e.g. CORS, to be bypassed after a redirect.

allowDeprecatedSystemRequests
In one of our hardening efforts, we started to disallow web requests in system privileged contexts, when the triggeringPrincipal is of type SystemPrincipal (see CheckAllowLoadInSystemPrivilegedContext()). However, there are certain system requests where we allow that temporarily, e.g., for OCSP requests.

CSP
The Content Security Policy that needs to be enforced for this request. If there is no CSP to enforce, then the value for this field remains empty.

securityFlags
The securityFlags indicate what kind of security checks need to be performed for that channel. For example, whether to enforce the same-origin-policy or whether this load requires CORS.

Analyzing and Finding Security Bugs with Logging

Now that we know all the entries in a web security log, let’s dive a little deeper and investigate a historical bug and how it could have been found using the discussed logging capabilities. We will be looking at CVE-2019-17000, a limited CSP bypass from October 2019 that we fixed in Firefox 70. A Proof of Concept could look a bit like this:

<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; object-src data:; img-src 'none';"/>
<body>
The following object element is allowed.
<object data='data:text/html,<p>Object element has an image:<br>
  <img src="" alt="This image should not load"/><br>
  This text is underneath the image.'>
</object>
</body>

Listing 2: Document defining a CSP blocking all images and loading an <object> tag with a data: URI.

The document in Listing 2 comes with a Content-Security-Policy that allows loading object elements which point to a data URL (object-src data:), but denies all image loads (img-src ‘none’). A browser needs to perform various security checks to completely load this document: First, we’re loading the top level document in a new tab, which is allowed. Then, we adjust the document’s CSP given the <meta> element and load the <object> element according to the CSP (allowed in object-src). Given its data: URL, the <object> element is cross-origin (i.e., loaded using a NullPrincipal). Therefore, the inner <img> element needs to be loaded using a different, unique origin. However, the image load should still be forbidden based on the img-src ‘none’ directive of the parent document!

Let’s take a closer look at just the image request when logging in an unaffected version of Firefox:

doContentSecurityCheck:
  - channelURI:
  - httpMethod: GET
  - loadingPrincipal: NullPrincipal
  - triggeringPrincipal: NullPrincipal
  - principalToInherit: nullptr
  - redirectChain:
  - internalContentPolicyType: TYPE_INTERNAL_IMAGE
  - externalContentPolicyType: TYPE_IMAGE
  - upgradeInsecureRequests: true
  - initialSecurityChecksDone: false
  - allowDeprecatedSystemRequests: false
  - CSP:
    - default-src 'self'; object-src data:; img-src 'none'
  - securityFlags:
    - SEC_ALLOW_CROSS_ORIGIN_INHERITS_SEC_CONTEXT
    - SEC_ALLOW_CHROME

Listing 3: The request is loaded into a new Security Context (using a NullPrincipal), but the CSP is inherited.

But here is what it looked like in the vulnerable Firefox version 69:

Listing 4: Screenshot of the page opened in a vulnerable version of Firefox 69. Photo by Mikael Seegen.

So, what happened here – why did the image load?

We can investigate the erroneous behavior using our logging. Here’s a listing for the very same image request from Firefox 69:

doContentSecurityCheck:
  - channelURI:
  - httpMethod: GET
  - loadingPrincipal: NullPrincipal
  - triggeringPrincipal: NullPrincipal
  - principalToInherit: nullptr
  - redirectChain:
  - internalContentPolicyType: TYPE_INTERNAL_IMAGE
  - externalContentPolicyType: TYPE_IMAGE
  - upgradeInsecureRequests: true
  - initialSecurityChecksDone: false
  - CSP:
  - securityFlags:
    - SEC_ALLOW_CROSS_ORIGIN_INHERITS_SEC_CONTEXT
    - SEC_ALLOW_CHROME

Listing 5: The very same request as in Listing 3, but logged using Firefox 69.

Can you spot the difference?

In the vulnerable case (Listing 5), the security check was not aware of an effective CSP. Even though the <object>’s content does not inherit the origin of the page (it’s loaded with a data: URL after all) – it should inherit the CSP of the document.

An attacker could use a CSP bypass like this and target users on web pages that are susceptible to XSS or content injections. However, this bug was identified in a previous version of Firefox and has been fixed for all of our users since.

To summarize, using the provided logging mechanism allows us to effectively detect security problems by visual inspection. One could take it even further and generate graph structures for nested page loads. Using these graphs to observe where the security context (e.g., the CSP) changes can be a very powerful tool for runtime security analysis.

Going Forward

We have explained how to enable logging mechanisms within Firefox which allows for visual inspection of every web security check performed. We would like to point out that finding security flaws might be eligible for a bug bounty. Finally, we hope the provided instructions foster security research and in turn allow researchers, bug bounty hunters and generally everyone interested in web security to contribute to Mozilla and the Security of the Open Web.

Nicholas Nethercote: How to speed up the Rust compiler some more in 2020

Wednesday 5th of August 2020 02:41:52 AM

I last wrote in April about my work on speeding up the Rust compiler. Time for another update.

Weekly performance triage

First up is a process change: I have started doing weekly performance triage. Each Tuesday I have been looking at the performance results of all the PRs merged in the past week. For each PR that has regressed or improved performance by a non-negligible amount, I add a comment to the PR with a link to the measurements. I also gather these results into a weekly report, which is mentioned in This Week in Rust, and also looked at in the weekly compiler team meeting.

The goal of this is to ensure that regressions are caught quickly and appropriate action is taken, and to raise awareness of performance issues in general. It takes me about 45 minutes each time. The instructions are written in such a way that anyone can do it, though it will take a bit of practice for newcomers to become comfortable with the process. I have started sharing the task around, with Mark Rousskov doing the most recent triage.

This process change was inspired by the “Regressions prevented” section of an excellent blog post from Nikita Popov (a.k.a. nikic), about the work they have been doing to improve the speed of LLVM. (The process also takes some ideas from the Firefox Nightly crash triage that I set up a few years ago when I was leading Project Uptime.)

The speed of LLVM directly impacts the speed of rustc, because rustc uses LLVM for its backend. This is a big deal in practice. The upgrade to LLVM 10 caused some significant performance regressions for rustc, though enough other performance improvements landed around the same time that the relevant rustc release was still faster overall. However, thanks to nikic’s work, the upgrade to LLVM 11 will win back much of the performance lost in the upgrade to LLVM 10.

It seems that LLVM performance perhaps hasn’t received that much attention in the past, so I am pleased to see this new focus. Methodical performance work takes a lot of time and effort, and can’t effectively be done by a single person over the long-term. I strongly encourage those working on LLVM to make this a team effort, and anyone with the relevant skills and/or interest to get involved.

Better benchmarking and profiling

There have also been some major improvements to rustc-perf, the performance suite and harness that drives, and which is also used for local benchmarking and profiling.

#683: The command-line interface for the local benchmarking and profiling commands was ugly and confusing, so much so that one person mentioned on Zulip that they tried and failed to use them. We really want people to be doing local benchmarking and profiling, so I filed this issue and then implemented PRs #685 and #687 to fix it. To give you an idea of the improvement, the following shows the minimal commands to benchmark the entire suite.

# Old
target/release/collector --db <DB> bench_local --rustc <RUSTC> --cargo <CARGO> <ID>

# New
target/release/collector bench_local <RUSTC> <ID>

Full usage instructions are available in the README.

#675: Joshua Nelson added support for benchmarking rustdoc. This is good because rustdoc performance has received little attention in the past.

#699, #702, #727, #730: These PRs added some proper CI testing for the local benchmarking and profiling commands, which had a history of being unintentionally broken.

Mark Rousskov also made many small improvements to rustc-perf, including reducing the time it takes to run the suite, and improving the presentation of status information.


Last year I wrote about inlining and code bloat, and how they can have a major effect on compile times. I mentioned that tooling to measure code size would be helpful. So I was happy to learn about the wonderful cargo-llvm-lines, which measures how many lines of LLVM IR are generated for each function. The results can be surprising, because generic functions (especially common ones like Vec::push(), Option::map(), and Result::map_err()) can be instantiated dozens or even hundreds of times in a single crate.

I worked on multiple PRs involving cargo-llvm-lines.

#15: This PR added percentages to the output of cargo-llvm-lines, making it easier to tell how important each function’s contribution is to the total amount of code.

#20, #663: These PRs added support for cargo-llvm-lines within rustc-perf, which made it easy to measure the LLVM IR produced for the standard benchmarks.

#72013: RawVec::grow() is a function that gets called by Vec::push(). It’s a large generic function that deals with various cases relating to the growth of vectors. This PR moved most of the non-generic code into a separate non-generic function, for wins of up to 5%.

(Even after that PR, measurements show that the vector growth code still accounts for a non-trivial amount of code, and it feels like there is further room for improvement. I made several failed attempts to improve it further: #72189, #73912, #75093, #75129. Even though they reduced the amount of LLVM IR generated, they were performance losses. I suspect this is because these additional changes affected the inlining of some of these functions, which can be hot.)

#72166: This PR added some specialized Iterator methods  (for_each(), all(), any(), find(), find_map()) for slices, winning up to 9% on clap-rs, and up to 2% on various other benchmarks.

#72139: This PR added a direct implementation for Iterator::fold(), replacing the old implementation that called the more general Iterator::try_fold(). This won up to 2% on several benchmarks.

#73882: This PR streamlined the code in RawVec::allocate_in(), winning up to 1% on numerous benchmarks.

cargo-llvm-lines is also useful to application/crate authors. For example, Simon Sapin managed to speed up compilation of the largest crate in Servo by 28%! Install it with cargo install cargo-llvm-lines and then run it with cargo llvm-lines (for debug builds) or cargo llvm-lines --release (for release builds).


#71942: This PR shrank the LocalDecl type from 128 bytes to 56 bytes, reducing peak memory usage of a few benchmarks by a few percent.

#72227: If you push multiple elements onto an empty Vec it has to repeatedly reallocate memory. The growth strategy in use resulted in the following sequence of capacities: 0, 1, 2, 4, 8, 16, etc. “Tiny vecs are dumb”, so this PR changed it to 0, 4, 8, 16, etc., in most cases, which reduced the number of allocations done by rustc itself by 10% or more and sped up many benchmarks by up to 4%. In theory, the change could increase memory usage, but in practice it doesn’t.
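
The effect of that change can be observed from user code; this sketch (assuming a recent Rust toolchain; `capacity_steps` is an illustrative helper, and the exact capacities are an implementation detail rather than a guarantee) records each capacity the vector passes through:

```rust
// Record the distinct capacities a Vec<i32> goes through while pushing n elements.
fn capacity_steps(n: usize) -> Vec<usize> {
    let mut v: Vec<i32> = Vec::new();
    let mut steps = vec![v.capacity()]; // starts at 0: no allocation yet
    for i in 0..n {
        v.push(i as i32);
        if v.capacity() != *steps.last().unwrap() {
            steps.push(v.capacity());
        }
    }
    steps
}

fn main() {
    // After the change described above, the tiny 1- and 2-element steps are skipped.
    println!("{:?}", capacity_steps(17));
}
```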

#74214: This PR eliminated some symbol interner accesses, for wins of up to 0.5%.

#74310: This PR changed SparseBitSet to use an ArrayVec instead of a SmallVec for its storage, which is possible because the maximum length is known in advance, for wins of up to 1%.

#75133: This PR eliminated two fields in a struct that were only used in the printing of an error message in the case of an internal compiler error, for wins of up to 2%.

Speed changes since April 2019

Since my last blog post, changes in compile times have been mixed (table, graphs). It’s disappointing to not see a sea of green in the table results like last time, and there are many regressions that seem to be alarming. But it’s not as bad as it first seems! Understanding this requires knowing a bit about the details of the benchmark suite.

Most of the benchmarks that saw large percentage regressions are extremely short-running. (The benchmark descriptions help make this clearer.) For example, a non-incremental check build of helloworld went from 0.03s to 0.08s. (#70107 and #74682 are two major causes.) In practice, a tiny additional overhead of a few tens of milliseconds per crate isn’t going to be noticeable when many crates take seconds or tens of seconds to compile.

Among the “real-world” benchmarks, some of them saw mixed results (e.g. regex, ripgrep), while some of them saw clear improvement, some of which were large (e.g. clap-rs, style-servo, webrender, webrender-wrench).

With all that in mind, since my last post, the compiler is probably either no slower or somewhat faster for most real-world cases.

Another interesting data point about the speed of rustc over the long-term came from Hacker News: compilation of one project (lewton) got 2.5x faster over the past three years.

LLVM 11 hasn’t landed yet, so that will give some big improvements for real-world cases soon. Hopefully for my next post the results will be more uniformly positive.


Hacks.Mozilla.Org: Changes to SameSite Cookie Behavior – A Call to Action for Web Developers

Tuesday 4th of August 2020 02:45:24 PM

We are changing the default value of the SameSite attribute for cookies from None to Lax. This will greatly improve security for users. However, some web sites may depend (even unknowingly) on the old default, potentially resulting in breakage for those sites. At Mozilla, we are slowly introducing this change. And we are strongly encouraging all web developers to test their sites with the new default.


SameSite is an attribute on cookies that allows web developers to declare that a cookie should be restricted to a first-party, or same-site, context. The attribute can have any of the following values:

  • None – The browser will send cookies with both cross-site and same-site requests.
  • Strict – The browser will only send cookies for same-site requests (i.e., requests originating from the site that set the cookie).
  • Lax – Cookies will be withheld on cross-site requests (such as calls to load images or frames). However, cookies will be sent when a user navigates to the URL from an external site; for example, by following a link.
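
As a concrete illustration, here is how the three values might appear in Set-Cookie response headers (cookie names and values are made up for this example):

```
Set-Cookie: sessionid=38afes7a8; SameSite=Lax
Set-Cookie: csrftoken=9f2bce44; SameSite=Strict; Secure
Set-Cookie: widget=a3fWa; SameSite=None; Secure
```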

Currently, the absence of the SameSite attribute implies that cookies will be attached to any request for a given origin, no matter who initiated that request. This behavior is equivalent to setting SameSite=None. However, this “open by default” behavior leaves users vulnerable to Cross-Site Request Forgery (CSRF) attacks. In a CSRF attack, a malicious site attempts to use valid cookies from legitimate sites to carry out attacks.

Making the Web Safer

To protect users from CSRF attacks, browsers need to change the way cookies are handled. The two primary changes are:

  • When not specified, cookies will be treated as SameSite=Lax by default
  • Cookies that explicitly set SameSite=None in order to enable cross-site delivery must also set the Secure attribute. (In other words, they must require HTTPS.)

Web sites that depend on the old default behavior must now explicitly set the SameSite attribute to None. In addition, they are required to include the Secure attribute. Once this change is made inside of Firefox, if web sites fail to set SameSite correctly, it is possible those sites could break for users.

Introducing the Change

The new SameSite behavior has been the default in Firefox Nightly since Nightly 75 (February 2020). At Mozilla, we’ve been able to explore the implications of this change. Starting with Firefox 79 (June 2020), we rolled it out to 50% of the Firefox Beta user base. We want to monitor the scope of any potential breakage.

There is currently no timeline to ship this feature to the release channel of Firefox. We want to see that the Beta population is not seeing an unacceptable amount of site breakage—indicating most sites have adapted to the new default behavior. Since there is no exact definition of “breakage” and it can be difficult to determine via telemetry, we are watching for reports of site breakage in several channels (e.g. Bugzilla, social media, blogs).

Additionally, we’d like to see the proposal advance further in the IETF. As proponents of the open web, it is important that changes to the web ecosystem are properly standardized.

Industry Coordination

This is an industry-wide change for browsers and is not something Mozilla is undertaking alone. Google has been rolling this change out to Chrome users since February 2020, with SameSite=Lax being the default for a certain (unpublished) percentage of all their channels (release, beta, canary).

Mozilla is cooperating with Google to track and share reports of website breakage in our respective bug tracking databases. Together, we are encouraging all web developers to start explicitly setting the SameSite attribute as a best practice.

Call to Action for Web Developers

Testing in the Firefox Nightly and Beta channels has shown that website breakage does occur. While we have reached out to those sites we’ve encountered and encouraged them to set the SameSite attribute on their web properties, the web is clearly too big to do this on a case-by-case basis.

It is important that all web developers test their sites against this new default. This will prepare you for when both Firefox and Chrome browsers make the switch in their respective release channels.

Test your site in Firefox

To test in Firefox:

  1. Enable the new default behavior (works in any version past 75):
    1. In the URL bar, navigate to about:config. (accept the warning prompt, if shown).
    2. Type SameSite into the “Search Preference Name” bar.
    3. Set network.cookie.sameSite.laxByDefault to true using the toggle icon.
    4. Set network.cookie.sameSite.noneRequiresSecure to true using the toggle icon.
    5. Restart Firefox.
  2. Verify the browser is using the new SameSite default behavior:
    1. Navigate to
    2. Verify that all rows are green.

At this point, test your site thoroughly. In particular, pay attention to anything involving login flows, multiple domains, or cross-site embedded content (images, videos, etc.). For any flows involving POST requests, you should test with and without a long delay. This is because both Firefox and Chrome implement a two-minute threshold that permits newly created cookies without the SameSite attribute to be sent on top-level, cross-site POST requests (a common login flow).
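The two-minute mitigation described above can be modeled as a simple predicate. This is a simplified sketch of the behavior, not Firefox's or Chrome's actual implementation:

```python
import time

LAX_POST_GRACE_SECONDS = 120  # the two-minute threshold described above

def allowed_on_cross_site_post(cookie_creation_time, samesite=None, now=None):
    """Simplified model: may a cookie be sent on a top-level,
    cross-site POST request under the new defaults?"""
    now = time.time() if now is None else now
    if samesite == "None":
        return True   # explicitly opted in (must also be Secure)
    if samesite in ("Lax", "Strict"):
        return False  # Lax/Strict cookies are withheld from cross-site POSTs
    # No SameSite attribute: treated as Lax, except inside the grace window
    return (now - cookie_creation_time) < LAX_POST_GRACE_SECONDS
```

In this model, a login cookie minted seconds before the POST is still sent; the same cookie retested after a long delay is not, which is exactly why testing with and without a delay matters.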

Check your site for breakage

To see if your site is impacted by the new cookie behavior, examine the Firefox Web Console and look for either of these messages:

  • Cookie rejected because it has the “sameSite=none” attribute but is missing the “secure” attribute.
  • Cookie has “sameSite” policy set to “lax” because it is missing a “sameSite” attribute, and “sameSite=lax” is the default value for this attribute.

Seeing either of these messages does not necessarily mean your site will no longer work, as the new cookie behavior may not be important to your site’s functionality. It is critical, therefore, that each site test under the new conditions. Then, verify that the new SameSite behavior does not break anything. As a general rule, explicitly setting the SameSite attribute for cookies is the best way to guarantee that your site continues to function predictably.
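A quick way to audit your own responses is to scan each Set-Cookie header for the two problem cases the console warns about. A hypothetical helper (the function name and warning strings are illustrative, and the parsing is deliberately simplified):

```python
def audit_set_cookie(header_value):
    """Return warnings for one Set-Cookie header value, mirroring
    the two console messages above (simplified parsing)."""
    attrs = [p.strip().lower() for p in header_value.split(";")[1:]]
    samesite = next((a.split("=", 1)[1] for a in attrs
                     if a.startswith("samesite=")), None)
    secure = "secure" in attrs
    warnings = []
    if samesite == "none" and not secure:
        warnings.append("SameSite=None without Secure: cookie will be rejected")
    if samesite is None:
        warnings.append("No SameSite attribute: will default to Lax")
    return warnings
```

Run it over the headers your site actually emits; any warning marks a cookie whose behavior changes under the new default.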

Additional Resources

SameSite cookies explained

SameSite Cookies – Are you Ready?

MDN – SameSite Cookies and Common Warnings

Tracking Chrome’s rollout of the SameSite change


The post Changes to SameSite Cookie Behavior – A Call to Action for Web Developers appeared first on Mozilla Hacks - the Web developer blog.

Support.Mozilla.Org: New platform milestone completed: Python upgrade

Tuesday 4th of August 2020 02:07:41 PM

In 2020 a lot of the SUMO platform’s team work is focused on modernizing our support platform (Kitsune) and performing some foundational work that will allow us to grow and expand the platform. We have started this in H1 with the new Responsive and AAQ redesign. Last week we completed a new milestone: the Python/Django upgrade.

Why was this necessary

Kitsune was running on Python 2.7, meaning our core technology stack was on a version that is no longer supported. We needed to upgrade to at least Python 3.7 and, at the same time, move to the latest Django Long Term Support (LTS) version, 2.2.

What have we focused on

During the last couple of weeks our work focused on upgrading the platform’s code-base from Python 2.7 to Python 3.8. We also upgraded all the underlying libraries to their latest versions compatible with Python 3.8 and replaced incompatible libraries with compatible ones offering equivalent functionality. Furthermore, we upgraded Django to the latest LTS version, increased test coverage and improved developer tooling.
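The flavor of change involved is familiar from any 2-to-3 port. For example, Python 2's implicit byte/text mixing becomes explicit under Python 3 (a generic illustration, not Kitsune code):

```python
# Under Python 3, str is Unicode text and bytes are separate,
# so encoding for the wire or storage must be explicit.
title = "Pâte à choux"
encoded = title.encode("utf-8")   # bytes; non-ASCII chars take 2 bytes each
assert isinstance(encoded, bytes)
assert encoded.decode("utf-8") == title
print(f"{title!r} encodes to {len(encoded)} bytes")  # f-strings: Python 3.6+
```

Multiplied across a large code base, plus library replacements, this is the bulk of the mechanical work in such an upgrade.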

What’s next

In H2 2020, we’re continuing the work on platform modernization, our next milestone being the full redesign of our search architecture (including an upgrade of the ElasticSearch service and a reimplementation of the search functionality from scratch). With this we are also looking into expanding our Search UI and adding new features to offer a better internal search experience to our users.

The Mozilla Blog: Latest Firefox rolls out Enhanced Tracking Protection 2.0; blocking redirect trackers by default

Tuesday 4th of August 2020 01:05:08 PM

Today, Firefox is introducing Enhanced Tracking Protection (ETP) 2.0, our next step in continuing to provide a safe and private experience for our users. ETP 2.0 protects you from an advanced tracking technique called redirect tracking, also known as bounce tracking. We will be rolling out ETP 2.0 over the next couple of weeks.

Last year we enabled ETP by default in Firefox because we believe that understanding the complexities and sophistication of the ad tracking industry should not be required to be safe online. ETP 1.0 was our first major step in fulfilling that commitment to users. Since we enabled ETP by default, we’ve blocked 3.4 trillion tracking cookies. With ETP 2.0, Firefox brings an additional level of privacy protection to the browser.

Since the introduction of ETP, ad industry technology has found other ways to track users: creating workarounds and new ways to collect your data in order to identify you as you browse the web. Redirect tracking goes around Firefox’s built-in third-party cookie-blocking policy by passing you through the tracker’s site before landing on your desired website. This enables them to see where you came from and where you are going.

Firefox deletes tracking cookies every day

With ETP 2.0, Firefox users will now be protected against these methods, as it checks every day whether cookies and site data from those trackers need to be deleted. ETP 2.0 stops known trackers from accessing your information, even those whose sites you may have visited inadvertently. ETP 2.0 clears cookies and site data from tracking sites every 24 hours.

Sometimes trackers do more than just track. They may also offer services you engage with, such as a search engine or social network. If Firefox cleared cookies for these services we’d end up logging you out of your email or social network every day, so we don’t clear cookies from sites you have interacted with in the past 45 days, even if they are trackers. This way you don’t lose the benefits of the cookies that keep you logged in on sites you frequent, and you don’t open yourself up to being tracked indefinitely based on a site you’ve visited once. To read the technical details about how this works, visit our Security Blog post.

What does this all mean for you? You can simply continue to browse the web with Firefox. We are doing more to protect your privacy, automatically. Without needing to change a setting or preference, this new protection deletes cookies that use workarounds to track you so you can rest easy.

Check out and download the latest version of Firefox available here.

The post Latest Firefox rolls out Enhanced Tracking Protection 2.0; blocking redirect trackers by default appeared first on The Mozilla Blog.

Mozilla Security Blog: Firefox 79 includes protections against redirect tracking

Tuesday 4th of August 2020 01:00:55 PM

A little over a year ago we enabled Enhanced Tracking Protection (ETP) by default in Firefox. We did so because we recognize that tracking poses a threat to society, user safety, and the autonomy of individuals and we’re committed to protecting users against these threats by default. ETP was our first step in fulfilling that commitment, but the web provides many covert avenues trackers can use to continue their data collection.

Today’s Firefox release introduces the next step in providing a safer and more private experience for our users with Enhanced Tracking Protection 2.0, where we will block a new advanced tracking technique called redirect tracking, also known as bounce tracking. ETP 2.0 clears cookies and site data from tracking sites every 24 hours, except for those you regularly interact with. We’ll be rolling ETP 2.0 out to all Firefox users over the course of the next few weeks.

What is “redirect” tracking?

When we browse the web we constantly navigate between websites; we might search for “best running shoes” on a search engine, click a result to read reviews, and finally click a link to buy a pair of shoes from an online store. In the past, each of these websites could embed resources from the same tracker, and the tracker could use its cookies to link all of these page visits to the same person. To protect your privacy ETP 1.0 blocks trackers from using cookies when they are embedded in a third party context, but still allows them to use cookies as a first party because blocking first party cookies causes websites to break. Redirect tracking takes advantage of this to circumvent third-party cookie blocking.
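The ETP 1.0 policy described here reduces to one decision: a known tracker loses cookie access only when it is loaded in a third-party context. A simplified sketch of that rule (not Firefox's implementation; the hostnames are made up):

```python
KNOWN_TRACKERS = {"tracker.example"}  # stand-in for the tracking-protection list

def cookies_allowed(cookie_host, top_level_host):
    """ETP 1.0 model: trackers lose cookie access as a third party,
    but keep it when loaded as the first party."""
    is_third_party = cookie_host != top_level_host
    if cookie_host in KNOWN_TRACKERS and is_third_party:
        return False
    return True
```

tracker.example embedded on shop.example is blocked, but tracker.example visited directly (as the first party) is not, and that first-party gap is precisely what redirect tracking exploits.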

Redirect trackers work by forcing you to make an imperceptible and momentary stopover on their website as part of that journey. So instead of navigating directly from the review website to the retailer, you first pass through the redirect tracker. This means the tracker is loaded as a first party and is therefore allowed to store cookies. The redirect tracker associates tracking data with the identifiers it has stored in its first-party cookies and then forwards you to the retailer.

A step-by-step explanation of redirect tracking: 

Let’s say you’re browsing a product review website and you click a link to purchase a pair of shoes from an online retailer. A few seconds later Firefox navigates to the retailer’s website and the product page loads. Nothing looks out of place to you, but behind the scenes you were tracked using redirect tracking. Here’s how it happened:

  • Step 1: On the review website you click a link that appears to take you to the retail site. The URL that was visible when you hovered over the link belonged to the retail site.
  • Step 2: A redirect tracker embedded in the review site intercepts your click and sends you to their website instead. The tracker also saves the intended destination—the retailer’s URL that you actually thought you were visiting when you clicked the link.
  • Step 3: When the redirect tracker is loaded as a first party, the tracker will be able to access its cookies. It can associate information about which website you’re coming from (and where you’re headed) with identifiers stored in those cookies. If a lot of websites redirect through this tracker, the tracker can effectively track you across the web.
  • Step 4: After it finishes saving its tracking data, it automatically redirects you to the original destination.
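The steps above often look like an ordinary link that wraps the real destination in a query parameter. A hypothetical example of how the intended destination rides along (tracker.example, the path, and the parameter name are all made up):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect-tracker link: clicking it loads the tracker as a
# first party (Step 2-3), which logs the visit, then forwards you to "dest".
link = "https://tracker.example/bounce?dest=https%3A%2F%2Fshoes.example%2Fproduct%2F42"

query = parse_qs(urlparse(link).query)
destination = query["dest"][0]
print(destination)  # the retailer you thought you were navigating to
```

The hover preview on the review site can show the retailer's URL while the click handler swaps in a wrapped link like this at the last moment.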

How does Firefox protect against redirect tracking?

Once every 24 hours ETP 2.0 will completely clear out any cookies and site data stored by known trackers. This prevents redirect trackers from being able to build a long-term profile of your activity.

When you first visit a redirect tracker it can store a unique identifier in its cookies. Any redirects to that tracker during the 24 hour window will be able to associate tracking data with that same identifying cookie. However, once ETP 2.0’s cookie clearing runs, the identifying cookies will be deleted from Firefox and you’ll look like a fresh user the next time you visit the tracker.

This only applies to known trackers; cookies from non-tracking sites are unaffected. Sometimes trackers do more than just track; trackers may also offer services you engage with, such as a search engine or social network. If Firefox cleared cookies for these services we’d end up logging you out of your email or social network every day. To prevent this, we provide a 45 day exception for any trackers that you’ve interacted with directly, so that you can continue to have a good experience on their websites. This means that the sites you visit and interact with regularly will continue to work as expected, while the invisible “redirect” trackers will have their storage regularly cleared. A detailed technical description of our protections is available on MDN.
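The clearing policy described above (purge a known tracker's storage on the daily sweep, unless you've interacted with it directly in the last 45 days) can be sketched as follows. This is a simplified model, not the actual Firefox code:

```python
from datetime import datetime, timedelta

INTERACTION_GRACE = timedelta(days=45)

def should_purge(host, known_trackers, last_interaction, now):
    """Run on ETP 2.0's daily sweep: clear cookies and site data for
    known trackers the user hasn't directly interacted with recently."""
    if host not in known_trackers:
        return False                   # non-tracking sites are untouched
    last = last_interaction.get(host)  # None if never interacted directly
    if last is not None and now - last <= INTERACTION_GRACE:
        return False                   # a service you use: stay logged in
    return True                        # invisible tracker: wipe its storage
```

A social network you visited ten days ago keeps its cookies; a redirect tracker you only ever bounced through looks like a fresh user after the next sweep.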

ETP 2.0 is an upgrade to our suite of default-on tracking protections. Expect to see us continue to iterate on our protections to ensure you stay protected while using Firefox.

The post Firefox 79 includes protections against redirect tracking appeared first on Mozilla Security Blog.

The Mozilla Blog: Fast Company Recognizes Katharina Borchert as one of the Most Creative Business People

Tuesday 4th of August 2020 11:59:29 AM

We are proud to share that Katharina Borchert, Mozilla’s Chief Open Innovation Officer, has been named one of the Most Creative People by Fast Company. The award recognizes her leadership on Common Voice and helping to collect and diversify open speech data to build and train voice-enabled applications. Katharina was recognized not just for a groundbreaking idea, but because her work is having a measurable impact in the world.

Among the 74 people receiving this award are leaders such as Kade Crockford of the American Civil Liberties Union of Massachusetts, for work leading to the ban on face surveillance in Boston, and Stina Ehrensvärd, CEO of Yubico, for building WebAuthn, a heightened set of security protocols, in collaboration with Google, Mozilla and Microsoft. The full list also includes vintner Krista Scruggs, dancer and choreographer Twyla Tharp, and Ryan Reynolds, “for delivering an honest message, even when it’s difficult”.

“This is a real honor,” said Katharina, “which also reflects the contributions of an incredible alliance of people at Mozilla and beyond. We have a way to go before the full promise of Common Voice is realized. But I’m incredibly inspired by the different communities globally building it together with Mozilla, because language is so important for our identities and for keeping cultural diversity alive in the digital age. Extending the reach of voice recognition to more languages can only open the doors to more innovation and make tech more inclusive.”

Common Voice is Mozilla’s global crowdsourcing initiative to build multilingual open voice datasets that help teach machines how real people speak. Since 2017, we’ve made unparalleled progress in terms of language representation. There’s no comparable initiative, nor any open dataset, that includes as many (also under-resourced) languages. This makes it the largest multilingual public domain voice dataset. In June this year we released an updated edition with more than 7,200 total hours of contributed voice data in 54 languages, including English, German, Spanish, and Mandarin Chinese (Traditional), but also, Welsh, Kabyle, and Kinyarwanda.

The growing Common Voice dataset is unique not only in its size and licence model, but also in its diversity. It is powered by a global community of voice contributors, who want to help build inclusive voice technologies in their own languages, and allow for local value creation.

This is the second award for Mozilla from Fast Company in as many years, and the second time Common Voice has been recognized, after it was honored as a finalist in the experimental category in the Innovation by Design Awards in 2018. To keep up with future developments in Common Voice, follow the project on our Discourse forum.

(Photo Credit: Nick Leoni Photography)

The post Fast Company Recognizes Katharina Borchert as one of the Most Creative Business People appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 350

Tuesday 4th of August 2020 04:00:00 AM

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Check out this week's This Week in Rust Podcast

Updates from Rust Community

Official · Tooling · Newsletters · Observations/Thoughts · Learn Standard Rust · Learn More Rust · Project Updates · Miscellaneous

Crate of the Week

This week's crate is partial-io, a set of helpers to test partial, interrupted and would-block I/O operations.

Thanks to Kornel for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

326 pull requests were merged in the last week

Rust Compiler Performance Triage
  • 2020-08-03. 8 regressions, 2 improvements, 1 of them on rollups. 1 outstanding nag from last week.
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in the final comment period.

Tracking Issues & PRs

No Tracking Issues or PRs are currently in the final comment period.

New RFCs Upcoming Events Online North America Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Empowering is the perfect word to describe Rust in 2020. What used to be a rough adventure with many pitfalls has turned into something beautiful, something that can lift your spirit. At least, that’s what it did for me.

Thanks to Henrik Tougaard for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Firefox Frontier: Moth wants you to design a Firefox Theme for San Francisco Shock

Monday 3rd of August 2020 08:45:26 PM

This summer we partnered with Overwatch League’s San Francisco Shock to help the fans at home cheer on their 2019 Grand Finals Champions. This included Firefox Protection Plays and giving … Read more

The post Moth wants you to design a Firefox Theme for San Francisco Shock appeared first on The Firefox Frontier.

The Rust Programming Language Blog: Announcing Rust 1.45.2

Monday 3rd of August 2020 12:00:00 AM

The Rust team is announcing a new version of Rust, 1.45.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.45.2 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.45.2 on GitHub.

What's in 1.45.2 stable

1.45.2 contains two fixes, one to 1.45.1 and the other to 1.45.0.

#[track_caller] on trait objects

Trait objects with methods annotated with #[track_caller] would be miscompiled. #[track_caller] is not yet stable on 1.45. However, the standard library makes use of this on some traits for better error messages. Trait objects of SliceIndex, Index, and IndexMut were affected by this bug.

Tuple patterns binding .. to an identifier

In 1.45.1, we backported a fix for #74539, but this fix turned out to be incorrect, causing other unrelated breakage. As such, this release reverts that fix.

Contributors to 1.45.2

Many people came together to create Rust 1.45.2. We couldn't have done it without all of you. Thanks!

More in Tux Machines

Stable Kernels: 5.7.14, 5.4.57, 4.19.138, and 4.14.193

  • Linux 5.7.14
    I'm announcing the release of the 5.7.14 kernel. All users of the 5.7 kernel series must upgrade. The updated 5.7.y git tree can be found at: git:// linux-5.7.y and can be browsed at the normal git web browser:

  • Linux 5.4.57
  • Linux 4.19.138
  • Linux 4.14.193

Ubuntu Kylin Point Release Boosts Desktop Performance by 46%

More than 418 updates, tweaks, and other improvements have been made to the uniquely styled desktop environment and distro since the release of Ubuntu Kylin 20.04 back in April. And, as with the Ubuntu 20.04 point release, Ubuntu Kylin’s refreshed installer image comes with all of those enhancements wrapped up, ready to go, out of the box — no lengthy post-install upgrades required. Read more

Open source is more than code: Developing Red Hat Satellite documentation upstream

The code base for Satellite begins upstream and moves downstream. Until recently, the Satellite documentation did not follow the same journey. In this post, I will outline what has been happening with Satellite documentation over the last year and how this benefits both the Foreman community and Red Hat Satellite users. The Foreman and Katello projects are the upstreams of Red Hat Satellite. The discussions and contributions that take place in the vibrant upstream community help shape the Red Hat Satellite code base. Red Hat’s open source and community strategy has made Red Hat Satellite a robust and flexible product that can manage complex management workflows. Read more

Android Mirroring App ‘Scrcpy’ Improves Shortcuts, Clipboard Support

Scrcpy v1.15 picks up the ability to forward the ctrl and shift keys to your handset. Why is that useful? Because it means you can now use familiar keyboard shortcuts in apps on your device that support them, e.g., ctrl + t to open a new tab in a browser. This nifty addition can also pass ctrl + c and ctrl + v to Termux, if you use it. It also makes text selection easier with shift + → and similar. With the ctrl key now in use for shortcuts, Scrcpy uses the left alt or left super key as its shortcut modifier. Don’t like this? It can be changed. Read more