Chromium Blog

We're constantly working to improve your browsing experience. To help you cut through the noise and reduce notification overload, we're launching a new feature that automatically revokes notification permissions for sites you haven't interacted with recently. Chrome's Safety Check already does this for other permissions such as camera and location. The feature will launch in Chrome on Android and desktop.

Data indicates that users frequently receive a high volume of notifications, resulting in minimal engagement and high disruption. Less than 1% of all notifications receive any interaction from users.

But notifications can be genuinely valuable and helpful. Therefore, this feature will only revoke permissions for sites when there is very low user engagement and a high volume of notifications being sent. This feature does not revoke notifications for any installed web apps.

Chrome will inform you when notification permissions are removed. If you prefer to keep getting notifications from a particular website, you can easily re-grant the permission at any time through Safety Check, or by visiting the site and enabling notifications again. You can also choose to turn off the auto-revocation feature entirely.

We've already been testing this feature. Our test results show a significant reduction in notification overload with only a minimal change in total notification clicks. Our experiments also indicate that websites that send a lower volume of notifications are actually seeing an increase in clicks.

This launch is part of our ongoing commitment to user safety, privacy, and control. We believe this change will lead to a cleaner, more focused browsing experience, and we’ll continue to invest in ways to help you manage your online interactions and reduce distractions, so you can make the most of your time online.

Today's The Fast and the Curious post covers the launch of Skia's new rasterization backend, Graphite, in Chrome on Apple Silicon Macs. Graphite is instrumental in helping Chrome achieve exceptional scores on MotionMark 1.3 and is key to unlocking a ton of future improvements in Chrome Graphics.

A brief history of Skia in Chrome

In Chrome, Skia is used to render paint commands from Blink and the browser UI into pixels on your screen, a process called rasterization. Skia has powered Chrome Graphics since the very beginning. Skia eventually ran into performance issues as the web evolved and became more complex, which led Chrome and Skia to invest in a GPU accelerated rasterization backend called Ganesh.

Over the years, Ganesh matured into a solid, highly performant rasterization backend, and GPU rasterization launched on all platforms in Chrome on top of GL (via ANGLE on Windows D3D9/11). However, Ganesh always had a GL-centric design with too many specialized code paths, and the team was hitting a wall when trying to implement optimizations that took advantage of modern graphics APIs in a principled manner.

This set the stage for the team to rethink GPU rasterization from the ground up in the form of a new rasterization backend, Graphite. Graphite was developed from the start to be principled by having fewer and more comprehensible code paths. This forward-looking design helps take advantage of modern graphics APIs like Metal, Vulkan, and D3D12 and paradigms like compute-based path rasterization, and is multithreaded by default.

Results

With Graphite in Chrome, we increased our MotionMark 1.3 scores by almost 15% on a MacBook Pro M3. At the same time, we improved real-world metrics like INP (Interaction to Next Paint), LCP (Largest Contentful Paint), graphics smoothness (percent dropped frames), GPU process malloc memory usage, and others. This all means substantially smoother interactions, less stutter when scrolling, and less time waiting for sites to show.

Differences between Graphite and Ganesh

Modern graphics APIs

Ganesh was originally implemented on OpenGL ES, which had minimal support for multi-threading or GPU capabilities like compute shaders. Since then, modern graphics APIs like Vulkan, Metal and D3D12 have evolved to take advantage of multithreading and expose new GPU capabilities. They allow applications to have much more control over when and how expensive work such as allocating GPU resources is performed and scheduled, while utilizing both the CPU and the GPU effectively.

While we were able to adapt Ganesh to support modern graphics APIs, it had accumulated enough technical debt that it became hard to fully take advantage of the multi-threading and GPU compute capabilities of modern graphics APIs.

For Graphite in Chrome, we chose to use Chrome's WebGPU implementation, Dawn, as the abstraction layer for platform-native graphics APIs like Metal, Vulkan and D3D. Dawn provides a baseline for capabilities common in modern graphics APIs and helps us reduce the long-term maintenance burden by leveraging Dawn's mature, well-tested native backends instead of implementing them from scratch for Graphite.

2D depth(?!) testing

A core part of the GPU rendering pipeline is depth testing, which can reduce or eliminate overdraw by drawing opaque objects in front to back order, followed by translucent objects back to front. In graphics, "overdraw" refers to the unnecessary rendering of the same pixels multiple times, which can negatively impact performance and battery life, especially on mobile devices.

Ganesh never utilized the depth-testing capabilities of graphics cards, which were admittedly intended for rendering 3D content rather than accelerating 2D graphics. As a result, Ganesh suffers from overdraw because it adheres to strict painter's order when drawing both opaque and translucent objects.

Graphite extends Skia’s GPU rendering to take advantage of the depth test by assigning each “draw” a z value defining its painter’s ordering index. While transparent effects and images must still be drawn from back to front, opaque objects in the foreground can now automatically eliminate overdraw. This means opaque draws can be re-ordered to minimize expensive GPU state changes while relying on the depth buffer to produce correct output.
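As a rough illustration of this reordering, here is a minimal, self-contained sketch (not Graphite's actual code; the types and pipeline keys are hypothetical): opaque draws carry a painter's-order z index, so they can be sorted by pipeline to batch GPU state changes while the depth test reconstructs correct occlusion, and translucent draws keep strict back-to-front order.

    // Illustrative sketch only, not Graphite source. Each draw records the z
    // value it will write to the depth buffer, which frees opaque draws to be
    // grouped by pipeline instead of drawn strictly in painter's order.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Draw {
      int z;            // painter's ordering index, written to the depth buffer
      int pipeline_id;  // hypothetical key for the GPU pipeline this draw needs
      bool opaque;
    };

    int main() {
      std::vector<Draw> draws = {
          {0, 2, true}, {1, 1, true}, {2, 2, true}, {3, 1, false}};
      std::stable_sort(draws.begin(), draws.end(),
                       [](const Draw& a, const Draw& b) {
        if (a.opaque != b.opaque) return a.opaque;  // opaque group first
        if (!a.opaque) return a.z < b.z;            // translucent: back-to-front
        return a.pipeline_id < b.pipeline_id;       // opaque: batch by pipeline
      });
      for (const Draw& d : draws)
        std::printf("pipeline=%d z=%d opaque=%d\n", d.pipeline_id, d.z, d.opaque);
    }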

Depth testing is also used to implement clipping in Graphite by treating clip shapes as depth only draws as opposed to maintaining a clip stack like in Ganesh. Besides reducing algorithmic complexity, a significant benefit to this approach is that the shader program required to render a “draw” does not also depend on the state of the clip stack.

Left: a frame from MotionMark Suits. Right: the depth buffer for the same frame.

Multithreading

Chromium is a complex multi-process application, with renderer processes issuing commands to a shared GPU process that is responsible for actually displaying everything: every webpage, every tab, and even the browser UI. The GPU process main thread is the primary driver of all rendering work and is where all GPU commands are issued.

Due to the single-threaded nature of Ganesh and OpenGL, only a limited set of work could be moved to other threads. This made it easy to overload the main thread, causing increased jank and latency and ultimately hurting the user experience.

In contrast, Graphite's API is designed to take advantage of multithreading capabilities of modern graphics APIs. Graphite’s new core API is centered around independent Recorders that can produce Recordings on multiple threads, with minimal need to synchronize between them. Even though the Recordings are submitted to the GPU on the main thread, more expensive work is moved to other threads when producing Recordings, keeping the GPU main thread free.
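The pattern looks roughly like the following sketch (the Recorder and Recording types here are simplified stand-ins, not Skia's actual skgpu::graphite classes): each thread owns a private Recorder, so recording requires no locking, and only the cheap submission step remains on the main thread.

    // Simplified stand-in for the Recorder/Recording pattern; these types are
    // illustrative, not Skia's skgpu::graphite API. Expensive recording work
    // happens on worker threads; only submission stays on the main thread.
    #include <cstdio>
    #include <future>
    #include <string>
    #include <utility>
    #include <vector>

    struct Recording {  // immutable result of recording paint commands
      std::string debug_name;
    };

    struct Recorder {  // one per thread, so no cross-thread synchronization
      Recording Snap(std::string name) { return Recording{std::move(name)}; }
    };

    int main() {
      auto record = [](std::string name) {
        Recorder recorder;                      // thread-local recorder
        return recorder.Snap(std::move(name));  // the expensive part
      };
      std::vector<std::future<Recording>> jobs;
      jobs.push_back(std::async(std::launch::async, record, std::string("content tiles")));
      jobs.push_back(std::async(std::launch::async, record, std::string("canvas 2d")));

      // Main thread: cheap, ordered submission of finished recordings.
      for (auto& job : jobs)
        std::printf("submit recording: %s\n", job.get().debug_name.c_str());
    }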

Performance cliffs and pipeline compilation

When Ganesh was initially implemented, the programmable capabilities of graphics cards were quite limited, and branching in particular was expensive. To work around this, Ganesh had many specialized shader pipelines to handle common cases. These specializations are hard to predict and depend on a large number of factors related to each individual draw, leading to an explosion of different pipelines for essentially the same page content. Since each of these pipelines must be compiled, this approach works poorly for modern web content, where effects and animations can trigger new pipelines at any moment, causing noticeable jank.

Graphite’s design philosophy is instead to consolidate the number of rendering pipelines as much as possible while still preserving performance. This reduces the number of pipelines that have to be compiled, and makes it possible for Chrome to ensure they are compiled at startup so they do not interrupt active browsing. Ganesh’s specialization approach also led to surprising performance cliffs. For example, while it could handle simple cases, real page content was often a complex mix. By consolidating pipelines, complex content can be rendered as effectively as simple content.
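To see why specialization explodes, consider this toy model (the feature bits and counts are hypothetical): if each per-draw property selects a different shader variant, N boolean properties can demand up to 2^N compiled pipelines, whereas a consolidated pipeline branches on uniform data and keeps the compile set small enough to build at startup.

    // Toy model of pipeline-count explosion; the feature bits are made up.
    // N boolean draw properties => up to 2^N specialized pipelines, each a
    // separate compile that can strike mid-animation. A consolidated design
    // branches on uniforms inside a handful of pipelines instead.
    #include <cstdio>
    #include <set>

    int main() {
      constexpr int kFeatureBits = 10;  // e.g. gradient? image? blur? ...
      std::set<unsigned> specialized;
      for (unsigned key = 0; key < (1u << kFeatureBits); ++key)
        specialized.insert(key);  // worst case: one pipeline per combination
      std::printf("specialized worst case: %zu pipelines to compile\n",
                  specialized.size());  // 1024
      std::printf("consolidated: a few pipelines branching on %d uniform flags, "
                  "all compilable at startup\n", kFeatureBits);
    }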

Future Plans

Multithreaded Rasterization

Currently, Graphite is integrated into Chromium using two Recorders: one handles web content tiles and Canvas2D on the main thread, while the other is for compositing. In the future, this model will open up a number of exciting possibilities to further improve Chrome’s performance. Instead of saturating the main GPU thread with the tasks from each renderer process, rasterization can be forked across multiple threads.

[Diagrams: the current two-Recorder model versus the future model with rasterization forked across multiple threads]

Reducing GPU memory for simple content

Graphite recordings can also be re-issued to the GPU with certain dynamic changes, such as translation. This can be used to accelerate scrolling while eliminating the unnecessary work of re-recording rendering commands, and it lets us automatically reduce the amount of GPU memory required to cache web content as tiles. If the content is simple enough, the cost of just re-rendering it each frame is close enough to that of drawing a cached image that skipping the tile allocation entirely can be worth it.

GPU Compute Path Rasterization

In the landscape of 2D graphics rendering, GPU compute-based path rasterization is very much in vogue, with recent implementations like Pathfinder and Vello. We would like to implement these ideas in Skia, possibly using a hybrid approach. Currently, Graphite relies on MSAA where it can, but in many cases we can't, due to poor performance on older integrated GPUs or high memory overhead on non-tiling GPUs, and we have to fall back to CPU path rasterization using an atlas for caching. GPU compute-based path rasterization would allow us to improve on both the visual quality of MSAA, which is often limited to 4 samples per pixel, and the performance of CPU rasterization.

These are future directions the Chrome Graphics team plans to pursue, and we are excited to see how far we can move the needle.

Update (6/10/2025): This blog was updated to reflect that testing was done using the Speedometer 3.1 benchmark, and resulted in a 22% performance improvement. The previous version incorrectly noted that the performance improvement was 10% and that the benchmark was Speedometer 3.

Performance has always been one of the core pillars of Chrome and it’s something we’ve never stopped investing in. Publicly available and open benchmarks, which we create in open collaboration with other browsers, are useful tools for tracking our overall progress, understanding new areas of improvement, and validating potential optimizations. In today’s The Fast and the Curious post, we’d like to go through Chrome’s recent work that enabled it to achieve the highest score ever on the Speedometer benchmark.

For Speedometer, these optimizations have resulted in a 22% improvement since August 2024. That 22% improvement leads to better browser experiences, higher conversions for businesses, and deeper enjoyment of what the web has to offer. If each Chrome user used Chrome for just 10 minutes a day, these improvements would collectively save 116 million hours, or roughly 166 lifetimes, of waiting around for websites to load and do things.

Speedometer 3.1 score measured on an Apple MacBook Pro M4 with macOS 15

Speedometer is a benchmark created in open collaboration with other browsers and measures web application responsiveness through workloads that cover a large variety of different areas of the Blink rendering engine used in Chrome:

  • HTML parsing
  • JavaScript and JSON processing
  • JavaScript and Document Object Model (DOM) interaction
  • DOM manipulations (element insertion and removal)
  • Text size computation (font shaping)
  • Cascading Style Sheet (CSS) application and layout calculation
  • Pixel rendering

In essence, Speedometer tests critical components of the entire rendering pipeline. For a deeper dive into these individual parts, we recommend the presentation Life of a Script at Chrome University.

Achieving exceptional web performance requires a multifaceted approach, and optimizing for Speedometer is a testament to overall product excellence. Over the past year, our team has focused on refining fundamental rendering paths across the entire stack. Here are some notable optimization examples.

The team heavily optimized memory layouts of many internal data structures across DOM, CSS, layout, and painting components. Blink now avoids a lot of useless churn on system memory by keeping state where it belongs with respect to access patterns, maximizing utilization of CPU caches. Where internal memory was already relying on garbage collection in Oilpan, e.g. DOM, the usage was expanded by converting types from using malloc to Oilpan. This generally speeds up the affected areas as it packs memory nicely in Oilpan’s backend.
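Oilpan is exposed through V8's public cppgc library. As a rough sketch of what converting a type from malloc to Oilpan looks like (assuming the public cppgc headers; treat the API details as approximate, and note the allocation call needs a real cppgc heap):

    // Sketch of a heap-allocated type converted from malloc/new to Oilpan,
    // V8's cppgc garbage collector. Assumes V8's public cppgc headers;
    // treat API details as approximate.
    #include "cppgc/allocation.h"
    #include "cppgc/garbage-collected.h"
    #include "cppgc/member.h"
    #include "cppgc/visitor.h"

    class Child : public cppgc::GarbageCollected<Child> {
     public:
      void Trace(cppgc::Visitor*) const {}  // nothing to trace
    };

    class Node : public cppgc::GarbageCollected<Node> {
     public:
      // Member<> replaces raw owning pointers; the collector discovers the
      // edge by calling Trace(), so no manual delete is ever needed.
      void Trace(cppgc::Visitor* visitor) const { visitor->Trace(child_); }

     private:
      cppgc::Member<Child> child_;
    };

    // Allocation goes through the Oilpan heap rather than new/malloc, which
    // is what lets it pack objects densely in its own backend:
    //   Node* node = cppgc::MakeGarbageCollected<Node>(heap->GetAllocationHandle());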

Strings in the renderer improved quite a bit over the last year by avoiding costly representations where possible and switching hashing to rapidhash. More generally, lots of data structures were equipped with better hashes, filters, and probing algorithms.

Where rendering becomes inherently expensive, e.g., for computing CSS styles across various elements, caches are now used much more effectively with better hit rates. At the same time we cache fewer things that are not relevant. Another area where rendering becomes expensive is font shaping; the team significantly improved Apple Advanced Typography font shaping performance which is relevant everywhere text is rendered.

Notifications in Chrome are a useful feature to keep up with updates from your favorite sites. However, we know that some notifications may be spammy or even deceptive. We've received reports of notifications diverting you to download suspicious software, tricking you into sharing personal information, or asking you to make purchases on potentially fraudulent online storefronts.

To defend against these threats, Chrome is launching warnings for unwanted notifications on Android. This new feature uses on-device machine learning to detect and warn you about potentially deceptive or spammy notifications, giving you an extra level of control over the information displayed on your device.

When a notification is flagged by Chrome, you’ll see the name of the site sending the notification, a message warning that the contents of the notification are potentially deceptive or spammy, and the option to either unsubscribe from the site or see the flagged content.

An example of a notification flagged as possible spam.

If you choose to see the notification, you will still have the option to unsubscribe, or you can choose to always allow notifications from that site and stop seeing warnings in the future.

What you see when viewing a flagged notification.

How It Works

Chrome uses a local, on-device machine learning model to analyze notification content. This model identifies notifications that are likely to be unwanted. The model is trained on the textual contents of the notification, like the title, body, and action button texts.

Notifications are end-to-end encrypted. The analysis of each message is done on-device, and notification contents are not sent to Google, to protect user privacy. Due to the sensitive nature of notification contents, the model was trained using synthetic data generated by the Gemini large language model (LLM). The training data was evaluated against real notifications that the Chrome security team collected by subscribing to a variety of websites; these notifications were then classified by human experts. To start, this feature is only available on Android, as the majority of notifications are sent to mobile devices, but we will evaluate expanding to other platforms in the future.

This feature is just one of many ways Chrome works to reduce the number of potentially harmful notifications you receive. Other ways Chrome protects against potentially harmful notifications include:

  • Revoking Notification Permissions from Abusive Sites: When Google Safe Browsing identifies a site engaging in abusive behavior, Chrome will automatically revoke the site's notification permissions. You can find a list of revoked notification permissions in Chrome Safety Check. Learn more about how Safety Check takes proactive steps to keep you safe here.

    In Safety Check you can review any notification permission revocations.

  • One Tap Unsubscribe on Android: You have the option to unsubscribe from notifications with one tap on any Chrome notification sent to an Android phone, whether the notification contents are benign or potentially harmful. Limiting notifications from sites you no longer want updates from can reduce the amount of data and battery life you use daily. If you ever want to review which sites can send you notifications, visit Chrome Settings -> Privacy and Security -> Site Settings -> Notifications.

Notification warnings are an important step in Chrome's ongoing commitment to user safety. The Chrome Security team, in partnership with Google Safe Browsing, continually monitors threats to our users in order to evolve our defenses against abusive activity across the web. Keep an eye on our blog for updates on how we are helping you stay one step ahead of online threats.

Since Google announced the Chromium project in 2008, we have been excited to build on the great foundations of open-source web browsers and contribute to the continued development of a rich web platform. Today, Chromium is used by hundreds of different projects globally, including big browsers like Chrome, home electronics from LG, application frameworks like Electron and even custom applications like Bloomberg terminals and SpaceX capsule control software.

In 2024, Google made over 100,000 commits to Chromium, accounting for ~94 percent of contributions. While we have no intention of reducing this investment, we continue to welcome others stepping up to invest more.

Google also continues to invest heavily in the shared infrastructure of the Open Source project to "keep the lights on", including having thousands of servers endlessly running millions of tests, responding to hundreds of incoming bugs per day, ensuring the important ones get fixed, and constantly investing in code health to keep the whole project maintainable. This work represents hundreds of millions of US dollars in annual investment just for maintenance costs before any new feature, innovation or other business priorities can be addressed.

Sustainable funding of critical open source infrastructure remains a hot industry-wide topic of discussion, and over the years we've heard from many companies and developers about how critical the Chromium project is to their work. They've also shared how they would like to support the continued health of the project, beyond direct engineering support.

Today Google is pleased to announce our partnership with The Linux Foundation and the launch of the Supporters of Chromium-based Browsers. The goal of this initiative is to foster a sustainable environment of open-source contributions towards the health of the Chromium ecosystem and financially support a community of developers who want to contribute to the project, encouraging widespread support and continued technological progress for Chromium embedders.

The Supporters of Chromium-based Browsers fund will be managed by the Linux Foundation, following their long-established practices for open governance, prioritizing transparency, inclusivity, and community-driven development. We're thrilled to have Meta, Microsoft, and Opera on board as the initial members to pledge their support.

We welcome this additional investment into Chromium’s commons and we’re looking forward to working with the other members of the Supporters of Chromium-based Browsers to ensure that it meets the needs of the wider Chromium community. At the same time, we remain committed to being the responsible steward of the Chromium project and to the massive investment necessary to keep Chromium working well for the entire web industry.

In October 2020, Chrome enabled HTTP/3 by default. HTTP/3 (RFC 9114) runs over IETF QUIC (RFC 9000). Default-enabling HTTP/3 in Chrome resulted in improved performance compared not only to HTTP/1 and HTTP/2, but also to Google QUIC. Benefits included reduced Google Search latency and fewer rebuffers for YouTube.

The journey to optimizing performance did not end when HTTP/3 was default enabled. Recent advancements include the implementation of the HTTP/3 ORIGIN frame (RFC 9412) and Server's Preferred Address (RFC 9000 Section 9.6). The former enhances connection coalescing, while the latter reduces a connection's round trip time (RTT). Both features have been enabled by default in M131, which was released to Stable on 11/19.

ORIGIN Frame

When a connection is established for a specific hostname, the server’s certificate typically contains numerous other hostnames for which the server is authoritative. However, a client cannot immediately send requests for those other hostnames on that connection without first performing a DNS lookup for the other hostname and verifying that the IP address of the connection matches the resolved address. This additional DNS resolution introduces latency and significantly reduces the likelihood of connection pooling due to potential IP mismatches. The metrics from Chrome indicate that nearly 20% of HTTP/3 connections would be unnecessary if not for this IP mismatch.

Creating a new connection, even with QUIC 0-RTT, is expensive in terms of latency, memory, and CPU usage. This is because:

  • DNS resolution adds latency unless cached locally in Chrome’s DNS cache.
  • Both client and server must send multiple packets to complete a QUIC handshake.
  • TLS necessitates CPU-intensive asymmetric cryptography on both ends.
  • The congestion controller begins in its default state, potentially leading to under- or over-sending.
  • 0-RTT might fail.
  • Non-safe requests aren't sent via 0-RTT.
  • More connections consume more memory.

Additionally, features like HTTP priorities (RFC 9218) are only effective if there are multiple simultaneous responses to send.

The HTTP/3 ORIGIN frame (RFC 9412) enables a server to indicate which domains it would like to pool onto a connection. Once the frame is received, it also indicates that domains not listed should not be pooled onto that connection, even if they appear in the certificate.
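A minimal sketch of the resulting pooling decision (the types and logic here are illustrative, not Chrome's actual network stack): with an ORIGIN frame, set membership replaces the DNS-resolve-and-match-IP step that so often defeats coalescing.

    // Illustrative pooling check; hypothetical types, not Chrome's net stack.
    #include <cstdio>
    #include <set>
    #include <string>

    struct Connection {
      std::set<std::string> cert_hostnames;  // names covered by the server cert
      std::set<std::string> origin_set;      // names advertised via ORIGIN frames
      bool received_origin_frame = false;
    };

    bool CanPool(const Connection& conn, const std::string& host) {
      if (conn.received_origin_frame)
        return conn.origin_set.count(host) > 0;  // no DNS lookup required
      // Without ORIGIN: the name must be in the certificate AND a fresh DNS
      // answer for `host` must match this connection's peer IP (omitted here),
      // which adds latency and frequently fails due to IP mismatches.
      return conn.cert_hostnames.count(host) > 0 /* && DNS IP matches */;
    }

    int main() {
      Connection conn;
      conn.cert_hostnames = {"a.example", "b.example"};
      conn.origin_set = {"a.example"};
      conn.received_origin_frame = true;
      std::printf("pool a.example? %d\n", CanPool(conn, "a.example"));  // 1
      std::printf("pool b.example? %d\n", CanPool(conn, "b.example"));  // 0
    }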

Server’s Preferred Address

In some cases, the initial server address to which the client connects is not the most efficient route. It might be behind an L4 load balancer, and connecting directly could increase stability. Particularly when using Anycast, it’s possible the server is distant from where traffic enters the network, creating a 3-legged path that increases the round trip time.

Once the handshake is confirmed, Server's Preferred Address allows a server to indicate that it would like the client to migrate to a different server IP. Though a QUIC connection is not bound to a single 4-tuple like TCP, this is the only type of migration in RFC 9000 where the server can change its address.

So far, only Google’s Media CDN has widely enabled advertising an alternative address, but we expect more servers to adopt it soon. Testing has shown that this migration is successful over 99% of the time in Chrome and reduces average RTT by 40-80%.

Today’s The Fast and the Curious post covers how Chrome achieved best-in-class Speedometer scores on mobile devices, resulting in faster and smoother web experiences for Android users.

Chrome has always been about speed. Whether it's loading pages quickly, running complex web apps smoothly, or delivering a seamless browsing experience, performance is at the heart of our browser. And we're always looking for ways to make Chrome even faster.

Over the last two years, we have been hard at work on a number of performance improvements for Android devices. We're excited to share some of the progress we've made.

Speedometer on Android

One of the key metrics we use to track Chrome's performance is the Speedometer benchmark. This benchmark is developed in collaboration with other major web browser engines and measures how quickly Chrome can complete interactions with web pages, including parsing/rendering HTML or CSS and running JavaScript.

Since the release of Chrome M112, we've seen a significant increase in Speedometer 2.1 scores on Android devices [1]. In fact, on many devices, scores more than doubled, with the newest Snapdragon® 8 Elite Mobile Platform setting new records for Speedometer performance on mobile devices! These huge accomplishments are a testament to the work not only of the Chrome and Android teams, but also our silicon and SoC partners.

Since Chrome M112, Speedometer 2.1 scores have more than doubled on many Android devices. [1]

How Did We Do It?

The improvements resulted from several changes, including:

  • Build optimizations: We've made a number of changes to the way Chrome is built, which has resulted in faster code execution tuned to modern premium Android devices and SoCs.
  • V8 and Blink improvements: Many improvements to the JavaScript engine (V8) and the rendering engine (Blink) have further boosted performance.
  • Scheduling, OS and SoCs: We worked closely with Android partners to optimize the way Chrome interacts with the operating system and its thread scheduling to make the best use of the silicon on the devices.

Let's take a closer look at each of these areas.

Build optimizations

The Android device ecosystem is very diverse. From entry-level phones to the newest premium ones, Chrome needs to run well on all devices. Up until last year, we shipped the same Chrome build to all these different Android devices. The memory and disk size constraints on entry-level devices resulted in Chrome having to prioritize a small binary size. Consequently, many modern build optimizations were out of reach, as they resulted in much larger binaries.

With M113, Chrome was finally able to ship a separate higher-performance build targeting premium Android devices via the Google Play Store. While we still ship a more binary-size-constrained build to other devices, this approach allowed us to land some of those modern optimizations into the new premium build:

  • By targeting 64-bit Arm instead of 32-bit Arm, we can make use of more efficient Arm instruction set features and larger 64-bit operations.
  • Since binary size is less relevant on premium devices with large disks and sufficient memory, we can now compile C++ code optimized for speed (-O2 / -O3) rather than size (-Oz).
  • Furthermore, we tweaked the inlining thresholds used by the compiler to enable more inlining in hot code (within and across modules), while updating the model and policy used by another compiler pass (MLGO) to reduce inlining in cold code.
  • We now also apply profile-guided optimization (PGO) techniques to the build to further improve the code layout and optimization level for hot code.
  • Finally, we improved cross-function code ordering by aligning Chrome's orderfile generation with the new 64-bit build. We also now include Speedometer 3, the latest version of the industry-standard browser speed benchmark, in the workloads used to generate the orderfile.

Together, these build optimizations account for more than half of the overall Speedometer score improvements. This progress was facilitated by our collaboration with Arm, who contributed valuable insights and improvements, including identifying and addressing inefficiencies in Chrome's PGO setup and inlining.

V8 and Blink improvements

Chrome continuously improves the performance of its JavaScript and web rendering engines, V8 and Blink. Most optimizations are small in individual impact, but stacked together, these improvements add up and contributed most of the remaining Speedometer impact! Notable ones include:

  • We now utilize an optimized fast-path HTML parser to parse markup assigned via innerHTML.
  • V8 launched its Sparkplug compiler tier, a super-fast baseline compiler that sits right above its Ignition interpreter and generates non-optimized code very quickly. Later, V8 also launched Maglev, a new mid-tier compiler that generates semi-optimized code. It takes longer to do so than Sparkplug, but much less time than TurboFan, V8's ultra-optimizing compiler tier. All together, this new tiering hierarchy allows V8 to tier up more gradually, improving both performance and power consumption (see the sketch after this list).
  • We tuned our heuristics that decide when garbage collection occurs, targeting times when the rendering engine is idle or when users navigate away from pages.
  • We landed many other incremental optimizations, e.g. to V8 and our parsing, style, layout, and text rendering engines.
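To make the tiering idea concrete, here is a toy simulation (the thresholds are invented for illustration; V8's real heuristics are far more nuanced): hotter functions are promoted through successive tiers, so compile effort scales with how hot the code actually is.

    // Toy tier-up model with invented thresholds; V8's actual heuristics are
    // far more sophisticated. The point: each tier costs more to compile but
    // runs faster, so promotion should track how hot a function really is.
    #include <cstdio>

    const char* TierFor(int call_count) {
      if (call_count < 10) return "Ignition (interpreter)";
      if (call_count < 1000) return "Sparkplug (baseline code)";
      if (call_count < 100000) return "Maglev (semi-optimized code)";
      return "TurboFan (fully optimized code)";
    }

    int main() {
      for (int calls : {1, 50, 5000, 500000})
        std::printf("%6d calls -> %s\n", calls, TierFor(calls));
    }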

Scheduling and OS

To achieve the best possible performance, Android partners invest heavily in tuning the operating system's thread scheduling and frequency scaling policies, as well as improving the performance of the silicon itself.

We worked closely with our partners to improve their tuning for Chrome and Speedometer. In particular, our collaboration with Qualcomm was very fruitful: By combining optimized scheduling policies with improved hardware performance, their newest Snapdragon 8 Elite mobile platform realized a 60-80% improvement in Speedometer 3.0 compared to its predecessor, resulting in class-leading web performance. This collaboration also highlighted important bottlenecks in Chrome's code, such as the need for improved PGO and opportunities in V8.

Speedometer 3.0 on Snapdragon 8 Gen 3 (left) compared to Snapdragon 8 Elite (right), Chrome M131

Why do these improvements matter?

Faster Speedometer scores translate to improvements in real user experiences with web content, such as faster page loads and interactions. Back at M112, loading a Google Docs document on Pixel Tablet took more than 50% longer than it does today; that's the effect of a doubled Speedometer score!

Chrome M112 vs. M129 on Pixel Tablet, loading a Google Doc (frame count)

[1] Speedometer 3 was released during M122, so results from Speedometer 2.1 are provided for a full picture. Measurements shown in graphs were taken on Pixel Tablet.

Last October, we introduced a new identity model on iOS (Chrome 118) and are excited to bring it to Android devices and Desktop soon. This model aligns closely with how you already use other Google apps and services.

When we first launched Chrome sync back in 2009, powered by the Google Account, our goal then, as it is today, was simple: help users access their bookmarks, passwords, tabs and more, across devices. At the time, this was best achieved by a sync model: synchronizing device data with your account and therefore requiring both sign-in and enabling sync.

Over the years, the digital world has changed and user expectations have evolved significantly. Cloud services rose to prominence around 2010, and over the past 15 years the concept of having a digital identity has become more prevalent, especially through smartphones and mobile apps. Today, users increasingly expect to simply sign in to get access to their stuff and sign out to keep it safe.

Given this evolution of technology and user norms, we're continuing to make progress on transforming our legacy sync model into one that more seamlessly meets the expectations users have today. From the moment you sign in to Chrome, you'll get access to your saved passwords, addresses, and other data from your Google Account. Where relevant, we'll offer you the choice to sign in to Chrome for a customized browsing experience on any device. For example, you can sign in and start to plan a trip on your phone during your commute, and then seamlessly finish it up on any device. Send tabs between your devices, find your bookmarks, and use autogenerated passwords with ease.

As always, you have control: we strive to provide an excellent browser experience regardless of whether you choose to sign in or not. Additionally, saving your history and open tabs to your account remains a separate opt-in after signing in to Chrome.

Stay tuned for updates on this change - already on iOS and coming to Android and Desktop soon.

ChromeOS will soon be developed on large portions of the Android stack to bring Google AI, innovations, and features faster to users.

Over the last 13 years, we’ve evolved ChromeOS to deliver a secure, fast, and feature-rich Chromebook experience for millions of students and teachers, families, gamers, and businesses all over the world. With our recent announcements around new features powered by Google AI and Gemini, Chromebooks now give us the opportunity to put powerful tools in the hands of more people to help with everyday tasks.

To continue rolling out new Google AI features to users at a faster and even larger scale, we’ll be embracing portions of the Android stack, like the Android Linux kernel and Android frameworks, as part of the foundation of ChromeOS. We already have a strong history of collaboration, with Android apps available on ChromeOS and the start of unifying our Bluetooth stacks as of ChromeOS 122.

Bringing the Android-based tech stack into ChromeOS will allow us to accelerate the pace of AI innovation at the core of ChromeOS, simplify engineering efforts, and help different devices like phones and accessories work better together with Chromebooks. At the same time, we will continue to deliver the unmatched security, consistent look and feel, and extensive management capabilities that ChromeOS users, enterprises, and schools love.

These improvements in the tech stack are starting now but won't be ready for consumers for quite some time. When they are, we'll provide a seamless transition to the updated experience. In the meantime, we remain extremely excited about our continued progress on ChromeOS, with no change to our regular software updates and new innovations.

Chromebooks will continue to deliver a great experience for our millions of customers, users, developers and partners worldwide. We’ve never been more excited about the future of ChromeOS.

Posted by Prajakta Gudadhe, Senior Director, Engineering, ChromeOS & Alexander Kuscher, Senior Director, Product Management, ChromeOS




Today's The Fast and the Curious post explores how Chrome achieved the highest score on the new Speedometer 3.0, an upgraded browser benchmarking tool for optimizing the performance of web applications. Try out Chrome today!

Speedometer 3.0 is a recently published benchmark for measuring browser performance that was created as an industry collaboration between companies like Google, Apple, Mozilla, Intel, and Microsoft. This benchmark helped us identify areas in which we could optimize Chrome to deliver a faster browser experience to all our users.

Here's a closer look at how we further optimized Chrome to achieve its highest score ever on Speedometer 3, by carefully tracking the browser's performance over time as the updated benchmark was being developed. Since the inception of Speedometer 3 in May 2022, we've driven a 72% increase in Chrome's Speedometer score, translating into performance gains for our users:



Optimizing workloads

By looking at the workloads in Speedometer and at the functions in which Chrome was spending the most time, we were able to make targeted optimizations that each drove an increase in Chrome's score. For example, the SpaceSplitString function is used heavily to turn space-separated strings such as those in class="foo bar" into a list representation; in this function we removed some unnecessary bounds checks. When we detect duplicated stylesheets, we now dedupe them and reference a single stylesheet instance. We reduced the cost of drawing paths and arcs by tuning memory allocations. When creating form editors, we eliminated some unnecessary processing that occurred when form elements were created. And within querySelector, we were able to detect which selectors are commonly used and create a hot path for those.

We previously shared how we optimized innerHTML using specialized fast paths for parsing, an implementation that also made its way into WebKit. Some workloads in Speedometer 3 use DOMParser so we extended the same optimization for another 1% gain.

We worked with the HarfBuzz maintainer to also optimize how Chrome renders AAT fonts, such as those used by Apple's macOS system fonts. Text starts as a processed stream of Unicode characters that is transformed into a glyph stream, which is then run through a state machine defined in the AAT font. The optimization allows us to determine more quickly whether glyphs actually participate in the rules for the state machine, leading to speed-ups when processing text using AAT.

Picking the right code to focus on

An important strategy for achieving high performance is tiering up code: picking the right code to further optimize within the engine. Intel contributed profile-guided tiering to V8, which remembers tiering decisions from the past such that if a function was stably tiered up in the past, we eagerly tier it up on future runs.

Improving garbage collection

Another area of changes that drove around 3% progression on Speedometer 3 was improvements around garbage collection. V8’s garbage collector has a long history of making use of renderer idle time to avoid interfering with actual application code. The recent changes follow this spirit by extending existing mechanisms to prefer garbage collection in idle time on otherwise very active renderers where possible. Specifically, DOM finalization code that is run on reclaiming objects is now also run in idle time. Previously, such operations would compete with regular application code over CPU resources. In addition, V8 now supports a much more compact layout for objects that wrap DOM elements, i.e., all objects that are exposed to JavaScript frameworks. The compact layout reduces memory pressure and results in less time spent on garbage collection.

Posted by Thomas Nattestad, Chrome Product Manager



On the Chrome team, we believe it’s not sufficient to be fast most of the time, we have to be fast all of the time. Today’s The Fast and the Curious post explores how we contributed to Core Web Vitals by surveying the field data of Chrome responding to user interactions across all websites, ultimately improving performance of the web.

As billions of people turn to the web to get things done every day, the browser becomes responsible for hosting a multitude of apps at once, and resource contention becomes a challenge. The multi-process Chrome browser contends for multiple resources: CPU and memory, of course, but also its own queues of work between its internal services (in this article, the network service).

This is why we’ve been focused on identifying and fixing slow interactions from Chrome users’ field data, which is the authoritative source when it comes to real user experiences. We gather this field data by recording anonymized Perfetto traces on Chrome Canary, and report them using a privacy-preserving filter.

When looking at field data of slow interactions, one particular cause caught our attention: recurring synchronous calls to fetch the current site’s cookies from the network service.

Let’s dive into some history.

Cookies under an evolving web

Cookies have been part of the web platform since the very beginning. They are commonly created like this:

    document.cookie = "user=Alice;color=blue"

And later retrieved like this:

    // Assuming a `getCookie` helper method:
    getCookie("user", document.cookie)

Its implementation was simple in single-process browsers, which kept the cookie jar in memory.

Over time, browsers became multi-process, and the process hosting the cookie jar became responsible for answering more and more queries. Because the web spec requires JavaScript to fetch cookies synchronously, answering each document.cookie query is a blocking operation.

The operation itself is very fast, so this approach was generally fine, but under heavy load scenarios where multiple websites are requesting cookies (and other resources) from the network service, the queue of requests could get backed up.

We discovered through field traces of slow interactions that some websites were triggering inefficient scenarios with cookies being fetched multiple times in a row. We landed additional metrics to measure how often a GetCookieString() IPC was redundant (same value returned as last time) across all navigations. We were astonished to discover that 87% of cookie accesses were redundant and that, in some cases, this could happen hundreds of times per second.

The simple design of document.cookie was backfiring as JavaScript on the web was using it like a local value when it was really a remote lookup. Was this a classic computer science case of caching?! Not so fast!

The web spec allows collaborating domains to modify each other’s cookies. Hence, a simple cache per renderer process didn’t work, as it would have prevented writes from propagating between such sites (causing stale cookies and, for example, unsynchronized carts in ecommerce applications).

A new paradigm: Shared Memory Versioning

We solved this with a new paradigm which we called Shared Memory Versioning. The idea is that each value of document.cookie is now paired with a monotonically increasing version. Each renderer caches its last read of document.cookie alongside that version. The network service hosts the version of each document.cookie in shared memory. Renderers can thus tell whether they have the latest version without having to send an inter-process query to the network service.
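Here is a self-contained sketch of the idea (the shared region is simulated with a process-global atomic; in Chrome, real shared memory is mapped between the network service and renderers, and the cache and IPC details differ):

    // Self-contained sketch of Shared Memory Versioning. The shared region is
    // simulated with a global atomic; Chrome actually maps shared memory
    // between the network service and each renderer.
    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <string>

    std::atomic<uint64_t> g_shared_version{1};  // written by the "network service"
    std::string g_authoritative_cookie = "user=Alice";

    struct RendererCookieCache {
      uint64_t cached_version = 0;
      std::string cached_value;

      std::string Get() {
        uint64_t latest = g_shared_version.load(std::memory_order_acquire);
        if (latest != cached_version) {
          // Version moved on: pay for one "IPC" to refetch, then cache it.
          cached_value = g_authoritative_cookie;  // stands in for GetCookieString()
          cached_version = latest;
          std::puts("cache miss: fetched from network service");
        }
        return cached_value;  // all other reads are answered locally
      }
    };

    int main() {
      RendererCookieCache cache;
      cache.Get();  // miss: first read
      cache.Get();  // hit: no inter-process query
      // A cookie write anywhere bumps the version, invalidating every cache:
      g_authoritative_cookie = "user=Alice; color=blue";
      g_shared_version.fetch_add(1, std::memory_order_release);
      cache.Get();  // miss again: refetch once, then back to local reads
    }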



This reduced cookie-related inter-process messages by 80% and made document.cookie accesses 60% faster 🥳.

Hypothesis testing

Improving an algorithm is nice, but what we ultimately care about is whether that improvement results in improving slow interactions for users. In other words, we need to test the hypothesis that stalled cookie queries were a significant cause of slow interactions.

To achieve this, we used Chrome’s A/B testing framework to study the effect and determined that it, combined with other improvements to reduce resource contention, improved the slowest interactions by approximately 5% on all platforms. This further resulted in more websites passing Core Web Vitals 🥳. All of this adds up to a more seamless web for users.



Timeline of the weighted average of the slowest interactions across the web on Chrome as this was released to 1% (Nov), 50% (Dec), and then all users (Feb).

Onward to a seamless web!

By Gabriel Charette, Olivier Li Shing Tat-Dupuis, Carlos Caballero Grolimund, and François Doray, from the Chrome engineering team

Update (10/10/2024): We’ve started disabling extensions still using Manifest V2 in Chrome stable. Read more details in the MV2 support timeline documentation.

In November 2023, we shared a timeline for the phasing out of Manifest V2 extensions in Chrome. Based on the progress and feedback we’ve seen from the community, we’re now ready to roll out these changes as scheduled.

We’ve always been clear that the goal of Manifest V3 is to protect existing functionality while improving the security, privacy, performance and trustworthiness of the extension ecosystem as a whole. We appreciate the collaboration and feedback from the community that has allowed us - and continues to allow us - to constantly improve the extensions platform.

Addressing community feedback

We understand migrations of this magnitude can be challenging, which is why we've listened to developer feedback and spent years refining Manifest V3 to support the innovation happening across the extensions community. This included adding support for user scripts and introducing offscreen documents to allow extensions to use DOM APIs from a background context. Based on input from the extension community, we also increased the number of rulesets for declarativeNetRequest, allowing extensions to bundle up to 330,000 static rules and dynamically add a further 30,000. You can find more detail in our content filtering guide.

This month, we made the transition even easier for extensions using declarativeNetRequest with the launch of review skipping for safe rule updates. If the only changes are for safe modifications to an extension’s static rule list for declarativeNetRequest, Chrome will approve the update in minutes. Coupled with the launch of version roll back last month, developers now have greater control over how their updates are deployed.

Ecosystem progress

After we addressed the top issues and feature gaps blocking migration last year, we saw an acceleration of extensions migrating successfully to Manifest V3. Over the past year, we’ve even been able to invite some developers - such as Eyeo, the makers of Adblock Plus - and GDE members like Matt Frisbie to share their experiences and insights with the community through guest posts and YouTube videos.

Now, over 85% of actively maintained extensions in the Chrome Web Store are running Manifest V3, and the top content filtering extensions all have Manifest V3 versions available - with options for users of AdBlock, Adblock Plus, uBlock Origin and AdGuard.

What to expect next

Starting on June 3 on the Chrome Beta, Dev and Canary channels, if users still have Manifest V2 extensions installed, some will start to see a warning banner when visiting their extension management page - chrome://extensions - informing them that some (Manifest V2) extensions they have installed will soon no longer be supported. At the same time, extensions with the Featured badge that are still using Manifest V2 will lose their badge.

This will be followed gradually in the coming months by the disabling of those extensions. Users will be directed to the Chrome Web Store, where they will be recommended Manifest V3 alternatives for their disabled extension. For a short time after the extensions are disabled, users will still be able to turn their Manifest V2 extensions back on, but over time, this toggle will go away as well.

Like any big launch, all these changes will begin in pre-stable channel builds of Chrome first: Chrome Beta, Dev, and Canary. The changes will be rolled out over the coming months to Chrome Stable, with the goal of completing the transition by the beginning of next year. Enterprises using the ExtensionManifestV2Availability policy will be exempt from any browser changes until June 2025.

We’ve shared more information about the process in our recent Chrome extensions Google I/O talk. If you have any additional questions, don’t hesitate to reach out via the Chromium extensions mailing list.

In the latest release of Chrome, we're introducing Minimized Custom Tabs, a feature that allows users to effortlessly transition between native app and web content. With a simple tap on the down button in the Chrome Custom Tabs toolbar, users can minimize a Custom Tab into a compact, floating picture-in-picture window. This seamless integration enables multi-tasking across surfaces, enhancing the in-app web browsing experience. By tapping on the floating window, users can easily maximize the tab, restoring it to its original size.

Minimize a Chrome Custom Tab to interact with the background app


How to get started

Because this change happens at the browser level, developers who use Chrome Custom Tabs will see this change automatically applied starting with Chrome version M124. End users will see the Minimize icon in the Chrome Custom Tab toolbar.

Please note that this is a change in Chrome, and we hope other browsers will adopt similar functionality.

Posted by Victor Gallet, Senior Product Manager


Google and many other organizations, such as NIST, IETF, and NSA, believe that migrating to post-quantum cryptography is important due to the large risk posed by a cryptographically-relevant quantum computer (CRQC). In August, we posted about how Chrome Security is working to protect users from the risk of future quantum computers by leveraging a new form of hybrid post-quantum cryptographic key exchange, Kyber (ML-KEM). We're happy to announce that we have enabled the latest Kyber draft specification by default for TLS 1.3 and QUIC on all desktop Chrome platforms as of Chrome 124. This rollout revealed a number of previously-existing bugs in several TLS middlebox products. To assist with the deployment of fixes, Chrome is offering a temporary enterprise policy to opt out.

Launching opportunistic quantum-resistant key exchange is part of Google’s broader strategy to prioritize deploying post-quantum cryptography in systems today that are at risk if an adversary has access to a quantum computer in the future. We believe that it’s important to inform standards with real-world experience, by implementing drafts and iterating based on feedback from implementers and early adopters. This iterative approach was a key part of developing QUIC and TLS 1.3. It’s part of why we’re launching this draft version of Kyber, and it informs our future plans for post-quantum cryptography.

Chrome’s post-quantum strategy prioritizes quantum-resistant key exchange in HTTPS, and increased agility in certificates from the Web PKI. While PKI agility may appear somewhat unrelated, its absence has contributed to significant delays in past cryptographic transitions and will continue to do so until we find a viable solution in this space. A more agile Web PKI is required to enable a secure and reliable transition to post-quantum cryptography on the web.

To understand this, let’s take a look at HTTPS and the current state of post-quantum cryptography. In the context of HTTPS, cryptography is primarily used in three different ways:

  • Symmetric Encryption/Decryption. HTTP is transmitted as data inside a TLS connection using an authenticated cipher (AEAD) such as AES-GCM. These algorithms are broadly considered safe against quantum cryptanalysis and can remain in place.
  • Key Exchange. Symmetric cryptography requires a secret key. Key exchange is a form of asymmetric cryptography in which two parties can mutually generate a shared secret key over a public channel. This secret key can then be used for symmetric encryption and decryption. All current forms of asymmetric key exchange standardized for use in TLS are vulnerable to quantum cryptanalysis.
  • Authentication. In HTTPS, authentication is achieved primarily through the use of digital signatures, which are used to convey server identity, handshake authentication, and transparency for certificate issuance. All of the digital signature and public key algorithms standardized for authentication in TLS are vulnerable to quantum cryptanalysis.

This results in two separate quantum threats to HTTPS.

The first is the threat to traffic being generated today. An adversary could store encrypted traffic now, wait for a CRQC to be practical, and then use it to decrypt the traffic after the fact. This is commonly known as a store-now-decrypt-later attack. This threat is relatively urgent, since it doesn’t matter when a CRQC is practical—the threat comes from storing encrypted data now. Defending against this attack requires the key exchange to be quantum resistant. Launching Kyber in Chrome enables servers to mitigate store-now-decrypt-later attacks.

The second threat is that future traffic is vulnerable to impersonation by a quantum computer. Once a CRQC actually exists, it could be used to break the asymmetric cryptography used for authentication in HTTPS. To defend against impersonation from a CRQC, we need to migrate all of the asymmetric cryptography used for authentication to post-quantum variants. However, breaking authentication only affects traffic generated after the availability of CRQCs. This is because breaking authentication on a recorded transcript doesn’t help the attacker impersonate either party—the conversation has already finished.

In other words, there’s no store-now-decrypt-later equivalent for authentication, so while migrating key exchange and authentication to post-quantum variants are both important, migrating authentication is less urgent than migrating key exchange. This is good, because there are a variety of challenges in migrating to post-quantum authentication, most notably size.

Post-quantum cryptography is big compared to the pre-quantum cryptographic algorithms used in HTTPS. A Kyber key exchange is ~1KB transmitted per peer, whereas an X25519 key exchange is only 32 bytes per peer, an over 30x increase. The actual key exchange operation in Kyber is quite fast; transmitting Kyber keys is quite slow. The extra size from Kyber causes the TLS ClientHello to be split into two packets, resulting in a 4% median latency increase to all TLS handshakes in Chrome on desktop. On desktop platforms, combined with HTTP/2 and HTTP/3 connection reuse, this increase is not large enough to be noticeable in Core Web Vitals. Unfortunately, it is noticeable on Android, where Internet connections are often lower bandwidth and higher latency, so we have not yet launched there.
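
Some back-of-the-envelope arithmetic shows why the ClientHello spills into a second packet. The key sizes below are the standard X25519 and ML-KEM-768 parameter sizes, while the per-packet budget and the size of the rest of the ClientHello are rough assumptions rather than exact TLS framing:

    # Back-of-the-envelope: why the hybrid key share overflows one packet.
    # Key sizes are the standard X25519 / ML-KEM-768 parameter sizes; the
    # packet budget and "rest of the ClientHello" are rough assumptions.
    X25519_SHARE = 32            # bytes per peer
    MLKEM768_PUBLIC_KEY = 1184   # client's encapsulation key, in bytes
    MLKEM768_CIPHERTEXT = 1088   # server's response, in bytes

    hybrid_client_share = X25519_SHARE + MLKEM768_PUBLIC_KEY
    print(hybrid_client_share)                  # 1216 bytes vs. 32 before
    print(hybrid_client_share // X25519_SHARE)  # a 38x increase

    APPROX_PACKET_PAYLOAD = 1400   # assumed usable payload per packet
    OTHER_CLIENTHELLO_BYTES = 300  # assumed: extensions, cipher lists, etc.
    total_hello = hybrid_client_share + OTHER_CLIENTHELLO_BYTES
    print(total_hello > APPROX_PACKET_PAYLOAD)  # True: two packets needed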

The size issues are even worse for authentication. ML-DSA (Dilithium) keys and signatures are ~40x the size of ECDSA keys and signatures. A typical TLS connection today uses two public keys and five signatures to fulfill all of the authentication requirements. A naive swap to ML-DSA would add ~14KB to the TLS handshake. Cloudflare anticipates this would increase latency by 20-40%, and we’ve seen that a single extra kilobyte was already impactful. Instead, we need alternate approaches to authentication in HTTPS that provide the desired properties while transmitting fewer signatures and public keys.
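
The ~14KB figure follows from simple arithmetic. The sketch below uses ML-DSA-44 parameter sizes and typical ECDSA P-256 sizes, so treat it as an estimate rather than an exact handshake byte count:

    # Estimate of the naive ML-DSA swap, using ML-DSA-44 parameter sizes
    # and typical ECDSA P-256 sizes. An approximation, not an exact
    # accounting of handshake bytes.
    ECDSA_KEY, ECDSA_SIG = 65, 72        # typical P-256 sizes, in bytes
    MLDSA_KEY, MLDSA_SIG = 1312, 2420    # ML-DSA-44 sizes, in bytes

    KEYS, SIGS = 2, 5   # public keys and signatures in a typical connection

    ecdsa_total = KEYS * ECDSA_KEY + SIGS * ECDSA_SIG   # ~0.5 KB
    mldsa_total = KEYS * MLDSA_KEY + SIGS * MLDSA_SIG   # ~14.7 KB

    print(mldsa_total - ecdsa_total)  # ~14 KB of extra handshake data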

We think the important next step for quantum-resistant authentication in HTTPS is to focus on enabling trust anchor agility. Historically, the public Web PKI could not deploy new algorithms quickly, because most site operators provision a single certificate for all supported clients and browsers. This certificate must be issued from a trust hierarchy that is trusted by every browser or client the site operator supports, and it must be compatible with each of those clients.

The single certificate model makes it difficult for the Web PKI to evolve. As security requirements change, site operators may find that there is no longer an intersection between certificates trusted by deployed clients, certificates trusted by new clients, algorithms supported by deployed clients, and algorithms supported by new clients (all crossed with every separate browser and root store). These clients range from current browsers and older browser versions no longer receiving updates all the way to applications on smart TVs or payment terminals. As requirements diverge, site operators have to choose between security for new clients and compatibility with older clients.

This conflict, in turn, limits new clients making PKI changes to improve user security, such as transitioning to post-quantum. Under a single-certificate deployment model, the newest clients cannot diverge too far from the oldest clients, or server operators will be left with no way to maintain compatibility. We propose to solve this by moving to a multi-certificate deployment model, where servers may be provisioned with multiple certificates and automatically send the correct one to each client. This enables trust anchor agility and allows clients to evolve at different rates. Clients that are up to date could use the authentication mechanisms best suited to the Internet as it evolves, without being hamstrung by old clients that no longer receive updates. Certification authorities and trust stores could introduce new post-quantum trust anchors without needing to wait for the slowest actor to add support. This would drastically simplify the post-quantum transition, since it also enables the seamless addition and removal of hierarchies using experimental post-quantum authentication methods.
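
As a purely hypothetical sketch of this model, a server could hold several chains and serve whichever one anchors in a hierarchy the connecting client trusts. The trust anchor labels, file names, and selection logic below are invented for illustration; defining the real negotiation signal is precisely the open problem discussed in the rest of this post:

    # Hypothetical sketch of multi-certificate deployment: the server holds
    # several chains and serves whichever one the client can validate. The
    # anchor labels, file names, and signal are invented for illustration.
    CHAINS = {
        "legacy-rsa-root": "rsa_compat_chain.pem",      # oldest clients
        "modern-ecdsa-root": "ecdsa_chain.pem",         # current clients
        "experimental-pq-root": "pq_chain.pem",         # PQ early adopters
    }

    def select_chain(client_trusted_anchors: set[str]) -> str:
        """Return the newest chain the client can actually validate."""
        for anchor in ("experimental-pq-root", "modern-ecdsa-root",
                       "legacy-rsa-root"):
            if anchor in client_trusted_anchors:
                return CHAINS[anchor]
        raise ValueError("no mutually trusted anchor")

    # An up-to-date client and a legacy client get different chains, so new
    # hierarchies can be adopted without waiting for the slowest actor:
    print(select_chain({"modern-ecdsa-root", "experimental-pq-root"}))
    print(select_chain({"legacy-rsa-root"}))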

At first glance, TLS may appear to have trust anchor agility by way of cross-signatures and signature algorithm negotiation. However, neither of these mechanisms provides true trust anchor agility, nor was either intended to.

A cross-signature is when a CA creates two different certificates for a single subject and public key pair, but with different issuers and signatures. The first certificate is issued and signed as usual, by the CA itself. The second is issued and signed by a different trust hierarchy, often operated by a different organization. For example, the original Let’s Encrypt intermediate certificate existed in two forms [3]. The “regular” intermediate was signed by the Let’s Encrypt root, whereas the “cross-signed” intermediate was signed by IdenTrust. Cross-signing a new PKI hierarchy with an older, more broadly available one allows a new CA to bootstrap its trust on old devices, so long as those devices support the signing algorithm. Cross-signatures, however, rely on significant cooperation among often competing CAs, and may not be suitable when different clients have different needs. This limits when site operators can use cross-signatures. Additionally, devices that do not support a new algorithm still need updates before they can validate certificates that use it, whether or not those certificates are cross-signed.

Signature algorithm negotiation allows TLS peers to agree on the algorithm to be used for the handshake signature. This algorithm needs to correspond to the key type used in the certificate. Endpoints can infer that if the peer supports an algorithm such as ECDSA for the handshake signature, it must also support ECDSA certificates. Servers can use this signal to multiplex between an RSA-based chain and a smaller ECDSA-based chain. For example, Google’s RSA-based large-compatibility chain is four certificates and ~4.1KB, whereas the shortest ECDSA-based chain is three certificates and only ~1.7KB [4].
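
That multiplexing looks roughly like the following sketch. The chain file names are invented, "ecdsa_secp256r1_sha256" is the TLS 1.3 name for ECDSA with P-256, and real servers make this decision inside the TLS stack based on the ClientHello's signature_algorithms extension:

    # Rough sketch of multiplexing on signature algorithm support. The
    # chain file names are invented; real servers decide inside the TLS
    # stack based on the ClientHello's signature_algorithms extension.
    RSA_CHAIN = "rsa_chain.pem"      # 4 certificates, ~4.1 KB
    ECDSA_CHAIN = "ecdsa_chain.pem"  # 3 certificates, ~1.7 KB

    def pick_chain(signature_algorithms: set[str]) -> str:
        # If the client can verify ECDSA handshake signatures, infer that
        # it can also verify an ECDSA chain and serve the smaller one.
        if "ecdsa_secp256r1_sha256" in signature_algorithms:
            return ECDSA_CHAIN
        return RSA_CHAIN

    print(pick_chain({"rsa_pss_rsae_sha256", "ecdsa_secp256r1_sha256"}))
    print(pick_chain({"rsa_pkcs1_sha256"}))  # legacy client: bigger chain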

Signature algorithm negotiation does not provide trust anchor agility. While the signature algorithm information implies algorithm support, it provides no information about which trust anchors a client actually trusts. A client can support ECDSA but not have the latest ECDSA root certificate from a specific CA. Due to the wide variety of trust stores in use, many organizations still need to be conservative about when they serve ECDSA certificates, and may need to provide a longer, cross-signed chain for maximum compatibility.

Neither cross-signatures nor signature algorithm negotiation is a solution for migrating authentication to post-quantum cryptography. Cross-signatures do not help with new algorithms, and signature algorithm negotiation is solely about negotiating algorithms, not conveying information about trust anchors. We expect a gradual transition to post-quantum cryptography. Inferring the contents of the trust store from the result of signature algorithm negotiation risks ossifying the mechanism to a specific version of a specific trust store, rather than leaving it as pure algorithm negotiation.

Instead, to introduce agility to TLS we need an explicit mechanism for trust anchor negotiation, to allow the client and server to efficiently determine which certificate to use. At the November 2023 IETF meeting in Prague, Chrome proposed “Trust Expressions” as a mechanism for trust anchor negotiation in TLS. Chrome is currently seeking community input on Trust Expressions via the IETF process. We think the goal of being able to cleanly deploy multiple certificates to handle a range of clients is much more important than the specific mechanisms of the proposal.

From there, we can explore more efficient ways to authenticate servers, such as Merkle Tree Certificates. We view introducing some mechanism for trust anchor agility as a necessity for efficient post-quantum authentication. Experimentation will be extremely important as proposals are developed. Agility also enables using different solutions in different contexts, rather than sending extra data for the lowest common denominator; solutions like Merkle Tree Certificates and intermediate elision require up-to-date clients.

Given these constraints, priorities, and risks, we think agility is more important than defining exactly what a post-quantum PKI will look like at this time. We recommend against immediately standardizing ML-DSA in X.509 for use in the public Web PKI via the CA/Browser Forum. We expect that ML-DSA, once NIST completes standardization, will play a part in a post-quantum Web PKI, but we’re focusing on agility first. This does not preclude introducing ML-DSA in X.509 as an option for private PKIs, which may be operating on stricter post-quantum timelines and have fewer constraints around certificate size, handshake latency, issuance transparency, and unmanaged endpoints.

Ultimately, we think that any approach to post-quantum authentication has the same first requirement—a migration mechanism for clients to opt in to post-quantum secure authentication when servers support it. Post-quantum authentication presents significant challenges to the Web ecosystem, but we believe trust anchor agility will enable us to overcome them and lead to a more secure, robust, and performant post-quantum web.

Notes


  1. The draft is X25519Kyber768, which is a combination of the pre-quantum algorithm X25519 and the post-quantum algorithm Kyber-768. Kyber is being renamed to ML-KEM; however, for the purposes of this post, we will use “Kyber” to refer to the hybrid algorithm defined for TLS.

  2. As the standards from NIST and the IETF are not yet complete, this draft version will later be removed and replaced with the final version. At this stage of standardization, we expect only early adopters to use the primitives.

  3. It actually exists in considerably more than two forms, but from an organizational perspective, there are versions that are signed by other Let’s Encrypt certificates, and a version that is signed by IdenTrust, a completely separate certification authority from Let’s Encrypt.

  4. The chain length includes the root certificate and leaf certificate. The byte numbers are what is transmitted over the wire, and so they include the leaf certificate but not the root certificate. 

Posted by David Adrian, Bob Beck, David Benjamin and Devon O'Brien



Used billions of times each day, the Chrome address bar (which we call the “omnibox”) is a powerful tool to make searching the web easier, whether you’re trying to quickly find your tabs or bookmarks, return to a web page you previously visited, or find information.

With the latest release of Chrome (M124), we’re integrating machine learning models to power the Chrome omnibox on desktop, so that web page suggestions are more precise and relevant to you. In the future, these models will also help improve the relevance scoring of search suggestions. Here’s a closer look at some of the important insights that helped our team build this integration, and where we hope the new model takes us.

How we got here

As the engineering lead for the team responsible for the omnibox, every launch feels special, but this one is truly near and dear to my heart. When I first started working on the Chrome omnibox, I asked around for ideas on how we could make it better for users. The number one answer I heard was, "improve the scoring system." The issue wasn't that the scoring was bad. In fact, the omnibox often feels magical in its ability to surface the URL or query you want! The issue was that it was inflexible. A set of hand-built and hand-tuned formulas did the job well but was difficult to improve or to adapt to new scenarios. As a result, the scoring system went largely untouched for a long time.

For most of that time, an ML-trained scoring model was the obvious path forward, but it took many false starts to finally get here. Our inability to tackle this challenge for so long was due to the difficulty of replacing the core mechanism of a feature used literally billions of times every day. Software engineering projects are sometimes described as "building the plane while flying it." This project felt more like "replacing all the seats in every plane in the world while they're all flying." The scale was enormous, and the changes would be felt directly by every user.

This ambitious undertaking would not have been possible without the work of such a talented and dedicated team. There were bumps in the road, walls we had to break through, and unanticipated issues that slowed us down, but the team was driven by a sincere belief in the impact of getting this right for our users.

A Surprising Insight

One of the fun things about working with ML systems is that training considers all the data, at a scale that would be difficult, if not impossible, for any individual person or team. And that can lead to surprising insights.

The coolest example of this phenomenon on this project was when we looked at the scoring curve of one particular signal: time since last navigation. The expectation with this signal is that the smaller it is (the more recently you've navigated to a particular URL), the bigger the contribution that signal should make towards a higher relevance score.

And that is, in fact, what the model learned. But when we looked closer, we noticed something surprising: when the time since navigation was very low (seconds instead of hours, days or weeks), the model was decreasing the relevance score. It turns out that the training data reflected a pattern where users sometimes navigate to a URL that was not what they really wanted and then immediately return to the Chrome omnibox and try again. In that case, the URL they just navigated to is almost certainly not what they want, so it should receive a low relevance score during this second attempt.
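
To illustrate only the shape of what the model learned (the thresholds and weights below are invented, not the real model's buckets or scores), a contribution function like this captures the pattern: recency helps, except at very small values, where it hurts:

    # Illustrative shape of the learned recency curve; the thresholds and
    # weights are invented, not the real model's. Very recent visits
    # suggest the user bounced off that URL and is retrying, so they
    # lower the score instead of raising it.
    def recency_contribution(seconds_since_last_visit: float) -> float:
        if seconds_since_last_visit < 30:         # just visited: likely a
            return -0.5                           # bounce-back, demote
        if seconds_since_last_visit < 3600:       # within the hour: strong
            return 1.0
        if seconds_since_last_visit < 7 * 86400:  # within the week: mild
            return 0.4
        return 0.0                                # older: no boost

    for t in (5, 600, 86400, 30 * 86400):
        print(t, recency_contribution(t))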

In retrospect, this is obvious. And if we had not launched ML scoring, we definitely would have added a new rule to the old system to reflect this scenario. But before the training system observed and learned from this pattern, it never occurred to anyone that this might be happening.

The Future

We believe the new ML models will open up many new possibilities to improve the user experience, such as incorporating new signals like time of day to improve relevance. We also want to explore training specialized versions of the model for particular environments: for example, mobile, enterprise, or academic users, or perhaps different locales.

Additionally, we observe that the way users interact with the Chrome omnibox changes over time, and we believe the relevance scoring should change with it. With the new scoring system, we can now simply collect fresher signals, retrain, evaluate, and periodically deploy new models.

By Justin Donnelly, Chrome software engineer