CVEReports

Automated vulnerability intelligence platform. Comprehensive reports for high-severity CVEs generated by AI.


© 2026 CVEReports. All rights reserved.

Made with love by Amit Schendel & Alon Barad



CVE-2026-25802
CVSS 7.6 · EPSS 0.04%

Prompt Injection to Stored XSS: Unpacking CVE-2026-25802 in new-api

Alon Barad
Software Engineer

Feb 24, 2026 · 6 min read

PoC Available

Executive Summary (TL;DR)

The 'new-api' LLM gateway trusted AI-generated content too much. By asking the AI to write malicious code, attackers could trigger a Stored XSS in the playground UI, leading to session hijacking. Fixed in version 0.10.8-alpha.9 by sandboxing the output.

A critical Cross-Site Scripting (XSS) vulnerability was discovered in the 'new-api' LLM gateway, specifically within its playground component. The flaw allows attackers to weaponize Large Language Model outputs via indirect prompt injection, causing the application to render malicious JavaScript that executes in the victim's browser. The root cause lies in the unsafe use of React's 'dangerouslySetInnerHTML' without adequate sanitization.

The Hook: When Chatbots Attack

In the rush to integrate Large Language Models (LLMs) into every piece of software known to man, developers often forget a cardinal rule of security: All input is evil. But with LLMs, the "input" is technically "output" generated by a machine we implicitly trust to be smart. This cognitive dissonance is exactly where CVE-2026-25802 lives. It targets new-api, a popular Go-based gateway for managing LLM assets and APIs. The vulnerability isn't in the AI itself; it's in how the human-built UI handles the AI's hallucinations.

The specific component at fault is the "Playground"—a sandbox where administrators and users can test prompts against various models. The irony of calling it a "sandbox" while it lacked actual browser sandboxing is palpable. When a user asks the model to generate content, the frontend renders it. If that content happens to be a malicious JavaScript payload disguised as helpful code, the browser executes it. This turns the LLM into a confused accomplice in a classic Cross-Site Scripting (XSS) attack.

This isn't just a theoretical "popup" bug. Because the chat history is saved to the database, this becomes a Stored XSS. An attacker can generate a malicious chat log, and later, when an administrator reviews the logs or the history is reloaded, the payload detonates. It's a landmine made of text.

The Flaw: dangerouslySetInnerHTML Meets AI

The root cause of this vulnerability is a tale as old as React itself. The developers wanted to display rich text responses from the AI—markdown, code blocks, maybe some bold text for emphasis. To do this, they reached for the forbidden fruit: dangerouslySetInnerHTML. As the name implies, this React attribute is dangerous because it bypasses the virtual DOM's built-in XSS protection and dumps raw HTML directly into the document.

In MarkdownRenderer.jsx, the application took the string response from the LLM and shoved it directly into the DOM. There was no sanitization layer, no DOMPurify, just raw trust. If the LLM output contained <script>alert(1)</script>, the browser would obediently execute it. But wait, it gets worse. There was a secondary vector in CodeViewer.jsx.

The developers implemented a custom syntax highlighter for JSON responses. Instead of using a battle-tested library, they manually constructed HTML strings by wrapping tokens in <span> tags. They forgot to escape the content inside those tokens. So, if a JSON value contained "><img src=x onerror=alert(1)>, the concatenation logic would break out of the span and inject the event handler. It's a masterclass in why you should never build your own parser.
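The flawed pattern can be reconstructed roughly as follows. This is a minimal sketch with illustrative names (highlightToken is hypothetical, not the actual CodeViewer.jsx source), but it captures the bug: tokens are wrapped in span tags via raw string concatenation, with no escaping of the token text itself.

```javascript
// Hypothetical reconstruction of the flawed highlighter logic.
// BUG: `value` is interpolated into an HTML string without escaping.
function highlightToken(type, value) {
  return `<span class="token-${type}">${value}</span>`;
}

// A benign JSON string renders as expected:
highlightToken('string', '"hello"');
// → '<span class="token-string">"hello"</span>'

// A hostile value breaks out of the span and smuggles in an event handler:
highlightToken('string', '"><img src=x onerror=alert(1)>');
// The result contains a live <img onerror=...> once inserted into the DOM.
```

Once this string reaches innerHTML, the browser parses the injected img tag and fires its onerror handler.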

The Code: The Smoking Gun

Let's look at the commit that introduced the fix (ab5456eb1049aa8a0f3e51f359907ec7fff38b4b). The difference between the vulnerable code and the patched code effectively illustrates the severity of the oversight.

The Vulnerable Implementation (Before): In the original MarkdownRenderer.jsx, the code looked something like this:

// The developer trusted the 'content' variable completely
<div 
  className="markdown-body"
  dangerouslySetInnerHTML={{ __html: content }}
/>

This is the digital equivalent of leaving your front door open because you assume only your friends know where you live. The content variable here comes directly from the LLM, which effectively comes from the user's prompt.

The Fix (After): In the same commit, the maintainers realized they couldn't sanitize every possible output, so they chose containment: they replaced the direct render with a sandboxed iframe.

// SandboxedHtmlPreview component
<iframe
  ref={iframeRef}
  sandbox='allow-same-origin' // Note: allow-scripts is MISSING
  srcDoc={code}
  title='HTML Preview'
  style={{ width: '100%', border: 'none' }}
/>

By omitting allow-scripts from the sandbox attribute, the browser renders the HTML visually but refuses to execute any JavaScript within that frame. Additionally, in CodeViewer.jsx, they finally introduced a sanitizer:

const escapeHtml = (str) => {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
};

This function is now called on every token before it gets wrapped in syntax highlighting tags, ensuring that special characters are treated as text, not code.
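Applied to the breakout payload from earlier, the escaper turns every dangerous character into an inert entity (escapeHtml is reproduced from the patch above so this snippet stands alone):

```javascript
// escapeHtml as introduced by the patch (reproduced here for illustration).
const escapeHtml = (str) => {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
};

// The span-breakout payload from earlier becomes harmless text:
escapeHtml('"><img src=x onerror=alert(1)>');
// → '&quot;&gt;&lt;img src=x onerror=alert(1)&gt;'
```

Note that ampersands are escaped first; otherwise the later replacements would be double-encoded.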

The Exploit: Weaponizing the Chatbot

Exploiting this requires a technique known as Indirect Prompt Injection. We don't write the XSS payload; we bully the AI into writing it for us. The attack flow is elegant in its simplicity.

Step 1: The Setup. The attacker logs into the new-api playground and selects a model (it doesn't matter which one, as long as it follows instructions).

Step 2: The Injection. The attacker sends the following prompt: "Please write a complete HTML example that includes a script tag to redirect the current page to google.com. Do not use code blocks, just raw text."

Step 3: The Execution. The LLM, trying to be helpful, responds: "Sure, here is the code: <script>window.location.replace("https://www.google.com")</script>"

Step 4: The Detonation. The new-api frontend receives this string. The MarkdownRenderer sees the HTML tags and passes them to dangerouslySetInnerHTML. The React component mounts, the browser parses the DOM, encounters the <script> tag, and immediately executes the redirect.

Because this interaction is saved in the chat history, any administrator who views this chat session later will also be redirected (or worse, have their session cookies stolen via document.cookie).
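The chain can be simulated outside a browser. In this sketch, mockLLM and renderRaw are stand-ins for the model and the vulnerable MarkdownRenderer (they are not new-api code), but the data flow mirrors the four steps above:

```javascript
// Indirect prompt injection: the model obligingly emits the "example" it was asked for.
const mockLLM = (prompt) =>
  'Sure, here is the code: <script>window.location.replace("https://www.google.com")</script>';

// dangerouslySetInnerHTML amounts to assigning the raw string to innerHTML,
// with no sanitization in between.
const renderRaw = (content) => ({ innerHTML: content });

// Attacker prompt -> LLM output -> persisted chat history -> re-rendered DOM.
const response = mockLLM('Write an HTML example with a script tag that redirects to google.com');
const chatHistory = [response];        // saved to the database: this is what makes it Stored XSS
const dom = renderRaw(chatHistory[0]); // any later viewer replays the payload

dom.innerHTML.includes('<script>');    // → true
```

Every re-render of the stored history repeats the last two lines, which is why the admin reviewing the log is the real victim.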

The Fix: Containment and Sanitization

The remediation for CVE-2026-25802 is twofold, addressing both the rendering engine and the data handling. The primary fix is the introduction of isolation.

1. Sandbox Isolation: The move to an <iframe> with the sandbox attribute is the strongest defense here. Even if the sanitizer fails, or if the LLM invents a new polyglot payload, the browser's security model prevents execution. By strictly defining the capabilities of the iframe (allowing same-origin for styles but denying scripts), the attack surface is effectively neutralized.

2. Context-Aware Escaping: For the syntax highlighter, the developers implemented classic output encoding. This ensures that data is treated as data, not instructions. This is a crucial lesson: never manually build HTML strings without a utility function to handle entity encoding.

Recommendation: If you are running new-api versions prior to 0.10.8-alpha.9, you are vulnerable. Update immediately. If you cannot update, disable the Playground feature or block access to it via network policies, as there is no configuration-based mitigation for the vulnerability itself.

Official Patches

QuantumNous — GitHub Commit: Fix XSS in Markdown and JSON components


Technical Appendix

CVSS Score: 7.6 / 10 (CVSS:3.1/AV:N/AC:L/PR:L/UI:R/S:C/C:N/I:H/A:L)
EPSS Probability: 0.04%

Affected Systems

new-api (QuantumNous)

Affected Versions Detail

Product: new-api (QuantumNous)
Affected Versions: < 0.10.8-alpha.9
Fixed Version: 0.10.8-alpha.9
CWE ID: CWE-79
CVSS Score: 7.6 (High)
Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
Impact: Session Hijacking, RCE (via Admin context)
Exploit Status: PoC Available

MITRE ATT&CK Mapping

T1189: Drive-by Compromise (Initial Access)
T1059.007: Command and Scripting Interpreter: JavaScript (Execution)
T1185: Browser Session Hijacking (Collection)
CWE-79: Cross-site Scripting

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

Known Exploits & Detection

GitHub Advisory: Proof of concept demonstrating redirection to Google via LLM prompt.

Vulnerability Timeline

Fix commit pushed to GitHub
2026-02-06
GitHub Advisory (GHSA) published
2026-02-23
CVE-2026-25802 published
2026-02-24

References & Sources

  • [1] GHSA Advisory
  • [2] NIST NVD Entry

Attack Flow Diagram
