Next.js Image Optimizer: The 4GB Hello World
Jan 27, 2026 · 6 min read
Executive Summary (TL;DR)
The `/_next/image` endpoint used `res.arrayBuffer()` to fetch upstream images, loading the entire file into RAM. An attacker can host a multi-gigabyte image on a whitelisted domain, request it via the optimizer, and instantly crash the Node.js process via Out-Of-Memory (OOM). Fixed in 15.5.10 and 16.1.5 by implementing streaming size checks.
The Next.js Image Optimization API, a beloved feature for frontend performance, contained a fatal resource handling flaw. By requesting the optimization of a massive external image, an attacker could force the server to buffer the entire file into memory before validation, leading to immediate process termination (OOM).
The Hook: Optimizing Ourselves to Death
Next.js is the crown jewel of the React ecosystem, and for good reason. It handles the hard stuff—SSR, routing, and especially Image Optimization. The <Image /> component is magical: you pass it a URL, and Next.js automatically resizes, compresses, and serves it in modern formats like WebP or AVIF.
Under the hood, this magic relies on a proxying endpoint: /_next/image. When you request an image, the Next.js server acts as a middleman. It fetches the original image from the source (upstream), processes it using a library like sharp, and spits out the optimized version.
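The shape of that proxy request can be sketched as a tiny helper. This is a hypothetical function of mine, not Next.js internals; it just mirrors the documented url, w, and q query parameters of the /_next/image endpoint:

```javascript
// Hypothetical helper mirroring the URL shape that <Image /> requests
// from the built-in optimizer endpoint.
function optimizerUrl(src, width, quality = 75) {
  const params = new URLSearchParams({
    url: src,            // upstream image, validated against remotePatterns
    w: String(width),    // target width
    q: String(quality),  // target quality
  });
  return `/_next/image?${params.toString()}`;
}

// e.g. optimizerUrl('https://example.com/hero.jpg', 1080)
```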
But here is the catch: handling binary data in Node.js is a minefield. If you treat a 4GB file the same way you treat a 4KB JSON response, you are going to have a bad time. And that is exactly what happened here. Next.js forgot that remotePatterns (the whitelist of allowed image domains) doesn't guarantee the size of the content on those domains. Trusting the upstream source to be reasonable is a classic security anti-pattern.
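For reference, the whitelist the attack must satisfy is the remotePatterns block in next.config.js. A typical configuration (the hostname here is illustrative) looks like this:

```javascript
// next.config.js — only these upstream hosts may be optimized.
// Note: this restricts *where* images come from, not *how big* they are.
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'images.unsplash.com',
        pathname: '/**',
      },
    ],
  },
}
```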
The Flaw: A Gluttonous Buffer
The vulnerability lies in how Next.js fetched the upstream image. In the world of high-performance IO, the golden rule is Streaming. You never want to hold an entire file in RAM if you can help it. You want to read a chunk, process a chunk, send a chunk.
Next.js, however, took the "swallow the ocean" approach. In packages/next/src/server/image-optimizer.ts, the code used the fetch API's arrayBuffer() method.
// The code that killed the server
const upstreamBuffer = Buffer.from(await res.arrayBuffer());

For those not fluent in Node.js internals: await res.arrayBuffer() tells the runtime to download the entire HTTP response body and allocate a contiguous block of memory in the V8 heap to store it.
There was no check on Content-Length before this allocation (which can be spoofed anyway), and no check on the incoming byte stream size during the download. The server would blindly attempt to allocate whatever the upstream server sent. If an attacker points the optimizer at a 5GB file, the Node.js process attempts to allocate 5GB of RAM. In a containerized environment like Kubernetes or AWS Fargate, where pods often have 512MB or 1GB limits, this is an instant OOMKill.
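To see why a header pre-check alone would not have saved us, here is a minimal sketch (simplified to a synchronous generator; all names are mine, not the Next.js source) of a spoofed upstream whose header claims 100 bytes while the body actually yields 8 MB:

```javascript
// A mock upstream: the Content-Length header lies, the body does not.
function* spoofedBody() {
  // 128 chunks of 64 KB = 8 MB of actual body
  for (let i = 0; i < 128; i++) yield Buffer.alloc(64 * 1024);
}

const mockResponse = {
  headers: new Map([['content-length', '100']]),
  body: spoofedBody,
};

const LIMIT = 1024 * 1024; // 1 MB cap for the sketch

// Naive pre-check: trusts the (spoofable) Content-Length header.
function headerCheckPasses(res, limit) {
  return Number(res.headers.get('content-length')) <= limit;
}

// Defensive consumer: counts bytes in-flight and aborts past the cap,
// the same shape as the patched loop shown below.
function readWithLimit(res, limit) {
  const chunks = [];
  let total = 0;
  for (const chunk of res.body()) {
    total += chunk.byteLength;
    if (total > limit) throw new Error('Upstream response too large');
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
}
```

Here headerCheckPasses(mockResponse, LIMIT) happily returns true, while readWithLimit bails after reading just over 1 MB: the header lied, the byte counter didn't.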
The Code: Before and After
Let's look at the smoking gun. This diff shows the transition from a naive buffer allocation to a defensive streaming approach.
The Vulnerable Logic (Pre-Patch): It was elegant, simple, and deadly.
// 💀 fetches everything into RAM
const buffer = Buffer.from(await res.arrayBuffer());

The Fix (Commit 1caaca3):
The patch replaces the one-liner with a manual stream consumer. It iterates over the response body chunk by chunk, keeping a running tally of the size. If the size exceeds the new maximumResponseBody threshold (defaulting to 50MB), it aborts the stream and throws an error before the memory is fully consumed.
// 🛡️ Safe streaming implementation
const chunks: Buffer[] = []
let totalSize = 0
// Iterate over the stream
for await (const c of res.body) {
const chunk = Buffer.from(c)
totalSize += chunk.byteLength
// Check limits IN-FLIGHT
if (totalSize > maximumResponseBody) {
Log.error('upstream image response exceeded maximum size', totalSize)
throw new ImageError(413, 'Upstream response too large')
}
chunks.push(chunk)
}
// Only combine if we survived the loop
const buffer = Buffer.concat(chunks)

This simple loop fundamentally changes the memory profile of the request. Instead of spiking to the full file size immediately, we now have a "dead man's switch" that kills the download if it gets too heavy.
The Exploit: Crashing the Pods
To exploit this, we don't need fancy shellcode or ROP chains. We just need a really big picture. The only constraint is the remotePatterns configuration in the victim's next.config.js.
Scenario:
The victim allows images from images.unsplash.com or a generic S3 bucket.
Step 1: The Trap
We upload a "bomb" image to an allowed domain. If we control a whitelisted host (e.g., a bucket that serves user-uploaded content), we simply host a 10GB file named harmless-avatar.jpg. If we are attacking through a public CDN like Unsplash, we might look for the highest-resolution TIFF available, or stand up a crafted server that answers HEAD requests with Content-Length: 100 but streams 10GB of data on GET.
Step 2: The Trigger
We send a request to the victim:
GET https://victim-app.com/_next/image?url=https://trusted-bucket.s3.amazonaws.com/10gb-bomb.jpg&w=1080&q=75
Step 3: The Crash
The Next.js server receives the request. It sees the URL is whitelisted. It initiates the fetch. The upstream server starts pouring data. Next.js keeps expanding the heap.
At around ~1.5GB (depending on V8 flags), the Garbage Collector starts panic-sweeping. It fails. The process crashes with FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory.
If the victim is using a process manager like PM2 or Kubernetes, the service restarts. If we script this to send a request every time the server comes back up, we have achieved a persistent Denial of Service.
The Impact: Why This Matters
Denial of Service (DoS) is often overlooked as "annoying but not critical," but in modern serverless and containerized architectures, it is a wallet-drainer and a reliability nightmare.
Availability: A single request can kill a worker process. A small script with 10 threads can keep an entire cluster of pods in a crash loop, taking down the application for all users.
Financial: If the application is auto-scaling (e.g., AWS Fargate or Vercel-style setups), the orchestrator might see high CPU/memory usage (as the GC thrashes) and spin up more instances to handle the load. The attacker is essentially forcing the victim to provision max capacity while simultaneously killing it.
Re-exploitation Potential: Even with the patch, the default limit was initially 300MB, then lowered to 50MB. While 50MB prevents a single request from crashing a server, concurrency is still the enemy. If an attacker sends 20 simultaneous requests for 49MB images, that is still ~1GB of simultaneous memory pressure. The patch fixes the "infinite" allocation, but it doesn't solve the fundamental cost of processing large media on the main application thread.
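The concurrency math above is easy to verify:

```javascript
// Back-of-envelope: post-patch memory pressure from concurrent requests
// that each stay just under the 50 MB default cap.
const perRequestBytes = 49 * 1024 * 1024; // one 49 MB upstream image
const concurrentRequests = 20;
const totalBytes = perRequestBytes * concurrentRequests;
const totalGiB = totalBytes / 2 ** 30; // ≈ 0.96 GiB of buffers held at once
```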
Mitigation: Patch and Configure
The remediation is straightforward: stop trusting the upstream.
1. Update Immediately
Upgrade to Next.js v15.5.10 or v16.1.5. These versions introduce the streaming check.
2. Configure Limits
The patch introduces a new configuration option. Do not rely on the default (50MB) if your server is memory-constrained. Set it explicitly in next.config.js:
module.exports = {
images: {
// Cap it at 5MB or 10MB to be safe
maximumResponseBody: 5 * 1024 * 1024,
},
}

3. Architectural Defense
Ideally, your core application server shouldn't be doing heavy image processing. Offload image optimization to a dedicated service or a commercial CDN (like Cloudflare, Akamai, or Vercel's managed edge) that handles resource limits at the infrastructure level. If you are self-hosting, consider running the Image Optimization in a separate, isolated microservice to prevent one heavy JPEG from taking down your checkout flow.
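One way to wire up that offload is Next.js's documented custom-loader support, which bypasses the built-in /_next/image endpoint entirely. A sketch (the CDN host and loader filename are placeholders):

```javascript
// next.config.js — route <Image /> through an external optimizer
// instead of the built-in /_next/image endpoint.
module.exports = {
  images: {
    loader: 'custom',
    // image-loader.js default-exports ({ src, width, quality }) => url,
    // pointing at your CDN's resize endpoint.
    loaderFile: './image-loader.js',
  },
}
```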
Technical Appendix
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H

Affected Systems
Affected Versions Detail
| Product | Affected Versions | Fixed Version |
|---|---|---|
| Next.js (Vercel) | >= 10.0.0, <= 15.5.9 | 15.5.10 |
| Next.js (Vercel) | >= 16.0.0, <= 16.1.4 | 16.1.5 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-400 (Uncontrolled Resource Consumption) |
| CVSS Score | 5.9 (Medium) |
| Attack Vector | Network |
| Impact | Availability (High) |
| Vulnerable Function | fetchExternalImage / res.arrayBuffer() |
| Fix Implementation | Streaming Byte Counter |
CWE-400: Uncontrolled Resource Consumption
The product does not properly restrict the size or amount of resources that are requested or consumed, potentially allowing an attacker to cause a denial of service.