Feb 28, 2026 · 7 min read
Unauthenticated remote DoS in AIOHTTP via zip bomb. Attackers send small compressed payloads that expand to fill server memory. Fixed in version 3.13.3.
A high-severity Denial of Service (DoS) vulnerability exists in the AIOHTTP asynchronous HTTP client/server framework for Python (versions 3.13.2 and earlier). The flaw resides in the `auto_decompress` feature of the HTTP parser, which lacks appropriate size limits for decompressed data. This omission allows unauthenticated remote attackers to execute 'zip bomb' attacks, where a small, highly compressed request body expands into a massive payload in memory, causing resource exhaustion and server crashes.
AIOHTTP is a foundational asynchronous HTTP client/server framework for Python, widely used in modern microservices and web applications to handle concurrent connections efficiently. The framework supports automatic decompression of HTTP request bodies to simplify payload handling for developers. However, prior to version 3.13.3, this feature contained a critical oversight in its resource management logic.
The vulnerability, designated CVE-2025-69223, is a classic 'zip bomb' or decompression bomb scenario (CWE-409). When the auto_decompress setting is enabled (which is often the default or easily toggled configuration for handling compressed uploads), the server accepts request bodies encoded with algorithms such as gzip, deflate, brotli, or zstd. The parsing logic attempts to decompress the entire stream into memory without enforcing a maximum expansion limit.
This flaw allows an attacker to craft a malicious HTTP request with a high compression ratio—for example, a payload of a few kilobytes that decompresses into gigabytes of data. As the server processes this request, it allocates memory to store the decompressed output until the system's available RAM is exhausted, leading to an Out-of-Memory (OOM) crash or severe performance degradation affecting all users.
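The scale of that asymmetry is easy to demonstrate with Python's standard library alone (the 64 MiB figure below is an arbitrary illustration, not a value from aiohttp):

```python
import gzip

# Illustrative only: highly redundant data compresses dramatically,
# which is exactly the asymmetry a decompression bomb exploits.
raw = b"\x00" * (64 * 2**20)                # 64 MiB of zeros
bomb = gzip.compress(raw, compresslevel=9)  # collapses to roughly 65 KiB
print(len(bomb), len(raw) // len(bomb))     # wire size and expansion ratio
```

DEFLATE tops out near a 1032:1 ratio per stream, so even a modest upload limit on the *compressed* body says almost nothing about the memory the decompressed result will consume.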
The root cause of CVE-2025-69223 lies in the implementation of the DeflateBuffer and DecompressionBaseHandler classes within aiohttp/http_parser.py and aiohttp/compression_utils.py. The framework delegates decompression to underlying Python libraries or bindings (such as zlib, brotli, or zstd) but failed to restrict the output size during the stream processing phase.
In the vulnerable versions, the feed_data method reads chunks of compressed data from the wire and passes them directly to the decompression context. While the input size might be small (and thus pass standard Content-Length checks), the output size is determined solely by the data's entropy. The parser lacked a mechanism to track the cumulative size of the decompressed data and abort the operation if a safety threshold was breached.
Specifically, the decompress calls inside the parsing loop did not utilize the max_length parameter (or equivalent) available in modern decompression APIs. Without it, the expansion loop continued until decompression completed or the operating system terminated the process for memory exhaustion. This is a textbook case of Allocation of Resources Without Limits or Throttling (CWE-770), effectively granting external actors control over the server's memory allocation.
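For reference, the stdlib zlib API exposes exactly the missing guard: its streaming decompressor accepts a max_length cap and parks leftover input in unconsumed_tail rather than allocating further:

```python
import zlib

# A ~1 MB body compressed down to a few KB stands in for the bomb.
payload = zlib.compress(b"A" * 1_000_000)
d = zlib.decompressobj()

# Cap the output at 4 KiB; zlib parks everything it did not inflate
# in unconsumed_tail instead of allocating without bound.
out = d.decompress(payload, 4096)
print(len(out))                  # 4096
print(bool(d.unconsumed_tail))   # True: leftover input signals oversize data
```

A non-empty `unconsumed_tail` after a capped call is the cheap, early signal that the payload is larger than the caller is willing to hold.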
The remediation in version 3.13.3 introduces a robust enforcement mechanism for decompression limits. The fix involves three key changes: defining a default limit, tracking the output size, and raising an exception when the limit is exceeded.
Below is a conceptual reconstruction of the patch applied to aiohttp/http_parser.py and related utilities.
Vulnerable Logic (Simplified):

```python
# Inside the parsing loop, data is decompressed without limits
def feed_data(self, chunk):
    # ... (code omitted)
    try:
        # The decompressor simply expands whatever it receives
        decoded_chunk = self.decompressor.decompress(chunk)
        self.payload.write(decoded_chunk)
    except Exception:
        # Generic error handling
        pass
```

Patched Logic (Simplified):
```python
DEFAULT_MAX_DECOMPRESS_SIZE = 2**25  # 32 MiB limit

class DecompressionBaseHandler:
    def __init__(self, encoding, max_decompress_size=DEFAULT_MAX_DECOMPRESS_SIZE):
        self._max_decompress_size = max_decompress_size
        # ...

    def decompress_sync(self, data):
        # Check if the underlying library supports max_length
        try:
            # Enforce the limit directly in the decompress call
            return self.decompressor.decompress(
                data,
                max_length=self._max_decompress_size,
            )
        except (zlib.error, brotli.error) as exc:
            # Handle specific decompression errors
            raise DecompressionError() from exc
```

The patch introduces DEFAULT_MAX_DECOMPRESS_SIZE, set to 32 MiB. If a request body expands beyond this threshold, the parser now raises a ContentEncodingError (specifically wrapping a DecompressSizeError), causing the connection to close immediately and freeing the allocated resources before they impact system stability.
Exploiting this vulnerability requires no authentication and can be performed with standard HTTP tooling, provided the attacker can construct a valid compressed payload. The attack targets endpoints that accept POST or PUT requests and respect the Content-Encoding header.
Attack Workflow:
1. Identify an endpoint that accepts POST or PUT request bodies and respects the Content-Encoding header.
2. Send a request with `Content-Encoding: gzip` (or `deflate`, `br`); the body of the request contains the malicious compressed payload.

The impact of CVE-2025-69223 is strictly a Denial of Service, but the severity is High (CVSS 7.5) due to the ease of exploitation and the potential for total service disruption. Unlike complex memory corruption bugs, this vulnerability relies on logical resource mismanagement, making it reliable and platform-independent.
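A hedged sketch of such a request using only the standard library (victim.example and the /upload path are hypothetical placeholders; the snippet builds the request object but does not send it):

```python
import gzip
import urllib.request

# ~65 KiB on the wire, 64 MiB once inflated by the server.
bomb = gzip.compress(b"\x00" * (64 * 2**20))

# Hypothetical target; constructing the request only, never sending it.
req = urllib.request.Request(
    "http://victim.example/upload",
    data=bomb,
    headers={
        "Content-Encoding": "gzip",  # triggers auto_decompress on the server
        "Content-Type": "application/octet-stream",
    },
    method="POST",
)
print(req.method, len(bomb))
```

Nothing about the request is malformed, which is why perimeter checks on the compressed Content-Length alone do not stop the attack.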
Operational Impact:
There is no impact on Confidentiality or Integrity; the attacker cannot read memory or execute arbitrary code through this vector. The risk is confined to availability.
The primary remediation is to upgrade the aiohttp library to version 3.13.3 or later. This version enforces a hard limit of 32 MiB on decompressed data by default, which is sufficient for most standard API interactions but small enough to prevent memory exhaustion attacks.
Remediation Steps:

1. Audit dependencies (`pip freeze` or `poetry show`) for aiohttp versions <= 3.13.2.
2. Upgrade the package:

```
pip install --upgrade aiohttp
# OR
poetry update aiohttp
```

3. Optionally tune parser settings such as `max_field_size` or implement custom handling, though the patch specifically targets the decompression step handled by the parser.

Defensive Coding (Workarounds):
If an immediate upgrade is impossible, developers should disable auto_decompress in their server configuration or middleware and handle decompression manually in the application logic. This allows developers to read the stream in chunks and count the bytes, aborting if the total size exceeds a safe threshold (e.g., 10 MB). Additionally, implementing a Web Application Firewall (WAF) rule to block requests with Content-Encoding from untrusted sources serves as a temporary stopgap.
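A minimal sketch of that manual approach, assuming the still-compressed body arrives as an iterable of byte chunks; `safe_inflate_stream`, `LIMIT`, and the error message are illustrative, not aiohttp API:

```python
import zlib

LIMIT = 10 * 2**20  # 10 MB cap, per the threshold suggested above

def safe_inflate_stream(chunks, limit=LIMIT, step=64 * 1024):
    """Yield decompressed gzip data, aborting once `limit` is exceeded."""
    d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # gzip framing
    total = 0
    for chunk in chunks:
        data = chunk
        while data:
            # Inflate at most `step` bytes at a time so the running total
            # is checked before any large allocation happens.
            out = d.decompress(data, step)
            total += len(out)
            if total > limit:
                raise ValueError("body exceeds decompression limit")
            if out:
                yield out
            data = d.unconsumed_tail
```

Because the cap is enforced per `step`, memory use is bounded even mid-chunk; a bomb is rejected after inflating at most `limit + step` bytes rather than gigabytes.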
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| aiohttp (aio-libs) | <= 3.13.2 | 3.13.3 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-409 (Improper Handling of Highly Compressed Data) |
| CVSS v3.1 | 7.5 (High) |
| Attack Vector | Network (HTTP) |
| Impact | Denial of Service (Memory Exhaustion) |
| Authentication | None Required |
| Patch Status | Fixed in 3.13.3 |
The application does not properly control the amount of resources used when handling highly compressed data, leading to a denial of service.