Feb 24, 2026 · 6 min read
A resource exhaustion vulnerability in NATS-Server allows unauthenticated attackers to crash the server via WebSocket compression bombs. By sending highly compressed frames, an attacker forces the server to allocate unlimited memory during decompression. Fixed in versions 2.11.12 and 2.12.3.
NATS-Server, the high-performance messaging system used as the nervous system for countless cloud-native architectures, contains a critical flaw in its WebSocket implementation. By failing to bound memory allocation during the decompression of WebSocket frames, the server exposes itself to a trivial Denial of Service (DoS) attack. An attacker can send a tiny, specially crafted 'compression bomb' packet that expands to many times its wire size in memory, triggering the OOM killer and crashing the service instantly.
NATS is built for speed. It is the cloud-native equivalent of a Formula One car: stripped down, aerodynamic, and capable of handling millions of messages a second. Developers love it because it just works. It connects microservices, IoT devices, and edge nodes with a simplicity that makes other message queues look like Rube Goldberg machines.
But here is the thing about racing cars: safety features often add weight. In the quest for performance and feature parity, NATS implemented WebSocket support to allow web clients to talk directly to the message bus. It included support for permessage-deflate, an extension defined in RFC 7692 that allows WebSocket frames to be compressed. This sounds great on paper—save bandwidth, reduce latency.
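For reference, negotiating the extension is just one extra header in the standard WebSocket handshake defined by RFC 6455 and RFC 7692 (the host and key below are illustrative; the key is the RFC's sample nonce):

```
GET / HTTP/1.1
Host: nats.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Extensions: permessage-deflate
```

If the server echoes `Sec-WebSocket-Extensions: permessage-deflate` in its `101 Switching Protocols` response, both sides may send DEFLATE-compressed frames from then on.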
However, implementing compression is like handling explosives. If you do not put strict containment protocols in place, things blow up. In CVE-2026-27571, the NATS team forgot to check the size of the package before unwrapping it. They assumed the input would be reasonable. Spoiler alert: the internet is never reasonable.
The vulnerability is a textbook "Decompression Bomb" (or Zip Bomb). The concept is ancient in internet years, yet we keep seeing it resurface in modern Go, Rust, and Node.js projects. The logic flaw resides in server/websocket.go.
When a client connects via WebSocket and negotiates compression, the server spins up a decompressor. For every incoming frame, the server reads the compressed bytes and inflates them to process the message. The problem? The server utilized Go's io.ReadAll() directly on the decompression stream without a limit that correlated to the maximum allowed payload size.
In the DEFLATE algorithm, a long run of repeated bytes (say, a gigabyte of zeros) can be represented in a tiny fraction of that space; DEFLATE's compression ratio tops out around 1032:1, so roughly a megabyte on the wire can inflate to a gigabyte in memory. When the NATS server receives such a packet, io.ReadAll() keeps allocating more and more RAM to hold the expanding data until the stream ends. Since the check against max_payload happened after or outside this reading loop (or simply wasn't enforced strictly enough on the stream itself), the Go runtime happily eats up all available heap memory. By the time the server realizes the message is too big, it is too late—the OOM (Out of Memory) killer has already stepped in to execute the process.
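To see the asymmetry concretely, here is a small self-contained Go sketch (not NATS code) that DEFLATE-compresses a buffer of zeros with the standard library's compress/flate:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

// compressedSize DEFLATE-compresses n zero bytes and returns the
// compressed length, showing the ratio an attacker gets for free.
func compressedSize(n int) int {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.BestCompression)
	w.Write(bytes.Repeat([]byte{0}, n))
	w.Close()
	return buf.Len()
}

func main() {
	n := 10 << 20 // 10 MiB of zeros
	c := compressedSize(n)
	fmt.Printf("%d bytes of zeros -> %d compressed bytes (about %d:1)\n", n, c, n/c)
}
```

The resulting frame is on the order of ten kilobytes, yet an unbounded io.ReadAll on the decompressor must materialize the full 10 MiB; a real bomb simply scales this to gigabytes.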
Let's look at the smoking gun in server/websocket.go. The vulnerability existed because the code trusted the standard library to handle the reading without boundaries. Here is the simplified logic pre-patch:
```go
// The code that killed the cat
func (r *wsReadInfo) decompress(d io.Reader) ([]byte, error) {
	// d is the flate.Reader (decompressor)
	// io.ReadAll reads until EOF. If the compressed data says
	// "I expand to 10GB", ReadAll tries to allocate 10GB.
	b, err := io.ReadAll(d)
	if err != nil {
		return nil, err
	}
	return b, nil
}
```

This is a rookie mistake in a systems language, but an easy one to make if you are focused on functionality over defensive coding. The fix, applied in commit f77fb7c4535e6727cc1a2899cd8e6bbdd8ba2017, introduces adult supervision in the form of io.LimitedReader.
```go
// The Fix: Trust, but verify (and limit)
func (r *wsReadInfo) decompress(d io.Reader, mpay int) ([]byte, error) {
	// ...
	// We wrap the decompressor 'd' in a LimitedReader.
	// limit = max_payload + 1 (to detect overflow)
	lr := io.LimitedReader{R: d, N: int64(mpay) + 1}
	b, err := io.ReadAll(&lr)
	// If we read more than mpay, we know the client is lying/malicious
	if err == nil && len(b) > mpay {
		return nil, ErrMaxPayload
	}
	return b, err
}
```

By passing the mpay (maximum payload) value into the decompression function, the server now stops reading exactly one byte past the limit. The memory allocation is capped, and the attack is neutralized.
Exploiting this does not require advanced shellcode or heap grooming. It just requires a basic understanding of the WebSocket protocol and zlib. An attacker doesn't even need valid NATS credentials—the WebSocket handshake and frame processing happen before the NATS-level CONNECT verb is fully processed and authenticated.
The Attack Chain:
1. Open a WebSocket connection to ws://target:port/ with the header Sec-WebSocket-Extensions: permessage-deflate.
2. Send a small compressed data frame whose DEFLATE payload inflates to gigabytes.

The Result:
The server receives the small frame (slipping under any network-level size limits). It passes it to the decompress function. The function starts allocating. 1MB... 100MB... 1GB... Crash.
Because Go is memory-safe, you won't get Remote Code Execution (RCE) via buffer overflow here. But you get something arguably worse for a message bus: total unavailability. If NATS is the backbone of your microservices, your entire architecture just went dark.
The CVSS score of 5.9 feels deceptively low for a vulnerability that can take down production infrastructure with a single packet. The "High Complexity" (AC:H) metric is doing a lot of heavy lifting here, likely arguing that WebSockets must be explicitly enabled and exposed.
However, in many modern deployments (especially those involving mobile clients or web dashboards), WebSocket support is the entire point of using NATS. If you have this feature turned on, the complexity for the attacker is actually quite low. There is no authentication barrier.
This is an asymmetric attack. The attacker spends pennies on bandwidth and CPU to generate the compressed frame. The victim spends gigabytes of RAM and potential downtime processing it. In a Kubernetes environment, this could lead to a crash loop where the pod restarts, accepts the connection again, and crashes again.
If you are running NATS Server with WebSockets enabled, you have two choices: patch or disable.
1. The Patch: Upgrade immediately to version 2.11.12 or 2.12.3. These versions include the boundary checks discussed above. This is the only true fix.
2. The Workaround: If you cannot upgrade today, check your configuration file. If you do not need WebSockets, turn them off.
```
# Disable this block if not strictly necessary
websocket {
    listen: "0.0.0.0:4222"
    # ...
}
```

3. The Band-Aid: If you must run a vulnerable version, place a reverse proxy (like Nginx or HAProxy) in front of NATS that handles the WebSocket termination and does not support compression, or enforces strict body limits before passing traffic to NATS. However, ensure the proxy itself isn't vulnerable to the same bomb!
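One concrete way to apply that band-aid with Nginx (a sketch, not a drop-in config; the upstream name is an assumption) is to strip the client's compression offer, so permessage-deflate is never negotiated and the server-side decompressor is never reached:

```
# Hypothetical reverse-proxy snippet: forward WebSocket traffic to NATS
# but drop the client's offer to use permessage-deflate entirely.
location / {
    proxy_pass http://nats_backend;        # assumed upstream name
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Setting the header to an empty value removes it from the
    # proxied request, so compression is never negotiated.
    proxy_set_header Sec-WebSocket-Extensions "";
}
```

The trade-off is that legitimate clients lose compression too, which is usually acceptable until you can upgrade.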
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| nats-server (Synadia) | < 2.11.12 | 2.11.12 |
| nats-server (Synadia) | 2.12.0-RC.1 to < 2.12.3 | 2.12.3 |
| Attribute | Detail |
|---|---|
| CWE | CWE-409 (Improper Handling of Highly Compressed Data) |
| CVSS | 5.9 (Medium) |
| Attack Vector | Network (WebSocket) |
| Impact | Denial of Service (Memory Exhaustion) |
| Authentication | None Required |
| Fix Commit | f77fb7c4535e6727cc1a2899cd8e6bbdd8ba2017 |
The software does not handle compressed data correctly, allowing an attacker to send a small payload that expands to a very large size, consuming excessive resources.