Feb 18, 2026 · 5 min read
OpenClaw (and the clawdbot package) < 2026.2.14 creates full memory buffers for Base64 inputs before checking their size. An attacker can send a large payload to trigger an OOM crash (DoS).
A classic case of 'allocation before validation' in the OpenClaw chat platform allows attackers to crash the server via memory exhaustion. By sending massive Base64-encoded strings, an attacker can trick the application into allocating gigabytes of memory to decode or sanitize the input before checking if the file size is actually within limits. This results in a Node.js OutOfMemoryError and a hard crash of the service.
In the world of web applications, handling file uploads is like handling a live grenade. You have to be gentle, precise, and most importantly, you need to know how big the explosion might be before you pull the pin.
OpenClaw, a chat platform that likely prides itself on slick communication, fell into a classic trap when handling media attachments. Specifically, the issue lies in how they calculate the size of Base64-encoded files. In a naive implementation, if you want to know how big a file is, you decode it and check the length. Simple, right?
Wrong. That logic is deadly in an event-driven, single-threaded environment like Node.js. If I tell the server, "Hey, I'm sending you a 2GB image encoded in Base64," and the server's response is, "Cool, let me load that entire 2GB into RAM real quick so I can check if it's allowed," you have a recipe for disaster. This vulnerability is exactly that: a polite request for the server to commit suicide by memory gluttony.
The root cause here is a failure to understand the cost of operations. The developers implemented a workflow that looks like this: Receive Input -> Clean Input -> Decode Input -> Check Size -> Error if too big.
The problem is that steps 2 and 3 require massive memory allocations. In the vulnerable versions of OpenClaw, the code utilized Buffer.from(b64, 'base64') to convert the incoming string into binary data. In Node.js, this eagerly allocates the entire decoded payload as a new slab of off-heap memory, sized entirely by the attacker-controlled input.
Even worse, there was logic to "estimate" the size that involved base64.trim().replace(/\s+/g, ""). Let's pause and appreciate the inefficiency here. If an attacker sends a 500MB string, .replace() creates another massive string in memory to hold the result. We are effectively doubling or tripling the memory footprint of the attack payload just to perform a check that says "Error: File too big." By the time the code realizes the file is too large, the process has already crashed with an OutOfMemoryError.
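To make the cost concrete, here is a small, hypothetical reproduction of the decode-then-check pattern (the function name and sizes are ours, not OpenClaw's actual code): even a modest Base64 string materializes its full decoded form before the limit is ever consulted.

```ts
// Hypothetical reproduction of the decode-then-check anti-pattern;
// names and limits are illustrative, not OpenClaw's actual code.
function checkSizeTheExpensiveWay(b64: string, maxBytes: number): number {
  // Buffer.from eagerly allocates the entire decoded payload --
  // roughly 3 bytes for every 4 Base64 characters -- before we can
  // even ask how big it is.
  const decoded = Buffer.from(b64, "base64");
  if (decoded.byteLength > maxBytes) {
    throw new Error("exceeds size limit");
  }
  return decoded.byteLength;
}

// A 4 MiB Base64 string of 'A's decodes to 3 MiB of zero bytes,
// all of which is allocated just to learn the size.
const bytes = Buffer.from("A".repeat(4 * 1024 * 1024), "base64").byteLength;
console.log(bytes); // 3145728
```

Scale the input up to gigabytes and the allocation itself becomes the attack.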
Let's look at the crime scene. The vulnerable code blindly trusted the runtime to handle massive allocations. Here is the logic that kills the process:
```ts
// The expensive way to check size
const sizeBytes = Buffer.from(b64, "base64").byteLength;
if (sizeBytes > maxBytes) {
  throw new Error("exceeds size limit");
}
```

This is the equivalent of buying a car just to see if it fits in your garage. If it doesn't, you've already spent the money (memory). The fix, introduced in commit 31791233d60495725fa012745dde8d6ee69e9595, changes the paradigm. Instead of allocating, we do math.
The patched version iterates over the string to count valid Base64 characters, ignoring whitespace, without creating new strings or buffers:
```ts
// The smart way: O(1) memory, O(n) CPU
export function estimateBase64DecodedBytes(base64: string): number {
  let effectiveLen = 0;
  // Iterate, don't allocate
  for (let i = 0; i < base64.length; i += 1) {
    const code = base64.charCodeAt(i);
    if (code <= 0x20) continue; // Skip whitespace
    effectiveLen += 1;
  }
  // ... calculation formula ...
}
```

This function allows OpenClaw to reject a 10GB payload while using only a few bytes of stack memory for its loop counters.
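The elided calculation is presumably the standard Base64 arithmetic: every 4 effective characters decode to 3 bytes, less one byte per '=' padding character. A minimal, self-contained sketch under that assumption (not the project's verbatim code):

```ts
// Constant-memory Base64 size estimator. The formula is standard
// Base64 math and assumed to match the patch, not copied from it.
function estimateBase64DecodedBytes(base64: string): number {
  let effectiveLen = 0;
  let padding = 0;
  for (let i = 0; i < base64.length; i += 1) {
    const code = base64.charCodeAt(i);
    if (code <= 0x20) continue; // skip whitespace without allocating
    if (code === 0x3d) padding += 1; // '='
    effectiveLen += 1;
  }
  // 4 Base64 chars -> 3 bytes; each '=' pad encodes nothing.
  return Math.max(0, Math.floor((effectiveLen * 3) / 4) - padding);
}

console.log(estimateBase64DecodedBytes("aGVsbG8=")); // 5 -- "hello"
```

The whole computation touches only two integer counters, no matter how long the input string is.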
Exploiting this does not require advanced binary wizardry. It requires curl and a lot of the letter 'A'. The attack vector is a standard HTTP POST request to the Gateway or Media endpoints.
An attacker constructs a JSON payload containing a Base64 field. They don't even need valid image data; they just need a valid Base64 string structure. Padding the string to a massive length (e.g., 1GB+) is sufficient.
```sh
# Conceptual Exploit Payload
payload=$(python3 -c "print('A' * 1000000000)")
curl -X POST https://target-openclaw-instance/api/media/upload \
  -H "Content-Type: application/json" \
  -d "{\"file\": \"$payload\", \"name\": \"doom.png\"}"
```

When the server parses this JSON and attempts to process the file field using the vulnerable logic, the Node.js event loop blocks while trying to allocate the buffer. When the allocation fails (or the garbage collector goes into a panic spiral), the process dies. If the service is orchestrated (e.g., Kubernetes), the pod will crash and restart. If an attacker loops this request, they can keep the service in a permanent state of crash-loop backoff.
The primary fix is to update openclaw or clawdbot to version 2026.2.14. This version includes the memory-efficient estimation logic and stricter regex validation (/^[A-Za-z0-9+/]+={0,2}$/) that rejects malformed payloads before expensive processing occurs.
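Putting the two defenses in the right order looks roughly like this (a sketch under the assumption that strict validation precedes any decoding; the function name and limit are ours). Because the regex rejects whitespace outright, the size estimate can be computed from the string length alone:

```ts
// Sketch of the patched ordering: cheap checks first, allocation last.
// BASE64_RE is the strict pattern from the advisory; the rest is ours.
const BASE64_RE = /^[A-Za-z0-9+/]+={0,2}$/;

function safeDecode(b64: string, maxBytes: number): Buffer {
  // 1. Reject malformed or whitespace-laden input with zero allocation.
  if (!BASE64_RE.test(b64)) throw new Error("invalid base64");
  // 2. Estimate the decoded size from the length alone: O(1) memory.
  const padding = b64.endsWith("==") ? 2 : b64.endsWith("=") ? 1 : 0;
  const estimated = Math.floor((b64.length * 3) / 4) - padding;
  if (estimated > maxBytes) throw new Error("exceeds size limit");
  // 3. Only now pay for the real allocation.
  return Buffer.from(b64, "base64");
}

console.log(safeDecode("aGVsbG8=", 1024).toString("utf8")); // hello
```

A 10GB payload now dies at step 2, having cost the server nothing but a linear scan.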
If you cannot patch immediately, you must rely on infrastructure-level defenses. A Web Application Firewall (WAF) or a reverse proxy (Nginx) configured with client_max_body_size can stop these requests before they hit the Node.js application layer.
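A minimal Nginx stopgap might look like this (the 10m limit and upstream name are illustrative):

```nginx
# Illustrative reverse-proxy limit; requests whose bodies exceed 10 MB
# are rejected with 413 before they ever reach the Node.js process.
server {
    client_max_body_size 10m;

    location /api/ {
        proxy_pass http://openclaw_backend;
    }
}
```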
However, this defense relies on the Content-Length header, which can be spoofed or omitted in chunked-encoding attacks, so application-level validation remains the only true fix.
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| openclaw/openclaw (OpenClaw) | < 2026.2.14 | 2026.2.14 |
| clawdbot (OpenClaw) | < 2026.2.14 | 2026.2.14 |
| Attribute | Detail |
|---|---|
| CWE | CWE-770 (Allocation of Resources Without Limits) |
| Attack Vector | Network (API) |
| CVSS | 7.5 (High) |
| Impact | Denial of Service (DoS) |
| Language | TypeScript / Node.js |
| Vulnerable Function | Buffer.from() / String.replace() |
Allocation of Resources Without Limits or Throttling