Feb 26, 2026 · 6 min read
The `psd-tools` library prior to 1.12.2 is vulnerable to Denial of Service via malicious PSD files. Attackers can trigger Zip bombs or massive memory allocations by manipulating file headers. Fix involves enforcing strict dimension limits and safe decompression practices.
A deep dive into a series of memory-safety and logic flaws within the `psd-tools` Python library. This vulnerability exploits the complex nature of Adobe's PSD format to trigger massive memory exhaustion (Zip Bombs) and integer overflows in Cython modules, and to bypass critical integrity checks in production environments. It highlights the dangers of parsing untrusted binary formats without strict bounds checking.
Adobe's PSD format is less of a file specification and more of a crime scene. It is a historical artifact, a layered cake of legacy decisions, compression algorithms, and proprietary structures that has evolved over decades. For a developer, writing a parser for this format is like walking through a minefield blindfolded. Enter psd-tools, a popular Python package that bravely attempts to decode this chaos so developers can manipulate Photoshop files programmatically.
But here is the catch: when you parse complex binary formats, you are effectively letting an outsider dictate how your program allocates memory. CVE-2026-27809 isn't just one bug; it is a masterclass in what happens when you trust the data too much. We are looking at a cocktail of vulnerabilities ranging from the classic 'Zip Bomb' to subtle integer overflows in Cython extensions, all capable of bringing a processing pipeline to its knees.
Why does this matter? Because psd-tools often sits in the backend of automated asset pipelines. If you are building a service that accepts user-uploaded designs, generates previews, or extracts layers, this vulnerability turns your expensive cloud instance into a paperweight with a single 50KB file.
The root cause of CVE-2026-27809 is a fundamental lack of skepticism. The library read dimensions from the PSD header—width, height, and depth—and immediately treated them as gospel. If a file claimed to be 300,000 pixels wide and 300,000 pixels tall (the theoretical maximum for PSB files), psd-tools would nod politely and attempt to allocate a buffer for it. For a four-channel image at 32 bits per channel, that is roughly 1.4 terabytes of RAM. The operating system, naturally, would respond with a swift `kill -9` courtesy of the OOM killer.
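A quick back-of-envelope check (plain arithmetic, no library calls) shows the scale of that single allocation:

```python
# The allocation implied by a hostile 300,000 x 300,000 PSB header
# with four channels at 32 bits (4 bytes) per channel.
width = height = 300_000
channels = 4
bytes_per_channel = 4
total = width * height * channels * bytes_per_channel
print(f"{total / 1024**4:.2f} TiB")  # 1.31 TiB, requested from a header a few bytes long
```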
But the rabbit hole goes deeper. The library handles compression, specifically the RLE (Run-Length Encoding) used in PSD channels, via a Cython extension for speed. In C-land, types matter. The developers defined loop indices as `cdef int`. Even on most 64-bit systems, a standard C `int` is still 32 bits, capping out at roughly 2.14 billion. Large Document Format (PSB) files can easily produce channel buffers bigger than that. When the loop counter overflows, it wraps around to negative numbers, causing undefined behavior—often an infinite loop or a segfault.
Finally, there is the 'Pythonic' mistake. The code used `assert` statements to verify that the decompressed data size matched the expected size. It looks clean: `assert len(result) == expected`. However, if you run Python with `-O` (optimize), all `assert` statements are stripped out at bytecode compilation. This means that in a production environment, the safety check simply vanishes, allowing malformed data to flow downstream into libraries like Pillow and causing havoc later in the execution chain.
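You can watch the optimizer delete the check without restarting the interpreter: `compile(..., optimize=1)` applies the same stripping that `python -O` does.

```python
import dis

# The exact pattern psd-tools relied on, compiled two ways.
src = "assert len(result) == expected"
normal = compile(src, "<demo>", "exec", optimize=0)     # plain `python`
optimized = compile(src, "<demo>", "exec", optimize=1)  # `python -O`

print("RAISE" in dis.Bytecode(normal).dis())     # True: the check exists
print("RAISE" in dis.Bytecode(optimized).dis())  # False: it's gone
```

The optimized bytecode contains no raise at all; the condition is never even evaluated.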
Let's look at the diff. The fix, applied in commit 6c0a78f195b5942757886a1863793fd5946c1fb1, reveals exactly where the logic failed. First, the memory exhaustion vector. The original code used zlib blindly:
```python
# VULNERABLE
data = zlib.decompress(raw_data)
```

This is the textbook definition of a Zip Bomb vulnerability. A tiny payload of repeated bytes can expand more than 1000x in size. The fix introduces a wrapper that respects the laws of physics (and available RAM):
```python
# FIXED
def _safe_zlib_decompress(data, max_length):
    d = zlib.decompressobj()
    # Request at most max_length + 1 bytes: the extra byte makes an
    # oversized stream detectable without fully decompressing it.
    out = d.decompress(data, max_length + 1)
    if d.unconsumed_tail or len(out) > max_length:
        raise ValueError("Decompressed size exceeds expected maximum")
    return out
```

Next, the Cython integer overflow. This is a subtle one that often slips past code review unless you are specifically looking for C-type constraints:
```diff
# src/psd_tools/compression/_rle.pyx
- cdef int i = 0
- cdef int j = 0
+ cdef Py_ssize_t i = 0
+ cdef Py_ssize_t j = 0
```

By changing `int` (32-bit signed) to `Py_ssize_t` (a signed size type matching the platform's pointer width, usually 64-bit), the loop can now correctly address buffers larger than 2 GB without wrapping around. This is the difference between a robust parser and one that crashes when an artist saves a billboard-sized project.
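The wraparound is easy to demonstrate from Python with `ctypes`, which gives us a genuine 32-bit signed integer to poke at:

```python
import ctypes

# A real 32-bit signed counter, like the old `cdef int` loop index.
counter = ctypes.c_int32(2**31 - 1)  # INT_MAX: the last safe value
counter.value += 1                   # one step past ~2 GiB of buffer...
print(counter.value)  # -2147483648: the index has gone negative
```

A `Py_ssize_t` index is 64 bits on modern platforms, so the same increment stays positive far beyond any realistic buffer size.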
Exploiting this does not require a debugger or complex heap grooming. It just requires lying to the parser. To trigger the OOM (Out of Memory) crash, we don't even need valid image data. We just need a valid header.
Attack Scenario 1: The Infinite Canvas
We craft a PSB header stating the image is 300,000 × 300,000 pixels and set the compression mode to RLE (PackBits). Since the library allocates the output buffer before processing the compressed data, it computes `300000 * 300000 * 4` bytes (roughly 360 GB) and requests that allocation immediately. The Python process swells instantly, and the Linux OOM killer steps in to terminate it. Service denied.
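For illustration, a hostile header of this shape is trivial to forge with `struct`. The 26-byte layout below follows Adobe's published file-header spec (signature, version, six reserved bytes, channel count, height, width, depth, color mode, all big-endian); the values are attacker-chosen, and this sketch produces only the header, not a complete file:

```python
import struct

# Hypothetical PoC header claiming a 300,000 x 300,000 canvas.
header = struct.pack(
    ">4sH6sHIIHH",
    b"8BPS",      # signature
    2,            # version 2 = PSB (Large Document Format)
    b"\x00" * 6,  # reserved, must be zero
    4,            # channel count (e.g. RGBA)
    300_000,      # height in pixels (height precedes width in the spec)
    300_000,      # width in pixels
    8,            # bits per channel
    3,            # color mode 3 = RGB
)
print(len(header))  # 26: the fixed-size PSD/PSB file header
```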
Attack Scenario 2: The Silent Data Corruption
Target a production server running `python -O app.py`. We provide a compressed channel that claims to be 1000 bytes but decompresses to 50 bytes of garbage. Because the `assert len(result) == expected` check is stripped out by the `-O` flag, the library returns the garbage buffer. When the application tries to composite this layer or save it as a PNG, the size mismatch causes a cryptic `IndexError` or `ValueError` deep inside Pillow, making debugging a nightmare for the defenders.
The mitigation strategy adopted in version 1.12.2 is "Trust, but Verify." The maintainers implemented hard limits on the image dimensions. You can no longer just say "I have a 300k pixel wide image"; the code now checks these values against sanity bounds before allocating a single byte.
Furthermore, the assert statements were replaced with explicit if checks that raise ValueError. This ensures that safety checks run regardless of the optimization flags used by the Python interpreter.
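The hardened pattern can be sketched end to end. This is a self-contained illustration (the function name is hypothetical, not the library's actual API): output is capped via `zlib.decompressobj`, and the failure path is an explicit `raise` that `-O` cannot strip.

```python
import zlib

def capped_decompress(data: bytes, max_length: int) -> bytes:
    """Decompress at most max_length bytes or fail loudly."""
    d = zlib.decompressobj()
    # The +1 makes "output exceeds max_length" observable cheaply.
    out = d.decompress(data, max_length + 1)
    # An explicit check, not an assert, so it survives `python -O`.
    if d.unconsumed_tail or len(out) > max_length:
        raise ValueError("decompressed size exceeds expected maximum")
    return out

# A legitimate small payload passes; a 1 MiB zip bomb is rejected.
ok = zlib.compress(b"hello")
bomb = zlib.compress(b"\x00" * (1024 * 1024))
print(capped_decompress(ok, 5))  # b'hello'
try:
    capped_decompress(bomb, 4096)
except ValueError as e:
    print("rejected:", e)
```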
For the end-user or sysadmin, the fix is simple: upgrade. There is no config flag that fixes this. You need the patched logic that handles the physical reality of memory constraints.
```shell
pip install "psd-tools>=1.12.2"
```

`CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:H/SC:N/SI:N/SA:N/E:U`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| psd-tools | < 1.12.2 | 1.12.2 |
| Attribute | Detail |
|---|---|
| Attack Vector | Network (via File Upload) |
| CVSS v4.0 | 6.8 (Medium) |
| Weakness | CWE-400 (Uncontrolled Resource Consumption) |
| Weakness | CWE-190 (Integer Overflow or Wraparound) |
| Platform | Python / Cython |
| Exploit Status | PoC Available |