Mar 5, 2026
The AWS SageMaker Python SDK contained a Remote Code Execution (RCE) vulnerability in its JumpStart search feature due to the unsafe use of `eval()`. Attackers controlling search inputs could break out of the limited sandbox and execute arbitrary commands. Users should upgrade to `sagemaker>=3.4.0` immediately.
A critical vulnerability exists in the AWS SageMaker Python SDK versions prior to 3.4.0, specifically within the JumpStart `search_hub()` functionality. The vulnerability arises from the use of the Python `eval()` function to process search query parameters without adequate sanitization or sandboxing. This flaw allows an attacker who can control the input to the search function to execute arbitrary Python code in the context of the application running the SDK. The issue has been addressed in version 3.4.0 by replacing the dynamic evaluation logic with a custom recursive descent parser and Abstract Syntax Tree (AST) implementation.
The AWS SageMaker Python SDK facilitates interaction with SageMaker services, including the JumpStart feature, which provides pre-trained models and solution templates. The vulnerability is located in the `sagemaker.core.jumpstart.search` module, which handles the logic for filtering and searching through these models.
The specific flaw lies in how the SDK processes search expressions. When a user (or an upstream application) provides a search query, the SDK parses it to filter the available models. Prior to version 3.4.0, this parsing logic dynamically constructed a Python string representing the boolean logic of the search and executed it using the built-in `eval()` function. This creates a classic Eval Injection scenario (CWE-95), in which untrusted data is treated as code.
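The danger of this pattern can be shown in a few lines. The sketch below is illustrative only: the helper name `build_filter_expr` and the query format are hypothetical stand-ins for the SDK's internal conversion logic, not its actual API. A crafted value rewrites the generated expression so that the filter matches everything, proving the input was executed as code rather than treated as data.

```python
# Illustrative sketch of the vulnerable pattern (CWE-95). The helper name
# build_filter_expr and the query format are hypothetical, not the SDK's API.

def build_filter_expr(user_value: str) -> str:
    # Untrusted input is spliced directly into Python source text.
    return f"any(k == '{user_value}' for k in keywords)"

keywords = ["image_classification", "pytorch"]
scope = {"keywords": keywords, "any": any}

# Benign input behaves as intended:
benign = eval(build_filter_expr("image_classification"), {"__builtins__": {}}, scope)
print(benign)  # True

# Crafted input rewrites the expression: the filter now matches everything.
payload = "zzz' for k in keywords) or True #"
hijacked = eval(build_filter_expr(payload), {"__builtins__": {}}, scope)
print(hijacked)  # True, despite 'zzz' matching no keyword
```

The trailing `#` in the payload comments out the remainder of the generated string, so the attacker fully controls what `eval()` sees.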
The root cause is the insecure implementation of the `_Filter` class in `sagemaker-core/src/sagemaker/core/jumpstart/search.py`. The `match` method in this class was responsible for determining whether a model's keywords matched a user's search query. To achieve this, it performed the following steps:

1. It converted the search expression into a Python source string. For example, a query such as `task:image_classification` might be converted into a string resembling `any(k == 'image_classification' for k in keywords)`.
2. It executed that string with the built-in `eval()`.

While the developers attempted to sandbox the execution by passing an empty dictionary for `__builtins__` (`eval(expr, {"__builtins__": {}}, ...)`), this is a well-documented anti-pattern in Python security. Restricting `__builtins__` is insufficient to prevent code execution because the runtime's introspection capabilities allow an attacker to recover the object hierarchy: an attacker can access the `__class__` attribute of any object, traverse up to `object`, and then enumerate `__subclasses__()` to find classes that provide access to global scope or to dangerous modules such as `os` or `subprocess`.
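The inadequacy of the empty-`__builtins__` sandbox can be demonstrated directly. The expressions below are harmless probes (they only read class metadata), but they run under the same restricted globals the SDK used and still reach `object` and its full subclass list:

```python
# Probe expressions executed under the same "sandbox" the SDK used.
sandbox_globals = {"__builtins__": {}}

# The attacker can still climb from any literal to the object class...
base = eval("().__class__.__base__", sandbox_globals, {})
print(base)  # <class 'object'>

# ...and enumerate every class loaded in the interpreter, including
# classes that can be abused to reach modules like os or subprocess.
subclasses = eval("().__class__.__base__.__subclasses__()", sandbox_globals, {})
print(len(subclasses) > 0)  # True
```

Because `__class__`, `__base__`, and `__subclasses__()` are attribute accesses rather than builtins, stripping `__builtins__` does nothing to block this traversal.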
The remediation strategy involved a complete rewrite of the filtering logic, moving from dynamic evaluation to a static parsing approach. The following comparison highlights the critical changes.
Vulnerable Implementation (< 3.4.0)
The vulnerable code relied on `eval` to process the logic; the sandbox attempt is visible in the second argument to `eval`.

```python
# sagemaker/core/jumpstart/search.py (Vulnerable)
def match(self, keywords: List[str]) -> bool:
    # expr is a string constructed from user input
    expr: str = self._convert_expression(self.expression)
    try:
        # DANGER: Executing the string as code
        return eval(expr, {"__builtins__": {}}, {"keywords": keywords, "any": any})
    except Exception:
        return False
```

Patched Implementation (v3.4.0)
The fix replaces `eval` with a robust tokenizer and parser. The `_Filter` class now generates an Abstract Syntax Tree (AST) composed of safe node types (`_AndNode`, `_OrNode`, `_PatternNode`).
```python
# sagemaker/core/jumpstart/search.py (Patched)
def match(self, keywords: List[str]) -> bool:
    try:
        # SAFE: Parsing the expression into an AST structure
        ast_tree = self._parse_expression(self.expression)
        # Evaluating the AST requires no dynamic code execution
        return ast_tree.evaluate(keywords)
    except Exception:
        return False
```

The new implementation explicitly defines valid operations. For example, the `_PatternNode` only supports specific string-matching logic (`startswith`, `endswith`, `in`, `==`), eliminating the possibility of executing arbitrary function calls.
To exploit this vulnerability, an attacker must control the query string passed to the `search_hub()` function. While the `sagemaker` SDK is often used in authenticated, client-side scripts (lowering the risk), it is also frequently integrated into backend services or ML platforms that expose search functionality to end users.
Attack Scenario
"a") and [c for c in ().__class__.__base__.__subclasses__() if c.__name__ == 'BuiltinImporter'][0]().load_module('os').system('id') #.().__class__.__base__ accesses the base object class.__subclasses__() lists all available classes in the current runtime.BuiltinImporter or catch_warnings).os or subprocess is obtained, the attacker executes arbitrary system commands. Since the eval happens within the application's process, the commands run with the same privileges as the application.The impact of this vulnerability is rated High (CVSS 8.5). Successful exploitation results in full Remote Code Execution (RCE) within the context of the application using the SDK.
The vector is assessed as Local (AV:L) because the vulnerability is in a library. The attacker does not exploit the library directly over the network but rather exploits the application that uses the library to process remote input. However, if the application exposes this search feature via a web API, the effective attack vector becomes network-based.
CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:A/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

| Product | Affected Versions | Fixed Version |
|---|---|---|
| `sagemaker` (AWS) | < 3.4.0 | 3.4.0 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-95 |
| Vulnerability Type | Eval Injection |
| CVSS v4.0 | 8.5 (High) |
| Attack Vector | Local (Library) |
| Patch Status | Fixed in v3.4.0 |
| Exploit Maturity | Proof of Concept (Theoretical) |
CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')