CVEReports

Automated vulnerability intelligence platform. Comprehensive reports for high-severity CVEs generated by AI.


© 2026 CVEReports. All rights reserved.

Made with love by Amit Schendel & Alon Barad



GHSA-5R2P-PJR8-7FH7 · CVSS 8.5

Remote Code Execution in AWS SageMaker Python SDK via Unsafe eval()

Alon Barad, Software Engineer

Mar 5, 2026 · 5 min read

No Known Exploit

Executive Summary (TL;DR)

The AWS SageMaker Python SDK contained a Remote Code Execution (RCE) vulnerability in its JumpStart search feature due to the unsafe use of `eval()`. Attackers controlling search inputs could break out of the limited sandbox and execute arbitrary commands. Users should upgrade to `sagemaker>=3.4.0` immediately.

A critical vulnerability exists in the AWS SageMaker Python SDK versions prior to 3.4.0, specifically within the JumpStart `search_hub()` functionality. The vulnerability arises from the use of the Python `eval()` function to process search query parameters without adequate sanitization or sandboxing. This flaw allows an attacker who can control the input to the search function to execute arbitrary Python code in the context of the application running the SDK. The issue has been addressed in version 3.4.0 by replacing the dynamic evaluation logic with a custom recursive descent parser and Abstract Syntax Tree (AST) implementation.

Vulnerability Overview

The AWS SageMaker Python SDK facilitates interaction with SageMaker services, including the JumpStart feature, which provides pre-trained models and solution templates. The vulnerability is located in the sagemaker.core.jumpstart.search module, which implements the filtering and search logic for these models.

The specific flaw lies in how the SDK processes search expressions. When a user (or an upstream application) provides a search query, the SDK parses this query to filter the available models. Prior to version 3.4.0, this parsing logic relied on dynamically constructing a Python string representing the boolean logic of the search and executing it using the built-in eval() function. This creates a classic 'Eval Injection' scenario (CWE-95), where untrusted data is treated as code.

Root Cause Analysis

The root cause is the insecure implementation of the _Filter class in sagemaker-core/src/sagemaker/core/jumpstart/search.py. The match method in this class was responsible for determining if a model's keywords matched a user's search query. To achieve this, it performed the following steps:

  1. Expression Conversion: It converted the search query into a Python-compatible boolean expression string. For example, a query for task:image_classification might be converted into a string resembling any(k == 'image_classification' for k in keywords).
  2. Dynamic Execution: It passed this generated string to eval().
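The two steps above can be sketched as follows. Note that `convert_expression` and the query grammar here are illustrative stand-ins, not the SDK's actual code; only the shape of the flaw (untrusted input interpolated into a string that is then passed to `eval()`) matches the advisory.

```python
# Hypothetical sketch of the vulnerable pattern (the real SDK's conversion
# logic differs; the eval-on-a-built-string shape is the point).
def convert_expression(query_value: str) -> str:
    # Interpolates untrusted input directly into a Python expression string
    return f"any(k == '{query_value}' for k in keywords)"

scope = {"keywords": ["image_classification"], "any": any}

# A benign query behaves as intended
benign = convert_expression("image_classification")
print(eval(benign, {"__builtins__": {}}, scope))  # True

# A crafted value rewrites the boolean logic itself: the expression now
# matches regardless of the keyword list (classic eval injection, CWE-95)
malicious = convert_expression("x' or 'a' == 'a")
print(eval(malicious, {"__builtins__": {}},
           {"keywords": ["unrelated"], "any": any}))  # True
```

The injected quote turns the attacker's input from a compared literal into live boolean logic; the same mechanism scales up to full expressions once introspection is used.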

While the developers attempted to sandbox the execution by passing an empty dictionary for __builtins__ (eval(expr, {"__builtins__": {}}, ...)), this is a well-documented anti-pattern in Python security. In Python, restricting __builtins__ is insufficient to prevent code execution because the runtime's introspection capabilities allow an attacker to recover the object hierarchy. An attacker can access the __class__ attribute of any object, traverse up to object, and then list __subclasses__() to find classes that provide access to global scope or dangerous modules like os or subprocess.
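This escape path can be confirmed in a few lines. The snippet below runs a read-only payload that only enumerates classes, but the same traversal leads an attacker to module loaders and from there to `os`:

```python
# Even with __builtins__ emptied, eval() can still walk Python's object
# graph: a bare tuple leads to `object`, and from there to every class
# loaded in the interpreter.
expr = "().__class__.__base__.__subclasses__()"
subclasses = eval(expr, {"__builtins__": {}}, {})

print(type(subclasses).__name__)  # list
# In CPython the list includes classes usable as pivots, e.g. importers
print(any(c.__name__ == "BuiltinImporter" for c in subclasses))
```

Because the payload never references a builtin by name, the empty `__builtins__` dictionary is never consulted and the "sandbox" is bypassed entirely.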

Code Analysis

The remediation strategy involved a complete rewrite of the filtering logic, moving from dynamic evaluation to a static parsing approach. The following comparison highlights the critical changes.

Vulnerable Implementation (< 3.4.0)

The vulnerable code relied on eval to process the filtering logic; the sandbox attempt is visible in the second argument to eval.

# sagemaker/core/jumpstart/search.py (Vulnerable)
from typing import List
def match(self, keywords: List[str]) -> bool:
    # expr is a string constructed from user input
    expr: str = self._convert_expression(self.expression)
    try:
        # DANGER: Executing the string as code
        return eval(expr, {"__builtins__": {}}, {"keywords": keywords, "any": any})
    except Exception:
        return False

Patched Implementation (v3.4.0)

The fix replaces eval with a robust tokenizer and parser. The _Filter class now generates an Abstract Syntax Tree (AST) composed of safe node types (_AndNode, _OrNode, _PatternNode).

# sagemaker/core/jumpstart/search.py (Patched)
from typing import List
def match(self, keywords: List[str]) -> bool:
    try:
        # SAFE: Parsing logic into an AST structure
        ast_tree = self._parse_expression(self.expression)
        # Evaluating the AST requires no dynamic code execution
        return ast_tree.evaluate(keywords)
    except Exception:
        return False

The new implementation explicitly defines valid operations. For example, the _PatternNode only supports specific string matching logic (startswith, endswith, in, ==), eliminating the possibility of executing arbitrary function calls.

Exploitation Methodology

To exploit this vulnerability, an attacker must control the query string passed to the search_hub() function. While the sagemaker SDK is often used in authenticated, client-side scripts (lowering the risk), it is also frequently integrated into backend services or ML platforms that expose search functionality to end-users.

Attack Scenario

  1. Injection: The attacker inputs a crafted string designed to break out of the boolean logic structure. A payload might look like: "a") and [c for c in ().__class__.__base__.__subclasses__() if c.__name__ == 'BuiltinImporter'][0]().load_module('os').system('id') #.
  2. Sandbox Escape: The payload uses Python introspection:
    • ().__class__.__base__ accesses the base object class.
    • __subclasses__() lists all available classes in the current runtime.
    • The attacker iterates through this list to find a class that allows module loading (e.g., BuiltinImporter or catch_warnings).
  3. Execution: Once a handle to os or subprocess is obtained, the attacker executes arbitrary system commands. Since the eval happens within the application's process, the commands run with the same privileges as the application.

Impact Assessment

The impact of this vulnerability is rated High (CVSS 8.5). Successful exploitation results in full Remote Code Execution (RCE) within the context of the application using the SDK.

  • Confidentiality: An attacker can read sensitive data from the environment, including AWS credentials (IAM roles, Access Keys) often present in the environment variables of SageMaker jobs or local development machines.
  • Integrity: Attackers can modify data, inject malicious models, or tamper with the training pipeline.
  • Availability: Attackers can crash the application or use the compute resources for cryptomining.

The vector is assessed as Local (AV:L) because the vulnerability is in a library. The attacker does not exploit the library directly over the network but rather exploits the application that uses the library to process remote input. However, if the application exposes this search feature via a web API, the effective attack vector becomes network-based.

Official Patches

  • AWS: GitHub commit "Replace eval with custom parser"
  • PyPI: sagemaker 3.4.0 release


Technical Appendix

CVSS Score
8.5 / 10
CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:A/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

Affected Systems

  • AWS SageMaker Python SDK
  • Applications integrating SageMaker JumpStart search

Affected Versions Detail

Product: sagemaker (Vendor: AWS)
Affected Versions: < 3.4.0
Fixed Version: 3.4.0

  • CWE ID: CWE-95
  • Vulnerability Type: Eval Injection
  • CVSS v4.0: 8.5 (High)
  • Attack Vector: Local (Library)
  • Patch Status: Fixed in v3.4.0
  • Exploit Maturity: Proof of Concept (Theoretical)

MITRE ATT&CK Mapping

  • T1059: Command and Scripting Interpreter (Execution)
  • T1203: Exploitation for Client Execution (Execution)

CWE-95 (Eval Injection): Improper Neutralization of Directives in Dynamically Evaluated Code

References & Sources

  • [1] GitHub Security Advisory GHSA-5r2p-pjr8-7fh7
  • [2] Pull Request #5497

Attack Flow Diagram
