Feb 25, 2026 · 5 min read
Wasmtime panicked when a host dropped an async call future and then called the component again. The runtime allocated resources before checking whether the component was free, and the aborted task's destructor then failed a lifecycle assertion during cleanup. The fix checks state before allocating.
A state management vulnerability in Wasmtime's async component model allows attackers to trigger a thread-level panic (Denial of Service) by manipulating the lifecycle of asynchronous calls. When a host drops a pending future for a guest call, the component instance remains in an inconsistent 'busy' state. A subsequent call to that instance triggers a re-entrancy trap, which inadvertently hits a safety assertion in the cleanup logic, crashing the entire runtime.
Async Rust is a beautiful, terrifying beast. It promises non-blocking performance but demands absolute obedience to the laws of polling and pinning. In the world of WebAssembly, Wasmtime implemented the component-model-async feature to allow hosts (like your serverless platform) to invoke guest functions without blocking the thread while the guest crunches numbers or waits for I/O.
Here is the setup: The Host calls the Guest. The Guest yields (maybe it's waiting for a network packet). The Host receives a Future. Normally, the Host polls this future until it returns Ready. But what if the Host gets bored? What if the HTTP request that triggered this execution gets cancelled? The Host drops the future.
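To make "the Host drops the future" concrete, here is a std-only sketch (no Wasmtime involved; `GuestCall` and `noop_waker` are hypothetical stand-ins, not real API): the future is polled once, stays `Pending`, and its destructor runs the moment the host loses interest.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical stand-in for the future a host gets back from an async guest call.
struct GuestCall;

impl Future for GuestCall {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Pending // the guest is "waiting for a network packet"
    }
}

impl Drop for GuestCall {
    fn drop(&mut self) {
        // This is where a runtime must unwind the guest and reset its state.
        println!("future dropped before completion");
    }
}

// Minimal waker that does nothing, just enough to call poll() by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut call = GuestCall;
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // The host polls once and gets Pending...
    assert!(Pin::new(&mut call).poll(&mut cx).is_pending());
    // ...then gets bored and drops the future.
    drop(call);
    println!("host moved on");
}
```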
In a perfect world, Wasmtime would clean up the guest's stack, release the locks, and reset the state. But in versions 39 through 41, Wasmtime acted like a disgruntled waiter. It took the plate away, but left the table marked as 'Occupied'. When the next customer (request) tried to sit down, the runtime didn't just say 'Seat Taken'—it flipped the table and burned down the restaurant.
To understand this bug, you have to look at how Wasmtime manages 'Tasks'—the fibers (lightweight threads) used to run guest code. When you call a function via call_async, Wasmtime spins up a Task and hands you a future.
Here is the sequence of doom:
1. The Host calls guest_func. Wasmtime marks the instance as Entered (busy).
2. The Guest yields; the future returns Pending.
3. The Host drops the future. The instance stays Entered because the execution didn't technically finish; it was abandoned.
4. The Host calls guest_func again on the same component instance. Logic dictates that Wasmtime should check if the instance is busy first. It didn't.
Instead, the vulnerable code allocated a new Task (a potentially heavy operation involving stack allocation) before checking the instance state. Once the task was allocated, it checked the state, saw it was still Entered, and raised a WebAssembly Trap (a soft error).
Here is the kicker: When the Trap happens, the newly allocated Task gets dropped. The Task destructor has a sanity check: assert!(state.is_finished() || state.is_dead()). Because this new task was aborted mere microseconds after birth—before it even started running—it was neither finished nor dead. It was just... confused. The assertion fails. The thread panics. Game over.
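You can reproduce the shape of that failure in plain std Rust (a minimal sketch; this `Task` and its `State` enum are illustrative stand-ins, not Wasmtime's actual types): an object whose destructor asserts a lifecycle invariant panics when it is constructed and immediately dropped.

```rust
use std::panic;

#[derive(PartialEq)]
enum State {
    Allocated, // created but never polled
    Finished,  // ran to completion
}

struct Task {
    state: State,
}

impl Drop for Task {
    fn drop(&mut self) {
        // Mirrors Wasmtime's sanity check: a task must have finished
        // (or died) before it is dropped.
        assert!(
            self.state == State::Finished,
            "task dropped before it ever ran"
        );
    }
}

fn main() {
    // Silence the default panic message so the output stays readable.
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(|| {
        // Allocate, then immediately hit the validation error path:
        // the early return drops the newborn task.
        let _task = Task { state: State::Allocated };
    });
    // The drop assertion fired: the thread would have panicked.
    assert!(result.is_err());
    println!("drop assertion panicked as expected");
}
```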
The fix is a classic example of 'check your preconditions before you allocate memory'. The developers simply moved the validation logic up the chain.
The Vulnerable Logic (Pseudocode):

```rust
fn call_async(&self, ...) -> Future {
    // 1. Expensive Allocation FIRST
    let task = Box::new(Task::new(self.instance, ...));
    // 2. Validation SECOND
    if self.instance.is_busy() {
        // This returns an error, which drops 'task'.
        // Dropping 'task' panics because it never ran.
        return Err(Trap("Cannot re-enter component"));
    }
    return Future::new(task);
}
```

The Fixed Logic:
```rust
fn call_async(&self, ...) -> Future {
    // 1. Validation FIRST
    if self.instance.is_busy() {
        return Err(Trap("Cannot re-enter component"));
    }
    // 2. Expensive Allocation SECOND
    let task = Box::new(Task::new(self.instance, ...));
    return Future::new(task);
}
```

It is a subtle reordering, but it prevents the creation of the Task object that triggers the panic upon destruction. If you don't create the zombie, it can't eat your brains.
Exploiting this requires control over the host embedding's behavior, specifically causing it to drop a future. While this sounds like a 'host-side' bug, many serverless environments enforce timeouts. If a guest runs too long, the host cancels (drops) the future.
If you are an attacker running inside a Wasmtime-powered cloud function, you can't directly drop your own future. However, you can create a condition where you yield indefinitely or sleep, tempting the host to timeout your execution. If the host architecture reuses Wasmtime Store objects (for caching/performance) and doesn't handle the dropped future correctly, the next request to that warm instance triggers the crash.
PoC Strategy:
1. Deploy a component that exports a function foo.
2. Inside foo, perform an async host call that never returns or takes a long time (forcing a yield).
3. Wait for the host to time out and drop() the future for foo.
4. Call foo again on the same instance.
5. Observe: thread 'main' panicked at 'assertion failed: state.is_finished()'.

This turns a single bad request into a Denial of Service for the thread or process handling the requests.
If you are running Wasmtime, you need to patch. The vulnerability exists in the default configuration of component-model-async starting from version 39.0.0.
Remediation:
1. Upgrade to Wasmtime 40.0.4 or 41.0.4 (or later).
2. If you cannot upgrade immediately, disable the component-model-async feature in your Cargo.toml. This nukes the vulnerable code path entirely.

Developer Lesson:
Never assume your destructors run in a happy state. Rust's drop is guaranteed to run (mostly), but the context in which it runs is chaotic. If your cleanup logic relies on assert! checks about the object's lifecycle, make sure you can't construct the object and immediately destroy it without transitioning through that lifecycle.
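One defensive pattern for that lesson (a std-only sketch; this `Task` is hypothetical, not Wasmtime's type): record whether the object ever started running, and only enforce the lifecycle assertion for objects that did.

```rust
struct Task {
    started: bool,  // did this task ever begin executing?
    finished: bool, // did it run to completion?
}

impl Drop for Task {
    fn drop(&mut self) {
        // Only enforce the invariant for tasks that actually ran;
        // a task aborted before its first poll is legal to drop.
        if self.started {
            assert!(self.finished, "running task dropped mid-lifecycle");
        }
    }
}

fn main() {
    // Allocated, then aborted before it ever ran: no panic.
    let _aborted = Task { started: false, finished: false };
    println!("early-aborted task dropped safely");
}
```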
CVSS:4.0/AV:N/AC:H/AT:P/PR:L/UI:P/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Wasmtime (Bytecode Alliance) | >= 39.0.0, < 40.0.4 | 40.0.4 |
| Wasmtime (Bytecode Alliance) | >= 41.0.0, < 41.0.4 | 41.0.4 |
| Attribute | Detail |
|---|---|
| CWE | CWE-755: Improper Handling of Exceptional Conditions |
| CVSS v4.0 | 6.9 (Medium) |
| Attack Vector | Network (via Host Interaction) |
| Impact | Denial of Service (Panic) |
| Affected Component | component-model-async |
| Status | Patched |