Feb 6, 2026
On a new upstream connection, NGINX can process incoming data (the Read Event) before it sends the TLS Client Hello (the Write Event). Attackers can race this logic by sending plain text HTTP immediately upon TCP connect: NGINX accepts the unencrypted data and skips TLS negotiation entirely.
A high-severity race condition in NGINX's event loop allows a Man-in-the-Middle attacker to bypass upstream TLS protections entirely. By injecting a plain text HTTP response immediately after TCP connection establishment—but before the TLS handshake begins—an attacker can trick NGINX into processing and serving the malicious payload as if it came from the trusted backend.
We trust NGINX to be the bouncer. You configure proxy_pass https://backend.internal and you sleep soundly, assuming that the connection between your proxy and your backend is wrapped in a cozy blanket of TLS encryption. If a hacker is sitting on the wire, they should see nothing but garbage, right?
Wrong. Security is often an illusion held together by the sequence of events. If you change the sequence, the security evaporates. CVE-2026-1642 is a close sibling of the classic "Time-of-Check to Time-of-Use" (TOCTOU) bug, born from the complexities of asynchronous, event-driven architecture.
Here is the punchline: You can tell NGINX to use HTTPS, but if the server (or an imposter) screams "HTTP 200 OK" fast enough, NGINX forgets it was supposed to negotiate encryption and just takes the data. It's like walking into a bank vault, but the guard is so distracted by someone handing him a donut that he forgets to ask for your ID.
To understand this bug, you have to think like an event loop. NGINX is non-blocking. When it initiates a connection to an upstream server, it registers interest in two things: writing data (to start the handshake) and reading data (the response).
Ideally, the flow is:

1. The TCP connection to the upstream is established.
2. NGINX's write event fires and it sends the TLS Client Hello.
3. The upstream answers with the Server Hello, the handshake completes, and only then does HTTP data move.

But the internet is messy. If a TCP connection is established and data arrives immediately, the event loop might fire both the READ and WRITE events in the same cycle. NGINX, in its infinite efficiency, processes the READ event first.
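To see that ordering hazard in isolation, here is a minimal, self-contained sketch. It is my own toy, not nginx source: it substitutes a local socketpair for the upstream TCP connection and uses Linux epoll, and the peer "talks first" so the read handler runs before the write handler ever gets to send a hello.

```c
/*
 * race_demo.c -- toy illustration (NOT nginx code) of the ordering hazard:
 * when a socket is readable and writable in the same event loop cycle, a
 * loop that services the read event first consumes the peer's bytes before
 * it ever sends its own hello.
 *
 * Build (Linux): cc -o race_demo race_demo.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void)
{
    int pair[2];

    /* pair[0] plays the proxy ("nginx"); pair[1] plays the peer on the wire */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0) {
        perror("socketpair");
        return 1;
    }

    /* The peer talks first, before the proxy has said anything at all --
     * the moral equivalent of the MITM racing the TLS Client Hello. */
    const char *injected = "HTTP/1.1 200 OK\r\n\r\n";
    if (write(pair[1], injected, strlen(injected)) < 0) {
        perror("write");
        return 1;
    }

    int ep = epoll_create1(0);
    struct epoll_event want = { .events = EPOLLIN | EPOLLOUT,
                                .data.fd = pair[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, pair[0], &want);

    struct epoll_event got;
    if (epoll_wait(ep, &got, 1, 1000) < 1) {
        return 1;
    }

    /* READ and WRITE readiness arrive in the same cycle. Handling READ
     * first means the injected bytes are consumed before the "hello"
     * (our stand-in for the Client Hello) would ever be written. */
    if (got.events & EPOLLIN) {
        char buf[256];
        ssize_t n = read(pair[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("READ handled first, peer already sent: %s\n", buf);
    }
    if (got.events & EPOLLOUT) {
        printf("WRITE handled second: too late, the data was already accepted\n");
    }

    close(pair[0]);
    close(pair[1]);
    close(ep);
    return 0;
}
```

The real nginx event loop is vastly more sophisticated, but the hazard is the same: both events become ready in one cycle, and the read handler is serviced first.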
The function ngx_http_upstream_process_header gets called. It looks at the buffer. It sees valid HTTP headers. It parses them. It prepares the response for the client. Crucially, it never checks if the SSL handshake it intended to perform actually happened. It just assumes that if it's reading data, everything must be fine.
The fix, authored by Roman Arutyunyan at F5, is embarrassingly simple, which highlights just how subtle this logic flaw was. The vulnerability lived in src/http/ngx_http_upstream.c.
The patch introduces a sanity check. Before processing any header data, NGINX now asks: "Am I configured to use SSL (u->ssl)? And if so, is the SSL context (c->ssl) currently NULL?"
Here is the fix in all its glory:
```diff
// src/http/ngx_http_upstream.c
@@ -2508,6 +2508,15 @@ ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
         return;
     }
 
+#if (NGX_HTTP_SSL)
+    // THE FIX: Check if we expect SSL but haven't started it yet
+    if (u->ssl && c->ssl == NULL) {
+        ngx_log_error(NGX_LOG_ERR, c->log, 0,
+                      "upstream prematurely sent response");
+        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+        return;
+    }
+#endif
+
     u->state->bytes_received += n;
```

Without this block, the code would happily fall through to u->buffer.last += n and begin parsing the attacker's plain text injection as a legitimate response.
Exploiting this requires a Man-in-the-Middle (MITM) position between NGINX and the upstream server. This sounds hard, but in modern microservices environments (Kubernetes, internal clouds), ARP spoofing or compromised routers are fair game.
The attack chain looks like this:
1. NGINX establishes a TCP connection to what it believes is the trusted upstream.
2. Immediately after the TCP handshake completes, and before NGINX can send the TLS Client Hello, the attacker floods the socket with a plain text HTTP response:

```http
HTTP/1.1 200 OK
Content-Type: application/json

{"status": "admin_access_granted"}
```

The Client Hello might eventually be sent, or the connection might error out afterwards, but it's too late. The response has been accepted. If NGINX is configured to cache responses, the attacker has now poisoned the cache for everyone.
Why is this rated High (CVSS 8.2) despite requiring MITM? Because it breaks the fundamental promise of proxy_pass https://. Architects design systems assuming that if they enforce TLS, they are safe from injection attacks on the wire.
This vulnerability allows for:

- Full fabrication of upstream responses, served to clients as if they came from the trusted backend.
- Cache poisoning, if NGINX is configured to cache upstream responses.
It is worth noting that this does not break the confidentiality of the request (the attacker can't necessarily decrypt what NGINX sends to the upstream), but they can completely fabricate the response.
If you are running NGINX Open Source versions 1.3.0 through 1.29.4, you are vulnerable. If you are on NGINX Plus R36 or earlier, you are likely vulnerable too.
Remediation:
- Upgrade to NGINX Open Source 1.29.5 or later, or NGINX Plus R36 P2.
- Watch your error logs for the message "upstream prematurely sent response". If you see this, someone (or something) is trying to race your TLS handshake.

CVSS v4.0 vector: CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N

| Product | Affected Versions | Fixed Version |
|---|---|---|
| NGINX Open Source (F5) | 1.3.0 - 1.29.4 | 1.29.5 |
| NGINX Plus (F5) | R36 (before P2) | R36 P2 |
| Attribute | Detail |
|---|---|
| CWE | CWE-349 |
| CVSS v4.0 | 8.2 (High) |
| CVSS v3.1 | 5.9 (Medium) |
| Attack Vector | Network (MITM) |
| Impact | Integrity (High) |
| EPSS Score | 0.00012 (~1.44th percentile) |
| Patch Commit | 376c3739b633e4ddac8ecf59d72e43b0b9151c51 |
That is CWE-349 in a nutshell: the product accepts untrusted data that is extraneous to the expected data, which can lead to modification of how the trusted data is interpreted.