CVE-2026-1642

NGINX Upstream TLS Injection: Racing the Handshake

Amit Schendel
Senior Security Researcher

Feb 6, 2026 · 5 min read

Executive Summary (TL;DR)

NGINX checks for incoming data (Read Event) before sending the TLS Client Hello (Write Event) on new upstream connections. Attackers can race this logic by sending plain text HTTP immediately upon TCP connect. NGINX accepts the unencrypted data, skipping TLS negotiation.

A high-severity race condition in NGINX's event loop allows a Man-in-the-Middle attacker to bypass upstream TLS protections entirely. By injecting a plain text HTTP response immediately after TCP connection establishment—but before the TLS handshake begins—an attacker can trick NGINX into processing and serving the malicious payload as if it came from the trusted backend.

The Hook: The Illusion of Security

We trust NGINX to be the bouncer. You configure proxy_pass https://backend.internal and you sleep soundly, assuming that the connection between your proxy and your backend is wrapped in a cozy blanket of TLS encryption. If a hacker is sitting on the wire, they should see nothing but garbage, right?
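For context, this is a minimal sketch of the kind of upstream TLS configuration the article assumes (the directives are standard NGINX; the hostnames and paths are placeholders):

```nginx
# Proxy to an internal backend over TLS, verifying its certificate.
location /api/ {
    proxy_pass https://backend.internal;

    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate /etc/nginx/ca.pem;
    proxy_ssl_server_name         on;   # send SNI for backend.internal
}
```

With `proxy_ssl_verify on`, you would reasonably expect that no unauthenticated byte from the wire can ever reach a client. That expectation is exactly what this bug breaks.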

Wrong. Security is often an illusion held together by the sequence of events. If you change the sequence, the security evaporates. CVE-2026-1642 is a classic "Time-of-Check to Time-of-Use" (TOCTOU) sibling, born from the complexities of asynchronous, event-driven architecture.

Here is the punchline: You can tell NGINX to use HTTPS, but if the server (or an imposter) screams "HTTP 200 OK" fast enough, NGINX forgets it was supposed to negotiate encryption and just takes the data. It's like walking into a bank vault, but the guard is so distracted by someone handing him a donut that he forgets to ask for your ID.

The Flaw: Read vs. Write

To understand this bug, you have to think like an event loop. NGINX is non-blocking. When it initiates a connection to an upstream server, it registers interest in two things: writing data (to start the handshake) and reading data (the response).

Ideally, the flow is:

  1. TCP Connect.
  2. Write Event: NGINX sends Client Hello.
  3. Read Event: NGINX receives Server Hello.

But the internet is messy. If a TCP connection is established and data arrives immediately, the event loop might fire both the READ and WRITE events in the same cycle. NGINX, in its infinite efficiency, processes the READ event first.
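The mechanics are easy to reproduce outside of NGINX. This is a small Python sketch (not NGINX internals): on a freshly connected TCP socket whose peer pushed data immediately, a single select cycle can report the socket as both readable and writable, and a loop that services reads first consumes the injected plaintext before it has written a single handshake byte.

```python
# Demo: both READ and WRITE readiness in one event-loop cycle.
import selectors
import socket
import threading
import time

def hostile_upstream(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n{}")  # fires before any Client Hello
    conn.close()

srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=hostile_upstream, args=(srv,), daemon=True).start()

c = socket.create_connection(srv.getsockname())
time.sleep(0.2)  # let the injected bytes land in the kernel buffer

sel = selectors.DefaultSelector()
sel.register(c, selectors.EVENT_READ | selectors.EVENT_WRITE)
mask = sel.select(timeout=1)[0][1]

readable = bool(mask & selectors.EVENT_READ)   # attacker's bytes are waiting
writable = bool(mask & selectors.EVENT_WRITE)  # we never sent our handshake
data = c.recv(1024) if readable else b""       # "read first" == take the bait
print(readable, writable, data[:15])
```

The socket is writable (nothing sent yet) and readable (injected bytes queued) at the same time; which event the loop handles first is a pure implementation choice, and NGINX chose READ.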

The function ngx_http_upstream_process_header gets called. It looks at the buffer. It sees valid HTTP headers. It parses them. It prepares the response for the client. Crucially, it never checks if the SSL handshake it intended to perform actually happened. It just assumes that if it's reading data, everything must be fine.
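To make the flaw concrete, here is a toy Python model (not NGINX code; names are illustrative) of that header processor, before and after the patch:

```python
# Toy model of the logic flaw: the vulnerable handler trusts whatever is in
# the buffer and never confirms the TLS handshake it scheduled actually ran.
class Upstream:
    def __init__(self, use_ssl: bool):
        self.use_ssl = use_ssl  # configured: proxy_pass https://...
        self.ssl_ctx = None     # set only once a handshake has started

def process_header_vulnerable(u: Upstream, buf: bytes) -> str:
    # Pre-patch behaviour: parse first, never ask about TLS.
    return "accepted" if buf.startswith(b"HTTP/1.1") else "again"

def process_header_fixed(u: Upstream, buf: bytes) -> str:
    # Post-patch behaviour: expected SSL (u->ssl) but no context (c->ssl == NULL).
    if u.use_ssl and u.ssl_ctx is None:
        return "upstream prematurely sent response"
    return "accepted" if buf.startswith(b"HTTP/1.1") else "again"

injected = b"HTTP/1.1 200 OK\r\n\r\n"
u = Upstream(use_ssl=True)
print(process_header_vulnerable(u, injected))  # accepted
print(process_header_fixed(u, injected))       # upstream prematurely sent response
```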

The Code: The Smoking Gun

The fix, authored by Roman Arutyunyan at F5, is embarrassingly simple, which highlights just how subtle this logic flaw was. The vulnerability lived in src/http/ngx_http_upstream.c.

The patch introduces a sanity check. Before processing any header data, NGINX now asks: "Am I configured to use SSL (u->ssl)? And if so, is the SSL context (c->ssl) currently NULL?"

Here is the fix in all its glory:

// src/http/ngx_http_upstream.c
 
@@ -2508,6 +2508,15 @@ ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
             return;
         }
 
+#if (NGX_HTTP_SSL)
+        // THE FIX: Check if we expect SSL but haven't started it yet
+        if (u->ssl && c->ssl == NULL) {
+            ngx_log_error(NGX_LOG_ERR, c->log, 0,
+                          "upstream prematurely sent response");
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+            return;
+        }
+#endif
+
         u->state->bytes_received += n;

Without this block, the code would happily fall through to u->buffer.last += n and begin parsing the attacker's plain text injection as a legitimate response.

The Exploit: Beating the Clock

Exploiting this requires a Man-in-the-Middle (MITM) position between NGINX and the upstream server. This sounds hard, but in modern microservices environments (Kubernetes, internal clouds), ARP spoofing or compromised routers are fair game.

The attack chain looks like this:

  1. Intercept: The attacker sees NGINX initiate a TCP handshake (SYN) to the targeted upstream.
  2. Connect: The attacker (or the compromised upstream) completes the TCP handshake (SYN-ACK).
  3. Race: BEFORE NGINX can execute its write handler to send the TLS Client Hello, the attacker floods the socket with a plain text HTTP response.
    HTTP/1.1 200 OK
    Content-Type: application/json
     
    {"status": "admin_access_granted"}
  4. Win: NGINX sees the data. The Read Event handler fires. It parses the JSON. It serves it to the user.

The Client Hello might eventually be sent, or the connection might error out afterwards, but it's too late. The response has been accepted. If NGINX is configured to cache responses, the attacker has now poisoned the cache for everyone.
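The attack chain above can be sketched end to end in a few lines of Python (loopback stand-ins, not a real MITM): the fake upstream answers with plaintext the instant the TCP handshake completes. A client that insists on TLS before trusting any bytes fails loudly; one that reads first, mirroring the pre-patch NGINX behaviour, swallows the injected response.

```python
# Demo: the same injected bytes, seen by a TLS-first client vs a read-first one.
import socket
import ssl
import threading

INJECTED = (b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n"
            b'{"status": "admin_access_granted"}')

def mitm_upstream(srv: socket.socket) -> None:
    while True:
        conn, _ = srv.accept()
        conn.sendall(INJECTED)   # race: respond before any Client Hello
        try:
            conn.recv(65536)     # drain whatever the victim eventually sends
        except OSError:
            pass
        conn.close()

srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=mitm_upstream, args=(srv,), daemon=True).start()
addr = srv.getsockname()

# Patched posture: start TLS before trusting a single byte.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
ssl_err = None
try:
    ctx.wrap_socket(socket.create_connection(addr)).recv(1024)
except OSError as e:
    ssl_err = e  # typically ssl.SSLError: plaintext is not a valid ServerHello

# Vulnerable posture: read whatever arrived first.
leaked = socket.create_connection(addr).recv(1024)
print(type(ssl_err).__name__, leaked.split(b"\r\n")[0])
```

The TLS-first client rejects the injection at the record layer; the read-first client hands the attacker's JSON straight to whoever asked for it.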

The Impact: Trust Issues

Why is this rated High (CVSS 8.2) despite requiring MITM? Because it breaks the fundamental promise of proxy_pass https://. Architects design systems assuming that if they enforce TLS, they are safe from injection attacks on the wire.

This vulnerability allows for:

  • Cache Poisoning: Injecting malicious JavaScript or fake data that gets cached and served to legitimate users.
  • WAF Bypass: If a WAF sits before NGINX, but NGINX trusts the upstream, injecting data here bypasses ingress protections.
  • Integrity Violation: Total control over the response content.

It is worth noting that this does not break the confidentiality of the request (the attacker can't necessarily decrypt what NGINX sends to the upstream), but they can completely fabricate the response.

The Fix: Patch Tuesday

If you are running NGINX Open Source versions 1.3.0 through 1.29.4, you are vulnerable. If you are on NGINX Plus R36 or earlier, you are likely vulnerable too.

Remediation:

  1. Patch: Upgrade to NGINX 1.29.5, 1.28.2, or NGINX Plus R36 P2 immediately.
  2. Monitor: Keep an eye on your error logs. The patch introduces a specific error message: upstream prematurely sent response. If you see this, someone (or something) is trying to race your TLS handshake.
  3. Architecture: Zero Trust isn't just a buzzword. Authenticated mTLS (mutual TLS) can make the initial TCP/TLS negotiation harder to spoof, though this specific race condition ignores the handshake entirely, so patching is the only true fix.
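For the monitoring step, a minimal sketch of a log scan for the patch's new error string (the log lines below are fabricated samples; real NGINX error-log paths and formats vary by deployment):

```python
# Count occurrences of the patched check's error message in an error log.
SIGNATURE = "upstream prematurely sent response"

def count_race_attempts(log_text: str) -> int:
    """Return how many log lines contain the patch's error string."""
    return sum(SIGNATURE in line for line in log_text.splitlines())

sample = """\
2026/02/06 10:00:01 [error] 12#12: *1 upstream prematurely sent response
2026/02/06 10:00:02 [info] 12#12: client closed connection
2026/02/06 10:00:09 [error] 12#12: *7 upstream prematurely sent response
"""
print(count_race_attempts(sample))  # 2
```

A nonzero count on a patched server means someone is actively racing your upstream handshakes.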


Technical Appendix

CVSS Score: 8.2 / 10
Vector: CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Probability: 0.01%

Affected Systems

  • NGINX Open Source
  • NGINX Plus
  • Kubernetes Ingress Controllers (using affected NGINX versions)
  • Load Balancers based on NGINX

Affected Versions Detail

  Product              Vendor   Affected Versions   Fixed Version
  NGINX Open Source    F5       1.3.0 - 1.29.4      1.29.5
  NGINX Plus           F5       R36 before P2       R36 P2
  CWE:            CWE-349
  CVSS v4.0:      8.2 (High)
  CVSS v3.1:      5.9 (Medium)
  Attack Vector:  Network (MITM)
  Impact:         Integrity (High)
  EPSS Score:     0.00012 (~1.44%)
  Patch Commit:   376c3739b633e4ddac8ecf59d72e43b0b9151c51
CWE-349: Acceptance of Extraneous Untrusted Data With Trusted Data

The product accepts untrusted data that is extraneous to the expected data, which can lead to modification of the interpretation of the trusted data.

Vulnerability Timeline

  2026-01-29: Fix committed to NGINX source
  2026-02-04: Vulnerability published
  2026-02-05: Public disclosure on oss-security
