Feb 16, 2026 · 6 min read
The Nu Html Checker (v.Nu) contains an SSRF vulnerability allowing unauthenticated attackers to access internal services via the 'doc' parameter. The application relied on ineffective string-based blocklists for 'localhost', which are easily bypassed using DNS rebinding or alternative IP formats.
A classic Server-Side Request Forgery (SSRF) vulnerability exists in the Nu Html Checker (v.Nu), the engine powering W3C's HTML validation services. By failing to properly validate target IP addresses, the application allows attackers to bypass hostname blocklists (like 'localhost') and force the server to scan internal infrastructure. This turns a tool designed to validate code into a proxy for exploring the local network.
If you've ever built a website, you've likely used the W3C Validator or one of its many clones to yell at you about unclosed tags or deprecated attributes. Under the hood, many of these services run the Nu Html Checker (v.Nu), a Java-based beast that fetches your URL, parses the HTML, and spits out errors.
Here is the problem: In order to validate your site, the validator has to visit your site. It acts as an HTTP client. This functionality is the bread and butter of Server-Side Request Forgery (SSRF). You give it a URL, and it fetches it. The security model relies entirely on where it is allowed to go.
In CVE-2025-15104, we find out that the Nu Html Checker's travel restrictions were less 'TSA PreCheck' and more 'Honor System'. The developers tried to stop you from scanning their internal network, but they made the classic mistake of checking the name on the passport rather than the destination of the flight. As a result, an unauthenticated attacker can coerce the validator into probing local interfaces (127.0.0.1) or internal, non-routable networks, effectively turning a public-facing service into an internal port scanner.
The root cause of this vulnerability is a tale as old as the web: Input Validation vs. Canonicalization. The application implemented a blocklist (blacklist) approach to security. When a user submitted a URL via the doc parameter, the code essentially looked at the hostname string.
If the hostname was explicitly localhost or 127.0.0.1, the application threw a fit and blocked the request. This sounds reasonable to a junior developer, but to a hacker, it's a joke. The internet is a messy place, and there are a million ways to say '127.0.0.1' without using those exact characters.
Here is why the logic failed:

1. Time-of-check vs. time-of-use: The code checked the hostname string attacker.com, saw it wasn't localhost, and approved it. Then, the HTTP client resolved attacker.com to 127.0.0.1 and connected. The check and the use were disconnected.
2. Alternative representations: A string comparison never catches the many other ways to spell the loopback address, such as the short dotted form 127.1 or the plain decimal form 2130706433 (2130706433 is the integer for 127.0.0.1).

Let's look at a reconstruction of the vulnerable logic (Java pseudocode based on the v.Nu architecture). The developers assumed that string matching was sufficient security.
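To see why string matching is hopeless, consider how Java's own resolver treats these alternate spellings. The class and method names below are illustrative, not v.Nu code; this is a minimal sketch contrasting the naive label check with what the fetch layer actually connects to.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Demo (hypothetical names): spellings that slip past a string comparison
// against "localhost"/"127.0.0.1" yet resolve to the loopback interface.
public class LoopbackSpellings {

    // The vulnerable pattern: compare the label, not the destination.
    static boolean passesNaiveCheck(String host) {
        return !host.equalsIgnoreCase("localhost") && !host.equals("127.0.0.1");
    }

    // What the HTTP client actually connects to after resolution.
    static boolean actuallyLoopback(String host) {
        try {
            return InetAddress.getByName(host).isLoopbackAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "127.1" (short dotted form), "127.000.000.001" (padded octets) and
        // "2130706433" (32-bit decimal) are classic inet_addr-style aliases
        // for 127.0.0.1 that Java's address parser accepts without any DNS.
        for (String host : new String[] {"127.1", "127.000.000.001", "2130706433"}) {
            System.out.printf("%-16s naive-check-passes=%b loopback=%b%n",
                    host, passesNaiveCheck(host), actuallyLoopback(host));
        }
    }
}
```

Every host in the list sails through the naive check while resolving to loopback, which is exactly the gap the vulnerability exploits.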
The Vulnerable Pattern:
```java
String userUrl = request.getParameter("doc");
URI uri = new URI(userUrl);
String host = uri.getHost();

// The "security" check
if (host.equalsIgnoreCase("localhost") || host.equals("127.0.0.1")) {
    throw new SecurityException("Access to localhost is forbidden.");
}

// The fatal flaw: the check passed, now we fetch.
// The underlying HTTP client will resolve the host independently.
InputSource source = new InputSource(uri.toASCIIString());
parser.parse(source);
```

The Fix (Correct Approach):
To fix this, you must resolve the DNS first, check the actual numeric IP address, and ensure it falls within a safe range. String checks are useless.
```java
InetAddress address = InetAddress.getByName(uri.getHost());

// Reject loopback (127.0.0.0/8), site-local (10/8, 172.16/12, 192.168/16),
// link-local (169.254.0.0/16 -- the cloud metadata range), and wildcard addresses.
if (address.isLoopbackAddress()
        || address.isSiteLocalAddress()
        || address.isLinkLocalAddress()
        || address.isAnyLocalAddress()) {
    throw new SecurityException("Internal network access denied.");
}

// Only now do we proceed, ideally connecting to the exact IP we just checked.
```

The patch effectively forces the application to look at the destination, not the label on the map.
Exploiting this requires tricking the validator into thinking it's visiting a safe public website when it's actually knocking on its own front door. We can use a few techniques here, ranging from simple to advanced.
Method 1: The 'nip.io' Trick
Services like nip.io provide wildcard DNS for any IP address. 127.0.0.1.nip.io resolves to 127.0.0.1. Since the string 127.0.0.1.nip.io does not equal localhost or 127.0.0.1, the validator's naive check lets it through.
Attack Request:
```http
GET /?doc=http://127.0.0.1.nip.io:8080/admin HTTP/1.1
Host: validator.example.com
```

Method 2: DNS Rebinding (The Killer)
If the developers get smarter and implement a single DNS check but fail to pin the IP, we use DNS Rebinding. This attacks the gap between the check and the fetch.
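"Pinning" here means resolving the hostname exactly once, validating the numeric address, and then forcing the TCP connection to that vetted address so a second, rebound lookup can never happen. A hedged sketch of what a pinned client could look like (class and method names are mine, not v.Nu's; the response handling is elided):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: resolve once, validate, connect to that exact address.
public class PinnedFetcher {

    static InetAddress resolveAndValidate(String host) throws IOException {
        InetAddress address = InetAddress.getByName(host); // the ONLY lookup
        if (address.isLoopbackAddress() || address.isSiteLocalAddress()
                || address.isLinkLocalAddress() || address.isAnyLocalAddress()) {
            throw new SecurityException("Internal network access denied: " + host);
        }
        return address;
    }

    // Build the request by hand so the Host header carries the original
    // hostname while the socket is pinned to the vetted numeric address.
    static String buildRequest(String host, String path) {
        return "GET " + path + " HTTP/1.1\r\n"
                + "Host: " + host + "\r\n"
                + "Connection: close\r\n\r\n";
    }

    static void fetchPinned(String host, int port, String path) throws IOException {
        InetAddress pinned = resolveAndValidate(host);
        try (Socket socket = new Socket()) {
            // Connecting by InetAddress skips any second DNS resolution.
            socket.connect(new InetSocketAddress(pinned, port), 5000);
            socket.getOutputStream()
                  .write(buildRequest(host, path).getBytes(StandardCharsets.US_ASCII));
            // ... read and parse the response here ...
        }
    }
}
```

Without this pinning, the attack below exploits the window between the check-time lookup and the fetch-time lookup.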
How the attack plays out:

1. The attacker controls the DNS for rebind.evil.com and sets a very short TTL (0 seconds).
2. The validator's security check resolves rebind.evil.com. The DNS server returns 1.2.3.4 (benign), and the check passes.
3. The HTTP client then performs its own, second lookup of rebind.evil.com.
4. The TTL has expired, so the DNS server now answers with 127.0.0.1, and the client connects straight to loopback.

Why do we care if a validator can talk to localhost? Because in modern infrastructure, localhost is rarely empty. It is often a hub of unauthenticated administrative interfaces and metadata services.
Cloud Metadata: If this validator is running on AWS EC2, hitting http://169.254.169.254/latest/meta-data/iam/security-credentials/ could return the IAM role credentials for the server. This is Game Over. The attacker now owns your cloud infrastructure.
Local Services: Developers often bind services like Redis, Memcached, or admin panels to 127.0.0.1 assuming they are unreachable from the outside. This SSRF bridges that gap. An attacker could potentially extract data from a local Redis instance or trigger administrative actions on a local Jenkins agent.
Network Pivoting: Even if localhost is locked down, the SSRF allows the attacker to scan the internal network (e.g., 192.168.1.0/24) to find other vulnerable services behind the firewall.
Mitigating SSRF is notoriously difficult because you are trying to tell a machine to fetch things, but not those things. The only robust solution is network-layer validation.
1. Update immediately: Versions after commit 23f090a1 (Jan 2026) contain the fix.
2. Network Segmentation: The validator should live in a padded cell. It should be in a DMZ with strict firewall rules preventing it from initiating connections to internal private subnets (RFC 1918) or sensitive cloud metadata IPs.
3. Explicit Allow-listing: If possible, only allow the validator to visit known-good domains. If it must visit the open web, use a proxy that handles the security filtering.
4. Disable Redirects: Often, attackers use an open redirect on a whitelisted site to bounce the validator to an internal IP. Ensure the HTTP client does not automatically follow redirects, or checks the destination of every redirect hop.
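Point 4 can be sketched concretely: instead of letting the client chase redirects on its own, disable automatic following and drive the loop yourself, re-validating every hop before connecting. This is an illustrative sketch using the standard java.net.http client, not v.Nu's actual patch; the names are mine.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: follow redirects manually so every hop is re-checked.
public class SafeRedirectFetcher {
    private static final int MAX_HOPS = 5;

    static boolean isInternal(InetAddress address) {
        return address.isLoopbackAddress() || address.isSiteLocalAddress()
                || address.isLinkLocalAddress() || address.isAnyLocalAddress();
    }

    static String fetch(String url) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER) // we drive the loop
                .build();
        URI target = URI.create(url);
        for (int hop = 0; hop < MAX_HOPS; hop++) {
            // Validate the destination of THIS hop before connecting.
            if (isInternal(InetAddress.getByName(target.getHost()))) {
                throw new SecurityException("Redirect into internal network: " + target);
            }
            HttpResponse<String> response = client.send(
                    HttpRequest.newBuilder(target).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() / 100 != 3) {
                return response.body();
            }
            // Resolve relative Location headers against the current URI.
            target = target.resolve(response.headers()
                    .firstValue("Location")
                    .orElseThrow(() -> new IOException("Redirect without Location")));
        }
        throw new IOException("Too many redirects");
    }
}
```

Note that this sketch still resolves DNS separately for the check and the fetch; in production it should be combined with the IP pinning described earlier.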
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:N/SC:L/SI:N/SA:N

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Nu Html Checker (W3C / Validator.nu) | <= Commit 23f090a | Post-Commit 23f090a |
| vnu-jar (npm) | <= 26.1.11 | 26.1.12 (Estimated) |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-918 (SSRF) |
| Attack Vector | Network |
| CVSS v4.0 | 6.9 (Medium) |
| EPSS Score | 0.00054 (Low) |
| Impact | Confidentiality (Low) |
| Exploit Status | PoC Available |
| KEV Status | Not Listed |