Feb 12, 2026 · 6 min read
OpenMetadata < 1.11.8 leaked the `ingestion-bot` admin token in plain JSON API responses. Any logged-in user could grab it and take over the system. Fixed in 1.11.8 by nullifying credentials in standard API calls.
A critical information disclosure vulnerability in OpenMetadata's REST API allowed authenticated users with minimal privileges to retrieve the raw JWT tokens of the highly privileged 'ingestion-bot'. This flaw, present in versions prior to 1.11.8, stemmed from excessive data exposure in API responses, enabling a trivial privilege escalation path from read-only access to full administrative control over the metadata platform.
In the modern data stack, OpenMetadata acts as the 'Google Maps' for your enterprise data. It tells you where your data lives, who owns it, and how it flows from AWS Glue to Redshift to Tableau. To accomplish this Herculean task of cataloging, the platform relies on a tireless, automated worker known as the ingestion-bot.
This bot is not your average service account. Because it needs to crawl schemas, profile data, and update lineage across your entire infrastructure, it usually wields the Ingestion Bot Role—a role that carries significant administrative weight. It holds the keys to the kingdom because it needs them to do its job.
But here is the catch: for the bot to run ingestion pipelines, the server needs to pass it credentials. And in versions prior to 1.11.8, the OpenMetadata server was a little too generous about who got to see those credentials. It turns out, if you asked the API nicely (and by nicely, I mean simply issuing a GET request), it would hand you the bot's identity on a silver platter.
The vulnerability (CVE-2026-26010) is a textbook example of CWE-213: Exposure of Sensitive Information Due to Incompatible Policies. In modern web development, particularly in Java ecosystems, it is common to map database entities directly to API responses using serialization libraries like Jackson.
The IngestionPipeline entity in OpenMetadata is a complex object. It stores the schedule, the status, and crucially, the openMetadataServerConnection. This connection object contains the configuration the pipeline needs to talk back to the OpenMetadata server—specifically, the authConfig which houses the JWT (JSON Web Token) or secrets.
When a user views the "Ingestion" tab in the UI, the frontend calls /api/v1/services/ingestionPipelines. The backend retrieves the pipeline details from the database. Because these secrets are stored encrypted, the backend dutifully calls secretsManager.decryptIngestionPipeline(ingestionPipeline) to prepare the object for use.
The fatal flaw was assuming that because the UI doesn't display the token, the API shouldn't worry about it. The backend serialized the entire decrypted object—including the sensitive connection details—and shipped it to the client. The browser received the full credentials, hidden only by the fact that the UI didn't render them.
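To make the failure mode concrete, here is a minimal sketch in Python (illustrative only, not OpenMetadata code; field names mirror the JSON shown later in this post) of what happens when a decrypted entity is serialized wholesale: every nested field, secrets included, ends up in the response body.

```python
import json

# Hypothetical, simplified stand-in for the IngestionPipeline entity
# after secretsManager.decryptIngestionPipeline() has run on the server.
pipeline = {
    "id": "a1b2c3",
    "name": "Snowflake_Ingestion",
    "openMetadataServerConnection": {
        "securityConfig": {
            # Decrypted in memory so the pipeline *could* use it...
            "jwtToken": "eyJhbGciOiJSUzI1NiJ9.SECRET",
        }
    },
}

# Naive serialization: the whole object graph goes over the wire.
response_body = json.dumps(pipeline)

# The token is now visible to any client that can read the response,
# regardless of what the UI chooses to render.
print("jwtToken" in response_body)  # True
```

The UI never shows this field, but the wire format is the real security boundary.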
Let's dig into openmetadata-service/src/main/java/org/openmetadata/service/resources/services/ingestionpipelines/IngestionPipelineResource.java.
In the vulnerable versions, the logic for preparing the pipeline resource for the response looked something like this (simplified for clarity):
// VULNERABLE CODE PATTERN (simplified)
public IngestionPipeline get(@Context UriInfo uriInfo, ...) {
  IngestionPipeline ingestionPipeline = repository.get(...);
  // Decrypt everything so the object is "complete"
  secretsManager.decryptIngestionPipeline(ingestionPipeline);
  // The connection object (with the token) is set here
  OpenMetadataConnection connection = new OpenMetadataConnectionBuilder(config).build();
  ingestionPipeline.setOpenMetadataServerConnection(connection);
  // The object is returned directly to the JSON serializer
  return ingestionPipeline;
}

The fix implemented in commit 1c05bf450c3eb7908ee501aedf8d5b433b9dca21 introduces a necessary check. The developers realized that the only time the client actually needs the token is when the pipeline is being deployed to an orchestration engine (like Airflow). For all other "look but don't touch" operations, that field should be empty.
// PATCHED CODE LOGIC
if (forceNotMask) {
  // Only reveal secrets if explicitly requested for deployment
  // AND presumably authorized (though the flag logic handles the exposure)
  ingestionPipeline.setOpenMetadataServerConnection(
      secretsManager.encryptOpenMetadataConnection(openMetadataServerConnection, false));
} else {
  // DEFAULT: Nullify the connection to prevent leaks
  ingestionPipeline.setOpenMetadataServerConnection(null);
}

By default, forceNotMask is false. The API now returns a null connection object for standard GET requests, closing the leak.
You don't need advanced tooling to exploit this. You just need a web browser and a low-privileged account (even a 'Data Consumer' or read-only role often has visibility into service configurations).
Log in as a low-privileged user. Open your browser's Developer Tools (F12) and switch to the Network tab.
Navigate to Settings -> Services and click on any configured service (e.g., an S3 or Snowflake connection) that has an Ingestion Pipeline set up. This triggers a GET request to the API.
Look for a request typically matching: GET /api/v1/services/ingestionPipelines?service=...
Click on the response. You will see a standard JSON structure. Drill down into the object:
{
  "id": "...",
  "name": "Snowflake_Ingestion",
  "openMetadataServerConnection": {
    "securityConfig": {
      "jwtToken": "ey...<THE_GOLDEN_TICKET>..."
    }
  },
  ...
}

Copy that JWT. Open a terminal or use Postman. Set the header Authorization: Bearer <COPIED_TOKEN>.
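The extraction can also be scripted. A hedged sketch, assuming a captured response body shaped like the JSON above, that pulls out the token and builds the Authorization header an attacker would replay:

```python
import json

# A captured response body (truncated example; structure as shown above).
captured = """{
  "id": "a1b2c3",
  "name": "Snowflake_Ingestion",
  "openMetadataServerConnection": {
    "securityConfig": {"jwtToken": "eyJhbGciOiJSUzI1NiJ9.STOLEN"}
  }
}"""

# Drill down to the leaked token.
token = json.loads(captured)["openMetadataServerConnection"]["securityConfig"]["jwtToken"]

# The header attached to every subsequent API call, e.g. with curl:
#   curl -H "Authorization: Bearer $TOKEN" https://<host>/api/v1/...
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"][:10])  # Bearer eyJ
```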
Congratulations. You are now the ingestion-bot. You can likely DELETE service definitions, modify metadata schemas, or extract sample data that the bot has access to. You have effectively escalated to a system administrator.
The impact here is severe because of the implicit trust placed in the ingestion-bot. In many OpenMetadata deployments, this bot is the glue holding the platform together.
Privilege Escalation: The most immediate threat. A user who should only be able to browse data definitions can now administer the platform.
Data Integrity Loss: An attacker could modify lineage data, effectively "poisoning the well." Imagine changing the metadata to say that a sensitive PII table feeds into a public dashboard. Downstream compliance tools relying on this metadata would be blinded.
Service Disruption: The attacker can delete the ingestion pipelines themselves, halting data freshness updates across the organization. In a data-driven company, stale data is often as bad as no data.
This vulnerability highlights the danger of 'Implicit Trust' in API design—assuming that because a user is authenticated, they can be trusted with the internal state of the application objects.
The remediation is straightforward but urgent.
1. Update OpenMetadata: Upgrade to version 1.11.8 or later. The patch ensures that the openMetadataServerConnection is stripped from API responses unless specifically required for internal deployment operations.
2. Rotate Tokens: This is the step most teams forget. Patching stops the leak, but it doesn't invalidate the stolen keys. If an attacker scraped your API yesterday, they still have a valid JWT today. Go to the OpenMetadata admin console, navigate to the Bots section, and regenerate the tokens for your ingestion-bot.
3. Audit Access: Check your logs for unusual API activity originating from the ingestion-bot user, especially from IP addresses associated with standard user workstations rather than your orchestration infrastructure (e.g., Airflow/Kubernetes pods).
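As a minimal illustration of the filter step 3 describes, assuming simple whitespace-delimited access-log lines of the form "timestamp user source_ip path" (the log format and IP ranges here are hypothetical; adapt to your actual logging setup):

```python
# Hypothetical access-log lines: "timestamp user source_ip path"
logs = [
    "2026-02-10T09:01:00Z ingestion-bot 10.0.5.12 /api/v1/tables",        # orchestration pod range
    "2026-02-11T14:22:10Z ingestion-bot 192.168.50.77 /api/v1/services",  # workstation range!
    "2026-02-11T14:25:41Z alice 192.168.50.80 /api/v1/glossaries",
]

# IP prefixes where the bot is *expected* to run (Airflow/Kubernetes pods).
EXPECTED_PREFIXES = ("10.0.5.",)

suspicious = [
    line for line in logs
    if line.split()[1] == "ingestion-bot"
    and not any(line.split()[2].startswith(p) for p in EXPECTED_PREFIXES)
]

for line in suspicious:
    print("SUSPICIOUS:", line)
```

Bot traffic from a user-workstation subnet is exactly the signal that a scraped token is being replayed.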
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L

| Product | Affected Versions | Fixed Version |
|---|---|---|
| OpenMetadata | < 1.11.8 | 1.11.8 |
| Attribute | Detail |
|---|---|
| CWE ID | CWE-213 |
| Attack Vector | Network (API) |
| CVSS | 7.6 (High) |
| Privileges Required | Low (Authenticated) |
| Impact | Privilege Escalation / Info Disclosure |
| Exploit Status | Trivial / PoC Available |