
fix(appkit): honor TTL in CacheManager.getOrExecute #326

Open
Tim-Hoare wants to merge 1 commit into databricks:main from Tim-Hoare:fix/cache-getorexecute-ttl

Conversation

@Tim-Hoare

Fixes #325.

Problem

CacheManager.getOrExecute returns cached entries without checking expiry, so when InMemoryStorage is the backend (the default when Lakebase isn't configured) cached values are served indefinitely — until LRU eviction at maxSize or process restart. The configured ttl is silently ignored on this code path.

This affects every AppKit app using the analytics plugin's default cache (enabled: true, ttl: 3600) without Lakebase. In a deployed Databricks App we observed query results staying cached for >19 hours despite a 1-hour TTL.
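
For illustration, the failure mode looks roughly like this (the constructor shape, option names, and the runQuery/key placeholders are assumptions based on this description, not the exact AppKit API):

```typescript
// Default analytics-plugin cache: enabled, 1-hour TTL, in-memory backend (no Lakebase).
const cache = new CacheManager(new InMemoryStorage(), { ttl: 3600 });

// First call: cache miss, runQuery executes and its result is stored with an expiry.
const fresh = await cache.getOrExecute("warehouse-usage", runQuery);

// Many hours later, well past the 1-hour TTL: getOrExecute returns the stored entry
// without ever consulting its expiry, so runQuery never re-runs until LRU eviction
// at maxSize or a process restart.
const stale = await cache.getOrExecute("warehouse-usage", runQuery);
```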

Why it slipped through

InMemoryStorage.get() is intentionally a raw lookup: the existing test "should return expired entry on get (expiry check is done by CacheManager)" makes that contract explicit. The standalone CacheManager.get() honors the expiry correctly, but CacheManager.getOrExecute didn't, and the existing TTL test ("should respect TTL expiry") only covered the set / get pair, not getOrExecute.
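
For context, the storage-level contract referenced above looks roughly like this (an illustrative sketch; the entry shape and Map-backed lookup are assumptions, not the actual InMemoryStorage source):

```typescript
interface CacheEntry {
  value: unknown;
  expiry: number; // absolute epoch milliseconds
}

class InMemoryStorage {
  private entries = new Map<string, CacheEntry>();

  // Deliberately a raw lookup: an expired entry is still returned, and deciding
  // whether it is still valid is the CacheManager's responsibility.
  async get(key: string): Promise<CacheEntry | undefined> {
    return this.entries.get(key);
  }
}
```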

Fix

In getOrExecute, after storage.get(cacheKey) returns an entry, check Date.now() > cached.expiry. If expired: delete it and fall through to the normal cache-miss path so fn() re-executes. Otherwise return the cached value as before.

The net change is small and stays within the existing "expiry handling lives at the CacheManager layer" contract: no signature changes, no storage changes.
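
A minimal sketch of the patched path, under the assumption that storage exposes get/set/delete, entries carry an absolute expiry timestamp, and ttl is configured in seconds (names follow this description, not necessarily the exact AppKit source):

```typescript
async getOrExecute<T>(cacheKey: string, fn: () => Promise<T>): Promise<T> {
  const cached = await this.storage.get(cacheKey);
  if (cached) {
    if (Date.now() > cached.expiry) {
      // Expired: drop the entry and fall through to the normal cache-miss path.
      await this.storage.delete(cacheKey);
    } else {
      // Unexpired hit: return the cached value exactly as before.
      return cached.value as T;
    }
  }

  // Cache miss (or expired entry just removed): execute and re-cache.
  const value = await fn();
  await this.storage.set(cacheKey, { value, expiry: Date.now() + this.ttl * 1000 });
  return value;
}
```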

Test

Added a regression test in the describe("getOrExecute", …) block: it caches a value with a 1ms TTL, waits, calls getOrExecute again, and asserts that fn was invoked twice and that the new value is returned.
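
Roughly, the added test looks like this (a Vitest-style sketch; the constructor shape, TTL units, and timing values are simplified assumptions):

```typescript
it("re-executes fn in getOrExecute once the cached entry has expired", async () => {
  const fn = vi.fn()
    .mockResolvedValueOnce("result-1")
    .mockResolvedValueOnce("result-2");

  // TTL of ~1ms so the entry expires almost immediately.
  const cache = new CacheManager(new InMemoryStorage(), { ttl: 0.001 });

  await cache.getOrExecute("key", fn);
  await new Promise((resolve) => setTimeout(resolve, 10)); // wait for the entry to expire

  const result = await cache.getOrExecute("key", fn);

  expect(fn).toHaveBeenCalledTimes(2); // fn re-executed after expiry
  expect(result).toBe("result-2");     // fresh value returned, not the stale one
});
```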

Confirmed:

  • Without the fix: new test fails (expected 'result-1' to be 'result-2').
  • With the fix: new test passes; all 1287 existing appkit package tests continue to pass.

Lint (biome check) and typecheck (tsc --noEmit) clean.

Notes

  • I picked the CacheManager-level fix to preserve the existing contract documented in the storage tests. Happy to refactor (e.g. push expiry into storage implementations, or factor a getValid() helper used by both get and getOrExecute) if a different shape is preferred.
  • DCO sign-off included.

CacheManager.getOrExecute calls storage.get() and returns the cached
value without checking expiry. InMemoryStorage.get() returns entries
unconditionally by design ("expiry check is done by CacheManager"),
but the check was missing on the getOrExecute path — only the
standalone CacheManager.get() honored TTL.

With the in-memory backend (the default when Lakebase is not
configured), this caused cached entries to live until LRU eviction at
maxSize or process restart, regardless of the configured ttl. Apps
using the analytics plugin's default cache (enabled, ttl: 3600) would
serve stale query results indefinitely.

Add an explicit expiry check in getOrExecute: if storage returns an
expired entry, delete it and fall through to a normal cache miss so
fn() re-executes.

Adds a regression test in the getOrExecute describe block. The
existing TTL test only covered the set/get pair, which is why this
was not caught.

Fixes databricks#325

Signed-off-by: Tim Hoare <t.hoare@codat.io>
