How to set up an API-backed permanent store? #1402
Replies: 3 comments
Running into the same problem @Nick-Motion, did you end up with a specific solution?
After even more extensive testing and research into the library internals, I've come to the (surprising) conclusion that there is no supported solution for this. I ended up creating my own custom collection, which maintains a permanent model store, plus custom live-query hooks to manage it. I've had to layer a lot of complexity on top of this, because the library doesn't seem to support this use case. The biggest problem is the disconnect between "API request completed" and "live query has materialized rows and is ready to render". I would love to be proven wrong here, or to have this use case supported officially.
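For reference, the shape of that workaround can be sketched without any framework code. Every name below is illustrative (mine, not a TanStack DB API): a store that upserts API pages, never evicts rows on unsubscribe, and exposes an explicit "rows have materialized" signal the UI can wait on instead of the bare fetch promise.

```typescript
// Hypothetical sketch of a "permanent model store" (illustrative names,
// not TanStack DB APIs).
type Listener = () => void;

class PermanentStore<T extends { id: string }> {
  private rows = new Map<string, T>();
  private listeners = new Set<Listener>();
  // Flips to true only after rows have actually materialized, closing the
  // gap between "API request completed" and "ready to render".
  ready = false;

  // Upsert a page of API results; existing rows are never deleted.
  applyPage(page: T[]): void {
    for (const row of page) this.rows.set(row.id, row);
    this.ready = true;
    for (const notify of this.listeners) notify();
  }

  snapshot(): T[] {
    return [...this.rows.values()];
  }

  // Unsubscribing detaches the listener but keeps all rows.
  subscribe(listener: Listener): () => void {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  }
}

// Pages accumulate across fetches, and rows survive unsubscribe.
const store = new PermanentStore<{ id: string; title: string }>();
const unsubscribe = store.subscribe(() => {});
store.applyPage([{ id: "1", title: "a" }, { id: "2", title: "b" }]);
unsubscribe();
store.applyPage([{ id: "2", title: "b2" }, { id: "3", title: "c" }]);
```

The live-query hooks then read `snapshot()` and gate rendering on `ready`, which is the part the library doesn't provide out of the box.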
For context, I'm building an app with user-defined saved views over a fairly normalized local data model. A simplified version of the canonical collections looks like this:
I originally tried to keep this fully canonical:
Architecturally that felt right, but in practice it became very laggy once the local dataset grew into the low thousands of rows. The problem was not any one flat query, but the combination of:
So I introduced a workaround:
This gives much better first-render coherence and avoids the heavy lag from rebuilding the graph locally. The downside is that I now have a parallel read model, so canonical optimistic mutations also need to patch the hydrated view collections to keep them coherent. In other words, writes to the source-of-truth collections also have to maintain the read model. My questions for @KyleAMathews are:
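The dual-write discipline described above can be sketched as follows. All names are illustrative rather than TanStack DB APIs; a real implementation would route both writes through the library's mutation hooks so they stay in one optimistic transaction.

```typescript
// Illustrative dual-write: an optimistic insert patches both the
// canonical collection and the hydrated read model in the same step.
type Task = { id: string; title: string; projectId: string };
type HydratedRow = { taskId: string; title: string; projectName: string };

const canonicalTasks = new Map<string, Task>();
const hydratedTaskView = new Map<string, HydratedRow>();
const projectNames = new Map<string, string>([["p1", "Inbox"]]);

function optimisticInsertTask(task: Task): void {
  // 1. Write to the source-of-truth collection.
  canonicalTasks.set(task.id, task);
  // 2. Patch the parallel read model so the saved view stays coherent
  //    without rebuilding the whole joined graph.
  hydratedTaskView.set(task.id, {
    taskId: task.id,
    title: task.title,
    projectName: projectNames.get(task.projectId) ?? "unknown",
  });
}

optimisticInsertTask({ id: "t1", title: "Write docs", projectId: "p1" });
```

The cost is exactly the coherence burden mentioned above: every canonical mutation path has to know about every hydrated view it can affect.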
We're evaluating TanStack DB for a large application where we need to:
- `useLiveQuery`

After extensive testing and source code review, we've found that query collections cannot serve this use case due to three compounding issues:
1. Pagination is broken in on-demand mode (#820)
`useLiveInfiniteQuery` with `syncMode: 'on-demand'` doesn't implement cursor-based pagination. It appears to be designed only for moving a local window over a full dataset, rather than fetching pages on demand. The proposed pagination docs in #1355 (Approach 2, marked "Recommended") won't work because of this.
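For contrast, cursor-based on-demand pagination accumulates results page by page rather than sliding a window over already-synced rows. A minimal framework-free sketch (names are ours, not the library's):

```typescript
// Minimal sketch of cursor-based page accumulation (illustrative, not
// the useLiveInfiniteQuery implementation).
type Page<T> = { items: T[]; nextCursor: string | null };

function collectPages<T>(
  fetchPage: (cursor: string | null) => Page<T>,
  maxPages: number,
): T[] {
  const acc: T[] = [];
  let cursor: string | null = null;
  for (let i = 0; i < maxPages; i++) {
    const page = fetchPage(cursor);
    acc.push(...page.items); // accumulate; never replace prior pages
    if (page.nextCursor === null) break;
    cursor = page.nextCursor;
  }
  return acc;
}

// Fake backend: five rows served two at a time, cursor = next offset.
const rows = ["r1", "r2", "r3", "r4", "r5"];
function fakeFetch(cursor: string | null): Page<string> {
  const start = cursor === null ? 0 : Number(cursor);
  const items = rows.slice(start, start + 2);
  const next = start + 2 < rows.length ? String(start + 2) : null;
  return { items, nextCursor: next };
}

const all = collectPages(fakeFetch, 10);
```

The key property is that each fetch is driven by the previous page's cursor, so the client never needs the full dataset locally.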
2. Row eviction on unsubscribe deletes permanent data (#1309)
When a component unmounts, `subscription.unsubscribe()` calls `unloadSubset()` for each loaded subset. The refcount in `queryToRows`/`rowToQueries` drops to 0, and `cleanupQueryInternal` deletes the rows from the collection via `write({ type: 'delete' })`. `gcTime` does not prevent this; it only controls TanStack Query's cache lifetime, not the DB collection's row lifecycle.

3. Full State Sync prevents page accumulation
`makeQueryResultHandler` treats each `queryFn` result as complete state: items present in the old result but absent in the new result are deleted. This means anything written via `writeInsert` gets wiped on a refetch of pages.

Questions:
Is there a planned path for "permanent collection" semantics — where rows survive unsubscribe and accumulate across paginated fetches?
What's the intended architecture for our use case? Using a query collection or TanStack Query as the fetch/pagination layer and a separate local collection as the permanent store, bridging them manually, creates a disconnect between the live query and the render, and doesn't really work for on-demand fetching.
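To make the contrast behind these questions concrete, here is a small sketch (illustrative only, not library code) of the two sync semantics: "full state" deletes anything absent from the latest result, while a merge/upsert semantic would let accumulated pages survive a refetch.

```typescript
type Row = { id: string; value: string };

// Full-state semantics: the latest result is the complete truth, so
// rows accumulated from earlier pages are diffed away on refetch.
function syncFullState(collection: Map<string, Row>, result: Row[]): void {
  const keep = new Set(result.map((r) => r.id));
  for (const id of [...collection.keys()]) {
    if (!keep.has(id)) collection.delete(id); // absent => deleted
  }
  for (const row of result) collection.set(row.id, row);
}

// Merge semantics: refetches only upsert, so earlier pages persist.
function syncMerge(collection: Map<string, Row>, result: Row[]): void {
  for (const row of result) collection.set(row.id, row);
}

// Two pages were fetched and written; page 1 is then refetched.
const page1: Row[] = [{ id: "a", value: "1" }, { id: "b", value: "2" }];
const page2: Row[] = [{ id: "c", value: "3" }, { id: "d", value: "4" }];

const fullState = new Map<string, Row>();
syncFullState(fullState, [...page1, ...page2]); // both pages present
syncFullState(fullState, page1);                // refetch wipes page 2

const merged = new Map<string, Row>();
syncMerge(merged, page1);
syncMerge(merged, page2);
syncMerge(merged, page1); // refetch of page 1; page 2 survives
```

A merge semantic on its own isn't sufficient either; a real "permanent collection" would also need some explicit deletion signal (tombstones or invalidation) so removed rows don't linger forever.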
Versions: