cache gives your services one cache API with multiple backend options. Swap drivers without refactoring.
An explicit cache abstraction with a minimal Store interface and ergonomic Cache helpers. Drivers are chosen when you construct the store, so swapping backends is a dependency-injection change instead of a refactor.
go get github.com/goforj/cache

Optional backends are separate modules. Install only what you use:
go get github.com/goforj/cache/driver/rediscache
go get github.com/goforj/cache/driver/memcachedcache
go get github.com/goforj/cache/driver/natscache
go get github.com/goforj/cache/driver/dynamocache
go get github.com/goforj/cache/driver/sqlitecache
go get github.com/goforj/cache/driver/postgrescache
go get github.com/goforj/cache/driver/mysqlcache

Use root constructors for in-process backends and driver-module constructors for external backends. Driver backends live in separate modules so applications only import and link the optional backend dependencies they actually use.
package main
import (
"context"
"fmt"
"time"
"github.com/goforj/cache"
"github.com/goforj/cache/cachecore"
"github.com/goforj/cache/driver/dynamocache"
"github.com/goforj/cache/driver/memcachedcache"
"github.com/goforj/cache/driver/mysqlcache"
"github.com/goforj/cache/driver/natscache"
"github.com/goforj/cache/driver/postgrescache"
"github.com/goforj/cache/driver/rediscache"
"github.com/goforj/cache/driver/sqlitecache"
)
func main() {
ctx := context.Background()
base := cachecore.BaseConfig{DefaultTTL: 5 * time.Minute, Prefix: "app"}
cache.NewMemoryStore(ctx) // in-process memory
cache.NewFileStore(ctx, "./cache-data") // local file-backed
cache.NewNullStore(ctx) // disabled / drop-only
// Redis (driver-owned connection config; no direct redis client required)
redisStore := rediscache.New(rediscache.Config{BaseConfig: base, Addr: "127.0.0.1:6379"})
_ = redisStore
// Memcached (one or more server addresses)
memcachedStore := memcachedcache.New(memcachedcache.Config{
BaseConfig: base,
Addresses: []string{"127.0.0.1:11211"},
})
_ = memcachedStore
// NATS JetStream KV (inject a bucket from your NATS setup)
var kv natscache.KeyValue // create via your NATS JetStream setup
natsStore := natscache.New(natscache.Config{BaseConfig: base, KeyValue: kv})
_ = natsStore
// DynamoDB (auto-creates client when Client is nil)
dynamoStore, err := dynamocache.New(ctx, dynamocache.Config{
BaseConfig: base,
Region: "us-east-1",
Table: "cache_entries",
})
fmt.Println(dynamoStore, err)
// SQLite (via sqlcore)
sqliteStore, err := sqlitecache.New(sqlitecache.Config{
BaseConfig: base,
DSN: "file::memory:?cache=shared",
Table: "cache_entries",
})
fmt.Println(sqliteStore, err)
// Postgres (via sqlcore)
postgresStore, err := postgrescache.New(postgrescache.Config{
BaseConfig: base,
DSN: "postgres://user:pass@127.0.0.1:5432/app?sslmode=disable",
Table: "cache_entries",
})
fmt.Println(postgresStore, err)
// MySQL (via sqlcore)
mysqlStore, err := mysqlcache.New(mysqlcache.Config{
BaseConfig: base,
DSN: "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true",
Table: "cache_entries",
})
fmt.Println(mysqlStore, err)
}

| Category | Module | Purpose |
|---|---|---|
| Core | github.com/goforj/cache | Cache API and root-backed stores (memory, file, null) |
| Core | github.com/goforj/cache/cachecore | Shared contracts, types, and base config |
| Core | github.com/goforj/cache/cachetest | Shared store contract test harness |
| Optional drivers | github.com/goforj/cache/driver/*cache | Backend driver modules |
| Optional drivers | github.com/goforj/cache/driver/sqlcore | Shared SQL implementation for dialect wrappers |
| Testing and tooling | github.com/goforj/cache/integration | Integration suites (root, all) |
| Testing and tooling | github.com/goforj/cache/docs | Docs + benchmark tooling |
import (
"context"
"fmt"
"time"
"github.com/goforj/cache"
"github.com/goforj/cache/cachecore"
"github.com/goforj/cache/driver/rediscache"
)
func main() {
ctx := context.Background()
store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
BaseConfig: cachecore.BaseConfig{DefaultTTL: 5 * time.Minute},
})
c := cache.NewCache(store)
type Profile struct { Name string `json:"name"` }
// Typed lifecycle (generic helpers): set -> get -> delete
_ = cache.Set(c, "user:42:profile", Profile{Name: "Ada"}, time.Minute)
profile, ok, err := cache.Get[Profile](c, "user:42:profile")
fmt.Println(err == nil, ok, profile.Name) // true true Ada
_ = c.Delete("user:42:profile")
// String lifecycle: set -> get -> delete
_ = c.SetString("settings:mode", "dark", time.Minute)
mode, ok, err := c.GetString("settings:mode")
fmt.Println(err == nil, ok, mode) // true true dark
_ = c.Delete("settings:mode")
// Remember pattern.
profile, err = cache.Remember[Profile](c, "user:42:profile", time.Minute, func() (Profile, error) {
return Profile{Name: "Ada"}, nil
})
fmt.Println(profile.Name) // Ada
// Switch to Redis (a dependency-injection change; the helper calls above stay the same).
store = rediscache.New(rediscache.Config{
BaseConfig: cachecore.BaseConfig{
Prefix: "app",
DefaultTTL: 5 * time.Minute,
},
Addr: "127.0.0.1:6379",
})
c = cache.NewCache(store)
}

Cache uses explicit config structs throughout, with shared fields embedded via cachecore.BaseConfig.
Shared config (embedded by root stores and optional drivers):
type BaseConfig struct {
DefaultTTL time.Duration
Prefix string
Compression CompressionCodec
MaxValueBytes int
EncryptionKey []byte
}

Root-backed stores use cache.StoreConfig:
type StoreConfig struct {
cachecore.BaseConfig
MemoryCleanupInterval time.Duration
FileDir string
}

Typical root constructor usage:
store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
MemoryCleanupInterval: time.Minute,
})

Optional backends use driver-local config types that embed the same cachecore.BaseConfig plus backend-specific fields.
Example shapes:
// rediscache.Config (abridged)
type Config struct {
cachecore.BaseConfig
Client rediscache.Client
}

// sqlitecache.Config (abridged)
type Config struct {
cachecore.BaseConfig
DSN string
Table string
}

See the API Index Driver Configs section for per-driver defaults and compile-checked examples for:
rediscache, memcachedcache, natscache, dynamocache, sqlitecache, postgrescache, mysqlcache, and sqlcore.
For precise runtime semantics, see Behavior Semantics:
- TTL/default-TTL matrix by operation/helper
- stale and refresh-ahead behavior and edge cases
- lock and rate-limit guarantees (process-local vs distributed scope)
For deployment defaults and operational patterns, see Production Guide:
- recommended defaults and tuning
- key naming/versioning conventions
- TTL jitter and miss-storm mitigation
- observability instrumentation patterns
Wrap any store with NewMemoStore to memoize reads within the process; cache is invalidated automatically on write paths.
memoStore := cache.NewMemoStore(store)
memoRepo := cache.NewCache(memoStore)

Staleness note: memoization is per-process only. Writes that happen in other processes (or outside your app) will not invalidate this memo cache. Use it when local staleness is acceptable, or scope it narrowly (e.g., per-request) if multiple writers exist.
Unit tests cover the public helpers. Shared cross-driver integration coverage runs from the integration module (with testcontainers-go for container-backed backends):
cd integration
go test -tags=integration ./all

Use INTEGRATION_DRIVER=sqlitecache (comma-separated) to select which fixtures run, or use the repo helper:
bash scripts/test-all-modules.sh

To render benchmark charts locally:

cd docs
go test -tags benchrender ./bench -run TestRenderBenchmarks -count=1 -v

Note: NATS numbers can look slower than Redis/memory because the NATS driver preserves per-operation TTL semantics by storing per-key expiry metadata (envelope encode/decode) and may do extra compare/update steps for some operations.
Generic helper benchmarks (Get[T] / Set[T]) use the default JSON codec, so compare them against GetBytes / SetBytes (and GetString / SetString) when evaluating convenience vs raw-path performance.
Note: DynamoDB is intentionally omitted from these local charts because emulator-based numbers are not representative of real AWS latency.
NATS variants in these charts:
- nats: per-key TTL semantics using a binary envelope (magic/expiresAt/value). This preserves per-key expiry parity with other drivers, with modest metadata overhead.
- nats_bucket_ttl: bucket-level TTL mode (WithNATSBucketTTL(true)), raw value path; faster but different expiry semantics.
The API section below is autogenerated; do not edit between the markers.
| Group | Functions |
|---|---|
| Constructors | NewFileStore NewFileStoreWithConfig NewMemoryStore NewMemoryStoreWithConfig NewNullStore NewNullStoreWithConfig |
| Core | Driver NewCache NewCacheWithTTL Store |
| Driver Configs | Shared BaseConfig DynamoDB Config Memcached Config MySQL Config NATS Config Postgres Config Redis Config SQL Core Config SQLite Config |
| Invalidation | Delete DeleteMany Flush Pull PullBytes |
| Locking | Acquire Block Lock LockCtx LockHandle.Get NewLockHandle Release TryLock Unlock |
| Memoization | NewMemoStore |
| Observability | OnCacheOp WithObserver |
| Rate Limiting | RateLimit |
| Read Through | Remember RememberBytes RememberStale RememberStaleBytes RememberStaleCtx |
| Reads | BatchGetBytes Get GetBytes GetJSON GetString |
| Refresh Ahead | RefreshAhead RefreshAheadBytes RefreshAheadValueWithCodec |
| Testing Helpers | AssertCalled AssertNotCalled AssertTotal Cache Count New Reset Total |
| Writes | Add BatchSetBytes Decrement Increment Set SetBytes SetJSON SetString |
Examples assume ctx := context.Background() and c := cache.NewCache(cache.NewMemoryStore(ctx)) unless shown otherwise.
NewFileStore is a convenience for a filesystem-backed store.
ctx := context.Background()
store := cache.NewFileStore(ctx, "/tmp/my-cache")
fmt.Println(store.Driver()) // file

NewFileStoreWithConfig builds a filesystem-backed store using explicit root config.
ctx := context.Background()
store := cache.NewFileStoreWithConfig(ctx, cache.StoreConfig{
BaseConfig: cachecore.BaseConfig{
EncryptionKey: []byte("01234567890123456789012345678901"),
MaxValueBytes: 4096,
Compression: cache.CompressionGzip,
},
FileDir: "/tmp/my-cache",
})
fmt.Println(store.Driver()) // file

NewMemoryStore is a convenience for an in-process store using defaults.
ctx := context.Background()
store := cache.NewMemoryStore(ctx)
fmt.Println(store.Driver()) // memory

NewMemoryStoreWithConfig builds an in-process store using explicit root config.
ctx := context.Background()
store := cache.NewMemoryStoreWithConfig(ctx, cache.StoreConfig{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 30 * time.Second,
Compression: cache.CompressionGzip,
},
MemoryCleanupInterval: 5 * time.Minute,
})
fmt.Println(store.Driver()) // memory

NewNullStore is a no-op store useful for tests where caching should be disabled.
ctx := context.Background()
store := cache.NewNullStore(ctx)
fmt.Println(store.Driver()) // null

NewNullStoreWithConfig builds a null store with shared wrappers (compression/encryption/limits).
ctx := context.Background()
store := cache.NewNullStoreWithConfig(ctx, cache.StoreConfig{
BaseConfig: cachecore.BaseConfig{
Compression: cache.CompressionGzip,
MaxValueBytes: 1024,
},
})
fmt.Println(store.Driver()) // null

Driver reports the underlying store driver.
NewCache creates a cache facade bound to a concrete store.
ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
fmt.Println(c.Driver()) // memory

NewCacheWithTTL lets callers override the default TTL applied when ttl <= 0.
ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCacheWithTTL(s, 2*time.Minute)
fmt.Println(c.Driver(), c != nil) // memory true

Store returns the underlying store implementation.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.Store().Driver()) // memory

Optional backend config examples (compile-checked from generated examples and driver New(...) docs).
Shared fields are embedded via cachecore.BaseConfig on every driver config:
- DefaultTTL: defaults to 5*time.Minute when zero in all optional drivers
- Prefix: defaults to "app" when empty in all optional drivers
- Compression: default zero value (cachecore.CompressionNone) unless set
- MaxValueBytes: default 0 (no limit) unless set
- EncryptionKey: default nil (disabled) unless set
DynamoDB defaults:
- Region: "us-east-1" when empty
- Table: "cache_entries" when empty
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- Client: auto-created when nil (uses Region and optional Endpoint)
- Endpoint: empty by default (normal AWS endpoint resolution)
ctx := context.Background()
store, err := dynamocache.New(ctx, dynamocache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
Region: "us-east-1",
Table: "cache_entries",
})
if err != nil {
panic(err)
}
fmt.Println(store.Driver()) // dynamo

Memcached defaults:
- Addresses: []string{"127.0.0.1:11211"} when empty
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
store := memcachedcache.New(memcachedcache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
Addresses: []string{"127.0.0.1:11211"},
})
fmt.Println(store.Driver()) // memcached

MySQL defaults:
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- Table: "cache_entries" when empty
- DSN: required
store, err := mysqlcache.New(mysqlcache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
DSN: "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true",
Table: "cache_entries",
})
if err != nil {
panic(err)
}
fmt.Println(store.Driver()) // sql

NATS defaults:
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- BucketTTL: false (TTL enforced in value envelope metadata)
- KeyValue: required for real operations (nil allowed, operations return errors)
var kv natscache.KeyValue // provided by your NATS setup
store := natscache.New(natscache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
KeyValue: kv,
BucketTTL: false,
})
fmt.Println(store.Driver()) // nats

Postgres defaults:
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- Table: "cache_entries" when empty
- DSN: required
store, err := postgrescache.New(postgrescache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
DSN: "postgres://user:pass@localhost:5432/app?sslmode=disable",
Table: "cache_entries",
})
if err != nil {
panic(err)
}
fmt.Println(store.Driver()) // sql

Redis defaults:
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- Addr: empty by default (no client auto-created unless Addr is set)
- Client: optional advanced override (takes precedence when set)
- If neither Client nor Addr is set, operations return errors until a client is provided
store := rediscache.New(rediscache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
Addr: "127.0.0.1:6379",
})
fmt.Println(store.Driver()) // redis

sqlcore defaults:
- Table: "cache_entries" when empty
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- DriverName: required
- DSN: required
store, err := sqlcore.New(sqlcore.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
DriverName: "sqlite",
DSN: "file::memory:?cache=shared",
Table: "cache_entries",
})
if err != nil {
panic(err)
}
fmt.Println(store.Driver()) // sql

SQLite defaults:
- DefaultTTL: 5*time.Minute when zero
- Prefix: "app" when empty
- Table: "cache_entries" when empty
- DSN: required
store, err := sqlitecache.New(sqlitecache.Config{
BaseConfig: cachecore.BaseConfig{
DefaultTTL: 5 * time.Minute,
Prefix: "app",
},
DSN: "file::memory:?cache=shared",
Table: "cache_entries",
})
if err != nil {
panic(err)
}
fmt.Println(store.Driver()) // sql

Delete removes a single key.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Delete("a") == nil) // true

DeleteMany removes multiple keys.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.DeleteMany("a", "b") == nil) // true

Flush clears all keys for this store scope.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
fmt.Println(c.Flush() == nil) // true

Pull returns a typed value for key and removes it, using the default codec (JSON).
type Token struct { Value string `json:"value"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "reset:token:42", Token{Value: "abc"}, time.Minute)
tok, ok, err := cache.Pull[Token](c, "reset:token:42")
fmt.Println(err == nil, ok, tok.Value) // true true abc

PullBytes returns value and removes it from cache.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("reset:token:42", "abc", time.Minute)
body, ok, _ := c.PullBytes("reset:token:42")
fmt.Println(ok, string(body)) // true abc

Acquire attempts to acquire the lock once (non-blocking).
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true

Block waits up to timeout to acquire the lock, runs fn if acquired, then releases.
retryInterval <= 0 falls back to the cache default lock retry interval.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Block(500*time.Millisecond, 25*time.Millisecond, func() error {
// do protected work
return nil
})
fmt.Println(err == nil, locked) // true true

Lock waits until the lock is acquired or timeout elapses.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, err := c.Lock("job:sync", 10*time.Second, time.Second)
fmt.Println(err == nil, locked) // true true

LockCtx retries lock acquisition until success or context cancellation.
Get acquires the lock once, runs fn if acquired, then releases automatically.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Get(func() error {
// do protected work
return nil
})
fmt.Println(err == nil, locked) // true true

NewLockHandle creates a reusable lock handle for a key/ttl pair.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, err := lock.Acquire()
fmt.Println(err == nil, locked) // true true
if locked {
_ = lock.Release()
}

Release unlocks the key if this handle previously acquired it.
It is safe to call multiple times; repeated calls become no-ops after the first successful release.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
lock := c.NewLockHandle("job:sync", 10*time.Second)
locked, _ := lock.Acquire()
if locked {
_ = lock.Release()
}

TryLock acquires a short-lived lock key when not already held.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
fmt.Println(locked) // true

Unlock releases a previously acquired lock key.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
locked, _ := c.TryLock("job:sync", 10*time.Second)
if locked {
_ = c.Unlock("job:sync")
}

NewMemoStore decorates store with per-process read memoization.
Behavior:
- First Get hits the backing store, clones the value, and memoizes it in-process.
- Subsequent Get for the same key returns the memoized clone (no backend call).
- Any write/delete/flush invalidates the memo entry so local reads stay in sync with changes made through this process.
- Memo data is per-process only; other processes or external writers will not invalidate it. Use only when that staleness window is acceptable.
ctx := context.Background()
base := cache.NewMemoryStore(ctx)
memo := cache.NewMemoStore(base)
c := cache.NewCache(memo)
fmt.Println(c.Driver()) // memory

OnCacheOp implements Observer.
obs := cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
fmt.Println(op, key, hit, err == nil, driver)
_ = ctx
_ = dur
})
obs.OnCacheOp(context.Background(), "get", "user:42", true, nil, time.Millisecond, cachecore.DriverMemory)

WithObserver attaches an observer to receive operation events.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
c = c.WithObserver(cache.ObserverFunc(func(ctx context.Context, op, key string, hit bool, err error, dur time.Duration, driver cachecore.Driver) {
// See docs/production-guide.md for a real metrics recipe.
fmt.Println(op, driver, hit, err == nil)
_ = ctx
_ = key
_ = dur
}))
_, _, _ = c.GetBytes("profile:42")

RateLimit increments a fixed-window counter and returns allowance metadata.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
res, err := c.RateLimit("rl:api:ip:1.2.3.4", 100, time.Minute)
fmt.Println(err == nil, res.Allowed, res.Count, res.Remaining, !res.ResetAt.IsZero())
// Output: true true 1 99 true

Remember is the ergonomic, typed remember helper using JSON encoding by default.
type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, err := cache.Remember[Profile](c, "profile:42", time.Minute, func() (Profile, error) {
return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, profile.Name) // true Ada

RememberBytes returns key value or computes/stores it when missing.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
data, err := c.RememberBytes("dashboard:summary", time.Minute, func() ([]byte, error) {
return []byte("payload"), nil
})
fmt.Println(err == nil, string(data)) // true payload

RememberStale returns a typed value with stale fallback semantics using JSON encoding by default.
type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStale[Profile](c, "profile:42", time.Minute, 10*time.Minute, func() (Profile, error) {
return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

RememberStaleBytes returns a fresh value when available, otherwise computes and caches it. If computing fails and a stale value exists, it returns the stale value. The returned bool is true when a stale fallback was used.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, usedStale, err := c.RememberStaleBytes("profile:42", time.Minute, 10*time.Minute, func() ([]byte, error) {
return []byte(`{"name":"Ada"}`), nil
})
fmt.Println(err == nil, usedStale, len(body) > 0)

RememberStaleCtx returns a typed value with stale fallback semantics using JSON encoding by default.
type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
profile, usedStale, err := cache.RememberStaleCtx[Profile](ctx, c, "profile:42", time.Minute, 10*time.Minute, func(ctx context.Context) (Profile, error) {
return Profile{Name: "Ada"}, nil
})
fmt.Println(err == nil, usedStale, profile.Name) // true false Ada

BatchGetBytes returns all found values for the provided keys. Missing keys are omitted from the returned map.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetBytes("a", []byte("1"), time.Minute)
_ = c.SetBytes("b", []byte("2"), time.Minute)
values, err := c.BatchGetBytes("a", "b", "missing")
fmt.Println(err == nil, string(values["a"]), string(values["b"])) // true 1 2

Get returns a typed value for key using the default codec (JSON) when present.
type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.Set(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
_ = cache.Set(c, "settings:mode", "dark", time.Minute)
profile, ok, err := cache.Get[Profile](c, "profile:42")
mode, ok2, err2 := cache.Get[string](c, "settings:mode")
fmt.Println(err == nil, ok, profile.Name, err2 == nil, ok2, mode) // true true Ada true true dark

GetBytes returns raw bytes for key when present.
ctx := context.Background()
s := cache.NewMemoryStore(ctx)
c := cache.NewCache(s)
_ = c.SetBytes("user:42", []byte("Ada"), 0)
value, ok, _ := c.GetBytes("user:42")
fmt.Println(ok, string(value)) // true Ada

GetJSON decodes a JSON value into T when key exists, using background context.
type Profile struct { Name string `json:"name"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = cache.SetJSON(c, "profile:42", Profile{Name: "Ada"}, time.Minute)
profile, ok, err := cache.GetJSON[Profile](c, "profile:42")
fmt.Println(err == nil, ok, profile.Name) // true true Ada

GetString returns a UTF-8 string value for key when present.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
_ = c.SetString("user:42:name", "Ada", 0)
name, ok, _ := c.GetString("user:42:name")
fmt.Println(ok, name) // true Ada

RefreshAhead returns a typed value and refreshes asynchronously when near expiry.
type Summary struct { Text string `json:"text"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
s, err := cache.RefreshAhead[Summary](c, "dashboard:summary", time.Minute, 10*time.Second, func() (Summary, error) {
return Summary{Text: "ok"}, nil
})
fmt.Println(err == nil, s.Text) // true ok

RefreshAheadBytes returns cached value immediately and refreshes asynchronously when near expiry. On miss, it computes and stores synchronously.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
body, err := c.RefreshAheadBytes("dashboard:summary", time.Minute, 10*time.Second, func() ([]byte, error) {
return []byte("payload"), nil
})
fmt.Println(err == nil, len(body) > 0) // true true

RefreshAheadValueWithCodec allows custom encoding/decoding for typed refresh-ahead operations.
AssertCalled verifies key was touched by op the expected number of times.
f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)
t := &testing.T{}
f.AssertCalled(t, cachefake.OpSet, "settings:mode", 1)

AssertNotCalled ensures key was never touched by op.
f := cachefake.New()
t := &testing.T{}
f.AssertNotCalled(t, cachefake.OpDelete, "settings:mode")

AssertTotal ensures the total call count for an op matches times.
f := cachefake.New()
c := f.Cache()
_ = c.Delete("a")
_ = c.Delete("b")
t := &testing.T{}
f.AssertTotal(t, cachefake.OpDelete, 2)

Cache returns the cache facade to inject into code under test.
f := cachefake.New()
c := f.Cache()
_, _, _ = c.GetBytes("settings:mode")

Count returns calls for op+key.
f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)
n := f.Count(cachefake.OpSet, "settings:mode")
_ = n

New creates a Fake using an in-memory store.
f := cachefake.New()
c := f.Cache()
_ = c.SetString("settings:mode", "dark", 0)

Reset clears recorded counts.
f := cachefake.New()
_ = f.Cache().SetString("settings:mode", "dark", 0)
f.Reset()

Total returns total calls for an op across keys.
f := cachefake.New()
c := f.Cache()
_ = c.Delete("a")
_ = c.Delete("b")
n := f.Total(cachefake.OpDelete)
_ = n

Add writes value only when key is not already present.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
created, _ := c.Add("boot:seeded", []byte("1"), time.Hour)
fmt.Println(created) // true

BatchSetBytes writes many key/value pairs using a shared ttl.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := c.BatchSetBytes(map[string][]byte{
"a": []byte("1"),
"b": []byte("2"),
}, time.Minute)
fmt.Println(err == nil) // true

Decrement decrements a numeric value and returns the result.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Decrement("rate:login:42", 1, time.Minute)
fmt.Println(val) // -1

Increment increments a numeric value and returns the result.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
val, _ := c.Increment("rate:login:42", 1, time.Minute)
fmt.Println(val) // 1

Set encodes value with the default codec (JSON) and writes it to key.
type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.Set(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
err2 := cache.Set(c, "settings:mode", "dark", time.Minute)
fmt.Println(err == nil, err2 == nil) // true true

SetBytes writes raw bytes to key.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetBytes("token", []byte("abc"), time.Minute) == nil) // true

SetJSON encodes value as JSON and writes it to key using background context.
type Settings struct { Enabled bool `json:"enabled"` }
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
err := cache.SetJSON(c, "settings:alerts", Settings{Enabled: true}, time.Minute)
fmt.Println(err == nil) // true

SetString writes a string value to key.
ctx := context.Background()
c := cache.NewCache(cache.NewMemoryStore(ctx))
fmt.Println(c.SetString("user:42:name", "Ada", time.Minute) == nil) // true

| Driver | Hard / default cap | Configurable | Notes |
|---|---|---|---|
| Null | N/A | N/A | No persistence. |
| Memory | Process memory | - | No backend hard cap. |
| File | Disk / filesystem | - | No backend hard cap. |
| Redis | Backend practical (memory/SLO) | Server-side | No commonly hit low per-value hard cap in app use. |
| NATS | Server/bucket payload limits | Server-side | Depends on NATS/JetStream config. |
| Memcached | ~1 MiB per item (default) | ✓ (server -I) | Backend-enforced item limit. |
| DynamoDB | 400 KB item hard cap | No | Includes key/metadata overhead, so usable value bytes are lower. |
| SQL | DB/engine config dependent | Server-side | Blob/row/packet limits vary by engine and deployment. |
StoreConfig.MaxValueBytes (root-backed stores) is the uniform application-level cap, and it applies to post-shaping bytes (after compression/encryption overhead).
| Area | What is validated | Scope |
|---|---|---|
| Core store contract | Set/Get, TTL expiry, Add, counters, Delete/DeleteMany, Flush, typed Remember | All drivers |
| Option contracts | prefix, compression, encryption, prefix+compression+encryption, max_value_bytes, default_ttl | All drivers (per option case) |
| Locking | single-winner contention, timeout/cancel, TTL expiry reacquire, unlock safety | All drivers |
| Rate limiting | monotonic counts, remaining >= 0, window rollover reset | All drivers |
| Refresh-ahead | miss/hit behavior, async refresh success/error, malformed metadata handling | All drivers |
| Remember stale | stale fallback semantics, TTL interactions, stale/fresh independent expiry, joined errors | All drivers |
| Batch ops | partial misses, empty input behavior, default TTL application | All drivers |
| Counter semantics | signed deltas, zero delta, TTL refresh extension | All drivers |
| Context cancellation | GetCtx/SetCtx/LockCtx/RefreshAheadCtx/Remember*Ctx prompt return + driver-aware cancel semantics | All drivers (driver-aware assertions) |
| Latency / transient faults | injected slow Get/Add/Increment, timeout propagation, no hidden retries for RefreshAhead/Remember*/LockCtx/RateLimit* | All drivers (integration wrappers over real stores) |
| Prefix isolation | Delete/Flush isolation + helper-generated keys (__lock:, :__refresh_exp, :__stale, rate-limit buckets) | Shared/prefixed backends |
| Payload shaping / corruption | compression+encryption round-trips, corrupted compressed/encrypted payload errors | Shared/persistent backends |
| Payload size limits | large binary payload round-trips; backend-specific near/over-limit checks (Memcached, DynamoDB) | Driver-specific where meaningful |
| Cross-store scope | shared vs local semantics across store instances (e.g. rate-limit counters) | Driver-specific expectations |
| Backend fault / recovery | backend restart mid-suite, outage errors, post-recovery round-trip/lock/refresh/stale flows | Container-backed drivers (runs automatically when container-backed fixtures are selected) |
| Observer metadata | op names, hit/miss flags, propagated errors, driver labels | Unit contract tests (integration helper paths exercise emissions indirectly) |
| Memo store caveats | per-process memoization, local-only invalidation, cross-process staleness behavior | Unit tests |
Default integration runs cover the contract suite above. Fault/recovery restart tests run automatically when the selected integration suite includes container-backed fixtures.
README content is a mix of generated sections and manual sections.
- API reference (<!-- api:embed:start --> ... <!-- api:embed:end -->) is generated.
- Test badges are updated separately.
- Sections like driver notes and the integration coverage table are manual.
go run ./docs/readme/main.go

Static counts (fast, watcher-friendly; counts top-level Test* funcs):

go run ./docs/readme/main.go

Executed counts (runs tests and counts real go test -json test/subtest starts):

go run ./docs/readme/testcounts/main.go

To run the badge watcher:

./docs/watcher.sh

Notes:
- The badge watcher runs real tests, so it is slower than API/example regeneration.
- Fault/recovery integration tests run with the integration suite when container-backed fixtures are selected.
