22 changes: 22 additions & 0 deletions .github/workflows/checks.yml
@@ -25,6 +25,28 @@ jobs:
- name: Lint
run: pnpm lint

format:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4

- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 22

- name: Install pnpm
run: |
corepack enable
corepack prepare pnpm@latest --activate

- name: Install dependencies
run: pnpm install

- name: Format
run: pnpm format

typecheck:
runs-on: ubuntu-latest

11 changes: 11 additions & 0 deletions .prettierignore
@@ -0,0 +1,11 @@
node_modules
build
dist
.git
.github
coverage
*.min.js
pnpm-lock.yaml
yarn.lock
package-lock.json
**/CHANGELOG.md
10 changes: 5 additions & 5 deletions README.md
@@ -44,6 +44,7 @@ The one-level mode is a standard caching method. Choose from a variety of driver
In addition to this, you benefit from many features that allow you to efficiently manage your cache, such as **cache stampede protection**, **grace periods**, **timeouts**, **namespaces**, etc.

### Two-levels

For those looking to go further, you can use the two-levels caching system. Here's basically how it works:

- **L1: Local Cache**: First level cache. Data is stored in memory with an LRU algorithm for quick access
@@ -60,7 +61,6 @@ The major benefit of multi-tier caching, is that it allows for responses between

In fact, it's quite a common pattern: for example, it's [what Stack Overflow does](https://nickcraver.com/blog/2019/08/06/stack-overflow-how-we-do-app-caching/#layers-of-cache-at-stack-overflow).


To give some perspective, here's a simple benchmark that shows the difference between a simple distributed cache (using Redis) and a multi-tier cache (using Redis + an in-memory cache):

![Redis vs Multi-tier caching](./assets/redis_vs_mtier.png)
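
The lookup order behind that speedup can be sketched in a few lines. This is a toy model, not Bentocache's real implementation or API — the `L2Store` interface and class names are made up for illustration, and a real setup also needs TTLs, LRU eviction in L1, and a sync bus so other nodes drop stale L1 copies:

```typescript
// Toy two-level cache: an in-memory Map as L1, any async store as L2.
interface L2Store {
  get(key: string): Promise<string | undefined>
  set(key: string, value: string): Promise<void>
}

class TwoLevelCache {
  private l1 = new Map<string, string>()

  constructor(private l2: L2Store) {}

  async getOrSet(key: string, factory: () => Promise<string>): Promise<string> {
    const local = this.l1.get(key)
    if (local !== undefined) return local // L1 hit: no network round-trip

    const remote = await this.l2.get(key)
    if (remote !== undefined) {
      this.l1.set(key, remote) // backfill L1 for the next call
      return remote
    }

    // Miss everywhere: run the factory and populate both levels
    const value = await factory()
    this.l1.set(key, value)
    await this.l2.set(key, value)
    return value
  }
}

// Demo with a plain Map standing in for Redis
const backing = new Map<string, string>()
const cache = new TwoLevelCache({
  get: async (k) => backing.get(k),
  set: async (k, v) => {
    backing.set(k, v)
  },
})

cache.getOrSet('user:1', async () => 'Alice').then(console.log) // prints "Alice"
```

After the first call, subsequent `getOrSet('user:1', …)` calls on the same node return from the in-memory Map without touching L2 at all, which is where the benchmark gap above comes from.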
@@ -99,10 +99,10 @@ Allows associating a cache entry with one or more tags to simplify invalidation.
await bento.getOrSet({
key: 'foo',
factory: () => getFromDb(),
tags: ['tag-1', 'tag-2']
});
tags: ['tag-1', 'tag-2'],
})

await bento.deleteByTag({ tags: ['tag-1'] });
await bento.deleteByTag({ tags: ['tag-1'] })
```

### Namespaces
@@ -157,7 +157,7 @@ You can pass a logger to Bentocache, and it will log everything that happens. Ca
import { pino } from 'pino'

const bento = new BentoCache({
logger: pino()
logger: pino(),
})
```

2 changes: 1 addition & 1 deletion benchmarks/README.md
@@ -3,7 +3,7 @@
> [!IMPORTANT]
> The benchmarks are not meant to be a definitive proof of which library is the best. They are mainly here to see if we make any performance regressions. And also for fun. Do not take them too seriously.

At the time of writing, every library seems on par with the others when using a single-tier cache. The real differences come when using a two-tier cache; only CacheManager and Bentocache support this feature.

- `mtier_get_key` : Just get a key from the cache stack.

40 changes: 20 additions & 20 deletions compose.yml
@@ -2,12 +2,12 @@ services:
redis:
image: redis:6.2-alpine
ports:
- "6379:6379"
- '6379:6379'

valkey:
image: valkey/valkey:8.1-alpine
ports:
- "6380:6379"
- '6380:6379'

valkey-cluster:
profiles:
@@ -17,7 +17,7 @@
- valkey-node-1
- valkey-node-2
- valkey-node-3
entrypoint: ["/bin/sh", "-c"]
entrypoint: ['/bin/sh', '-c']
command:
- |
sleep 5
@@ -32,7 +32,7 @@
image: valkey/valkey:8.1-alpine
command: valkey-server --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes --port 6379 --cluster-announce-ip valkey-node-1
ports:
- "7100:6379"
- '7100:6379'
networks:
- valkey-cluster

@@ -42,7 +42,7 @@
image: valkey/valkey:8.1-alpine
command: valkey-server --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes --port 6379 --cluster-announce-ip valkey-node-2
ports:
- "7101:6379"
- '7101:6379'
networks:
- valkey-cluster

@@ -52,7 +52,7 @@
image: valkey/valkey:8.1-alpine
command: valkey-server --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes --port 6379 --cluster-announce-ip valkey-node-3
ports:
- "7102:6379"
- '7102:6379'
networks:
- valkey-cluster

@@ -66,19 +66,19 @@
- MASTERS=3
- SLAVES_PER_MASTER=0
ports:
- "7000:7000"
- "7001:7001"
- "7002:7002"
- '7000:7000'
- '7001:7001'
- '7002:7002'

redis-insight:
image: redis/redisinsight:latest
ports:
- "5540:5540"
- '5540:5540'

dynamodb:
image: amazon/dynamodb-local
ports:
- "8000:8000"
- '8000:8000'

postgres:
image: postgres:15-alpine
@@ -87,27 +87,27 @@
POSTGRES_PASSWORD: postgres
POSTGRES_DB: postgres
ports:
- "5432:5432"
- '5432:5432'

mysql:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: mysql
ports:
- "3306:3306"
- '3306:3306'

lgtm:
image: grafana/otel-lgtm:latest
extra_hosts:
- "host.docker.internal:host-gateway"
- 'host.docker.internal:host-gateway'
ports:
- "3001:3000" # Grafana
- "3100:3100" # Loki HTTP API
- "3200:3200" # Tempo HTTP API
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "9090:9090" # Prometheus
- '3001:3000' # Grafana
- '3100:3100' # Loki HTTP API
- '3200:3200' # Tempo HTTP API
- '4317:4317' # OTLP gRPC
- '4318:4318' # OTLP HTTP
- '9090:9090' # Prometheus
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
14 changes: 7 additions & 7 deletions docker/prometheus.yml
@@ -3,10 +3,10 @@ global:
scrape_timeout: 10s
evaluation_interval: 5s
scrape_configs:
- job_name: playground-app
metrics_path: /metrics
scheme: https
static_configs:
- targets: ['employees-projectors-reason-finland.trycloudflare.com']
labels:
service: 'app'
- job_name: playground-app
metrics_path: /metrics
scheme: https
static_configs:
- targets: ['employees-projectors-reason-finland.trycloudflare.com']
labels:
service: 'app'
5 changes: 2 additions & 3 deletions docs/assets/app.css
@@ -61,15 +61,15 @@ html.dark {
margin: var(--prose-elements-margin) 0;
}

.markdown .media_box figure, .markdown .media_box p {
.markdown .media_box figure,
.markdown .media_box p {
margin: 0;
}

.markdown ul li ul {
margin: 0px;
}


@media only screen and (min-width: 768px) {
.header_container {
background-color: var(--mauveA1);
@@ -80,4 +80,3 @@
.markdown .table_container {
overflow-x: auto;
}

28 changes: 14 additions & 14 deletions docs/content/docs/adaptive_caching.md
@@ -15,7 +15,7 @@ const authToken = await bento.getOrSet({
const token = await fetchAccessToken()
return token
},
ttl: '10m'
ttl: '10m',
})
```

@@ -30,35 +30,35 @@ This is where adaptive caching comes in. Instead of setting a fixed TTL, we can
const authToken = await bento.getOrSet({
key: 'token',
factory: async (options) => {
const token = await fetchAccessToken();
options.setOptions({ ttl: token.expiresIn });
return token;
}
});
const token = await fetchAccessToken()
options.setOptions({ ttl: token.expiresIn })
return token
},
})
```

And that's it! Now, the token will be removed from the cache when it expires, and a new one will be fetched.

There are other use cases for adaptive caching. For example, consider managing a news feed with BentoCache. You may want to cache the freshest articles for a short period of time and the older articles for a much longer period.

Because the freshest articles are more likely to change: they may have typos, require updates, etc., whereas the older articles are less likely to change and may not have been updated for years.

Let's see how we can achieve this with BentoCache:

```ts
const namespace = bento.namespace('news');
const namespace = bento.namespace('news')
const news = await namespace.getOrSet({
key: newsId,
factory: async (options) => {
const newsItem = await fetchNews(newsId);
const newsItem = await fetchNews(newsId)

if (newsItem.hasBeenUpdatedRecently) {
options.setOptions({ ttl: '5m' });
options.setOptions({ ttl: '5m' })
} else {
options.setOptions({ ttl: '2d' });
options.setOptions({ ttl: '2d' })
}

return newsItem;
}
});
return newsItem
},
})
```