diff --git a/.cargo/config.toml b/.cargo/config.toml index bbfc28a440..a439e96d4c 100644 --- a/.cargo/config.toml +++ b/.cargo/config.toml @@ -1,3 +1,5 @@ [build] rustflags = ["--cfg", "tokio_unstable"] +[env] +LIBSQLITE3_FLAGS = "SQLITE_ENABLE_BATCH_ATOMIC_WRITE" diff --git a/CLAUDE.md b/CLAUDE.md index ba0b2b7ccc..cbd229a4d0 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -22,6 +22,18 @@ The `rivet.gg` domain is deprecated and should never be used in this codebase. - Add a new versioned schema instead, then migrate `versioned.rs` and related compatibility code to bridge old versions forward. - When bumping the protocol version, update `PROTOCOL_MK2_VERSION` in `engine/sdks/rust/runner-protocol/src/lib.rs` and `PROTOCOL_VERSION` in `engine/sdks/typescript/runner/src/mod.ts` together. Both must match the latest schema version. +**Keep the KV API in sync between the runner protocol and the KV channel protocol.** + +- The runner protocol (`engine/sdks/schemas/runner-protocol/`) and KV channel protocol (`engine/sdks/schemas/kv-channel-protocol/`) both expose KV operations. When adding, removing, or changing KV request/response types in one protocol, update the other to match. + +**Keep KV channel protocol versions in sync.** + +- When bumping the KV channel protocol version, update these two locations together: + - `engine/sdks/rust/kv-channel-protocol/src/lib.rs` (`PROTOCOL_VERSION`) + - `engine/sdks/rust/kv-channel-protocol/build.rs` (TypeScript `PROTOCOL_VERSION` in post-processing) +- All consumers (pegboard-kv-channel, sqlite-native, TS manager) get the version from the shared crate. +- The TypeScript SDK at `engine/sdks/typescript/kv-channel-protocol/src/index.ts` is auto-generated from the BARE schema during the Rust build. Do not edit it by hand. + ## Commands ### Build Commands @@ -92,6 +104,7 @@ git commit -m "chore(my-pkg): foo bar" ### SQLite Package - Use `@rivetkit/sqlite` for SQLite WebAssembly support. - Do not use the legacy upstream package directly. 
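The version-sync rules above (for both the runner protocol and the KV channel protocol) lend themselves to a small consistency check. This is a hedged sketch, not code from the repository: the constant names `PROTOCOL_MK2_VERSION` and `PROTOCOL_VERSION` come from the guidance above, but the string-scanning helper is hypothetical.

```rust
/// Extract the first integer assigned to `name` in a source snippet,
/// e.g. `pub const PROTOCOL_MK2_VERSION: u16 = 3;` yields Some(3).
fn extract_version(source: &str, name: &str) -> Option<u32> {
    let rest = &source[source.find(name)? + name.len()..];
    let after_eq = &rest[rest.find('=')? + 1..];
    let digits: String = after_eq
        .chars()
        .skip_while(|c| !c.is_ascii_digit())
        .take_while(|c| c.is_ascii_digit())
        .collect();
    digits.parse().ok()
}

/// True when the Rust and TypeScript sources declare the same version.
fn versions_match(rust_src: &str, ts_src: &str) -> bool {
    let rust = extract_version(rust_src, "PROTOCOL_MK2_VERSION");
    let ts = extract_version(ts_src, "PROTOCOL_VERSION");
    rust.is_some() && rust == ts
}

fn main() {
    let rust_src = "pub const PROTOCOL_MK2_VERSION: u16 = 3;";
    let ts_src = "export const PROTOCOL_VERSION = 3;";
    assert!(versions_match(rust_src, ts_src));
    assert!(!versions_match(rust_src, "export const PROTOCOL_VERSION = 2;"));
}
```

A check along these lines could run in CI against the two files named above so a mismatched bump fails fast instead of surfacing as a runtime handshake error.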
`@rivetkit/sqlite` is the maintained fork used in this repository and is sourced from `rivet-dev/wa-sqlite`. +- The native SQLite addon (`@rivetkit/sqlite-native`) statically links SQLite via `libsqlite3-sys` with the `bundled` feature. The bundled SQLite version must match the version used by `@rivetkit/sqlite` (WASM). When upgrading either, upgrade both. ### RivetKit Package Resolutions The root `/package.json` contains `resolutions` that map RivetKit packages to their local workspace versions: @@ -236,6 +249,16 @@ Key points: - If available, use the workspace dependency (e.g., `anyhow.workspace = true`) - If you need to add a dependency and can't find it in the Cargo.toml of the workspace, add it to the workspace dependencies in Cargo.toml (`[workspace.dependencies]`) and then add it to the package you need with `{dependency}.workspace = true` +**Native SQLite & KV Channel** +- Native SQLite (`rivetkit-typescript/packages/sqlite-native/`) is a napi-rs addon that statically links SQLite and uses a custom VFS backed by KV over a WebSocket KV channel. The WASM implementation (`@rivetkit/sqlite-vfs`) is the fallback. +- The KV channel (`engine/sdks/schemas/kv-channel-protocol/`) is independent of the runner protocol. It authenticates with `admin_token` (engine) or `config.token` (manager), not the runner key. +- The KV channel enforces single-writer locks per actor. Open/close are optimistic (no round-trip wait). +- The native VFS uses the same 4 KiB chunk layout and KV key encoding as the WASM VFS. Data is compatible between backends. +- **The native Rust VFS and the WASM TypeScript VFS must match 1:1.** This includes: KV key layout and encoding, chunk size, PRAGMA settings, VFS callback-to-KV-operation mapping, delete/truncate strategy (both must use `deleteRange`), and journal mode. When changing any VFS behavior in one implementation, update the other. 
The relevant files are: + - Native: `rivetkit-typescript/packages/sqlite-native/src/vfs.rs`, `kv.rs` + - WASM: `rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts`, `kv.ts` +- Full spec: `docs-internal/engine/NATIVE_SQLITE_DATA_CHANNEL.md` + **Inspector HTTP API** - When updating the WebSocket inspector (`rivetkit-typescript/packages/rivetkit/src/inspector/`), also update the HTTP inspector endpoints in `rivetkit-typescript/packages/rivetkit/src/actor/router.ts`. The HTTP API mirrors the WebSocket inspector for agent-based debugging. - When adding or modifying inspector endpoints, also update the driver test at `rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-inspector.ts` to cover all inspector HTTP endpoints. @@ -275,7 +298,8 @@ Data structures often include: ## Logging Patterns ### Structured Logging -- Use tracing for logging. Do not format parameters into the main message, instead use tracing's structured logging. +- Use tracing for logging. Never use `eprintln!` or `println!` for logging in Rust code. Always use tracing macros (`tracing::info!`, `tracing::warn!`, `tracing::error!`, etc.). +- Do not format parameters into the main message, instead use tracing's structured logging. - For example, instead of `tracing::info!("foo {x}")`, do `tracing::info!(?x, "foo")` - Log messages should be lowercase unless mentioning specific code symbols. 
For example, `tracing::info!("inserted UserRow")` instead of `tracing::info!("Inserted UserRow")` diff --git a/Cargo.lock b/Cargo.lock index 361603eedb..2d0284d236 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -3474,6 +3474,37 @@ dependencies = [ "vbare", ] +[[package]] +name = "pegboard-kv-channel" +version = "2.2.1" +dependencies = [ + "anyhow", + "async-trait", + "bytes", + "futures-util", + "gasoline", + "http-body 1.0.1", + "http-body-util", + "hyper 1.6.0", + "hyper-tungstenite", + "lazy_static", + "namespace", + "pegboard", + "rivet-config", + "rivet-error", + "rivet-guard-core", + "rivet-kv-channel-protocol", + "rivet-metrics", + "rivet-runtime", + "rivet-util", + "tokio", + "tokio-tungstenite", + "tracing", + "universaldb", + "url", + "uuid", +] + [[package]] name = "pegboard-outbound" version = "2.2.1" @@ -3517,7 +3548,6 @@ dependencies = [ "rand 0.8.5", "rivet-config", "rivet-data", - "rivet-envoy-protocol", "rivet-error", "rivet-guard-core", "rivet-metrics", @@ -4685,6 +4715,7 @@ dependencies = [ "pegboard-envoy", "pegboard-gateway", "pegboard-gateway2", + "pegboard-kv-channel", "pegboard-runner", "regex", "rivet-api-builder", @@ -4706,6 +4737,7 @@ dependencies = [ "rustls-pemfile", "serde", "serde_json", + "subtle", "tokio", "tokio-tungstenite", "tower 0.5.2", @@ -4760,6 +4792,16 @@ dependencies = [ "uuid", ] +[[package]] +name = "rivet-kv-channel-protocol" +version = "2.2.1" +dependencies = [ + "serde", + "serde_bare", + "vbare", + "vbare-compiler", +] + [[package]] name = "rivet-logs" version = "2.2.1" diff --git a/Cargo.toml b/Cargo.toml index 0782f47fca..96308985a7 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -30,6 +30,7 @@ members = [ "engine/packages/pegboard-envoy", "engine/packages/pegboard-gateway", "engine/packages/pegboard-gateway2", + "engine/packages/pegboard-kv-channel", "engine/packages/pegboard-outbound", "engine/packages/pegboard-runner", "engine/packages/pools", @@ -53,6 +54,7 @@ members = [ "engine/sdks/rust/envoy-client", 
"engine/sdks/rust/envoy-protocol", "engine/sdks/rust/epoxy-protocol", + "engine/sdks/rust/kv-channel-protocol", "engine/sdks/rust/runner-protocol", "engine/sdks/rust/ups-protocol" ] @@ -124,6 +126,7 @@ members = [ slog-async = "2.8" slog-term = "2.9" statrs = "0.18" + subtle = "2" sysinfo = "0.37.2" tabled = "0.17.0" tempfile = "3.13.0" @@ -432,6 +435,9 @@ members = [ [workspace.dependencies.pegboard-gateway2] path = "engine/packages/pegboard-gateway2" + [workspace.dependencies.pegboard-kv-channel] + path = "engine/packages/pegboard-kv-channel" + [workspace.dependencies.pegboard-outbound] path = "engine/packages/pegboard-outbound" @@ -493,6 +499,12 @@ members = [ [workspace.dependencies.rivet-data] path = "engine/sdks/rust/data" + [workspace.dependencies.epoxy-protocol] + path = "engine/sdks/rust/epoxy-protocol" + + [workspace.dependencies.rivet-kv-channel-protocol] + path = "engine/sdks/rust/kv-channel-protocol" + [workspace.dependencies.rivet-engine-runner] path = "engine/sdks/rust/engine-runner" @@ -502,9 +514,6 @@ members = [ [workspace.dependencies.rivet-envoy-protocol] path = "engine/sdks/rust/envoy-protocol" - [workspace.dependencies.epoxy-protocol] - path = "engine/sdks/rust/epoxy-protocol" - [workspace.dependencies.rivet-runner-protocol] path = "engine/sdks/rust/runner-protocol" diff --git a/docs-internal/rivetkit-typescript/SQLITE_VFS.md b/docs-internal/rivetkit-typescript/SQLITE_VFS.md index 2407bd94d7..f4af322489 100644 --- a/docs-internal/rivetkit-typescript/SQLITE_VFS.md +++ b/docs-internal/rivetkit-typescript/SQLITE_VFS.md @@ -36,6 +36,18 @@ - Do NOT enable `journal_mode=MEMORY`, `journal_mode=OFF`, or `synchronous=OFF` - `journal_mode=PERSIST` is safe to switch to later (no migration needed) +## Native SQLite Backend + +The WASM VFS described above has a native Rust counterpart (`@rivetkit/sqlite-native`) that statically links SQLite via napi-rs and routes VFS callbacks over a WebSocket-based KV channel protocol. 
The native backend shares one SQLite library across all actors (vs. one WASM module instance per actor), reducing memory overhead and removing JS from the I/O hot path. Data is fully compatible between backends. An actor can switch between WASM and native without migration. + +Key implementation files: + +- `rivetkit-typescript/packages/sqlite-native/` — napi-rs addon (Rust): `vfs.rs`, `kv.rs`, `channel.rs`, `protocol.rs`, `lib.rs` +- `engine/sdks/schemas/kv-channel-protocol/` — BARE schema and TypeScript codec +- `engine/packages/pegboard-kv-channel/` — engine-side KV channel WebSocket server +- `rivetkit-typescript/packages/rivetkit/src/manager/kv-channel.ts` — manager-side KV channel handler +- `rivetkit-typescript/packages/rivetkit/src/db/native-sqlite.ts` — integration and WASM fallback logic + ## Future Work - **PITR / fork**: implement at KV layer (immutable chunk versions, manifests, branch heads, GC) with SQLite layer providing snapshot boundary coordination - **Remove double mutex** once profiled diff --git a/engine/artifacts/config-schema.json b/engine/artifacts/config-schema.json index 594bd3cf2a..fb1fbfc681 100644 --- a/engine/artifacts/config-schema.json +++ b/engine/artifacts/config-schema.json @@ -731,6 +731,15 @@ "format": "uint32", "minimum": 0.0 }, + "preload_max_total_bytes": { + "description": "Maximum total size of all preloaded KV data sent with the actor start command. Setting to 0 disables all preloading.\n\nUnit is in bytes. 
Default: 1,048,576 (1 MiB).", + "type": [ + "integer", + "null" + ], + "format": "uint64", + "minimum": 0.0 + }, "reschedule_backoff_max_exponent": { "description": "Maximum exponent for the reschedule backoff calculation.\n\nThis controls the maximum backoff duration when rescheduling actors.", "type": [ diff --git a/engine/artifacts/errors/actor.kv_storage_quota_exceeded.json b/engine/artifacts/errors/actor.kv_storage_quota_exceeded.json new file mode 100644 index 0000000000..1915d5fdf0 --- /dev/null +++ b/engine/artifacts/errors/actor.kv_storage_quota_exceeded.json @@ -0,0 +1,5 @@ +{ + "code": "kv_storage_quota_exceeded", + "group": "actor", + "message": "Not enough space left in storage." +} \ No newline at end of file diff --git a/engine/artifacts/errors/guard.missing_query_parameter.json b/engine/artifacts/errors/guard.missing_query_parameter.json new file mode 100644 index 0000000000..2804c8d9f2 --- /dev/null +++ b/engine/artifacts/errors/guard.missing_query_parameter.json @@ -0,0 +1,5 @@ +{ + "code": "missing_query_parameter", + "group": "guard", + "message": "Missing query parameter required for routing." +} \ No newline at end of file diff --git a/engine/packages/config/src/config/pegboard.rs b/engine/packages/config/src/config/pegboard.rs index e5bae9c70d..69a0a7e630 100644 --- a/engine/packages/config/src/config/pegboard.rs +++ b/engine/packages/config/src/config/pegboard.rs @@ -146,6 +146,13 @@ pub struct Pegboard { /// /// Unit is in milliseconds. pub serverless_drain_grace_period: Option, + + // === KV Preload Settings === + /// Maximum total size of all preloaded KV data sent with the actor start command. + /// Setting to 0 disables all preloading. + /// + /// Unit is in bytes. Default: 1,048,576 (1 MiB). 
+ pub preload_max_total_bytes: Option, } impl Pegboard { @@ -306,4 +313,8 @@ impl Pegboard { pub fn serverless_drain_grace_period(&self) -> u64 { self.serverless_drain_grace_period.unwrap_or(10_000) } + + pub fn preload_max_total_bytes(&self) -> u64 { + self.preload_max_total_bytes.unwrap_or(1_048_576) + } } diff --git a/engine/packages/guard/Cargo.toml b/engine/packages/guard/Cargo.toml index a2b1ea7ad5..29cd4a71be 100644 --- a/engine/packages/guard/Cargo.toml +++ b/engine/packages/guard/Cargo.toml @@ -30,6 +30,7 @@ once_cell.workspace = true pegboard-envoy.workspace = true pegboard-gateway.workspace = true pegboard-gateway2.workspace = true +pegboard-kv-channel.workspace = true pegboard-runner.workspace = true pegboard.workspace = true regex.workspace = true @@ -52,6 +53,7 @@ rustls-pemfile.workspace = true rustls.workspace = true serde_json.workspace = true serde.workspace = true +subtle.workspace = true tokio-tungstenite.workspace = true tokio.workspace = true tracing.workspace = true diff --git a/engine/packages/guard/src/errors.rs b/engine/packages/guard/src/errors.rs index 1686d9082e..61db04f0de 100644 --- a/engine/packages/guard/src/errors.rs +++ b/engine/packages/guard/src/errors.rs @@ -13,6 +13,17 @@ pub struct MissingHeader { pub header: String, } +#[derive(RivetError, Serialize)] +#[error( + "guard", + "missing_query_parameter", + "Missing query parameter required for routing.", + "Missing {parameter} query parameter." 
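The accessor pattern above (an `Option<u64>` field with a hard-coded fallback, where an explicit 0 disables the feature) can be sketched alongside a budget helper. The getter mirrors the diff; `take_within_budget` is a hypothetical illustration of the "maximum total size" semantics, not engine code.

```rust
struct PegboardConfig {
    preload_max_total_bytes: Option<u64>,
}

impl PegboardConfig {
    /// Defaults to 1 MiB; an explicit 0 disables preloading entirely.
    fn preload_max_total_bytes(&self) -> u64 {
        self.preload_max_total_bytes.unwrap_or(1_048_576)
    }
}

/// Count how many leading (key_len, value_len) entries fit in the budget.
fn take_within_budget(entries: &[(u64, u64)], budget: u64) -> usize {
    let mut total = 0u64;
    entries
        .iter()
        .take_while(|(k, v)| {
            total += k + v;
            total <= budget
        })
        .count()
}

fn main() {
    let cfg = PegboardConfig { preload_max_total_bytes: None };
    assert_eq!(cfg.preload_max_total_bytes(), 1_048_576);
    // A zero budget preloads nothing.
    assert_eq!(take_within_budget(&[(8, 100)], 0), 0);
    assert_eq!(take_within_budget(&[(8, 100), (8, 100)], 150), 1);
}
```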
+)] +pub struct MissingQueryParameter { + pub parameter: String, +} + #[derive(RivetError, Serialize)] #[error( "guard", diff --git a/engine/packages/guard/src/routing/envoy.rs b/engine/packages/guard/src/routing/envoy.rs index 9c0d263988..5ee4cedf47 100644 --- a/engine/packages/guard/src/routing/envoy.rs +++ b/engine/packages/guard/src/routing/envoy.rs @@ -3,7 +3,7 @@ use gas::prelude::*; use rivet_guard_core::{RoutingOutput, request_context::RequestContext}; use std::sync::Arc; -use super::{SEC_WEBSOCKET_PROTOCOL, WS_PROTOCOL_TOKEN, X_RIVET_TOKEN}; +use super::{SEC_WEBSOCKET_PROTOCOL, WS_PROTOCOL_TOKEN, X_RIVET_TOKEN, validate_regional_host}; /// Route requests to the envoy service using header-based routing #[tracing::instrument(skip_all)] @@ -45,31 +45,7 @@ async fn route_envoy_internal( ctx: &StandaloneCtx, req_ctx: &RequestContext, ) -> Result { - // Validate that the host is valid for the current datacenter - let current_dc = ctx.config().topology().current_dc()?; - if !current_dc.is_valid_regional_host(req_ctx.hostname()) { - tracing::warn!(hostname=%req_ctx.hostname(), datacenter=?current_dc.name, "invalid host for current datacenter"); - - // Determine valid hosts for error message - let valid_hosts = if let Some(hosts) = ¤t_dc.valid_hosts { - hosts.join(", ") - } else { - current_dc - .public_url - .host_str() - .map(|h| h.to_string()) - .unwrap_or_else(|| "unknown".to_string()) - }; - - return Err(crate::errors::MustUseRegionalHost { - host: req_ctx.hostname().to_string(), - datacenter: current_dc.name.clone(), - valid_hosts, - } - .build()); - } - - tracing::debug!(datacenter = ?current_dc.name, "validated host for datacenter"); + validate_regional_host(ctx, req_ctx)?; // Check auth (if enabled) if let Some(auth) = &ctx.config().auth { diff --git a/engine/packages/guard/src/routing/kv_channel.rs b/engine/packages/guard/src/routing/kv_channel.rs new file mode 100644 index 0000000000..b0bdd46549 --- /dev/null +++ 
b/engine/packages/guard/src/routing/kv_channel.rs @@ -0,0 +1,54 @@ +use anyhow::*; +use gas::prelude::*; +use rivet_guard_core::{RoutingOutput, request_context::RequestContext}; +use std::sync::Arc; +use subtle::ConstantTimeEq; + +use super::validate_regional_host; + +/// Route requests to the KV channel service using path-based routing. +/// Matches path: /kv/connect +#[tracing::instrument(skip_all)] +pub async fn route_request_path_based( + ctx: &StandaloneCtx, + req_ctx: &RequestContext, + handler: &Arc, +) -> Result> { + let path_without_query = req_ctx.path().split('?').next().unwrap_or(req_ctx.path()); + if path_without_query != "/kv/connect" && path_without_query != "/kv/connect/" { + return Ok(None); + } + + tracing::debug!( + hostname = %req_ctx.hostname(), + path = %req_ctx.path(), + "routing to kv channel via path" + ); + + validate_regional_host(ctx, req_ctx)?; + + // Check auth (if enabled). + if let Some(auth) = &ctx.config().auth { + // Extract token from query params. + let url = url::Url::parse(&format!("ws://placeholder{}", req_ctx.path())) + .context("failed to parse URL for auth")?; + let token = url + .query_pairs() + .find(|(k, _)| k == "token") + .map(|(_, v)| v.to_string()) + .ok_or_else(|| { + crate::errors::MissingQueryParameter { + parameter: "token".to_string(), + } + .build() + })?; + + if token.as_bytes().ct_ne(auth.admin_token.read().as_bytes()).into() { + return Err(rivet_api_builder::ApiForbidden.build()); + } + + tracing::debug!("authenticated kv channel connection"); + } + + Ok(Some(RoutingOutput::CustomServe(handler.clone()))) +} diff --git a/engine/packages/guard/src/routing/mod.rs b/engine/packages/guard/src/routing/mod.rs index 73e068cf1a..db025d59a7 100644 --- a/engine/packages/guard/src/routing/mod.rs +++ b/engine/packages/guard/src/routing/mod.rs @@ -1,14 +1,16 @@ use std::sync::Arc; +use anyhow::Result; use gas::prelude::*; use hyper::header::HeaderName; -use rivet_guard_core::RoutingFn; +use rivet_guard_core::{RoutingFn, 
request_context::RequestContext}; use crate::{errors, metrics, shared_state::SharedState}; mod api_public; pub mod actor_path; mod envoy; +mod kv_channel; pub(crate) mod matrix_param_deserializer; pub mod pegboard_gateway; mod runner; @@ -25,9 +27,13 @@ pub(crate) const WS_PROTOCOL_TOKEN: &str = "rivet_token."; #[tracing::instrument(skip_all)] pub fn create_routing_function(ctx: &StandaloneCtx, shared_state: SharedState) -> RoutingFn { let ctx = ctx.clone(); + let kv_channel_handler = Arc::new( + pegboard_kv_channel::PegboardKvChannelCustomServe::new(ctx.clone()), + ); Arc::new(move |req_ctx| { let ctx = ctx.with_ray(req_ctx.ray_id(), req_ctx.req_id()).unwrap(); let shared_state = shared_state.clone(); + let kv_channel_handler = kv_channel_handler.clone(); let hostname = req_ctx.hostname().to_string(); let path = req_ctx.path().to_string(); @@ -71,6 +77,18 @@ pub fn create_routing_function(ctx: &StandaloneCtx, shared_state: SharedState) - return Ok(routing_output); } + // Route KV channel + if let Some(routing_output) = + kv_channel::route_request_path_based(&ctx, req_ctx, &kv_channel_handler) + .await? + { + metrics::ROUTE_TOTAL + .with_label_values(&["kv_channel"]) + .inc(); + + return Ok(routing_output); + } + // MARK: Header- & protocol-based routing (X-Rivet-Target) // Determine target let target = if req_ctx.is_websocket() { @@ -150,3 +168,38 @@ pub fn create_routing_function(ctx: &StandaloneCtx, shared_state: SharedState) - ) }) } + +/// Validates that the request hostname is valid for the current datacenter. +/// Returns an error if the host does not match a valid regional host. 
+pub(crate) fn validate_regional_host( + ctx: &StandaloneCtx, + req_ctx: &RequestContext, +) -> Result<()> { + let current_dc = ctx.config().topology().current_dc()?; + if !current_dc.is_valid_regional_host(req_ctx.hostname()) { + tracing::warn!( + hostname = %req_ctx.hostname(), + datacenter = ?current_dc.name, + "invalid host for current datacenter" + ); + + let valid_hosts = if let Some(hosts) = ¤t_dc.valid_hosts { + hosts.join(", ") + } else { + current_dc + .public_url + .host_str() + .map(|h| h.to_string()) + .unwrap_or_else(|| "unknown".to_string()) + }; + + return Err(errors::MustUseRegionalHost { + host: req_ctx.hostname().to_string(), + datacenter: current_dc.name.clone(), + valid_hosts, + } + .build()); + } + + Ok(()) +} diff --git a/engine/packages/guard/src/routing/pegboard_gateway/mod.rs b/engine/packages/guard/src/routing/pegboard_gateway/mod.rs index 1be69f773e..5070bf44a9 100644 --- a/engine/packages/guard/src/routing/pegboard_gateway/mod.rs +++ b/engine/packages/guard/src/routing/pegboard_gateway/mod.rs @@ -122,9 +122,7 @@ pub async fn route_request( // Find actor to route to let actor_id = Id::parse(&actor_id_str).context("invalid x-rivet-actor header")?; - route_request_inner(ctx, shared_state, req_ctx, actor_id, req_ctx.path(), token) - .await - .map(Some) + route_request_inner(ctx, shared_state, req_ctx, actor_id, req_ctx.path(), token).await } #[derive(Debug)] @@ -200,7 +198,7 @@ async fn route_request_inner( actor_id: Id, stripped_path: &str, _token: Option<&str>, -) -> Result { +) -> Result> { // NOTE: Token validation implemented in EE // Route to peer dc where the actor lives @@ -281,6 +279,7 @@ async fn route_request_inner( destroy_sub2, ) .await + .map(Some) } 1 => { handle_actor_v1( @@ -300,6 +299,7 @@ async fn route_request_inner( destroy_sub2, ) .await + .map(Some) } _ => bail!("unknown actor version"), } diff --git a/engine/packages/guard/src/routing/runner.rs b/engine/packages/guard/src/routing/runner.rs index 
820acc6777..071715d214 100644 --- a/engine/packages/guard/src/routing/runner.rs +++ b/engine/packages/guard/src/routing/runner.rs @@ -2,8 +2,9 @@ use anyhow::Result; use gas::prelude::*; use rivet_guard_core::{RoutingOutput, request_context::RequestContext}; use std::sync::Arc; +use subtle::ConstantTimeEq; -use super::{SEC_WEBSOCKET_PROTOCOL, X_RIVET_TOKEN}; +use super::{SEC_WEBSOCKET_PROTOCOL, X_RIVET_TOKEN, validate_regional_host}; pub(crate) const WS_PROTOCOL_TOKEN: &str = "rivet_token."; /// Route requests to the runner service using header-based routing @@ -46,31 +47,7 @@ async fn route_runner_internal( ctx: &StandaloneCtx, req_ctx: &RequestContext, ) -> Result { - // Validate that the host is valid for the current datacenter - let current_dc = ctx.config().topology().current_dc()?; - if !current_dc.is_valid_regional_host(req_ctx.hostname()) { - tracing::warn!(hostname=%req_ctx.hostname(), datacenter=?current_dc.name, "invalid host for current datacenter"); - - // Determine valid hosts for error message - let valid_hosts = if let Some(hosts) = ¤t_dc.valid_hosts { - hosts.join(", ") - } else { - current_dc - .public_url - .host_str() - .map(|h| h.to_string()) - .unwrap_or_else(|| "unknown".to_string()) - }; - - return Err(crate::errors::MustUseRegionalHost { - host: req_ctx.hostname().to_string(), - datacenter: current_dc.name.clone(), - valid_hosts, - } - .build()); - } - - tracing::debug!(datacenter = ?current_dc.name, "validated host for datacenter"); + validate_regional_host(ctx, req_ctx)?; // Check auth (if enabled) if let Some(auth) = &ctx.config().auth { @@ -106,7 +83,7 @@ async fn route_runner_internal( }; // Validate token - if token != auth.admin_token.read() { + if token.as_bytes().ct_ne(auth.admin_token.read().as_bytes()).into() { return Err(rivet_api_builder::ApiForbidden.build()); } diff --git a/engine/packages/pegboard-envoy/src/conn.rs b/engine/packages/pegboard-envoy/src/conn.rs index 8c3e6ad8d0..9f47c426c9 100644 --- 
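The token check above changes from `token != auth.admin_token.read()` to `subtle::ConstantTimeEq::ct_ne`. The reason is timing: a short-circuiting comparison returns as soon as bytes differ, so response latency leaks how long a correct token prefix is, letting an attacker probe it byte by byte. This stdlib-only sketch shows the principle `subtle` implements; the real code should keep using the crate.

```rust
/// Constant-time byte comparison: always scans every byte, so timing does
/// not reveal the position of the first mismatch. (Lengths are compared up
/// front; only the token contents are treated as secret here.)
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // OR together the XOR of every byte pair; the loop always runs to the
    // end regardless of where a mismatch occurs.
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"admin-token", b"admin-token"));
    assert!(!ct_eq(b"admin-token", b"admin-tokem"));
    assert!(!ct_eq(b"short", b"admin-token"));
}
```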
a/engine/packages/pegboard-envoy/src/conn.rs +++ b/engine/packages/pegboard-envoy/src/conn.rs @@ -126,7 +126,7 @@ pub async fn handle_init( let envoy_key = &conn.envoy_key; let pool_name = &conn.pool_name; let protocol_version = conn.protocol_version; - let (pool_res, missed_commands) = tokio::try_join!( + let (pool_res, mut missed_commands) = tokio::try_join!( ctx.op(pegboard::ops::runner_config::get::Input { runners: vec![(namespace_id, pool_name.clone())], bypass_cache: false, @@ -353,8 +353,37 @@ pub async fn handle_init( // Send missed commands if !missed_commands.is_empty() { + let db = ctx.udb()?; let msg = - versioned::ToEnvoy::wrap_latest(protocol::ToEnvoy::ToEnvoyCommands(missed_commands)); + { + for cmd_wrapper in &mut missed_commands { + if let protocol::Command::CommandStartActor(ref mut start) = + cmd_wrapper.inner + { + let actor_id = cmd_wrapper + .checkpoint + .actor_id + .parse::() + .context( + "failed to parse actor_id from missed envoy command", + )?; + let preloaded = + pegboard::actor_kv::preload::fetch_preloaded_kv( + &db, + pb, + actor_id, + conn.namespace_id, + &start.config.name, + ) + .await?; + start.preloaded_kv = preloaded; + } + } + + versioned::ToEnvoy::wrap_latest(protocol::ToEnvoy::ToEnvoyCommands( + missed_commands, + )) + }; let msg_serialized = msg.serialize(conn.protocol_version)?; conn.ws_handle .send(Message::Binary(msg_serialized.into())) diff --git a/engine/packages/pegboard-envoy/src/tunnel_to_ws_task.rs b/engine/packages/pegboard-envoy/src/tunnel_to_ws_task.rs index e50f3c8c13..32f230fdff 100644 --- a/engine/packages/pegboard-envoy/src/tunnel_to_ws_task.rs +++ b/engine/packages/pegboard-envoy/src/tunnel_to_ws_task.rs @@ -126,25 +126,38 @@ async fn handle_message( protocol::ToEnvoyConn::ToEnvoyCommands(mut command_wrappers) => { // TODO: Parallelize for command_wrapper in &mut command_wrappers { - if let protocol::Command::CommandStartActor(protocol::CommandStartActor { - hibernating_requests, - .. 
- }) = &mut command_wrapper.inner + if let protocol::Command::CommandStartActor(start) = + &mut command_wrapper.inner { + let actor_id = Id::parse(&command_wrapper.checkpoint.actor_id)?; + let actor_name = start.config.name.clone(); let ids = ctx .op(pegboard::ops::actor::hibernating_request::list::Input { - actor_id: Id::parse(&command_wrapper.checkpoint.actor_id)?, + actor_id, }) .await?; // Dynamically populate hibernating request ids - *hibernating_requests = ids + start.hibernating_requests = ids .into_iter() .map(|x| protocol::HibernatingRequest { gateway_id: x.gateway_id, request_id: x.request_id, }) .collect(); + + if start.preloaded_kv.is_none() { + let db = ctx.udb()?; + start.preloaded_kv = + pegboard::actor_kv::preload::fetch_preloaded_kv( + &db, + ctx.config().pegboard(), + actor_id, + conn.namespace_id, + &actor_name, + ) + .await?; + } } } diff --git a/engine/packages/pegboard-kv-channel/Cargo.toml b/engine/packages/pegboard-kv-channel/Cargo.toml new file mode 100644 index 0000000000..5c0ef864e3 --- /dev/null +++ b/engine/packages/pegboard-kv-channel/Cargo.toml @@ -0,0 +1,35 @@ +[package] +name = "pegboard-kv-channel" +version.workspace = true +authors.workspace = true +license.workspace = true +edition.workspace = true + +[dependencies] +anyhow.workspace = true +async-trait.workspace = true +lazy_static.workspace = true +bytes.workspace = true +futures-util.workspace = true +gas.workspace = true +http-body.workspace = true +http-body-util.workspace = true +# TODO: Make this use workspace version +hyper = "1.6" +hyper-tungstenite.workspace = true +rivet-config.workspace = true +rivet-error.workspace = true +rivet-guard-core.workspace = true +rivet-metrics.workspace = true +rivet-runtime.workspace = true +rivet-kv-channel-protocol.workspace = true +tokio.workspace = true +tokio-tungstenite.workspace = true +tracing.workspace = true +universaldb.workspace = true +url.workspace = true +uuid.workspace = true + +pegboard.workspace = true 
+namespace.workspace = true +util.workspace = true diff --git a/engine/packages/pegboard-kv-channel/src/lib.rs b/engine/packages/pegboard-kv-channel/src/lib.rs new file mode 100644 index 0000000000..79a45fcf2a --- /dev/null +++ b/engine/packages/pegboard-kv-channel/src/lib.rs @@ -0,0 +1,870 @@ +//! KV channel WebSocket handler for the engine. +//! +//! Serves the KV channel protocol at /kv/connect for native SQLite to route +//! page-level KV operations over WebSocket. See +//! docs-internal/engine/NATIVE_SQLITE_DATA_CHANNEL.md for the full spec. + +mod metrics; + +use std::collections::{HashMap, HashSet}; +use std::sync::atomic::{AtomicI64, Ordering}; +use std::sync::Arc; +use std::time::{Duration, Instant}; + +use anyhow::{Context, Result}; +use async_trait::async_trait; +use bytes::Bytes; +use futures_util::TryStreamExt; +use gas::prelude::*; +use http_body_util::Full; +use hyper::{Response, StatusCode}; +use hyper_tungstenite::tungstenite::Message; +use pegboard::actor_kv; +use rivet_guard_core::{ + ResponseBody, WebSocketHandle, custom_serve::CustomServeTrait, + request_context::RequestContext, +}; +use tokio::sync::{Mutex, mpsc, watch}; +use tokio_tungstenite::tungstenite::protocol::frame::CloseFrame; +use uuid::Uuid; + +pub use rivet_kv_channel_protocol as protocol; + +use actor_kv::{MAX_KEY_SIZE, MAX_KEYS, MAX_PUT_PAYLOAD_SIZE, MAX_VALUE_SIZE}; + +/// Overhead added by KeyWrapper tuple packing (NESTED prefix byte + NIL suffix +/// byte). Must match `KeyWrapper::tuple_len` in +/// `engine/packages/pegboard/src/keys/actor_kv.rs`. +const KEY_WRAPPER_OVERHEAD: usize = 2; + +/// Maximum number of actors a single connection can have open simultaneously. +/// Prevents a malicious client from exhausting memory via unbounded actor_channels. +const MAX_ACTORS_PER_CONNECTION: usize = 1000; + +/// Shared state across all KV channel connections. 
+pub struct KvChannelState { + /// Maps actor_id string to the connection_id holding the single-writer lock and a reference + /// to that connection's open_actors set. The Arc reference allows lock eviction to remove the + /// actor from the old connection's set without acquiring the global lock on the KV hot path. + actor_locks: Mutex>>)>>, +} + +pub struct PegboardKvChannelCustomServe { + ctx: StandaloneCtx, + state: Arc, +} + +impl PegboardKvChannelCustomServe { + pub fn new(ctx: StandaloneCtx) -> Self { + Self { + ctx, + state: Arc::new(KvChannelState { + actor_locks: Mutex::new(HashMap::new()), + }), + } + } +} + +#[async_trait] +impl CustomServeTrait for PegboardKvChannelCustomServe { + #[tracing::instrument(skip_all)] + async fn handle_request( + &self, + _req: hyper::Request>, + _req_ctx: &mut RequestContext, + ) -> Result> { + let response = Response::builder() + .status(StatusCode::OK) + .header("Content-Type", "text/plain") + .body(ResponseBody::Full(Full::new(Bytes::from( + "kv-channel WebSocket endpoint", + ))))?; + Ok(response) + } + + #[tracing::instrument(skip_all)] + async fn handle_websocket( + &self, + req_ctx: &mut RequestContext, + ws_handle: WebSocketHandle, + _after_hibernation: bool, + ) -> Result> { + let ctx = self.ctx.with_ray(req_ctx.ray_id(), req_ctx.req_id())?; + let state = self.state.clone(); + + // Parse URL params. + let url = url::Url::parse(&format!("ws://placeholder{}", req_ctx.path())) + .context("failed to parse WebSocket URL")?; + let params: HashMap = url + .query_pairs() + .map(|(k, v)| (k.to_string(), v.to_string())) + .collect(); + + // Validate protocol version. + let protocol_version: u32 = params + .get("protocol_version") + .context("missing protocol_version query param")? + .parse() + .context("invalid protocol_version")?; + anyhow::ensure!( + protocol_version == protocol::PROTOCOL_VERSION, + "unsupported protocol version: {protocol_version}, expected {}", + protocol::PROTOCOL_VERSION + ); + + // Resolve namespace. 
+ let namespace_name = params + .get("namespace") + .context("missing namespace query param")? + .clone(); + let namespace = ctx + .op(namespace::ops::resolve_for_name_global::Input { + name: namespace_name.clone(), + }) + .await + .with_context(|| format!("failed to resolve namespace: {namespace_name}"))? + .ok_or_else(|| namespace::errors::Namespace::NotFound.build()) + .with_context(|| format!("namespace not found: {namespace_name}"))?; + + // Assign connection ID. Uses UUID to eliminate any possibility of ID collision. + let conn_id = Uuid::new_v4(); + let namespace_id = namespace.namespace_id; + + tracing::info!(%conn_id, %namespace_id, "kv channel connection established"); + + // Track actors opened by this connection for cleanup on disconnect. + let open_actors: Arc>> = Arc::new(Mutex::new(HashSet::new())); + let last_pong_ts = Arc::new(AtomicI64::new(util::timestamp::now())); + + // Run the connection loop. Any error triggers cleanup below. + let result = run_connection( + ctx.clone(), + state.clone(), + ws_handle, + conn_id, + namespace_id, + open_actors.clone(), + last_pong_ts, + ) + .await; + + // Release all locks held by this connection. Only remove entries where the lock is still + // held by this conn_id, since another connection may have evicted it via ActorOpenRequest. 
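The lock semantics described in these comments (eviction on open, conditional release on disconnect) reduce to a small synchronous sketch. This is a simplification for illustration: the real table also stores an `Arc` to the holder's `open_actors` set and lives behind an async `Mutex`; those details are omitted here.

```rust
use std::collections::HashMap;

type ConnId = u64;

/// Single-writer lock table: opening an actor always hands the lock to the
/// opener (evicting any previous holder), and disconnect cleanup releases
/// only locks this connection still holds, since another connection may
/// have evicted them in the meantime.
#[derive(Default)]
struct ActorLocks {
    locks: HashMap<String, ConnId>,
}

impl ActorLocks {
    fn open(&mut self, actor: &str, conn: ConnId) {
        self.locks.insert(actor.to_string(), conn);
    }

    fn release_all(&mut self, conn: ConnId, opened: &[&str]) {
        for actor in opened {
            if self.locks.get(*actor) == Some(&conn) {
                self.locks.remove(*actor);
            }
        }
    }

    fn holder(&self, actor: &str) -> Option<ConnId> {
        self.locks.get(actor).copied()
    }
}

fn main() {
    let mut table = ActorLocks::default();
    table.open("actor-a", 1);
    table.open("actor-a", 2); // connection 2 evicts connection 1's lock
    table.release_all(1, &["actor-a"]); // no-op: 1 no longer holds it
    assert_eq!(table.holder("actor-a"), Some(2));
    table.release_all(2, &["actor-a"]);
    assert_eq!(table.holder("actor-a"), None);
}
```

The conditional check in `release_all` is the crux: without it, a stale disconnect from an evicted connection would drop a lock now owned by someone else.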
+ { + let open = open_actors.lock().await; + let mut locks = state.actor_locks.lock().await; + for actor_id in open.iter() { + if let Some((lock_conn, _)) = locks.get(actor_id) { + if *lock_conn == conn_id { + locks.remove(actor_id); + tracing::debug!(%conn_id, %actor_id, "released actor lock on disconnect"); + } + } + } + } + + tracing::info!(%conn_id, "kv channel connection closed"); + + result.map(|_| None) + } +} + +// MARK: Connection lifecycle + +async fn run_connection( + ctx: StandaloneCtx, + state: Arc<KvChannelState>, + ws_handle: WebSocketHandle, + conn_id: Uuid, + namespace_id: Id, + open_actors: Arc<Mutex<HashSet<String>>>, + last_pong_ts: Arc<AtomicI64>, +) -> Result<()> { + let ping_interval = + Duration::from_millis(ctx.config().pegboard().runner_update_ping_interval_ms()); + let ping_timeout_ms = ctx.config().pegboard().runner_ping_timeout_ms(); + + let (ping_abort_tx, ping_abort_rx) = watch::channel(()); + + // Spawn ping task. + let ping_ws = ws_handle.clone(); + let ping_last_pong = last_pong_ts.clone(); + let ping = tokio::spawn(async move { + ping_task( + ping_ws, + ping_last_pong, + ping_abort_rx, + ping_interval, + ping_timeout_ms, + ) + .await + }); + + // Run message loop. + let msg_result = message_loop( + &ctx, + &state, + &ws_handle, + conn_id, + namespace_id, + &open_actors, + &last_pong_ts, + ) + .await; + + // Signal ping task to stop and wait for it. + let _ = ping_abort_tx.send(()); + let _ = ping.await; + + msg_result +} + +// MARK: Ping task + +async fn ping_task( + ws_handle: WebSocketHandle, + last_pong_ts: Arc<AtomicI64>, + mut abort_rx: watch::Receiver<()>, + interval: Duration, + timeout_ms: i64, +) -> Result<()> { + loop { + tokio::select! { + _ = tokio::time::sleep(interval) => {} + _ = abort_rx.changed() => return Ok(()), + } + + // Check pong timeout.
+ let last = last_pong_ts.load(Ordering::Relaxed); + let now = util::timestamp::now(); + if now - last > timeout_ms { + tracing::warn!("kv channel ping timed out, closing connection"); + return Err(anyhow::anyhow!("ping timed out")); + } + + // Send ping. + let ping = protocol::ToClient::ToClientPing(protocol::ToClientPing { ts: now }); + let data = protocol::encode_to_client(&ping)?; + ws_handle.send(Message::Binary(data.into())).await?; + } +} + +// MARK: Message loop + +async fn message_loop( + ctx: &StandaloneCtx, + state: &Arc<KvChannelState>, + ws_handle: &WebSocketHandle, + conn_id: Uuid, + namespace_id: Id, + open_actors: &Arc<Mutex<HashSet<String>>>, + last_pong_ts: &AtomicI64, +) -> Result<()> { + let ws_rx = ws_handle.recv(); + let mut ws_rx = ws_rx.lock().await; + let mut term_signal = rivet_runtime::TermSignal::get(); + + // Per-actor channel routing for concurrent cross-actor request processing. + // Each actor gets its own mpsc channel and a spawned task that drains it + // sequentially, preserving intra-actor ordering while allowing inter-actor + // parallelism. Do not use tokio::spawn per request as that would break + // optimistic pipelining and journal write ordering. + // See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md Finding 2. + let mut actor_channels: HashMap<String, mpsc::Sender<protocol::ToServerRequest>> = + HashMap::new(); + let mut actor_tasks = tokio::task::JoinSet::new(); + + // Use an async block so that early returns (via ?) still run cleanup below. + let result = async { + loop { + let msg = tokio::select! { + res = ws_rx.try_next() => { + match res? { + Some(msg) => msg, + None => { + tracing::debug!("websocket closed"); + return Ok(()); + } + } + } + _ = term_signal.recv() => { + // Send ToClientClose before shutting down.
+ let close_msg = protocol::ToClient::ToClientClose; + let data = protocol::encode_to_client(&close_msg)?; + let _ = ws_handle.send(Message::Binary(data.into())).await; + return Ok(()); + } + }; + + match msg { + Message::Binary(data) => { + handle_binary_message( + ctx, + state, + ws_handle, + conn_id, + namespace_id, + open_actors, + last_pong_ts, + &data, + &mut actor_channels, + &mut actor_tasks, + ) + .await?; + } + Message::Close(_) => { + tracing::debug!("websocket close frame received"); + return Ok(()); + } + _ => {} + } + } + } + .await; + + // Drop all senders to signal per-actor tasks to stop, then wait for them + // to finish draining any in-flight requests. + actor_channels.clear(); + while actor_tasks.join_next().await.is_some() {} + + result +} + +async fn handle_binary_message( + ctx: &StandaloneCtx, + state: &Arc<KvChannelState>, + ws_handle: &WebSocketHandle, + conn_id: Uuid, + namespace_id: Id, + open_actors: &Arc<Mutex<HashSet<String>>>, + last_pong_ts: &AtomicI64, + data: &[u8], + actor_channels: &mut HashMap<String, mpsc::Sender<protocol::ToServerRequest>>, + actor_tasks: &mut tokio::task::JoinSet<()>, +) -> Result<()> { + let msg = match protocol::decode_to_server(data) { + Ok(msg) => msg, + Err(err) => { + tracing::warn!( + ?err, + data_len = data.len(), + "failed to deserialize kv channel message" + ); + return Ok(()); + } + }; + + match msg { + protocol::ToServer::ToServerPong(pong) => { + last_pong_ts.store(util::timestamp::now(), Ordering::Relaxed); + tracing::trace!(ts = pong.ts, "received pong"); + } + protocol::ToServer::ToServerRequest(req) => { + let is_close = matches!(req.data, protocol::RequestData::ActorCloseRequest); + let actor_id = req.actor_id.clone(); + let request_id = req.request_id; + + // Create a per-actor channel and task on first request for this actor.
+ if !actor_channels.contains_key(&actor_id) { + let (tx, rx) = mpsc::channel(64); + actor_tasks.spawn(actor_request_task( + ctx.clone(), + state.clone(), + ws_handle.clone(), + conn_id, + namespace_id, + open_actors.clone(), + rx, + )); + actor_channels.insert(actor_id.clone(), tx); + } + + // Route request to the actor's channel for sequential processing. + if let Some(tx) = actor_channels.get(&actor_id) { + match tx.try_send(req) { + Ok(()) => {} + Err(mpsc::error::TrySendError::Full(_)) => { + tracing::warn!(%actor_id, "per-actor channel full, applying backpressure"); + send_response( + ws_handle, + request_id, + error_response( + "backpressure", + "too many in-flight requests for this actor", + ), + ) + .await; + } + Err(mpsc::error::TrySendError::Closed(_)) => { + tracing::warn!(%actor_id, "per-actor task channel closed, removing dead entry"); + actor_channels.remove(&actor_id); + send_response( + ws_handle, + request_id, + error_response( + "internal_error", + "internal error", + ), + ) + .await; + } + } + } + + // Remove the channel entry on close so the task exits after draining + // remaining requests and resources are freed. + if is_close { + actor_channels.remove(&actor_id); + } + } + } + + Ok(()) +} + +/// Processes requests for a single actor sequentially, preserving intra-actor +/// ordering. Spawned once per actor per connection. Exits when the sender is +/// dropped (connection end) or after processing an ActorCloseRequest. +async fn actor_request_task( + ctx: StandaloneCtx, + state: Arc<KvChannelState>, + ws_handle: WebSocketHandle, + conn_id: Uuid, + namespace_id: Id, + open_actors: Arc<Mutex<HashSet<String>>>, + mut rx: mpsc::Receiver<protocol::ToServerRequest>, +) { + // Cached actor resolution. Populated on first KV request, reused for all + // subsequent requests. Actor name is immutable so this never goes stale.
+ let mut cached_actor: Option<(Id, String)> = None; + + while let Some(req) = rx.recv().await { + let is_close = matches!(req.data, protocol::RequestData::ActorCloseRequest); + + let response_data = match &req.data { + // Open/close are lifecycle ops that don't need a resolved actor. + protocol::RequestData::ActorOpenRequest + | protocol::RequestData::ActorCloseRequest => { + handle_request(&ctx, &state, conn_id, namespace_id, &open_actors, &req).await + } + // KV ops: resolve once, cache, reuse. + _ => { + let is_open = open_actors.lock().await.contains(&req.actor_id); + if !is_open { + let locks = state.actor_locks.lock().await; + if locks.contains_key(&req.actor_id) { + error_response( + "actor_locked", + "actor is locked by another connection", + ) + } else { + error_response( + "actor_not_open", + "actor is not opened on this connection", + ) + } + } else { + // Lazy-resolve and cache. + if cached_actor.is_none() { + match resolve_actor(&ctx, &req.actor_id, namespace_id).await { + Ok(v) => { + cached_actor = Some(v); + } + Err(resp) => { + // Don't cache failures. Next request will retry. + send_response(&ws_handle, req.request_id, resp).await; + if is_close { + break; + } + continue; + } + } + } + let (parsed_id, actor_name) = cached_actor.as_ref().unwrap(); + + let recipient = actor_kv::Recipient { + actor_id: *parsed_id, + namespace_id, + name: actor_name.clone(), + }; + + match &req.data { + protocol::RequestData::KvGetRequest(body) => { + handle_kv_get(&ctx, &recipient, body).await + } + protocol::RequestData::KvPutRequest(body) => { + handle_kv_put(&ctx, &recipient, body).await + } + protocol::RequestData::KvDeleteRequest(body) => { + handle_kv_delete(&ctx, &recipient, body).await + } + protocol::RequestData::KvDeleteRangeRequest(body) => { + handle_kv_delete_range(&ctx, &recipient, body).await + } + _ => unreachable!(), + } + } + } + }; + + send_response(&ws_handle, req.request_id, response_data).await; + + // Stop processing after a close request. 
The sender is also removed + // from actor_channels by the message loop so no new requests arrive. + if is_close { + break; + } + } +} + +/// Encode and send a response to the client. Logs warnings on failure. +async fn send_response( + ws_handle: &WebSocketHandle, + request_id: u32, + data: protocol::ResponseData, +) { + let response = protocol::ToClient::ToClientResponse(protocol::ToClientResponse { + request_id, + data, + }); + + match protocol::encode_to_client(&response) { + Ok(encoded) => { + if let Err(err) = ws_handle.send(Message::Binary(encoded.into())).await { + tracing::warn!(?err, "failed to send kv channel response from actor task"); + } + } + Err(err) => { + tracing::warn!(?err, "failed to encode kv channel response"); + } + } +} + +// MARK: Request handling + +/// Handles actor lifecycle requests (open/close). KV operations are handled +/// directly in `actor_request_task` with cached actor resolution. +async fn handle_request( + _ctx: &StandaloneCtx, + state: &KvChannelState, + conn_id: Uuid, + _namespace_id: Id, + open_actors: &Arc<Mutex<HashSet<String>>>, + req: &protocol::ToServerRequest, +) -> protocol::ResponseData { + match &req.data { + protocol::RequestData::ActorOpenRequest => { + handle_actor_open(state, conn_id, open_actors, &req.actor_id).await + } + protocol::RequestData::ActorCloseRequest => { + handle_actor_close(state, conn_id, open_actors, &req.actor_id).await + } + _ => unreachable!("KV operations are handled in actor_request_task"), + } +} + +// MARK: Actor open/close + +async fn handle_actor_open( + state: &KvChannelState, + conn_id: Uuid, + open_actors: &Arc<Mutex<HashSet<String>>>, + actor_id: &str, +) -> protocol::ResponseData { + // Reject if this connection already has too many actors open.
+ { + let current_count = open_actors.lock().await.len(); + if current_count >= MAX_ACTORS_PER_CONNECTION { + return error_response( + "too_many_actors", + &format!( + "connection has too many open actors (max {MAX_ACTORS_PER_CONNECTION})" + ), + ); + } + } + + let mut locks = state.actor_locks.lock().await; + + // If the actor is locked by a different connection, unconditionally evict the old lock. + // This handles reconnection scenarios where the server hasn't detected the old connection's + // disconnect yet. The old connection's next KV request will fail the fast-path check + // (open_actors.contains) and return actor_not_open. + // See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md Finding 4. + if let Some((existing_conn, old_open_actors)) = locks.get(actor_id) { + if *existing_conn != conn_id { + old_open_actors.lock().await.remove(actor_id); + tracing::info!( + %conn_id, + old_conn_id = %existing_conn, + %actor_id, + "evicted stale actor lock from old connection" + ); + } + } + + locks.insert(actor_id.to_string(), (conn_id, open_actors.clone())); + open_actors.lock().await.insert(actor_id.to_string()); + tracing::debug!(%conn_id, %actor_id, "actor lock acquired"); + protocol::ResponseData::ActorOpenResponse +} + +async fn handle_actor_close( + state: &KvChannelState, + conn_id: Uuid, + open_actors: &Arc<Mutex<HashSet<String>>>, + actor_id: &str, +) -> protocol::ResponseData { + let mut locks = state.actor_locks.lock().await; + + if let Some((lock_conn, _)) = locks.get(actor_id) { + if *lock_conn == conn_id { + locks.remove(actor_id); + open_actors.lock().await.remove(actor_id); + tracing::debug!(%conn_id, %actor_id, "actor lock released"); + } + } + + protocol::ResponseData::ActorCloseResponse +} + +// MARK: KV operations + +async fn handle_kv_get( + ctx: &StandaloneCtx, + recipient: &actor_kv::Recipient, + body: &protocol::KvGetRequest, +) -> protocol::ResponseData { + let start = Instant::now(); + metrics::KV_CHANNEL_REQUESTS_TOTAL.with_label_values(&["get"]).inc(); +
metrics::KV_CHANNEL_REQUEST_KEYS.with_label_values(&["get"]).observe(body.keys.len() as f64); + + if let Err(resp) = validate_keys(&body.keys) { + return resp; + } + + let udb = match ctx.udb() { + Ok(udb) => udb, + Err(err) => return internal_error(&err), + }; + + let result = match actor_kv::get(&*udb, recipient, body.keys.clone()).await { + Ok((keys, values, _metadata)) => { + protocol::ResponseData::KvGetResponse(protocol::KvGetResponse { keys, values }) + } + Err(err) => internal_error(&err), + }; + metrics::KV_CHANNEL_REQUEST_DURATION.with_label_values(&["get"]).observe(start.elapsed().as_secs_f64()); + result +} + +async fn handle_kv_put( + ctx: &StandaloneCtx, + recipient: &actor_kv::Recipient, + body: &protocol::KvPutRequest, +) -> protocol::ResponseData { + let start = Instant::now(); + metrics::KV_CHANNEL_REQUESTS_TOTAL.with_label_values(&["put"]).inc(); + metrics::KV_CHANNEL_REQUEST_KEYS.with_label_values(&["put"]).observe(body.keys.len() as f64); + + // Validate keys/values length match. + if body.keys.len() != body.values.len() { + return error_response( + "keys_values_length_mismatch", + "keys and values must have the same length", + ); + } + + // Validate batch size. 
+ if body.keys.len() > MAX_KEYS { + return error_response( + "batch_too_large", + &format!("a maximum of {MAX_KEYS} entries is allowed"), + ); + } + + for key in &body.keys { + if key.len() + KEY_WRAPPER_OVERHEAD > MAX_KEY_SIZE { + return error_response( + "key_too_large", + &format!("key is too long (max {} bytes)", MAX_KEY_SIZE - KEY_WRAPPER_OVERHEAD), + ); + } + } + for value in &body.values { + if value.len() > MAX_VALUE_SIZE { + return error_response( + "value_too_large", + &format!("value is too large (max {} KiB)", MAX_VALUE_SIZE / 1024), + ); + } + } + + let payload_size: usize = body.keys.iter().map(|k| k.len() + KEY_WRAPPER_OVERHEAD).sum::<usize>() + + body.values.iter().map(|v| v.len()).sum::<usize>(); + if payload_size > MAX_PUT_PAYLOAD_SIZE { + return error_response( + "payload_too_large", + &format!( + "total payload is too large (max {} KiB)", + MAX_PUT_PAYLOAD_SIZE / 1024 + ), + ); + } + + let udb = match ctx.udb() { + Ok(udb) => udb, + Err(err) => return internal_error(&err), + }; + + let result = match actor_kv::put(&*udb, recipient, body.keys.clone(), body.values.clone()).await { + Ok(()) => protocol::ResponseData::KvPutResponse, + Err(err) => { + let rivet_err = rivet_error::RivetError::extract(&err); + if rivet_err.code() == "kv_storage_quota_exceeded" { + error_response("storage_quota_exceeded", rivet_err.message()) + } else { + internal_error(&err) + } + } + }; + metrics::KV_CHANNEL_REQUEST_DURATION.with_label_values(&["put"]).observe(start.elapsed().as_secs_f64()); + result +} + +async fn handle_kv_delete( + ctx: &StandaloneCtx, + recipient: &actor_kv::Recipient, + body: &protocol::KvDeleteRequest, +) -> protocol::ResponseData { + let start = Instant::now(); + metrics::KV_CHANNEL_REQUESTS_TOTAL.with_label_values(&["delete"]).inc(); + metrics::KV_CHANNEL_REQUEST_KEYS.with_label_values(&["delete"]).observe(body.keys.len() as f64); + + if let Err(resp) = validate_keys(&body.keys) { + return resp; + } + + let udb = match ctx.udb() { + Ok(udb) => udb, +
Err(err) => return internal_error(&err), + }; + + let result = match actor_kv::delete(&*udb, recipient, body.keys.clone()).await { + Ok(()) => protocol::ResponseData::KvDeleteResponse, + Err(err) => internal_error(&err), + }; + metrics::KV_CHANNEL_REQUEST_DURATION.with_label_values(&["delete"]).observe(start.elapsed().as_secs_f64()); + result +} + +async fn handle_kv_delete_range( + ctx: &StandaloneCtx, + recipient: &actor_kv::Recipient, + body: &protocol::KvDeleteRangeRequest, +) -> protocol::ResponseData { + let start = Instant::now(); + metrics::KV_CHANNEL_REQUESTS_TOTAL.with_label_values(&["delete_range"]).inc(); + if body.start.len() + KEY_WRAPPER_OVERHEAD > MAX_KEY_SIZE { + return error_response( + "key_too_large", + &format!("start key is too long (max {} bytes)", MAX_KEY_SIZE - KEY_WRAPPER_OVERHEAD), + ); + } + if body.end.len() + KEY_WRAPPER_OVERHEAD > MAX_KEY_SIZE { + return error_response( + "key_too_large", + &format!("end key is too long (max {} bytes)", MAX_KEY_SIZE - KEY_WRAPPER_OVERHEAD), + ); + } + + let udb = match ctx.udb() { + Ok(udb) => udb, + Err(err) => return internal_error(&err), + }; + + let result = match actor_kv::delete_range(&*udb, recipient, body.start.clone(), body.end.clone()).await { + Ok(()) => protocol::ResponseData::KvDeleteResponse, + Err(err) => internal_error(&err), + }; + metrics::KV_CHANNEL_REQUEST_DURATION.with_label_values(&["delete_range"]).observe(start.elapsed().as_secs_f64()); + result +} + +// MARK: Helpers + +/// Look up an actor by ID and return the parsed ID and actor name. +/// +/// Defense-in-depth: verifies the actor belongs to the authenticated namespace. +/// The admin_token is a global credential, so this is not strictly necessary +/// today, but prevents cross-namespace access if a less-privileged auth +/// mechanism is introduced in the future. 
+async fn resolve_actor( + ctx: &StandaloneCtx, + actor_id: &str, + expected_namespace_id: Id, +) -> std::result::Result<(Id, String), protocol::ResponseData> { + let parsed_id = Id::parse(actor_id).map_err(|err| { + error_response( + "actor_not_found", + &format!("invalid actor id: {err}"), + ) + })?; + + let actor = ctx + .op(pegboard::ops::actor::get_for_runner::Input { + actor_id: parsed_id, + }) + .await + .map_err(|err| internal_error(&err))?; + + match actor { + Some(actor) => { + if actor.namespace_id != expected_namespace_id { + return Err(error_response( + "actor_not_found", + "actor does not exist or is not running", + )); + } + Ok((parsed_id, actor.name)) + } + None => Err(error_response( + "actor_not_found", + "actor does not exist or is not running", + )), + } +} + +/// Validate a list of KV keys against size and count limits. +fn validate_keys(keys: &[protocol::KvKey]) -> std::result::Result<(), protocol::ResponseData> { + if keys.len() > MAX_KEYS { + return Err(error_response( + "batch_too_large", + &format!("a maximum of {MAX_KEYS} keys is allowed"), + )); + } + for key in keys { + if key.len() + KEY_WRAPPER_OVERHEAD > MAX_KEY_SIZE { + return Err(error_response( + "key_too_large", + &format!("key is too long (max {} bytes)", MAX_KEY_SIZE - KEY_WRAPPER_OVERHEAD), + )); + } + } + Ok(()) +} + +fn error_response(code: &str, message: &str) -> protocol::ResponseData { + protocol::ResponseData::ErrorResponse(protocol::ErrorResponse { + code: code.to_string(), + message: message.to_string(), + }) +} + +/// Log an internal error with full details server-side and return a generic +/// error message to the client. Prevents leaking stack traces, database errors, +/// or other internal state over the wire. 
+fn internal_error(err: &anyhow::Error) -> protocol::ResponseData { + tracing::error!(?err, "kv channel internal error"); + error_response("internal_error", "internal error") +} diff --git a/engine/packages/pegboard-kv-channel/src/metrics.rs b/engine/packages/pegboard-kv-channel/src/metrics.rs new file mode 100644 index 0000000000..6c71756979 --- /dev/null +++ b/engine/packages/pegboard-kv-channel/src/metrics.rs @@ -0,0 +1,26 @@ +use rivet_metrics::{BUCKETS, REGISTRY, prometheus::*}; + +lazy_static::lazy_static! { + pub static ref KV_CHANNEL_REQUEST_DURATION: HistogramVec = register_histogram_vec_with_registry!( + "pegboard_kv_channel_request_duration_seconds", + "Duration of KV channel handler requests.", + &["op"], + BUCKETS.to_vec(), + *REGISTRY + ).unwrap(); + + pub static ref KV_CHANNEL_REQUEST_KEYS: HistogramVec = register_histogram_vec_with_registry!( + "pegboard_kv_channel_request_keys", + "Number of keys per KV channel request.", + &["op"], + vec![1.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0, 128.0], + *REGISTRY + ).unwrap(); + + pub static ref KV_CHANNEL_REQUESTS_TOTAL: IntCounterVec = register_int_counter_vec_with_registry!( + "pegboard_kv_channel_requests_total", + "Total KV channel requests handled.", + &["op"], + *REGISTRY + ).unwrap(); +} diff --git a/engine/packages/pegboard-outbound/src/lib.rs b/engine/packages/pegboard-outbound/src/lib.rs index 3205262642..d45a1726ec 100644 --- a/engine/packages/pegboard-outbound/src/lib.rs +++ b/engine/packages/pegboard-outbound/src/lib.rs @@ -4,7 +4,7 @@ use gas::prelude::*; use pegboard::pubsub_subjects::ServerlessOutboundSubject; use reqwest::header::{HeaderName, HeaderValue}; use reqwest_eventsource as sse; -use rivet_envoy_protocol::{self as protocol, versioned}; +use rivet_envoy_protocol::{self as protocol, PROTOCOL_VERSION, versioned}; use rivet_runtime::TermSignal; use rivet_types::actor::RunnerPoolError; use rivet_types::runner_configs::RunnerConfigKind; @@ -192,6 +192,16 @@ async fn handle(ctx: 
&StandaloneCtx, packet: protocol::ToOutbound) -> Result<()> return Ok(()); }; + let udb = ctx.udb()?; + let preloaded_kv = pegboard::actor_kv::preload::fetch_preloaded_kv( + &udb, + ctx.config().pegboard(), + actor_id, + namespace_id, + &actor_config.name, + ) + .await?; + let payload = versioned::ToEnvoy::wrap_latest(protocol::ToEnvoy::ToEnvoyCommands(vec![ protocol::CommandWrapper { checkpoint, @@ -200,10 +210,11 @@ async fn handle(ctx: &StandaloneCtx, packet: protocol::ToOutbound) -> Result<()> // Empty because request ids are ephemeral. This is intercepted by guard and // populated before it reaches the envoy hibernating_requests: Vec::new(), + preloaded_kv, }), }, ])) - .serialize_with_embedded_version(pool.protocol_version.unwrap_or(1))?; + .serialize_with_embedded_version(pool.protocol_version.unwrap_or(PROTOCOL_VERSION))?; let RunnerConfigKind::Serverless { url, diff --git a/engine/packages/pegboard-runner/Cargo.toml b/engine/packages/pegboard-runner/Cargo.toml index 604038952b..3566407f40 100644 --- a/engine/packages/pegboard-runner/Cargo.toml +++ b/engine/packages/pegboard-runner/Cargo.toml @@ -21,7 +21,6 @@ lazy_static.workspace = true rand.workspace = true rivet-config.workspace = true rivet-data.workspace = true -rivet-envoy-protocol.workspace = true rivet-error.workspace = true rivet-guard-core.workspace = true rivet-metrics.workspace = true diff --git a/engine/packages/pegboard-runner/src/lib.rs b/engine/packages/pegboard-runner/src/lib.rs index fcd7a7f649..db97f070fd 100644 --- a/engine/packages/pegboard-runner/src/lib.rs +++ b/engine/packages/pegboard-runner/src/lib.rs @@ -26,7 +26,6 @@ mod ws_to_tunnel_task; enum LifecycleResult { Closed, Aborted, - Evicted, } pub struct PegboardRunnerWsCustomServe { @@ -223,40 +222,34 @@ impl CustomServeTrait for PegboardRunnerWsCustomServe { ); // Determine single result from all tasks - let mut lifecycle_res = match (tunnel_to_ws_res, ws_to_tunnel_res, ping_res) { + let lifecycle_res = match (tunnel_to_ws_res, 
ws_to_tunnel_res, ping_res) { // Prefer error (Err(err), _, _) => Err(err), (_, Err(err), _) => Err(err), (_, _, Err(err)) => Err(err), - // Prefer non aborted result + // Prefer non aborted result if both succeed (Ok(res), Ok(LifecycleResult::Aborted), _) => Ok(res), (Ok(LifecycleResult::Aborted), Ok(res), _) => Ok(res), // Unlikely case (res, _, _) => res, }; - if let Ok(LifecycleResult::Evicted) = &lifecycle_res { - lifecycle_res = Err(errors::WsError::Eviction.build()); - } - // Clear alloc idx if not evicted - else { - // Make runner immediately ineligible when it disconnects - let update_alloc_res = self - .ctx - .op(pegboard::ops::runner::update_alloc_idx::Input { - runners: vec![pegboard::ops::runner::update_alloc_idx::Runner { - runner_id: conn.runner_id, - action: Action::ClearIdx, - }], - }) - .await; - if let Err(err) = update_alloc_res { - tracing::error!( - runner_id=?conn.runner_id, - ?err, - "failed to evict runner from allocation index during disconnect" - ); - } + // Make runner immediately ineligible when it disconnects + let update_alloc_res = self + .ctx + .op(pegboard::ops::runner::update_alloc_idx::Input { + runners: vec![pegboard::ops::runner::update_alloc_idx::Runner { + runner_id: conn.runner_id, + action: Action::ClearIdx, + }], + }) + .await; + if let Err(err) = update_alloc_res { + tracing::error!( + runner_id=?conn.runner_id, + ?err, + "critical: failed to evict runner from allocation index during disconnect" + ); } tracing::debug!(%topic, "runner websocket closed"); diff --git a/engine/packages/pegboard-runner/src/metrics.rs b/engine/packages/pegboard-runner/src/metrics.rs index c5edab2c14..931d0c9077 100644 --- a/engine/packages/pegboard-runner/src/metrics.rs +++ b/engine/packages/pegboard-runner/src/metrics.rs @@ -31,13 +31,13 @@ lazy_static::lazy_static! 
{ ).unwrap(); pub static ref EVENT_MULTIPLEXER_COUNT: IntGauge = register_int_gauge_with_registry!( - "pegboard_runner_event_multiplexer_count", + "pegboard_event_multiplexer_count", "Number of active actor event multiplexers.", *REGISTRY ).unwrap(); pub static ref INGESTED_EVENTS_TOTAL: IntCounter = register_int_counter_with_registry!( - "pegboard_runner_ingested_events_total", + "pegboard_ingested_events_total", "Count of actor events.", *REGISTRY ).unwrap(); diff --git a/engine/packages/pegboard-runner/src/tunnel_to_ws_task.rs b/engine/packages/pegboard-runner/src/tunnel_to_ws_task.rs index 6763e27519..1c8f2168ba 100644 --- a/engine/packages/pegboard-runner/src/tunnel_to_ws_task.rs +++ b/engine/packages/pegboard-runner/src/tunnel_to_ws_task.rs @@ -66,7 +66,7 @@ async fn recv_msg( ]) .inc(); - return Ok(Err(LifecycleResult::Evicted)); + return Err(errors::WsError::Eviction.build()); } _ = tunnel_to_ws_abort_rx.changed() => { tracing::debug!("task aborted"); diff --git a/engine/packages/pegboard-runner/src/ws_to_tunnel_task.rs b/engine/packages/pegboard-runner/src/ws_to_tunnel_task.rs index f3492a706f..2ab612070f 100644 --- a/engine/packages/pegboard-runner/src/ws_to_tunnel_task.rs +++ b/engine/packages/pegboard-runner/src/ws_to_tunnel_task.rs @@ -6,7 +6,6 @@ use gas::prelude::*; use hyper_tungstenite::tungstenite::Message; use pegboard::actor_kv; use pegboard::pubsub_subjects::GatewayReceiverSubject; -use rivet_envoy_protocol as ep; use rivet_guard_core::websocket_handle::WebSocketReceiver; use rivet_runner_protocol::{self as protocol, PROTOCOL_MK2_VERSION, versioned}; use std::sync::{Arc, atomic::Ordering}; @@ -54,7 +53,7 @@ pub async fn task_inner( event_demuxer: &mut ActorEventDemuxer, ) -> Result { let mut ws_rx = ws_rx.lock().await; - let mut term_signal = rivet_runtime::TermSignal::get(); + let mut term_signal = rivet_runtime::TermSignal::new().await; loop { match recv_msg( @@ -106,7 +105,7 @@ async fn recv_msg( ]) .inc(); - return 
Ok(Err(LifecycleResult::Evicted)); + return Err(errors::WsError::Eviction.build()); } _ = ws_to_tunnel_abort_rx.changed() => { tracing::debug!("task aborted"); @@ -240,13 +239,7 @@ async fn handle_message_mk2( protocol::mk2::KvGetResponse { keys, values, - metadata: metadata - .into_iter() - .map(|m| protocol::mk2::KvMetadata { - version: m.version, - update_ts: m.update_ts, - }) - .collect(), + metadata, }, ) } @@ -273,23 +266,7 @@ async fn handle_message_mk2( let res = actor_kv::list( &*ctx.udb()?, &recipient, - match body.query { - protocol::mk2::KvListQuery::KvListAllQuery => { - ep::KvListQuery::KvListAllQuery - } - protocol::mk2::KvListQuery::KvListRangeQuery(x) => { - ep::KvListQuery::KvListRangeQuery(ep::KvListRangeQuery { - start: x.start, - end: x.end, - exclusive: x.exclusive, - }) - } - protocol::mk2::KvListQuery::KvListPrefixQuery(x) => { - ep::KvListQuery::KvListPrefixQuery(ep::KvListPrefixQuery { - key: x.key, - }) - } - }, + body.query, body.reverse.unwrap_or_default(), body.limit .map(TryInto::try_into) @@ -308,13 +285,7 @@ async fn handle_message_mk2( protocol::mk2::KvListResponse { keys, values, - metadata: metadata - .into_iter() - .map(|m| protocol::mk2::KvMetadata { - version: m.version, - update_ts: m.update_ts, - }) - .collect(), + metadata, }, ) } @@ -632,19 +603,21 @@ async fn handle_message_mk1(ctx: &StandaloneCtx, conn: &Conn, msg: Bytes) -> Res &recipient, match body.query { protocol::KvListQuery::KvListAllQuery => { - ep::KvListQuery::KvListAllQuery + protocol::mk2::KvListQuery::KvListAllQuery } protocol::KvListQuery::KvListRangeQuery(q) => { - ep::KvListQuery::KvListRangeQuery(ep::KvListRangeQuery { - start: q.start, - end: q.end, - exclusive: q.exclusive, - }) + protocol::mk2::KvListQuery::KvListRangeQuery( + protocol::mk2::KvListRangeQuery { + start: q.start, + end: q.end, + exclusive: q.exclusive, + }, + ) } protocol::KvListQuery::KvListPrefixQuery(q) => { - ep::KvListQuery::KvListPrefixQuery(ep::KvListPrefixQuery { - key: q.key, - 
}) + protocol::mk2::KvListQuery::KvListPrefixQuery( + protocol::mk2::KvListPrefixQuery { key: q.key }, + ) } }, body.reverse.unwrap_or_default(), diff --git a/engine/packages/pegboard/src/actor_kv/metrics.rs b/engine/packages/pegboard/src/actor_kv/metrics.rs new file mode 100644 index 0000000000..7716a90342 --- /dev/null +++ b/engine/packages/pegboard/src/actor_kv/metrics.rs @@ -0,0 +1,19 @@ +use rivet_metrics::{BUCKETS, REGISTRY, prometheus::*}; + +lazy_static::lazy_static! { + pub static ref ACTOR_KV_OPERATION_DURATION: HistogramVec = register_histogram_vec_with_registry!( + "actor_kv_operation_duration_seconds", + "Duration of actor KV operations including UDB transaction.", + &["op"], + BUCKETS.to_vec(), + *REGISTRY + ).unwrap(); + + pub static ref ACTOR_KV_KEYS_PER_OP: HistogramVec = register_histogram_vec_with_registry!( + "actor_kv_keys_per_operation", + "Number of keys per actor KV operation.", + &["op"], + vec![1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0], + *REGISTRY + ).unwrap(); +} diff --git a/engine/packages/pegboard/src/actor_kv/mod.rs b/engine/packages/pegboard/src/actor_kv/mod.rs index 830041fd46..d4689e4268 100644 --- a/engine/packages/pegboard/src/actor_kv/mod.rs +++ b/engine/packages/pegboard/src/actor_kv/mod.rs @@ -10,16 +10,18 @@ use utils::{validate_entries, validate_keys, validate_range}; use crate::keys; mod entry; +mod metrics; +pub mod preload; mod utils; const VERSION: &str = env!("CARGO_PKG_VERSION"); // Keep the KV validation limits below in sync with // rivetkit-typescript/packages/rivetkit/src/drivers/file-system/kv-limits.ts. 
-const MAX_KEY_SIZE: usize = 2 * 1024; -const MAX_VALUE_SIZE: usize = 128 * 1024; -const MAX_KEYS: usize = 128; -const MAX_PUT_PAYLOAD_SIZE: usize = 976 * 1024; +pub const MAX_KEY_SIZE: usize = 2 * 1024; +pub const MAX_VALUE_SIZE: usize = 128 * 1024; +pub const MAX_KEYS: usize = 128; +pub const MAX_PUT_PAYLOAD_SIZE: usize = 976 * 1024; const MAX_STORAGE_SIZE: usize = 10 * 1024 * 1024 * 1024; // 10 GiB const VALUE_CHUNK_SIZE: usize = 10_000; // 10 KB, not KiB, see https://apple.github.io/foundationdb/blob.html @@ -46,9 +48,11 @@ pub async fn get( recipient: &Recipient, keys: Vec, ) -> Result<(Vec, Vec, Vec)> { + let start = std::time::Instant::now(); +metrics::ACTOR_KV_KEYS_PER_OP.with_label_values(&["get"]).observe(keys.len() as f64); validate_keys(&keys)?; - db.run(|tx| { + let result = db.run(|tx| { let keys = keys.clone(); async move { let tx = tx.with_subspace(keys::actor_kv::subspace(recipient.actor_id)); @@ -138,7 +142,9 @@ pub async fn get( }) .custom_instrument(tracing::info_span!("kv_get_tx")) .await - .map_err(Into::::into) + .map_err(Into::::into); + metrics::ACTOR_KV_OPERATION_DURATION.with_label_values(&["get"]).observe(start.elapsed().as_secs_f64()); + result } /// Gets keys from the KV store. @@ -261,9 +267,11 @@ pub async fn put( keys: Vec, values: Vec, ) -> Result<()> { + let start = std::time::Instant::now(); +metrics::ACTOR_KV_KEYS_PER_OP.with_label_values(&["put"]).observe(keys.len() as f64); let keys = &keys; let values = &values; - db.run(|tx| { + let result = db.run(|tx| { async move { let total_size = estimate_kv_size(&tx, recipient.actor_id).await? as usize; @@ -331,7 +339,9 @@ pub async fn put( }) .custom_instrument(tracing::info_span!("kv_put_tx")) .await - .map_err(Into::into) + .map_err(Into::into); + metrics::ACTOR_KV_OPERATION_DURATION.with_label_values(&["put"]).observe(start.elapsed().as_secs_f64()); + result } /// Deletes keys from the KV store. Cannot be undone. 
@@ -341,10 +351,12 @@ pub async fn delete(
 	recipient: &Recipient,
 	keys: Vec<ep::KvKey>,
 ) -> Result<()> {
+	let start = std::time::Instant::now();
+	metrics::ACTOR_KV_KEYS_PER_OP.with_label_values(&["delete"]).observe(keys.len() as f64);
 	validate_keys(&keys)?;
 	let keys = &keys;
 
-	db.run(|tx| {
+	let result = db.run(|tx| {
 		async move {
 			// Total written bytes (rounded up to nearest chunk)
 			let total_size = keys.iter().fold(0, |s, key| s + key.len());
@@ -370,7 +382,9 @@ pub async fn delete(
 	})
 	.custom_instrument(tracing::info_span!("kv_delete_tx"))
 	.await
-	.map_err(Into::into)
+	.map_err(Into::into);
+	metrics::ACTOR_KV_OPERATION_DURATION.with_label_values(&["delete"]).observe(start.elapsed().as_secs_f64());
+	result
 }
 
 /// Deletes all keys in the half-open range [start, end). Cannot be undone.
@@ -381,12 +395,14 @@ pub async fn delete_range(
 	start: ep::KvKey,
 	end: ep::KvKey,
 ) -> Result<()> {
-	validate_range(&start, &end)?;
+	let timer = std::time::Instant::now();
+	validate_range(&start, &end)?;
 
 	if start >= end {
+		metrics::ACTOR_KV_OPERATION_DURATION.with_label_values(&["delete_range"]).observe(timer.elapsed().as_secs_f64());
 		return Ok(());
 	}
 
-	db.run(|tx| {
+	let result = db.run(|tx| {
 		let start = start.clone();
 		let end = end.clone();
 		async move {
@@ -417,7 +433,9 @@ pub async fn delete_range(
 	})
 	.custom_instrument(tracing::info_span!("kv_delete_range_tx"))
 	.await
-	.map_err(Into::into)
+	.map_err(Into::into);
+	metrics::ACTOR_KV_OPERATION_DURATION.with_label_values(&["delete_range"]).observe(timer.elapsed().as_secs_f64());
+	result
 }
 
 /// Deletes all keys from the KV store. Cannot be undone.
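The instrumentation above repeats one shape at every call site: start an `Instant`, bind the transaction result, observe the elapsed seconds, then return the result. A minimal dependency-free sketch of that shape — `timed` and the `record` callback are hypothetical stand-ins for the `prometheus` `HistogramVec`s registered in `metrics.rs`:

```rust
use std::time::Instant;

// Hypothetical helper illustrating the pattern the diff inlines: the result
// is bound first so the duration is observed on both Ok and Err paths.
// `record` stands in for
// `ACTOR_KV_OPERATION_DURATION.with_label_values(&[op]).observe(..)`.
fn timed<T, E>(record: impl FnOnce(f64), f: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
    let start = Instant::now();
    let result = f();
    record(start.elapsed().as_secs_f64());
    result
}

fn main() {
    let mut observed: Option<f64> = None;
    let ok: Result<u32, ()> = timed(|secs| observed = Some(secs), || Ok(7));
    assert_eq!(ok, Ok(7));
    assert!(observed.is_some());

    // Errors are still timed, matching `let result = ...; observe; result`.
    let err: Result<u32, &str> = timed(|_| {}, || Err("tx failed"));
    assert_eq!(err, Err("tx failed"));
}
```

The early `return Ok(())` in `delete_range` is the one place where this wrapper shape would not suffice, which is why the diff observes the timer explicitly before that return.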
diff --git a/engine/packages/pegboard/src/actor_kv/preload.rs b/engine/packages/pegboard/src/actor_kv/preload.rs new file mode 100644 index 0000000000..3b7f4cd824 --- /dev/null +++ b/engine/packages/pegboard/src/actor_kv/preload.rs @@ -0,0 +1,363 @@ +use anyhow::Result; +use futures_util::TryStreamExt; +use gas::prelude::*; +use rivet_config::config::pegboard::Pegboard; +use rivet_envoy_protocol as ep; +use serde::Deserialize; +use universaldb::prelude::*; +use universaldb::tuple::Subspace; + +use super::entry::EntryBuilder; +use crate::keys; + +/// Request to preload a prefix range from the actor's KV store. +pub struct PreloadPrefixRequest { + /// The raw key prefix bytes (e.g., [2] for connections, [8] for SQLite). + pub prefix: ep::KvKey, + /// Maximum bytes to preload for this prefix. + pub max_bytes: u64, + /// If true, return whatever fits even if truncated (for per-key lookup subsystems + /// like SQLite VFS). If false, return nothing if the total data exceeds max_bytes + /// (for list-based subsystems like connections and workflows). + pub partial: bool, +} + +#[derive(Deserialize)] +#[serde(rename_all = "camelCase")] +struct PreloadConfig { + #[serde(default)] + keys: Vec>, + #[serde(default)] + prefixes: Vec, +} + +#[derive(Deserialize)] +#[serde(rename_all = "camelCase")] +struct PreloadPrefixConfig { + prefix: Vec, + max_bytes: u64, + #[serde(default)] + partial: bool, +} + +/// Fetches all preload data for an actor in a single FDB snapshot transaction. +/// +/// Reads exact get-keys and prefix ranges, reassembles chunked FDB values using +/// EntryBuilder, strips tuple encoding via KeyWrapper::unpack, and returns raw +/// byte key-value pairs ready for TypeScript consumption. +/// +/// Prefix requests should be passed in descending priority order (highest priority +/// first). When the global byte cap is reached, lower-priority prefixes are +/// truncated first. 
+#[tracing::instrument(skip_all)] +pub(crate) async fn batch_preload( + db: &universaldb::Database, + actor_id: Id, + get_keys: Vec, + prefix_requests: Vec, + max_total_bytes: u64, +) -> Result { + let subspace = keys::actor_kv::subspace(actor_id); + + // Break prefix_requests into separate vectors so they can be cloned for the + // FDB transaction closure (which may retry on conflicts). + let prefix_keys: Vec = prefix_requests.iter().map(|r| r.prefix.clone()).collect(); + let prefix_params: Vec<(u64, bool)> = prefix_requests + .iter() + .map(|r| (r.max_bytes, r.partial)) + .collect(); + + db.run(|tx| { + let subspace = subspace.clone(); + let get_keys = get_keys.clone(); + let prefix_keys = prefix_keys.clone(); + let prefix_params = prefix_params.clone(); + + async move { + let tx = tx.with_subspace(subspace.clone()); + let mut entries = Vec::new(); + let mut total_bytes: u64 = 0; + + // Build requested lists dynamically so they only contain keys/prefixes + // that were actually scanned. Keys or prefixes skipped due to budget + // exhaustion or disabled config must not appear, otherwise the actor + // would mistake "not scanned" for "scanned and not found". + let mut requested_get_keys: Vec = Vec::new(); + let mut requested_prefixes: Vec = Vec::new(); + + // 1. Read exact get-keys. Each key maps to a single logical entry + // (or nothing if the key doesn't exist in FDB). + for key in &get_keys { + if total_bytes >= max_total_bytes { + tracing::debug!( + skipped_keys = get_keys.len() - requested_get_keys.len(), + "preload get-keys skipped due to global budget exhaustion" + ); + break; + } + + // Mark this key as scanned regardless of whether it exists in FDB. 
+ requested_get_keys.push(key.clone()); + + let key_subspace = + subspace.subspace(&keys::actor_kv::KeyWrapper(key.clone())); + let mut stream = tx.get_ranges_keyvalues( + universaldb::RangeOption { + mode: universaldb::options::StreamingMode::WantAll, + ..key_subspace.range().into() + }, + Snapshot, + ); + + let mut builder: Option = None; + + while let Some(fdb_kv) = stream.try_next().await? { + if builder.is_none() { + let parsed_key = + tx.unpack::(&fdb_kv.key())? + .key; + builder = Some(EntryBuilder::new(parsed_key)); + } + + let b = builder.as_mut().unwrap(); + + if let Ok(chunk_key) = + tx.unpack::(&fdb_kv.key()) + { + b.append_chunk(chunk_key.chunk, fdb_kv.value()); + } else if let Ok(metadata_key) = + tx.unpack::(&fdb_kv.key()) + { + let metadata = metadata_key.deserialize(fdb_kv.value())?; + b.append_metadata(metadata); + } else { + bail!("unexpected sub key in preload get"); + } + } + + if let Some(b) = builder { + let (k, v, m) = b.build()?; + let size = entry_size(&k, &v, &m); + if total_bytes + size <= max_total_bytes { + total_bytes += size; + entries.push(ep::PreloadedKvEntry { + key: k, + value: v, + metadata: m, + }); + } + } + } + + // 2. Read prefix ranges in priority order. Each prefix is bounded by + // its per-prefix max_bytes and the remaining global budget. + for (i, prefix) in prefix_keys.iter().enumerate() { + let (max_bytes, partial) = prefix_params[i]; + + // Skip prefixes disabled by config (max_bytes == 0) or when + // global budget is exhausted. Do not include in requested_prefixes + // so the actor falls back to kvListPrefix. 
+ let remaining_budget = max_total_bytes.saturating_sub(total_bytes); + let effective_limit = max_bytes.min(remaining_budget); + + if effective_limit == 0 { + tracing::debug!( + ?prefix, + max_bytes, + remaining_budget, + "preload prefix skipped, effective limit is 0" + ); + continue; + } + + let range = prefix_range(prefix, &subspace); + let mut stream = tx.get_ranges_keyvalues( + universaldb::RangeOption { + mode: universaldb::options::StreamingMode::Iterator, + ..range.into() + }, + Snapshot, + ); + + let mut prefix_entries: Vec = Vec::new(); + let mut prefix_bytes: u64 = 0; + let mut current_entry: Option = None; + let mut exceeded = false; + + while let Some(fdb_kv) = stream.try_next().await? { + let key = + tx.unpack::(&fdb_kv.key())?.key; + + let curr = if let Some(inner) = &mut current_entry { + if inner.key != key { + // Finalize the previous entry. + let prev = + std::mem::replace(inner, EntryBuilder::new(key)); + let (k, v, m) = prev.build()?; + let size = entry_size(&k, &v, &m); + + if prefix_bytes + size > effective_limit { + exceeded = true; + break; + } + + prefix_bytes += size; + prefix_entries.push(ep::PreloadedKvEntry { + key: k, + value: v, + metadata: m, + }); + } + + inner + } else { + current_entry = Some(EntryBuilder::new(key)); + current_entry.as_mut().expect("just set") + }; + + if let Ok(chunk_key) = + tx.unpack::(&fdb_kv.key()) + { + curr.append_chunk(chunk_key.chunk, fdb_kv.value()); + } else if let Ok(metadata_key) = + tx.unpack::(&fdb_kv.key()) + { + let metadata = metadata_key.deserialize(fdb_kv.value())?; + curr.append_metadata(metadata); + } else { + bail!("unexpected sub key in preload prefix scan"); + } + } + + // Finalize the last entry if the stream ended without exceeding. 
+ if !exceeded { + if let Some(b) = current_entry { + let (k, v, m) = b.build()?; + let size = entry_size(&k, &v, &m); + if prefix_bytes + size > effective_limit { + exceeded = true; + } else { + prefix_bytes += size; + prefix_entries.push(ep::PreloadedKvEntry { + key: k, + value: v, + metadata: m, + }); + } + } + } + + // For non-partial prefixes, discard all entries if the data exceeded + // the limit. The subsystem will fall back to a full kvListPrefix. + // Do not include in requested_prefixes so the actor knows to fall back. + if exceeded && !partial { + tracing::debug!( + ?prefix, + prefix_bytes, + effective_limit, + "preload prefix truncated (partial: false), discarding entries" + ); + continue; + } + + if exceeded { + tracing::debug!( + ?prefix, + prefix_bytes, + effective_limit, + "preload prefix truncated (partial: true), keeping partial data" + ); + } + + requested_prefixes.push(prefix.clone()); + total_bytes += prefix_bytes; + entries.extend(prefix_entries); + } + + Ok(ep::PreloadedKv { + entries, + requested_get_keys, + requested_prefixes, + }) + } + }) + .custom_instrument(tracing::info_span!("kv_batch_preload_tx")) + .await + .map_err(Into::::into) +} + +/// Fetches preloaded KV data for an actor using engine config and actor name +/// metadata. Returns `None` if preloading is disabled or missing from actor +/// metadata. Fails if the FDB transaction fails (no silent fallback). +#[tracing::instrument(skip_all)] +pub async fn fetch_preloaded_kv( + db: &universaldb::Database, + config: &Pegboard, + actor_id: Id, + namespace_id: Id, + actor_name: &str, +) -> Result> { + // Read actor name metadata from FDB. 
+ let metadata = db + .run(|tx| { + let tx = tx.with_subspace(keys::subspace()); + let name_key = + keys::ns::ActorNameKey::new(namespace_id, actor_name.to_string()); + async move { tx.read_opt(&name_key, Snapshot).await } + }) + .await?; + + let metadata_map = metadata + .map(|d: rivet_data::converted::ActorNameKeyData| d.metadata) + .unwrap_or_default(); + + let Some(preload_config) = metadata_map + .get("preload") + .and_then(|value| serde_json::from_value::(value.clone()).ok()) + else { + return Ok(None); + }; + + if config.preload_max_total_bytes() == 0 { + return Ok(None); + }; + + let prefix_requests = preload_config + .prefixes + .into_iter() + .map(|prefix| PreloadPrefixRequest { + prefix: prefix.prefix, + max_bytes: prefix.max_bytes, + partial: prefix.partial, + }) + .collect(); + + let preloaded = batch_preload( + db, + actor_id, + preload_config.keys, + prefix_requests, + config.preload_max_total_bytes(), + ) + .await?; + + Ok(Some(preloaded)) +} + +/// Computes the serialized size of a preloaded KV entry, including key, value, +/// and metadata (version bytes + i64 timestamp). +fn entry_size(key: &ep::KvKey, value: &ep::KvValue, metadata: &ep::KvMetadata) -> u64 { + (key.len() + value.len() + metadata.version.len() + std::mem::size_of::()) as u64 +} + +/// Computes the FDB key range for a prefix scan within the actor KV subspace. +fn prefix_range(prefix: &ep::KvKey, subspace: &Subspace) -> (Vec, Vec) { + let mut start = subspace.pack(&keys::actor_kv::ListKeyWrapper(prefix.clone())); + // Remove the trailing 0 byte that tuple encoding adds to bytes. 
+ if let Some(&0) = start.last() { + start.pop(); + } + let mut end = start.clone(); + end.push(0xFF); + (start, end) +} diff --git a/engine/packages/pegboard/src/actor_kv/utils.rs b/engine/packages/pegboard/src/actor_kv/utils.rs index 93be4d6d8a..ed91ade151 100644 --- a/engine/packages/pegboard/src/actor_kv/utils.rs +++ b/engine/packages/pegboard/src/actor_kv/utils.rs @@ -5,6 +5,7 @@ use super::{ MAX_KEY_SIZE, MAX_KEYS, MAX_PUT_PAYLOAD_SIZE, MAX_STORAGE_SIZE, MAX_VALUE_SIZE, keys::actor_kv::KeyWrapper, }; +use crate::errors; pub fn validate_list_query(query: &ep::KvListQuery) -> Result<()> { match query { @@ -74,10 +75,13 @@ pub fn validate_entries( ); let storage_remaining = MAX_STORAGE_SIZE.saturating_sub(total_size); - ensure!( - payload_size <= storage_remaining, - "not enough space left in storage ({storage_remaining} bytes remaining, current payload is {payload_size} bytes)" - ); + if payload_size > storage_remaining { + return Err(errors::Actor::KvStorageQuotaExceeded { + remaining: storage_remaining, + payload_size, + } + .build()); + } for key in keys { ensure!( diff --git a/engine/packages/pegboard/src/errors.rs b/engine/packages/pegboard/src/errors.rs index 6744bce7c8..f59b03a2b6 100644 --- a/engine/packages/pegboard/src/errors.rs +++ b/engine/packages/pegboard/src/errors.rs @@ -68,6 +68,13 @@ pub enum Actor { #[error("kv_key_not_found", "The KV key does not exist for this actor.")] KvKeyNotFound, + + #[error( + "kv_storage_quota_exceeded", + "Not enough space left in storage.", + "Not enough space left in storage ({remaining} bytes remaining, current payload is {payload_size} bytes)." 
+ )] + KvStorageQuotaExceeded { remaining: usize, payload_size: usize }, } #[derive(RivetError, Debug, Clone, Deserialize, Serialize)] diff --git a/engine/packages/pegboard/src/ops/actor/get_for_runner.rs b/engine/packages/pegboard/src/ops/actor/get_for_runner.rs index c3ead1f566..b34639ced2 100644 --- a/engine/packages/pegboard/src/ops/actor/get_for_runner.rs +++ b/engine/packages/pegboard/src/ops/actor/get_for_runner.rs @@ -11,6 +11,7 @@ pub struct Input { #[derive(Debug)] pub struct Output { pub name: String, + pub namespace_id: Id, pub runner_id: Id, pub is_connectable: bool, } @@ -28,26 +29,28 @@ pub async fn pegboard_actor_get_for_runner( let workflow_id_key = keys::actor::WorkflowIdKey::new(input.actor_id); let name_key = keys::actor::NameKey::new(input.actor_id); + let namespace_id_key = keys::actor::NamespaceIdKey::new(input.actor_id); let runner_id_key = keys::actor::RunnerIdKey::new(input.actor_id); let connectable_key = keys::actor::ConnectableKey::new(input.actor_id); - let (workflow_id, name_entry, runner_id_entry, is_connectable) = tokio::try_join!( + let (workflow_id, name_entry, namespace_id, runner_id_entry, is_connectable) = tokio::try_join!( tx.read_opt(&workflow_id_key, Serializable), tx.read_opt(&name_key, Serializable), + tx.read_opt(&namespace_id_key, Serializable), tx.read_opt(&runner_id_key, Serializable), tx.exists(&connectable_key, Serializable), )?; - let (Some(workflow_id), Some(runner_id)) = (workflow_id, runner_id_entry) else { + let (Some(workflow_id), Some(namespace_id), Some(runner_id)) = (workflow_id, namespace_id, runner_id_entry) else { return Ok(None); }; - Ok(Some((workflow_id, name_entry, runner_id, is_connectable))) + Ok(Some((workflow_id, name_entry, namespace_id, runner_id, is_connectable))) }) .custom_instrument(tracing::info_span!("actor_get_for_runner_tx")) .await?; - let Some((workflow_id, name, runner_id, is_connectable)) = res else { + let Some((workflow_id, name, namespace_id, runner_id, is_connectable)) = res else 
{ return Ok(None); }; @@ -81,6 +84,7 @@ pub async fn pegboard_actor_get_for_runner( Ok(Some(Output { name, + namespace_id, runner_id, is_connectable, })) diff --git a/engine/packages/pegboard/src/workflows/actor2/runtime.rs b/engine/packages/pegboard/src/workflows/actor2/runtime.rs index 5e08ecd023..5986573e15 100644 --- a/engine/packages/pegboard/src/workflows/actor2/runtime.rs +++ b/engine/packages/pegboard/src/workflows/actor2/runtime.rs @@ -366,6 +366,7 @@ pub async fn send_outbound(ctx: &ActivityCtx, input: &SendOutboundInput) -> Resu // Empty because request ids are ephemeral. This is intercepted by guard and // populated before it reaches the runner hibernating_requests: Vec::new(), + preloaded_kv: None, }); // NOTE: Kinda jank but it works @@ -965,25 +966,66 @@ pub async fn insert_and_send_commands( state.envoy_last_command_idx += input.commands.len() as i64; + // Fetch preloaded KV at send time for any start commands. Preloaded KV is + // never persisted in the command queue or workflow history. + let preloaded_kv = { + let has_start_cmd = input + .commands + .iter() + .any(|command| matches!(command, protocol::Command::CommandStartActor(_))); + if has_start_cmd { + let db = ctx.udb()?; + crate::actor_kv::preload::fetch_preloaded_kv( + &db, + ctx.config().pegboard(), + actor_id, + namespace_id, + &input + .commands + .iter() + .find_map(|command| match command { + protocol::Command::CommandStartActor(start) => { + Some(start.config.name.clone()) + } + _ => None, + }) + .unwrap_or_default(), + ) + .await? 
+	} else {
+		None
+	};
+
 	let receiver_subject = crate::pubsub_subjects::EnvoyReceiverSubject::new(
 		state.namespace_id,
 		input.envoy_key.clone(),
 	)
 	.to_string();
 
+	let mut preloaded_kv = preloaded_kv;
 	let message_serialized = versioned::ToEnvoyConn::wrap_latest(protocol::ToEnvoyConn::ToEnvoyCommands(
 		input
 			.commands
 			.iter()
 			.enumerate()
-			.map(|(i, command)| protocol::CommandWrapper {
-				checkpoint: protocol::ActorCheckpoint {
-					actor_id: state.actor_id.to_string(),
-					generation: input.generation,
-					index: old_last_command_idx + i as i64 + 1,
-				},
-				inner: command.clone(),
+			.map(|(i, command)| {
+				let mut command = command.clone();
+				if let protocol::Command::CommandStartActor(ref mut start) =
+					command
+				{
+					start.preloaded_kv = preloaded_kv.take();
+				}
+
+				protocol::CommandWrapper {
+					checkpoint: protocol::ActorCheckpoint {
+						actor_id: state.actor_id.to_string(),
+						generation: input.generation,
+						index: old_last_command_idx + i as i64 + 1,
+					},
+					inner: command,
+				}
 			})
 			.collect(),
 	))
diff --git a/engine/sdks/rust/kv-channel-protocol/Cargo.toml b/engine/sdks/rust/kv-channel-protocol/Cargo.toml
new file mode 100644
index 0000000000..7991d5772b
--- /dev/null
+++ b/engine/sdks/rust/kv-channel-protocol/Cargo.toml
@@ -0,0 +1,14 @@
+[package]
+name = "rivet-kv-channel-protocol"
+version.workspace = true
+authors.workspace = true
+license.workspace = true
+edition.workspace = true
+
+[dependencies]
+serde_bare.workspace = true
+serde.workspace = true
+vbare.workspace = true
+
+[build-dependencies]
+vbare-compiler.workspace = true
diff --git a/engine/sdks/rust/kv-channel-protocol/build.rs b/engine/sdks/rust/kv-channel-protocol/build.rs
new file mode 100644
index 0000000000..7867d7183a
--- /dev/null
+++ b/engine/sdks/rust/kv-channel-protocol/build.rs
@@ -0,0 +1,157 @@
+use std::path::Path;
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+	let manifest_dir = std::env::var("CARGO_MANIFEST_DIR")?;
+	let workspace_root = Path::new(&manifest_dir)
+		.parent()
+		.and_then(|p| p.parent())
+
.and_then(|p| p.parent()) + .ok_or("Failed to find workspace root")?; + + let schema_dir = workspace_root + .join("sdks") + .join("schemas") + .join("kv-channel-protocol"); + + // Rust type generation from BARE schemas. + let cfg = vbare_compiler::Config::with_hashable_map(); + vbare_compiler::process_schemas_with_config(&schema_dir, &cfg)?; + + // TypeScript SDK generation. + let cli_js_path = workspace_root + .parent() + .unwrap() + .join("node_modules/@bare-ts/tools/dist/bin/cli.js"); + if cli_js_path.exists() { + typescript::generate_sdk(&schema_dir, workspace_root); + } else { + println!( + "cargo:warning=TypeScript SDK generation skipped: cli.js not found at {}. Run `pnpm install` to install.", + cli_js_path.display() + ); + } + + Ok(()) +} + +mod typescript { + use super::*; + use std::{fs, path::PathBuf, process::Command}; + + pub fn generate_sdk(schema_dir: &Path, workspace_root: &Path) { + let sdk_dir = workspace_root + .join("sdks") + .join("typescript") + .join("kv-channel-protocol"); + let src_dir = sdk_dir.join("src"); + + let highest_version_path = find_highest_version(schema_dir); + + let _ = fs::remove_dir_all(&src_dir); + if let Err(e) = fs::create_dir_all(&src_dir) { + panic!("Failed to create SDK directory: {}", e); + } + + let output_path = src_dir.join("index.ts"); + + let output = Command::new( + workspace_root + .parent() + .unwrap() + .join("node_modules/@bare-ts/tools/dist/bin/cli.js"), + ) + .arg("compile") + .arg("--generator") + .arg("ts") + .arg(&highest_version_path) + .arg("-o") + .arg(&output_path) + .output() + .expect("Failed to execute bare compiler for TypeScript"); + + if !output.status.success() { + panic!( + "BARE TypeScript generation failed: {}", + String::from_utf8_lossy(&output.stderr), + ); + } + + // Post-process the generated TypeScript file. 
+ // IMPORTANT: Keep this in sync with rivetkit-typescript/packages/rivetkit/scripts/compile-bare.ts + post_process_generated_ts(&output_path); + } + + const POST_PROCESS_MARKER: &str = "// @generated - post-processed by build.rs\n"; + + /// Post-process the generated TypeScript file to: + /// 1. Replace @bare-ts/lib import with @rivetkit/bare-ts + /// 2. Replace Node.js assert import with a custom assert function + /// + /// IMPORTANT: Keep this in sync with rivetkit-typescript/packages/rivetkit/scripts/compile-bare.ts + fn post_process_generated_ts(path: &Path) { + let content = fs::read_to_string(path).expect("Failed to read generated TypeScript file"); + + // Skip if already post-processed. + if content.starts_with(POST_PROCESS_MARKER) { + return; + } + + // Add PROTOCOL_VERSION constant at the top (not in the BARE schema). + let content = format!("export const PROTOCOL_VERSION = 1;\n\n{content}"); + + // Replace @bare-ts/lib with @rivetkit/bare-ts. + let content = content.replace("@bare-ts/lib", "@rivetkit/bare-ts"); + + // Replace Node.js assert import with custom assert function. + let content = content.replace("import assert from \"assert\"", ""); + let content = content.replace("import assert from \"node:assert\"", ""); + + // Append custom assert function. + let assert_function = r#" +function assert(condition: boolean, message?: string): asserts condition { + if (!condition) throw new Error(message ?? "Assertion failed") +} +"#; + let content = format!("{}{}\n{}", POST_PROCESS_MARKER, content, assert_function); + + // Validate post-processing succeeded. 
+		assert!(
+			!content.contains("@bare-ts/lib"),
+			"Failed to replace @bare-ts/lib import"
+		);
+		assert!(
+			!content.contains("import assert from"),
+			"Failed to remove Node.js assert import"
+		);
+
+		fs::write(path, content).expect("Failed to write post-processed TypeScript file");
+	}
+
+	fn find_highest_version(schema_dir: &Path) -> PathBuf {
+		let mut highest_version = 0;
+		let mut highest_version_path = PathBuf::new();
+
+		for entry in fs::read_dir(schema_dir).unwrap().flatten() {
+			if !entry.path().is_dir() {
+				let path = entry.path();
+				let bare_name = path
+					.file_name()
+					.unwrap()
+					.to_str()
+					.unwrap()
+					.split_once('.')
+					.unwrap()
+					.0;
+
+				if let Ok(version) = bare_name[1..].parse::<u32>() {
+					if version > highest_version {
+						highest_version = version;
+						highest_version_path = path;
+					}
+				}
+			}
+		}
+
+		highest_version_path
+	}
+}
diff --git a/engine/sdks/rust/kv-channel-protocol/src/generated.rs b/engine/sdks/rust/kv-channel-protocol/src/generated.rs
new file mode 100644
index 0000000000..84801af8dc
--- /dev/null
+++ b/engine/sdks/rust/kv-channel-protocol/src/generated.rs
@@ -0,0 +1 @@
+include!(concat!(env!("OUT_DIR"), "/combined_imports.rs"));
diff --git a/engine/sdks/rust/kv-channel-protocol/src/lib.rs b/engine/sdks/rust/kv-channel-protocol/src/lib.rs
new file mode 100644
index 0000000000..7502faf079
--- /dev/null
+++ b/engine/sdks/rust/kv-channel-protocol/src/lib.rs
@@ -0,0 +1,138 @@
+pub mod generated;
+
+// Re-export latest version.
+pub use generated::v1::*;
+
+pub const PROTOCOL_VERSION: u32 = 1;
+
+/// Serialize a ToServer message to BARE bytes.
+pub fn encode_to_server(msg: &ToServer) -> Result<Vec<u8>, serde_bare::error::Error> {
+	serde_bare::to_vec(msg)
+}
+
+/// Deserialize a ToServer message from BARE bytes.
+pub fn decode_to_server(bytes: &[u8]) -> Result<ToServer, serde_bare::error::Error> {
+	serde_bare::from_slice(bytes)
+}
+
+/// Serialize a ToClient message to BARE bytes.
+pub fn encode_to_client(msg: &ToClient) -> Result<Vec<u8>, serde_bare::error::Error> {
+	serde_bare::to_vec(msg)
+}
+
+/// Deserialize a ToClient message from BARE bytes.
+pub fn decode_to_client(bytes: &[u8]) -> Result<ToClient, serde_bare::error::Error> {
+	serde_bare::from_slice(bytes)
+}
+
+#[cfg(test)]
+mod tests {
+	use super::*;
+
+	// MARK: Round-trip tests
+
+	#[test]
+	fn round_trip_to_server_request_actor_open() {
+		let msg = ToServer::ToServerRequest(ToServerRequest {
+			request_id: 1,
+			actor_id: "abc".into(),
+			data: RequestData::ActorOpenRequest,
+		});
+		let bytes = encode_to_server(&msg).unwrap();
+		let decoded = decode_to_server(&bytes).unwrap();
+		assert_eq!(msg, decoded);
+	}
+
+	#[test]
+	fn round_trip_to_server_request_kv_get() {
+		let msg = ToServer::ToServerRequest(ToServerRequest {
+			request_id: 3,
+			actor_id: "actor1".into(),
+			data: RequestData::KvGetRequest(KvGetRequest {
+				keys: vec![vec![1, 2, 3], vec![4, 5]],
+			}),
+		});
+		let bytes = encode_to_server(&msg).unwrap();
+		let decoded = decode_to_server(&bytes).unwrap();
+		assert_eq!(msg, decoded);
+	}
+
+	#[test]
+	fn round_trip_to_client_response_error() {
+		let msg = ToClient::ToClientResponse(ToClientResponse {
+			request_id: 10,
+			data: ResponseData::ErrorResponse(ErrorResponse {
+				code: "actor_locked".into(),
+				message: "actor is locked by another connection".into(),
+			}),
+		});
+		let bytes = encode_to_client(&msg).unwrap();
+		let decoded = decode_to_client(&bytes).unwrap();
+		assert_eq!(msg, decoded);
+	}
+
+	#[test]
+	fn round_trip_to_client_ping() {
+		let msg = ToClient::ToClientPing(ToClientPing { ts: 9876543210 });
+		let bytes = encode_to_client(&msg).unwrap();
+		let decoded = decode_to_client(&bytes).unwrap();
+		assert_eq!(msg, decoded);
+	}
+
+	#[test]
+	fn round_trip_to_client_close() {
+		let msg = ToClient::ToClientClose;
+		let bytes = encode_to_client(&msg).unwrap();
+		let decoded = decode_to_client(&bytes).unwrap();
+		assert_eq!(msg, decoded);
+	}
+
+	// MARK: Cross-language byte compatibility tests
+
+	#[test]
+	fn bytes_to_server_request_actor_open() {
+		let msg = ToServer::ToServerRequest(ToServerRequest {
+			request_id: 1,
+			actor_id: "abc".into(),
+			data: RequestData::ActorOpenRequest,
+		});
+		let bytes = encode_to_server(&msg).unwrap();
+		assert_eq!(
+			bytes,
+			[0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x61, 0x62, 0x63, 0x00]
+		);
+	}
+
+	#[test]
+	fn bytes_to_server_pong() {
+		let msg = ToServer::ToServerPong(ToServerPong { ts: 1234567890 });
+		let bytes = encode_to_server(&msg).unwrap();
+		assert_eq!(
+			bytes,
+			[0x01, 0xD2, 0x02, 0x96, 0x49, 0x00, 0x00, 0x00, 0x00]
+		);
+	}
+
+	#[test]
+	fn bytes_to_client_close() {
+		let msg = ToClient::ToClientClose;
+		let bytes = encode_to_client(&msg).unwrap();
+		assert_eq!(bytes, [0x02]);
+	}
+
+	#[test]
+	fn bytes_to_client_response_kv_get() {
+		let msg = ToClient::ToClientResponse(ToClientResponse {
+			request_id: 42,
+			data: ResponseData::KvGetResponse(KvGetResponse {
+				keys: vec![vec![1, 2]],
+				values: vec![vec![3, 4, 5]],
+			}),
+		});
+		let bytes = encode_to_client(&msg).unwrap();
+		assert_eq!(
+			bytes,
+			[0x00, 0x2A, 0x00, 0x00, 0x00, 0x03, 0x01, 0x02, 0x01, 0x02, 0x01, 0x03, 0x03, 0x04, 0x05]
+		);
+	}
+}
diff --git a/engine/sdks/schemas/envoy-protocol/v1.bare b/engine/sdks/schemas/envoy-protocol/v1.bare
index 977f35d268..33d8de9876 100644
--- a/engine/sdks/schemas/envoy-protocol/v1.bare
+++ b/engine/sdks/schemas/envoy-protocol/v1.bare
@@ -174,6 +174,20 @@ type EventWrapper struct {
 	inner: Event
 }
 
+# MARK: Preloaded KV
+
+type PreloadedKvEntry struct {
+	key: KvKey
+	value: KvValue
+	metadata: KvMetadata
+}
+
+type PreloadedKv struct {
+	entries: list<PreloadedKvEntry>
+	requestedGetKeys: list<KvKey>
+	requestedPrefixes: list<KvKey>
+}
+
 # MARK: Commands
 
 type HibernatingRequest struct {
@@ -184,6 +198,7 @@ type HibernatingRequest struct {
 type CommandStartActor struct {
 	config: ActorConfig
 	hibernatingRequests: list<HibernatingRequest>
+	preloadedKv: optional<PreloadedKv>
 }
 
 type StopActorReason enum {
diff --git a/engine/sdks/schemas/kv-channel-protocol/v1.bare
b/engine/sdks/schemas/kv-channel-protocol/v1.bare
new file mode 100644
index 0000000000..1a393301d7
--- /dev/null
+++ b/engine/sdks/schemas/kv-channel-protocol/v1.bare
@@ -0,0 +1,146 @@
+# KV Channel Protocol v1
+
+# MARK: Core
+
+# Id is a 30-character base36 string encoding the V1 format from
+# engine/packages/util-id/. Use the util-id library for parsing
+# and validation. Do not hand-roll Id parsing.
+type Id str
+
+# MARK: Actor Session
+#
+# ActorOpen acquires a single-writer lock on an actor's KV data.
+# ActorClose releases the lock. These are optimistic: the client
+# does not wait for a response before sending KV requests. The
+# server processes messages in WebSocket order, so the open is
+# always processed before any KV requests that follow it.
+#
+# If the lock cannot be acquired (another connection holds it),
+# the server sends an error response for the open and rejects
+# subsequent KV requests for that actor with "actor_locked".
+
+# actorId is on ToServerRequest, not on open/close. The outer
+# actorId is the single source of truth for routing.
+type ActorOpenRequest void
+
+type ActorCloseRequest void
+
+type ActorOpenResponse void
+
+type ActorCloseResponse void
+
+# MARK: KV
+#
+# These types mirror the runner protocol KV types
+# (engine/sdks/schemas/runner-protocol/). Changes to KV types in
+# either protocol must be mirrored in the other.
+#
+# Omitted from the runner protocol (not needed by the VFS):
+# - KvListRequest/KvListResponse (prefix scan)
+# - KvDropRequest/KvDropResponse (drop all KV data)
+# - KvMetadata on responses (update timestamps)
+#
+# The same engine KV limits apply to both protocols. See the
+# "KV Limits" section below.
+
+type KvKey data
+type KvValue data
+
+type KvGetRequest struct {
+	keys: list<KvKey>
+}
+
+type KvPutRequest struct {
+	# keys and values are parallel lists. keys.len() must equal values.len().
+	keys: list<KvKey>
+	values: list<KvValue>
+}
+
+type KvDeleteRequest struct {
+	keys: list<KvKey>
+}
+
+type KvDeleteRangeRequest struct {
+	start: KvKey
+	end: KvKey
+}
+
+# MARK: Request/Response
+
+type RequestData union {
+	ActorOpenRequest |
+	ActorCloseRequest |
+	KvGetRequest |
+	KvPutRequest |
+	KvDeleteRequest |
+	KvDeleteRangeRequest
+}
+
+type ErrorResponse struct {
+	code: str
+	message: str
+}
+
+type KvGetResponse struct {
+	# Only keys that exist are returned. Missing keys are omitted.
+	# The client infers missing keys by comparing request keys to
+	# response keys. This matches the runner protocol behavior
+	# (engine/packages/pegboard/src/actor_kv/mod.rs).
+	keys: list<KvKey>
+	values: list<KvValue>
+}
+
+type KvPutResponse void
+
+# KvDeleteResponse is used for both KvDeleteRequest and
+# KvDeleteRangeRequest, same as the runner protocol.
+type KvDeleteResponse void
+
+type ResponseData union {
+	ErrorResponse |
+	ActorOpenResponse |
+	ActorCloseResponse |
+	KvGetResponse |
+	KvPutResponse |
+	KvDeleteResponse
+}
+
+# MARK: To Server
+
+type ToServerRequest struct {
+	requestId: u32
+	actorId: Id
+	data: RequestData
+}
+
+type ToServerPong struct {
+	ts: i64
+}
+
+type ToServer union {
+	ToServerRequest |
+	ToServerPong
+}
+
+# MARK: To Client
+
+type ToClientResponse struct {
+	requestId: u32
+	data: ResponseData
+}
+
+type ToClientPing struct {
+	ts: i64
+}
+
+# Server-initiated close. Sent when the server is shutting down
+# or draining connections. The client should close all actors
+# and reconnect with backoff. Same pattern as the runner
+# protocol's ToRunnerClose.
+type ToClientClose void + +type ToClient union { + ToClientResponse | + ToClientPing | + ToClientClose +} diff --git a/engine/sdks/typescript/envoy-client/src/config.ts b/engine/sdks/typescript/envoy-client/src/config.ts index b20ed35aa2..37720ed78f 100644 --- a/engine/sdks/typescript/envoy-client/src/config.ts +++ b/engine/sdks/typescript/envoy-client/src/config.ts @@ -152,6 +152,7 @@ export interface EnvoyConfig { actorId: string, generation: number, config: protocol.ActorConfig, + preloadedKv: protocol.PreloadedKv | null, ) => Promise; onActorStop: ( diff --git a/engine/sdks/typescript/envoy-client/src/tasks/actor.ts b/engine/sdks/typescript/envoy-client/src/tasks/actor.ts index 8a8aff33fd..7c4243f6f9 100644 --- a/engine/sdks/typescript/envoy-client/src/tasks/actor.ts +++ b/engine/sdks/typescript/envoy-client/src/tasks/actor.ts @@ -18,8 +18,8 @@ export interface CreateActorOpts { actorId: string; generation: number; config: protocol.ActorConfig; - hibernatingRequests: readonly protocol.HibernatingRequest[]; + preloadedKv: protocol.PreloadedKv | null; } export type ToActor = @@ -123,7 +123,6 @@ async function actorInner( config: opts.config, commandIdx: 0n, eventIndex: 0n, - pendingRequests: new BufferMap(), webSockets: new BufferMap(), hibernationRestored: false, @@ -136,6 +135,7 @@ async function actorInner( opts.actorId, opts.generation, opts.config, + opts.preloadedKv, ); } catch (error) { log(ctx)?.error({ @@ -408,7 +408,7 @@ async function handleWsOpen(ctx: ActorContext, messageId: protocol.MessageId, pa ); try { - // #createWebSocket will call `runner.config.websocket` under the + // #createWebSocket will call `envoy.config.websocket` under the // hood to add the event listeners for open, etc. If this handler // throws, then the WebSocket will be closed before sending the // open event. 
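The single-writer lock the KV channel schema describes (open acquires the lock, a second connection gets `actor_locked`, close releases it) can be modeled with a plain map. This is an illustrative sketch, not the engine's implementation; `LockTable` and its methods are hypothetical names:

```rust
use std::collections::HashMap;

// Hypothetical model of the per-actor single-writer lock: one
// connection id may hold the write lock for a given actor at a time.
struct LockTable {
    // actor_id -> connection id currently holding the write lock
    locks: HashMap<String, u64>,
}

impl LockTable {
    fn new() -> Self {
        Self { locks: HashMap::new() }
    }

    // ActorOpenRequest: acquire (or re-acquire) the lock, or reject
    // with "actor_locked" if another connection holds it.
    fn open(&mut self, actor_id: &str, conn: u64) -> Result<(), &'static str> {
        match self.locks.get(actor_id) {
            Some(&holder) if holder != conn => Err("actor_locked"),
            _ => {
                self.locks.insert(actor_id.to_string(), conn);
                Ok(())
            }
        }
    }

    // ActorCloseRequest: release the lock, but only if this connection holds it.
    fn close(&mut self, actor_id: &str, conn: u64) {
        if self.locks.get(actor_id) == Some(&conn) {
            self.locks.remove(actor_id);
        }
    }
}

fn main() {
    let mut table = LockTable::new();
    assert!(table.open("actor1", 1).is_ok());
    assert_eq!(table.open("actor1", 2), Err("actor_locked"));
    table.close("actor1", 1);
    assert!(table.open("actor1", 2).is_ok());
}
```

Because open is optimistic (the client pipelines KV requests without awaiting the open response), a rejected open implies the server will also answer the pipelined KV requests for that actor with `actor_locked` errors.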
@@ -539,7 +539,7 @@ async function handleHwsRestore(ctx: ActorContext, metaEntries: HibernatingWebSo } else { ctx.pendingRequests.set([gatewayId, requestId], { envoyMessageIndex: 0 }); - // This will call `runner.config.websocket` under the hood to + // This will call `envoy.config.websocket` under the hood to // attach the event listeners to the WebSocket. // Track this operation to ensure it completes const restoreOperation = createWebSocket( diff --git a/engine/sdks/typescript/envoy-client/src/tasks/envoy/commands.ts b/engine/sdks/typescript/envoy-client/src/tasks/envoy/commands.ts index 05e2770afd..7686e9ee87 100644 --- a/engine/sdks/typescript/envoy-client/src/tasks/envoy/commands.ts +++ b/engine/sdks/typescript/envoy-client/src/tasks/envoy/commands.ts @@ -26,6 +26,7 @@ export function handleCommands( generation: checkpoint.generation, config: val.config, hibernatingRequests: val.hibernatingRequests, + preloadedKv: val.preloadedKv ?? null, }); let generations = ctx.actors.get(checkpoint.actorId); diff --git a/engine/sdks/typescript/envoy-client/src/tasks/envoy/index.ts b/engine/sdks/typescript/envoy-client/src/tasks/envoy/index.ts index 67248242c4..9daefd8137 100644 --- a/engine/sdks/typescript/envoy-client/src/tasks/envoy/index.ts +++ b/engine/sdks/typescript/envoy-client/src/tasks/envoy/index.ts @@ -235,7 +235,7 @@ function handleConnClose(ctx: EnvoyContext, lostTimeout: NodeJS.Timeout | undefi if (!lostTimeout) { let lostThreshold = ctx.shared.protocolMetadata ? 
Number(ctx.shared.protocolMetadata.envoyLostThreshold) : 10000; log(ctx.shared)?.debug({ - msg: "starting runner lost timeout", + msg: "starting envoy lost timeout", seconds: lostThreshold / 1000, }); @@ -251,7 +251,7 @@ function handleConnClose(ctx: EnvoyContext, lostTimeout: NodeJS.Timeout | undefi if (ctx.actors.size == 0) return; log(ctx.shared)?.warn({ - msg: "stopping all actors due to runner lost threshold", + msg: "stopping all actors due to envoy lost threshold", }); // Stop all actors diff --git a/engine/sdks/typescript/envoy-client/src/websocket.ts b/engine/sdks/typescript/envoy-client/src/websocket.ts index 9001ca2a6d..bfd3bb4fe1 100644 --- a/engine/sdks/typescript/envoy-client/src/websocket.ts +++ b/engine/sdks/typescript/envoy-client/src/websocket.ts @@ -1,4 +1,4 @@ -import type * as protocol from "@rivetkit/engine-runner-protocol"; +import type * as protocol from "@rivetkit/engine-envoy-protocol"; import type { UnboundedReceiver, UnboundedSender } from "antiox/sync/mpsc"; import { OnceCell } from "antiox/sync/once_cell"; import { spawn } from "antiox/task"; diff --git a/engine/sdks/typescript/envoy-protocol/src/index.ts b/engine/sdks/typescript/envoy-protocol/src/index.ts index ba20853621..388e77fe35 100644 --- a/engine/sdks/typescript/envoy-protocol/src/index.ts +++ b/engine/sdks/typescript/envoy-protocol/src/index.ts @@ -865,6 +865,65 @@ export function writeEventWrapper(bc: bare.ByteCursor, x: EventWrapper): void { writeEvent(bc, x.inner) } +export type PreloadedKvEntry = { + readonly key: KvKey + readonly value: KvValue + readonly metadata: KvMetadata +} + +export function readPreloadedKvEntry(bc: bare.ByteCursor): PreloadedKvEntry { + return { + key: readKvKey(bc), + value: readKvValue(bc), + metadata: readKvMetadata(bc), + } +} + +export function writePreloadedKvEntry(bc: bare.ByteCursor, x: PreloadedKvEntry): void { + writeKvKey(bc, x.key) + writeKvValue(bc, x.value) + writeKvMetadata(bc, x.metadata) +} + +function read8(bc: bare.ByteCursor): 
readonly PreloadedKvEntry[] { + const len = bare.readUintSafe(bc) + if (len === 0) { + return [] + } + const result = [readPreloadedKvEntry(bc)] + for (let i = 1; i < len; i++) { + result[i] = readPreloadedKvEntry(bc) + } + return result +} + +function write8(bc: bare.ByteCursor, x: readonly PreloadedKvEntry[]): void { + bare.writeUintSafe(bc, x.length) + for (let i = 0; i < x.length; i++) { + writePreloadedKvEntry(bc, x[i]) + } +} + +export type PreloadedKv = { + readonly entries: readonly PreloadedKvEntry[] + readonly requestedGetKeys: readonly KvKey[] + readonly requestedPrefixes: readonly KvKey[] +} + +export function readPreloadedKv(bc: bare.ByteCursor): PreloadedKv { + return { + entries: read8(bc), + requestedGetKeys: read0(bc), + requestedPrefixes: read0(bc), + } +} + +export function writePreloadedKv(bc: bare.ByteCursor, x: PreloadedKv): void { + write8(bc, x.entries) + write0(bc, x.requestedGetKeys) + write0(bc, x.requestedPrefixes) +} + export type HibernatingRequest = { readonly gatewayId: GatewayId readonly requestId: RequestId @@ -882,7 +941,7 @@ export function writeHibernatingRequest(bc: bare.ByteCursor, x: HibernatingReque writeRequestId(bc, x.requestId) } -function read8(bc: bare.ByteCursor): readonly HibernatingRequest[] { +function read9(bc: bare.ByteCursor): readonly HibernatingRequest[] { const len = bare.readUintSafe(bc) if (len === 0) { return [] @@ -894,28 +953,42 @@ function read8(bc: bare.ByteCursor): readonly HibernatingRequest[] { return result } -function write8(bc: bare.ByteCursor, x: readonly HibernatingRequest[]): void { +function write9(bc: bare.ByteCursor, x: readonly HibernatingRequest[]): void { bare.writeUintSafe(bc, x.length) for (let i = 0; i < x.length; i++) { writeHibernatingRequest(bc, x[i]) } } +function read10(bc: bare.ByteCursor): PreloadedKv | null { + return bare.readBool(bc) ? 
readPreloadedKv(bc) : null +} + +function write10(bc: bare.ByteCursor, x: PreloadedKv | null): void { + bare.writeBool(bc, x != null) + if (x != null) { + writePreloadedKv(bc, x) + } +} + export type CommandStartActor = { readonly config: ActorConfig readonly hibernatingRequests: readonly HibernatingRequest[] + readonly preloadedKv: PreloadedKv | null } export function readCommandStartActor(bc: bare.ByteCursor): CommandStartActor { return { config: readActorConfig(bc), - hibernatingRequests: read8(bc), + hibernatingRequests: read9(bc), + preloadedKv: read10(bc), } } export function writeCommandStartActor(bc: bare.ByteCursor, x: CommandStartActor): void { writeActorConfig(bc, x.config) - write8(bc, x.hibernatingRequests) + write9(bc, x.hibernatingRequests) + write10(bc, x.preloadedKv) } export enum StopActorReason { @@ -1122,7 +1195,7 @@ export function writeMessageId(bc: bare.ByteCursor, x: MessageId): void { writeMessageIndex(bc, x.messageIndex) } -function read9(bc: bare.ByteCursor): ReadonlyMap<string, string> { +function read11(bc: bare.ByteCursor): ReadonlyMap<string, string> { const len = bare.readUintSafe(bc) const result = new Map<string, string>() for (let i = 0; i < len; i++) { @@ -1137,7 +1210,7 @@ function read9(bc: bare.ByteCursor): ReadonlyMap<string, string> { return result } -function write9(bc: bare.ByteCursor, x: ReadonlyMap<string, string>): void { +function write11(bc: bare.ByteCursor, x: ReadonlyMap<string, string>): void { bare.writeUintSafe(bc, x.size) for (const kv of x) { bare.writeString(bc, kv[0]) @@ -1162,7 +1235,7 @@ export function readToEnvoyRequestStart(bc: bare.ByteCursor): ToEnvoyRequestStar actorId: readId(bc), method: bare.readString(bc), path: bare.readString(bc), - headers: read9(bc), + headers: read11(bc), body: read6(bc), stream: bare.readBool(bc), } } @@ -1172,7 +1245,7 @@ export function writeToEnvoyRequestStart(bc: bare.ByteCursor, x: ToEnvoyRequestS writeId(bc, x.actorId) bare.writeString(bc, x.method) bare.writeString(bc, x.path) - write9(bc, x.headers) + write11(bc, x.headers) write6(bc, x.body) bare.writeBool(bc, 
x.stream) } @@ -1206,7 +1279,7 @@ export type ToRivetResponseStart = { export function readToRivetResponseStart(bc: bare.ByteCursor): ToRivetResponseStart { return { status: bare.readU16(bc), - headers: read9(bc), + headers: read11(bc), body: read6(bc), stream: bare.readBool(bc), } @@ -1214,7 +1287,7 @@ export function readToRivetResponseStart(bc: bare.ByteCursor): ToRivetResponseSt export function writeToRivetResponseStart(bc: bare.ByteCursor, x: ToRivetResponseStart): void { bare.writeU16(bc, x.status) - write9(bc, x.headers) + write11(bc, x.headers) write6(bc, x.body) bare.writeBool(bc, x.stream) } @@ -1251,14 +1324,14 @@ export function readToEnvoyWebSocketOpen(bc: bare.ByteCursor): ToEnvoyWebSocketO return { actorId: readId(bc), path: bare.readString(bc), - headers: read9(bc), + headers: read11(bc), } } export function writeToEnvoyWebSocketOpen(bc: bare.ByteCursor, x: ToEnvoyWebSocketOpen): void { writeId(bc, x.actorId) bare.writeString(bc, x.path) - write9(bc, x.headers) + write11(bc, x.headers) } export type ToEnvoyWebSocketMessage = { @@ -1278,11 +1351,11 @@ export function writeToEnvoyWebSocketMessage(bc: bare.ByteCursor, x: ToEnvoyWebS bare.writeBool(bc, x.binary) } -function read10(bc: bare.ByteCursor): u16 | null { +function read12(bc: bare.ByteCursor): u16 | null { return bare.readBool(bc) ? 
bare.readU16(bc) : null } -function write10(bc: bare.ByteCursor, x: u16 | null): void { +function write12(bc: bare.ByteCursor, x: u16 | null): void { bare.writeBool(bc, x != null) if (x != null) { bare.writeU16(bc, x) @@ -1296,13 +1369,13 @@ export type ToEnvoyWebSocketClose = { export function readToEnvoyWebSocketClose(bc: bare.ByteCursor): ToEnvoyWebSocketClose { return { - code: read10(bc), + code: read12(bc), reason: read5(bc), } } export function writeToEnvoyWebSocketClose(bc: bare.ByteCursor, x: ToEnvoyWebSocketClose): void { - write10(bc, x.code) + write12(bc, x.code) write5(bc, x.reason) } @@ -1359,14 +1432,14 @@ export type ToRivetWebSocketClose = { export function readToRivetWebSocketClose(bc: bare.ByteCursor): ToRivetWebSocketClose { return { - code: read10(bc), + code: read12(bc), reason: read5(bc), hibernate: bare.readBool(bc), } } export function writeToRivetWebSocketClose(bc: bare.ByteCursor, x: ToRivetWebSocketClose): void { - write10(bc, x.code) + write12(bc, x.code) write5(bc, x.reason) bare.writeBool(bc, x.hibernate) } @@ -1575,7 +1648,7 @@ export function writeToEnvoyPing(bc: bare.ByteCursor, x: ToEnvoyPing): void { bare.writeI64(bc, x.ts) } -function read11(bc: bare.ByteCursor): ReadonlyMap { +function read13(bc: bare.ByteCursor): ReadonlyMap { const len = bare.readUintSafe(bc) const result = new Map() for (let i = 0; i < len; i++) { @@ -1590,7 +1663,7 @@ function read11(bc: bare.ByteCursor): ReadonlyMap { return result } -function write11(bc: bare.ByteCursor, x: ReadonlyMap): void { +function write13(bc: bare.ByteCursor, x: ReadonlyMap): void { bare.writeUintSafe(bc, x.size) for (const kv of x) { bare.writeString(bc, kv[0]) @@ -1598,22 +1671,22 @@ function write11(bc: bare.ByteCursor, x: ReadonlyMap): void { } } -function read12(bc: bare.ByteCursor): ReadonlyMap | null { - return bare.readBool(bc) ? read11(bc) : null +function read14(bc: bare.ByteCursor): ReadonlyMap | null { + return bare.readBool(bc) ? 
read13(bc) : null } -function write12(bc: bare.ByteCursor, x: ReadonlyMap | null): void { +function write14(bc: bare.ByteCursor, x: ReadonlyMap | null): void { bare.writeBool(bc, x != null) if (x != null) { - write11(bc, x) + write13(bc, x) } } -function read13(bc: bare.ByteCursor): Json | null { +function read15(bc: bare.ByteCursor): Json | null { return bare.readBool(bc) ? readJson(bc) : null } -function write13(bc: bare.ByteCursor, x: Json | null): void { +function write15(bc: bare.ByteCursor, x: Json | null): void { bare.writeBool(bc, x != null) if (x != null) { writeJson(bc, x) @@ -1634,16 +1707,16 @@ export function readToRivetInit(bc: bare.ByteCursor): ToRivetInit { return { envoyKey: bare.readString(bc), version: bare.readU32(bc), - prepopulateActorNames: read12(bc), - metadata: read13(bc), + prepopulateActorNames: read14(bc), + metadata: read15(bc), } } export function writeToRivetInit(bc: bare.ByteCursor, x: ToRivetInit): void { bare.writeString(bc, x.envoyKey) bare.writeU32(bc, x.version) - write12(bc, x.prepopulateActorNames) - write13(bc, x.metadata) + write14(bc, x.prepopulateActorNames) + write15(bc, x.metadata) } export type ToRivetEvents = readonly EventWrapper[] @@ -1667,7 +1740,7 @@ export function writeToRivetEvents(bc: bare.ByteCursor, x: ToRivetEvents): void } } -function read14(bc: bare.ByteCursor): readonly ActorCheckpoint[] { +function read16(bc: bare.ByteCursor): readonly ActorCheckpoint[] { const len = bare.readUintSafe(bc) if (len === 0) { return [] @@ -1679,7 +1752,7 @@ function read14(bc: bare.ByteCursor): readonly ActorCheckpoint[] { return result } -function write14(bc: bare.ByteCursor, x: readonly ActorCheckpoint[]): void { +function write16(bc: bare.ByteCursor, x: readonly ActorCheckpoint[]): void { bare.writeUintSafe(bc, x.length) for (let i = 0; i < x.length; i++) { writeActorCheckpoint(bc, x[i]) @@ -1692,12 +1765,12 @@ export type ToRivetAckCommands = { export function readToRivetAckCommands(bc: bare.ByteCursor): 
ToRivetAckCommands { return { - lastCommandCheckpoints: read14(bc), + lastCommandCheckpoints: read16(bc), } } export function writeToRivetAckCommands(bc: bare.ByteCursor, x: ToRivetAckCommands): void { - write14(bc, x.lastCommandCheckpoints) + write16(bc, x.lastCommandCheckpoints) } export type ToRivetStopping = null @@ -1895,12 +1968,12 @@ export type ToEnvoyAckEvents = { export function readToEnvoyAckEvents(bc: bare.ByteCursor): ToEnvoyAckEvents { return { - lastEventCheckpoints: read14(bc), + lastEventCheckpoints: read16(bc), } } export function writeToEnvoyAckEvents(bc: bare.ByteCursor, x: ToEnvoyAckEvents): void { - write14(bc, x.lastEventCheckpoints) + write16(bc, x.lastEventCheckpoints) } export type ToEnvoyKvResponse = { diff --git a/engine/sdks/typescript/kv-channel-protocol/package.json b/engine/sdks/typescript/kv-channel-protocol/package.json new file mode 100644 index 0000000000..79cb0822b9 --- /dev/null +++ b/engine/sdks/typescript/kv-channel-protocol/package.json @@ -0,0 +1,36 @@ +{ + "name": "@rivetkit/engine-kv-channel-protocol", + "version": "2.1.6", + "license": "Apache-2.0", + "type": "module", + "exports": { + ".": { + "import": { + "types": "./dist/index.d.ts", + "default": "./dist/index.js" + }, + "require": { + "types": "./dist/index.d.cts", + "default": "./dist/index.cjs" + } + } + }, + "files": [ + "dist/**/*.js", + "dist/**/*.d.ts" + ], + "scripts": { + "build": "tsup src/index.ts", + "clean": "rm -rf dist", + "check-types": "tsc --noEmit" + }, + "types": "dist/index.d.ts", + "dependencies": { + "@rivetkit/bare-ts": "^0.6.2" + }, + "devDependencies": { + "@types/node": "^20.19.13", + "tsup": "^8.5.0", + "typescript": "^5.9.2" + } +} diff --git a/engine/sdks/typescript/kv-channel-protocol/src/index.ts b/engine/sdks/typescript/kv-channel-protocol/src/index.ts new file mode 100644 index 0000000000..229cde4a2f --- /dev/null +++ b/engine/sdks/typescript/kv-channel-protocol/src/index.ts @@ -0,0 +1,524 @@ +// @generated - post-processed by 
build.rs +export const PROTOCOL_VERSION = 1; + +import * as bare from "@rivetkit/bare-ts" + +const DEFAULT_CONFIG = /* @__PURE__ */ bare.Config({}) + +export type i64 = bigint +export type u32 = number + +/** + * Id is a 30-character base36 string encoding the V1 format from + * engine/packages/util-id/. Use the util-id library for parsing + * and validation. Do not hand-roll Id parsing. + */ +export type Id = string + +export function readId(bc: bare.ByteCursor): Id { + return bare.readString(bc) +} + +export function writeId(bc: bare.ByteCursor, x: Id): void { + bare.writeString(bc, x) +} + +/** + * actorId is on ToServerRequest, not on open/close. The outer + * actorId is the single source of truth for routing. + */ +export type ActorOpenRequest = null + +export type ActorCloseRequest = null + +export type ActorOpenResponse = null + +export type ActorCloseResponse = null + +export type KvKey = ArrayBuffer + +export function readKvKey(bc: bare.ByteCursor): KvKey { + return bare.readData(bc) +} + +export function writeKvKey(bc: bare.ByteCursor, x: KvKey): void { + bare.writeData(bc, x) +} + +export type KvValue = ArrayBuffer + +export function readKvValue(bc: bare.ByteCursor): KvValue { + return bare.readData(bc) +} + +export function writeKvValue(bc: bare.ByteCursor, x: KvValue): void { + bare.writeData(bc, x) +} + +function read0(bc: bare.ByteCursor): readonly KvKey[] { + const len = bare.readUintSafe(bc) + if (len === 0) { + return [] + } + const result = [readKvKey(bc)] + for (let i = 1; i < len; i++) { + result[i] = readKvKey(bc) + } + return result +} + +function write0(bc: bare.ByteCursor, x: readonly KvKey[]): void { + bare.writeUintSafe(bc, x.length) + for (let i = 0; i < x.length; i++) { + writeKvKey(bc, x[i]) + } +} + +export type KvGetRequest = { + readonly keys: readonly KvKey[] +} + +export function readKvGetRequest(bc: bare.ByteCursor): KvGetRequest { + return { + keys: read0(bc), + } +} + +export function writeKvGetRequest(bc: bare.ByteCursor, x: 
KvGetRequest): void { + write0(bc, x.keys) +} + +function read1(bc: bare.ByteCursor): readonly KvValue[] { + const len = bare.readUintSafe(bc) + if (len === 0) { + return [] + } + const result = [readKvValue(bc)] + for (let i = 1; i < len; i++) { + result[i] = readKvValue(bc) + } + return result +} + +function write1(bc: bare.ByteCursor, x: readonly KvValue[]): void { + bare.writeUintSafe(bc, x.length) + for (let i = 0; i < x.length; i++) { + writeKvValue(bc, x[i]) + } +} + +export type KvPutRequest = { + /** + * keys and values are parallel lists. keys.len() must equal values.len(). + */ + readonly keys: readonly KvKey[] + readonly values: readonly KvValue[] +} + +export function readKvPutRequest(bc: bare.ByteCursor): KvPutRequest { + return { + keys: read0(bc), + values: read1(bc), + } +} + +export function writeKvPutRequest(bc: bare.ByteCursor, x: KvPutRequest): void { + write0(bc, x.keys) + write1(bc, x.values) +} + +export type KvDeleteRequest = { + readonly keys: readonly KvKey[] +} + +export function readKvDeleteRequest(bc: bare.ByteCursor): KvDeleteRequest { + return { + keys: read0(bc), + } +} + +export function writeKvDeleteRequest(bc: bare.ByteCursor, x: KvDeleteRequest): void { + write0(bc, x.keys) +} + +export type KvDeleteRangeRequest = { + readonly start: KvKey + readonly end: KvKey +} + +export function readKvDeleteRangeRequest(bc: bare.ByteCursor): KvDeleteRangeRequest { + return { + start: readKvKey(bc), + end: readKvKey(bc), + } +} + +export function writeKvDeleteRangeRequest(bc: bare.ByteCursor, x: KvDeleteRangeRequest): void { + writeKvKey(bc, x.start) + writeKvKey(bc, x.end) +} + +export type RequestData = + | { readonly tag: "ActorOpenRequest"; readonly val: ActorOpenRequest } + | { readonly tag: "ActorCloseRequest"; readonly val: ActorCloseRequest } + | { readonly tag: "KvGetRequest"; readonly val: KvGetRequest } + | { readonly tag: "KvPutRequest"; readonly val: KvPutRequest } + | { readonly tag: "KvDeleteRequest"; readonly val: 
KvDeleteRequest } + | { readonly tag: "KvDeleteRangeRequest"; readonly val: KvDeleteRangeRequest } + +export function readRequestData(bc: bare.ByteCursor): RequestData { + const offset = bc.offset + const tag = bare.readU8(bc) + switch (tag) { + case 0: + return { tag: "ActorOpenRequest", val: null } + case 1: + return { tag: "ActorCloseRequest", val: null } + case 2: + return { tag: "KvGetRequest", val: readKvGetRequest(bc) } + case 3: + return { tag: "KvPutRequest", val: readKvPutRequest(bc) } + case 4: + return { tag: "KvDeleteRequest", val: readKvDeleteRequest(bc) } + case 5: + return { tag: "KvDeleteRangeRequest", val: readKvDeleteRangeRequest(bc) } + default: { + bc.offset = offset + throw new bare.BareError(offset, "invalid tag") + } + } +} + +export function writeRequestData(bc: bare.ByteCursor, x: RequestData): void { + switch (x.tag) { + case "ActorOpenRequest": { + bare.writeU8(bc, 0) + break + } + case "ActorCloseRequest": { + bare.writeU8(bc, 1) + break + } + case "KvGetRequest": { + bare.writeU8(bc, 2) + writeKvGetRequest(bc, x.val) + break + } + case "KvPutRequest": { + bare.writeU8(bc, 3) + writeKvPutRequest(bc, x.val) + break + } + case "KvDeleteRequest": { + bare.writeU8(bc, 4) + writeKvDeleteRequest(bc, x.val) + break + } + case "KvDeleteRangeRequest": { + bare.writeU8(bc, 5) + writeKvDeleteRangeRequest(bc, x.val) + break + } + } +} + +export type ErrorResponse = { + readonly code: string + readonly message: string +} + +export function readErrorResponse(bc: bare.ByteCursor): ErrorResponse { + return { + code: bare.readString(bc), + message: bare.readString(bc), + } +} + +export function writeErrorResponse(bc: bare.ByteCursor, x: ErrorResponse): void { + bare.writeString(bc, x.code) + bare.writeString(bc, x.message) +} + +export type KvGetResponse = { + /** + * Only keys that exist are returned. Missing keys are omitted. + * The client infers missing keys by comparing request keys to + * response keys. 
This matches the runner protocol behavior + * (engine/packages/pegboard/src/actor_kv/mod.rs). + */ + readonly keys: readonly KvKey[] + readonly values: readonly KvValue[] +} + +export function readKvGetResponse(bc: bare.ByteCursor): KvGetResponse { + return { + keys: read0(bc), + values: read1(bc), + } +} + +export function writeKvGetResponse(bc: bare.ByteCursor, x: KvGetResponse): void { + write0(bc, x.keys) + write1(bc, x.values) +} + +export type KvPutResponse = null + +/** + * KvDeleteResponse is used for both KvDeleteRequest and + * KvDeleteRangeRequest, same as the runner protocol. + */ +export type KvDeleteResponse = null + +export type ResponseData = + | { readonly tag: "ErrorResponse"; readonly val: ErrorResponse } + | { readonly tag: "ActorOpenResponse"; readonly val: ActorOpenResponse } + | { readonly tag: "ActorCloseResponse"; readonly val: ActorCloseResponse } + | { readonly tag: "KvGetResponse"; readonly val: KvGetResponse } + | { readonly tag: "KvPutResponse"; readonly val: KvPutResponse } + | { readonly tag: "KvDeleteResponse"; readonly val: KvDeleteResponse } + +export function readResponseData(bc: bare.ByteCursor): ResponseData { + const offset = bc.offset + const tag = bare.readU8(bc) + switch (tag) { + case 0: + return { tag: "ErrorResponse", val: readErrorResponse(bc) } + case 1: + return { tag: "ActorOpenResponse", val: null } + case 2: + return { tag: "ActorCloseResponse", val: null } + case 3: + return { tag: "KvGetResponse", val: readKvGetResponse(bc) } + case 4: + return { tag: "KvPutResponse", val: null } + case 5: + return { tag: "KvDeleteResponse", val: null } + default: { + bc.offset = offset + throw new bare.BareError(offset, "invalid tag") + } + } +} + +export function writeResponseData(bc: bare.ByteCursor, x: ResponseData): void { + switch (x.tag) { + case "ErrorResponse": { + bare.writeU8(bc, 0) + writeErrorResponse(bc, x.val) + break + } + case "ActorOpenResponse": { + bare.writeU8(bc, 1) + break + } + case "ActorCloseResponse": { 
+ bare.writeU8(bc, 2) + break + } + case "KvGetResponse": { + bare.writeU8(bc, 3) + writeKvGetResponse(bc, x.val) + break + } + case "KvPutResponse": { + bare.writeU8(bc, 4) + break + } + case "KvDeleteResponse": { + bare.writeU8(bc, 5) + break + } + } +} + +export type ToServerRequest = { + readonly requestId: u32 + readonly actorId: Id + readonly data: RequestData +} + +export function readToServerRequest(bc: bare.ByteCursor): ToServerRequest { + return { + requestId: bare.readU32(bc), + actorId: readId(bc), + data: readRequestData(bc), + } +} + +export function writeToServerRequest(bc: bare.ByteCursor, x: ToServerRequest): void { + bare.writeU32(bc, x.requestId) + writeId(bc, x.actorId) + writeRequestData(bc, x.data) +} + +export type ToServerPong = { + readonly ts: i64 +} + +export function readToServerPong(bc: bare.ByteCursor): ToServerPong { + return { + ts: bare.readI64(bc), + } +} + +export function writeToServerPong(bc: bare.ByteCursor, x: ToServerPong): void { + bare.writeI64(bc, x.ts) +} + +export type ToServer = + | { readonly tag: "ToServerRequest"; readonly val: ToServerRequest } + | { readonly tag: "ToServerPong"; readonly val: ToServerPong } + +export function readToServer(bc: bare.ByteCursor): ToServer { + const offset = bc.offset + const tag = bare.readU8(bc) + switch (tag) { + case 0: + return { tag: "ToServerRequest", val: readToServerRequest(bc) } + case 1: + return { tag: "ToServerPong", val: readToServerPong(bc) } + default: { + bc.offset = offset + throw new bare.BareError(offset, "invalid tag") + } + } +} + +export function writeToServer(bc: bare.ByteCursor, x: ToServer): void { + switch (x.tag) { + case "ToServerRequest": { + bare.writeU8(bc, 0) + writeToServerRequest(bc, x.val) + break + } + case "ToServerPong": { + bare.writeU8(bc, 1) + writeToServerPong(bc, x.val) + break + } + } +} + +export function encodeToServer(x: ToServer, config?: Partial<bare.Config>): Uint8Array { + const fullConfig = config != null ? 
bare.Config(config) : DEFAULT_CONFIG + const bc = new bare.ByteCursor( + new Uint8Array(fullConfig.initialBufferLength), + fullConfig, + ) + writeToServer(bc, x) + return new Uint8Array(bc.view.buffer, bc.view.byteOffset, bc.offset) +} + +export function decodeToServer(bytes: Uint8Array): ToServer { + const bc = new bare.ByteCursor(bytes, DEFAULT_CONFIG) + const result = readToServer(bc) + if (bc.offset < bc.view.byteLength) { + throw new bare.BareError(bc.offset, "remaining bytes") + } + return result +} + +export type ToClientResponse = { + readonly requestId: u32 + readonly data: ResponseData +} + +export function readToClientResponse(bc: bare.ByteCursor): ToClientResponse { + return { + requestId: bare.readU32(bc), + data: readResponseData(bc), + } +} + +export function writeToClientResponse(bc: bare.ByteCursor, x: ToClientResponse): void { + bare.writeU32(bc, x.requestId) + writeResponseData(bc, x.data) +} + +export type ToClientPing = { + readonly ts: i64 +} + +export function readToClientPing(bc: bare.ByteCursor): ToClientPing { + return { + ts: bare.readI64(bc), + } +} + +export function writeToClientPing(bc: bare.ByteCursor, x: ToClientPing): void { + bare.writeI64(bc, x.ts) +} + +/** + * Server-initiated close. Sent when the server is shutting down + * or draining connections. The client should close all actors + * and reconnect with backoff. Same pattern as the runner + * protocol's ToRunnerClose. 
+ */ +export type ToClientClose = null + +export type ToClient = + | { readonly tag: "ToClientResponse"; readonly val: ToClientResponse } + | { readonly tag: "ToClientPing"; readonly val: ToClientPing } + | { readonly tag: "ToClientClose"; readonly val: ToClientClose } + +export function readToClient(bc: bare.ByteCursor): ToClient { + const offset = bc.offset + const tag = bare.readU8(bc) + switch (tag) { + case 0: + return { tag: "ToClientResponse", val: readToClientResponse(bc) } + case 1: + return { tag: "ToClientPing", val: readToClientPing(bc) } + case 2: + return { tag: "ToClientClose", val: null } + default: { + bc.offset = offset + throw new bare.BareError(offset, "invalid tag") + } + } +} + +export function writeToClient(bc: bare.ByteCursor, x: ToClient): void { + switch (x.tag) { + case "ToClientResponse": { + bare.writeU8(bc, 0) + writeToClientResponse(bc, x.val) + break + } + case "ToClientPing": { + bare.writeU8(bc, 1) + writeToClientPing(bc, x.val) + break + } + case "ToClientClose": { + bare.writeU8(bc, 2) + break + } + } +} + +export function encodeToClient(x: ToClient, config?: Partial<bare.Config>): Uint8Array { + const fullConfig = config != null ? bare.Config(config) : DEFAULT_CONFIG + const bc = new bare.ByteCursor( + new Uint8Array(fullConfig.initialBufferLength), + fullConfig, + ) + writeToClient(bc, x) + return new Uint8Array(bc.view.buffer, bc.view.byteOffset, bc.offset) +} + +export function decodeToClient(bytes: Uint8Array): ToClient { + const bc = new bare.ByteCursor(bytes, DEFAULT_CONFIG) + const result = readToClient(bc) + if (bc.offset < bc.view.byteLength) { + throw new bare.BareError(bc.offset, "remaining bytes") + } + return result +} + + +function assert(condition: boolean, message?: string): asserts condition { + if (!condition) throw new Error(message ?? 
"Assertion failed") +} diff --git a/engine/sdks/typescript/kv-channel-protocol/tsconfig.json b/engine/sdks/typescript/kv-channel-protocol/tsconfig.json new file mode 100644 index 0000000000..d8dc820c55 --- /dev/null +++ b/engine/sdks/typescript/kv-channel-protocol/tsconfig.json @@ -0,0 +1,9 @@ +{ + "extends": "../../../../tsconfig.base.json", + "compilerOptions": { + "declaration": true, + "outDir": "./dist" + }, + "exclude": ["dist", "node_modules"], + "include": ["**/*.ts"] +} diff --git a/engine/sdks/typescript/kv-channel-protocol/tsup.config.ts b/engine/sdks/typescript/kv-channel-protocol/tsup.config.ts new file mode 100644 index 0000000000..2d399b5efe --- /dev/null +++ b/engine/sdks/typescript/kv-channel-protocol/tsup.config.ts @@ -0,0 +1,4 @@ +import { defineConfig } from "tsup"; +import defaultConfig from "../../../../tsup.base"; + +export default defineConfig(defaultConfig); diff --git a/examples/kitchen-sink/scripts/bench-report.ts b/examples/kitchen-sink/scripts/bench-report.ts new file mode 100644 index 0000000000..49cb5d8829 --- /dev/null +++ b/examples/kitchen-sink/scripts/bench-report.ts @@ -0,0 +1,183 @@ +#!/usr/bin/env -S npx tsx + +/** + * Benchmark Report Generator + * + * Runs the SQLite benchmark twice (native and WASM) and generates a + * markdown comparison report. + * + * Usage: + * RIVET_ENDPOINT=http://127.0.0.1:6420 npx tsx scripts/bench-report.ts + * RIVET_ENDPOINT=http://127.0.0.1:6420 npx tsx scripts/bench-report.ts --quick + */ + +import { execSync } from "node:child_process"; +import { readFileSync, writeFileSync, renameSync, existsSync } from "node:fs"; +import { join, dirname } from "node:path"; +import { fileURLToPath } from "node:url"; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = dirname(__filename); + +const QUICK = process.argv.includes("--quick") ? 
"--quick" : ""; +const endpoint = process.env.RIVET_ENDPOINT; +if (!endpoint) { + console.error("RIVET_ENDPOINT is required"); + process.exit(1); +} + +const NATIVE_NODE = join( + __dirname, + "../../../rivetkit-typescript/packages/sqlite-native/sqlite-native.linux-x64-gnu.node", +); +const NATIVE_BAK = NATIVE_NODE + ".bak"; + +interface BenchEntry { + name: string; + elapsedMs: number; + detail?: string; +} + +function runBench(label: string, outputFile: string): BenchEntry[] { + console.log(`\n${"=".repeat(60)}`); + console.log(`Running ${label} benchmark...`); + console.log("=".repeat(60)); + + try { + execSync( + `BENCH_REPORT=${outputFile} RIVET_ENDPOINT=${endpoint} npx tsx scripts/bench-sqlite.ts ${QUICK}`, + { stdio: "inherit", timeout: 600_000 }, + ); + } catch { + console.error(`${label} benchmark failed`); + } + + const jsonFile = outputFile.replace(/\.md$/, ".json"); + if (!existsSync(jsonFile)) return []; + const data = readFileSync(jsonFile, "utf-8"); + return JSON.parse(data) as BenchEntry[]; +} + +// Run native (default). +const nativeResults = runBench("Native KV Channel", "/tmp/bench-native.md"); + +// Hide native addon to force WASM fallback. +if (existsSync(NATIVE_NODE)) { + renameSync(NATIVE_NODE, NATIVE_BAK); +} + +let wasmResults: BenchEntry[]; +try { + wasmResults = runBench("WASM VFS", "/tmp/bench-wasm.md"); +} finally { + // Restore native addon. + if (existsSync(NATIVE_BAK)) { + renameSync(NATIVE_BAK, NATIVE_NODE); + } +} + +// Build lookup maps. +const nativeMap = new Map(nativeResults.map((e) => [e.name, e])); +const wasmMap = new Map(wasmResults.map((e) => [e.name, e])); + +// Collect all unique names in order (native first, then any WASM-only). +const allNames: string[] = []; +const seen = new Set(); +for (const e of nativeResults) { + if (!seen.has(e.name)) { allNames.push(e.name); seen.add(e.name); } +} +for (const e of wasmResults) { + if (!seen.has(e.name)) { allNames.push(e.name); seen.add(e.name); } +} + +// Generate markdown. 
+const lines: string[] = []; +const now = new Date().toISOString().slice(0, 19).replace("T", " "); +lines.push(`# SQLite Benchmark Report`); +lines.push(``); +lines.push(`Generated: ${now} UTC`); +lines.push(`Engine: ${endpoint}`); +lines.push(`Mode: ${QUICK ? "quick" : "full"}`); +lines.push(``); + +// Summary table. +lines.push(`## Results`); +lines.push(``); +lines.push(`| Benchmark | Native (ms) | WASM (ms) | Speedup | Native/op | WASM/op |`); +lines.push(`|-----------|------------:|----------:|--------:|----------:|--------:|`); + +for (const name of allNames) { + const n = nativeMap.get(name); + const w = wasmMap.get(name); + + const nMs = n && n.elapsedMs > 0 ? n.elapsedMs : null; + const wMs = w && w.elapsedMs > 0 ? w.elapsedMs : null; + + const nStr = nMs !== null ? nMs.toFixed(1) : n?.detail === "TIMEOUT" ? "TIMEOUT" : "-"; + const wStr = wMs !== null ? wMs.toFixed(1) : w?.detail === "TIMEOUT" ? "TIMEOUT" : "-"; + + let speedup = "-"; + if (nMs !== null && wMs !== null && nMs > 0) { + const ratio = wMs / nMs; + speedup = ratio >= 1.1 ? `**${ratio.toFixed(1)}x**` : ratio <= 0.9 ? `${ratio.toFixed(1)}x` : `~1.0x`; + } + + const nOp = n?.detail && n.detail !== "TIMEOUT" ? n.detail : "-"; + const wOp = w?.detail && w.detail !== "TIMEOUT" ? w.detail : "-"; + + lines.push(`| ${name} | ${nStr} | ${wStr} | ${speedup} | ${nOp} | ${wOp} |`); +} + +// Scale sweep tables (group by prefix). 
+const scalePrefixes = ["Insert single", "Insert batch", "Insert TX", "Point read", "Mixed OLTP", "Hot row updates"]; +for (const prefix of scalePrefixes) { + const scaleEntries = allNames.filter((name) => name.startsWith(prefix + " x")); + if (scaleEntries.length < 2) continue; + + lines.push(``); + lines.push(`### ${prefix} (scale sweep)`); + lines.push(``); + lines.push(`| N | Native (ms) | Native/op | WASM (ms) | WASM/op | Speedup |`); + lines.push(`|--:|------------:|----------:|----------:|--------:|--------:|`); + + for (const name of scaleEntries) { + const n = nativeMap.get(name); + const w = wasmMap.get(name); + const nMs = n && n.elapsedMs > 0 ? n.elapsedMs : null; + const wMs = w && w.elapsedMs > 0 ? w.elapsedMs : null; + + // Extract N from name like "Insert single x1000" + const nMatch = name.match(/x(\d+)/); + const count = nMatch ? nMatch[1] : "?"; + + const nStr = nMs !== null ? nMs.toFixed(1) : n?.detail === "TIMEOUT" ? "TIMEOUT" : "-"; + const wStr = wMs !== null ? wMs.toFixed(1) : w?.detail === "TIMEOUT" ? "TIMEOUT" : "-"; + const nOp = n?.detail && n.detail !== "TIMEOUT" ? n.detail : "-"; + const wOp = w?.detail && w.detail !== "TIMEOUT" ? w.detail : "-"; + + let speedup = "-"; + if (nMs !== null && wMs !== null && nMs > 0) { + const ratio = wMs / nMs; + speedup = ratio >= 1.1 ? `**${ratio.toFixed(1)}x**` : ratio <= 0.9 ? `${ratio.toFixed(1)}x` : `~1.0x`; + } + + lines.push(`| ${count} | ${nStr} | ${nOp} | ${wStr} | ${wOp} | ${speedup} |`); + } +} + +// Totals. +const nTotal = nativeResults.reduce((s, e) => s + (e.elapsedMs > 0 ? e.elapsedMs : 0), 0); +const wTotal = wasmResults.reduce((s, e) => s + (e.elapsedMs > 0 ? 
e.elapsedMs : 0), 0); +lines.push(``); +lines.push(`## Totals`); +lines.push(``); +lines.push(`- **Native total**: ${(nTotal / 1000).toFixed(1)}s`); +lines.push(`- **WASM total**: ${(wTotal / 1000).toFixed(1)}s`); +lines.push(`- **Overall speedup**: ${(wTotal / nTotal).toFixed(1)}x`); +lines.push(``); + +const reportPath = process.env.BENCH_REPORT || "/home/nathan/rivet-5/.agent/notes/bench-report.md"; +writeFileSync(reportPath, lines.join("\n")); +console.log(`\nReport written to ${reportPath}`); + +process.exit(0); diff --git a/examples/kitchen-sink/scripts/bench-sqlite.ts b/examples/kitchen-sink/scripts/bench-sqlite.ts new file mode 100644 index 0000000000..0305fd5ad5 --- /dev/null +++ b/examples/kitchen-sink/scripts/bench-sqlite.ts @@ -0,0 +1,719 @@ +#!/usr/bin/env -S npx tsx + +/** + * SQLite Benchmark Runner + * + * Spins up actors against the kitchen-sink registry and runs each benchmark + * scenario, printing a summary table at the end. + * + * Usage: + * # Against local engine (spawn_engine=true default): + * npx tsx scripts/bench-sqlite.ts + * + * # Against a remote endpoint: + * RIVET_ENDPOINT=http://localhost:6420 npx tsx scripts/bench-sqlite.ts + * + * # Quick mode (smaller datasets): + * npx tsx scripts/bench-sqlite.ts --quick + */ + +import { setup } from "rivetkit"; +import { createClient } from "rivetkit/client"; +import { sqliteBench } from "../src/actors/sqlite-bench.ts"; + +const registry = setup({ use: { sqliteBench } }); +type Registry = typeof registry; + +// ── Config ────────────────────────────────────────────────────────── + +const QUICK = process.argv.includes("--quick"); + +const SIZES = QUICK + ? { small: 100, medium: 500, large: 1000, growth: 2000, growthInterval: 500 } + : { small: 500, medium: 2000, large: 10000, growth: 10000, growthInterval: 2000 }; + +// Orders of magnitude for scale sweep benchmarks. +const SCALES = QUICK + ? 
[1, 10, 100, 1000, 10000] + : [1, 10, 100, 1000, 10000, 100000]; + +// Per-operation timeout for large scale tests (ms). +const SCALE_TIMEOUT_MS = 120_000; + +// Output file for markdown report. +const REPORT_FILE = process.env.BENCH_REPORT; + +// ── Types ─────────────────────────────────────────────────────────── + +interface BenchmarkEntry { + name: string; + elapsedMs: number; + detail?: string; +} + +// ── Helpers ───────────────────────────────────────────────────────── + +function ms(n: number): string { + return `${n.toFixed(1)}ms`; +} + +function perOp(total: number, count: number): string { + return `${(total / count).toFixed(3)}ms/op`; +} + +/** Run a benchmark with a timeout. Returns null if it times out. */ +async function withTimeout<T>(fn: () => Promise<T>, timeoutMs: number): Promise<T | null> { + return Promise.race([ + fn(), + new Promise<null>((resolve) => setTimeout(() => resolve(null), timeoutMs)), + ]); +} + +function printTable(entries: BenchmarkEntry[]): void { + const nameWidth = Math.max(40, ...entries.map((e) => e.name.length)); + const timeWidth = 14; + const detailWidth = 40; + + const sep = "-".repeat(nameWidth + timeWidth + detailWidth + 8); + console.log(sep); + console.log( + `${"Benchmark".padEnd(nameWidth)} ${"Time".padStart(timeWidth)} ${"Detail".padEnd(detailWidth)}`, + ); + console.log(sep); + for (const e of entries) { + console.log( + `${e.name.padEnd(nameWidth)} ${ms(e.elapsedMs).padStart(timeWidth)} ${(e.detail ?? "").padEnd(detailWidth)}`, + ); + } + console.log(sep); +} + +// ── Runner ────────────────────────────────────────────────────────── + +type Client = Awaited<ReturnType<Registry["start"]>>["client"]; + +async function freshActor(client: Client) { + return client.sqliteBench.getOrCreate([`bench-${crypto.randomUUID()}`]); +} + +async function runAll(client: Client): Promise<BenchmarkEntry[]> { + const entries: BenchmarkEntry[] = []; + + // 1. 
Large migrations + console.log(" [1/14] Large migrations..."); + { + const a = await freshActor(client); + const r = await a.benchMigration(QUICK ? 50 : 100); + entries.push({ + name: `Migration (${r.tableCount} tables + indexes)`, + elapsedMs: r.elapsedMs, + detail: perOp(r.elapsedMs, r.tableCount), + }); + } + + // 1b. Large migrations in transaction + console.log(" [1b/14] Large migrations (transaction)..."); + { + const a = await freshActor(client); + const r = await a.benchMigrationTransaction(QUICK ? 50 : 100); + entries.push({ + name: `Migration TX (${r.tableCount} tables + indexes)`, + elapsedMs: r.elapsedMs, + detail: perOp(r.elapsedMs, r.tableCount), + }); + } + + // 2. Single-row inserts (scale sweep) + console.log(" [2] Single-row inserts (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchInsertSingle(n), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Insert single x${n}`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Insert single x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 3. Batch inserts (scale sweep) + console.log(" [3] Batch inserts (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchInsertBatch(n, 50), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Insert batch x${n}`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Insert batch x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 4. 
Transactional inserts (scale sweep) + console.log(" [4] TX inserts (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchInsertTransaction(n), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Insert TX x${n}`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Insert TX x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 5. Point reads (scale sweep) + console.log(" [5] Point reads (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchPointRead(n), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Point read x${n}`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Point read x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 6. Full table scan + console.log(" [6/14] Full table scan..."); + { + const a = await freshActor(client); + const r = await a.benchFullScan(SIZES.medium); + entries.push({ + name: `Full scan (${r.rowsReturned} rows)`, + elapsedMs: r.elapsedMs, + }); + } + + // 7. Range scan + console.log(" [7/14] Range scan..."); + { + const a = await freshActor(client); + const r = await a.benchRangeScan(SIZES.medium); + entries.push({ + name: `Range scan indexed (${r.indexed.rowsReturned} rows)`, + elapsedMs: r.indexed.elapsedMs, + }); + entries.push({ + name: `Range scan unindexed (${r.unindexed.rowsReturned} rows)`, + elapsedMs: r.unindexed.elapsedMs, + }); + } + + // 8. 
Large payloads + console.log(" [8/14] Large payloads..."); + { + const a = await freshActor(client); + const r = await a.benchLargePayload(100, 4096); + entries.push({ + name: `Large payload insert (4KB x ${r.rowCount})`, + elapsedMs: r.insertElapsedMs, + detail: perOp(r.insertElapsedMs, r.rowCount), + }); + entries.push({ + name: `Large payload read (4KB x ${r.rowsRead})`, + elapsedMs: r.readElapsedMs, + }); + } + { + const a = await freshActor(client); + const r = await a.benchLargePayload(20, 32768); + entries.push({ + name: `Large payload insert (32KB x ${r.rowCount})`, + elapsedMs: r.insertElapsedMs, + detail: perOp(r.insertElapsedMs, r.rowCount), + }); + entries.push({ + name: `Large payload read (32KB x ${r.rowsRead})`, + elapsedMs: r.readElapsedMs, + }); + } + + // 9. Complex queries + console.log(" [9/14] Complex queries..."); + { + const a = await freshActor(client); + const r = await a.benchComplexQueries(SIZES.medium); + for (const [queryType, result] of Object.entries(r.results)) { + entries.push({ + name: `Complex: ${queryType} (${result.rowCount} rows)`, + elapsedMs: result.elapsedMs, + }); + } + } + + // 10. Bulk update + console.log(" [10/14] Bulk update..."); + { + const a = await freshActor(client); + const r = await a.benchBulkUpdate(SIZES.medium); + entries.push({ + name: `Bulk update (~${Math.floor(r.seedRows / 2)} rows)`, + elapsedMs: r.elapsedMs, + }); + } + + // 11. Bulk delete + VACUUM + console.log(" [11/14] Bulk delete + VACUUM..."); + { + const a = await freshActor(client); + const r = await a.benchDeleteVacuum(SIZES.medium); + entries.push({ + name: `Bulk delete (~${Math.floor(r.seedRows / 2)} rows)`, + elapsedMs: r.deleteElapsedMs, + }); + entries.push({ + name: `VACUUM after delete`, + elapsedMs: r.vacuumElapsedMs, + }); + } + + // 12. 
Mixed OLTP (scale sweep) + console.log(" [12] Mixed OLTP (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchMixedOltp(n, 0.7), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Mixed OLTP x${n} (${r.reads}R/${r.writes}W)`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Mixed OLTP x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 13. Hot row (scale sweep) + console.log(" [13] Hot row (scale sweep)..."); + for (const n of SCALES) { + process.stdout.write(` x${n}...`); + const a = await freshActor(client); + const r = await withTimeout(() => a.benchHotRow(n), SCALE_TIMEOUT_MS); + if (r) { + console.log(` ${ms(r.elapsedMs)}`); + entries.push({ name: `Hot row updates x${n}`, elapsedMs: r.elapsedMs, detail: perOp(r.elapsedMs, n) }); + } else { + console.log(" TIMEOUT"); + entries.push({ name: `Hot row updates x${n}`, elapsedMs: -1, detail: "TIMEOUT" }); + break; + } + } + + // 14. JSON operations + console.log(" [14/14] JSON operations..."); + { + const a = await freshActor(client); + const r = await a.benchJson(SIZES.small); + entries.push({ + name: `JSON insert x${r.rowCount}`, + elapsedMs: r.insertElapsedMs, + detail: perOp(r.insertElapsedMs, r.rowCount), + }); + entries.push({ + name: `JSON extract query (${r.jsonExtract.rowCount} rows)`, + elapsedMs: r.jsonExtract.elapsedMs, + }); + entries.push({ + name: `JSON each aggregation (${r.jsonEach.rowCount} groups)`, + elapsedMs: r.jsonEach.elapsedMs, + }); + } + + // FTS and growth are optional since FTS5 may not be available in all builds. 
+ console.log(" [bonus] FTS5..."); + try { + const a = await freshActor(client); + const r = await a.benchFts(SIZES.small); + entries.push({ + name: `FTS5 insert x${r.docCount}`, + elapsedMs: r.insertElapsedMs, + detail: perOp(r.insertElapsedMs, r.docCount), + }); + entries.push({ + name: `FTS5 search (${r.search.rowCount} hits)`, + elapsedMs: r.search.elapsedMs, + }); + entries.push({ + name: `FTS5 prefix search (${r.prefixSearch.rowCount} hits)`, + elapsedMs: r.prefixSearch.elapsedMs, + }); + } catch (err) { + console.log(` Skipped FTS5: ${err}`); + } + + console.log(" [bonus] Growth test..."); + { + const a = await freshActor(client); + const r = await a.benchGrowth(SIZES.growth, SIZES.growthInterval); + for (const m of r.measurements) { + entries.push({ + name: `Growth @${m.rowCount} rows: insert batch`, + elapsedMs: m.insertBatchMs, + detail: perOp(m.insertBatchMs, SIZES.growthInterval), + }); + entries.push({ + name: `Growth @${m.rowCount} rows: 100 point reads`, + elapsedMs: m.pointReadMs, + detail: perOp(m.pointReadMs, 100), + }); + } + } + + return entries; +} + +// ── Concurrent actor benchmark ────────────────────────────────────── + +const CONCURRENCY_SCALES = [1, 5, 10, 50, 100]; +const ROWS_PER_CONCURRENT_ACTOR = 100; + +async function runConcurrent( + client: Client, +): Promise<BenchmarkEntry[]> { + const entries: BenchmarkEntry[] = []; + + for (const actorCount of CONCURRENCY_SCALES) { + process.stdout.write(` Concurrent: ${actorCount} actors x ${ROWS_PER_CONCURRENT_ACTOR} rows...`); + const start = performance.now(); + const promises = Array.from({ length: actorCount }, async () => { + const a = await freshActor(client); + return a.benchInsertTransaction(ROWS_PER_CONCURRENT_ACTOR); + }); + const results = await withTimeout( + () => Promise.all(promises), + SCALE_TIMEOUT_MS, + ); + if (!results) { + console.log(" TIMEOUT"); + entries.push({ + name: `Concurrent x${actorCount} wall`, + elapsedMs: -1, + detail: "TIMEOUT", + }); + break; + } + const totalMs = 
performance.now() - start; + const avgMs = + results.reduce((sum, r) => sum + r.elapsedMs, 0) / results.length; + const totalRows = actorCount * ROWS_PER_CONCURRENT_ACTOR; + console.log(` ${totalMs.toFixed(0)}ms wall, ${avgMs.toFixed(1)}ms avg/actor`); + entries.push({ + name: `Concurrent x${actorCount} wall`, + elapsedMs: totalMs, + detail: `${totalRows} total rows`, + }); + entries.push({ + name: `Concurrent x${actorCount} avg/actor`, + elapsedMs: avgMs, + detail: perOp(avgMs, ROWS_PER_CONCURRENT_ACTOR), + }); + entries.push({ + name: `Concurrent x${actorCount} throughput`, + elapsedMs: totalRows / (totalMs / 1000), + detail: `rows/sec`, + }); + } + + return entries; +} + +// ── Native SQLite baseline (no VFS, no KV channel, raw disk) ─────── + +async function runNativeBaseline(): Promise<BenchmarkEntry[]> { + const { DatabaseSync } = await import("node:sqlite"); + const { mkdtempSync, rmSync } = await import("node:fs"); + const { join } = await import("node:path"); + const { tmpdir } = await import("node:os"); + + const dir = mkdtempSync(join(tmpdir(), "sqlite-bench-")); + const dbPath = join(dir, "bench.db"); + const db = new DatabaseSync(dbPath); + db.exec("PRAGMA journal_mode=WAL"); + db.exec("PRAGMA synchronous=NORMAL"); + + // Create the base table (matches actor onMigrate) + db.exec(` + CREATE TABLE IF NOT EXISTS bench ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + key TEXT, value TEXT, num REAL, created_at INTEGER NOT NULL + ) + `); + db.exec("CREATE INDEX IF NOT EXISTS idx_bench_key ON bench(key)"); + db.exec("CREATE INDEX IF NOT EXISTS idx_bench_num ON bench(num)"); + + const entries: BenchmarkEntry[] = []; + const tableCount = QUICK ? 
50 : 100; + + // Migration (no transaction) + { + const start = performance.now(); + for (let i = 0; i < tableCount; i++) { + db.exec(`CREATE TABLE IF NOT EXISTS baseline_t${i} ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + a TEXT, b TEXT, c REAL, d INTEGER, created_at INTEGER NOT NULL + )`); + db.exec(`CREATE INDEX IF NOT EXISTS idx_baseline_t${i}_a ON baseline_t${i}(a)`); + } + entries.push({ + name: `[baseline] Migration (${tableCount} tables)`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, tableCount), + }); + } + + // Migration in transaction + { + const start = performance.now(); + db.exec("BEGIN"); + for (let i = 0; i < tableCount; i++) { + db.exec(`CREATE TABLE IF NOT EXISTS baseline_tx_t${i} ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + a TEXT, b TEXT, c REAL, d INTEGER, created_at INTEGER NOT NULL + )`); + db.exec(`CREATE INDEX IF NOT EXISTS idx_baseline_tx_t${i}_a ON baseline_tx_t${i}(a)`); + } + db.exec("COMMIT"); + entries.push({ + name: `[baseline] Migration TX (${tableCount} tables)`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, tableCount), + }); + } + + // Single-row inserts + { + const count = SIZES.small; + const stmt = db.prepare("INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)"); + const start = performance.now(); + for (let i = 0; i < count; i++) { + stmt.run(`key-${i}`, `value-${i}`, Math.random(), Date.now()); + } + entries.push({ + name: `[baseline] Insert single-row x${count}`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, count), + }); + } + + // Insert in transaction + { + const count = SIZES.small; + const stmt = db.prepare("INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)"); + const start = performance.now(); + db.exec("BEGIN"); + for (let i = 0; i < count; i++) { + stmt.run(`tx-key-${i}`, `tx-value-${i}`, Math.random(), Date.now()); + } + db.exec("COMMIT"); + entries.push({ + name: 
`[baseline] Insert TX x${count}`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, count), + }); + } + + // Point reads + { + const count = SIZES.small; + const stmt = db.prepare("SELECT * FROM bench WHERE key = ?"); + const start = performance.now(); + for (let i = 0; i < count; i++) { + stmt.get(`key-${i}`); + } + entries.push({ + name: `[baseline] Point read x${count}`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, count), + }); + } + + // Full scan + { + const start = performance.now(); + const rows = db.prepare("SELECT * FROM bench").all(); + entries.push({ + name: `[baseline] Full scan (${rows.length} rows)`, + elapsedMs: performance.now() - start, + }); + } + + // Hot row updates + { + db.exec("INSERT INTO bench (key, value, num, created_at) VALUES ('hot', 'row', 0, 0)"); + const count = SIZES.small; + const stmt = db.prepare("UPDATE bench SET num = ? WHERE key = 'hot'"); + const start = performance.now(); + for (let i = 0; i < count; i++) { + stmt.run(i); + } + entries.push({ + name: `[baseline] Hot row updates x${count}`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, count), + }); + } + + // Mixed OLTP + { + const count = SIZES.small; + const readStmt = db.prepare("SELECT * FROM bench WHERE key = ?"); + const writeStmt = db.prepare("INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)"); + let reads = 0, writes = 0; + const start = performance.now(); + for (let i = 0; i < count; i++) { + if (Math.random() < 0.7) { + readStmt.get(`key-${Math.floor(Math.random() * count)}`); + reads++; + } else { + writeStmt.run(`oltp-${i}`, `val-${i}`, Math.random(), Date.now()); + writes++; + } + } + entries.push({ + name: `[baseline] Mixed OLTP x${count} (${reads}R/${writes}W)`, + elapsedMs: performance.now() - start, + detail: perOp(performance.now() - start, count), + }); + } + + db.close(); + rmSync(dir, { recursive: true }); + return entries; +} + 
+// ── Main ──────────────────────────────────────────────────────────── + +async function main(): Promise<void> { + console.log(`SQLite Benchmark (${QUICK ? "quick" : "full"} mode)\n`); + + const endpoint = process.env.RIVET_ENDPOINT; + + let client: ReturnType<typeof createClient<Registry>>; + if (endpoint) { + console.log(`Connecting to endpoint: ${endpoint}\n`); + registry.start(); + client = createClient<Registry>({ endpoint }); + } else { + console.log("Starting with local file-system driver\n"); + registry.start(); + client = createClient<Registry>({ endpoint: "http://localhost:6420" }); + } + + // Give runner time to connect to the engine + if (endpoint) { + console.log("Waiting for runner to connect..."); + // Poll until the engine has a runner available + for (let i = 0; i < 30; i++) { + try { + const res = await fetch(`${endpoint}/actors?namespace=default`, { + method: "PUT", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ name: "sqliteBench", key: ["__health_check__"] }), + }); + if (res.ok) { + console.log("Runner connected!\n"); + break; + } + const body = await res.json().catch(() => ({})); + if ((body as any)?.error?.code !== "no_runners_available") { + console.log("Runner connected!\n"); + break; + } + } catch {} + await new Promise(r => setTimeout(r, 1000)); + process.stdout.write("."); + } + console.log(""); + } + + console.log("Running benchmarks...\n"); + const entries = await runAll(client); + + // Concurrent actor test. + console.log("\nRunning concurrent actor scale sweep..."); + const concurrentEntries = await runConcurrent(client); + entries.push(...concurrentEntries); + + // Native SQLite baseline (raw disk, no VFS/KV). + console.log("\nRunning native SQLite baseline..."); + const baselineEntries = await runNativeBaseline(); + entries.push(...baselineEntries); + + console.log("\n"); + printTable(entries); + + // Summary stats. + const totalMs = entries.reduce((sum, e) => sum + (e.elapsedMs > 0 ? 
e.elapsedMs : 0), 0); + console.log(`\nTotal benchmark time: ${ms(totalMs)}`); + console.log(`Scenarios run: ${entries.length}`); + + // Write JSON results for report generation. + const jsonFile = REPORT_FILE ? REPORT_FILE.replace(/\.md$/, ".json") : `/tmp/bench-results-${Date.now()}.json`; + const { writeFileSync } = await import("node:fs"); + writeFileSync(jsonFile, JSON.stringify(entries, null, 2)); + console.log(`\nResults written to ${jsonFile}`); + + // Print KV channel metrics if native SQLite is available. + try { + // Access the internal native-sqlite module to get KV channel metrics. + // Uses createRequire for CJS compat with the napi addon. + const { createRequire } = await import("node:module"); + const require = createRequire(import.meta.url); + const native = require("@rivetkit/sqlite-native"); + // The kvChannel handle is stored as a module-level singleton in native-sqlite.ts. + // We can't access it directly, but we exported getKvChannelMetrics. + // For the bench, we'll try the direct path. 
+ const nativeSqlite = await import( + // @ts-ignore + "../../../rivetkit-typescript/packages/rivetkit/src/db/native-sqlite.ts" + ); + const m = nativeSqlite.getKvChannelMetrics?.(); + if (m) { + console.log("\n--- KV Channel Metrics ---"); + const ops: [string, any][] = [ + ["get", m.get], + ["put", m.put], + ["delete", m.delete], + ["deleteRange", m.deleteRange], + ["actorOpen", m.actorOpen], + ["actorClose", m.actorClose], + ]; + const nameWidth = 14; + const colWidth = 12; + console.log( + `${"Op".padEnd(nameWidth)} ${"Count".padStart(colWidth)} ${"Avg (us)".padStart(colWidth)} ${"Min (us)".padStart(colWidth)} ${"Max (us)".padStart(colWidth)} ${"Total (ms)".padStart(colWidth)}`, + ); + console.log("-".repeat(nameWidth + colWidth * 5 + 10)); + for (const [name, s] of ops) { + if (s && s.count > 0) { + console.log( + `${name.padEnd(nameWidth)} ${String(s.count).padStart(colWidth)} ${s.avgDurationUs.toFixed(0).padStart(colWidth)} ${String(s.minDurationUs).padStart(colWidth)} ${String(s.maxDurationUs).padStart(colWidth)} ${(s.totalDurationUs / 1000).toFixed(1).padStart(colWidth)}`, + ); + } + } + } + } catch { + // Native module or metrics not available, skip. 
+ } + + process.exit(0); +} + +main().catch((err) => { + console.error("Benchmark failed:", err); + process.exit(1); +}); diff --git a/examples/kitchen-sink/scripts/diag-cold-reads.ts b/examples/kitchen-sink/scripts/diag-cold-reads.ts new file mode 100644 index 0000000000..bfdfe493e6 --- /dev/null +++ b/examples/kitchen-sink/scripts/diag-cold-reads.ts @@ -0,0 +1,65 @@ +#!/usr/bin/env -S npx tsx + +import { setup } from "rivetkit"; +import { createClient } from "rivetkit/client"; +import { sqliteBench } from "../src/actors/sqlite-bench.ts"; + +const registry = setup({ use: { sqliteBench } }); +type R = typeof registry; + +async function main() { + const endpoint = process.env.RIVET_ENDPOINT || "http://127.0.0.1:6420"; + registry.start(); + const client = createClient({ endpoint }); + await new Promise(r => setTimeout(r, 5000)); + + function fresh() { + return client.sqliteBench.getOrCreate(["diag-" + Math.random().toString(36).slice(2)]); + } + + // Cold reads: fresh actor every time (like the bench does) + console.log("=== Cold actor point reads ==="); + for (const n of [1, 10, 100]) { + const a = fresh(); + const r = await a.benchPointRead(n); + console.log(`Fresh actor point read x${n}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / n).toFixed(3)}ms/op)`); + } + + // Warm reads: reuse same actor + console.log("\n=== Warm actor point reads ==="); + const a = fresh(); + await a.benchInsertTransaction(1000); + for (const n of [1, 10, 100, 1000]) { + const r = await a.benchPointRead(n); + console.log(`Warm actor point read x${n}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / n).toFixed(3)}ms/op)`); + } + + // Cold TX: fresh actor + console.log("\n=== Cold actor TX inserts ==="); + for (const n of [1, 10, 100]) { + const a = fresh(); + const r = await a.benchInsertTransaction(n); + console.log(`Fresh actor TX x${n}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / n).toFixed(3)}ms/op)`); + } + + // Warm TX: reuse same actor + console.log("\n=== Warm actor TX inserts 
==="); + const a2 = fresh(); + await a2.benchInsertTransaction(10); // warmup + for (const n of [1, 10, 100]) { + const r = await a2.benchInsertTransaction(n); + console.log(`Warm actor TX x${n}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / n).toFixed(3)}ms/op)`); + } + + // Cold batch x1: the 38ms anomaly + console.log("\n=== Cold actor batch x1 ==="); + for (let i = 0; i < 5; i++) { + const a = fresh(); + const r = await a.benchInsertBatch(1, 50); + console.log(`Fresh actor batch x1 attempt ${i}: ${r.elapsedMs.toFixed(1)}ms`); + } + + process.exit(0); +} + +main().catch(e => { console.error(e); process.exit(1); }); diff --git a/examples/kitchen-sink/scripts/diag-perf.ts b/examples/kitchen-sink/scripts/diag-perf.ts new file mode 100644 index 0000000000..b183accf2b --- /dev/null +++ b/examples/kitchen-sink/scripts/diag-perf.ts @@ -0,0 +1,83 @@ +#!/usr/bin/env -S npx tsx + +import { setup } from "rivetkit"; +import { createClient } from "rivetkit/client"; +import { sqliteBench } from "../src/actors/sqlite-bench.ts"; + +const registry = setup({ use: { sqliteBench } }); +type R = typeof registry; + +async function main() { + const endpoint = process.env.RIVET_ENDPOINT || "http://127.0.0.1:6420"; + registry.start(); + const client = createClient({ endpoint }); + await new Promise(r => setTimeout(r, 5000)); + + function fresh() { + return client.sqliteBench.getOrCreate(["diag-" + Math.random().toString(36).slice(2)]); + } + + // TX x1 cold vs warm + console.log("=== TX x1 cold vs warm ==="); + const a1 = fresh(); + const r1 = await a1.benchInsertTransaction(1); + console.log(`TX x1 cold: ${r1.elapsedMs.toFixed(1)}ms`); + const r2 = await a1.benchInsertTransaction(1); + console.log(`TX x1 warm: ${r2.elapsedMs.toFixed(1)}ms`); + const r3 = await a1.benchInsertTransaction(1); + console.log(`TX x1 warm2: ${r3.elapsedMs.toFixed(1)}ms`); + + // TX x10 cold vs warm + console.log("\n=== TX x10 cold vs warm ==="); + const a1b = fresh(); + const r1b = await 
a1b.benchInsertTransaction(10); + console.log(`TX x10 cold: ${r1b.elapsedMs.toFixed(1)}ms`); + const r2b = await a1b.benchInsertTransaction(10); + console.log(`TX x10 warm: ${r2b.elapsedMs.toFixed(1)}ms`); + + // Point reads: seed, then multiple rounds + console.log("\n=== Point reads ==="); + const a2 = fresh(); + await a2.benchInsertTransaction(1000); + console.log("Seeded 1000 rows"); + for (let i = 0; i < 5; i++) { + const r = await a2.benchPointRead(100); + console.log(`Point read x100 round ${i}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / 100).toFixed(3)}ms/op)`); + } + for (let i = 0; i < 3; i++) { + const r = await a2.benchPointRead(1000); + console.log(`Point read x1000 round ${i}: ${r.elapsedMs.toFixed(1)}ms (${(r.elapsedMs / 1000).toFixed(3)}ms/op)`); + } + + // Large payload: insert then read multiple times + console.log("\n=== Large payload 4KB ==="); + const a3 = fresh(); + const rL1 = await a3.benchLargePayload(100, 4096); + console.log(`Insert 4KB x100: ${rL1.insertElapsedMs.toFixed(1)}ms`); + console.log(`Read 4KB x100: ${rL1.readElapsedMs.toFixed(1)}ms`); + // Read again by querying directly + const rL2 = await a3.benchPointRead(100); + console.log(`Point read after large payload: ${rL2.elapsedMs.toFixed(1)}ms`); + + // Batch x1 cold vs warm + console.log("\n=== Batch x1 cold vs warm ==="); + const a4 = fresh(); + const rB1 = await a4.benchInsertBatch(1, 50); + console.log(`Batch x1 cold: ${rB1.elapsedMs.toFixed(1)}ms`); + const rB2 = await a4.benchInsertBatch(1, 50); + console.log(`Batch x1 warm: ${rB2.elapsedMs.toFixed(1)}ms`); + const rB3 = await a4.benchInsertBatch(1, 50); + console.log(`Batch x1 warm2: ${rB3.elapsedMs.toFixed(1)}ms`); + + // JSON insert cold vs warm + console.log("\n=== JSON insert ==="); + const a5 = fresh(); + const rJ1 = await a5.benchJson(10); + console.log(`JSON x10 cold: insert=${rJ1.insertElapsedMs.toFixed(1)}ms extract=${rJ1.jsonExtract.elapsedMs.toFixed(1)}ms each=${rJ1.jsonEach.elapsedMs.toFixed(1)}ms`); + const 
rJ2 = await a5.benchJson(10); + console.log(`JSON x10 warm: insert=${rJ2.insertElapsedMs.toFixed(1)}ms extract=${rJ2.jsonExtract.elapsedMs.toFixed(1)}ms each=${rJ2.jsonEach.elapsedMs.toFixed(1)}ms`); + + process.exit(0); +} + +main().catch(e => { console.error(e); process.exit(1); }); diff --git a/examples/kitchen-sink/src/actors/sqlite-bench.ts b/examples/kitchen-sink/src/actors/sqlite-bench.ts new file mode 100644 index 0000000000..9787bf8c12 --- /dev/null +++ b/examples/kitchen-sink/src/actors/sqlite-bench.ts @@ -0,0 +1,742 @@ +import { actor } from "rivetkit"; +import { db } from "rivetkit/db"; + +// Generates a string of the given byte size. +function payload(bytes: number): string { + return "x".repeat(bytes); +} + +export const sqliteBench = actor({ + options: { + actionTimeout: 300_000, // 5 minutes for large-scale benchmarks + }, + db: db({ + onMigrate: async (db) => { + await db.execute(` + CREATE TABLE IF NOT EXISTS bench ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + key TEXT, + value TEXT, + num REAL, + created_at INTEGER NOT NULL + ) + `); + await db.execute( + `CREATE INDEX IF NOT EXISTS idx_bench_key ON bench(key)`, + ); + await db.execute( + `CREATE INDEX IF NOT EXISTS idx_bench_num ON bench(num)`, + ); + }, + }), + actions: { + // ── Migrations ────────────────────────────────────────────── + + // Create N tables each with an index to stress migration overhead. + benchMigration: async (c, tableCount: number) => { + const start = performance.now(); + for (let i = 0; i < tableCount; i++) { + await c.db.execute(` + CREATE TABLE IF NOT EXISTS migration_t${i} ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + a TEXT, b TEXT, c REAL, d INTEGER, + created_at INTEGER NOT NULL + ) + `); + await c.db.execute( + `CREATE INDEX IF NOT EXISTS idx_migration_t${i}_a ON migration_t${i}(a)`, + ); + } + return { tableCount, elapsedMs: performance.now() - start }; + }, + + // Same as benchMigration but wrapped in a single transaction. 
+ benchMigrationTransaction: async (c, tableCount: number) => { + const start = performance.now(); + await c.db.execute("BEGIN"); + for (let i = 0; i < tableCount; i++) { + await c.db.execute(` + CREATE TABLE IF NOT EXISTS migration_tx_t${i} ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + a TEXT, b TEXT, c REAL, d INTEGER, + created_at INTEGER NOT NULL + ) + `); + await c.db.execute( + `CREATE INDEX IF NOT EXISTS idx_migration_tx_t${i}_a ON migration_tx_t${i}(a)`, + ); + } + await c.db.execute("COMMIT"); + return { tableCount, elapsedMs: performance.now() - start }; + }, + + // ── Single-row inserts ────────────────────────────────────── + + benchInsertSingle: async (c, rowCount: number) => { + const start = performance.now(); + for (let i = 0; i < rowCount; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + Math.random() * 1000, + Date.now(), + ); + } + return { + rowCount, + elapsedMs: performance.now() - start, + }; + }, + + // ── Batch inserts (multi-row VALUES) ──────────────────────── + + benchInsertBatch: async (c, rowCount: number, batchSize: number = 50) => { + const start = performance.now(); + for (let offset = 0; offset < rowCount; offset += batchSize) { + const count = Math.min(batchSize, rowCount - offset); + const placeholders = Array.from( + { length: count }, + () => "(?, ?, ?, ?)", + ).join(", "); + const params: (string | number)[] = []; + for (let i = 0; i < count; i++) { + const idx = offset + i; + params.push( + `key-${idx}`, + `value-${idx}`, + Math.random() * 1000, + Date.now(), + ); + } + await c.db.execute( + `INSERT INTO bench (key, value, num, created_at) VALUES ${placeholders}`, + ...params, + ); + } + return { + rowCount, + batchSize, + elapsedMs: performance.now() - start, + }; + }, + + // ── Transactional inserts ─────────────────────────────────── + + benchInsertTransaction: async (c, rowCount: number) => { + const start = performance.now(); + await 
c.db.execute("BEGIN"); + for (let i = 0; i < rowCount; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + Math.random() * 1000, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + return { + rowCount, + elapsedMs: performance.now() - start, + }; + }, + + // ── Point reads ───────────────────────────────────────────── + + benchPointRead: async (c, queryCount: number) => { + // Seed data if table is empty. + const [{ cnt }] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt === 0) { + await c.db.execute("BEGIN"); + for (let i = 0; i < 1000; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + } + + const start = performance.now(); + for (let i = 0; i < queryCount; i++) { + const id = (i % 1000) + 1; + await c.db.execute("SELECT * FROM bench WHERE id = ?", id); + } + return { + queryCount, + elapsedMs: performance.now() - start, + }; + }, + + // ── Full table scan ───────────────────────────────────────── + + benchFullScan: async (c, seedRows: number) => { + const [{ cnt }] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt < seedRows) { + await c.db.execute("BEGIN"); + for (let i = cnt; i < seedRows; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + } + + const start = performance.now(); + const rows = await c.db.execute("SELECT * FROM bench"); + const elapsed = performance.now() - start; + return { + rowsReturned: (rows as unknown[]).length, + elapsedMs: elapsed, + }; + }, + + // ── Range scan (indexed vs non-indexed) ───────────────────── + + benchRangeScan: async (c, seedRows: number) => { + const [{ cnt 
}] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt < seedRows) { + await c.db.execute("BEGIN"); + for (let i = cnt; i < seedRows; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + } + + // Indexed range scan on num. + const startIndexed = performance.now(); + const indexedRows = await c.db.execute( + "SELECT * FROM bench WHERE num BETWEEN ? AND ?", + 0, + seedRows / 10, + ); + const indexedMs = performance.now() - startIndexed; + + // Non-indexed scan on value (LIKE prefix). + const startUnindexed = performance.now(); + const unindexedRows = await c.db.execute( + "SELECT * FROM bench WHERE value LIKE ?", + "value-1%", + ); + const unindexedMs = performance.now() - startUnindexed; + + return { + seedRows, + indexed: { + rowsReturned: (indexedRows as unknown[]).length, + elapsedMs: indexedMs, + }, + unindexed: { + rowsReturned: (unindexedRows as unknown[]).length, + elapsedMs: unindexedMs, + }, + }; + }, + + // ── Large payloads (chunk boundary stress) ────────────────── + + benchLargePayload: async ( + c, + rowCount: number, + payloadBytes: number, + ) => { + const data = payload(payloadBytes); + const start = performance.now(); + await c.db.execute("BEGIN"); + for (let i = 0; i < rowCount; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `large-${i}`, + data, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + const insertMs = performance.now() - start; + + // Read them back. 
+ const readStart = performance.now(); + const rows = await c.db.execute( + "SELECT * FROM bench WHERE key LIKE 'large-%'", + ); + const readMs = performance.now() - readStart; + + return { + rowCount, + payloadBytes, + insertElapsedMs: insertMs, + readElapsedMs: readMs, + rowsRead: (rows as unknown[]).length, + }; + }, + + // ── Complex queries (JOINs, aggregations, CTEs, window fns) ─ + + benchComplexQueries: async (c, seedRows: number) => { + // Ensure we have two tables to join. + await c.db.execute(` + CREATE TABLE IF NOT EXISTS bench_tags ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + bench_id INTEGER NOT NULL, + tag TEXT NOT NULL + ) + `); + await c.db.execute( + `CREATE INDEX IF NOT EXISTS idx_bench_tags_bid ON bench_tags(bench_id)`, + ); + + const [{ cnt }] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt < seedRows) { + await c.db.execute("BEGIN"); + for (let i = cnt; i < seedRows; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + // Add 2 tags per row. 
+ await c.db.execute( + "INSERT INTO bench_tags (bench_id, tag) VALUES (?, ?), (?, ?)", + i + 1, + `tag-${i % 10}`, + i + 1, + `tag-${(i + 5) % 10}`, + ); + } + await c.db.execute("COMMIT"); + } + + const results: Record<string, { elapsedMs: number; rowCount: number }> = + {}; + + // JOIN + { + const s = performance.now(); + const rows = await c.db.execute(` + SELECT b.id, b.key, t.tag + FROM bench b + JOIN bench_tags t ON t.bench_id = b.id + WHERE b.num < 100 + `); + results.join = { + elapsedMs: performance.now() - s, + rowCount: (rows as unknown[]).length, + }; + } + + // Aggregation + { + const s = performance.now(); + const rows = await c.db.execute(` + SELECT t.tag, COUNT(*) as cnt, AVG(b.num) as avg_num + FROM bench b + JOIN bench_tags t ON t.bench_id = b.id + GROUP BY t.tag + HAVING cnt > 1 + ORDER BY cnt DESC + `); + results.aggregation = { + elapsedMs: performance.now() - s, + rowCount: (rows as unknown[]).length, + }; + } + + // CTE + { + const s = performance.now(); + const rows = await c.db.execute(` + WITH ranked AS ( + SELECT id, key, num, + ROW_NUMBER() OVER (ORDER BY num DESC) as rank + FROM bench + ) + SELECT * FROM ranked WHERE rank <= 50 + `); + results.cte_window = { + elapsedMs: performance.now() - s, + rowCount: (rows as unknown[]).length, + }; + } + + // Subquery + { + const s = performance.now(); + const rows = await c.db.execute(` + SELECT * FROM bench + WHERE id IN ( + SELECT bench_id FROM bench_tags WHERE tag = 'tag-0' + ) + `); + results.subquery = { + elapsedMs: performance.now() - s, + rowCount: (rows as unknown[]).length, + }; + } + + return { seedRows, results }; + }, + + // ── Bulk update ───────────────────────────────────────────── + + benchBulkUpdate: async (c, seedRows: number) => { + const [{ cnt }] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt < seedRows) { + await c.db.execute("BEGIN"); + for (let i = cnt; i < seedRows; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)",
+ `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + } + + const start = performance.now(); + await c.db.execute( + "UPDATE bench SET value = 'updated', num = num + 1 WHERE num < ?", + seedRows / 2, + ); + return { + seedRows, + elapsedMs: performance.now() - start, + }; + }, + + // ── Bulk delete + VACUUM ──────────────────────────────────── + + benchDeleteVacuum: async (c, seedRows: number) => { + const [{ cnt }] = (await c.db.execute( + "SELECT COUNT(*) as cnt FROM bench", + )) as { cnt: number }[]; + if (cnt < seedRows) { + await c.db.execute("BEGIN"); + for (let i = cnt; i < seedRows; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `key-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + } + + // Delete half the rows. + const deleteStart = performance.now(); + await c.db.execute("DELETE FROM bench WHERE num < ?", seedRows / 2); + const deleteMs = performance.now() - deleteStart; + + // VACUUM. + const vacuumStart = performance.now(); + await c.db.execute("VACUUM"); + const vacuumMs = performance.now() - vacuumStart; + + const [{ remaining }] = (await c.db.execute( + "SELECT COUNT(*) as remaining FROM bench", + )) as { remaining: number }[]; + + return { + seedRows, + deleteElapsedMs: deleteMs, + vacuumElapsedMs: vacuumMs, + remainingRows: remaining, + }; + }, + + // ── Mixed OLTP (interleaved reads + writes) ───────────────── + + benchMixedOltp: async ( + c, + operationCount: number, + readRatio: number = 0.7, + ) => { + // Seed some data. 
+ await c.db.execute("BEGIN"); + for (let i = 0; i < 500; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `oltp-${i}`, + `value-${i}`, + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + + let reads = 0; + let writes = 0; + const start = performance.now(); + + for (let i = 0; i < operationCount; i++) { + if (Math.random() < readRatio) { + const id = Math.floor(Math.random() * 500) + 1; + await c.db.execute("SELECT * FROM bench WHERE id = ?", id); + reads++; + } else { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `oltp-new-${i}`, + `value-${i}`, + i, + Date.now(), + ); + writes++; + } + } + return { + operationCount, + reads, + writes, + elapsedMs: performance.now() - start, + }; + }, + + // ── Hot row (write amplification) ─────────────────────────── + + benchHotRow: async (c, updateCount: number) => { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + "hot-row", + "initial", + 0, + Date.now(), + ); + const [{ id: hotId }] = (await c.db.execute( + "SELECT id FROM bench WHERE key = 'hot-row' LIMIT 1", + )) as { id: number }[]; + + const start = performance.now(); + for (let i = 0; i < updateCount; i++) { + await c.db.execute( + "UPDATE bench SET value = ?, num = ? WHERE id = ?", + `updated-${i}`, + i, + hotId, + ); + } + return { + updateCount, + elapsedMs: performance.now() - start, + }; + }, + + // ── JSON operations ───────────────────────────────────────── + + benchJson: async (c, rowCount: number) => { + await c.db.execute(` + CREATE TABLE IF NOT EXISTS bench_json ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + data TEXT NOT NULL + ) + `); + + // Insert JSON rows. 
+ const insertStart = performance.now(); + await c.db.execute("BEGIN"); + for (let i = 0; i < rowCount; i++) { + const json = JSON.stringify({ + name: `user-${i}`, + age: 20 + (i % 50), + tags: [`tag-${i % 5}`, `tag-${(i + 1) % 5}`], + address: { + city: `city-${i % 20}`, + zip: `${10000 + i}`, + }, + }); + await c.db.execute( + "INSERT INTO bench_json (data) VALUES (?)", + json, + ); + } + await c.db.execute("COMMIT"); + const insertMs = performance.now() - insertStart; + + // json_extract query. + const extractStart = performance.now(); + const extractRows = await c.db.execute(` + SELECT id, json_extract(data, '$.name') as name, + json_extract(data, '$.age') as age, + json_extract(data, '$.address.city') as city + FROM bench_json + WHERE json_extract(data, '$.age') > 40 + `); + const extractMs = performance.now() - extractStart; + + // json_each aggregation. + const eachStart = performance.now(); + const eachRows = await c.db.execute(` + SELECT value as tag, COUNT(*) as cnt + FROM bench_json, json_each(json_extract(data, '$.tags')) + GROUP BY value + ORDER BY cnt DESC + `); + const eachMs = performance.now() - eachStart; + + return { + rowCount, + insertElapsedMs: insertMs, + jsonExtract: { + elapsedMs: extractMs, + rowCount: (extractRows as unknown[]).length, + }, + jsonEach: { + elapsedMs: eachMs, + rowCount: (eachRows as unknown[]).length, + }, + }; + }, + + // ── FTS5 full-text search ─────────────────────────────────── + + benchFts: async (c, docCount: number) => { + await c.db.execute(` + CREATE VIRTUAL TABLE IF NOT EXISTS bench_fts + USING fts5(title, body) + `); + + const words = [ + "alpha", "bravo", "charlie", "delta", "echo", + "foxtrot", "golf", "hotel", "india", "juliet", + "kilo", "lima", "mike", "november", "oscar", + ]; + function randomSentence(len: number): string { + return Array.from( + { length: len }, + () => words[Math.floor(Math.random() * words.length)], + ).join(" "); + } + + // Insert documents. 
+ const insertStart = performance.now(); + await c.db.execute("BEGIN"); + for (let i = 0; i < docCount; i++) { + await c.db.execute( + "INSERT INTO bench_fts (title, body) VALUES (?, ?)", + randomSentence(5), + randomSentence(50), + ); + } + await c.db.execute("COMMIT"); + const insertMs = performance.now() - insertStart; + + // Search. + const searchStart = performance.now(); + const searchRows = await c.db.execute(` + SELECT * FROM bench_fts WHERE bench_fts MATCH 'alpha AND bravo' + ORDER BY rank + LIMIT 50 + `); + const searchMs = performance.now() - searchStart; + + // Prefix search. + const prefixStart = performance.now(); + const prefixRows = await c.db.execute(` + SELECT * FROM bench_fts WHERE bench_fts MATCH 'cha*' + ORDER BY rank + LIMIT 50 + `); + const prefixMs = performance.now() - prefixStart; + + return { + docCount, + insertElapsedMs: insertMs, + search: { + elapsedMs: searchMs, + rowCount: (searchRows as unknown[]).length, + }, + prefixSearch: { + elapsedMs: prefixMs, + rowCount: (prefixRows as unknown[]).length, + }, + }; + }, + + // ── Database growth (throughput at different sizes) ────────── + + benchGrowth: async (c, targetRows: number, measureInterval: number) => { + const measurements: { + rowCount: number; + insertBatchMs: number; + pointReadMs: number; + }[] = []; + + let totalInserted = 0; + while (totalInserted < targetRows) { + const batchCount = Math.min(measureInterval, targetRows - totalInserted); + + // Measure insert batch. + const insertStart = performance.now(); + await c.db.execute("BEGIN"); + for (let i = 0; i < batchCount; i++) { + await c.db.execute( + "INSERT INTO bench (key, value, num, created_at) VALUES (?, ?, ?, ?)", + `grow-${totalInserted + i}`, + `value-${totalInserted + i}`, + totalInserted + i, + Date.now(), + ); + } + await c.db.execute("COMMIT"); + const insertMs = performance.now() - insertStart; + totalInserted += batchCount; + + // Measure point read at current size. 
+ const readStart = performance.now(); + for (let i = 0; i < 100; i++) { + const id = Math.floor(Math.random() * totalInserted) + 1; + await c.db.execute("SELECT * FROM bench WHERE id = ?", id); + } + const readMs = performance.now() - readStart; + + measurements.push({ + rowCount: totalInserted, + insertBatchMs: insertMs, + pointReadMs: readMs, + }); + } + + return { targetRows, measureInterval, measurements }; + }, + + // ── Utility: reset tables ─────────────────────────────────── + + reset: async (c) => { + await c.db.execute("DELETE FROM bench"); + await c.db.execute( + "DELETE FROM sqlite_sequence WHERE name='bench'", + ); + return { ok: true }; + }, + }, +}); diff --git a/examples/kitchen-sink/src/index.ts b/examples/kitchen-sink/src/index.ts index e2e9de5ac5..fb22f56b62 100644 --- a/examples/kitchen-sink/src/index.ts +++ b/examples/kitchen-sink/src/index.ts @@ -1,9 +1,11 @@ import { setup } from "rivetkit"; import { demo } from "./actors/demo.ts"; +import { sqliteBench } from "./actors/sqlite-bench.ts"; export const registry = setup({ use: { demo, + sqliteBench, }, }); diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index ad86e48ee6..f1500907db 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -195,6 +195,22 @@ importers: specifier: ^5.9.2 version: 5.9.3 + engine/sdks/typescript/kv-channel-protocol: + dependencies: + '@rivetkit/bare-ts': + specifier: ^0.6.2 + version: 0.6.2 + devDependencies: + '@types/node': + specifier: ^20.19.13 + version: 20.19.13 + tsup: + specifier: ^8.5.0 + version: 8.5.1(@microsoft/api-extractor@7.53.2(@types/node@20.19.13))(@swc/core@1.15.11(@swc/helpers@0.5.17))(jiti@1.21.7)(postcss@8.5.6)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.2) + typescript: + specifier: ^5.9.2 + version: 5.9.3 + engine/sdks/typescript/runner: dependencies: '@rivetkit/engine-runner-protocol': @@ -255,7 +271,7 @@ importers: dependencies: '@hono/node-server': specifier: ^1.19.1 - version: 1.19.9(hono@4.11.9) + version: 1.19.12(hono@4.11.9) 
'@rivetkit/engine-envoy-client': specifier: workspace:* version: link:../envoy-client @@ -4147,6 +4163,9 @@ importers: '@rivetkit/engine-envoy-client': specifier: workspace:* version: link:../../../engine/sdks/typescript/envoy-client + '@rivetkit/engine-kv-channel-protocol': + specifier: workspace:* + version: link:../../../engine/sdks/typescript/kv-channel-protocol '@rivetkit/engine-runner': specifier: workspace:* version: link:../../../engine/sdks/typescript/runner @@ -4173,13 +4192,13 @@ importers: version: link:../workflow-engine '@vercel/sandbox': specifier: '>=0.1.0' - version: 1.9.0 + version: 1.9.2 cbor-x: specifier: ^1.6.0 version: 1.6.0 computesdk: specifier: '>=0.1.0' - version: 2.5.3 + version: 2.5.4 drizzle-kit: specifier: ^0.31.2 version: 0.31.5 @@ -4194,7 +4213,7 @@ importers: version: 2.2.4 modal: specifier: '>=0.1.0' - version: 0.7.3 + version: 0.7.4 nanoevents: specifier: ^9.1.0 version: 9.1.0 @@ -4206,7 +4225,7 @@ importers: version: 9.9.5 sandbox-agent: specifier: ^0.4.2 - version: 0.4.2(@daytonaio/sdk@0.150.0(ws@8.19.0))(@e2b/code-interpreter@2.3.3)(@fly/sprites@0.0.1)(@vercel/sandbox@1.9.0)(computesdk@2.5.3)(dockerode@4.0.9)(get-port@7.1.0)(modal@0.7.3)(zod@4.1.13) + version: 0.4.2(@daytonaio/sdk@0.150.0(ws@8.19.0))(@e2b/code-interpreter@2.3.3)(@fly/sprites@0.0.1)(@vercel/sandbox@1.9.2)(computesdk@2.5.4)(dockerode@4.0.9)(get-port@7.1.0)(modal@0.7.4)(zod@4.1.13) tar: specifier: ^7.5.0 version: 7.5.7 @@ -4228,7 +4247,7 @@ importers: version: 2.3.11 '@copilotkit/llmock': specifier: ^1.6.0 - version: 1.6.0 + version: 1.7.1 '@daytonaio/sdk': specifier: ^0.150.0 version: 0.150.0(ws@8.19.0) @@ -4308,6 +4327,12 @@ importers: specifier: ^8.5.0 version: 8.5.1(@microsoft/api-extractor@7.53.2(@types/node@22.19.10))(@swc/core@1.15.11(@swc/helpers@0.5.17))(jiti@1.21.7)(postcss@8.5.6)(tsx@4.21.0)(typescript@5.9.3)(yaml@2.8.2) + rivetkit-typescript/packages/sqlite-native: + devDependencies: + '@napi-rs/cli': + specifier: ^2.18.0 + version: 2.18.4 + 
rivetkit-typescript/packages/sqlite-vfs: dependencies: '@rivetkit/bare-ts': @@ -4971,52 +4996,88 @@ packages: '@aws-crypto/util@5.2.0': resolution: {integrity: sha512-4RkU9EsI6ZpBve5fseQlGNUWKMa1RLPQ1dnjnQoe07ldfIzcsGb5hC5W0Dm7u423KWzawlrpbjXBrXCEv9zazQ==} - '@aws-sdk/client-bedrock-runtime@3.1019.0': - resolution: {integrity: sha512-Wq1uMAZfySYofuwkFMaMM+k7epsGBRcJGE+ZosGB+8jC8Xs1lycbjSEFMt0Mo3z1qhkgEKGCQyjCbPTICMkkVw==} + '@aws-sdk/client-bedrock-runtime@3.1024.0': + resolution: {integrity: sha512-nIhsn0/eYrL2fTh4kMO7Hpfmhv+AkkXl0KGNpD6+fdmotGvRBWcDv9/PmP/+sT6gvrKTYyzH3vu4efpTPzzP0Q==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/client-s3@3.1007.0': + resolution: {integrity: sha512-QdFNDy+eKpcbv3ieGNl7XsDhpOj5mfb2xwnNM/YC108JpNJ5Ox79mbwtsKKqmQfen0JeaJml58vFnRHjfkjw9w==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/core@3.973.19': + resolution: {integrity: sha512-56KePyOcZnKTWCd89oJS1G6j3HZ9Kc+bh/8+EbvtaCCXdP6T7O7NzCiPuHRhFLWnzXIaXX3CxAz0nI5My9spHQ==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/core@3.973.26': + resolution: {integrity: sha512-A/E6n2W42ruU+sfWk+mMUOyVXbsSgGrY3MJ9/0Az5qUdG67y8I6HYzzoAa+e/lzxxl1uCYmEL6BTMi9ZiZnplQ==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/crc64-nvme@3.972.4': + resolution: {integrity: sha512-HKZIZLbRyvzo/bXZU7Zmk6XqU+1C9DjI56xd02vwuDIxedxBEqP17t9ExhbP9QFeNq/a3l9GOcyirFXxmbDhmw==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/credential-provider-env@3.972.17': + resolution: {integrity: sha512-MBAMW6YELzE1SdkOniqr51mrjapQUv8JXSGxtwRjQV0mwVDutVsn22OPAUt4RcLRvdiHQmNBDEFP9iTeSVCOlA==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/credential-provider-env@3.972.24': + resolution: {integrity: sha512-FWg8uFmT6vQM7VuzELzwVo5bzExGaKHdubn0StjgrcU5FvuLExUe+k06kn/40uKv59rYzhez8eFNM4yYE/Yb/w==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/credential-provider-http@3.972.19': + resolution: {integrity: sha512-9EJROO8LXll5a7eUFqu48k6BChrtokbmgeMWmsH7lBb6lVbtjslUYz/ShLi+SHkYzTomiGBhmzTW7y+H4BxsnA==} + engines: {node: '>=20.0.0'} + + 
'@aws-sdk/credential-provider-http@3.972.26': + resolution: {integrity: sha512-CY4ppZ+qHYqcXqBVi//sdHST1QK3KzOEiLtpLsc9W2k2vfZPKExGaQIsOwcyvjpjUEolotitmd3mUNY56IwDEA==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/credential-provider-ini@3.972.18': + resolution: {integrity: sha512-vthIAXJISZnj2576HeyLBj4WTeX+I7PwWeRkbOa0mVX39K13SCGxCgOFuKj2ytm9qTlLOmXe4cdEnroteFtJfw==} engines: {node: '>=20.0.0'} - '@aws-sdk/client-s3@3.1019.0': - resolution: {integrity: sha512-0pb9x7PPhS4oEi4c0rL3vzQQoXA4cWKtPuGga/UfVYLZ68yrqdq0NDKg0fr55qzdhNvWFCpmGx73g9Iyy03kkA==} + '@aws-sdk/credential-provider-ini@3.972.28': + resolution: {integrity: sha512-wXYvq3+uQcZV7k+bE4yDXCTBdzWTU9x/nMiKBfzInmv6yYK1veMK0AKvRfRBd72nGWYKcL6AxwiPg9z/pYlgpw==} engines: {node: '>=20.0.0'} - '@aws-sdk/core@3.973.25': - resolution: {integrity: sha512-TNrx7eq6nKNOO62HWPqoBqPLXEkW6nLZQGwjL6lq1jZtigWYbK1NbCnT7mKDzbLMHZfuOECUt3n6CzxjUW9HWQ==} + '@aws-sdk/credential-provider-login@3.972.18': + resolution: {integrity: sha512-kINzc5BBxdYBkPZ0/i1AMPMOk5b5QaFNbYMElVw5QTX13AKj6jcxnv/YNl9oW9mg+Y08ti19hh01HhyEAxsSJQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/crc64-nvme@3.972.5': - resolution: {integrity: sha512-2VbTstbjKdT+yKi8m7b3a9CiVac+pL/IY2PHJwsaGkkHmuuqkJZIErPck1h6P3T9ghQMLSdMPyW6Qp7Di5swFg==} + '@aws-sdk/credential-provider-login@3.972.28': + resolution: {integrity: sha512-ZSTfO6jqUTCysbdBPtEX5OUR//3rbD0lN7jO3sQeS2Gjr/Y+DT6SbIJ0oT2cemNw3UzKu97sNONd1CwNMthuZQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-env@3.972.23': - resolution: {integrity: sha512-EamaclJcCEaPHp6wiVknNMM2RlsPMjAHSsYSFLNENBM8Wz92QPc6cOn3dif6vPDQt0Oo4IEghDy3NMDCzY/IvA==} + '@aws-sdk/credential-provider-node@3.972.19': + resolution: {integrity: sha512-yDWQ9dFTr+IMxwanFe7+tbN5++q8psZBjlUwOiCXn1EzANoBgtqBwcpYcHaMGtn0Wlfj4NuXdf2JaEx1lz5RaQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-http@3.972.25': - resolution: {integrity: sha512-qPymamdPcLp6ugoVocG1y5r69ScNiRzb0hogX25/ij+Wz7c7WnsgjLTaz7+eB5BfRxeyUwuw5hgULMuwOGOpcw==} 
+ '@aws-sdk/credential-provider-node@3.972.29': + resolution: {integrity: sha512-clSzDcvndpFJAggLDnDb36sPdlZYyEs5Zm6zgZjjUhwsJgSWiWKwFIXUVBcbruidNyBdbpOv2tNDL9sX8y3/0g==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-ini@3.972.26': - resolution: {integrity: sha512-xKxEAMuP6GYx2y5GET+d3aGEroax3AgGfwBE65EQAUe090lzyJ/RzxPX9s8v7Z6qAk0XwfQl+LrmH05X7YvTeg==} + '@aws-sdk/credential-provider-process@3.972.17': + resolution: {integrity: sha512-c8G8wT1axpJDgaP3xzcy+q8Y1fTi9A2eIQJvyhQ9xuXrUZhlCfXbC0vM9bM1CUXiZppFQ1p7g0tuUMvil/gCPg==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-login@3.972.26': - resolution: {integrity: sha512-EFcM8RM3TUxnZOfMJo++3PnyxFu1fL/huzmn3Vh+8IWRgqZawUD3cRwwOr+/4bE9DpyHaLOWFAjY0lfK5X9ZkQ==} + '@aws-sdk/credential-provider-process@3.972.24': + resolution: {integrity: sha512-Q2k/XLrFXhEztPHqj4SLCNID3hEPdlhh1CDLBpNnM+1L8fq7P+yON9/9M1IGN/dA5W45v44ylERfXtDAlmMNmw==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-node@3.972.27': - resolution: {integrity: sha512-jXpxSolfFnPVj6GCTtx3xIdWNoDR7hYC/0SbetGZxOC9UnNmipHeX1k6spVstf7eWJrMhXNQEgXC0pD1r5tXIg==} + '@aws-sdk/credential-provider-sso@3.972.18': + resolution: {integrity: sha512-YHYEfj5S2aqInRt5ub8nDOX8vAxgMvd84wm2Y3WVNfFa/53vOv9T7WOAqXI25qjj3uEcV46xxfqdDQk04h5XQA==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-process@3.972.23': - resolution: {integrity: sha512-IL/TFW59++b7MpHserjUblGrdP5UXy5Ekqqx1XQkERXBFJcZr74I7VaSrQT5dxdRMU16xGK4L0RQ5fQG1pMgnA==} + '@aws-sdk/credential-provider-sso@3.972.28': + resolution: {integrity: sha512-IoUlmKMLEITFn1SiCTjPfR6KrE799FBo5baWyk/5Ppar2yXZoUdaRqZzJzK6TcJxx450M8m8DbpddRVYlp5R/A==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-sso@3.972.26': - resolution: {integrity: sha512-c6ghvRb6gTlMznWhGxn/bpVCcp0HRaz4DobGVD9kI4vwHq186nU2xN/S7QGkm0lo0H2jQU8+dgpUFLxfTcwCOg==} + '@aws-sdk/credential-provider-web-identity@3.972.18': + resolution: {integrity: 
sha512-OqlEQpJ+J3T5B96qtC1zLLwkBloechP+fezKbCH0sbd2cCc0Ra55XpxWpk/hRj69xAOYtHvoC4orx6eTa4zU7g==} engines: {node: '>=20.0.0'} - '@aws-sdk/credential-provider-web-identity@3.972.26': - resolution: {integrity: sha512-cXcS3+XD3iwhoXkM44AmxjmbcKueoLCINr1e+IceMmCySda5ysNIfiGBGe9qn5EMiQ9Jd7pP0AGFtcd6OV3Lvg==} + '@aws-sdk/credential-provider-web-identity@3.972.28': + resolution: {integrity: sha512-d+6h0SD8GGERzKe27v5rOzNGKOl0D+l0bWJdqrxH8WSQzHzjsQFIAPgIeOTUwBHVsKKwtSxc91K/SWax6XgswQ==} engines: {node: '>=20.0.0'} '@aws-sdk/eventstream-handler-node@3.972.12': @@ -5029,68 +5090,104 @@ packages: peerDependencies: '@aws-sdk/client-s3': ^3.1007.0 - '@aws-sdk/middleware-bucket-endpoint@3.972.8': - resolution: {integrity: sha512-WR525Rr2QJSETa9a050isktyWi/4yIGcmY3BQ1kpHqb0LqUglQHCS8R27dTJxxWNZvQ0RVGtEZjTCbZJpyF3Aw==} + '@aws-sdk/middleware-bucket-endpoint@3.972.7': + resolution: {integrity: sha512-goX+axlJ6PQlRnzE2bQisZ8wVrlm6dXJfBzMJhd8LhAIBan/w1Kl73fJnalM/S+18VnpzIHumyV6DtgmvqG5IA==} engines: {node: '>=20.0.0'} '@aws-sdk/middleware-eventstream@3.972.8': resolution: {integrity: sha512-r+oP+tbCxgqXVC3pu3MUVePgSY0ILMjA+aEwOosS77m3/DRbtvHrHwqvMcw+cjANMeGzJ+i0ar+n77KXpRA8RQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-expect-continue@3.972.8': - resolution: {integrity: sha512-5DTBTiotEES1e2jOHAq//zyzCjeMB78lEHd35u15qnrid4Nxm7diqIf9fQQ3Ov0ChH1V3Vvt13thOnrACmfGVQ==} + '@aws-sdk/middleware-expect-continue@3.972.7': + resolution: {integrity: sha512-mvWqvm61bmZUKmmrtl2uWbokqpenY3Mc3Jf4nXB/Hse6gWxLPaCQThmhPBDzsPSV8/Odn8V6ovWt3pZ7vy4BFQ==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/middleware-flexible-checksums@3.973.5': + resolution: {integrity: sha512-Dp3hqE5W6hG8HQ3Uh+AINx9wjjqYmFHbxede54sGj3akx/haIQrkp85lNdTdC+ouNUcSYNiuGkzmyDREfHX1Gg==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-flexible-checksums@3.974.5': - resolution: {integrity: sha512-SPSvF0G1t8m8CcB0L+ClNFszzQOvXaxmRj25oRWDf6aU+TuN2PXPFAJ9A6lt1IvX4oGAqqbTdMPTYs/SSHUYYQ==} + 
'@aws-sdk/middleware-host-header@3.972.7': + resolution: {integrity: sha512-aHQZgztBFEpDU1BB00VWCIIm85JjGjQW1OG9+98BdmaOpguJvzmXBGbnAiYcciCd+IS4e9BEq664lhzGnWJHgQ==} engines: {node: '>=20.0.0'} '@aws-sdk/middleware-host-header@3.972.8': resolution: {integrity: sha512-wAr2REfKsqoKQ+OkNqvOShnBoh+nkPurDKW7uAeVSu6kUECnWlSJiPvnoqxGlfousEY/v9LfS9sNc46hjSYDIQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-location-constraint@3.972.8': - resolution: {integrity: sha512-KaUoFuoFPziIa98DSQsTPeke1gvGXlc5ZGMhy+b+nLxZ4A7jmJgLzjEF95l8aOQN2T/qlPP3MrAyELm8ExXucw==} + '@aws-sdk/middleware-location-constraint@3.972.7': + resolution: {integrity: sha512-vdK1LJfffBp87Lj0Bw3WdK1rJk9OLDYdQpqoKgmpIZPe+4+HawZ6THTbvjhJt4C4MNnRrHTKHQjkwBiIpDBoig==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/middleware-logger@3.972.7': + resolution: {integrity: sha512-LXhiWlWb26txCU1vcI9PneESSeRp/RYY/McuM4SpdrimQR5NgwaPb4VJCadVeuGWgh6QmqZ6rAKSoL1ob16W6w==} engines: {node: '>=20.0.0'} '@aws-sdk/middleware-logger@3.972.8': resolution: {integrity: sha512-CWl5UCM57WUFaFi5kB7IBY1UmOeLvNZAZ2/OZ5l20ldiJ3TiIz1pC65gYj8X0BCPWkeR1E32mpsCk1L1I4n+lA==} engines: {node: '>=20.0.0'} + '@aws-sdk/middleware-recursion-detection@3.972.7': + resolution: {integrity: sha512-l2VQdcBcYLzIzykCHtXlbpiVCZ94/xniLIkAj0jpnpjY4xlgZx7f56Ypn+uV1y3gG0tNVytJqo3K9bfMFee7SQ==} + engines: {node: '>=20.0.0'} + '@aws-sdk/middleware-recursion-detection@3.972.9': resolution: {integrity: sha512-/Wt5+CT8dpTFQxEJ9iGy/UGrXr7p2wlIOEHvIr/YcHYByzoLjrqkYqXdJjd9UIgWjv7eqV2HnFJen93UTuwfTQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-sdk-s3@3.972.26': - resolution: {integrity: sha512-5q7UGSTtt7/KF0Os8wj2VZtlLxeWJVb0e2eDrDJlWot2EIxUNKDDMPFq/FowUqrwZ40rO2bu6BypxaKNvQhI+g==} + '@aws-sdk/middleware-sdk-s3@3.972.19': + resolution: {integrity: sha512-/CtOHHVFg4ZuN6CnLnYkrqWgVEnbOBC4kNiKa+4fldJ9cioDt3dD/f5vpq0cWLOXwmGL2zgVrVxNhjxWpxNMkg==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/middleware-ssec@3.972.7': + resolution: {integrity: 
sha512-G9clGVuAml7d8DYzY6DnRi7TIIDRvZ3YpqJPz/8wnWS5fYx/FNWNmkO6iJVlVkQg9BfeMzd+bVPtPJOvC4B+nQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-ssec@3.972.8': - resolution: {integrity: sha512-wqlK0yO/TxEC2UsY9wIlqeeutF6jjLe0f96Pbm40XscTo57nImUk9lBcw0dPgsm0sppFtAkSlDrfpK+pC30Wqw==} + '@aws-sdk/middleware-user-agent@3.972.20': + resolution: {integrity: sha512-3kNTLtpUdeahxtnJRnj/oIdLAUdzTfr9N40KtxNhtdrq+Q1RPMdCJINRXq37m4t5+r3H70wgC3opW46OzFcZYA==} engines: {node: '>=20.0.0'} - '@aws-sdk/middleware-user-agent@3.972.26': - resolution: {integrity: sha512-AilFIh4rI/2hKyyGN6XrB0yN96W2o7e7wyrPWCM6QjZM1mcC/pVkW3IWWRvuBWMpVP8Fg+rMpbzeLQ6dTM4gig==} + '@aws-sdk/middleware-user-agent@3.972.28': + resolution: {integrity: sha512-cfWZFlVh7Va9lRay4PN2A9ARFzaBYcA097InT5M2CdRS05ECF5yaz86jET8Wsl2WcyKYEvVr/QNmKtYtafUHtQ==} engines: {node: '>=20.0.0'} '@aws-sdk/middleware-websocket@3.972.14': resolution: {integrity: sha512-qnfDlIHjm6DrTYNvWOUbnZdVKgtoKbO/Qzj+C0Wp5Y7VUrsvBRQtGKxD+hc+mRTS4N0kBJ6iZ3+zxm4N1OSyjg==} engines: {node: '>= 14.0.0'} - '@aws-sdk/nested-clients@3.996.16': - resolution: {integrity: sha512-L7Qzoj/qQU1cL5GnYLQP5LbI+wlLCLoINvcykR3htKcQ4tzrPf2DOs72x933BM7oArYj1SKrkb2lGlsJHIic3g==} + '@aws-sdk/nested-clients@3.996.18': + resolution: {integrity: sha512-c7ZSIXrESxHKx2Mcopgd8AlzZgoXMr20fkx5ViPWPOLBvmyhw9VwJx/Govg8Ef/IhEon5R9l53Z8fdYSEmp6VA==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/nested-clients@3.996.8': + resolution: {integrity: sha512-6HlLm8ciMW8VzfB80kfIx16PBA9lOa9Dl+dmCBi78JDhvGlx3I7Rorwi5PpVRkL31RprXnYna3yBf6UKkD/PqA==} engines: {node: '>=20.0.0'} '@aws-sdk/region-config-resolver@3.972.10': resolution: {integrity: sha512-1dq9ToC6e070QvnVhhbAs3bb5r6cQ10gTVc6cyRV5uvQe7P138TV2uG2i6+Yok4bAkVAcx5AqkTEBUvWEtBlsQ==} engines: {node: '>=20.0.0'} - '@aws-sdk/signature-v4-multi-region@3.996.14': - resolution: {integrity: sha512-4nZSrBr1NO+48HCM/6BRU8mnRjuHZjcpziCvLXZk5QVftwWz5Mxqbhwdz4xf7WW88buaTB8uRO2MHklSX1m0vg==} + '@aws-sdk/region-config-resolver@3.972.7': + 
resolution: {integrity: sha512-/Ev/6AI8bvt4HAAptzSjThGUMjcWaX3GX8oERkB0F0F9x2dLSBdgFDiyrRz3i0u0ZFZFQ1b28is4QhyqXTUsVA==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/signature-v4-multi-region@3.996.7': + resolution: {integrity: sha512-mYhh7FY+7OOqjkYkd6+6GgJOsXK1xBWmuR+c5mxJPj2kr5TBNeZq+nUvE9kANWAux5UxDVrNOSiEM/wlHzC3Lg==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/token-providers@3.1005.0': + resolution: {integrity: sha512-vMxd+ivKqSxU9bHx5vmAlFKDAkjGotFU56IOkDa5DaTu1WWwbcse0yFHEm9I537oVvodaiwMl3VBwgHfzQ2rvw==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/token-providers@3.1021.0': + resolution: {integrity: sha512-TKY6h9spUk3OLs5v1oAgW9mAeBE3LAGNBwJokLy96wwmd4W2v/tYlXseProyed9ValDj2u1jK/4Rg1T+1NXyJA==} + engines: {node: '>=20.0.0'} + + '@aws-sdk/token-providers@3.1024.0': + resolution: {integrity: sha512-eoyTMgd6OzoE1dq50um5Y53NrosEkWsjH0W6pswi7vrv1W9hY/7hR43jDcPevqqj+OQksf/5lc++FTqRlb8Y1Q==} engines: {node: '>=20.0.0'} - '@aws-sdk/token-providers@3.1019.0': - resolution: {integrity: sha512-OF+2RfRmUKyjzrRWlDcyju3RBsuqcrYDQ8TwrJg8efcOotMzuZN4U9mpVTIdATpmEc4lWNZBMSjPzrGm6JPnAQ==} + '@aws-sdk/types@3.973.5': + resolution: {integrity: sha512-hl7BGwDCWsjH8NkZfx+HgS7H2LyM2lTMAI7ba9c8O0KqdBLTdNJivsHpqjg9rNlAlPyREb6DeDRXUl0s8uFdmQ==} engines: {node: '>=20.0.0'} '@aws-sdk/types@3.973.6': @@ -5101,6 +5198,10 @@ packages: resolution: {integrity: sha512-HzSD8PMFrvgi2Kserxuff5VitNq2sgf3w9qxmskKDiDTThWfVteJxuCS9JXiPIPtmCrp+7N9asfIaVhBFORllA==} engines: {node: '>=20.0.0'} + '@aws-sdk/util-endpoints@3.996.4': + resolution: {integrity: sha512-Hek90FBmd4joCFj+Vc98KLJh73Zqj3s2W56gjAcTkrNLMDI5nIFkG9YpfcJiVI1YlE2Ne1uOQNe+IgQ/Vz2XRA==} + engines: {node: '>=20.0.0'} + '@aws-sdk/util-endpoints@3.996.5': resolution: {integrity: sha512-Uh93L5sXFNbyR5sEPMzUU8tJ++Ku97EY4udmC01nB8Zu+xfBPwpIwJ6F7snqQeq8h2pf+8SGN5/NoytfKgYPIw==} engines: {node: '>=20.0.0'} @@ -5113,11 +5214,14 @@ packages: resolution: {integrity: 
sha512-WhlJNNINQB+9qtLtZJcpQdgZw3SCDCpXdUJP7cToGwHbCWCnRckGlc6Bx/OhWwIYFNAn+FIydY8SZ0QmVu3xTQ==} engines: {node: '>=20.0.0'} + '@aws-sdk/util-user-agent-browser@3.972.7': + resolution: {integrity: sha512-7SJVuvhKhMF/BkNS1n0QAJYgvEwYbK2QLKBrzDiwQGiTRU6Yf1f3nehTzm/l21xdAOtWSfp2uWSddPnP2ZtsVw==} + '@aws-sdk/util-user-agent-browser@3.972.8': resolution: {integrity: sha512-B3KGXJviV2u6Cdw2SDY2aDhoJkVfY/Q/Trwk2CMSkikE1Oi6gRzxhvhIfiRpHfmIsAhV4EA54TVEX8K6CbHbkA==} - '@aws-sdk/util-user-agent-node@3.973.12': - resolution: {integrity: sha512-8phW0TS8ntENJgDcFewYT/Q8dOmarpvSxEjATu2GUBAutiHr++oEGCiBUwxslCMNvwW2cAPZNT53S/ym8zm/gg==} + '@aws-sdk/util-user-agent-node@3.973.14': + resolution: {integrity: sha512-vNSB/DYaPOyujVZBg/zUznH9QC142MaTHVmaFlF7uzzfg3CgT9f/l4C0Yi+vU/tbBhxVcXVB90Oohk5+o+ZbWw==} engines: {node: '>=20.0.0'} peerDependencies: aws-crt: '>=1.0.0' @@ -5125,6 +5229,19 @@ packages: aws-crt: optional: true + '@aws-sdk/util-user-agent-node@3.973.5': + resolution: {integrity: sha512-Dyy38O4GeMk7UQ48RupfHif//gqnOPbq/zlvRssc11E2mClT+aUfc3VS2yD8oLtzqO3RsqQ9I3gOBB4/+HjPOw==} + engines: {node: '>=20.0.0'} + peerDependencies: + aws-crt: '>=1.0.0' + peerDependenciesMeta: + aws-crt: + optional: true + + '@aws-sdk/xml-builder@3.972.10': + resolution: {integrity: sha512-OnejAIVD+CxzyAUrVic7lG+3QRltyja9LoNqCE/1YVs8ichoTbJlVSaZ9iSMcnHLyzrSNtvaOGjSDRP+d/ouFA==} + engines: {node: '>=20.0.0'} + '@aws-sdk/xml-builder@3.972.16': resolution: {integrity: sha512-iu2pyvaqmeatIJLURLqx9D+4jKAdTH20ntzB6BFwjyN7V960r4jK32mx0Zf7YbtOYAbmbtQfDNuL60ONinyw7A==} engines: {node: '>=20.0.0'} @@ -5614,10 +5731,6 @@ packages: peerDependencies: '@babel/core': ^7.0.0-0 - '@babel/runtime@7.28.6': - resolution: {integrity: sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA==} - engines: {node: '>=6.9.0'} - '@babel/runtime@7.29.2': resolution: {integrity: sha512-JiDShH45zKHWyGe4ZNVRrCjBz8Nh9TMmZG1kh4QTK8hCBTWBi8Da+i7s1fJw7/lYpM4ccepSNfqzZ/QvABBi5g==} engines: {node: 
'>=6.9.0'} @@ -5865,7 +5978,6 @@ packages: '@clerk/types@4.101.12': resolution: {integrity: sha512-ePXOla3B1qgPtV0AzrLx2PVC3s/lsjOSYnuIFAxaIlRNT2+eb/BjeoqtTOcezwbdQ00jQ2RvXahdfZRSEuvZ7A==} engines: {node: '>=18.17.0'} - deprecated: 'This package is no longer supported. Please import types from @clerk/shared/types instead. See the upgrade guide for more info: https://clerk.com/docs/guides/development/upgrading/upgrade-guides/core-3' '@cloudflare/kv-asset-handler@0.4.0': resolution: {integrity: sha512-+tv3z+SPp+gqTIcImN9o0hqE9xyfQjI1XD9pL6NuKjua9B1y7mNYv0S9cP+QEbA4ppVgGZEmKOvHX5G5Ei1CVA==} @@ -5967,12 +6079,16 @@ packages: peerDependencies: '@bufbuild/protobuf': ^2.2.0 - '@copilotkit/llmock@1.6.0': - resolution: {integrity: sha512-wq4J7ampjoEiOi6v2d7GMK5lTZcTnuhMduSPCIwmyxBTCPA3lekXyNKGJ4t3xM5OgoJReMQ5KmlfrMBVTRNGsA==} + '@copilotkit/aimock@1.7.0': + resolution: {integrity: sha512-X6B2z0MgGTg8N/geRg6zRVVgEp3krP+gYapwXCt2w3JU7BSf2q0laa4iHC+BZqPXf29iVDVwDM7BxB5LqhjcAg==} engines: {node: '>=20.15.0'} deprecated: This package has moved to @copilotkit/aimock hasBin: true + '@copilotkit/llmock@1.7.1': + resolution: {integrity: sha512-IHBhkowTi8baM67Z5fpFcmeEPwNmzEfSWejZ1hmur/nlNKPdc28n0aD9DUdSfvaRN7wjtHgcOF6RizCgfoqaaQ==} + deprecated: This package has moved to @copilotkit/aimock + '@cspotcode/source-map-support@0.8.1': resolution: {integrity: sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==} engines: {node: '>=12'} @@ -7223,8 +7339,8 @@ packages: '@fortawesome/fontawesome-svg-core': ~6 || ~7 react: 19.1.0 - '@google/genai@1.47.0': - resolution: {integrity: sha512-0VV7AaXm5rQu3oRHNZNEubRAOL2lv5u+YA72eWnDwcOx3B1jFRbvtgL4drRHlocRHOnludvr3xmbQGbR+/RQAQ==} + '@google/genai@1.48.0': + resolution: {integrity: sha512-plonYK4ML2PrxsRD9SeqmFt76eREWkQdPCglOA6aYDzL1AAbE+7PUnT54SvpWGfws13L0AZEqGSpL7+1IPnTxQ==} engines: {node: '>=20.0.0'} peerDependencies: '@modelcontextprotocol/sdk': ^1.25.2 @@ -7969,6 +8085,11 @@ packages: resolution: 
{integrity: sha512-7G0Uf0yK3f2bjElBLGHIQzgRgMESczOMyYVasq1XK8P5HaXtlW4eQhz9MBL+TQILZLaruq+ClGId+hH0w4jvWw==} engines: {node: '>=18'} + '@napi-rs/cli@2.18.4': + resolution: {integrity: sha512-SgJeA4df9DE2iAEpr3M2H0OKl/yjtg1BnRI5/JyowS71tUWhrfSu2LT0V3vlHET+g1hBVlrO60PmEXwUEKp8Mg==} + engines: {node: '>= 10'} + hasBin: true + '@neophi/sieve-cache@1.5.0': resolution: {integrity: sha512-9T3nD5q51X1d4QYW6vouKW9hBSb2Tb/wB/2XoTr4oP5SCGtp3a7aTHHewQFylred1B21/Bhev6gy4x01FPBcbQ==} engines: {node: '>=18'} @@ -9998,10 +10119,6 @@ packages: resolution: {integrity: sha512-Hj4WoYWMJnSpM6/kchsm4bUNTL9XiSyhvoMb2KIq4VJzyDt7JpGHUZHkVNPZVC7YE1tf8tPeVauxpFBKGW4/KQ==} engines: {node: '>=18.0.0'} - '@smithy/abort-controller@4.2.12': - resolution: {integrity: sha512-xolrFw6b+2iYGl6EcOL7IJY71vvyZ0DJ3mcKtpykqPe2uscwtzDZJa1uVQXyP7w9Dd+kGwYnPbMsJrGISKiY/Q==} - engines: {node: '>=18.0.0'} - '@smithy/chunked-blob-reader-native@4.2.3': resolution: {integrity: sha512-jA5k5Udn7Y5717L86h4EIv06wIr3xn8GM1qHRi/Nf31annXcXHJjBKvgztnbn2TxH3xWrPBfgwHsOwZf0UmQWw==} engines: {node: '>=18.0.0'} @@ -10010,38 +10127,66 @@ packages: resolution: {integrity: sha512-St+kVicSyayWQca+I1rGitaOEH6uKgE8IUWoYnnEX26SWdWQcL6LvMSD19Lg+vYHKdT9B2Zuu7rd3i6Wnyb/iw==} engines: {node: '>=18.0.0'} + '@smithy/config-resolver@4.4.10': + resolution: {integrity: sha512-IRTkd6ps0ru+lTWnfnsbXzW80A8Od8p3pYiZnW98K2Hb20rqfsX7VTlfUwhrcOeSSy68Gn9WBofwPuw3e5CCsg==} + engines: {node: '>=18.0.0'} + '@smithy/config-resolver@4.4.13': resolution: {integrity: sha512-iIzMC5NmOUP6WL6o8iPBjFhUhBZ9pPjpUpQYWMUFQqKyXXzOftbfK8zcQCz/jFV1Psmf05BK5ypx4K2r4Tnwdg==} engines: {node: '>=18.0.0'} - '@smithy/core@3.23.12': - resolution: {integrity: sha512-o9VycsYNtgC+Dy3I0yrwCqv9CWicDnke0L7EVOrZtJpjb2t0EjaEofmMrYc0T1Kn3yk32zm6cspxF9u9Bj7e5w==} + '@smithy/core@3.23.13': + resolution: {integrity: sha512-J+2TT9D6oGsUVXVEMvz8h2EmdVnkBiy2auCie4aSJMvKlzUtO5hqjEzXhoCUkIMo7gAYjbQcN0g/MMSXEhDs1Q==} engines: {node: '>=18.0.0'} '@smithy/core@3.23.9': resolution: {integrity: 
sha512-1Vcut4LEL9HZsdpI0vFiRYIsaoPwZLjAxnVQDUMQK8beMS+EYPLDQCXtbzfxmM5GzSgjfe2Q9M7WaXwIMQllyQ==} engines: {node: '>=18.0.0'} + '@smithy/credential-provider-imds@4.2.11': + resolution: {integrity: sha512-lBXrS6ku0kTj3xLmsJW0WwqWbGQ6ueooYyp/1L9lkyT0M02C+DWwYwc5aTyXFbRaK38ojALxNixg+LxKSHZc0g==} + engines: {node: '>=18.0.0'} + '@smithy/credential-provider-imds@4.2.12': resolution: {integrity: sha512-cr2lR792vNZcYMriSIj+Um3x9KWrjcu98kn234xA6reOAFMmbRpQMOv8KPgEmLLtx3eldU6c5wALKFqNOhugmg==} engines: {node: '>=18.0.0'} + '@smithy/eventstream-codec@4.2.11': + resolution: {integrity: sha512-Sf39Ml0iVX+ba/bgMPxaXWAAFmHqYLTmbjAPfLPLY8CrYkRDEqZdUsKC1OwVMCdJXfAt0v4j49GIJ8DoSYAe6w==} + engines: {node: '>=18.0.0'} + '@smithy/eventstream-codec@4.2.12': resolution: {integrity: sha512-FE3bZdEl62ojmy8x4FHqxq2+BuOHlcxiH5vaZ6aqHJr3AIZzwF5jfx8dEiU/X0a8RboyNDjmXjlbr8AdEyLgiA==} engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-browser@4.2.11': + resolution: {integrity: sha512-3rEpo3G6f/nRS7fQDsZmxw/ius6rnlIpz4UX6FlALEzz8JoSxFmdBt0SZnthis+km7sQo6q5/3e+UJcuQivoXA==} + engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-browser@4.2.12': resolution: {integrity: sha512-XUSuMxlTxV5pp4VpqZf6Sa3vT/Q75FVkLSpSSE3KkWBvAQWeuWt1msTv8fJfgA4/jcJhrbrbMzN1AC/hvPmm5A==} engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-config-resolver@4.3.11': + resolution: {integrity: sha512-XeNIA8tcP/GDWnnKkO7qEm/bg0B/bP9lvIXZBXcGZwZ+VYM8h8k9wuDvUODtdQ2Wcp2RcBkPTCSMmaniVHrMlA==} + engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-config-resolver@4.3.12': resolution: {integrity: sha512-7epsAZ3QvfHkngz6RXQYseyZYHlmWXSTPOfPmXkiS+zA6TBNo1awUaMFL9vxyXlGdoELmCZyZe1nQE+imbmV+Q==} engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-node@4.2.11': + resolution: {integrity: sha512-fzbCh18rscBDTQSCrsp1fGcclLNF//nJyhjldsEl/5wCYmgpHblv5JSppQAyQI24lClsFT0wV06N1Porn0IsEw==} + engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-node@4.2.12': resolution: {integrity: 
sha512-D1pFuExo31854eAvg89KMn9Oab/wEeJR6Buy32B49A9Ogdtx5fwZPqBHUlDzaCDpycTFk2+fSQgX689Qsk7UGA==} engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-universal@4.2.11': + resolution: {integrity: sha512-MJ7HcI+jEkqoWT5vp+uoVaAjBrmxBtKhZTeynDRG/seEjJfqyg3SiqMMqyPnAMzmIfLaeJ/uiuSDP/l9AnMy/Q==} + engines: {node: '>=18.0.0'} + '@smithy/eventstream-serde-universal@4.2.12': resolution: {integrity: sha512-+yNuTiyBACxOJUTvbsNsSOfH9G9oKbaJE1lNL3YHpGcuucl6rPZMi3nrpehpVOVR2E07YqFFmtwpImtpzlouHQ==} engines: {node: '>=18.0.0'} @@ -10054,16 +10199,24 @@ packages: resolution: {integrity: sha512-T4jFU5N/yiIfrtrsb9uOQn7RdELdM/7HbyLNr6uO/mpkj1ctiVs7CihVr51w4LyQlXWDpXFn4BElf1WmQvZu/A==} engines: {node: '>=18.0.0'} - '@smithy/hash-blob-browser@4.2.13': - resolution: {integrity: sha512-YrF4zWKh+ghLuquldj6e/RzE3xZYL8wIPfkt0MqCRphVICjyyjH8OwKD7LLlKpVEbk4FLizFfC1+gwK6XQdR3g==} + '@smithy/hash-blob-browser@4.2.12': + resolution: {integrity: sha512-1wQE33DsxkM/waftAhCH9VtJbUGyt1PJ9YRDpOu+q9FUi73LLFUZ2fD8A61g2mT1UY9k7b99+V1xZ41Rz4SHRQ==} + engines: {node: '>=18.0.0'} + + '@smithy/hash-node@4.2.11': + resolution: {integrity: sha512-T+p1pNynRkydpdL015ruIoyPSRw9e/SQOWmSAMmmprfswMrd5Ow5igOWNVlvyVFZlxXqGmyH3NQwfwy8r5Jx0A==} engines: {node: '>=18.0.0'} '@smithy/hash-node@4.2.12': resolution: {integrity: sha512-QhBYbGrbxTkZ43QoTPrK72DoYviDeg6YKDrHTMJbbC+A0sml3kSjzFtXP7BtbyJnXojLfTQldGdUR0RGD8dA3w==} engines: {node: '>=18.0.0'} - '@smithy/hash-stream-node@4.2.12': - resolution: {integrity: sha512-O3YbmGExeafuM/kP7Y8r6+1y0hIh3/zn6GROx0uNlB54K9oihAL75Qtc+jFfLNliTi6pxOAYZrRKD9A7iA6UFw==} + '@smithy/hash-stream-node@4.2.11': + resolution: {integrity: sha512-hQsTjwPCRY8w9GK07w1RqJi3e+myh0UaOWBBhZ1UMSDgofH/Q1fEYzU1teaX6HkpX/eWDdm7tAGR0jBPlz9QEQ==} + engines: {node: '>=18.0.0'} + + '@smithy/invalid-dependency@4.2.11': + resolution: {integrity: sha512-cGNMrgykRmddrNhYy1yBdrp5GwIgEkniS7k9O1VLB38yxQtlvrxpZtUVvo6T4cKpeZsriukBuuxfJcdZQc/f/g==} engines: {node: '>=18.0.0'} '@smithy/invalid-dependency@4.2.12': 
@@ -10078,8 +10231,12 @@ packages: resolution: {integrity: sha512-n6rQ4N8Jj4YTQO3YFrlgZuwKodf4zUFs7EJIWH86pSCWBaAtAGBFfCM7Wx6D2bBJ2xqFNxGBSrUWswT3M0VJow==} engines: {node: '>=18.0.0'} - '@smithy/md5-js@4.2.12': - resolution: {integrity: sha512-W/oIpHCpWU2+iAkfZYyGWE+qkpuf3vEXHLxQQDx9FPNZTTdnul0dZ2d/gUFrtQ5je1G2kp4cjG0/24YueG2LbQ==} + '@smithy/md5-js@4.2.11': + resolution: {integrity: sha512-350X4kGIrty0Snx2OWv7rPM6p6vM7RzryvFs6B/56Cux3w3sChOb3bymo5oidXJlPcP9fIRxGUCk7GqpiSOtng==} + engines: {node: '>=18.0.0'} + + '@smithy/middleware-content-length@4.2.11': + resolution: {integrity: sha512-UvIfKYAKhCzr4p6jFevPlKhQwyQwlJ6IeKLDhmV1PlYfcW3RL4ROjNEDtSik4NYMi9kDkH7eSwyTP3vNJ/u/Dw==} engines: {node: '>=18.0.0'} '@smithy/middleware-content-length@4.2.12': @@ -10090,20 +10247,24 @@ packages: resolution: {integrity: sha512-UEFIejZy54T1EJn2aWJ45voB7RP2T+IRzUqocIdM6GFFa5ClZncakYJfcYnoXt3UsQrZZ9ZRauGm77l9UCbBLw==} engines: {node: '>=18.0.0'} - '@smithy/middleware-endpoint@4.4.27': - resolution: {integrity: sha512-T3TFfUgXQlpcg+UdzcAISdZpj4Z+XECZ/cefgA6wLBd6V4lRi0svN2hBouN/be9dXQ31X4sLWz3fAQDf+nt6BA==} + '@smithy/middleware-endpoint@4.4.28': + resolution: {integrity: sha512-p1gfYpi91CHcs5cBq982UlGlDrxoYUX6XdHSo91cQ2KFuz6QloHosO7Jc60pJiVmkWrKOV8kFYlGFFbQ2WUKKQ==} + engines: {node: '>=18.0.0'} + + '@smithy/middleware-retry@4.4.40': + resolution: {integrity: sha512-YhEMakG1Ae57FajERdHNZ4ShOPIY7DsgV+ZoAxo/5BT0KIe+f6DDU2rtIymNNFIj22NJfeeI6LWIifrwM0f+rA==} engines: {node: '>=18.0.0'} - '@smithy/middleware-retry@4.4.44': - resolution: {integrity: sha512-Y1Rav7m5CFRPQyM4CI0koD/bXjyjJu3EQxZZhtLGD88WIrBrQ7kqXM96ncd6rYnojwOo/u9MXu57JrEvu/nLrA==} + '@smithy/middleware-retry@4.4.46': + resolution: {integrity: sha512-SpvWNNOPOrKQGUqZbEPO+es+FRXMWvIyzUKUOYdDgdlA6BdZj/R58p4umoQ76c2oJC44PiM7mKizyyex1IJzow==} engines: {node: '>=18.0.0'} '@smithy/middleware-serde@4.2.12': resolution: {integrity: sha512-W9g1bOLui7Xn5FABRVS0o3rXL0gfN37d/8I/W7i0N7oxjx9QecUmXEMSUMADTODwdtka9cN43t5BI2CodLJpng==} 
engines: {node: '>=18.0.0'} - '@smithy/middleware-serde@4.2.15': - resolution: {integrity: sha512-ExYhcltZSli0pgAKOpQQe1DLFBLryeZ22605y/YS+mQpdNWekum9Ujb/jMKfJKgjtz1AZldtwA/wCYuKJgjjlg==} + '@smithy/middleware-serde@4.2.16': + resolution: {integrity: sha512-beqfV+RZ9RSv+sQqor3xroUUYgRFCGRw6niGstPG8zO9LgTl0B0MCucxjmrH/2WwksQN7UUgI7KNANoZv+KALA==} engines: {node: '>=18.0.0'} '@smithy/middleware-stack@4.2.11': @@ -10126,8 +10287,8 @@ packages: resolution: {integrity: sha512-DamSqaU8nuk0xTJDrYnRzZndHwwRnyj/n/+RqGGCcBKB4qrQem0mSDiWdupaNWdwxzyMU91qxDmHOCazfhtO3A==} engines: {node: '>=18.0.0'} - '@smithy/node-http-handler@4.5.0': - resolution: {integrity: sha512-Rnq9vQWiR1+/I6NZZMNzJHV6pZYyEHt2ZnuV3MG8z2NNenC4i/8Kzttz7CjZiHSmsN5frhXhg17z3Zqjjhmz1A==} + '@smithy/node-http-handler@4.5.1': + resolution: {integrity: sha512-ejjxdAXjkPIs9lyYyVutOGNOraqUE9v/NjGMKwwFrfOM354wfSD8lmlj8hVwUzQmlLLF4+udhfCX9Exnbmvfzw==} engines: {node: '>=18.0.0'} '@smithy/property-provider@4.2.11': @@ -10162,6 +10323,10 @@ packages: resolution: {integrity: sha512-P2OdvrgiAKpkPNKlKUtWbNZKB1XjPxM086NeVhK+W+wI46pIKdWBe5QyXvhUm3MEcyS/rkLvY8rZzyUdmyDZBw==} engines: {node: '>=18.0.0'} + '@smithy/service-error-classification@4.2.11': + resolution: {integrity: sha512-HkMFJZJUhzU3HvND1+Yw/kYWXp4RPDLBWLcK1n+Vqw8xn4y2YiBhdww8IxhkQjP/QlZun5bwm3vcHc8AqIU3zw==} + engines: {node: '>=18.0.0'} + '@smithy/service-error-classification@4.2.12': resolution: {integrity: sha512-LlP29oSQN0Tw0b6D0Xo6BIikBswuIiGYbRACy5ujw/JgWSzTdYj46U83ssf6Ux0GyNJVivs2uReU8pt7Eu9okQ==} engines: {node: '>=18.0.0'} @@ -10174,6 +10339,10 @@ packages: resolution: {integrity: sha512-HrOKWsUb+otTeo1HxVWeEb99t5ER1XrBi/xka2Wv6NVmTbuCUC1dvlrksdvxFtODLBjsC+PHK+fuy2x/7Ynyiw==} engines: {node: '>=18.0.0'} + '@smithy/signature-v4@5.3.11': + resolution: {integrity: sha512-V1L6N9aKOBAN4wEHLyqjLBnAz13mtILU0SeDrjOaIZEeN6IFa6DxwRt1NNpOdmSpQUfkBj0qeD3m6P77uzMhgQ==} + engines: {node: '>=18.0.0'} + '@smithy/signature-v4@5.3.12': resolution: {integrity: 
sha512-B/FBwO3MVOL00DaRSXfXfa/TRXRheagt/q5A2NM13u7q+sHS59EOVGQNfG7DkmVtdQm5m3vOosoKAXSqn/OEgw==} engines: {node: '>=18.0.0'} @@ -10182,8 +10351,8 @@ packages: resolution: {integrity: sha512-7k4UxjSpHmPN2AxVhvIazRSzFQjWnud3sOsXcFStzagww17j1cFQYqTSiQ8xuYK3vKLR1Ni8FzuT3VlKr3xCNw==} engines: {node: '>=18.0.0'} - '@smithy/smithy-client@4.12.7': - resolution: {integrity: sha512-q3gqnwml60G44FECaEEsdQMplYhDMZYCtYhMCzadCnRnnHIobZJjegmdoUo6ieLQlPUzvrMdIJUpx6DoPmzANQ==} + '@smithy/smithy-client@4.12.8': + resolution: {integrity: sha512-aJaAX7vHe5i66smoSSID7t4rKY08PbD8EBU7DOloixvhOozfYWdcSYE4l6/tjkZ0vBZhGjheWzB2mh31sLgCMA==} engines: {node: '>=18.0.0'} '@smithy/types@4.13.0': @@ -10226,12 +10395,24 @@ packages: resolution: {integrity: sha512-dWU03V3XUprJwaUIFVv4iOnS1FC9HnMHDfUrlNDSh4315v0cWyaIErP8KiqGVbf5z+JupoVpNM7ZB3jFiTejvQ==} engines: {node: '>=18.0.0'} - '@smithy/util-defaults-mode-browser@4.3.43': - resolution: {integrity: sha512-Qd/0wCKMaXxev/z00TvNzGCH2jlKKKxXP1aDxB6oKwSQthe3Og2dMhSayGCnsma1bK/kQX1+X7SMP99t6FgiiQ==} + '@smithy/util-defaults-mode-browser@4.3.39': + resolution: {integrity: sha512-ui7/Ho/+VHqS7Km2wBw4/Ab4RktoiSshgcgpJzC4keFPs6tLJS4IQwbeahxQS3E/w98uq6E1mirCH/id9xIXeQ==} engines: {node: '>=18.0.0'} - '@smithy/util-defaults-mode-node@4.2.47': - resolution: {integrity: sha512-qSRbYp1EQ7th+sPFuVcVO05AE0QH635hycdEXlpzIahqHHf2Fyd/Zl+8v0XYMJ3cgDVPa0lkMefU7oNUjAP+DQ==} + '@smithy/util-defaults-mode-browser@4.3.44': + resolution: {integrity: sha512-eZg6XzaCbVr2S5cAErU5eGBDaOVTuTo1I65i4tQcHENRcZ8rMWhQy1DaIYUSLyZjsfXvmCqZrstSMYyGFocvHA==} + engines: {node: '>=18.0.0'} + + '@smithy/util-defaults-mode-node@4.2.42': + resolution: {integrity: sha512-QDA84CWNe8Akpj15ofLO+1N3Rfg8qa2K5uX0y6HnOp4AnRYRgWrKx/xzbYNbVF9ZsyJUYOfcoaN3y93wA/QJ2A==} + engines: {node: '>=18.0.0'} + + '@smithy/util-defaults-mode-node@4.2.48': + resolution: {integrity: sha512-FqOKTlqSaoV3nzO55pMs5NBnZX8EhoI0DGmn9kbYeXWppgHD6dchyuj2HLqp4INJDJbSrj6OFYJkAh/WhSzZPg==} + engines: {node: '>=18.0.0'} + + 
'@smithy/util-endpoints@3.3.2': + resolution: {integrity: sha512-+4HFLpE5u29AbFlTdlKIT7jfOzZ8PDYZKTb3e+AgLz986OYwqTourQ5H+jg79/66DB69Un1+qKecLnkZdAsYcA==} engines: {node: '>=18.0.0'} '@smithy/util-endpoints@3.3.3': @@ -10250,16 +10431,20 @@ packages: resolution: {integrity: sha512-Er805uFUOvgc0l8nv0e0su0VFISoxhJ/AwOn3gL2NWNY2LUEldP5WtVcRYSQBcjg0y9NfG8JYrCJaYDpupBHJQ==} engines: {node: '>=18.0.0'} - '@smithy/util-retry@4.2.12': - resolution: {integrity: sha512-1zopLDUEOwumjcHdJ1mwBHddubYF8GMQvstVCLC54Y46rqoHwlIU+8ZzUeaBcD+WCJHyDGSeZ2ml9YSe9aqcoQ==} + '@smithy/util-retry@4.2.11': + resolution: {integrity: sha512-XSZULmL5x6aCTTii59wJqKsY1l3eMIAomRAccW7Tzh9r8s7T/7rdo03oektuH5jeYRlJMPcNP92EuRDvk9aXbw==} + engines: {node: '>=18.0.0'} + + '@smithy/util-retry@4.2.13': + resolution: {integrity: sha512-qQQsIvL0MGIbUjeSrg0/VlQ3jGNKyM3/2iU3FPNgy01z+Sp4OvcaxbgIoFOTvB61ZoohtutuOvOcgmhbD0katQ==} engines: {node: '>=18.0.0'} '@smithy/util-stream@4.5.17': resolution: {integrity: sha512-793BYZ4h2JAQkNHcEnyFxDTcZbm9bVybD0UV/LEWmZ5bkTms7JqjfrLMi2Qy0E5WFcCzLwCAPgcvcvxoeALbAQ==} engines: {node: '>=18.0.0'} - '@smithy/util-stream@4.5.20': - resolution: {integrity: sha512-4yXLm5n/B5SRBR2p8cZ90Sbv4zL4NKsgxdzCzp/83cXw2KxLEumt5p+GAVyRNZgQOSrzXn9ARpO0lUe8XSlSDw==} + '@smithy/util-stream@4.5.21': + resolution: {integrity: sha512-KzSg+7KKywLnkoKejRtIBXDmwBfjGvg1U1i/etkC7XSWUyFCoLno1IohV2c74IzQqdhX5y3uE44r/8/wuK+A7Q==} engines: {node: '>=18.0.0'} '@smithy/util-uri-escape@4.2.2': @@ -10274,8 +10459,8 @@ packages: resolution: {integrity: sha512-75MeYpjdWRe8M5E3AW0O4Cx3UadweS+cwdXjwYGBW5h/gxxnbeZ877sLPX/ZJA9GVTlL/qG0dXP29JWFCD1Ayw==} engines: {node: '>=18.0.0'} - '@smithy/util-waiter@4.2.13': - resolution: {integrity: sha512-2zdZ9DTHngRtcYxJK1GUDxruNr53kv5W2Lupe0LMU+Imr6ohQg8M2T14MNkj1Y0wS3FFwpgpGQyvuaMF7CiTmQ==} + '@smithy/util-waiter@4.2.12': + resolution: {integrity: sha512-ek5hyDrzS6mBFsNCEX8LpM+EWSLq6b9FdmPRlkpXXhiJE6aIZehKT9clC6+nFpZAA+i/Yg0xlaPeWGNbf5rzQA==} engines: {node: '>=18.0.0'} 
'@smithy/uuid@1.1.2': @@ -11014,8 +11199,8 @@ packages: resolution: {integrity: sha512-UycprH3T6n3jH0k44NHMa7pnFHGu/N05MjojYr+Mc6I7obkoLIJujSWwin1pCvdy/eOxrI/l3uDLQsmcrOb4ug==} engines: {node: '>= 20'} - '@vercel/sandbox@1.9.0': - resolution: {integrity: sha512-zgr1ad0tkT1xZn/8Vxo60wOUOLqMAVGo4WqJQ8/UDcUtWynNJsBjI2tiMdWZrAo9EKH1MIqEzJNkcclF0UT1EQ==} + '@vercel/sandbox@1.9.2': + resolution: {integrity: sha512-tKPKisnf9YSmqCr1X4mThLjNacTnWMmAfzYfoEul1aILdMHKpsECUBae9FASWL+PsZpT4hi1QrcSHkPXX212rw==} '@visx/axis@3.12.0': resolution: {integrity: sha512-8MoWpfuaJkhm2Yg+HwyytK8nk+zDugCqTT/tRmQX7gW4LYrHYLXFUXOzbDyyBakCVaUbUaAhVFxpMASJiQKf7A==} @@ -11287,6 +11472,9 @@ packages: '@webgpu/types@0.1.69': resolution: {integrity: sha512-RPmm6kgRbI8e98zSD3RVACvnuktIja5+yLgDAkTmxLr90BEwdTXRQWNLF3ETTTyH/8mKhznZuN5AveXYFEsMGQ==} + '@workflow/serde@4.1.0-beta.2': + resolution: {integrity: sha512-8kkeoQKLDaKXefjV5dbhBj2aErfKp1Mc4pb6tj8144cF+Em5SPbyMbyLCHp+BVrFfFVCBluCtMx+jjvaFVZGww==} + '@xmldom/xmldom@0.7.13': resolution: {integrity: sha512-lm2GW5PkosIzccsaZIz7tp8cPADSIlIHWDFTR1N0SzfinhhYgeIQjFMz4rYzanCScr3DqQLeomUDArp6MWKm+g==} engines: {node: '>=10.0.0'} @@ -11848,6 +12036,10 @@ packages: resolution: {integrity: sha512-fy6KJm2RawA5RcHkLa1z/ScpBeA762UF9KmZQxwIbDtRJrgLzM10depAiEQ+CXYcoiqW1/m96OAAoke2nE9EeA==} engines: {node: 18 || 20 || >=22} + brace-expansion@5.0.5: + resolution: {integrity: sha512-VZznLgtwhn+Mact9tfiwx64fA9erHH/MCXEUfB/0bX/6Fz6ny5EGTXYltMocqg4xFAQZtnO3DHWWXi8RiuN7cQ==} + engines: {node: 18 || 20 || >=22} + braces@3.0.3: resolution: {integrity: sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==} engines: {node: '>=8'} @@ -12297,8 +12489,8 @@ packages: computeds@0.0.1: resolution: {integrity: sha512-7CEBgcMjVmitjYo5q8JTJVra6X5mQ20uTThdK+0kR7UEaDrAWEQcRiBtWJzga4eRpP6afNwwLsX2SET2JhVB1Q==} - computesdk@2.5.3: - resolution: {integrity: 
sha512-YR3xLnBYokxNC/IdDXPiwIWd2dA/gud3wqXpOQbUhYhk7Kuk5Gz2sAko+naQTfZVB5zXVwOqjQUytJF3TL5uxg==} + computesdk@2.5.4: + resolution: {integrity: sha512-5y705cJcGo8TwD9oPxRsfQ+G2oqslv/bCfGC1vAUA7p5xdL7ScIEI2bVYJJy10gnFyeHHgNHwMZ++tesB0PLjg==} concat-map@0.0.1: resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==} @@ -13598,9 +13790,16 @@ packages: fast-uri@3.1.0: resolution: {integrity: sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA==} + fast-xml-builder@1.1.2: + resolution: {integrity: sha512-NJAmiuVaJEjVa7TjLZKlYd7RqmzOC91EtPFXHvlTcqBVo50Qh7XV5IwvXi1c7NRz2Q/majGX9YLcwJtWgHjtkA==} + fast-xml-builder@1.1.4: resolution: {integrity: sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg==} + fast-xml-parser@5.4.1: + resolution: {integrity: sha512-BQ30U1mKkvXQXXkAGcuyUA/GA26oEB7NzOtsxCDtyu62sjGw5QraKFhx2Em3WQNjPw9PG6MQ9yuIIgkSDfGu5A==} + hasBin: true + fast-xml-parser@5.5.8: resolution: {integrity: sha512-Z7Fh2nVQSb2d+poDViM063ix2ZGt9jmY1nWhPfHBOK2Hgnb/OW3P4Et3P/81SEej0J7QbWtJqxO05h8QYfK7LQ==} hasBin: true @@ -14746,8 +14945,8 @@ packages: resolution: {integrity: sha512-zPPuIt+ku1iCpFBRwseMcPYQ1cJL8l60rSmKeOuGfOXyE6YnTBmf2aEFNL2HQGrD0cPcLO/t+v9RTgC+fwEh/g==} engines: {node: ^4.8.4 || ^6.10.1 || ^7.10.1 || >= 8.1.4} - koffi@2.15.2: - resolution: {integrity: sha512-r9tjJLVRSOhCRWdVyQlF3/Ugzeg13jlzS4czS82MAgLff4W+BcYOW7g8Y62t9O5JYjYOLAjAovAZDNlDfZNu+g==} + koffi@2.15.4: + resolution: {integrity: sha512-6l7xxt8heHWQ63WyGd8ofne4TrzhqeKHhvSlI3GnxMIHp3PlDrOPyZbW5YNINXNma1qrKkpM/PGLY8U0V8Hxbw==} kolorist@1.8.0: resolution: {integrity: sha512-Y+60/zizpJ3HRH8DCss+q95yr6145JXZo46OTpFvDZWLfRCE4qChOyk1b26nMaNpfHHgxagk9dXT5OP0Tfe+dQ==} @@ -15513,8 +15712,8 @@ packages: resolution: {integrity: sha512-+G4CpNBxa5MprY+04MbgOw1v7So6n5JY166pFi9KfYwT78fxScCeSNQSNzp6dpPSW2rONOps6Ocam1wFhCgoVw==} engines: {node: 18 || 20 || >=22} - 
minimatch@10.2.4: - resolution: {integrity: sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==} + minimatch@10.2.5: + resolution: {integrity: sha512-MULkVLfKGYDFYejP07QOurDLLQpcjk7Fw+7jXS2R2czRQzR56yHRveU5NDJEOviH+hETZKSkIk5c+T23GjFUMg==} engines: {node: 18 || 20 || >=22} minimatch@3.0.8: @@ -15568,8 +15767,8 @@ packages: mlly@1.8.0: resolution: {integrity: sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g==} - modal@0.7.3: - resolution: {integrity: sha512-4CliqNF15sZPBGpSoCj5Y9fd8fTp1ONrBLIJiC4amm/Qzc1rn8CH45SVzSu+1DokHCIRiZqQ1xMhRKpDvDCkBw==} + modal@0.7.4: + resolution: {integrity: sha512-md/+L67tM1RazAt2xvLO+gUqRz6zllyYoNNiM8h+Eb1wLy7JzliH7vnx9f9Sq4zE3qQHENpX0Tjy/LSkIyrANA==} module-details-from-path@1.0.4: resolution: {integrity: sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w==} @@ -16154,8 +16353,12 @@ packages: resolution: {integrity: sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==} engines: {node: '>=8'} - path-expression-matcher@1.2.0: - resolution: {integrity: sha512-DwmPWeFn+tq7TiyJ2CxezCAirXjFxvaiD03npak3cRjlP9+OjTmSy1EpIrEbh+l6JgUundniloMLDQ/6VTdhLQ==} + path-expression-matcher@1.1.3: + resolution: {integrity: sha512-qdVgY8KXmVdJZRSS1JdEPOKPdTiEK/pi0RkcT2sw1RhXxohdujUlJFPuS1TSkevZ9vzd3ZlL7ULl1MHGTApKzQ==} + engines: {node: '>=14.0.0'} + + path-expression-matcher@1.2.1: + resolution: {integrity: sha512-d7gQQmLvAKXKXE2GeP9apIGbMYKz88zWdsn/BN2HRWVQsDFdUY36WSLTY0Jvd4HWi7Fb30gQ62oAOzdgJA6fZw==} engines: {node: '>=14.0.0'} path-is-absolute@1.0.1: @@ -18186,8 +18389,8 @@ packages: resolution: {integrity: sha512-Vqs8HTzjpQXZeXdpsfChQTlafcMQaaIwnGwLam1wudSSjlJeQ3bw1j+TLPePgrCnCpUXx7Ba5Pdpf5OBih62NQ==} engines: {node: '>=20.18.1'} - undici@7.24.6: - resolution: {integrity: sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA==} + 
undici@7.24.7: + resolution: {integrity: sha512-H/nlJ/h0ggGC+uRL3ovD+G0i4bqhvsDOpbDv7At5eFLlj2b41L8QliGbnl2H7SnDiYhENphh1tQFJZf+MyfLsQ==} engines: {node: '>=20.18.1'} unenv@2.0.0-rc.21: @@ -19509,20 +19712,20 @@ snapshots: '@aws-crypto/crc32@5.2.0': dependencies: '@aws-crypto/util': 5.2.0 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 tslib: 2.8.1 '@aws-crypto/crc32c@5.2.0': dependencies: '@aws-crypto/util': 5.2.0 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 tslib: 2.8.1 '@aws-crypto/sha1-browser@5.2.0': dependencies: '@aws-crypto/supports-web-crypto': 5.2.0 '@aws-crypto/util': 5.2.0 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 '@aws-sdk/util-locate-window': 3.965.5 '@smithy/util-utf8': 2.3.0 tslib: 2.8.1 @@ -19532,7 +19735,7 @@ snapshots: '@aws-crypto/sha256-js': 5.2.0 '@aws-crypto/supports-web-crypto': 5.2.0 '@aws-crypto/util': 5.2.0 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 '@aws-sdk/util-locate-window': 3.965.5 '@smithy/util-utf8': 2.3.0 tslib: 2.8.1 @@ -19540,7 +19743,7 @@ snapshots: '@aws-crypto/sha256-js@5.2.0': dependencies: '@aws-crypto/util': 5.2.0 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 tslib: 2.8.1 '@aws-crypto/supports-web-crypto@5.2.0': @@ -19549,31 +19752,31 @@ snapshots: '@aws-crypto/util@5.2.0': dependencies: - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 '@smithy/util-utf8': 2.3.0 tslib: 2.8.1 - '@aws-sdk/client-bedrock-runtime@3.1019.0': + '@aws-sdk/client-bedrock-runtime@3.1024.0': dependencies: '@aws-crypto/sha256-browser': 5.2.0 '@aws-crypto/sha256-js': 5.2.0 - '@aws-sdk/core': 3.973.25 - '@aws-sdk/credential-provider-node': 3.972.27 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/credential-provider-node': 3.972.29 '@aws-sdk/eventstream-handler-node': 3.972.12 '@aws-sdk/middleware-eventstream': 3.972.8 '@aws-sdk/middleware-host-header': 3.972.8 '@aws-sdk/middleware-logger': 3.972.8 '@aws-sdk/middleware-recursion-detection': 3.972.9 - '@aws-sdk/middleware-user-agent': 3.972.26 + 
'@aws-sdk/middleware-user-agent': 3.972.28 '@aws-sdk/middleware-websocket': 3.972.14 '@aws-sdk/region-config-resolver': 3.972.10 - '@aws-sdk/token-providers': 3.1019.0 + '@aws-sdk/token-providers': 3.1024.0 '@aws-sdk/types': 3.973.6 '@aws-sdk/util-endpoints': 3.996.5 '@aws-sdk/util-user-agent-browser': 3.972.8 - '@aws-sdk/util-user-agent-node': 3.973.12 + '@aws-sdk/util-user-agent-node': 3.973.14 '@smithy/config-resolver': 4.4.13 - '@smithy/core': 3.23.12 + '@smithy/core': 3.23.13 '@smithy/eventstream-serde-browser': 4.2.12 '@smithy/eventstream-serde-config-resolver': 4.3.12 '@smithy/eventstream-serde-node': 4.2.12 @@ -19581,142 +19784,198 @@ snapshots: '@smithy/hash-node': 4.2.12 '@smithy/invalid-dependency': 4.2.12 '@smithy/middleware-content-length': 4.2.12 - '@smithy/middleware-endpoint': 4.4.27 - '@smithy/middleware-retry': 4.4.44 - '@smithy/middleware-serde': 4.2.15 + '@smithy/middleware-endpoint': 4.4.28 + '@smithy/middleware-retry': 4.4.46 + '@smithy/middleware-serde': 4.2.16 '@smithy/middleware-stack': 4.2.12 '@smithy/node-config-provider': 4.3.12 - '@smithy/node-http-handler': 4.5.0 + '@smithy/node-http-handler': 4.5.1 '@smithy/protocol-http': 5.3.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 '@smithy/url-parser': 4.2.12 '@smithy/util-base64': 4.3.2 '@smithy/util-body-length-browser': 4.2.2 '@smithy/util-body-length-node': 4.2.3 - '@smithy/util-defaults-mode-browser': 4.3.43 - '@smithy/util-defaults-mode-node': 4.2.47 + '@smithy/util-defaults-mode-browser': 4.3.44 + '@smithy/util-defaults-mode-node': 4.2.48 '@smithy/util-endpoints': 3.3.3 '@smithy/util-middleware': 4.2.12 - '@smithy/util-retry': 4.2.12 - '@smithy/util-stream': 4.5.20 + '@smithy/util-retry': 4.2.13 + '@smithy/util-stream': 4.5.21 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 transitivePeerDependencies: - aws-crt - '@aws-sdk/client-s3@3.1019.0': + '@aws-sdk/client-s3@3.1007.0': dependencies: '@aws-crypto/sha1-browser': 5.2.0 
'@aws-crypto/sha256-browser': 5.2.0 '@aws-crypto/sha256-js': 5.2.0 - '@aws-sdk/core': 3.973.25 - '@aws-sdk/credential-provider-node': 3.972.27 - '@aws-sdk/middleware-bucket-endpoint': 3.972.8 - '@aws-sdk/middleware-expect-continue': 3.972.8 - '@aws-sdk/middleware-flexible-checksums': 3.974.5 - '@aws-sdk/middleware-host-header': 3.972.8 - '@aws-sdk/middleware-location-constraint': 3.972.8 - '@aws-sdk/middleware-logger': 3.972.8 - '@aws-sdk/middleware-recursion-detection': 3.972.9 - '@aws-sdk/middleware-sdk-s3': 3.972.26 - '@aws-sdk/middleware-ssec': 3.972.8 - '@aws-sdk/middleware-user-agent': 3.972.26 - '@aws-sdk/region-config-resolver': 3.972.10 - '@aws-sdk/signature-v4-multi-region': 3.996.14 - '@aws-sdk/types': 3.973.6 - '@aws-sdk/util-endpoints': 3.996.5 - '@aws-sdk/util-user-agent-browser': 3.972.8 - '@aws-sdk/util-user-agent-node': 3.973.12 - '@smithy/config-resolver': 4.4.13 - '@smithy/core': 3.23.12 - '@smithy/eventstream-serde-browser': 4.2.12 - '@smithy/eventstream-serde-config-resolver': 4.3.12 - '@smithy/eventstream-serde-node': 4.2.12 - '@smithy/fetch-http-handler': 5.3.15 - '@smithy/hash-blob-browser': 4.2.13 - '@smithy/hash-node': 4.2.12 - '@smithy/hash-stream-node': 4.2.12 - '@smithy/invalid-dependency': 4.2.12 - '@smithy/md5-js': 4.2.12 - '@smithy/middleware-content-length': 4.2.12 - '@smithy/middleware-endpoint': 4.4.27 - '@smithy/middleware-retry': 4.4.44 - '@smithy/middleware-serde': 4.2.15 - '@smithy/middleware-stack': 4.2.12 - '@smithy/node-config-provider': 4.3.12 - '@smithy/node-http-handler': 4.5.0 - '@smithy/protocol-http': 5.3.12 - '@smithy/smithy-client': 4.12.7 - '@smithy/types': 4.13.1 - '@smithy/url-parser': 4.2.12 + '@aws-sdk/core': 3.973.19 + '@aws-sdk/credential-provider-node': 3.972.19 + '@aws-sdk/middleware-bucket-endpoint': 3.972.7 + '@aws-sdk/middleware-expect-continue': 3.972.7 + '@aws-sdk/middleware-flexible-checksums': 3.973.5 + '@aws-sdk/middleware-host-header': 3.972.7 + '@aws-sdk/middleware-location-constraint': 3.972.7 + 
'@aws-sdk/middleware-logger': 3.972.7 + '@aws-sdk/middleware-recursion-detection': 3.972.7 + '@aws-sdk/middleware-sdk-s3': 3.972.19 + '@aws-sdk/middleware-ssec': 3.972.7 + '@aws-sdk/middleware-user-agent': 3.972.20 + '@aws-sdk/region-config-resolver': 3.972.7 + '@aws-sdk/signature-v4-multi-region': 3.996.7 + '@aws-sdk/types': 3.973.5 + '@aws-sdk/util-endpoints': 3.996.4 + '@aws-sdk/util-user-agent-browser': 3.972.7 + '@aws-sdk/util-user-agent-node': 3.973.5 + '@smithy/config-resolver': 4.4.10 + '@smithy/core': 3.23.9 + '@smithy/eventstream-serde-browser': 4.2.11 + '@smithy/eventstream-serde-config-resolver': 4.3.11 + '@smithy/eventstream-serde-node': 4.2.11 + '@smithy/fetch-http-handler': 5.3.13 + '@smithy/hash-blob-browser': 4.2.12 + '@smithy/hash-node': 4.2.11 + '@smithy/hash-stream-node': 4.2.11 + '@smithy/invalid-dependency': 4.2.11 + '@smithy/md5-js': 4.2.11 + '@smithy/middleware-content-length': 4.2.11 + '@smithy/middleware-endpoint': 4.4.23 + '@smithy/middleware-retry': 4.4.40 + '@smithy/middleware-serde': 4.2.12 + '@smithy/middleware-stack': 4.2.11 + '@smithy/node-config-provider': 4.3.11 + '@smithy/node-http-handler': 4.4.14 + '@smithy/protocol-http': 5.3.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + '@smithy/url-parser': 4.2.11 '@smithy/util-base64': 4.3.2 '@smithy/util-body-length-browser': 4.2.2 '@smithy/util-body-length-node': 4.2.3 - '@smithy/util-defaults-mode-browser': 4.3.43 - '@smithy/util-defaults-mode-node': 4.2.47 - '@smithy/util-endpoints': 3.3.3 - '@smithy/util-middleware': 4.2.12 - '@smithy/util-retry': 4.2.12 - '@smithy/util-stream': 4.5.20 + '@smithy/util-defaults-mode-browser': 4.3.39 + '@smithy/util-defaults-mode-node': 4.2.42 + '@smithy/util-endpoints': 3.3.2 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-retry': 4.2.11 + '@smithy/util-stream': 4.5.17 '@smithy/util-utf8': 4.2.2 - '@smithy/util-waiter': 4.2.13 + '@smithy/util-waiter': 4.2.12 tslib: 2.8.1 transitivePeerDependencies: - aws-crt - 
'@aws-sdk/core@3.973.25': + '@aws-sdk/core@3.973.19': + dependencies: + '@aws-sdk/types': 3.973.5 + '@aws-sdk/xml-builder': 3.972.10 + '@smithy/core': 3.23.9 + '@smithy/node-config-provider': 4.3.11 + '@smithy/property-provider': 4.2.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/signature-v4': 5.3.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + '@smithy/util-base64': 4.3.2 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-utf8': 4.2.2 + tslib: 2.8.1 + + '@aws-sdk/core@3.973.26': dependencies: '@aws-sdk/types': 3.973.6 '@aws-sdk/xml-builder': 3.972.16 - '@smithy/core': 3.23.12 + '@smithy/core': 3.23.13 '@smithy/node-config-provider': 4.3.12 '@smithy/property-provider': 4.2.12 '@smithy/protocol-http': 5.3.12 '@smithy/signature-v4': 5.3.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 '@smithy/util-base64': 4.3.2 '@smithy/util-middleware': 4.2.12 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 - '@aws-sdk/crc64-nvme@3.972.5': + '@aws-sdk/crc64-nvme@3.972.4': dependencies: - '@smithy/types': 4.13.1 + '@smithy/types': 4.13.0 tslib: 2.8.1 - '@aws-sdk/credential-provider-env@3.972.23': + '@aws-sdk/credential-provider-env@3.972.17': dependencies: - '@aws-sdk/core': 3.973.25 + '@aws-sdk/core': 3.973.19 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/credential-provider-env@3.972.24': + dependencies: + '@aws-sdk/core': 3.973.26 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/credential-provider-http@3.972.25': + '@aws-sdk/credential-provider-http@3.972.19': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/types': 3.973.5 + '@smithy/fetch-http-handler': 5.3.13 + '@smithy/node-http-handler': 4.4.14 + '@smithy/property-provider': 4.2.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + '@smithy/util-stream': 4.5.17 + tslib: 2.8.1 + + 
'@aws-sdk/credential-provider-http@3.972.26': dependencies: - '@aws-sdk/core': 3.973.25 + '@aws-sdk/core': 3.973.26 '@aws-sdk/types': 3.973.6 '@smithy/fetch-http-handler': 5.3.15 - '@smithy/node-http-handler': 4.5.0 + '@smithy/node-http-handler': 4.5.1 '@smithy/property-provider': 4.2.12 '@smithy/protocol-http': 5.3.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 - '@smithy/util-stream': 4.5.20 + '@smithy/util-stream': 4.5.21 tslib: 2.8.1 - '@aws-sdk/credential-provider-ini@3.972.26': + '@aws-sdk/credential-provider-ini@3.972.18': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/credential-provider-env': 3.972.17 + '@aws-sdk/credential-provider-http': 3.972.19 + '@aws-sdk/credential-provider-login': 3.972.18 + '@aws-sdk/credential-provider-process': 3.972.17 + '@aws-sdk/credential-provider-sso': 3.972.18 + '@aws-sdk/credential-provider-web-identity': 3.972.18 + '@aws-sdk/nested-clients': 3.996.8 + '@aws-sdk/types': 3.973.5 + '@smithy/credential-provider-imds': 4.2.11 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/credential-provider-ini@3.972.28': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/credential-provider-env': 3.972.23 - '@aws-sdk/credential-provider-http': 3.972.25 - '@aws-sdk/credential-provider-login': 3.972.26 - '@aws-sdk/credential-provider-process': 3.972.23 - '@aws-sdk/credential-provider-sso': 3.972.26 - '@aws-sdk/credential-provider-web-identity': 3.972.26 - '@aws-sdk/nested-clients': 3.996.16 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/credential-provider-env': 3.972.24 + '@aws-sdk/credential-provider-http': 3.972.26 + '@aws-sdk/credential-provider-login': 3.972.28 + '@aws-sdk/credential-provider-process': 3.972.24 + '@aws-sdk/credential-provider-sso': 3.972.28 + '@aws-sdk/credential-provider-web-identity': 3.972.28 + '@aws-sdk/nested-clients': 3.996.18 
'@aws-sdk/types': 3.973.6 '@smithy/credential-provider-imds': 4.2.12 '@smithy/property-provider': 4.2.12 @@ -19726,10 +19985,23 @@ snapshots: transitivePeerDependencies: - aws-crt - '@aws-sdk/credential-provider-login@3.972.26': + '@aws-sdk/credential-provider-login@3.972.18': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/nested-clients': 3.996.8 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/credential-provider-login@3.972.28': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/nested-clients': 3.996.16 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/nested-clients': 3.996.18 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/protocol-http': 5.3.12 @@ -19739,14 +20011,31 @@ snapshots: transitivePeerDependencies: - aws-crt - '@aws-sdk/credential-provider-node@3.972.27': + '@aws-sdk/credential-provider-node@3.972.19': + dependencies: + '@aws-sdk/credential-provider-env': 3.972.17 + '@aws-sdk/credential-provider-http': 3.972.19 + '@aws-sdk/credential-provider-ini': 3.972.18 + '@aws-sdk/credential-provider-process': 3.972.17 + '@aws-sdk/credential-provider-sso': 3.972.18 + '@aws-sdk/credential-provider-web-identity': 3.972.18 + '@aws-sdk/types': 3.973.5 + '@smithy/credential-provider-imds': 4.2.11 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/credential-provider-node@3.972.29': dependencies: - '@aws-sdk/credential-provider-env': 3.972.23 - '@aws-sdk/credential-provider-http': 3.972.25 - '@aws-sdk/credential-provider-ini': 3.972.26 - '@aws-sdk/credential-provider-process': 3.972.23 - '@aws-sdk/credential-provider-sso': 3.972.26 - '@aws-sdk/credential-provider-web-identity': 3.972.26 + '@aws-sdk/credential-provider-env': 
3.972.24 + '@aws-sdk/credential-provider-http': 3.972.26 + '@aws-sdk/credential-provider-ini': 3.972.28 + '@aws-sdk/credential-provider-process': 3.972.24 + '@aws-sdk/credential-provider-sso': 3.972.28 + '@aws-sdk/credential-provider-web-identity': 3.972.28 '@aws-sdk/types': 3.973.6 '@smithy/credential-provider-imds': 4.2.12 '@smithy/property-provider': 4.2.12 @@ -19756,20 +20045,42 @@ snapshots: transitivePeerDependencies: - aws-crt - '@aws-sdk/credential-provider-process@3.972.23': + '@aws-sdk/credential-provider-process@3.972.17': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/credential-provider-process@3.972.24': dependencies: - '@aws-sdk/core': 3.973.25 + '@aws-sdk/core': 3.973.26 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/shared-ini-file-loader': 4.4.7 '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/credential-provider-sso@3.972.26': + '@aws-sdk/credential-provider-sso@3.972.18': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/nested-clients': 3.996.16 - '@aws-sdk/token-providers': 3.1019.0 + '@aws-sdk/core': 3.973.19 + '@aws-sdk/nested-clients': 3.996.8 + '@aws-sdk/token-providers': 3.1005.0 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/credential-provider-sso@3.972.28': + dependencies: + '@aws-sdk/core': 3.973.26 + '@aws-sdk/nested-clients': 3.996.18 + '@aws-sdk/token-providers': 3.1021.0 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/shared-ini-file-loader': 4.4.7 @@ -19778,10 +20089,22 @@ snapshots: transitivePeerDependencies: - aws-crt - '@aws-sdk/credential-provider-web-identity@3.972.26': + '@aws-sdk/credential-provider-web-identity@3.972.18': + dependencies: + '@aws-sdk/core': 
3.973.19 + '@aws-sdk/nested-clients': 3.996.8 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/credential-provider-web-identity@3.972.28': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/nested-clients': 3.996.16 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/nested-clients': 3.996.18 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/shared-ini-file-loader': 4.4.7 @@ -19797,9 +20120,9 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/lib-storage@3.1007.0(@aws-sdk/client-s3@3.1019.0)': + '@aws-sdk/lib-storage@3.1007.0(@aws-sdk/client-s3@3.1007.0)': dependencies: - '@aws-sdk/client-s3': 3.1019.0 + '@aws-sdk/client-s3': 3.1007.0 '@smithy/abort-controller': 4.2.11 '@smithy/middleware-endpoint': 4.4.23 '@smithy/smithy-client': 4.12.3 @@ -19808,13 +20131,13 @@ snapshots: stream-browserify: 3.0.0 tslib: 2.8.1 - '@aws-sdk/middleware-bucket-endpoint@3.972.8': + '@aws-sdk/middleware-bucket-endpoint@3.972.7': dependencies: - '@aws-sdk/types': 3.973.6 + '@aws-sdk/types': 3.973.5 '@aws-sdk/util-arn-parser': 3.972.3 - '@smithy/node-config-provider': 4.3.12 - '@smithy/protocol-http': 5.3.12 - '@smithy/types': 4.13.1 + '@smithy/node-config-provider': 4.3.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 '@smithy/util-config-provider': 4.2.2 tslib: 2.8.1 @@ -19825,30 +20148,37 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/middleware-expect-continue@3.972.8': + '@aws-sdk/middleware-expect-continue@3.972.7': dependencies: - '@aws-sdk/types': 3.973.6 - '@smithy/protocol-http': 5.3.12 - '@smithy/types': 4.13.1 + '@aws-sdk/types': 3.973.5 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 tslib: 2.8.1 - '@aws-sdk/middleware-flexible-checksums@3.974.5': + '@aws-sdk/middleware-flexible-checksums@3.973.5': dependencies: '@aws-crypto/crc32': 5.2.0 
'@aws-crypto/crc32c': 5.2.0 '@aws-crypto/util': 5.2.0 - '@aws-sdk/core': 3.973.25 - '@aws-sdk/crc64-nvme': 3.972.5 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/core': 3.973.19 + '@aws-sdk/crc64-nvme': 3.972.4 + '@aws-sdk/types': 3.973.5 '@smithy/is-array-buffer': 4.2.2 - '@smithy/node-config-provider': 4.3.12 - '@smithy/protocol-http': 5.3.12 - '@smithy/types': 4.13.1 - '@smithy/util-middleware': 4.2.12 - '@smithy/util-stream': 4.5.20 + '@smithy/node-config-provider': 4.3.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-stream': 4.5.17 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 + '@aws-sdk/middleware-host-header@3.972.7': + dependencies: + '@aws-sdk/types': 3.973.5 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@aws-sdk/middleware-host-header@3.972.8': dependencies: '@aws-sdk/types': 3.973.6 @@ -19856,10 +20186,16 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/middleware-location-constraint@3.972.8': + '@aws-sdk/middleware-location-constraint@3.972.7': dependencies: - '@aws-sdk/types': 3.973.6 - '@smithy/types': 4.13.1 + '@aws-sdk/types': 3.973.5 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/middleware-logger@3.972.7': + dependencies: + '@aws-sdk/types': 3.973.5 + '@smithy/types': 4.13.0 tslib: 2.8.1 '@aws-sdk/middleware-logger@3.972.8': @@ -19868,6 +20204,14 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 + '@aws-sdk/middleware-recursion-detection@3.972.7': + dependencies: + '@aws-sdk/types': 3.973.5 + '@aws/lambda-invoke-store': 0.2.3 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@aws-sdk/middleware-recursion-detection@3.972.9': dependencies: '@aws-sdk/types': 3.973.6 @@ -19876,38 +20220,49 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/middleware-sdk-s3@3.972.26': + '@aws-sdk/middleware-sdk-s3@3.972.19': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/types': 3.973.6 + '@aws-sdk/core': 3.973.19 
+ '@aws-sdk/types': 3.973.5 '@aws-sdk/util-arn-parser': 3.972.3 - '@smithy/core': 3.23.12 - '@smithy/node-config-provider': 4.3.12 - '@smithy/protocol-http': 5.3.12 - '@smithy/signature-v4': 5.3.12 - '@smithy/smithy-client': 4.12.7 - '@smithy/types': 4.13.1 + '@smithy/core': 3.23.9 + '@smithy/node-config-provider': 4.3.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/signature-v4': 5.3.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 '@smithy/util-config-provider': 4.2.2 - '@smithy/util-middleware': 4.2.12 - '@smithy/util-stream': 4.5.20 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-stream': 4.5.17 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 - '@aws-sdk/middleware-ssec@3.972.8': + '@aws-sdk/middleware-ssec@3.972.7': dependencies: - '@aws-sdk/types': 3.973.6 - '@smithy/types': 4.13.1 + '@aws-sdk/types': 3.973.5 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/middleware-user-agent@3.972.20': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/types': 3.973.5 + '@aws-sdk/util-endpoints': 3.996.4 + '@smithy/core': 3.23.9 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + '@smithy/util-retry': 4.2.11 tslib: 2.8.1 - '@aws-sdk/middleware-user-agent@3.972.26': + '@aws-sdk/middleware-user-agent@3.972.28': dependencies: - '@aws-sdk/core': 3.973.25 + '@aws-sdk/core': 3.973.26 '@aws-sdk/types': 3.973.6 '@aws-sdk/util-endpoints': 3.996.5 - '@smithy/core': 3.23.12 + '@smithy/core': 3.23.13 '@smithy/protocol-http': 5.3.12 '@smithy/types': 4.13.1 - '@smithy/util-retry': 4.2.12 + '@smithy/util-retry': 4.2.13 tslib: 2.8.1 '@aws-sdk/middleware-websocket@3.972.14': @@ -19925,44 +20280,87 @@ snapshots: '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 - '@aws-sdk/nested-clients@3.996.16': + '@aws-sdk/nested-clients@3.996.18': dependencies: '@aws-crypto/sha256-browser': 5.2.0 '@aws-crypto/sha256-js': 5.2.0 - '@aws-sdk/core': 3.973.25 + '@aws-sdk/core': 3.973.26 '@aws-sdk/middleware-host-header': 3.972.8 '@aws-sdk/middleware-logger': 3.972.8 
'@aws-sdk/middleware-recursion-detection': 3.972.9 - '@aws-sdk/middleware-user-agent': 3.972.26 + '@aws-sdk/middleware-user-agent': 3.972.28 '@aws-sdk/region-config-resolver': 3.972.10 '@aws-sdk/types': 3.973.6 '@aws-sdk/util-endpoints': 3.996.5 '@aws-sdk/util-user-agent-browser': 3.972.8 - '@aws-sdk/util-user-agent-node': 3.973.12 + '@aws-sdk/util-user-agent-node': 3.973.14 '@smithy/config-resolver': 4.4.13 - '@smithy/core': 3.23.12 + '@smithy/core': 3.23.13 '@smithy/fetch-http-handler': 5.3.15 '@smithy/hash-node': 4.2.12 '@smithy/invalid-dependency': 4.2.12 '@smithy/middleware-content-length': 4.2.12 - '@smithy/middleware-endpoint': 4.4.27 - '@smithy/middleware-retry': 4.4.44 - '@smithy/middleware-serde': 4.2.15 + '@smithy/middleware-endpoint': 4.4.28 + '@smithy/middleware-retry': 4.4.46 + '@smithy/middleware-serde': 4.2.16 '@smithy/middleware-stack': 4.2.12 '@smithy/node-config-provider': 4.3.12 - '@smithy/node-http-handler': 4.5.0 + '@smithy/node-http-handler': 4.5.1 '@smithy/protocol-http': 5.3.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 '@smithy/url-parser': 4.2.12 '@smithy/util-base64': 4.3.2 '@smithy/util-body-length-browser': 4.2.2 '@smithy/util-body-length-node': 4.2.3 - '@smithy/util-defaults-mode-browser': 4.3.43 - '@smithy/util-defaults-mode-node': 4.2.47 + '@smithy/util-defaults-mode-browser': 4.3.44 + '@smithy/util-defaults-mode-node': 4.2.48 '@smithy/util-endpoints': 3.3.3 '@smithy/util-middleware': 4.2.12 - '@smithy/util-retry': 4.2.12 + '@smithy/util-retry': 4.2.13 + '@smithy/util-utf8': 4.2.2 + tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt + + '@aws-sdk/nested-clients@3.996.8': + dependencies: + '@aws-crypto/sha256-browser': 5.2.0 + '@aws-crypto/sha256-js': 5.2.0 + '@aws-sdk/core': 3.973.19 + '@aws-sdk/middleware-host-header': 3.972.7 + '@aws-sdk/middleware-logger': 3.972.7 + '@aws-sdk/middleware-recursion-detection': 3.972.7 + '@aws-sdk/middleware-user-agent': 3.972.20 + 
'@aws-sdk/region-config-resolver': 3.972.7 + '@aws-sdk/types': 3.973.5 + '@aws-sdk/util-endpoints': 3.996.4 + '@aws-sdk/util-user-agent-browser': 3.972.7 + '@aws-sdk/util-user-agent-node': 3.973.5 + '@smithy/config-resolver': 4.4.10 + '@smithy/core': 3.23.9 + '@smithy/fetch-http-handler': 5.3.13 + '@smithy/hash-node': 4.2.11 + '@smithy/invalid-dependency': 4.2.11 + '@smithy/middleware-content-length': 4.2.11 + '@smithy/middleware-endpoint': 4.4.23 + '@smithy/middleware-retry': 4.4.40 + '@smithy/middleware-serde': 4.2.12 + '@smithy/middleware-stack': 4.2.11 + '@smithy/node-config-provider': 4.3.11 + '@smithy/node-http-handler': 4.4.14 + '@smithy/protocol-http': 5.3.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + '@smithy/url-parser': 4.2.11 + '@smithy/util-base64': 4.3.2 + '@smithy/util-body-length-browser': 4.2.2 + '@smithy/util-body-length-node': 4.2.3 + '@smithy/util-defaults-mode-browser': 4.3.39 + '@smithy/util-defaults-mode-node': 4.2.42 + '@smithy/util-endpoints': 3.3.2 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-retry': 4.2.11 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 transitivePeerDependencies: @@ -19976,19 +20374,51 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@aws-sdk/signature-v4-multi-region@3.996.14': + '@aws-sdk/region-config-resolver@3.972.7': + dependencies: + '@aws-sdk/types': 3.973.5 + '@smithy/config-resolver': 4.4.10 + '@smithy/node-config-provider': 4.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/signature-v4-multi-region@3.996.7': + dependencies: + '@aws-sdk/middleware-sdk-s3': 3.972.19 + '@aws-sdk/types': 3.973.5 + '@smithy/protocol-http': 5.3.11 + '@smithy/signature-v4': 5.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/token-providers@3.1005.0': + dependencies: + '@aws-sdk/core': 3.973.19 + '@aws-sdk/nested-clients': 3.996.8 + '@aws-sdk/types': 3.973.5 + '@smithy/property-provider': 4.2.11 + '@smithy/shared-ini-file-loader': 4.4.6 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + 
transitivePeerDependencies: + - aws-crt + + '@aws-sdk/token-providers@3.1021.0': dependencies: - '@aws-sdk/middleware-sdk-s3': 3.972.26 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/nested-clients': 3.996.18 '@aws-sdk/types': 3.973.6 - '@smithy/protocol-http': 5.3.12 - '@smithy/signature-v4': 5.3.12 + '@smithy/property-provider': 4.2.12 + '@smithy/shared-ini-file-loader': 4.4.7 '@smithy/types': 4.13.1 tslib: 2.8.1 + transitivePeerDependencies: + - aws-crt - '@aws-sdk/token-providers@3.1019.0': + '@aws-sdk/token-providers@3.1024.0': dependencies: - '@aws-sdk/core': 3.973.25 - '@aws-sdk/nested-clients': 3.996.16 + '@aws-sdk/core': 3.973.26 + '@aws-sdk/nested-clients': 3.996.18 '@aws-sdk/types': 3.973.6 '@smithy/property-provider': 4.2.12 '@smithy/shared-ini-file-loader': 4.4.7 @@ -19997,6 +20427,11 @@ snapshots: transitivePeerDependencies: - aws-crt + '@aws-sdk/types@3.973.5': + dependencies: + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@aws-sdk/types@3.973.6': dependencies: '@smithy/types': 4.13.1 @@ -20006,6 +20441,14 @@ snapshots: dependencies: tslib: 2.8.1 + '@aws-sdk/util-endpoints@3.996.4': + dependencies: + '@aws-sdk/types': 3.973.5 + '@smithy/types': 4.13.0 + '@smithy/url-parser': 4.2.11 + '@smithy/util-endpoints': 3.3.2 + tslib: 2.8.1 + '@aws-sdk/util-endpoints@3.996.5': dependencies: '@aws-sdk/types': 3.973.6 @@ -20025,6 +20468,13 @@ snapshots: dependencies: tslib: 2.8.1 + '@aws-sdk/util-user-agent-browser@3.972.7': + dependencies: + '@aws-sdk/types': 3.973.5 + '@smithy/types': 4.13.0 + bowser: 2.14.1 + tslib: 2.8.1 + '@aws-sdk/util-user-agent-browser@3.972.8': dependencies: '@aws-sdk/types': 3.973.6 @@ -20032,15 +20482,29 @@ snapshots: bowser: 2.14.1 tslib: 2.8.1 - '@aws-sdk/util-user-agent-node@3.973.12': + '@aws-sdk/util-user-agent-node@3.973.14': dependencies: - '@aws-sdk/middleware-user-agent': 3.972.26 + '@aws-sdk/middleware-user-agent': 3.972.28 '@aws-sdk/types': 3.973.6 '@smithy/node-config-provider': 4.3.12 '@smithy/types': 4.13.1 
'@smithy/util-config-provider': 4.2.2 tslib: 2.8.1 + '@aws-sdk/util-user-agent-node@3.973.5': + dependencies: + '@aws-sdk/middleware-user-agent': 3.972.20 + '@aws-sdk/types': 3.973.5 + '@smithy/node-config-provider': 4.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@aws-sdk/xml-builder@3.972.10': + dependencies: + '@smithy/types': 4.13.0 + fast-xml-parser: 5.4.1 + tslib: 2.8.1 + '@aws-sdk/xml-builder@3.972.16': dependencies: '@smithy/types': 4.13.1 @@ -20629,8 +21093,6 @@ snapshots: transitivePeerDependencies: - supports-color - '@babel/runtime@7.28.6': {} - '@babel/runtime@7.29.2': {} '@babel/template@7.28.6': @@ -21035,7 +21497,11 @@ snapshots: dependencies: '@bufbuild/protobuf': 2.11.0 - '@copilotkit/llmock@1.6.0': {} + '@copilotkit/aimock@1.7.0': {} + + '@copilotkit/llmock@1.7.1': + dependencies: + '@copilotkit/aimock': 1.7.0 '@cspotcode/source-map-support@0.8.1': dependencies: @@ -21051,8 +21517,8 @@ snapshots: '@daytonaio/sdk@0.150.0(ws@8.19.0)': dependencies: - '@aws-sdk/client-s3': 3.1019.0 - '@aws-sdk/lib-storage': 3.1007.0(@aws-sdk/client-s3@3.1019.0) + '@aws-sdk/client-s3': 3.1007.0 + '@aws-sdk/lib-storage': 3.1007.0(@aws-sdk/client-s3@3.1007.0) '@daytonaio/api-client': 0.150.0 '@daytonaio/toolbox-api-client': 0.150.0 '@iarna/toml': 2.2.5 @@ -21128,7 +21594,7 @@ snapshots: '@emotion/babel-plugin@11.13.5': dependencies: '@babel/helper-module-imports': 7.28.6 - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 '@emotion/hash': 0.9.2 '@emotion/memoize': 0.9.0 '@emotion/serialize': 1.3.3 @@ -21157,7 +21623,7 @@ snapshots: '@emotion/react@11.11.1(@types/react@19.2.13)(react@19.1.0)': dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 '@emotion/babel-plugin': 11.13.5 '@emotion/cache': 11.11.0 '@emotion/serialize': 1.3.3 @@ -22152,7 +22618,7 @@ snapshots: '@fortawesome/fontawesome-svg-core': 7.1.0 react: 19.1.0 - '@google/genai@1.47.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76))': + 
'@google/genai@1.48.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76))': dependencies: google-auth-library: 10.6.2 p-retry: 4.6.2 @@ -22165,7 +22631,7 @@ snapshots: - supports-color - utf-8-validate - '@google/genai@1.47.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13))': + '@google/genai@1.48.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13))': dependencies: google-auth-library: 10.6.2 p-retry: 4.6.2 @@ -22855,8 +23321,8 @@ snapshots: '@mariozechner/pi-ai@0.60.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76))(ws@8.19.0)(zod@3.25.76)': dependencies: '@anthropic-ai/sdk': 0.73.0(zod@3.25.76) - '@aws-sdk/client-bedrock-runtime': 3.1019.0 - '@google/genai': 1.47.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76)) + '@aws-sdk/client-bedrock-runtime': 3.1024.0 + '@google/genai': 1.48.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76)) '@mistralai/mistralai': 1.14.1 '@sinclair/typebox': 0.34.41 ajv: 8.17.1 @@ -22865,7 +23331,7 @@ snapshots: openai: 6.26.0(ws@8.19.0)(zod@3.25.76) partial-json: 0.1.7 proxy-agent: 6.5.0 - undici: 7.24.6 + undici: 7.24.7 zod-to-json-schema: 3.25.1(zod@3.25.76) transitivePeerDependencies: - '@modelcontextprotocol/sdk' @@ -22879,8 +23345,8 @@ snapshots: '@mariozechner/pi-ai@0.60.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13))(ws@8.19.0)(zod@4.1.13)': dependencies: '@anthropic-ai/sdk': 0.73.0(zod@4.1.13) - '@aws-sdk/client-bedrock-runtime': 3.1019.0 - '@google/genai': 1.47.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13)) + '@aws-sdk/client-bedrock-runtime': 3.1024.0 + '@google/genai': 1.48.0(@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13)) '@mistralai/mistralai': 1.14.1 '@sinclair/typebox': 0.34.41 ajv: 8.17.1 @@ -22889,7 +23355,7 @@ snapshots: openai: 6.26.0(ws@8.19.0)(zod@4.1.13) partial-json: 0.1.7 proxy-agent: 6.5.0 - undici: 7.24.6 + undici: 7.24.7 zod-to-json-schema: 3.25.1(zod@4.1.13) transitivePeerDependencies: - '@modelcontextprotocol/sdk' 
@@ -22916,10 +23382,10 @@ snapshots: hosted-git-info: 9.0.2 ignore: 7.0.5 marked: 15.0.12 - minimatch: 10.2.4 + minimatch: 10.2.5 proper-lockfile: 4.1.2 strip-ansi: 7.1.2 - undici: 7.24.6 + undici: 7.24.7 yaml: 2.8.2 optionalDependencies: '@mariozechner/clipboard': 0.3.2 @@ -22948,10 +23414,10 @@ snapshots: hosted-git-info: 9.0.2 ignore: 7.0.5 marked: 15.0.12 - minimatch: 10.2.4 + minimatch: 10.2.5 proper-lockfile: 4.1.2 strip-ansi: 7.1.2 - undici: 7.24.6 + undici: 7.24.7 yaml: 2.8.2 optionalDependencies: '@mariozechner/clipboard': 0.3.2 @@ -22972,7 +23438,7 @@ snapshots: marked: 15.0.12 mime-types: 3.0.2 optionalDependencies: - koffi: 2.15.2 + koffi: 2.15.4 '@mdx-js/mdx@3.1.1': dependencies: @@ -23299,7 +23765,7 @@ snapshots: '@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@3.25.76)': dependencies: - '@hono/node-server': 1.19.9(hono@4.11.9) + '@hono/node-server': 1.19.12(hono@4.11.9) ajv: 8.17.1 ajv-formats: 3.0.1(ajv@8.17.1) content-type: 1.0.5 @@ -23321,7 +23787,7 @@ snapshots: '@modelcontextprotocol/sdk@1.25.3(hono@4.11.9)(zod@4.1.13)': dependencies: - '@hono/node-server': 1.19.9(hono@4.11.9) + '@hono/node-server': 1.19.12(hono@4.11.9) ajv: 8.17.1 ajv-formats: 3.0.1(ajv@8.17.1) content-type: 1.0.5 @@ -23380,6 +23846,8 @@ snapshots: outvariant: 1.4.3 strict-event-emitter: 0.5.1 + '@napi-rs/cli@2.18.4': {} + '@neophi/sieve-cache@1.5.0': {} '@next/env@15.5.9': @@ -24165,7 +24633,7 @@ snapshots: '@radix-ui/react-compose-refs@1.0.0(react@19.1.0)': dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 react: 19.1.0 '@radix-ui/react-compose-refs@1.1.2(@types/react@19.2.13)(react@19.1.0)': @@ -24518,7 +24986,7 @@ snapshots: '@radix-ui/react-slot@1.0.1(react@19.1.0)': dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 '@radix-ui/react-compose-refs': 1.0.0(react@19.1.0) react: 19.1.0 @@ -25828,11 +26296,6 @@ snapshots: '@smithy/types': 4.13.0 tslib: 2.8.1 - '@smithy/abort-controller@4.2.12': - dependencies: - '@smithy/types': 4.13.1 - 
tslib: 2.8.1 - '@smithy/chunked-blob-reader-native@4.2.3': dependencies: '@smithy/util-base64': 4.3.2 @@ -25842,6 +26305,15 @@ snapshots: dependencies: tslib: 2.8.1 + '@smithy/config-resolver@4.4.10': + dependencies: + '@smithy/node-config-provider': 4.3.11 + '@smithy/types': 4.13.0 + '@smithy/util-config-provider': 4.2.2 + '@smithy/util-endpoints': 3.3.2 + '@smithy/util-middleware': 4.2.11 + tslib: 2.8.1 + '@smithy/config-resolver@4.4.13': dependencies: '@smithy/node-config-provider': 4.3.12 @@ -25851,7 +26323,7 @@ snapshots: '@smithy/util-middleware': 4.2.12 tslib: 2.8.1 - '@smithy/core@3.23.12': + '@smithy/core@3.23.13': dependencies: '@smithy/protocol-http': 5.3.12 '@smithy/types': 4.13.1 @@ -25859,7 +26331,7 @@ snapshots: '@smithy/util-base64': 4.3.2 '@smithy/util-body-length-browser': 4.2.2 '@smithy/util-middleware': 4.2.12 - '@smithy/util-stream': 4.5.20 + '@smithy/util-stream': 4.5.21 '@smithy/util-utf8': 4.2.2 '@smithy/uuid': 1.1.2 tslib: 2.8.1 @@ -25877,6 +26349,14 @@ snapshots: '@smithy/uuid': 1.1.2 tslib: 2.8.1 + '@smithy/credential-provider-imds@4.2.11': + dependencies: + '@smithy/node-config-provider': 4.3.11 + '@smithy/property-provider': 4.2.11 + '@smithy/types': 4.13.0 + '@smithy/url-parser': 4.2.11 + tslib: 2.8.1 + '@smithy/credential-provider-imds@4.2.12': dependencies: '@smithy/node-config-provider': 4.3.12 @@ -25885,6 +26365,13 @@ snapshots: '@smithy/url-parser': 4.2.12 tslib: 2.8.1 + '@smithy/eventstream-codec@4.2.11': + dependencies: + '@aws-crypto/crc32': 5.2.0 + '@smithy/types': 4.13.0 + '@smithy/util-hex-encoding': 4.2.2 + tslib: 2.8.1 + '@smithy/eventstream-codec@4.2.12': dependencies: '@aws-crypto/crc32': 5.2.0 @@ -25892,23 +26379,46 @@ snapshots: '@smithy/util-hex-encoding': 4.2.2 tslib: 2.8.1 + '@smithy/eventstream-serde-browser@4.2.11': + dependencies: + '@smithy/eventstream-serde-universal': 4.2.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/eventstream-serde-browser@4.2.12': dependencies: 
'@smithy/eventstream-serde-universal': 4.2.12 '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/eventstream-serde-config-resolver@4.3.11': + dependencies: + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/eventstream-serde-config-resolver@4.3.12': dependencies: '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/eventstream-serde-node@4.2.11': + dependencies: + '@smithy/eventstream-serde-universal': 4.2.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/eventstream-serde-node@4.2.12': dependencies: '@smithy/eventstream-serde-universal': 4.2.12 '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/eventstream-serde-universal@4.2.11': + dependencies: + '@smithy/eventstream-codec': 4.2.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/eventstream-serde-universal@4.2.12': dependencies: '@smithy/eventstream-codec': 4.2.12 @@ -25931,11 +26441,18 @@ snapshots: '@smithy/util-base64': 4.3.2 tslib: 2.8.1 - '@smithy/hash-blob-browser@4.2.13': + '@smithy/hash-blob-browser@4.2.12': dependencies: '@smithy/chunked-blob-reader': 5.2.2 '@smithy/chunked-blob-reader-native': 4.2.3 - '@smithy/types': 4.13.1 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@smithy/hash-node@4.2.11': + dependencies: + '@smithy/types': 4.13.0 + '@smithy/util-buffer-from': 4.2.2 + '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 '@smithy/hash-node@4.2.12': @@ -25945,12 +26462,17 @@ snapshots: '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 - '@smithy/hash-stream-node@4.2.12': + '@smithy/hash-stream-node@4.2.11': dependencies: - '@smithy/types': 4.13.1 + '@smithy/types': 4.13.0 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 + '@smithy/invalid-dependency@4.2.11': + dependencies: + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/invalid-dependency@4.2.12': dependencies: '@smithy/types': 4.13.1 @@ -25964,12 +26486,18 @@ snapshots: dependencies: tslib: 2.8.1 - '@smithy/md5-js@4.2.12': + '@smithy/md5-js@4.2.11': dependencies: - '@smithy/types': 4.13.1 + '@smithy/types': 4.13.0 '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 + 
'@smithy/middleware-content-length@4.2.11': + dependencies: + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/middleware-content-length@4.2.12': dependencies: '@smithy/protocol-http': 5.3.12 @@ -25987,10 +26515,10 @@ snapshots: '@smithy/util-middleware': 4.2.11 tslib: 2.8.1 - '@smithy/middleware-endpoint@4.4.27': + '@smithy/middleware-endpoint@4.4.28': dependencies: - '@smithy/core': 3.23.12 - '@smithy/middleware-serde': 4.2.15 + '@smithy/core': 3.23.13 + '@smithy/middleware-serde': 4.2.16 '@smithy/node-config-provider': 4.3.12 '@smithy/shared-ini-file-loader': 4.4.7 '@smithy/types': 4.13.1 @@ -25998,15 +26526,27 @@ snapshots: '@smithy/util-middleware': 4.2.12 tslib: 2.8.1 - '@smithy/middleware-retry@4.4.44': + '@smithy/middleware-retry@4.4.40': + dependencies: + '@smithy/node-config-provider': 4.3.11 + '@smithy/protocol-http': 5.3.11 + '@smithy/service-error-classification': 4.2.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-retry': 4.2.11 + '@smithy/uuid': 1.1.2 + tslib: 2.8.1 + + '@smithy/middleware-retry@4.4.46': dependencies: '@smithy/node-config-provider': 4.3.12 '@smithy/protocol-http': 5.3.12 '@smithy/service-error-classification': 4.2.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 '@smithy/util-middleware': 4.2.12 - '@smithy/util-retry': 4.2.12 + '@smithy/util-retry': 4.2.13 '@smithy/uuid': 1.1.2 tslib: 2.8.1 @@ -26016,9 +26556,9 @@ snapshots: '@smithy/types': 4.13.0 tslib: 2.8.1 - '@smithy/middleware-serde@4.2.15': + '@smithy/middleware-serde@4.2.16': dependencies: - '@smithy/core': 3.23.12 + '@smithy/core': 3.23.13 '@smithy/protocol-http': 5.3.12 '@smithy/types': 4.13.1 tslib: 2.8.1 @@ -26055,9 +26595,8 @@ snapshots: '@smithy/types': 4.13.0 tslib: 2.8.1 - '@smithy/node-http-handler@4.5.0': + '@smithy/node-http-handler@4.5.1': dependencies: - '@smithy/abort-controller': 4.2.12 '@smithy/protocol-http': 
5.3.12 '@smithy/querystring-builder': 4.2.12 '@smithy/types': 4.13.1 @@ -26105,6 +26644,10 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/service-error-classification@4.2.11': + dependencies: + '@smithy/types': 4.13.0 + '@smithy/service-error-classification@4.2.12': dependencies: '@smithy/types': 4.13.1 @@ -26119,6 +26662,17 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/signature-v4@5.3.11': + dependencies: + '@smithy/is-array-buffer': 4.2.2 + '@smithy/protocol-http': 5.3.11 + '@smithy/types': 4.13.0 + '@smithy/util-hex-encoding': 4.2.2 + '@smithy/util-middleware': 4.2.11 + '@smithy/util-uri-escape': 4.2.2 + '@smithy/util-utf8': 4.2.2 + tslib: 2.8.1 + '@smithy/signature-v4@5.3.12': dependencies: '@smithy/is-array-buffer': 4.2.2 @@ -26140,14 +26694,14 @@ snapshots: '@smithy/util-stream': 4.5.17 tslib: 2.8.1 - '@smithy/smithy-client@4.12.7': + '@smithy/smithy-client@4.12.8': dependencies: - '@smithy/core': 3.23.12 - '@smithy/middleware-endpoint': 4.4.27 + '@smithy/core': 3.23.13 + '@smithy/middleware-endpoint': 4.4.28 '@smithy/middleware-stack': 4.2.12 '@smithy/protocol-http': 5.3.12 '@smithy/types': 4.13.1 - '@smithy/util-stream': 4.5.20 + '@smithy/util-stream': 4.5.21 tslib: 2.8.1 '@smithy/types@4.13.0': @@ -26198,23 +26752,46 @@ snapshots: dependencies: tslib: 2.8.1 - '@smithy/util-defaults-mode-browser@4.3.43': + '@smithy/util-defaults-mode-browser@4.3.39': + dependencies: + '@smithy/property-provider': 4.2.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@smithy/util-defaults-mode-browser@4.3.44': dependencies: '@smithy/property-provider': 4.2.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 tslib: 2.8.1 - '@smithy/util-defaults-mode-node@4.2.47': + '@smithy/util-defaults-mode-node@4.2.42': + dependencies: + '@smithy/config-resolver': 4.4.10 + '@smithy/credential-provider-imds': 4.2.11 + '@smithy/node-config-provider': 4.3.11 + 
'@smithy/property-provider': 4.2.11 + '@smithy/smithy-client': 4.12.3 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@smithy/util-defaults-mode-node@4.2.48': dependencies: '@smithy/config-resolver': 4.4.13 '@smithy/credential-provider-imds': 4.2.12 '@smithy/node-config-provider': 4.3.12 '@smithy/property-provider': 4.2.12 - '@smithy/smithy-client': 4.12.7 + '@smithy/smithy-client': 4.12.8 '@smithy/types': 4.13.1 tslib: 2.8.1 + '@smithy/util-endpoints@3.3.2': + dependencies: + '@smithy/node-config-provider': 4.3.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + '@smithy/util-endpoints@3.3.3': dependencies: '@smithy/node-config-provider': 4.3.12 @@ -26235,7 +26812,13 @@ snapshots: '@smithy/types': 4.13.1 tslib: 2.8.1 - '@smithy/util-retry@4.2.12': + '@smithy/util-retry@4.2.11': + dependencies: + '@smithy/service-error-classification': 4.2.11 + '@smithy/types': 4.13.0 + tslib: 2.8.1 + + '@smithy/util-retry@4.2.13': dependencies: '@smithy/service-error-classification': 4.2.12 '@smithy/types': 4.13.1 @@ -26252,10 +26835,10 @@ snapshots: '@smithy/util-utf8': 4.2.2 tslib: 2.8.1 - '@smithy/util-stream@4.5.20': + '@smithy/util-stream@4.5.21': dependencies: '@smithy/fetch-http-handler': 5.3.15 - '@smithy/node-http-handler': 4.5.0 + '@smithy/node-http-handler': 4.5.1 '@smithy/types': 4.13.1 '@smithy/util-base64': 4.3.2 '@smithy/util-buffer-from': 4.2.2 @@ -26277,10 +26860,10 @@ snapshots: '@smithy/util-buffer-from': 4.2.2 tslib: 2.8.1 - '@smithy/util-waiter@4.2.13': + '@smithy/util-waiter@4.2.12': dependencies: - '@smithy/abort-controller': 4.2.12 - '@smithy/types': 4.13.1 + '@smithy/abort-controller': 4.2.11 + '@smithy/types': 4.13.0 tslib: 2.8.1 '@smithy/uuid@1.1.2': @@ -27069,15 +27652,16 @@ snapshots: '@vercel/oidc@3.2.0': {} - '@vercel/sandbox@1.9.0': + '@vercel/sandbox@1.9.2': dependencies: '@vercel/oidc': 3.2.0 + '@workflow/serde': 4.1.0-beta.2 async-retry: 1.3.3 jsonlines: 0.1.1 ms: 2.1.3 picocolors: 1.1.1 tar-stream: 3.1.7 - undici: 7.24.6 + undici: 7.24.7 xdg-app-paths: 
5.1.0 zod: 3.24.4 transitivePeerDependencies: @@ -27626,6 +28210,8 @@ snapshots: '@webgpu/types@0.1.69': {} + '@workflow/serde@4.1.0-beta.2': {} + '@xmldom/xmldom@0.7.13': {} '@xmldom/xmldom@0.8.11': {} @@ -28130,7 +28716,7 @@ snapshots: babel-plugin-macros@3.1.0: dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 cosmiconfig: 7.1.0 resolve: 1.22.11 @@ -28357,6 +28943,10 @@ snapshots: dependencies: balanced-match: 4.0.4 + brace-expansion@5.0.5: + dependencies: + balanced-match: 4.0.4 + braces@3.0.3: dependencies: fill-range: 7.1.1 @@ -28870,7 +29460,7 @@ snapshots: computeds@0.0.1: {} - computesdk@2.5.3: + computesdk@2.5.4: dependencies: '@computesdk/cmd': 0.4.1 @@ -29456,7 +30046,7 @@ snapshots: dom-helpers@5.2.1: dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 csstype: 3.2.3 dom-serializer@2.0.0: @@ -30276,14 +30866,23 @@ snapshots: fast-uri@3.1.0: {} + fast-xml-builder@1.1.2: + dependencies: + path-expression-matcher: 1.1.3 + fast-xml-builder@1.1.4: dependencies: - path-expression-matcher: 1.2.0 + path-expression-matcher: 1.2.1 + + fast-xml-parser@5.4.1: + dependencies: + fast-xml-builder: 1.1.2 + strnum: 2.2.0 fast-xml-parser@5.5.8: dependencies: fast-xml-builder: 1.1.4 - path-expression-matcher: 1.2.0 + path-expression-matcher: 1.2.1 strnum: 2.2.0 fastest-levenshtein@1.0.16: {} @@ -31056,7 +31655,7 @@ snapshots: history@5.3.0: dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 hmac-drbg@1.0.1: dependencies: @@ -31507,7 +32106,7 @@ snapshots: json-schema-to-ts@3.1.1: dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 ts-algebra: 2.0.0 json-schema-traverse@0.4.1: {} @@ -31612,7 +32211,7 @@ snapshots: transitivePeerDependencies: - supports-color - koffi@2.15.2: + koffi@2.15.4: optional: true kolorist@1.8.0: {} @@ -31918,7 +32517,7 @@ snapshots: md5.js@1.3.5: dependencies: - hash-base: 3.1.2 + hash-base: 3.0.5 inherits: 2.0.4 safe-buffer: 5.2.1 @@ -32839,9 +33438,9 @@ snapshots: dependencies: 
brace-expansion: 5.0.3 - minimatch@10.2.4: + minimatch@10.2.5: dependencies: - brace-expansion: 5.0.3 + brace-expansion: 5.0.5 minimatch@3.0.8: dependencies: @@ -32890,7 +33489,7 @@ snapshots: pkg-types: 1.3.1 ufo: 1.6.1 - modal@0.7.3: + modal@0.7.4: dependencies: cbor-x: 1.6.0 long: 5.3.2 @@ -33607,7 +34206,9 @@ snapshots: path-exists@4.0.0: {} - path-expression-matcher@1.2.0: {} + path-expression-matcher@1.1.3: {} + + path-expression-matcher@1.2.1: {} path-is-absolute@1.0.1: {} @@ -34159,7 +34760,7 @@ snapshots: react-helmet-async@1.3.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0): dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 invariant: 2.2.4 prop-types: 15.8.1 react: 19.1.0 @@ -34335,7 +34936,7 @@ snapshots: react-transition-group@4.4.5(react-dom@19.1.0(react@19.1.0))(react@19.1.0): dependencies: - '@babel/runtime': 7.28.6 + '@babel/runtime': 7.29.2 dom-helpers: 5.2.1 loose-envify: 1.4.0 prop-types: 15.8.1 @@ -34789,7 +35390,7 @@ snapshots: safer-buffer@2.1.2: {} - sandbox-agent@0.4.2(@daytonaio/sdk@0.150.0(ws@8.19.0))(@e2b/code-interpreter@2.3.3)(@fly/sprites@0.0.1)(@vercel/sandbox@1.9.0)(computesdk@2.5.3)(dockerode@4.0.9)(get-port@7.1.0)(modal@0.7.3)(zod@4.1.13): + sandbox-agent@0.4.2(@daytonaio/sdk@0.150.0(ws@8.19.0))(@e2b/code-interpreter@2.3.3)(@fly/sprites@0.0.1)(@vercel/sandbox@1.9.2)(computesdk@2.5.4)(dockerode@4.0.9)(get-port@7.1.0)(modal@0.7.4)(zod@4.1.13): dependencies: '@sandbox-agent/cli-shared': 0.4.2 acp-http-client: 0.4.2(zod@4.1.13) @@ -34798,11 +35399,11 @@ snapshots: '@e2b/code-interpreter': 2.3.3 '@fly/sprites': 0.0.1 '@sandbox-agent/cli': 0.4.2 - '@vercel/sandbox': 1.9.0 - computesdk: 2.5.3 + '@vercel/sandbox': 1.9.2 + computesdk: 2.5.4 dockerode: 4.0.9 get-port: 7.1.0 - modal: 0.7.3 + modal: 0.7.4 transitivePeerDependencies: - zod @@ -36021,7 +36622,7 @@ snapshots: undici@7.14.0: {} - undici@7.24.6: {} + undici@7.24.7: {} unenv@2.0.0-rc.21: dependencies: @@ -36816,7 +37417,7 @@ snapshots: expect-type: 1.2.2 
magic-string: 0.30.21 pathe: 1.1.2 - std-env: 3.10.0 + std-env: 3.9.0 tinybench: 2.9.0 tinyexec: 0.3.2 tinypool: 1.1.1 diff --git a/pnpm-workspace.yaml b/pnpm-workspace.yaml index bb3ce9b987..34265d5adf 100644 --- a/pnpm-workspace.yaml +++ b/pnpm-workspace.yaml @@ -5,6 +5,7 @@ packages: - engine/sdks/typescript/envoy-client - engine/sdks/typescript/envoy-protocol - engine/sdks/typescript/runner + - engine/sdks/typescript/kv-channel-protocol - engine/sdks/typescript/runner-protocol - engine/sdks/typescript/test-envoy - engine/sdks/typescript/test-runner diff --git a/rivetkit-typescript/packages/cloudflare-workers/src/actor-handler-do.ts b/rivetkit-typescript/packages/cloudflare-workers/src/actor-handler-do.ts index 8b26c8bbb0..aac687f2ec 100644 --- a/rivetkit-typescript/packages/cloudflare-workers/src/actor-handler-do.ts +++ b/rivetkit-typescript/packages/cloudflare-workers/src/actor-handler-do.ts @@ -13,7 +13,7 @@ import { createCloudflareActorsActorDriverBuilder, } from "./actor-driver"; import { buildActorId, parseActorId } from "./actor-id"; -import { kvGet, kvPut } from "./actor-kv"; +import { kvDelete, kvDeleteRange, kvGet, kvPut } from "./actor-kv"; import { GLOBAL_KV_KEYS } from "./global-kv"; import type { Bindings } from "./handler"; import { getCloudflareAmbientEnv } from "./handler"; @@ -32,6 +32,10 @@ export interface ActorHandlerInterface extends DurableObject { | undefined >; managerKvGet(key: Uint8Array): Promise; + managerKvBatchGet(keys: Uint8Array[]): Promise<(Uint8Array | null)[]>; + managerKvBatchPut(entries: [Uint8Array, Uint8Array][]): Promise; + managerKvBatchDelete(keys: Uint8Array[]): Promise; + managerKvDeleteRange(start: Uint8Array, end: Uint8Array): Promise; } export interface ActorInitRequest { @@ -281,6 +285,33 @@ export function createActorDurableObject( return kvGet(this.ctx.storage.sql, key); } + /** RPC called by ManagerDriver.kvBatchGet to read multiple keys from KV. 
*/ + async managerKvBatchGet(keys: Uint8Array[]): Promise<(Uint8Array | null)[]> { + const sql = this.ctx.storage.sql; + return keys.map((key) => kvGet(sql, key)); + } + + /** RPC called by ManagerDriver.kvBatchPut to write multiple entries to KV. */ + async managerKvBatchPut(entries: [Uint8Array, Uint8Array][]): Promise<void> { + const sql = this.ctx.storage.sql; + for (const [key, value] of entries) { + kvPut(sql, key, value); + } + } + + /** RPC called by ManagerDriver.kvBatchDelete to delete multiple keys from KV. */ + async managerKvBatchDelete(keys: Uint8Array[]): Promise<void> { + const sql = this.ctx.storage.sql; + for (const key of keys) { + kvDelete(sql, key); + } + } + + /** RPC called by ManagerDriver.kvDeleteRange to delete a key range from KV. */ + async managerKvDeleteRange(start: Uint8Array, end: Uint8Array): Promise<void> { + kvDeleteRange(this.ctx.storage.sql, start, end); + } + /** RPC called by the manager to create a DO. Can optionally allow existing actors. */ async create(req: ActorInitRequest): Promise { // Check if actor exists diff --git a/rivetkit-typescript/packages/cloudflare-workers/src/config.ts b/rivetkit-typescript/packages/cloudflare-workers/src/config.ts index ac5cb59bf5..d00a359f9e 100644 --- a/rivetkit-typescript/packages/cloudflare-workers/src/config.ts +++ b/rivetkit-typescript/packages/cloudflare-workers/src/config.ts @@ -5,8 +5,8 @@ const ConfigSchemaBase = z.object({ /** Path that the Rivet manager API will be mounted. */ managerPath: z.string().optional().default("/api/rivet"), - /** Deprecated. Runner key for authentication. */ - runnerKey: z.string().optional(), + /** Deprecated. Envoy key for authentication. */ + envoyKey: z.string().optional(), /** Disable the welcome message.
*/ noWelcome: z.boolean().optional().default(false), diff --git a/rivetkit-typescript/packages/cloudflare-workers/src/handler.ts b/rivetkit-typescript/packages/cloudflare-workers/src/handler.ts index fbe0139c5c..f776db15ef 100644 --- a/rivetkit-typescript/packages/cloudflare-workers/src/handler.ts +++ b/rivetkit-typescript/packages/cloudflare-workers/src/handler.ts @@ -57,8 +57,8 @@ export function createInlineClient>( ): InlineOutput { // HACK: Cloudflare does not support using `crypto.randomUUID()` before start, so we pass a default value // - // Runner key is not used on Cloudflare - inputConfig = { ...inputConfig, runnerKey: "" }; + // Envoy key is not used on Cloudflare + inputConfig = { ...inputConfig, envoyKey: "" }; // Parse config const config = ConfigSchema.parse(inputConfig); diff --git a/rivetkit-typescript/packages/cloudflare-workers/src/manager-driver.ts b/rivetkit-typescript/packages/cloudflare-workers/src/manager-driver.ts index 30ccdb34d8..ecde6a8627 100644 --- a/rivetkit-typescript/packages/cloudflare-workers/src/manager-driver.ts +++ b/rivetkit-typescript/packages/cloudflare-workers/src/manager-driver.ts @@ -446,4 +446,61 @@ export class CloudflareActorsManagerDriver implements ManagerDriver { const value = await stub.managerKvGet(key); return value !== null ? 
new TextDecoder().decode(value) : null; } + + async kvBatchGet( + actorId: string, + keys: Uint8Array[], + ): Promise<(Uint8Array | null)[]> { + const env = getCloudflareAmbientEnv(); + + const [doId] = parseActorId(actorId); + + const id = env.ACTOR_DO.idFromString(doId); + const stub = env.ACTOR_DO.get(id); + + return await stub.managerKvBatchGet(keys); + } + + async kvBatchPut( + actorId: string, + entries: [Uint8Array, Uint8Array][], + ): Promise<void> { + const env = getCloudflareAmbientEnv(); + + const [doId] = parseActorId(actorId); + + const id = env.ACTOR_DO.idFromString(doId); + const stub = env.ACTOR_DO.get(id); + + await stub.managerKvBatchPut(entries); + } + + async kvBatchDelete( + actorId: string, + keys: Uint8Array[], + ): Promise<void> { + const env = getCloudflareAmbientEnv(); + + const [doId] = parseActorId(actorId); + + const id = env.ACTOR_DO.idFromString(doId); + const stub = env.ACTOR_DO.get(id); + + await stub.managerKvBatchDelete(keys); + } + + async kvDeleteRange( + actorId: string, + start: Uint8Array, + end: Uint8Array, + ): Promise<void> { + const env = getCloudflareAmbientEnv(); + + const [doId] = parseActorId(actorId); + + const id = env.ACTOR_DO.idFromString(doId); + const stub = env.ACTOR_DO.get(id); + + await stub.managerKvDeleteRange(start, end); + } } diff --git a/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/db-stress.ts b/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/db-stress.ts new file mode 100644 index 0000000000..9239c5b5ac --- /dev/null +++ b/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/db-stress.ts @@ -0,0 +1,104 @@ +import { actor } from "rivetkit"; +import { db } from "rivetkit/db"; + +export const dbStressActor = actor({ + state: {}, + db: db({ + onMigrate: async (db) => { + await db.execute(` + CREATE TABLE IF NOT EXISTS stress_data ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + value TEXT NOT NULL, + created_at INTEGER NOT NULL + ) + `); + }, + }), + actions: { + // Insert many
rows in a single action. Used to create a long-running + // DB operation that can race with destroy/disconnect. + insertBatch: async (c, count: number) => { + const now = Date.now(); + const values: string[] = []; + for (let i = 0; i < count; i++) { + values.push(`('row-${i}', ${now})`); + } + await c.db.execute( + `INSERT INTO stress_data (value, created_at) VALUES ${values.join(", ")}`, + ); + return { count }; + }, + + getCount: async (c) => { + const results = await c.db.execute<{ count: number }>( + `SELECT COUNT(*) as count FROM stress_data`, + ); + return results[0].count; + }, + + // Measure event loop health during a DB operation. + // Runs a Promise.resolve() microtask check interleaved with DB + // inserts to detect if the event loop is being blocked between + // awaits. Reports the wall-clock duration so the test can verify + // the inserts complete in a reasonable time (not blocked by + // synchronous lifecycle operations). + measureEventLoopHealth: async (c, insertCount: number) => { + const startMs = Date.now(); + + // Do DB work that should NOT block the event loop. + // Insert rows one at a time to create many async round-trips. + for (let i = 0; i < insertCount; i++) { + await c.db.execute( + `INSERT INTO stress_data (value, created_at) VALUES ('drift-${i}', ${Date.now()})`, + ); + } + + const elapsedMs = Date.now() - startMs; + + return { + elapsedMs, + insertCount, + }; + }, + + // Write data to multiple rows that can be verified after a + // forced disconnect and reconnect. 
+ writeAndVerify: async (c, count: number) => { + const now = Date.now(); + for (let i = 0; i < count; i++) { + await c.db.execute( + `INSERT INTO stress_data (value, created_at) VALUES ('verify-${i}', ${now})`, + ); + } + + const results = await c.db.execute<{ count: number }>( + `SELECT COUNT(*) as count FROM stress_data WHERE value LIKE 'verify-%'`, + ); + return results[0].count; + }, + + integrityCheck: async (c) => { + const rows = await c.db.execute<Record<string, unknown>>( + "PRAGMA integrity_check", + ); + const value = Object.values(rows[0] ?? {})[0]; + return String(value ?? ""); + }, + + triggerSleep: (c) => { + c.sleep(); + }, + + reset: async (c) => { + await c.db.execute(`DELETE FROM stress_data`); + }, + + destroy: (c) => { + c.destroy(); + }, + }, + options: { + actionTimeout: 120_000, + sleepTimeout: 100, + }, +}); diff --git a/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/registry.ts b/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/registry.ts index 12a5e74d49..0c8c00007a 100644 --- a/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/registry.ts +++ b/rivetkit-typescript/packages/rivetkit/fixtures/driver-test-suite/registry.ts @@ -3,7 +3,6 @@ import { accessControlActor, accessControlNoQueuesActor, } from "./access-control"; -import { agentOsTestActor } from "./agent-os"; import { inputActor } from "./action-inputs"; import { @@ -71,6 +70,7 @@ import { } from "./run"; import { dockerSandboxActor } from "./sandbox"; import { scheduled } from "./scheduled"; +import { dbStressActor } from "./db-stress"; import { scheduledDb } from "./scheduled-db"; import { sleep, @@ -140,6 +140,18 @@ import { workflowTryActor, } from "./workflow"; +let agentOsTestActor: + | (Awaited<typeof import("./agent-os")>["agentOsTestActor"]) + | undefined; + +try { + ({ agentOsTestActor } = await import("./agent-os")); +} catch (error) { + if (!(error instanceof Error) || !error.message.includes("agent-os")) { + throw error; + } +} + // Consolidated setup with all actors
export const registry = setup({ use: { @@ -151,6 +163,8 @@ export const registry = setup({ counterWithLifecycle, // From scheduled.ts scheduled, + // From db-stress.ts + dbStressActor, // From scheduled-db.ts scheduledDb, // From sandbox.ts @@ -302,7 +316,11 @@ export const registry = setup({ dbPragmaMigrationActor, // From state-zod-coercion.ts stateZodCoercionActor, - // From agent-os.ts - agentOsTestActor, + ...(agentOsTestActor + ? { + // From agent-os.ts + agentOsTestActor, + } + : {}), }, }); diff --git a/rivetkit-typescript/packages/rivetkit/package.json b/rivetkit-typescript/packages/rivetkit/package.json index 8a89adc2be..a05f021214 100644 --- a/rivetkit-typescript/packages/rivetkit/package.json +++ b/rivetkit-typescript/packages/rivetkit/package.json @@ -336,6 +336,7 @@ "@hono/zod-openapi": "^1.1.5", "@rivetkit/bare-ts": "^0.6.2", "@rivetkit/engine-envoy-client": "workspace:*", + "@rivetkit/engine-kv-channel-protocol": "workspace:*", "@rivetkit/engine-runner": "workspace:*", "@rivetkit/fast-json-patch": "^3.1.2", "@rivetkit/on-change": "^6.0.2-rc.1", diff --git a/rivetkit-typescript/packages/rivetkit/scripts/manager-openapi-gen.ts b/rivetkit-typescript/packages/rivetkit/scripts/manager-openapi-gen.ts index bc8f69f6ed..3724f94025 100644 --- a/rivetkit-typescript/packages/rivetkit/scripts/manager-openapi-gen.ts +++ b/rivetkit-typescript/packages/rivetkit/scripts/manager-openapi-gen.ts @@ -33,6 +33,10 @@ async function main() { setGetUpgradeWebSocket: unimplemented, buildGatewayUrl: unimplemented, kvGet: unimplemented, + kvBatchGet: unimplemented, + kvBatchPut: unimplemented, + kvBatchDelete: unimplemented, + kvDeleteRange: unimplemented, }; // const client = createClientWithDriver( diff --git a/rivetkit-typescript/packages/rivetkit/src/actor/config.ts b/rivetkit-typescript/packages/rivetkit/src/actor/config.ts index 6092b11347..e75fe0f1b1 100644 --- a/rivetkit-typescript/packages/rivetkit/src/actor/config.ts +++ 
b/rivetkit-typescript/packages/rivetkit/src/actor/config.ts @@ -261,6 +261,12 @@ export const ActorConfigSchema = z zFunction<(request: Request) => boolean>(), ]) .default(false), + /** Override RivetKit's SQLite preload budget for this actor. Set to 0 to disable SQLite preloading. */ + preloadMaxSqliteBytes: z.number().nonnegative().optional(), + /** Override RivetKit's workflow preload budget for this actor. Set to 0 to disable workflow preloading. */ + preloadMaxWorkflowBytes: z.number().nonnegative().optional(), + /** Override RivetKit's connections preload budget for this actor. Set to 0 to disable connections preloading. */ + preloadMaxConnectionsBytes: z.number().nonnegative().optional(), }) .strict() .prefault(() => ({})), @@ -1128,6 +1134,24 @@ export const DocActorOptionsSchema = z .describe( "Whether WebSockets using onWebSocket can be hibernated. WebSockets using actions/events are hibernatable by default. Default: false", ), + preloadMaxSqliteBytes: z + .number() + .optional() + .describe( + "Override RivetKit's SQLite preload budget for this actor. Set to 0 to disable SQLite preloading.", + ), + preloadMaxWorkflowBytes: z + .number() + .optional() + .describe( + "Override RivetKit's workflow preload budget for this actor. Set to 0 to disable workflow preloading.", + ), + preloadMaxConnectionsBytes: z + .number() + .optional() + .describe( + "Override RivetKit's connections preload budget for this actor. 
Set to 0 to disable connections preloading.", + ), }) .describe("Actor options for timeouts and behavior configuration."); diff --git a/rivetkit-typescript/packages/rivetkit/src/actor/driver.ts b/rivetkit-typescript/packages/rivetkit/src/actor/driver.ts index 0307aa783d..2951108c32 100644 --- a/rivetkit-typescript/packages/rivetkit/src/actor/driver.ts +++ b/rivetkit-typescript/packages/rivetkit/src/actor/driver.ts @@ -4,7 +4,11 @@ import type { ManagerDriver } from "@/manager/driver"; import { type AnyConn } from "./conn/mod"; import type { AnyActorInstance } from "./instance/mod"; import type { RegistryConfig } from "@/registry/config"; -import type { RawDatabaseClient, DrizzleDatabaseClient } from "@/db/config"; +import type { + RawDatabaseClient, + DrizzleDatabaseClient, + NativeSqliteConfig, +} from "@/db/config"; import type { ISqliteVfs } from "@rivetkit/sqlite-vfs"; export type ActorDriverBuilder = ( @@ -101,6 +105,13 @@ export interface ActorDriver { */ createSqliteVfs?(actorId: string): ISqliteVfs | Promise<ISqliteVfs>; + /** + * Returns native SQLite channel configuration for this actor. + */ + getNativeSqliteConfig?( + actorId: string, + ): NativeSqliteConfig | undefined; + /** * Requests the actor to go to sleep. * * @@ -115,6 +126,15 @@ */ startDestroy(actorId: string): void; + /** + * Test-only helper that simulates an abrupt actor crash. + * + * Unlike startSleep/startDestroy, this skips actor lifecycle hooks and the + * final persist flush. Drivers may still release local resources so the + * current test process can continue running. + */ + hardCrashActor?(actorId: string): Promise<void>; + /** * Shuts down the actor runner.
*/ diff --git a/rivetkit-typescript/packages/rivetkit/src/actor/instance/mod.ts b/rivetkit-typescript/packages/rivetkit/src/actor/instance/mod.ts index 2b025e6158..9194699a39 100644 --- a/rivetkit-typescript/packages/rivetkit/src/actor/instance/mod.ts +++ b/rivetkit-typescript/packages/rivetkit/src/actor/instance/mod.ts @@ -773,6 +773,34 @@ export class ActorInstance< } } + async debugForceCrash() { + if (this.#shutdownComplete) { + return; + } + if (this.#stopCalled) { + this.#rLog.warn({ msg: "already stopping actor during hard crash" }); + return; + } + this.#stopCalled = true; + + try { + if (this.#sleepTimeout) { + clearTimeout(this.#sleepTimeout); + this.#sleepTimeout = undefined; + } + + this.driver.cancelAlarm?.(this.#actorId); + this.stateManager.clearPendingSaveTimeout(); + + try { + this.#abortController.abort(); + } catch {} + } finally { + this.#shutdownComplete = true; + await this.#cleanupDatabase(); + } + } + // MARK: - Sleep startSleep() { if (this.#stopCalled || this.#destroyCalled) { @@ -1901,11 +1929,16 @@ export class ActorInstance< this.driver.kvBatchGet(this.#actorId, keys), batchDelete: (keys: Uint8Array[]) => this.driver.kvBatchDelete(this.#actorId, keys), + deleteRange: (start: Uint8Array, end: Uint8Array) => + this.driver.kvDeleteRange(this.#actorId, start, end), }, sqliteVfs: this.#sqliteVfs, metrics: this.#metrics, preloadedEntries: sqlitePreloadEntries, log: this.#rLog, + nativeSqliteConfig: this.driver.getNativeSqliteConfig?.( + this.#actorId, + ), }), ); this.#rLog.info({ msg: "database migration starting" }); diff --git a/rivetkit-typescript/packages/rivetkit/src/actor/instance/preload-map.ts b/rivetkit-typescript/packages/rivetkit/src/actor/instance/preload-map.ts index 4162beea79..2e259c4418 100644 --- a/rivetkit-typescript/packages/rivetkit/src/actor/instance/preload-map.ts +++ b/rivetkit-typescript/packages/rivetkit/src/actor/instance/preload-map.ts @@ -1,6 +1,6 @@ /** - * Input types matching the runner protocol PreloadedKv 
structure. - * Defined locally to avoid a dependency on the runner-protocol package. + * Input types matching the actor start-command PreloadedKv structure. + * Defined locally to avoid a direct dependency on a protocol package here. */ export interface PreloadedKvInput { readonly entries: readonly { diff --git a/rivetkit-typescript/packages/rivetkit/src/db/config.ts b/rivetkit-typescript/packages/rivetkit/src/db/config.ts index f6a88af0d1..3ff3b32d0c 100644 --- a/rivetkit-typescript/packages/rivetkit/src/db/config.ts +++ b/rivetkit-typescript/packages/rivetkit/src/db/config.ts @@ -3,6 +3,12 @@ import type { ActorMetrics } from "@/actor/metrics"; export type AnyDatabaseProvider = DatabaseProvider | undefined; +export interface NativeSqliteConfig { + endpoint: string; + token?: string; + namespace: string; +} + /** * Context provided to database providers for creating database clients */ @@ -33,6 +39,7 @@ export interface DatabaseProviderContext { batchPut: (entries: [Uint8Array, Uint8Array][]) => Promise<void>; batchGet: (keys: Uint8Array[]) => Promise<(Uint8Array | null)[]>; batchDelete: (keys: Uint8Array[]) => Promise<void>; + deleteRange: (start: Uint8Array, end: Uint8Array) => Promise<void>; @@ -58,6 +65,12 @@ * duration and KV call count. */ log?: { debug(obj: Record<string, unknown>): void }; + + /** + * Native SQLite channel configuration. When provided, the native addon + * connects to this explicit endpoint instead of reading process env.
+ */ + nativeSqliteConfig?: NativeSqliteConfig; } export type DatabaseProvider = { diff --git a/rivetkit-typescript/packages/rivetkit/src/db/mod.ts b/rivetkit-typescript/packages/rivetkit/src/db/mod.ts index def08f4474..9fbe132299 100644 --- a/rivetkit-typescript/packages/rivetkit/src/db/mod.ts +++ b/rivetkit-typescript/packages/rivetkit/src/db/mod.ts @@ -1,4 +1,8 @@ import type { DatabaseProvider, RawAccess } from "./config"; +import { + nativeSqliteAvailable, + createNativeRawAccess, +} from "./native-sqlite"; import { AsyncMutex, createActorKvStore, @@ -8,6 +12,9 @@ import { export type { RawAccess } from "./config"; +// Log the native SQLite fallback warning at most once per process. +let nativeFallbackWarned = false; + interface DatabaseFactoryConfig { onMigrate?: (db: RawAccess) => Promise<void> | void; } @@ -47,6 +54,24 @@ export function db({ } satisfies RawAccess; } + // Use native SQLite when the addon is available. The native path + // routes KV operations over a WebSocket KV channel, bypassing + // the WASM VFS entirely. + if (nativeSqliteAvailable()) { + return await createNativeRawAccess( + ctx.actorId, + ctx.nativeSqliteConfig, + ); + } + + // Native addon not available. Fall back to WASM SQLite. + if (!nativeFallbackWarned) { + nativeFallbackWarned = true; + console.warn( + "native SQLite not available, falling back to WebAssembly. run npm rebuild to install native bindings.", + ); + } + // Construct KV-backed client using actor driver's KV operations if (!ctx.sqliteVfs) { throw new Error( diff --git a/rivetkit-typescript/packages/rivetkit/src/db/native-sqlite.ts b/rivetkit-typescript/packages/rivetkit/src/db/native-sqlite.ts new file mode 100644 index 0000000000..b0da15ed54 --- /dev/null +++ b/rivetkit-typescript/packages/rivetkit/src/db/native-sqlite.ts @@ -0,0 +1,399 @@ +/** + * Native SQLite integration via @rivetkit/sqlite-native. + * + * Attempts to load the native addon at runtime and provides a fallback-aware + * API for the database provider.
The KV channel connection is initialized once + * per process and shared across all actors. + * + * The native VFS and WASM VFS are byte-compatible. See + * rivetkit-typescript/packages/sqlite-native/src/vfs.rs and + * rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts. + */ + +import { getRequireFn } from "@/utils/node"; +import { + getRivetEndpoint, + getRivetToken, + getRivetNamespace, +} from "@/utils/env-vars"; +import type { NativeSqliteConfig, RawAccess } from "./config"; +import { AsyncMutex } from "./shared"; + +// Type declarations for @rivetkit/sqlite-native. +// Declared inline to avoid a build-time dependency on the native addon, +// which may not be installed or compiled. + +/** Typed bind parameter matching the Rust BindParam napi struct. */ +interface NativeBindParam { + kind: "null" | "int" | "float" | "text" | "blob"; + intValue?: number; + floatValue?: number; + textValue?: string; + blobValue?: Buffer; +} + +interface NativeSqliteModule { + connect(config: { + url: string; + token?: string; + namespace: string; + }): NativeKvChannel; + openDatabase( + channel: NativeKvChannel, + actorId: string, + ): Promise<NativeDatabase>; + execute( + db: NativeDatabase, + sql: string, + params?: NativeBindParam[], + ): Promise<{ changes: number }>; + query( + db: NativeDatabase, + sql: string, + params?: NativeBindParam[], + ): Promise<{ columns: string[]; rows: unknown[][] }>; + exec( + db: NativeDatabase, + sql: string, + ): Promise<{ columns: string[]; rows: unknown[][] }>; + closeDatabase(db: NativeDatabase): Promise<void>; + disconnect(channel: NativeKvChannel): Promise<void>; + getMetrics?(channel: NativeKvChannel): KvChannelMetricsSnapshot | undefined; +} + +/** Metrics snapshot for a single KV operation type. */ +export interface OpMetricsSnapshot { + count: number; + totalDurationUs: number; + minDurationUs: number; + maxDurationUs: number; + avgDurationUs: number; +} + +/** Aggregated KV channel metrics for all operation types.
*/ +export interface KvChannelMetricsSnapshot { + get: OpMetricsSnapshot; + put: OpMetricsSnapshot; + delete: OpMetricsSnapshot; + deleteRange: OpMetricsSnapshot; + actorOpen: OpMetricsSnapshot; + actorClose: OpMetricsSnapshot; +} + +// Opaque handles from the native addon. +type NativeKvChannel = object; +type NativeDatabase = object; + +// Cached detection result. +let nativeModule: NativeSqliteModule | null = null; +let detectionDone = false; + +// KV channels are pooled by endpoint/token/namespace so concurrent test +// runtimes do not tear down each other's connection state. +const kvChannels = new Map<string, NativeKvChannel>(); + +// Whether the process shutdown handler has been registered. +let shutdownRegistered = false; + +/** + * Reset the cached native SQLite detection state. + * For testing only. Allows tests to switch between native and WASM VFS + * backends within the same process. + * + * @param disable - If true, force detection to report native as unavailable. + * If false/undefined, reset so the next call re-detects. + * @internal + */ +export async function _resetNativeDetection(disable?: boolean): Promise<void> { + if (nativeModule) { + const disconnectPromises = Array.from(kvChannels.values()).map( + async (channel) => { + try { + await nativeModule!.disconnect(channel); + } catch { + // Ignore cleanup errors + } + }, + ); + await Promise.all(disconnectPromises); + } + kvChannels.clear(); + + if (disable) { + detectionDone = true; + nativeModule = null; + } else { + detectionDone = false; + nativeModule = null; + } +} + +/** + * Attempts to load the @rivetkit/sqlite-native .node addon. + * Catches all failure modes: missing file, glibc mismatch, + * N-API version mismatch, corrupted binary.
+ */ +export function nativeSqliteAvailable(): boolean { + if (detectionDone) return nativeModule !== null; + detectionDone = true; + + try { + const requireFn = getRequireFn(); + nativeModule = requireFn( + /* webpackIgnore: true */ "@rivetkit/sqlite-native", + ) as NativeSqliteModule; + return true; + } catch { + nativeModule = null; + return false; + } +} + +/** + * Returns the loaded native module. Only valid after nativeSqliteAvailable() + * returns true. + */ +function getNativeModule(): NativeSqliteModule { + if (!nativeModule) { + throw new Error("native SQLite module not loaded"); + } + return nativeModule; +} + +/** + * Disconnect all pooled KV channels, if any exist. Safe to call multiple times. + */ +export function disconnectKvChannel(): void { + if (nativeModule) { + for (const channel of kvChannels.values()) { + // Fire-and-forget the async disconnect. During process shutdown, + // we cannot reliably await Promises (beforeExit/signal handlers + // are synchronous). The WebSocket close frame is best-effort. + nativeModule.disconnect(channel).catch(() => { + // Ignore cleanup errors during shutdown. + }); + } + } + kvChannels.clear(); +} + +/** + * Register process shutdown handlers that clean up the pooled KV channels. + * Called once per process on first channel creation. Uses `beforeExit` for + * graceful exit and signal handlers for SIGTERM/SIGINT. + */ +function registerShutdownHandler(): void { + if (shutdownRegistered) return; + shutdownRegistered = true; + + const onShutdown = () => { + disconnectKvChannel(); + }; + + // beforeExit fires when the event loop drains. Signals fire on external + // termination. Both paths call disconnectKvChannel which is idempotent. + process.on("beforeExit", onShutdown); + process.on("SIGTERM", onShutdown); + process.on("SIGINT", onShutdown); +} + +/** + * Get or create the process-level KV channel connection.
+ * + * Derives the WebSocket URL from RIVET_ENDPOINT (defaults to + * http://127.0.0.1:6420 for local dev). Authenticates with RIVET_TOKEN. + * + * If the channel was previously disconnected (e.g., during shutdown or due + * to a permanent failure), a new channel is created automatically. + */ +function getKvChannelConfig(config?: NativeSqliteConfig) { + const endpoint = + config?.endpoint ?? getRivetEndpoint() ?? "http://127.0.0.1:6420"; + const token = config?.token ?? getRivetToken(); + const namespace = config?.namespace ?? getRivetNamespace() ?? "default"; + + // Convert HTTP(S) endpoint to WebSocket URL for the KV channel. + const wsUrl = endpoint + .replace(/^https:\/\//, "wss://") + .replace(/^http:\/\//, "ws://") + .replace(/\/$/, ""); + + return { + wsUrl, + token: token ?? undefined, + namespace, + key: `${wsUrl}\u0000${token ?? ""}\u0000${namespace}`, + }; +} + +function getOrCreateKvChannel(config?: NativeSqliteConfig): NativeKvChannel { + const mod = getNativeModule(); + const channelConfig = getKvChannelConfig(config); + const existing = kvChannels.get(channelConfig.key); + if (existing) return existing; + + const channel = mod.connect({ + url: channelConfig.wsUrl, + token: channelConfig.token, + namespace: channelConfig.namespace, + }); + kvChannels.set(channelConfig.key, channel); + + registerShutdownHandler(); + + return channel; +} + +/** + * Convert binding values to typed BindParam objects for the native addon. + * Uses Buffer for blobs instead of JSON arrays to avoid 20x serialization + * overhead. See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md M7. 
+ */
+function toNativeBindings(args: unknown[]): NativeBindParam[] {
+ return args.map((arg): NativeBindParam => {
+ if (arg === null || arg === undefined) {
+ return { kind: "null" };
+ }
+ if (typeof arg === "bigint") {
+ // SQLite integers are 64-bit, but Number is only exact up to
+ // 2^53 - 1; reject bigints that would silently lose precision.
+ if (
+ arg > BigInt(Number.MAX_SAFE_INTEGER) ||
+ arg < -BigInt(Number.MAX_SAFE_INTEGER)
+ ) {
+ throw new Error(
+ `bigint bind parameter out of safe integer range: ${arg}`,
+ );
+ }
+ return { kind: "int", intValue: Number(arg) };
+ }
+ if (typeof arg === "number") {
+ if (Number.isInteger(arg)) {
+ return { kind: "int", intValue: arg };
+ }
+ return { kind: "float", floatValue: arg };
+ }
+ if (typeof arg === "string") {
+ return { kind: "text", textValue: arg };
+ }
+ if (typeof arg === "boolean") {
+ return { kind: "int", intValue: arg ? 1 : 0 };
+ }
+ if (arg instanceof Uint8Array) {
+ return { kind: "blob", blobValue: Buffer.from(arg) };
+ }
+ throw new Error(`unsupported bind parameter type: ${typeof arg}`);
+ });
+}
+
+/**
+ * Get a snapshot of KV channel operation metrics.
+ * Returns undefined if the native module is not available or the channel is not connected.
+ */
+export function getKvChannelMetrics(): KvChannelMetricsSnapshot | undefined {
+ if (!nativeModule?.getMetrics) return undefined;
+ const channel = kvChannels.get(getKvChannelConfig().key);
+ if (!channel) return undefined;
+ return nativeModule.getMetrics(channel) as KvChannelMetricsSnapshot | undefined;
+}
+
+/**
+ * Disconnect the KV channel for the current endpoint/token/namespace only.
+ * This is used by the local driver test harness so one test runtime does not
+ * shut down another concurrent runtime's native SQLite channel.
+ */
+export async function disconnectKvChannelForCurrentConfig(
+ config?: NativeSqliteConfig,
+): Promise<void> {
+ if (!nativeModule) {
+ return;
+ }
+
+ const { key } = getKvChannelConfig(config);
+ const channel = kvChannels.get(key);
+ if (!channel) {
+ return;
+ }
+
+ kvChannels.delete(key);
+ await nativeModule.disconnect(channel);
+}
+
+/**
+ * Create a RawAccess database client backed by the native SQLite addon.
+ * The KV channel is shared per process; a new database is opened per actor.
+ */
+export async function createNativeRawAccess(
+ actorId: string,
+ config?: NativeSqliteConfig,
+): Promise<RawAccess> {
+ const mod = getNativeModule();
+ const channel = getOrCreateKvChannel(config);
+ const nativeDb = await mod.openDatabase(channel, actorId);
+ let closed = false;
+ const mutex = new AsyncMutex();
+
+ const ensureOpen = () => {
+ if (closed) {
+ throw new Error("database is closed");
+ }
+ };
+
+ return {
+ execute: async <
+ TRow extends Record<string, unknown> = Record<
+ string,
+ unknown
+ >,
+ >(
+ query: string,
+ ...args: unknown[]
+ ): Promise<TRow[]> => {
+ return await mutex.run(async () => {
+ ensureOpen();
+
+ if (args.length > 0) {
+ // The native addon validates binding types in Rust
+ // (bind_params). Convert each argument to a typed
+ // BindParam; blobs are passed as Buffers, not JSON arrays.
+ const bindings = toNativeBindings(args);
+ const token = query
+ .trimStart()
+ .slice(0, 16)
+ .toUpperCase();
+ const returnsRows =
+ token.startsWith("SELECT") ||
+ token.startsWith("PRAGMA") ||
+ token.startsWith("WITH");
+
+ if (returnsRows) {
+ const { rows, columns } = await mod.query(
+ nativeDb,
+ query,
+ bindings,
+ );
+ return rows.map((row: unknown[]) => {
+ const rowObj: Record<string, unknown> = {};
+ for (let i = 0; i < columns.length; i++) {
+ rowObj[columns[i]] = row[i];
+ }
+ return rowObj;
+ }) as TRow[];
+ }
+
+ await mod.execute(nativeDb, query, bindings);
+ return [] as TRow[];
+ }
+
+ // Multi-statement SQL (e.g., migrations) without parameters.
+ // Uses the native exec which loops sqlite3_prepare_v2 with
+ // tail pointer tracking.
+ const { rows, columns } = await mod.exec(nativeDb, query);
+ return rows.map((row: unknown[]) => {
+ const rowObj: Record<string, unknown> = {};
+ for (let i = 0; i < columns.length; i++) {
+ rowObj[columns[i]] = row[i];
+ }
+ return rowObj;
+ }) as TRow[];
+ });
+ },
+ close: async () => {
+ await mutex.run(async () => {
+ if (closed) return;
+ closed = true;
+ await mod.closeDatabase(nativeDb);
+ });
+ },
+ };
+}
diff --git a/rivetkit-typescript/packages/rivetkit/src/db/shared.ts b/rivetkit-typescript/packages/rivetkit/src/db/shared.ts
index 3e7b728683..6992b9ef37 100644
--- a/rivetkit-typescript/packages/rivetkit/src/db/shared.ts
+++ b/rivetkit-typescript/packages/rivetkit/src/db/shared.ts
@@ -200,6 +200,9 @@ export function createActorKvStore(
 poison: () => {
 poisoned = true;
 },
+ deleteRange: async (start: Uint8Array, end: Uint8Array) => {
+ await kv.deleteRange(start, end);
+ },
 };
 }
diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/mod.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/mod.ts
index c1f273b2df..913e7b30f7 100644
--- a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/mod.ts
+++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/mod.ts
@@ -17,6 +17,7 @@ import { runActorConnTests } from "./tests/actor-conn";
 import { runActorConnHibernationTests } from "./tests/actor-conn-hibernation";
 import { runActorConnStateTests } from "./tests/actor-conn-state";
 import { runActorDbTests } from "./tests/actor-db";
+import { runActorDbStressTests } from "./tests/actor-db-stress";
 import { runConnErrorSerializationTests } from "./tests/conn-error-serialization";
 import { runActorDestroyTests } from "./tests/actor-destroy";
 import { runActorDriverTests } from "./tests/actor-driver";
@@ -33,6 +34,7 @@ import { runActorSandboxTests } from "./tests/actor-sandbox";
 import { runActorStatelessTests } from "./tests/actor-stateless";
 import { runActorVarsTests } from "./tests/actor-vars";
 import { runActorWorkflowTests }
from "./tests/actor-workflow";
+import { runCrossBackendVfsTests } from "./tests/cross-backend-vfs";
 import { runManagerDriverTests } from "./tests/manager-driver";
 import { runRawHttpTests } from "./tests/raw-http";
 import { runRawHttpRequestPropertiesTests } from "./tests/raw-http-request-properties";
@@ -86,6 +88,8 @@ export interface DriverDeployOutput {
 endpoint: string;
 namespace: string;
 runnerName: string;
+ hardCrashActor?: (actorId: string) => Promise<void>;
+ hardCrashPreservesData?: boolean;
 /** Cleans up the test. */
 cleanup(): Promise<void>;
@@ -182,6 +186,22 @@ export function runDriverTests(
 }
 });
 }
+
+ // Cross-backend VFS compatibility runs once, independent of
+ // client type and encoding. Skips when native SQLite is unavailable.
+ runCrossBackendVfsTests({
+ ...driverTestConfigPartial,
+ clientType: "http",
+ encoding: "bare",
+ });
+
+ // Stress tests for DB lifecycle races, event loop blocking, and
+ // KV channel resilience. Run once, not per-encoding.
+ runActorDbStressTests({
+ ...driverTestConfigPartial,
+ clientType: "http",
+ encoding: "bare",
+ });
 });
 }
@@ -200,6 +220,8 @@ export async function createTestRuntime(
 token: string;
 };
 driver: DriverConfig;
+ hardCrashActor?: (actorId: string) => Promise<void>;
+ hardCrashPreservesData?: boolean;
 cleanup?: () => Promise<void>;
 }>,
): Promise<DriverDeployOutput> {
@@ -228,6 +250,8 @@ export async function createTestRuntime(
 driver,
 cleanup: driverCleanup,
 rivetEngine,
+ hardCrashActor,
+ hardCrashPreservesData,
 } = await driverFactory(registry);
 if (rivetEngine) {
@@ -242,6 +266,8 @@ export async function createTestRuntime(
 endpoint: rivetEngine.endpoint,
 namespace: rivetEngine.namespace,
 runnerName: rivetEngine.runnerName,
+ hardCrashActor,
+ hardCrashPreservesData,
 cleanup,
 };
 } else {
@@ -294,11 +320,29 @@ export async function createTestRuntime(
 );
 const port = address.port;
 const serverEndpoint = `http://127.0.0.1:${port}`;
+ managerDriver.setNativeSqliteConfig?.({
+ endpoint: serverEndpoint,
+ namespace: "default",
+ });
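For context on the `setNativeSqliteConfig` call above: the native SQLite layer caches one KV channel per endpoint/token/namespace triple, which is why the cleanup later in this function can disconnect only its own channel without touching channels owned by concurrent test runtimes. A small standalone sketch of that cache-key derivation (mirroring `getKvChannelConfig` in `native-sqlite.ts`; `channelKey` is an illustrative name, not part of this diff):

```typescript
// Illustrative helper (not part of the diff): shows how native-sqlite.ts
// derives the per-process KV channel cache key from endpoint/token/namespace.
function channelKey(
	endpoint: string,
	token: string | null,
	namespace: string,
): string {
	// HTTP(S) endpoints become WebSocket URLs; a trailing slash is stripped
	// so "http://host/" and "http://host" map to the same channel.
	const wsUrl = endpoint
		.replace(/^https:\/\//, "wss://")
		.replace(/^http:\/\//, "ws://")
		.replace(/\/$/, "");
	// NUL separators keep the three parts unambiguous in the joined key.
	return `${wsUrl}\u0000${token ?? ""}\u0000${namespace}`;
}

// Example: channelKey("http://127.0.0.1:6420/", null, "default")
// yields "ws://127.0.0.1:6420" + NUL + "" + NUL + "default".
```

Because each test server binds a dynamic port, two local runtimes derive different keys and therefore never reuse or tear down each other's channel.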
logger().info({ msg: "test serer listening", port }); // Cleanup const cleanup = async () => { + // Disconnect only the current test runtime's native KV channel so + // concurrent local runtimes do not shut down each other's channel. + try { + const { disconnectKvChannelForCurrentConfig } = await import( + "@/db/native-sqlite" + ); + await disconnectKvChannelForCurrentConfig({ + endpoint: serverEndpoint, + namespace: "default", + }); + } catch { + // Native module may not be available. + } + // Stop server await new Promise((resolve) => server.close(() => resolve(undefined)), @@ -312,6 +356,8 @@ export async function createTestRuntime( endpoint: serverEndpoint, namespace: "default", runnerName: "default", + hardCrashActor: managerDriver.hardCrashActor?.bind(managerDriver), + hardCrashPreservesData: driver.name !== "memory", cleanup, }; } diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/test-inline-client-driver.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/test-inline-client-driver.ts index e70f11ea63..14bc92d800 100644 --- a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/test-inline-client-driver.ts +++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/test-inline-client-driver.ts @@ -246,7 +246,34 @@ export function createTestInlineClientDriver( getUpgradeWebSocket = getUpgradeWebSocketInner; }, kvGet: (_actorId: string, _key: Uint8Array) => { - throw new Error("kvGet not impelmented on inline client driver"); + throw new Error("kvGet not implemented on inline client driver"); + }, + kvBatchGet: (_actorId: string, _keys: Uint8Array[]) => { + throw new Error( + "kvBatchGet not implemented on inline client driver", + ); + }, + kvBatchPut: ( + _actorId: string, + _entries: [Uint8Array, Uint8Array][], + ) => { + throw new Error( + "kvBatchPut not implemented on inline client driver", + ); + }, + kvBatchDelete: (_actorId: string, _keys: Uint8Array[]) => { + throw new Error( + "kvBatchDelete not 
implemented on inline client driver", + ); + }, + kvDeleteRange: ( + _actorId: string, + _start: Uint8Array, + _end: Uint8Array, + ) => { + throw new Error( + "kvDeleteRange not implemented on inline client driver", + ); }, } satisfies ManagerDriver; return driver; diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-agent-os.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-agent-os.ts index 07e053f129..f77e2f8f2c 100644 --- a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-agent-os.ts +++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-agent-os.ts @@ -1,9 +1,20 @@ +import { createRequire } from "node:module"; import { describe, expect, test } from "vitest"; import type { DriverTestConfig } from "../mod"; import { setupDriverTest } from "../utils"; +const require = createRequire(import.meta.url); +const hasAgentOsCore = (() => { + try { + require.resolve("@rivet-dev/agent-os-core"); + return true; + } catch { + return false; + } +})(); + export function runActorAgentOsTests(driverTestConfig: DriverTestConfig) { - describe.skipIf(driverTestConfig.skip?.agentOs)( + describe.skipIf(driverTestConfig.skip?.agentOs || !hasAgentOsCore)( "Actor agentOS Tests", () => { // --- Filesystem --- diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db-stress.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db-stress.ts new file mode 100644 index 0000000000..861035321c --- /dev/null +++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db-stress.ts @@ -0,0 +1,234 @@ +import { describe, expect, test } from "vitest"; +import { nativeSqliteAvailable } from "@/db/native-sqlite"; +import type { DriverTestConfig } from "../mod"; +import { setupDriverTest, waitFor } from "../utils"; + +const STRESS_TEST_TIMEOUT_MS = 60_000; + +/** + * Stress and resilience tests for the SQLite database subsystem. 
+ * + * These tests target edge cases from the adversarial review: + * - C1: close_database racing with in-flight operations + * - H1: lifecycle operations blocking the Node.js event loop + * - Reconnect: WebSocket disconnect during active KV operations + * + * They run against the file-system driver with real timers and require + * the native SQLite addon for the KV channel tests. + */ +export function runActorDbStressTests(driverTestConfig: DriverTestConfig) { + const nativeAvailable = nativeSqliteAvailable(); + + describe("Actor Database Stress Tests", () => { + test( + "destroy during long-running DB operation completes without crash", + async (c) => { + const { client } = await setupDriverTest( + c, + driverTestConfig, + ); + + // Start multiple actors and kick off long DB operations, + // then destroy them mid-flight. The test passes if no + // actor crashes and no unhandled errors propagate. + const actors = Array.from({ length: 5 }, (_, i) => + client.dbStressActor.getOrCreate([ + `stress-destroy-${i}-${crypto.randomUUID()}`, + ]), + ); + + // Start long-running inserts on all actors. + const insertPromises = actors.map((actor) => + actor.insertBatch(500).catch((err: Error) => ({ + error: err.message, + })), + ); + + // Immediately destroy all actors while inserts are in flight. + const destroyPromises = actors.map((actor) => + actor.destroy().catch((err: Error) => ({ + error: err.message, + })), + ); + + // Both sets of operations should resolve without hanging. + // Inserts may succeed or fail with an error (actor destroyed), + // but must not crash the process. + const results = await Promise.allSettled([ + ...insertPromises, + ...destroyPromises, + ]); + + // Verify all promises settled (none hung). 
+ expect(results).toHaveLength(10); + for (const result of results) { + expect(result.status).toBe("fulfilled"); + } + }, + STRESS_TEST_TIMEOUT_MS, + ); + + test( + "rapid create-insert-destroy cycles handle DB lifecycle correctly", + async (c) => { + const { client } = await setupDriverTest( + c, + driverTestConfig, + ); + + // Perform rapid cycles of create -> insert -> destroy. + // This exercises the close_database path racing with + // any pending DB operations from the insert. + for (let i = 0; i < 10; i++) { + const actor = client.dbStressActor.getOrCreate([ + `stress-cycle-${i}-${crypto.randomUUID()}`, + ]); + + // Insert some data. + await actor.insertBatch(10); + + // Verify data was written. + const count = await actor.getCount(); + expect(count).toBeGreaterThanOrEqual(10); + + // Destroy the actor (triggers close_database). + await actor.destroy(); + } + }, + STRESS_TEST_TIMEOUT_MS, + ); + + test( + "DB operations complete without excessive blocking", + async (c) => { + const { client } = await setupDriverTest( + c, + driverTestConfig, + ); + + const actor = client.dbStressActor.getOrCreate([ + `stress-health-${crypto.randomUUID()}`, + ]); + + // Measure wall-clock time for 100 sequential DB inserts. + // Each insert is an async round-trip through the VFS. + // If lifecycle operations (open_database, close_database) + // block the event loop, this will take much longer than + // expected because the action itself runs on that loop. + const health = await actor.measureEventLoopHealth(100); + + // 100 sequential inserts should complete in well under + // 30 seconds. A blocked event loop (e.g., 30s WebSocket + // timeout on open_database) would push this way over. + expect(health.elapsedMs).toBeLessThan(30_000); + expect(health.insertCount).toBe(100); + + // Verify the actor is still healthy after the test. 
+ const integrity = await actor.integrityCheck(); + expect(integrity.toLowerCase()).toBe("ok"); + }, + STRESS_TEST_TIMEOUT_MS, + ); + + // This test requires native SQLite (KV channel WebSocket). + // When using WASM SQLite, there's no WebSocket to disconnect. + describe.skipIf(!nativeAvailable)( + "KV Channel Resilience", + () => { + test( + "recovers from forced WebSocket disconnect during DB writes", + async (c) => { + const { client, endpoint } = + await setupDriverTest(c, driverTestConfig); + + const actor = client.dbStressActor.getOrCreate([ + `stress-disconnect-${crypto.randomUUID()}`, + ]); + + // Write initial data to confirm the actor works. + await actor.insertBatch(10); + expect(await actor.getCount()).toBe(10); + + // Force-close all KV channel WebSocket connections. + // The native SQLite addon should reconnect automatically. + const res = await fetch( + `${endpoint}/.test/kv-channel/force-disconnect`, + { method: "POST" }, + ); + expect(res.ok).toBe(true); + const body = (await res.json()) as { + closed: number; + }; + expect(body.closed).toBeGreaterThanOrEqual(0); + + // Give the native addon time to detect the disconnect + // and reconnect. + await waitFor(driverTestConfig, 2000); + + // The actor should still work after reconnection. + // The native addon re-opens actors on the new connection. + await actor.insertBatch(10); + const finalCount = await actor.getCount(); + expect(finalCount).toBe(20); + + // Verify data integrity after the disruption. + const integrity = await actor.integrityCheck(); + expect(integrity.toLowerCase()).toBe("ok"); + }, + STRESS_TEST_TIMEOUT_MS, + ); + + test( + "handles disconnect during active write operation", + async (c) => { + const { client, endpoint } = + await setupDriverTest(c, driverTestConfig); + + const actor = client.dbStressActor.getOrCreate([ + `stress-active-disconnect-${crypto.randomUUID()}`, + ]); + + // Confirm the actor is healthy. 
+ await actor.insertBatch(5); + + // Start a large write operation and disconnect + // mid-flight. The write may fail, but the actor + // should recover. + const writePromise = actor + .insertBatch(200) + .catch((err: Error) => ({ + error: err.message, + })); + + // Small delay to let the write start, then disconnect. + await new Promise((resolve) => + setTimeout(resolve, 50), + ); + + await fetch( + `${endpoint}/.test/kv-channel/force-disconnect`, + { method: "POST" }, + ); + + // Wait for the write to settle (success or failure). + await writePromise; + + // Wait for reconnection. + await waitFor(driverTestConfig, 2000); + + // Actor should recover. New operations should work. + await actor.insertBatch(5); + const count = await actor.getCount(); + // At least the initial 5 + final 5 should exist. + // The mid-disconnect 200 may or may not have committed. + expect(count).toBeGreaterThanOrEqual(10); + + const integrity = await actor.integrityCheck(); + expect(integrity.toLowerCase()).toBe("ok"); + }, + STRESS_TEST_TIMEOUT_MS, + ); + }, + ); + }); +} diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db.ts index 9463bfb358..f495e528cd 100644 --- a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db.ts +++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/actor-db.ts @@ -10,6 +10,8 @@ const HIGH_VOLUME_COUNT = 1000; const SLEEP_WAIT_MS = 150; const LIFECYCLE_POLL_INTERVAL_MS = 25; const LIFECYCLE_POLL_ATTEMPTS = 40; +const REAL_TIMER_HARD_CRASH_POLL_INTERVAL_MS = 50; +const REAL_TIMER_HARD_CRASH_POLL_ATTEMPTS = 600; const REAL_TIMER_DB_TIMEOUT_MS = 180_000; const CHUNK_BOUNDARY_SIZES = [ CHUNK_SIZE - 1, @@ -163,6 +165,71 @@ export function runActorDbTests(driverTestConfig: DriverTestConfig) { dbTestTimeout, ); + test.skipIf(driverTestConfig.skip?.sleep)( + "preserves committed rows across a hard crash and restart", 
+ async (c) => { + const { + client, + hardCrashActor, + hardCrashPreservesData, + } = await setupDriverTest(c, driverTestConfig); + if (!hardCrashPreservesData) { + return; + } + if (!hardCrashActor) { + throw new Error( + "hardCrashActor test helper is unavailable for this driver", + ); + } + + const actor = getDbActor(client, variant).getOrCreate([ + `db-${variant}-hard-crash-${crypto.randomUUID()}`, + ]); + + await actor.reset(); + await actor.insertValue("before-crash"); + expect(await actor.getCount()).toBe(1); + + const actorId = await actor.resolve(); + await hardCrashActor(actorId); + + const hardCrashPollAttempts = + driverTestConfig.useRealTimers + ? REAL_TIMER_HARD_CRASH_POLL_ATTEMPTS + : LIFECYCLE_POLL_ATTEMPTS; + const hardCrashPollIntervalMs = + driverTestConfig.useRealTimers + ? REAL_TIMER_HARD_CRASH_POLL_INTERVAL_MS + : LIFECYCLE_POLL_INTERVAL_MS; + + let countAfterCrash = 0; + for (let i = 0; i < hardCrashPollAttempts; i++) { + try { + countAfterCrash = await actor.getCount(); + } catch { + countAfterCrash = 0; + } + if (countAfterCrash === 1) { + break; + } + await waitFor( + driverTestConfig, + hardCrashPollIntervalMs, + ); + } + + expect(countAfterCrash).toBe(1); + const values = await actor.getValues(); + expect( + values.some((row) => row.value === "before-crash"), + ).toBe(true); + + await actor.insertValue("after-crash"); + expect(await actor.getCount()).toBe(2); + }, + lifecycleTestTimeout, + ); + test( "completes onDisconnect DB writes before sleeping", async (c) => { @@ -181,7 +248,25 @@ export function runActorDbTests(driverTestConfig: DriverTestConfig) { await waitFor(driverTestConfig, SLEEP_WAIT_MS + 250); await actor.configureDisconnectInsert(false, 0); - expect(await actor.getDisconnectInsertCount()).toBe(1); + // Poll for the disconnect insert to complete. 
+ // Native SQLite routes writes through a WebSocket KV + // channel, which adds latency that can push the + // onDisconnect DB write past the fixed wait window + // under concurrent test load. + let count = 0; + for (let i = 0; i < LIFECYCLE_POLL_ATTEMPTS; i++) { + count = + await actor.getDisconnectInsertCount(); + if (count >= 1) { + break; + } + await waitFor( + driverTestConfig, + LIFECYCLE_POLL_INTERVAL_MS, + ); + } + + expect(count).toBe(1); }, dbTestTimeout, ); diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/cross-backend-vfs.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/cross-backend-vfs.ts new file mode 100644 index 0000000000..7f203d0abe --- /dev/null +++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/tests/cross-backend-vfs.ts @@ -0,0 +1,166 @@ +import { describe, expect, test } from "vitest"; +import { + nativeSqliteAvailable, + _resetNativeDetection, +} from "@/db/native-sqlite"; +import type { DriverTestConfig } from "../mod"; +import { setupDriverTest, waitFor } from "../utils"; + +const SLEEP_WAIT_MS = 500; +const CROSS_BACKEND_TIMEOUT_MS = 30_000; + +/** + * Cross-backend VFS compatibility tests. + * + * Verifies that data written by the WASM VFS can be read by the native VFS + * and vice versa. Both VFS implementations store data in the same KV format + * (chunk keys, chunk data, metadata encoding). These tests catch encoding + * mismatches like the metadata version prefix difference fixed in US-024. + * + * Skipped when the native SQLite addon is not available. 
+ */ +export function runCrossBackendVfsTests(driverTestConfig: DriverTestConfig) { + const nativeAvailable = nativeSqliteAvailable(); + + describe.skipIf(!nativeAvailable)( + "Cross-Backend VFS Compatibility Tests", + () => { + test( + "WASM-to-native: data written with WASM VFS is readable with native VFS", + async (c) => { + // Restore native detection on cleanup + c.onTestFinished(async () => { + await _resetNativeDetection(); + }); + + // Phase 1: Force WASM VFS + await _resetNativeDetection(true); + + const { client } = await setupDriverTest( + c, + driverTestConfig, + ); + const actorId = `cross-w2n-${crypto.randomUUID()}`; + const actor = client.dbActorRaw.getOrCreate([actorId]); + + // Write structured data with various sizes to exercise + // chunk boundaries (CHUNK_SIZE = 4096). + await actor.insertValue("wasm-alpha"); + await actor.insertValue("wasm-beta"); + await actor.insertMany(10); + + // Large payload spanning multiple chunks + const { id: largeId } = + await actor.insertPayloadOfSize(8192); + + const wasmCount = await actor.getCount(); + expect(wasmCount).toBe(13); + + const wasmValues = await actor.getValues(); + const wasmLargePayloadSize = + await actor.getPayloadSize(largeId); + expect(wasmLargePayloadSize).toBe(8192); + + // Sleep the actor to flush all data to KV + await actor.triggerSleep(); + await waitFor(driverTestConfig, SLEEP_WAIT_MS); + + // Phase 2: Restore native VFS detection + await _resetNativeDetection(); + + // Recreate the actor. The db() provider now uses native + // SQLite, reading data written by the WASM VFS. 
+ const actor2 = client.dbActorRaw.getOrCreate([actorId]); + + const nativeCount = await actor2.getCount(); + expect(nativeCount).toBe(13); + + const nativeValues = await actor2.getValues(); + expect(nativeValues).toHaveLength(wasmValues.length); + for (let i = 0; i < wasmValues.length; i++) { + expect(nativeValues[i].value).toBe( + wasmValues[i].value, + ); + } + + const nativeLargePayloadSize = + await actor2.getPayloadSize(largeId); + expect(nativeLargePayloadSize).toBe(8192); + + // Verify integrity + const integrity = await actor2.integrityCheck(); + expect(integrity).toBe("ok"); + }, + CROSS_BACKEND_TIMEOUT_MS, + ); + + test( + "native-to-WASM: data written with native VFS is readable with WASM VFS", + async (c) => { + // Restore native detection on cleanup + c.onTestFinished(async () => { + await _resetNativeDetection(); + }); + + // Phase 1: Use native VFS (default when addon is available) + await _resetNativeDetection(); + + const { client } = await setupDriverTest( + c, + driverTestConfig, + ); + const actorId = `cross-n2w-${crypto.randomUUID()}`; + const actor = client.dbActorRaw.getOrCreate([actorId]); + + // Write structured data with various sizes + await actor.insertValue("native-alpha"); + await actor.insertValue("native-beta"); + await actor.insertMany(10); + + // Large payload spanning multiple chunks + const { id: largeId } = + await actor.insertPayloadOfSize(8192); + + const nativeCount = await actor.getCount(); + expect(nativeCount).toBe(13); + + const nativeValues = await actor.getValues(); + const nativeLargePayloadSize = + await actor.getPayloadSize(largeId); + expect(nativeLargePayloadSize).toBe(8192); + + // Sleep the actor to flush all data to KV + await actor.triggerSleep(); + await waitFor(driverTestConfig, SLEEP_WAIT_MS); + + // Phase 2: Force WASM VFS + await _resetNativeDetection(true); + + // Recreate the actor. The db() provider now uses WASM + // SQLite, reading data written by the native VFS. 
+ const actor2 = client.dbActorRaw.getOrCreate([actorId]);
+
+ const wasmCount = await actor2.getCount();
+ expect(wasmCount).toBe(13);
+
+ const wasmValues = await actor2.getValues();
+ expect(wasmValues).toHaveLength(nativeValues.length);
+ for (let i = 0; i < nativeValues.length; i++) {
+ expect(wasmValues[i].value).toBe(
+ nativeValues[i].value,
+ );
+ }
+
+ const wasmLargePayloadSize =
+ await actor2.getPayloadSize(largeId);
+ expect(wasmLargePayloadSize).toBe(8192);
+
+ // Verify integrity
+ const integrity = await actor2.integrityCheck();
+ expect(integrity).toBe("ok");
+ },
+ CROSS_BACKEND_TIMEOUT_MS,
+ );
+ },
+ );
+}
diff --git a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/utils.ts b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/utils.ts
index 929ce83089..c38ac3767f 100644
--- a/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/utils.ts
+++ b/rivetkit-typescript/packages/rivetkit/src/driver-test-suite/utils.ts
@@ -17,6 +17,8 @@ export async function setupDriverTest(
): Promise<{
 client: Client;
 endpoint: string;
+ hardCrashActor?: (actorId: string) => Promise<void>;
+ hardCrashPreservesData: boolean;
}> {
 if (!driverTestConfig.useRealTimers) {
 vi.useFakeTimers();
@@ -24,7 +26,14 @@
 }
 // Build drivers
- const { endpoint, namespace, runnerName, cleanup } =
+ const {
+ endpoint,
+ namespace,
+ runnerName,
+ hardCrashActor,
+ hardCrashPreservesData,
+ cleanup,
+ } =
 await driverTestConfig.start();
 let client: Client;
@@ -33,7 +42,7 @@
 client = createClient({
 endpoint,
 namespace,
- runnerName,
+ poolName: runnerName,
 encoding: driverTestConfig.encoding,
 // Disable metadata lookup to prevent redirect to the wrong port.
 // Each test starts a new server on a dynamic port, but the
@@ -64,6 +73,8 @@
 return {
 client,
 endpoint,
+ hardCrashActor,
+ hardCrashPreservesData: hardCrashPreservesData ??
false, }; } diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/engine/actor-driver.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/engine/actor-driver.ts index a8a7d340d8..ba999184b5 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/engine/actor-driver.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/engine/actor-driver.ts @@ -11,9 +11,6 @@ import { type AnyConn, CONN_STATE_MANAGER_SYMBOL } from "@/actor/conn/mod"; import { lookupInRegistry } from "@/actor/definition"; import { KEYS, - queueMetadataKey, - sqliteStoragePrefix, - workflowStoragePrefix, } from "@/actor/instance/keys"; import { type PreloadMap, @@ -109,10 +106,10 @@ export class EngineActorDriver implements ActorDriver { ); #isEnvoyStopped: boolean = false; - // HACK: Track actor stop intent locally since the runner protocol doesn't - // pass the stop reason to onActorStop. This will be fixed when the runner + // HACK: Track actor stop intent locally since the envoy protocol doesn't + // pass the stop reason to onActorStop. 
This will be fixed when the envoy
+ // protocol is updated to send the intent directly (see RVT-5284)
- #actorStopIntent: Map = new Map();
+ #actorStopIntent: Map = new Map();
 // Map of conn IDs to message index waiting to be persisted before sending
 // an ack
@@ -148,7 +145,6 @@ export class EngineActorDriver implements ActorDriver {
 // HACK: Override inspector token (which are likely to be
 // removed later on) with token from x-rivet-token header
- const token = config.token;
 // TODO:
 // if (token && runConfig.inspector && runConfig.inspector.enabled) {
 // runConfig.inspector.token = () => token;
@@ -165,7 +161,7 @@ export class EngineActorDriver implements ActorDriver {
 const envoyConfig: EnvoyConfig = {
 version: config.envoy.version,
 endpoint: getEndpoint(config),
- token,
+ token: config.token,
 namespace: config.namespace,
 poolName: config.envoy.poolName,
 metadata: {
@@ -206,6 +202,33 @@
 });
 }
+ async #discardCrashedActorState(actorId: string) {
+ const handler = this.#actors.get(actorId);
+ if (!handler) {
+ return;
+ }
+
+ if (handler.alarmTimeout) {
+ handler.alarmTimeout.abort();
+ handler.alarmTimeout = undefined;
+ }
+
+ if (handler.actor) {
+ try {
+ await handler.actor.debugForceCrash();
+ } catch (err) {
+ logger().debug({
+ msg: "actor crash cleanup errored",
+ actorId,
+ err: stringifyError(err),
+ });
+ }
+ }
+
+ this.#actors.delete(actorId);
+ this.#actorStopIntent.delete(actorId);
+ }
+
 getExtraActorLogParams(): Record<string, unknown> { return { envoyKey: this.#envoy.getEnvoyKey() ??
"-" }; }
@@ -270,6 +293,14 @@ export class EngineActorDriver implements ActorDriver {
 // No database overrides - will use KV-backed implementation from rivetkit/db
+ getNativeSqliteConfig() {
+ return {
+ endpoint: getEndpoint(this.#config),
+ token: this.#config.token,
+ namespace: this.#config.namespace,
+ };
+ }
+
 // MARK: - Batch KV operations
 async kvBatchPut(
 actorId: string,
@@ -302,12 +333,12 @@ export class EngineActorDriver implements ActorDriver {
 actorId,
 new Uint8Array(),
 );
- const keys = entries.map(([key]) => key);
+ const keys = entries.map(([key]: [Uint8Array, ...unknown[]]) => key);
 logger().info({
 msg: "kvList called",
 actorId,
 keysCount: keys.length,
- keys: keys.map((k) => new TextDecoder().decode(k)),
+ keys: keys.map((k: Uint8Array) => new TextDecoder().decode(k)),
 });
 return keys;
 }
@@ -330,7 +361,7 @@ export class EngineActorDriver implements ActorDriver {
 actorId,
 prefixStr: new TextDecoder().decode(prefix),
 entriesCount: result.length,
- keys: result.map(([key]) => new TextDecoder().decode(key)),
+ keys: result.map(([key]: [Uint8Array, ...unknown[]]) => new TextDecoder().decode(key)),
 });
 return result;
 }
@@ -377,33 +408,51 @@ export class EngineActorDriver implements ActorDriver {
 this.#envoy.destroyActor(actorId);
 }
+ async hardCrashActor(actorId: string): Promise<void> {
+ const handler = this.#actors.get(actorId);
+ if (!handler) {
+ return;
+ }
+
+ if (handler.actorStartPromise) {
+ await handler.actorStartPromise.promise.catch(() => undefined);
+ }
+
+ logger().info({
+ msg: "simulating hard crash for actor",
+ actorId,
+ });
+
+ await this.#discardCrashedActorState(actorId);
+ }
+
 async shutdown(immediate: boolean): Promise<void> {
 logger().info({ msg: "stopping engine actor driver", immediate });
- // TODO: We need to update the runner to have a draining state so:
+ // TODO: We need to update the envoy to have a draining state so:
 // 1. Send ToServerDraining
- // - This causes Pegboard to stop allocating actors to this runner
- // 2.
Pegboard sends ToClientStopActor for all actors on this runner which handles the graceful migration of each actor independently + // - This causes Pegboard to stop allocating actors to this envoy + // 2. Pegboard sends ToClientStopActor for all actors on this envoy which handles the graceful migration of each actor independently // 3. Send ToServerStopping once all actors have successfully stopped // // What's happening right now is: // 1. All actors enter stopped state // 2. Actors still respond to requests because only RivetKit knows it's // stopping, this causes all requests to issue errors that the actor is - // stopping. (This will NOT return a 503 bc the runner has no idea the + // stopping. (This will NOT return a 503 bc the envoy has no idea the // actors are stopping.) - // 3. Once the last actor stops, then the runner finally stops + actors + // 3. Once the last actor stops, then the envoy finally stops + actors // reschedule // // This means that: - // - All actors on this runner are bricked until the slowest onStop finishes + // - All actors on this envoy are bricked until the slowest onStop finishes // - Guard will not gracefully handle requests bc it's not receiving a 503 - // - Actors can still be scheduled to this runner while the other + // - Actors can still be scheduled to this envoy while the other // actors are stopping, meaning that those actors will NOT get onStop - // and will potentiall corrupt their state + // and will potentially corrupt their state // // HACK: Stop all actors to allow state to be saved - // NOTE: onStop is only supposed to be called by the runner, we're + // NOTE: onStop is only supposed to be called by the envoy, we're // abusing it here logger().debug({ msg: "stopping all actors before shutdown", @@ -508,57 +557,50 @@ export class EngineActorDriver implements ActorDriver { }); } - /** - * Fetch remaining startup KV data in parallel and build a PreloadMap. 
- * PERSIST_DATA is already known (passed in), so we only fetch the - * remaining exact keys and prefix scans. - */ - async #preloadStartupKv( - actorId: string, - persistData: Uint8Array, - ): Promise<{ preloadMap: PreloadMap; entries: number }> { - const remainingExactKeys = [KEYS.INSPECTOR_TOKEN, queueMetadataKey()]; - - const prefixScans = [ - KEYS.CONN_PREFIX, - sqliteStoragePrefix(), - workflowStoragePrefix(), - ]; - - const [exactResults, ...prefixResults] = await Promise.all([ - this.#envoy.kvGet(actorId, remainingExactKeys), - ...prefixScans.map((prefix) => - this.#envoy.kvListPrefix(actorId, prefix), - ), - ]); + #buildStartupPreloadMap( + preloadedKv: protocol.PreloadedKv | null, + persistDataOverride?: Uint8Array, + ): { preloadMap: PreloadMap | undefined; entries: number } { + if (preloadedKv == null) { + return { preloadMap: undefined, entries: 0 }; + } - const allExactKeys = [KEYS.PERSIST_DATA, ...remainingExactKeys]; - const entries: [Uint8Array, Uint8Array][] = []; + const entries: [Uint8Array, Uint8Array][] = preloadedKv.entries.map( + (entry) => [new Uint8Array(entry.key), new Uint8Array(entry.value)], + ); - entries.push([KEYS.PERSIST_DATA, persistData]); - for (let i = 0; i < remainingExactKeys.length; i++) { - const value = exactResults[i]; - if (value !== null) { - entries.push([remainingExactKeys[i], value]); + if (persistDataOverride) { + let replaced = false; + for (const entry of entries) { + if (compareBytes(entry[0], KEYS.PERSIST_DATA) === 0) { + entry[1] = persistDataOverride; + replaced = true; + break; + } } - } - for (const prefixEntries of prefixResults) { - for (const entry of prefixEntries) { - entries.push(entry); + + if (!replaced) { + entries.push([KEYS.PERSIST_DATA, persistDataOverride]); } } entries.sort((a, b) => compareBytes(a[0], b[0])); - const requestedGetKeys = allExactKeys.slice().sort(compareBytes); - const requestedPrefixes = prefixScans.slice().sort(compareBytes); - - const preloadMap = createPreloadMap( - entries, 
- requestedGetKeys, - requestedPrefixes, - ); - return { preloadMap, entries: entries.length }; + const requestedGetKeys = preloadedKv.requestedGetKeys + .map((key) => new Uint8Array(key)) + .sort(compareBytes); + const requestedPrefixes = preloadedKv.requestedPrefixes + .map((prefix) => new Uint8Array(prefix)) + .sort(compareBytes); + + return { + preloadMap: createPreloadMap( + entries, + requestedGetKeys, + requestedPrefixes, + ), + entries: entries.length, + }; } async #envoyOnActorStart( @@ -566,6 +608,7 @@ export class EngineActorDriver implements ActorDriver { actorId: string, generation: number, actorConfig: protocol.ActorConfig, + preloadedKv: protocol.PreloadedKv | null, ): Promise { logger().debug({ msg: "engine actor starting", @@ -604,48 +647,61 @@ export class EngineActorDriver implements ActorDriver { const key = deserializeActorKey(actorConfig.key); try { - // Check if this actor already has persisted state. - let checkStart = performance.now(); - const [persistDataBuffer] = await this.#envoy.kvGet(actorId, [ - KEYS.PERSIST_DATA, - ]); - const checkPersistDataMs = performance.now() - checkStart; - - // For new actors there is no existing KV data to preload. 
let preloadMap: PreloadMap | undefined; + let persistDataBuffer: Uint8Array | null | undefined; + let checkPersistDataMs = 0; let initNewActorMs = 0; let preloadKvMs = 0; let preloadKvEntries = 0; - // 1 round-trip for the persist data check - let driverKvRoundTrips = 1; + let driverKvRoundTrips = 0; + + if (preloadedKv) { + const preloadStart = performance.now(); + const preloaded = this.#buildStartupPreloadMap(preloadedKv); + preloadMap = preloaded.preloadMap; + preloadKvEntries = preloaded.entries; + preloadKvMs = performance.now() - preloadStart; + persistDataBuffer = preloadMap?.get(KEYS.PERSIST_DATA)?.value; + logger().debug({ + msg: "received startup kv preload from start command", + actorId, + entries: preloadKvEntries, + durationMs: preloadKvMs, + }); + } + + if (persistDataBuffer === undefined) { + const checkStart = performance.now(); + const [persistData] = await this.#envoy.kvGet(actorId, [ + KEYS.PERSIST_DATA, + ]); + persistDataBuffer = persistData; + checkPersistDataMs = performance.now() - checkStart; + driverKvRoundTrips++; + } if (persistDataBuffer === null) { const initStart = performance.now(); const initialKvState = getInitialActorKvState(input); + const persistData = initialKvState[0]?.[1]; await this.#envoy.kvPut(actorId, initialKvState); initNewActorMs = performance.now() - initStart; driverKvRoundTrips++; + if (preloadedKv && persistData) { + const preloadStart = performance.now(); + const preloaded = this.#buildStartupPreloadMap( + preloadedKv, + persistData, + ); + preloadMap = preloaded.preloadMap; + preloadKvEntries = preloaded.entries; + preloadKvMs += performance.now() - preloadStart; + } logger().debug({ msg: "initialized persist data for new actor", actorId, durationMs: initNewActorMs, }); - } else { - const preloadStart = performance.now(); - const result = await this.#preloadStartupKv( - actorId, - persistDataBuffer, - ); - preloadMap = result.preloadMap; - preloadKvEntries = result.entries; - preloadKvMs = performance.now() - 
preloadStart; - driverKvRoundTrips++; - logger().debug({ - msg: "preloaded startup kv for existing actor", - actorId, - entries: preloadKvEntries, - durationMs: preloadKvMs, - }); } // Create actor instance @@ -727,7 +783,7 @@ export class EngineActorDriver implements ActorDriver { }); try { - this.#envoy.stopActor(actorId, undefined, stringifyError(error)); + this.#envoy.stopActor(actorId, undefined); } catch (stopError) { logger().debug({ msg: "failed to stop actor after start failure", @@ -786,7 +842,11 @@ export class EngineActorDriver implements ActorDriver { if (handler.actor) { try { - await handler.actor.onStop(reason); + if (reason === "crash") { + await handler.actor.debugForceCrash(); + } else { + await handler.actor.onStop(reason); + } } catch (err) { logger().error({ msg: "error in onStop, proceeding with removing actor", @@ -795,6 +855,11 @@ export class EngineActorDriver implements ActorDriver { } } + if (handler.alarmTimeout) { + handler.alarmTimeout.abort(); + handler.alarmTimeout = undefined; + } + this.#actors.delete(actorId); logger().debug({ msg: "engine actor stopped", actorId, reason }); diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/engine/config.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/engine/config.ts index 59972a7a74..d7eef77f79 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/engine/config.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/engine/config.ts @@ -7,10 +7,10 @@ import { ClientConfigSchemaBase, transformClientConfig } from "@/client/config"; * We include the client config since this includes the common properties like endpoint, namespace, etc. */ export const EngineConfigSchemaBase = ClientConfigSchemaBase.extend({ - /** Deprecated. Unique key for this runner. Runners connecting a given key will replace any other runner connected with the same key. */ - runnerKey: z.string().optional(), + /** Deprecated. Unique key for this envoy. 
Envoys connecting with a given key will replace any other envoy connected with the same key. */ + envoyKey: z.string().optional(), - /** How many actors this runner can run. */ + /** How many actors this envoy can run. */ totalSlots: z.number().default(100_000), }); diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/actor.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/actor.ts index 17ba8d9497..600736b847 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/actor.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/actor.ts @@ -1,5 +1,5 @@ import type { AnyClient } from "@/client/client"; -import type { RawDatabaseClient } from "@/db/config"; +import type { NativeSqliteConfig, RawDatabaseClient } from "@/db/config"; import type { ISqliteVfs } from "@rivetkit/sqlite-vfs"; import { type ActorDriver, @@ -65,6 +65,10 @@ export class FileSystemActorDriver implements ActorDriver { return {}; } + getNativeSqliteConfig(_actorId: string): NativeSqliteConfig | undefined { + return this.#state.nativeSqliteConfig; + } + async kvBatchPut( actorId: string, entries: [Uint8Array, Uint8Array][], @@ -131,6 +135,10 @@ export class FileSystemActorDriver implements ActorDriver { await this.#sqlitePool.shutdown(); } + async hardCrashActor(actorId: string): Promise { + await this.#state.hardCrashActor(actorId); + } + async startDestroy(actorId: string): Promise { await this.#state.destroyActor(actorId); } diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/global-state.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/global-state.ts index 2b1377fc53..c977d8773f 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/global-state.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/global-state.ts @@ -4,6 +4,7 @@ import { ActorDuplicateKey, ActorError, ActorNotFound } from "@/actor/errors"; import type { AnyActorInstance } from 
"@/actor/instance/mod"; import type { ActorKey } from "@/actor/mod"; import type { AnyClient } from "@/client/client"; +import type { NativeSqliteConfig } from "@/db/config"; import { type ActorDriver, getInitialActorKvState } from "@/driver-helpers/mod"; import type { RegistryConfig } from "@/registry/config"; import type * as schema from "@/schemas/file-system-driver/mod"; @@ -123,6 +124,7 @@ export class FileSystemGlobalState { #actors = new Map(); #actorCountOnStartup: number = 0; + #nativeSqliteConfig?: NativeSqliteConfig; #runnerParams?: { config: RegistryConfig; @@ -142,6 +144,10 @@ export class FileSystemGlobalState { return this.#actorCountOnStartup; } + get nativeSqliteConfig() { + return this.#nativeSqliteConfig; + } + constructor(options: FileSystemDriverOptions = {}) { const { persist = true, customPath, useNativeSqlite = true } = options; if (!useNativeSqlite) { @@ -205,6 +211,10 @@ export class FileSystemGlobalState { } } + setNativeSqliteConfig(config: NativeSqliteConfig): void { + this.#nativeSqliteConfig = config; + } + getActorStatePath(actorId: string): string { return getNodePath().join(this.#stateDir, actorId); } @@ -798,6 +808,41 @@ export class FileSystemGlobalState { } } + async hardCrashActor(actorId: string): Promise { + const actor = this.#actors.get(actorId); + if (!actor) { + return; + } + + if (this.isActorStopping(actorId)) { + await this.#waitForActorStop(actorId); + return; + } + + if (actor.loadPromise) { + await actor.loadPromise.catch(() => undefined); + } + if (actor.startPromise?.promise) { + await actor.startPromise.promise.catch(() => undefined); + } + + try { + if (actor.alarmTimeout) { + actor.alarmTimeout.abort(); + actor.alarmTimeout = undefined; + } + + if (actor.actor) { + await actor.actor.debugForceCrash(); + } + } finally { + this.#closeActorKvDatabase(actorId); + actor.stopPromise?.resolve(); + actor.stopPromise = undefined; + this.#actors.delete(actorId); + } + } + /** * Save actor state to disk. 
*/ @@ -1367,13 +1412,12 @@ export class FileSystemGlobalState { ): Promise { await this.loadActor(actorId); await this.#withActorWrite(actorId, async (entry) => { - if (!entry.state) { - if (this.isActorStopping(actorId)) { - return; - } - throw new Error(`Actor ${actorId} state not loaded`); + if (!entry.state && this.isActorStopping(actorId)) { + return; } + // KV database is independent of actor state and may be written + // during actor creation (e.g. native SQLite via KV channel). const db = this.#getOrCreateActorKvDatabase(actorId); const totalSize = estimateKvSize(db); validateKvEntries(entries, totalSize); @@ -1388,15 +1432,8 @@ export class FileSystemGlobalState { actorId: string, keys: Uint8Array[], ): Promise<(Uint8Array | null)[]> { - const entry = await this.loadActor(actorId); + await this.loadActor(actorId); await this.#waitForPendingWrite(actorId); - if (!entry.state) { - if (this.isActorStopping(actorId)) { - throw new Error(`Actor ${actorId} is stopping`); - } else { - throw new Error(`Actor ${actorId} state not loaded`); - } - } validateKvKeys(keys); @@ -1422,11 +1459,8 @@ export class FileSystemGlobalState { async kvBatchDelete(actorId: string, keys: Uint8Array[]): Promise { await this.loadActor(actorId); await this.#withActorWrite(actorId, async (entry) => { - if (!entry.state) { - if (this.isActorStopping(actorId)) { - return; - } - throw new Error(`Actor ${actorId} state not loaded`); + if (!entry.state && this.isActorStopping(actorId)) { + return; } if (keys.length === 0) { @@ -1462,11 +1496,8 @@ export class FileSystemGlobalState { ): Promise { await this.loadActor(actorId); await this.#withActorWrite(actorId, async (entry) => { - if (!entry.state) { - if (this.isActorStopping(actorId)) { - return; - } - throw new Error(`Actor ${actorId} state not loaded`); + if (!entry.state && this.isActorStopping(actorId)) { + return; } validateKvKey(start, "start key"); @@ -1491,15 +1522,8 @@ export class FileSystemGlobalState { limit?: number; }, ): 
Promise<[Uint8Array, Uint8Array][]> { - const entry = await this.loadActor(actorId); + await this.loadActor(actorId); await this.#waitForPendingWrite(actorId); - if (!entry.state) { - if (this.isActorStopping(actorId)) { - throw new Error(`Actor ${actorId} is destroying`); - } else { - throw new Error(`Actor ${actorId} state not loaded`); - } - } validateKvKey(prefix, "prefix key"); const db = this.#getOrCreateActorKvDatabase(actorId); diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/kv-limits.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/kv-limits.ts index 20a2270e41..d0ed666b90 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/kv-limits.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/kv-limits.ts @@ -1,5 +1,19 @@ import type { SqliteRuntimeDatabase } from "./sqlite-runtime"; +export class KvStorageQuotaExceededError extends Error { + readonly remaining: number; + readonly payloadSize: number; + + constructor(remaining: number, payloadSize: number) { + super( + `not enough space left in storage (${remaining} bytes remaining, current payload is ${payloadSize} bytes)`, + ); + this.name = "KvStorageQuotaExceededError"; + this.remaining = remaining; + this.payloadSize = payloadSize; + } +} + // Keep these limits in sync with engine/packages/pegboard/src/actor_kv/mod.rs. 
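The structured `KvStorageQuotaExceededError` introduced above lets callers branch on quota failures without string-matching the message. A minimal standalone sketch of that pattern (the class is re-declared here so it runs on its own; the `toErrorCode` mapping is illustrative, not the repo's exact handler):

```typescript
// Sketch: branching on the structured quota error. In the repo this class
// lives in kv-limits.ts and would be imported, not re-declared.
class KvStorageQuotaExceededError extends Error {
	readonly remaining: number;
	readonly payloadSize: number;

	constructor(remaining: number, payloadSize: number) {
		super(
			`not enough space left in storage (${remaining} bytes remaining, current payload is ${payloadSize} bytes)`,
		);
		this.name = "KvStorageQuotaExceededError";
		this.remaining = remaining;
		this.payloadSize = payloadSize;
	}
}

// Map a thrown error to a wire-level error code. The quota code name
// matches the one listed in the kv-channel handler's error comment; the
// fallback is an assumption for this sketch.
function toErrorCode(err: unknown): string {
	if (err instanceof KvStorageQuotaExceededError) {
		return "storage_quota_exceeded";
	}
	return "internal_error";
}
```

Carrying `remaining` and `payloadSize` as fields means the transport layer can surface precise numbers to the client while the message string stays a human-readable detail.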
const KV_MAX_KEY_SIZE = 2 * 1024; const KV_MAX_VALUE_SIZE = 128 * 1024; @@ -54,9 +68,7 @@ export function validateKvEntries( const storageRemaining = Math.max(0, KV_MAX_STORAGE_SIZE - totalSize); if (payloadSize > storageRemaining) { - throw new Error( - `not enough space left in storage (${storageRemaining} bytes remaining, current payload is ${payloadSize} bytes)`, - ); + throw new KvStorageQuotaExceededError(storageRemaining, payloadSize); } for (const [key, value] of entries) { diff --git a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/manager.ts b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/manager.ts index 11f71a8eb5..e9b984bb9f 100644 --- a/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/manager.ts +++ b/rivetkit-typescript/packages/rivetkit/src/drivers/file-system/manager.ts @@ -6,6 +6,7 @@ import { routeWebSocket } from "@/actor/router-websocket-endpoints"; import { createClientWithDriver } from "@/client/client"; import { createInlineWebSocket } from "@/common/inline-websocket-adapter"; import { noopNext } from "@/common/utils"; +import type { NativeSqliteConfig } from "@/db/config"; import { resolveGatewayTarget, type ActorDriver, @@ -35,6 +36,7 @@ export class FileSystemManagerDriver implements ManagerDriver { #actorDriver: ActorDriver; #actorRouter: ActorRouter; + #kvChannelShutdown: (() => void) | null = null; constructor( config: RegistryConfig, @@ -187,6 +189,14 @@ export class FileSystemManagerDriver implements ManagerDriver { throw new Error("unreachable: unknown gateway target type"); } + async hardCrashActor(actorId: string): Promise { + await this.#actorDriver.hardCrashActor?.(actorId); + } + + setNativeSqliteConfig(config: NativeSqliteConfig): void { + this.#state.setNativeSqliteConfig(config); + } + async getForId({ actorId, }: GetForIdInput): Promise { @@ -281,6 +291,32 @@ export class FileSystemManagerDriver implements ManagerDriver { : null; } + async kvBatchGet( + actorId: string, + keys: 
Uint8Array[], + ): Promise<(Uint8Array | null)[]> { + return await this.#state.kvBatchGet(actorId, keys); + } + + async kvBatchPut( + actorId: string, + entries: [Uint8Array, Uint8Array][], + ): Promise<void> { + await this.#state.kvBatchPut(actorId, entries); + } + + async kvBatchDelete(actorId: string, keys: Uint8Array[]): Promise<void> { + await this.#state.kvBatchDelete(actorId, keys); + } + + async kvDeleteRange( + actorId: string, + start: Uint8Array, + end: Uint8Array, + ): Promise<void> { + await this.#state.kvDeleteRange(actorId, start, end); + } + displayInformation(): ManagerDisplayInformation { return { properties: { @@ -303,6 +339,14 @@ export class FileSystemManagerDriver implements ManagerDriver { this.#getUpgradeWebSocket = getUpgradeWebSocket; } + setKvChannelShutdown(fn: () => void): void { + this.#kvChannelShutdown = fn; + } + + shutdown(): void { + this.#kvChannelShutdown?.(); + this.#kvChannelShutdown = null; + } } function actorStateToOutput(state: schema.ActorState): ActorOutput { diff --git a/rivetkit-typescript/packages/rivetkit/src/manager/driver.ts b/rivetkit-typescript/packages/rivetkit/src/manager/driver.ts index fc288a2636..44a599d6a7 100644 --- a/rivetkit-typescript/packages/rivetkit/src/manager/driver.ts +++ b/rivetkit-typescript/packages/rivetkit/src/manager/driver.ts @@ -1,5 +1,6 @@ import type { Hono, Context as HonoContext } from "hono"; import type { ActorKey, Encoding, UniversalWebSocket } from "@/actor/mod"; +import type { NativeSqliteConfig } from "@/db/config"; import type { RegistryConfig } from "@/registry/config"; import type { GetUpgradeWebSocket } from "@/utils"; import type { ActorQuery, CrashPolicy } from "./protocol/query"; @@ -55,8 +56,52 @@ export interface ManagerDriver { **/ setGetUpgradeWebSocket(getUpgradeWebSocket: GetUpgradeWebSocket): void; + /** + * Clean shutdown of manager resources (timers, lock tables, etc.). + * Called after all actors have stopped. 
+ */ + shutdown?(): void; + + /** + * Inject the KV channel shutdown callback. Called by the manager + * router so the driver can invoke it during shutdown. + */ + setKvChannelShutdown?(fn: () => void): void; + + /** + * Test-only helper that simulates an abrupt actor crash. + */ + hardCrashActor?(actorId: string): Promise<void>; + + /** + * Inject native SQLite connection settings for driver-created actors. + */ + setNativeSqliteConfig?(config: NativeSqliteConfig): void; + /** Read a key. Returns null if the key doesn't exist. */ kvGet(actorId: string, key: Uint8Array): Promise<Uint8Array | null>; + + /** Batch get KV entries. Returns null for keys that don't exist. */ + kvBatchGet( + actorId: string, + keys: Uint8Array[], + ): Promise<(Uint8Array | null)[]>; + + /** Batch put KV entries. */ + kvBatchPut( + actorId: string, + entries: [Uint8Array, Uint8Array][], + ): Promise<void>; + + /** Batch delete KV entries. */ + kvBatchDelete(actorId: string, keys: Uint8Array[]): Promise<void>; + + /** Delete KV entries in the half-open range [start, end). 
*/ + kvDeleteRange( + actorId: string, + start: Uint8Array, + end: Uint8Array, + ): Promise<void>; } export interface ManagerDisplayInformation { diff --git a/rivetkit-typescript/packages/rivetkit/src/manager/gateway.ts b/rivetkit-typescript/packages/rivetkit/src/manager/gateway.ts index fecc1da0f8..dae9de22b4 100644 --- a/rivetkit-typescript/packages/rivetkit/src/manager/gateway.ts +++ b/rivetkit-typescript/packages/rivetkit/src/manager/gateway.ts @@ -148,6 +148,11 @@ export async function actorGateway( return next(); } + // Skip KV channel routes - handled by the dedicated KV channel endpoint + if (c.req.path.endsWith("/kv/connect")) { + return next(); + } + // Strip basePath from the request path let strippedPath = c.req.path; if ( diff --git a/rivetkit-typescript/packages/rivetkit/src/manager/kv-channel.ts b/rivetkit-typescript/packages/rivetkit/src/manager/kv-channel.ts new file mode 100644 index 0000000000..aa95b0ab22 --- /dev/null +++ b/rivetkit-typescript/packages/rivetkit/src/manager/kv-channel.ts @@ -0,0 +1,709 @@ +// KV Channel WebSocket handler for the local dev manager. +// +// Serves the /kv/connect endpoint that the native SQLite addon +// (rivetkit-typescript/packages/sqlite-native/) connects to for +// KV-backed database I/O. See docs-internal/engine/NATIVE_SQLITE_DATA_CHANNEL.md +// for the full specification. + +import type { WSContext } from "hono/ws"; +import { + PROTOCOL_VERSION, + type ToServer, + type ToClient, + type RequestData, + type ResponseData, + type ToServerRequest, + decodeToServer, + encodeToClient, +} from "@rivetkit/engine-kv-channel-protocol"; +import { KvStorageQuotaExceededError } from "@/drivers/file-system/kv-limits"; +import type { ManagerDriver } from "./driver"; +import { logger } from "./log"; + +// Ping every 3 seconds, close if no pong within 15 seconds. +// Matches runner protocol defaults (runner_update_ping_interval_ms=3000, +// runner_ping_timeout_ms=15000 in engine/packages/config/src/config/pegboard.rs). 
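The heartbeat policy described in the comment above (ping every 3 s, close after 15 s without a pong) reduces to a staleness check evaluated at each ping tick. A minimal sketch of just that rule, with constants restated locally:

```typescript
// Sketch of the heartbeat rule: at each ping tick, a connection is treated
// as dead when the last observed pong is older than the pong deadline.
// Values mirror PING_INTERVAL_MS / PONG_TIMEOUT_MS defined in this file.
const PING_EVERY_MS = 3_000;
const PONG_DEADLINE_MS = 15_000;

function isConnectionDead(nowMs: number, lastPongTs: number): boolean {
	return nowMs - lastPongTs > PONG_DEADLINE_MS;
}
```

Because the deadline is five ping intervals, several individual pongs can be lost before the connection is dropped, which tolerates transient network hiccups without keeping truly dead sockets alive.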
+const PING_INTERVAL_MS = 3_000; +const PONG_TIMEOUT_MS = 15_000; + +// Maximum actors a single connection can open. Prevents unbounded memory growth. +const MAX_ACTORS_PER_CONNECTION = 1_000; + +// Sweep interval for removing stale lock entries from dead connections. +const STALE_LOCK_SWEEP_INTERVAL_MS = 60_000; + +/** Per-connection state for the KV channel WebSocket. */ +interface KvChannelConnection { + /** Actor IDs locked by this connection. */ + openActors: Set<string>; + + /** Timer for sending pings. */ + pingInterval: ReturnType<typeof setInterval> | null; + + /** Timer for detecting pong timeout. */ + pongTimeout: ReturnType<typeof setTimeout> | null; + + /** Timestamp of the last pong received. */ + lastPongTs: number; + + /** Whether the connection has been closed. */ + closed: boolean; + + /** Reference to the WebSocket context for sending messages. */ + ws: WSContext | null; + + /** Per-actor request queues for sequential execution. */ + actorQueues: Map<string, Promise<void>>; +} + +/** Instance-scoped state for a KV channel manager. */ +interface KvChannelManagerState { + actorLocks: Map<string, KvChannelConnection>; + activeConnections: Set<KvChannelConnection>; + staleLockSweepTimer: ReturnType<typeof setInterval> | null; +} + +/** Return type of createKvChannelManager. */ +export interface KvChannelManager { + createHandler: (managerDriver: ManagerDriver) => { + onOpen: (event: any, ws: WSContext) => void; + onMessage: (event: any, ws: WSContext) => void; + onClose: (event: any, ws: WSContext) => void; + onError: (error: any, ws: WSContext) => void; + }; + shutdown: () => void; + _testForceCloseAllKvChannels: () => number; +} + +/** + * Create an instance-scoped KV channel manager. + * + * All lock state and timers are scoped to the returned object, so multiple + * manager instances in the same process (e.g., tests) do not share state. 
+ */ +export function createKvChannelManager(): KvChannelManager { + const state: KvChannelManagerState = { + actorLocks: new Map(), + activeConnections: new Set(), + staleLockSweepTimer: null, + }; + + return { + createHandler(managerDriver: ManagerDriver) { + const conn: KvChannelConnection = { + openActors: new Set(), + pingInterval: null, + pongTimeout: null, + lastPongTs: Date.now(), + closed: false, + ws: null, + actorQueues: new Map(), + }; + + state.activeConnections.add(conn); + + return { + onOpen: (_event: any, ws: WSContext) => { + logger().debug({ msg: "kv channel websocket opened" }); + conn.ws = ws; + startPingPong(state, conn); + }, + + onMessage: (event: any, _ws: WSContext) => { + try { + let bytes: Uint8Array; + if (event.data instanceof ArrayBuffer) { + bytes = new Uint8Array(event.data); + } else if (event.data instanceof Uint8Array) { + bytes = event.data; + } else if (Buffer.isBuffer(event.data)) { + bytes = new Uint8Array(event.data); + } else { + logger().warn({ + msg: "kv channel received non-binary message, ignoring", + }); + return; + } + + const msg = decodeToServer(bytes); + handleToServerMessage(state, conn, managerDriver, msg); + } catch (err: unknown) { + logger().error({ + msg: "kv channel failed to decode message", + error: + err instanceof Error + ? err.message + : String(err), + }); + } + }, + + onClose: (_event: any, _ws: WSContext) => { + logger().debug({ msg: "kv channel websocket closed" }); + cleanupConnection(state, conn); + }, + + onError: (error: any, _ws: WSContext) => { + logger().error({ + msg: "kv channel websocket error", + error: + error instanceof Error + ? 
error.message + : String(error), + }); + cleanupConnection(state, conn); + }, + }; + }, + + shutdown() { + if (state.staleLockSweepTimer) { + clearInterval(state.staleLockSweepTimer); + state.staleLockSweepTimer = null; + } + state.actorLocks.clear(); + state.activeConnections.clear(); + }, + + _testForceCloseAllKvChannels() { + let closed = 0; + for (const conn of state.activeConnections) { + if (!conn.closed && conn.ws) { + const ws = conn.ws; + cleanupConnection(state, conn); + ws.close(1001, "test force disconnect"); + closed++; + } + } + return closed; + }, + }; +} + +function makeErrorResponse( + requestId: number, + code: string, + message: string, +): ToClient { + return { + tag: "ToClientResponse", + val: { + requestId, + data: { + tag: "ErrorResponse", + val: { code, message }, + }, + }, + }; +} + +function makeResponse(requestId: number, data: ResponseData): ToClient { + return { + tag: "ToClientResponse", + val: { requestId, data }, + }; +} + +function sendMessage(conn: KvChannelConnection, msg: ToClient): void { + if (conn.closed || !conn.ws) return; + const bytes = encodeToClient(msg); + // Copy to a fresh ArrayBuffer to satisfy WSContext.send() parameter type. + const copy = new ArrayBuffer(bytes.byteLength); + new Uint8Array(copy).set(bytes); + conn.ws.send(copy); +} + +function startPingPong( + state: KvChannelManagerState, + conn: KvChannelConnection, +): void { + conn.lastPongTs = Date.now(); + + conn.pingInterval = setInterval(() => { + if (conn.closed || !conn.ws) return; + + const ts = BigInt(Date.now()); + sendMessage(conn, { + tag: "ToClientPing", + val: { ts }, + }); + + // Check if the last pong was too long ago. + if (Date.now() - conn.lastPongTs > PONG_TIMEOUT_MS) { + logger().warn({ + msg: "kv channel pong timeout, closing connection", + }); + // Capture ws before cleanup nulls it. 
+ const ws = conn.ws; + cleanupConnection(state, conn); + if (ws) { + ws.close(1000, "pong timeout"); + } + } + }, PING_INTERVAL_MS); +} + +function cleanupConnection( + state: KvChannelManagerState, + conn: KvChannelConnection, +): void { + conn.closed = true; + conn.ws = null; + state.activeConnections.delete(conn); + + if (conn.pingInterval) { + clearInterval(conn.pingInterval); + conn.pingInterval = null; + } + if (conn.pongTimeout) { + clearTimeout(conn.pongTimeout); + conn.pongTimeout = null; + } + + // Release all actor locks held by this connection. + for (const actorId of conn.openActors) { + if (state.actorLocks.get(actorId) === conn) { + state.actorLocks.delete(actorId); + } + } + conn.openActors.clear(); +} + +async function handleRequest( + state: KvChannelManagerState, + conn: KvChannelConnection, + managerDriver: ManagerDriver, + request: ToServerRequest, +): Promise { + const { requestId, actorId, data } = request; + + try { + const responseData = await processRequestData( + state, + conn, + managerDriver, + actorId, + data, + ); + sendMessage(conn, makeResponse(requestId, responseData)); + } catch (err: unknown) { + // Log the full error server-side but return a generic message to the + // client to avoid leaking internal details. Specific known error codes + // (actor_not_open, actor_locked, storage_quota_exceeded, etc.) are + // returned as structured responses before reaching this catch block. + logger().error({ + msg: "kv channel request error", + requestId, + actorId, + error: err instanceof Error ? err.message : String(err), + }); + sendMessage( + conn, + makeErrorResponse(requestId, "internal_error", "internal error"), + ); + } +} + +// Defense-in-depth: in the engine KV channel, resolve_actor verifies the actor +// belongs to the authenticated namespace. The local dev manager is +// single-namespace, so all actors implicitly belong to the same namespace and +// no cross-namespace access is possible. 
If a less-privileged auth mechanism is +// introduced for the dev manager, namespace verification should be added here. +async function processRequestData( + state: KvChannelManagerState, + conn: KvChannelConnection, + managerDriver: ManagerDriver, + actorId: string, + data: RequestData, +): Promise { + switch (data.tag) { + case "ActorOpenRequest": + return handleActorOpen(state, conn, actorId); + + case "ActorCloseRequest": + return handleActorClose(state, conn, actorId); + + case "KvGetRequest": + case "KvPutRequest": + case "KvDeleteRequest": + case "KvDeleteRangeRequest": { + // All KV operations require the actor to be open on this connection. + const lockHolder = state.actorLocks.get(actorId); + if (!lockHolder || lockHolder !== conn) { + if (lockHolder && lockHolder !== conn) { + return { + tag: "ErrorResponse", + val: { + code: "actor_locked", + message: `actor ${actorId} is locked by another connection`, + }, + }; + } + return { + tag: "ErrorResponse", + val: { + code: "actor_not_open", + message: `actor ${actorId} is not open on this connection`, + }, + }; + } + return await handleKvOperation(managerDriver, actorId, data); + } + } +} + +function handleActorOpen( + state: KvChannelManagerState, + conn: KvChannelConnection, + actorId: string, +): ResponseData { + // Reject if this connection already has too many actors open. + if (conn.openActors.size >= MAX_ACTORS_PER_CONNECTION) { + return { + tag: "ErrorResponse", + val: { + code: "too_many_actors", + message: `connection has too many open actors (max ${MAX_ACTORS_PER_CONNECTION})`, + }, + }; + } + + const existingLock = state.actorLocks.get(actorId); + if (existingLock && existingLock !== conn) { + // Unconditionally evict the old connection's lock. The old connection + // is either dead (network issue) or stale (same process reconnecting). + // Remove the actor from the old connection's openActors so its next KV + // request fails the fast-path check immediately with actor_not_open. 
+ existingLock.openActors.delete(actorId); + logger().info({ + msg: "kv channel evicting actor lock from old connection", + actorId, + }); + } + + state.actorLocks.set(actorId, conn); + conn.openActors.add(actorId); + + // Start the stale lock sweep if not already running. + ensureStaleLockSweep(state); + + return { tag: "ActorOpenResponse", val: null }; +} + +function handleActorClose( + state: KvChannelManagerState, + conn: KvChannelConnection, + actorId: string, +): ResponseData { + if (state.actorLocks.get(actorId) === conn) { + state.actorLocks.delete(actorId); + } + conn.openActors.delete(actorId); + + return { tag: "ActorCloseResponse", val: null }; +} + +/** Start the stale lock sweep if not already running. */ +function ensureStaleLockSweep(state: KvChannelManagerState): void { + if (state.staleLockSweepTimer) return; + state.staleLockSweepTimer = setInterval(() => { + let removed = 0; + for (const [actorId, conn] of state.actorLocks) { + if (conn.closed) { + state.actorLocks.delete(actorId); + removed++; + } + } + if (removed > 0) { + logger().debug({ + msg: "kv channel stale lock sweep completed", + removedCount: removed, + remainingCount: state.actorLocks.size, + }); + } + // Stop the sweep if there are no more lock entries. + if (state.actorLocks.size === 0 && state.staleLockSweepTimer) { + clearInterval(state.staleLockSweepTimer); + state.staleLockSweepTimer = null; + } + }, STALE_LOCK_SWEEP_INTERVAL_MS); + // Allow the process to exit even if the sweep timer is still running. 
+	state.staleLockSweepTimer.unref?.();
+}
+
+type KvRequestData = Extract<
+	RequestData,
+	| { readonly tag: "KvGetRequest" }
+	| { readonly tag: "KvPutRequest" }
+	| { readonly tag: "KvDeleteRequest" }
+	| { readonly tag: "KvDeleteRangeRequest" }
+>;
+
+async function handleKvOperation(
+	managerDriver: ManagerDriver,
+	actorId: string,
+	data: KvRequestData,
+): Promise<ResponseData> {
+	switch (data.tag) {
+		case "KvGetRequest": {
+			const keys = data.val.keys.map(
+				(k) => new Uint8Array(k),
+			);
+
+			// Validate key count.
+			if (keys.length > 128) {
+				return {
+					tag: "ErrorResponse",
+					val: {
+						code: "batch_too_large",
+						message: "a maximum of 128 keys is allowed",
+					},
+				};
+			}
+
+			// Validate individual key sizes.
+			for (const key of keys) {
+				if (key.byteLength + 2 > 2048) {
+					return {
+						tag: "ErrorResponse",
+						val: {
+							code: "key_too_large",
+							message: "key is too long (max 2048 bytes)",
+						},
+					};
+				}
+			}
+
+			const results = await managerDriver.kvBatchGet(actorId, keys);
+
+			// Return only found keys and values.
+			const foundKeys: ArrayBuffer[] = [];
+			const foundValues: ArrayBuffer[] = [];
+			for (let i = 0; i < keys.length; i++) {
+				const val = results[i];
+				if (val !== null) {
+					foundKeys.push(new Uint8Array(keys[i]).buffer as ArrayBuffer);
+					foundValues.push(new Uint8Array(val).buffer as ArrayBuffer);
+				}
+			}
+
+			return {
+				tag: "KvGetResponse",
+				val: { keys: foundKeys, values: foundValues },
+			};
+		}
+
+		case "KvPutRequest": {
+			const keys = data.val.keys.map(
+				(k) => new Uint8Array(k),
+			);
+			const values = data.val.values.map(
+				(v) => new Uint8Array(v),
+			);
+
+			if (keys.length !== values.length) {
+				return {
+					tag: "ErrorResponse",
+					val: {
+						code: "keys_values_length_mismatch",
+						message:
+							"keys and values arrays must have the same length",
+					},
+				};
+			}
+
+			if (keys.length > 128) {
+				return {
+					tag: "ErrorResponse",
+					val: {
+						code: "batch_too_large",
+						message:
+							"a maximum of 128 key-value entries is allowed",
+					},
+				};
+			}
+
+			// Validate sizes.
+ let payloadSize = 0; + for (let i = 0; i < keys.length; i++) { + if (keys[i].byteLength + 2 > 2048) { + return { + tag: "ErrorResponse", + val: { + code: "key_too_large", + message: "key is too long (max 2048 bytes)", + }, + }; + } + if (values[i].byteLength > 128 * 1024) { + return { + tag: "ErrorResponse", + val: { + code: "value_too_large", + message: "value is too large (max 128 KiB)", + }, + }; + } + payloadSize += + keys[i].byteLength + 2 + values[i].byteLength; + } + + if (payloadSize > 976 * 1024) { + return { + tag: "ErrorResponse", + val: { + code: "payload_too_large", + message: + "total payload is too large (max 976 KiB)", + }, + }; + } + + const entries: [Uint8Array, Uint8Array][] = keys.map( + (k, i) => [k, values[i]], + ); + + try { + await managerDriver.kvBatchPut(actorId, entries); + } catch (err: unknown) { + if (err instanceof KvStorageQuotaExceededError) { + return { + tag: "ErrorResponse", + val: { + code: "storage_quota_exceeded", + message: err.message, + }, + }; + } + throw err; + } + + return { tag: "KvPutResponse", val: null }; + } + + case "KvDeleteRequest": { + const keys = data.val.keys.map( + (k) => new Uint8Array(k), + ); + + if (keys.length > 128) { + return { + tag: "ErrorResponse", + val: { + code: "batch_too_large", + message: "a maximum of 128 keys is allowed", + }, + }; + } + + for (const key of keys) { + if (key.byteLength + 2 > 2048) { + return { + tag: "ErrorResponse", + val: { + code: "key_too_large", + message: "key is too long (max 2048 bytes)", + }, + }; + } + } + + await managerDriver.kvBatchDelete(actorId, keys); + + return { tag: "KvDeleteResponse", val: null }; + } + + case "KvDeleteRangeRequest": { + const start = new Uint8Array(data.val.start); + const end = new Uint8Array(data.val.end); + + if (start.byteLength + 2 > 2048) { + return { + tag: "ErrorResponse", + val: { + code: "key_too_large", + message: "start key is too long (max 2048 bytes)", + }, + }; + } + if (end.byteLength + 2 > 2048) { + return { + tag: 
"ErrorResponse", + val: { + code: "key_too_large", + message: "end key is too long (max 2048 bytes)", + }, + }; + } + + await managerDriver.kvDeleteRange(actorId, start, end); + + return { tag: "KvDeleteResponse", val: null }; + } + + default: { + // Should never happen since processRequestData routes only KV tags here. + const _exhaustive: never = data; + throw new Error(`unexpected request tag`); + } + } +} + +function handleToServerMessage( + state: KvChannelManagerState, + conn: KvChannelConnection, + managerDriver: ManagerDriver, + msg: ToServer, +): void { + switch (msg.tag) { + case "ToServerRequest": { + const { actorId } = msg.val; + + // Chain requests per actor so they execute sequentially, + // preventing journal write ordering violations. Cross-actor + // requests still execute concurrently since each actor has its + // own queue. See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md H2. + const prev = conn.actorQueues.get(actorId) ?? Promise.resolve(); + const next = prev.then(() => + handleRequest(state, conn, managerDriver, msg.val).catch( + (err) => { + logger().error({ + msg: "unhandled error in kv channel request handler", + error: + err instanceof Error + ? err.message + : String(err), + }); + }, + ), + ); + conn.actorQueues.set(actorId, next); + + // Clean up the queue entry once it settles to avoid unbounded map growth. + next.then(() => { + if (conn.actorQueues.get(actorId) === next) { + conn.actorQueues.delete(actorId); + } + }); + break; + } + + case "ToServerPong": + conn.lastPongTs = Date.now(); + break; + } +} + +/** Validate the protocol version query parameter. Returns an error string or null. 
*/ +export function validateProtocolVersion( + protocolVersion: string | undefined, +): string | null { + if (!protocolVersion) { + return "missing protocol_version query parameter"; + } + const version = Number.parseInt(protocolVersion, 10); + if (Number.isNaN(version) || version !== PROTOCOL_VERSION) { + return `unsupported protocol_version: ${protocolVersion} (server supports ${PROTOCOL_VERSION})`; + } + return null; +} diff --git a/rivetkit-typescript/packages/rivetkit/src/manager/router.ts b/rivetkit-typescript/packages/rivetkit/src/manager/router.ts index c05ede00b9..e876081411 100644 --- a/rivetkit-typescript/packages/rivetkit/src/manager/router.ts +++ b/rivetkit-typescript/packages/rivetkit/src/manager/router.ts @@ -48,6 +48,10 @@ import { } from "@/utils/router"; import type { ActorOutput, ManagerDriver } from "./driver"; import { actorGateway, createTestWebSocketProxy } from "./gateway"; +import { + createKvChannelManager, + validateProtocolVersion, +} from "./kv-channel"; import { logger } from "./log"; export function buildManagerRouter( @@ -56,6 +60,12 @@ export function buildManagerRouter( getUpgradeWebSocket: GetUpgradeWebSocket | undefined, runtime: Runtime = "node", ) { + const kvChannelManager = createKvChannelManager(); + + // Inject the KV channel shutdown into the driver so it can be + // called during the driver's teardown, after all actors have stopped. + managerDriver.setKvChannelShutdown?.(kvChannelManager.shutdown); + return createRouter(config.managerBasePath, (router) => { // Actor gateway router.use( @@ -355,6 +365,53 @@ export function buildManagerRouter( }); } + // GET /kv/connect - KV channel WebSocket endpoint for native SQLite + router.get("/kv/connect", async (c) => { + // Validate authentication. 
+ if (isDev() && !config.token) { + logger().warn({ + msg: "RIVET_TOKEN is not set, skipping KV channel auth in development mode", + }); + } else { + const token = c.req.query("token"); + if (!config.token) { + return c.text("KV channel requires RIVET_TOKEN to be set", 403); + } + if ( + !token || + timingSafeEqual(config.token, token) === false + ) { + return c.json( + { + error: { + code: "unauthorized", + message: "invalid or missing authentication token", + }, + }, + 401, + ); + } + } + + // Validate protocol version. + const versionError = validateProtocolVersion( + c.req.query("protocol_version"), + ); + if (versionError) { + return c.text(versionError, 400); + } + + // Upgrade to WebSocket. + const upgradeWebSocket = getUpgradeWebSocket?.(); + if (!upgradeWebSocket) { + return c.text("WebSocket upgrades not supported on this platform", 500); + } + + return upgradeWebSocket(() => + kvChannelManager.createHandler(managerDriver), + )(c, noopNext()); + }); + // TODO: // // DELETE /actors/{actor_id} // { @@ -585,6 +642,13 @@ export function buildManagerRouter( return c.text(`Error: ${error}`, 500); } }); + + // Force-close all KV channel WebSocket connections. Used by + // stress tests to simulate network failures mid-operation. + router.post("/.test/kv-channel/force-disconnect", async (c) => { + const closed = kvChannelManager._testForceCloseAllKvChannels(); + return c.json({ closed }); + }); } if (config.inspector.enabled) { diff --git a/rivetkit-typescript/packages/rivetkit/src/registry/config/envoy.ts b/rivetkit-typescript/packages/rivetkit/src/registry/config/envoy.ts index b2ec654cc4..d347bd6f79 100644 --- a/rivetkit-typescript/packages/rivetkit/src/registry/config/envoy.ts +++ b/rivetkit-typescript/packages/rivetkit/src/registry/config/envoy.ts @@ -28,7 +28,7 @@ export const EnvoyConfigSchema = z.object({ // Deprecated. totalSlots: z.number().default(() => getRivetTotalSlots() ?? 
100000),
-	runnerKey: z.string().optional(),
+	envoyKey: z.string().optional(),
 });
 export type EnvoyConfigInput = z.input<typeof EnvoyConfigSchema>;
 export type EnvoyConfig = z.infer<typeof EnvoyConfigSchema>;
diff --git a/rivetkit-typescript/packages/rivetkit/src/registry/config/index.ts b/rivetkit-typescript/packages/rivetkit/src/registry/config/index.ts
index f04d240a81..51495b8c3c 100644
--- a/rivetkit-typescript/packages/rivetkit/src/registry/config/index.ts
+++ b/rivetkit-typescript/packages/rivetkit/src/registry/config/index.ts
@@ -1,6 +1,12 @@
 import { z } from "zod";
 import { getRunMetadata } from "@/actor/config";
 import type { ActorDefinition, AnyActorDefinition } from "@/actor/definition";
+import {
+	KEYS,
+	queueMetadataKey,
+	sqliteStoragePrefix,
+	workflowStoragePrefix,
+} from "@/actor/instance/keys";
 import { type Logger, LogLevelSchema } from "@/common/log";
 import { ENGINE_ENDPOINT } from "@/engine-process/constants";
 import { InspectorConfigSchema } from "@/inspector/config";
@@ -218,7 +224,7 @@
 			});
 		}
 
-		// configurerPool requires an engine (via endpoint or spawnEngine)
+		// configurePool requires an engine (via endpoint or spawnEngine)
 		if (
 			config.serverless.configurePool &&
 			!parsedEndpoint &&
@@ -321,6 +327,30 @@
 	// Actor options take precedence over run metadata
 	metadata.icon = options.icon ?? runMeta.icon;
 	metadata.name = options.name ?? runMeta.name;
+	metadata.preload = {
+		keys: [
+			Array.from(KEYS.PERSIST_DATA),
+			Array.from(KEYS.INSPECTOR_TOKEN),
+			Array.from(queueMetadataKey()),
+		],
+		prefixes: [
+			{
+				prefix: Array.from(sqliteStoragePrefix()),
+				maxBytes: options.preloadMaxSqliteBytes ?? 786_432,
+				partial: true,
+			},
+			{
+				prefix: Array.from(workflowStoragePrefix()),
+				maxBytes: options.preloadMaxWorkflowBytes ?? 131_072,
+				partial: false,
+			},
+			{
+				prefix: Array.from(KEYS.CONN_PREFIX),
+				maxBytes: options.preloadMaxConnectionsBytes ??
65_536, + partial: false, + }, + ], + }; // Remove undefined values if (!metadata.icon) delete metadata.icon; if (!metadata.name) delete metadata.name; @@ -440,26 +470,26 @@ export const DocServerlessConfigSchema = z }) .describe("Configuration for serverless deployment mode."); -export const DocRunnerConfigSchema = z +export const DocEnvoyConfigSchema = z .object({ totalSlots: z .number() .optional() .describe("Total number of actor slots available. Default: 100000"), - runnerName: z + poolName: z .string() .optional() - .describe("Name of this runner. Default: 'default'"), - runnerKey: z + .describe("Name of this envoy pool. Default: 'default'"), + envoyKey: z .string() .optional() - .describe("Deprecated. Authentication key for the runner."), + .describe("Deprecated. Authentication key for the envoy."), version: z .number() .optional() - .describe("Version number of this runner. Default: 1"), + .describe("Version number of this envoy. Default: 1"), }) - .describe("Configuration for runner mode."); + .describe("Configuration for envoy mode."); export const DocRegistryConfigSchema = z .object({ @@ -563,6 +593,6 @@ export const DocRegistryConfigSchema = z .describe("Port to run the manager on. 
Default: 6420"),
 		inspector: DocInspectorConfigSchema,
 		serverless: DocServerlessConfigSchema.optional(),
-		runner: DocRunnerConfigSchema.optional(),
+		envoy: DocEnvoyConfigSchema.optional(),
 	})
 	.describe("RivetKit registry configuration.");
diff --git a/rivetkit-typescript/packages/rivetkit/src/remote-manager-driver/mod.ts b/rivetkit-typescript/packages/rivetkit/src/remote-manager-driver/mod.ts
index a4884b9641..782750eefc 100644
--- a/rivetkit-typescript/packages/rivetkit/src/remote-manager-driver/mod.ts
+++ b/rivetkit-typescript/packages/rivetkit/src/remote-manager-driver/mod.ts
@@ -345,6 +345,39 @@
 		return response.value;
 	}
 
+	async kvBatchGet(
+		_actorId: string,
+		_keys: Uint8Array[],
+	): Promise<(Uint8Array | null)[]> {
+		throw new Error("kvBatchGet not supported on remote manager driver");
+	}
+
+	async kvBatchPut(
+		_actorId: string,
+		_entries: [Uint8Array, Uint8Array][],
+	): Promise<void> {
+		throw new Error("kvBatchPut not supported on remote manager driver");
+	}
+
+	async kvBatchDelete(
+		_actorId: string,
+		_keys: Uint8Array[],
+	): Promise<void> {
+		throw new Error(
+			"kvBatchDelete not supported on remote manager driver",
+		);
+	}
+
+	async kvDeleteRange(
+		_actorId: string,
+		_start: Uint8Array,
+		_end: Uint8Array,
+	): Promise<void> {
+		throw new Error(
+			"kvDeleteRange not supported on remote manager driver",
+		);
+	}
+
 	displayInformation(): ManagerDisplayInformation {
 		return { properties: {} };
 	}
diff --git a/rivetkit-typescript/packages/rivetkit/tests/db-closed-race.test.ts b/rivetkit-typescript/packages/rivetkit/tests/db-closed-race.test.ts
index 746a541b68..1a11ed0f1c 100644
--- a/rivetkit-typescript/packages/rivetkit/tests/db-closed-race.test.ts
+++ b/rivetkit-typescript/packages/rivetkit/tests/db-closed-race.test.ts
@@ -22,7 +22,7 @@ describe("database closed race condition", () => {
 		const client = createClient({
 			endpoint: runtime.endpoint,
 			namespace: runtime.namespace,
-			runnerName:
runtime.runnerName, + poolName: runtime.runnerName, disableMetadataLookup: true, }); diff --git a/rivetkit-typescript/packages/rivetkit/tests/driver-engine.test.ts b/rivetkit-typescript/packages/rivetkit/tests/driver-engine.test.ts index 4eb208d2c8..dd381d40b1 100644 --- a/rivetkit-typescript/packages/rivetkit/tests/driver-engine.test.ts +++ b/rivetkit-typescript/packages/rivetkit/tests/driver-engine.test.ts @@ -1,11 +1,14 @@ import { join } from "node:path"; import { createClientWithDriver } from "@/client/client"; import { createTestRuntime, runDriverTests } from "@/driver-test-suite/mod"; +import type { DriverTestConfig } from "@/driver-test-suite/mod"; +import { setupDriverTest } from "@/driver-test-suite/utils"; import { createEngineDriver } from "@/drivers/engine/mod"; import invariant from "invariant"; import { convertRegistryConfigToClientConfig } from "@/client/config"; +import { describe, expect, test, vi } from "vitest"; -runDriverTests({ +const driverTestConfig = { // Use real timers for engine-runner tests useRealTimers: true, skip: { @@ -57,9 +60,9 @@ runDriverTests({ registry.config.endpoint = endpoint; registry.config.namespace = namespace; registry.config.token = token; - registry.config.runner = { - ...registry.config.runner, - runnerName, + registry.config.envoy = { + ...registry.config.envoy, + poolName: runnerName, }; // Parse config only after mutating registry.config so the manager @@ -140,6 +143,10 @@ runDriverTests({ token, }, driver: driverConfig, + hardCrashActor: async (actorId: string) => { + await actorDriver.hardCrashActor?.(actorId); + }, + hardCrashPreservesData: true, cleanup: async () => { await actorDriver.shutdownRunner?.(true); }, @@ -147,4 +154,40 @@ runDriverTests({ }, ); }, +} satisfies Omit; + +runDriverTests(driverTestConfig); + +describe("engine startup kv preload", () => { + test("wakes actors with envoy-provided preloaded kv", async (c) => { + const { client } = await setupDriverTest(c, { + ...driverTestConfig, + 
clientType: "http", + encoding: "bare", + }); + const handle = client.sleep.getOrCreate(); + + await handle.getCounts(); + await handle.triggerSleep(); + + await vi.waitFor( + async () => { + const counts = await handle.getCounts(); + expect(counts.sleepCount).toBeGreaterThanOrEqual(1); + expect(counts.startCount).toBeGreaterThanOrEqual(2); + }, + { timeout: 5_000, interval: 100 }, + ); + + const gatewayUrl = await handle.getGatewayUrl(); + const response = await fetch(`${gatewayUrl}/inspector/metrics`, { + headers: { Authorization: "Bearer token" }, + }); + expect(response.status).toBe(200); + + const metrics: any = await response.json(); + expect(metrics.startup_is_new.value).toBe(0); + expect(metrics.startup_internal_preload_kv_entries.value).toBeGreaterThan(0); + expect(metrics.startup_kv_round_trips.value).toBe(0); + }); }); diff --git a/rivetkit-typescript/packages/rivetkit/tests/driver-file-system.test.ts b/rivetkit-typescript/packages/rivetkit/tests/driver-file-system.test.ts index 6ddb862120..c6168f7689 100644 --- a/rivetkit-typescript/packages/rivetkit/tests/driver-file-system.test.ts +++ b/rivetkit-typescript/packages/rivetkit/tests/driver-file-system.test.ts @@ -42,7 +42,7 @@ describe("file-system websocket hibernation cleanup", () => { const client = createClient({ endpoint: runtime.endpoint, namespace: runtime.namespace, - runnerName: runtime.runnerName, + poolName: runtime.runnerName, disableMetadataLookup: true, }); const conn = client.fileSystemHibernationCleanupActor diff --git a/rivetkit-typescript/packages/rivetkit/tsconfig.json b/rivetkit-typescript/packages/rivetkit/tsconfig.json index b23ac70edd..a3671f47d7 100644 --- a/rivetkit-typescript/packages/rivetkit/tsconfig.json +++ b/rivetkit-typescript/packages/rivetkit/tsconfig.json @@ -14,6 +14,7 @@ "@rivetkit/sqlite-vfs/wasm": ["../sqlite-vfs/src/wasm.ts"], // Used for test fixtures "rivetkit": ["./src/mod.ts"], + "rivetkit/errors": ["./src/actor/errors.ts"], "rivetkit/utils": 
["./src/utils.ts"], "rivetkit/sandbox": ["./src/sandbox/index.ts"], "rivetkit/sandbox/docker": ["./src/sandbox/providers/docker.ts"], diff --git a/rivetkit-typescript/packages/rivetkit/vitest.config.ts b/rivetkit-typescript/packages/rivetkit/vitest.config.ts index f7424359d1..161dc6af32 100644 --- a/rivetkit-typescript/packages/rivetkit/vitest.config.ts +++ b/rivetkit-typescript/packages/rivetkit/vitest.config.ts @@ -7,4 +7,9 @@ export default defineConfig({ ...defaultConfig, // Used to resolve "rivetkit" to "src/mod.ts" in the test fixtures plugins: [tsconfigPaths()], + resolve: { + alias: { + "rivetkit/errors": resolve(__dirname, "./src/actor/errors.ts"), + }, + }, }); diff --git a/rivetkit-typescript/packages/sqlite-native/Cargo.lock b/rivetkit-typescript/packages/sqlite-native/Cargo.lock new file mode 100644 index 0000000000..f1e2f7f83a --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/Cargo.lock @@ -0,0 +1,988 @@ +# This file is automatically @generated by Cargo. +# It is not intended for manual editing. 
+version = 4 + +[[package]] +name = "aho-corasick" +version = "1.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ddd31a130427c27518df266943a5308ed92d4b226cc639f5a8f1002816174301" +dependencies = [ + "memchr", +] + +[[package]] +name = "allocator-api2" +version = "0.2.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923" + +[[package]] +name = "anyhow" +version = "1.0.102" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7f202df86484c868dbad7eaa557ef785d5c66295e41b460ef922eca0723b842c" + +[[package]] +name = "bitflags" +version = "2.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "843867be96c8daad0d758b57df9392b6d8d271134fce549de6ce169ff98a92af" + +[[package]] +name = "block-buffer" +version = "0.10.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" +dependencies = [ + "generic-array", +] + +[[package]] +name = "byteorder" +version = "1.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" + +[[package]] +name = "bytes" +version = "1.11.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33" + +[[package]] +name = "cc" +version = "1.2.57" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7a0dd1ca384932ff3641c8718a02769f1698e7563dc6974ffd03346116310423" +dependencies = [ + "find-msvc-tools", + "shlex", +] + +[[package]] +name = "cfg-if" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801" + +[[package]] +name = "convert_case" +version = "0.6.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "ec182b0ca2f35d8fc196cf3404988fd8b8c739a4d270ff118a398feb0cbec1ca" +dependencies = [ + "unicode-segmentation", +] + +[[package]] +name = "cpufeatures" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280" +dependencies = [ + "libc", +] + +[[package]] +name = "crypto-common" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a" +dependencies = [ + "generic-array", + "typenum", +] + +[[package]] +name = "ctor" +version = "0.2.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32a2785755761f3ddc1492979ce1e48d2c00d09311c39e4466429188f3dd6501" +dependencies = [ + "quote", + "syn", +] + +[[package]] +name = "data-encoding" +version = "2.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7a1e2f27636f116493b8b860f5546edb47c8d8f8ea73e1d2a20be88e28d1fea" + +[[package]] +name = "digest" +version = "0.10.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" +dependencies = [ + "block-buffer", + "crypto-common", +] + +[[package]] +name = "equivalent" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f" + +[[package]] +name = "find-msvc-tools" +version = "0.1.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5baebc0774151f905a1a2cc41989300b1e6fbb29aff0ceffa1064fdd3088d582" + +[[package]] +name = "foldhash" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2" + +[[package]] +name = 
"futures-core" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7e3450815272ef58cec6d564423f6e755e25379b217b0bc688e295ba24df6b1d" + +[[package]] +name = "futures-sink" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c39754e157331b013978ec91992bde1ac089843443c49cbc7f46150b0fad0893" + +[[package]] +name = "futures-task" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393" + +[[package]] +name = "futures-util" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "389ca41296e6190b48053de0321d02a77f32f8a5d2461dd38762c0593805c6d6" +dependencies = [ + "futures-core", + "futures-sink", + "futures-task", + "pin-project-lite", + "slab", +] + +[[package]] +name = "generic-array" +version = "0.14.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" +dependencies = [ + "typenum", + "version_check", +] + +[[package]] +name = "getrandom" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0" +dependencies = [ + "cfg-if", + "libc", + "wasi", +] + +[[package]] +name = "hashbrown" +version = "0.15.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1" +dependencies = [ + "allocator-api2", + "equivalent", + "foldhash", +] + +[[package]] +name = "heck" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" + +[[package]] +name = "http" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"e3ba2a386d7f85a81f119ad7498ebe444d2e22c2af0b86b069416ace48b3311a" +dependencies = [ + "bytes", + "itoa", +] + +[[package]] +name = "httparse" +version = "1.10.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6dbf3de79e51f3d586ab4cb9d5c3e2c14aa28ed23d180cf89b4df0454a69cc87" + +[[package]] +name = "indoc" +version = "2.0.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "79cf5c93f93228cf8efb3ba362535fb11199ac548a09ce117c9b1adc3030d706" +dependencies = [ + "rustversion", +] + +[[package]] +name = "itoa" +version = "1.0.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2" + +[[package]] +name = "lazy_static" +version = "1.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe" + +[[package]] +name = "libc" +version = "0.2.183" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b5b646652bf6661599e1da8901b3b9522896f01e736bad5f723fe7a3a27f899d" + +[[package]] +name = "libloading" +version = "0.8.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7c4b02199fee7c5d21a5ae7d8cfa79a6ef5bb2fc834d6e9058e89c825efdc55" +dependencies = [ + "cfg-if", + "windows-link", +] + +[[package]] +name = "libsqlite3-sys" +version = "0.30.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2e99fb7a497b1e3339bc746195567ed8d3e24945ecd636e3619d20b9de9e9149" +dependencies = [ + "cc", + "pkg-config", + "vcpkg", +] + +[[package]] +name = "log" +version = "0.4.29" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897" + +[[package]] +name = "lru" +version = "0.12.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"234cf4f4a04dc1f57e24b96cc0cd600cf2af460d4161ac5ecdd0af8e1f3b2a38" +dependencies = [ + "hashbrown", +] + +[[package]] +name = "matchers" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9" +dependencies = [ + "regex-automata", +] + +[[package]] +name = "memchr" +version = "2.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8ca58f447f06ed17d5fc4043ce1b10dd205e060fb3ce5b979b8ed8e59ff3f79" + +[[package]] +name = "mio" +version = "1.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc" +dependencies = [ + "libc", + "wasi", + "windows-sys", +] + +[[package]] +name = "napi" +version = "2.16.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "55740c4ae1d8696773c78fdafd5d0e5fe9bc9f1b071c7ba493ba5c413a9184f3" +dependencies = [ + "bitflags", + "ctor", + "napi-derive", + "napi-sys", + "once_cell", + "serde", + "serde_json", + "tokio", +] + +[[package]] +name = "napi-build" +version = "2.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d376940fd5b723c6893cd1ee3f33abbfd86acb1cd1ec079f3ab04a2a3bc4d3b1" + +[[package]] +name = "napi-derive" +version = "2.16.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7cbe2585d8ac223f7d34f13701434b9d5f4eb9c332cccce8dee57ea18ab8ab0c" +dependencies = [ + "cfg-if", + "convert_case", + "napi-derive-backend", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "napi-derive-backend" +version = "1.0.75" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1639aaa9eeb76e91c6ae66da8ce3e89e921cd3885e99ec85f4abacae72fc91bf" +dependencies = [ + "convert_case", + "once_cell", + "proc-macro2", + "quote", + "regex", + "semver", + "syn", +] + +[[package]] +name = "napi-sys" +version = 
"2.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "427802e8ec3a734331fec1035594a210ce1ff4dc5bc1950530920ab717964ea3" +dependencies = [ + "libloading", +] + +[[package]] +name = "nu-ansi-term" +version = "0.50.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5" +dependencies = [ + "windows-sys", +] + +[[package]] +name = "once_cell" +version = "1.21.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9f7c3e4beb33f85d45ae3e3a1792185706c8e16d043238c593331cc7cd313b50" + +[[package]] +name = "pest" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e0848c601009d37dfa3430c4666e147e49cdcf1b92ecd3e63657d8a5f19da662" +dependencies = [ + "memchr", + "ucd-trie", +] + +[[package]] +name = "pest_derive" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "11f486f1ea21e6c10ed15d5a7c77165d0ee443402f0780849d1768e7d9d6fe77" +dependencies = [ + "pest", + "pest_generator", +] + +[[package]] +name = "pest_generator" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8040c4647b13b210a963c1ed407c1ff4fdfa01c31d6d2a098218702e6664f94f" +dependencies = [ + "pest", + "pest_meta", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "pest_meta" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "89815c69d36021a140146f26659a81d6c2afa33d216d736dd4be5381a7362220" +dependencies = [ + "pest", + "sha2", +] + +[[package]] +name = "pin-project-lite" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a89322df9ebe1c1578d689c92318e070967d1042b512afbe49518723f4e6d5cd" + +[[package]] +name = "pkg-config" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c" + +[[package]] +name = "ppv-lite86" +version = "0.2.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9" +dependencies = [ + "zerocopy", +] + +[[package]] +name = "prettyplease" +version = "0.2.37" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "479ca8adacdd7ce8f1fb39ce9ecccbfe93a3f1344b3d0d97f20bc0196208f62b" +dependencies = [ + "proc-macro2", + "syn", +] + +[[package]] +name = "proc-macro2" +version = "1.0.106" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8fd00f0bb2e90d81d1044c2b32617f68fcb9fa3bb7640c23e9c748e53fb30934" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "quote" +version = "1.0.45" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "41f2619966050689382d2b44f664f4bc593e129785a36d6ee376ddf37259b924" +dependencies = [ + "proc-macro2", +] + +[[package]] +name = "rand" +version = "0.8.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" +dependencies = [ + "libc", + "rand_chacha", + "rand_core", +] + +[[package]] +name = "rand_chacha" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" +dependencies = [ + "ppv-lite86", + "rand_core", +] + +[[package]] +name = "rand_core" +version = "0.6.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" +dependencies = [ + "getrandom", +] + +[[package]] +name = "regex" +version = "1.12.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e10754a14b9137dd7b1e3e5b0493cc9171fdd105e0ab477f51b72e7f3ac0e276" +dependencies = [ + 
"aho-corasick", + "memchr", + "regex-automata", + "regex-syntax", +] + +[[package]] +name = "regex-automata" +version = "0.4.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6e1dd4122fc1595e8162618945476892eefca7b88c52820e74af6262213cae8f" +dependencies = [ + "aho-corasick", + "memchr", + "regex-syntax", +] + +[[package]] +name = "regex-syntax" +version = "0.8.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc897dd8d9e8bd1ed8cdad82b5966c3e0ecae09fb1907d58efaa013543185d0a" + +[[package]] +name = "rivet-kv-channel-protocol" +version = "2.2.1" +dependencies = [ + "serde", + "serde_bare", + "vbare", + "vbare-compiler", +] + +[[package]] +name = "rivetkit-sqlite-native" +version = "2.1.6" +dependencies = [ + "futures-util", + "getrandom", + "libsqlite3-sys", + "lru", + "napi", + "napi-build", + "napi-derive", + "rivet-kv-channel-protocol", + "serde", + "serde_bare", + "serde_json", + "tokio", + "tokio-tungstenite", + "tracing", + "tracing-subscriber", + "urlencoding", +] + +[[package]] +name = "rustversion" +version = "1.0.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d" + +[[package]] +name = "semver" +version = "1.0.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d767eb0aabc880b29956c35734170f26ed551a859dbd361d140cdbeca61ab1e2" + +[[package]] +name = "serde" +version = "1.0.228" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e" +dependencies = [ + "serde_core", + "serde_derive", +] + +[[package]] +name = "serde_bare" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "51c55386eed0f1ae957b091dc2ca8122f287b60c79c774cbe3d5f2b69fded660" +dependencies = [ + "serde", +] + +[[package]] +name = "serde_core" +version = "1.0.228" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad" +dependencies = [ + "serde_derive", +] + +[[package]] +name = "serde_derive" +version = "1.0.228" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "serde_json" +version = "1.0.149" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86" +dependencies = [ + "itoa", + "memchr", + "serde", + "serde_core", + "zmij", +] + +[[package]] +name = "sha1" +version = "0.10.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest", +] + +[[package]] +name = "sha2" +version = "0.10.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest", +] + +[[package]] +name = "sharded-slab" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f40ca3c46823713e0d4209592e8d6e826aa57e928f09752619fc696c499637f6" +dependencies = [ + "lazy_static", +] + +[[package]] +name = "shlex" +version = "1.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" + +[[package]] +name = "slab" +version = "0.4.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0c790de23124f9ab44544d7ac05d60440adc586479ce501c1d6d7da3cd8c9cf5" + +[[package]] +name = "smallvec" +version = "1.15.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03" + +[[package]] +name = "socket2" +version = "0.6.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3a766e1110788c36f4fa1c2b71b387a7815aa65f88ce0229841826633d93723e" +dependencies = [ + "libc", + "windows-sys", +] + +[[package]] +name = "syn" +version = "2.0.117" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e665b8803e7b1d2a727f4023456bbbbe74da67099c585258af0ad9c5013b9b99" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "thiserror" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52" +dependencies = [ + "thiserror-impl", +] + +[[package]] +name = "thiserror-impl" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "thread_local" +version = "1.1.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f60246a4944f24f6e018aa17cdeffb7818b76356965d03b07d6a9886e8962185" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "tokio" +version = "1.50.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "27ad5e34374e03cfffefc301becb44e9dc3c17584f414349ebe29ed26661822d" +dependencies = [ + "bytes", + "libc", + "mio", + "pin-project-lite", + "socket2", + "tokio-macros", + "windows-sys", +] + +[[package]] +name = "tokio-macros" +version = "2.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c55a2eff8b69ce66c84f85e1da1c233edc36ceb85a2058d11b0d6a3c7e7569c" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "tokio-tungstenite" +version = "0.24.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "edc5f74e248dc973e0dbb7b74c7e0d6fcc301c694ff50049504004ef4d0cdcd9" +dependencies = [ + "futures-util", + "log", + "tokio", + "tungstenite", +] + +[[package]] +name = "tracing" +version = "0.1.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100" +dependencies = [ + "pin-project-lite", + "tracing-attributes", + "tracing-core", +] + +[[package]] +name = "tracing-attributes" +version = "0.1.31" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "tracing-core" +version = "0.1.36" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a" +dependencies = [ + "once_cell", + "valuable", +] + +[[package]] +name = "tracing-log" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ee855f1f400bd0e5c02d150ae5de3840039a3f54b025156404e34c23c03f47c3" +dependencies = [ + "log", + "once_cell", + "tracing-core", +] + +[[package]] +name = "tracing-subscriber" +version = "0.3.23" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319" +dependencies = [ + "matchers", + "nu-ansi-term", + "once_cell", + "regex-automata", + "sharded-slab", + "smallvec", + "thread_local", + "tracing", + "tracing-core", + "tracing-log", +] + +[[package]] +name = "tungstenite" +version = "0.24.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "18e5b8366ee7a95b16d32197d0b2604b43a0be89dc5fac9f8e96ccafbaedda8a" +dependencies = [ + "byteorder", + "bytes", + "data-encoding", + "http", + "httparse", + "log", + "rand", + "sha1", + 
"thiserror", + "utf-8", +] + +[[package]] +name = "typenum" +version = "1.19.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb" + +[[package]] +name = "ucd-trie" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2896d95c02a80c6d6a5d6e953d479f5ddf2dfdb6a244441010e373ac0fb88971" + +[[package]] +name = "unicode-ident" +version = "1.0.24" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e6e4313cd5fcd3dad5cafa179702e2b244f760991f45397d14d4ebf38247da75" + +[[package]] +name = "unicode-segmentation" +version = "1.12.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f6ccf251212114b54433ec949fd6a7841275f9ada20dddd2f29e9ceea4501493" + +[[package]] +name = "urlencoding" +version = "2.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "daf8dba3b7eb870caf1ddeed7bc9d2a049f3cfdfae7cb521b087cc33ae4c49da" + +[[package]] +name = "utf-8" +version = "0.7.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "09cc8ee72d2a9becf2f2febe0205bbed8fc6615b7cb429ad062dc7b7ddd036a9" + +[[package]] +name = "valuable" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ba73ea9cf16a25df0c8caa16c51acb937d5712a8429db78a3ee29d5dcacd3a65" + +[[package]] +name = "vbare" +version = "0.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "51a63d4c173f6a6f7c8f524dcdda615e51f83d12bca9cc129f676229b995ca41" +dependencies = [ + "anyhow", +] + +[[package]] +name = "vbare-compiler" +version = "0.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ce091703409d0a86ddd6c02c68794abce94e256bebe971724fa1e1296d309939" +dependencies = [ + "indoc", + "prettyplease", + "syn", + "vbare-gen", +] + +[[package]] +name = "vbare-gen" +version = "0.0.4" +source 
= "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2059af920d5d876dd7b737dac2c647aa872a58ef266d8af5bd660a2ec6c25bcb" +dependencies = [ + "heck", + "pest", + "pest_derive", + "proc-macro2", + "quote", + "serde", + "syn", +] + +[[package]] +name = "vcpkg" +version = "0.2.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" + +[[package]] +name = "version_check" +version = "0.9.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a" + +[[package]] +name = "wasi" +version = "0.11.1+wasi-snapshot-preview1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b" + +[[package]] +name = "windows-link" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5" + +[[package]] +name = "windows-sys" +version = "0.61.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc" +dependencies = [ + "windows-link", +] + +[[package]] +name = "zerocopy" +version = "0.8.42" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f2578b716f8a7a858b7f02d5bd870c14bf4ddbbcf3a4c05414ba6503640505e3" +dependencies = [ + "zerocopy-derive", +] + +[[package]] +name = "zerocopy-derive" +version = "0.8.42" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7e6cc098ea4d3bd6246687de65af3f920c430e236bee1e3bf2e441463f08a02f" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "zmij" +version = "1.0.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"b8848ee67ecc8aedbaf3e4122217aff892639231befc6a1b58d29fff4c2cabaa" diff --git a/rivetkit-typescript/packages/sqlite-native/Cargo.toml b/rivetkit-typescript/packages/sqlite-native/Cargo.toml new file mode 100644 index 0000000000..43b84de555 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/Cargo.toml @@ -0,0 +1,34 @@ +[package] +name = "rivetkit-sqlite-native" +version = "2.1.6" +edition = "2021" +license = "Apache-2.0" +description = "Native SQLite addon for RivetKit backed by KV channel protocol" + +[lib] +crate-type = ["cdylib"] + +[dependencies] +napi = { version = "2", default-features = false, features = ["napi6", "async", "serde-json"] } +napi-derive = "2" +libsqlite3-sys = { version = "0.30", features = ["bundled"] } +tokio = { version = "1", features = ["rt-multi-thread", "sync", "net", "time", "macros"] } +tokio-tungstenite = "0.24" +futures-util = { version = "0.3", default-features = false, features = ["sink"] } +rivet-kv-channel-protocol = { path = "../../../engine/sdks/rust/kv-channel-protocol" } +serde = { version = "1", features = ["derive"] } +serde_bare = "0.5" +serde_json = "1" +lru = "0.12" +tracing = "0.1" +tracing-subscriber = { version = "0.3", features = ["env-filter"] } +urlencoding = "2" +getrandom = "0.2" + +[build-dependencies] +napi-build = "2" + +[workspace] + +[profile.release] +lto = true diff --git a/rivetkit-typescript/packages/sqlite-native/build.rs b/rivetkit-typescript/packages/sqlite-native/build.rs new file mode 100644 index 0000000000..f8bfd67ec9 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/build.rs @@ -0,0 +1,5 @@ +extern crate napi_build; + +fn main() { + napi_build::setup(); +} diff --git a/rivetkit-typescript/packages/sqlite-native/index.d.ts b/rivetkit-typescript/packages/sqlite-native/index.d.ts new file mode 100644 index 0000000000..49ed86044b --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/index.d.ts @@ -0,0 +1,180 @@ +/* tslint:disable */ +/* eslint-disable */ + +/* 
auto-generated by NAPI-RS */ + +/** + * Typed bind parameter passed from JavaScript. + * + * Replaces `Vec` for statement parameters, avoiding 20x + * serialization overhead for blob data. Instead of JSON arrays of numbers, + * blobs are passed as `Buffer` (a single memcpy from JS heap to Rust). + * + * See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md M7. + */ +export interface BindParam { + /** One of: "null", "int", "float", "text", "blob" */ + kind: string + intValue?: number + floatValue?: number + textValue?: string + blobValue?: Buffer +} +/** Configuration for connecting to the KV channel endpoint. */ +export interface ConnectConfig { + url: string + token?: string + namespace: string +} +/** Result of an execute() call. */ +export interface ExecuteResult { + changes: number +} +/** Result of a query() call. */ +export interface QueryResult { + columns: Array<string> + rows: Array<Array<any>> +} +/** + * Open the shared KV channel WebSocket connection. + * + * In production, token is the engine's admin_token (RIVET__AUTH__ADMIN_TOKEN). + * In local dev, token is config.token (RIVET_TOKEN), optional in dev mode. + */ +export declare function connect(config: ConnectConfig): KvChannel +/** + * Open a database for an actor. Sends ActorOpenRequest optimistically. + * + * VFS registration and sqlite3_open_v2 run inside `spawn_blocking` because + * they trigger synchronous VFS callbacks that call `Handle::block_on()` for + * KV I/O. This is safe from a blocking thread but would deadlock or freeze + * the Node.js main thread if called via `rt.block_on()`. + */ +export declare function openDatabase(channel: KvChannel, actorId: string): Promise<NativeDatabase> +/** + * Execute a statement (INSERT, UPDATE, DELETE, CREATE, etc.). + * + * SQLite operations run on tokio's blocking thread pool via `spawn_blocking`. + * VFS callbacks call `Handle::block_on()` from blocking threads (not tokio + * worker threads), which is safe. The Node.js main thread is never blocked. 
+ * + * Three threading approaches were considered: + * + * 1. **spawn_blocking** (chosen): napi `async fn` dispatches to tokio's + * blocking thread pool (default cap 512). Simplest, idiomatic, tokio + * manages the pool. Minor downside: thread may change between queries + * (slightly worse cache locality). + * + * 2. **Dedicated thread per actor**: One `std::thread` per actor, receives + * SQL via mpsc, sends results via oneshot. Best cache locality, but + * requires manual lifecycle management and one idle thread per open actor. + * + * 3. **Channel + block-in-place**: Sync napi function, VFS callbacks send + * requests via `std::sync::mpsc` and block on `recv()`. Does NOT solve + * the core problem because the Node.js main thread is still blocked. + * + * See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md Finding 1. + */ +export declare function execute(db: NativeDatabase, sql: string, params?: Array<BindParam> | undefined | null): Promise<ExecuteResult> +/** + * Run a query (SELECT, PRAGMA, etc.). + * + * See `execute` for threading model documentation. + */ +export declare function query(db: NativeDatabase, sql: string, params?: Array<BindParam> | undefined | null): Promise<QueryResult> +/** + * Execute multi-statement SQL without parameters. + * Uses sqlite3_prepare_v2 in a loop with tail pointer tracking to handle + * multiple statements (e.g., migrations). Returns columns and rows from + * the last statement that produced results. + * + * See `execute` for threading model documentation. + */ +export declare function exec(db: NativeDatabase, sql: string): Promise<QueryResult> +/** + * Close the database connection and release the actor lock. + * Sends ActorCloseRequest to the server. + * + * Locks the db mutex and takes the Option, so concurrent/subsequent + * execute/query/exec operations see None and return "database is closed". + */ +export declare function closeDatabase(db: NativeDatabase): Promise<void> +/** Close the KV channel WebSocket connection. 
*/ +export declare function disconnect(channel: KvChannel): Promise<void> +/** Per-operation metrics snapshot. */ +export interface OpMetricsSnapshot { + count: number + totalDurationUs: number + minDurationUs: number + maxDurationUs: number + avgDurationUs: number +} +/** All KV channel metrics (Layer 1). */ +export interface KvChannelMetricsSnapshot { + get: OpMetricsSnapshot + put: OpMetricsSnapshot + delete: OpMetricsSnapshot + deleteRange: OpMetricsSnapshot + actorOpen: OpMetricsSnapshot + actorClose: OpMetricsSnapshot + keysTotal: number + requestsTotal: number + batchAtomicCommits: number + batchAtomicPages: number +} +/** SQL execution metrics (Layer 0). */ +export interface SqlMetricsSnapshot { + execute: OpMetricsSnapshot + query: OpMetricsSnapshot + exec: OpMetricsSnapshot + spawnBlockingWait: OpMetricsSnapshot + sqliteStep: OpMetricsSnapshot + stmtCache: OpMetricsSnapshot + resultSerialize: OpMetricsSnapshot +} +/** VFS callback metrics. */ +export interface VfsMetricsSnapshot { + xreadCount: number + xreadUs: number + xwriteCount: number + xwriteUs: number + xwriteBufferedCount: number + xsyncCount: number + xsyncUs: number + commitAtomicCount: number + commitAtomicUs: number + commitAtomicPages: number +} +/** All metrics across all layers. */ +export interface AllMetricsSnapshot { + kvChannel: KvChannelMetricsSnapshot + sql: SqlMetricsSnapshot + vfs: VfsMetricsSnapshot +} +/** Get a snapshot of all metrics across all layers. */ +export declare function getMetrics(channel: KvChannel): AllMetricsSnapshot +export type JsKvChannel = KvChannel +/** + * A shared WebSocket connection to the KV channel server. + * One per process, shared across all actors. + * + * The tokio runtime is owned here so it is dropped when the channel is dropped, + * ensuring clean process exit after disconnect. The runtime MUST NOT be dropped + * before all actors have closed their databases. 
+ */ +export declare class KvChannel { } +export type JsNativeDatabase = NativeDatabase +/** + * An open SQLite database backed by KV storage via the channel. + * + * The `db` field is wrapped in `Arc<Mutex<Option<...>>>` so that + * `close_database` can atomically take the handle while concurrent + * `execute`/`query`/`exec` closures hold an Arc clone. Any operation + * that finds `None` returns a "database is closed" error. This prevents + * use-after-free if `close_database` runs between pointer extraction + * and `spawn_blocking` task execution. + * + * Field order matters for drop safety: `stmt_cache` is declared before `db` + * so cached statements are finalized before the database connection is closed. + */ +export declare class NativeDatabase { } diff --git a/rivetkit-typescript/packages/sqlite-native/index.js b/rivetkit-typescript/packages/sqlite-native/index.js new file mode 100644 index 0000000000..167fd643d5 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/index.js @@ -0,0 +1,324 @@ +/* tslint:disable */ +/* eslint-disable */ +/* prettier-ignore */ + +/* auto-generated by NAPI-RS */ + +const { existsSync, readFileSync } = require('fs') +const { join } = require('path') + +const { platform, arch } = process + +let nativeBinding = null +let localFileExisted = false +let loadError = null + +function isMusl() { + // For Node 10 + if (!process.report || typeof process.report.getReport !== 'function') { + try { + const lddPath = require('child_process').execSync('which ldd').toString().trim() + return readFileSync(lddPath, 'utf8').includes('musl') + } catch (e) { + return true + } + } else { + const { glibcVersionRuntime } = process.report.getReport().header + return !glibcVersionRuntime + } +} + +switch (platform) { + case 'android': + switch (arch) { + case 'arm64': + localFileExisted = existsSync(join(__dirname, 'sqlite-native.android-arm64.node')) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.android-arm64.node') + } else { + 
nativeBinding = require('@rivetkit/sqlite-native-android-arm64') + } + } catch (e) { + loadError = e + } + break + case 'arm': + localFileExisted = existsSync(join(__dirname, 'sqlite-native.android-arm-eabi.node')) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.android-arm-eabi.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-android-arm-eabi') + } + } catch (e) { + loadError = e + } + break + default: + throw new Error(`Unsupported architecture on Android ${arch}`) + } + break + case 'win32': + switch (arch) { + case 'x64': + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.win32-x64-msvc.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.win32-x64-msvc.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-win32-x64-msvc') + } + } catch (e) { + loadError = e + } + break + case 'ia32': + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.win32-ia32-msvc.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.win32-ia32-msvc.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-win32-ia32-msvc') + } + } catch (e) { + loadError = e + } + break + case 'arm64': + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.win32-arm64-msvc.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.win32-arm64-msvc.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-win32-arm64-msvc') + } + } catch (e) { + loadError = e + } + break + default: + throw new Error(`Unsupported architecture on Windows: ${arch}`) + } + break + case 'darwin': + localFileExisted = existsSync(join(__dirname, 'sqlite-native.darwin-universal.node')) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.darwin-universal.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-darwin-universal') + } + break + } catch {} + switch (arch) { + 
case 'x64': + localFileExisted = existsSync(join(__dirname, 'sqlite-native.darwin-x64.node')) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.darwin-x64.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-darwin-x64') + } + } catch (e) { + loadError = e + } + break + case 'arm64': + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.darwin-arm64.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.darwin-arm64.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-darwin-arm64') + } + } catch (e) { + loadError = e + } + break + default: + throw new Error(`Unsupported architecture on macOS: ${arch}`) + } + break + case 'freebsd': + if (arch !== 'x64') { + throw new Error(`Unsupported architecture on FreeBSD: ${arch}`) + } + localFileExisted = existsSync(join(__dirname, 'sqlite-native.freebsd-x64.node')) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.freebsd-x64.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-freebsd-x64') + } + } catch (e) { + loadError = e + } + break + case 'linux': + switch (arch) { + case 'x64': + if (isMusl()) { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-x64-musl.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-x64-musl.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-x64-musl') + } + } catch (e) { + loadError = e + } + } else { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-x64-gnu.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-x64-gnu.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-x64-gnu') + } + } catch (e) { + loadError = e + } + } + break + case 'arm64': + if (isMusl()) { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-arm64-musl.node') + ) + try { + if 
(localFileExisted) { + nativeBinding = require('./sqlite-native.linux-arm64-musl.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-arm64-musl') + } + } catch (e) { + loadError = e + } + } else { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-arm64-gnu.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-arm64-gnu.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-arm64-gnu') + } + } catch (e) { + loadError = e + } + } + break + case 'arm': + if (isMusl()) { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-arm-musleabihf.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-arm-musleabihf.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-arm-musleabihf') + } + } catch (e) { + loadError = e + } + } else { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-arm-gnueabihf.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-arm-gnueabihf.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-arm-gnueabihf') + } + } catch (e) { + loadError = e + } + } + break + case 'riscv64': + if (isMusl()) { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-riscv64-musl.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-riscv64-musl.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-riscv64-musl') + } + } catch (e) { + loadError = e + } + } else { + localFileExisted = existsSync( + join(__dirname, 'sqlite-native.linux-riscv64-gnu.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-riscv64-gnu.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-riscv64-gnu') + } + } catch (e) { + loadError = e + } + } + break + case 's390x': + localFileExisted = 
existsSync( + join(__dirname, 'sqlite-native.linux-s390x-gnu.node') + ) + try { + if (localFileExisted) { + nativeBinding = require('./sqlite-native.linux-s390x-gnu.node') + } else { + nativeBinding = require('@rivetkit/sqlite-native-linux-s390x-gnu') + } + } catch (e) { + loadError = e + } + break + default: + throw new Error(`Unsupported architecture on Linux: ${arch}`) + } + break + default: + throw new Error(`Unsupported OS: ${platform}, architecture: ${arch}`) +} + +if (!nativeBinding) { + if (loadError) { + throw loadError + } + throw new Error(`Failed to load native binding`) +} + +const { KvChannel, NativeDatabase, connect, openDatabase, execute, query, exec, closeDatabase, disconnect, getMetrics } = nativeBinding + +module.exports.KvChannel = KvChannel +module.exports.NativeDatabase = NativeDatabase +module.exports.connect = connect +module.exports.openDatabase = openDatabase +module.exports.execute = execute +module.exports.query = query +module.exports.exec = exec +module.exports.closeDatabase = closeDatabase +module.exports.disconnect = disconnect +module.exports.getMetrics = getMetrics diff --git a/rivetkit-typescript/packages/sqlite-native/npm/darwin-arm64/package.json b/rivetkit-typescript/packages/sqlite-native/npm/darwin-arm64/package.json new file mode 100644 index 0000000000..6f456a9ab3 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/npm/darwin-arm64/package.json @@ -0,0 +1,13 @@ +{ + "name": "@rivetkit/sqlite-native-darwin-arm64", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit - macOS arm64", + "license": "Apache-2.0", + "os": ["darwin"], + "cpu": ["arm64"], + "main": "sqlite-native.darwin-arm64.node", + "files": ["sqlite-native.darwin-arm64.node"], + "engines": { + "node": ">= 20.0.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/npm/darwin-x64/package.json b/rivetkit-typescript/packages/sqlite-native/npm/darwin-x64/package.json new file mode 100644 index 0000000000..933de7d328 --- 
/dev/null +++ b/rivetkit-typescript/packages/sqlite-native/npm/darwin-x64/package.json @@ -0,0 +1,13 @@ +{ + "name": "@rivetkit/sqlite-native-darwin-x64", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit - macOS x64", + "license": "Apache-2.0", + "os": ["darwin"], + "cpu": ["x64"], + "main": "sqlite-native.darwin-x64.node", + "files": ["sqlite-native.darwin-x64.node"], + "engines": { + "node": ">= 20.0.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/npm/linux-arm64-gnu/package.json b/rivetkit-typescript/packages/sqlite-native/npm/linux-arm64-gnu/package.json new file mode 100644 index 0000000000..f797a36b3e --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/npm/linux-arm64-gnu/package.json @@ -0,0 +1,13 @@ +{ + "name": "@rivetkit/sqlite-native-linux-arm64-gnu", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit - Linux arm64 GNU", + "license": "Apache-2.0", + "os": ["linux"], + "cpu": ["arm64"], + "main": "sqlite-native.linux-arm64-gnu.node", + "files": ["sqlite-native.linux-arm64-gnu.node"], + "engines": { + "node": ">= 20.0.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/npm/linux-x64-gnu/package.json b/rivetkit-typescript/packages/sqlite-native/npm/linux-x64-gnu/package.json new file mode 100644 index 0000000000..716c52c8bc --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/npm/linux-x64-gnu/package.json @@ -0,0 +1,13 @@ +{ + "name": "@rivetkit/sqlite-native-linux-x64-gnu", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit - Linux x64 GNU", + "license": "Apache-2.0", + "os": ["linux"], + "cpu": ["x64"], + "main": "sqlite-native.linux-x64-gnu.node", + "files": ["sqlite-native.linux-x64-gnu.node"], + "engines": { + "node": ">= 20.0.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/npm/win32-x64-msvc/package.json b/rivetkit-typescript/packages/sqlite-native/npm/win32-x64-msvc/package.json new file mode 100644 index 
0000000000..a5045b2a9b --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/npm/win32-x64-msvc/package.json @@ -0,0 +1,13 @@ +{ + "name": "@rivetkit/sqlite-native-win32-x64-msvc", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit - Windows x64 MSVC", + "license": "Apache-2.0", + "os": ["win32"], + "cpu": ["x64"], + "main": "sqlite-native.win32-x64-msvc.node", + "files": ["sqlite-native.win32-x64-msvc.node"], + "engines": { + "node": ">= 20.0.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/package.json b/rivetkit-typescript/packages/sqlite-native/package.json new file mode 100644 index 0000000000..2a8e48c3a7 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/package.json @@ -0,0 +1,43 @@ +{ + "name": "@rivetkit/sqlite-native", + "version": "2.1.6", + "description": "Native SQLite addon for RivetKit backed by KV channel protocol", + "license": "Apache-2.0", + "main": "index.js", + "types": "index.d.ts", + "engines": { + "node": ">= 20.0.0" + }, + "napi": { + "name": "sqlite-native", + "triples": { + "defaults": false, + "additional": [ + "x86_64-unknown-linux-gnu", + "aarch64-unknown-linux-gnu", + "x86_64-apple-darwin", + "aarch64-apple-darwin", + "x86_64-pc-windows-msvc" + ] + } + }, + "files": [ + "index.js", + "index.d.ts", + "package.json" + ], + "scripts": { + "build": "napi build --platform --release", + "prepublishOnly": "napi prepublish -t npm" + }, + "optionalDependencies": { + "@rivetkit/sqlite-native-linux-x64-gnu": "2.1.6", + "@rivetkit/sqlite-native-linux-arm64-gnu": "2.1.6", + "@rivetkit/sqlite-native-darwin-x64": "2.1.6", + "@rivetkit/sqlite-native-darwin-arm64": "2.1.6", + "@rivetkit/sqlite-native-win32-x64-msvc": "2.1.6" + }, + "devDependencies": { + "@napi-rs/cli": "^2.18.0" + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/sqlite-native.linux-x64-gnu.node b/rivetkit-typescript/packages/sqlite-native/sqlite-native.linux-x64-gnu.node new file mode 100755 index 
0000000000..c108848343 Binary files /dev/null and b/rivetkit-typescript/packages/sqlite-native/sqlite-native.linux-x64-gnu.node differ diff --git a/rivetkit-typescript/packages/sqlite-native/src/channel.rs b/rivetkit-typescript/packages/sqlite-native/src/channel.rs new file mode 100644 index 0000000000..eff118a3f2 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/src/channel.rs @@ -0,0 +1,891 @@ +//! WebSocket KV channel client. +//! +//! Manages a persistent WebSocket connection to the KV channel endpoint, +//! sends requests with correlation IDs, and handles reconnection with +//! exponential backoff. +//! +//! One channel per process, shared across all actors. +//! See `docs-internal/engine/NATIVE_SQLITE_DATA_CHANNEL.md` for the full spec. +//! +//! End-to-end tests are in the driver-test-suite +//! (`rivetkit-typescript/packages/rivetkit/src/driver-test-suite/`). + +use std::collections::{HashMap, HashSet}; +use std::fmt; +use std::sync::atomic::{AtomicU64, Ordering}; +use std::sync::Arc; +use std::time::{Duration, Instant}; + +use futures_util::{SinkExt, StreamExt}; +use tokio::sync::{mpsc, oneshot, watch, Mutex}; +use tokio::time; +use tokio_tungstenite::connect_async; +use tokio_tungstenite::tungstenite::Message; + +use crate::protocol::{ + decode_to_client, encode_to_server, ErrorResponse, RequestData, ResponseData, ToClient, + ToServer, ToServerPong, ToServerRequest, PROTOCOL_VERSION, +}; + +// MARK: Constants + +/// Timeout for individual KV operations in milliseconds. +/// Matches KV_EXPIRE in engine/sdks/typescript/runner/src/mod.ts. +const KV_EXPIRE_MS: u64 = 30_000; + +/// Initial reconnect delay in milliseconds. +const INITIAL_BACKOFF_MS: u64 = 1000; + +/// Maximum reconnect delay in milliseconds. +const MAX_BACKOFF_MS: u64 = 30_000; + +/// Backoff multiplier (exponential). +const BACKOFF_MULTIPLIER: f64 = 2.0; + +/// Maximum jitter fraction added to each backoff delay (0-25%). 
+const JITTER_MAX: f64 = 0.25; + +// PROTOCOL_VERSION is imported from crate::protocol (rivet-kv-channel-protocol). + +// MARK: Metrics + +/// Per-operation-type metrics for KV channel operations. +#[derive(Debug, Default)] +pub struct OpMetrics { + pub count: AtomicU64, + pub total_duration_us: AtomicU64, + pub min_duration_us: AtomicU64, + pub max_duration_us: AtomicU64, +} + +impl OpMetrics { + pub fn new() -> Self { + Self { + count: AtomicU64::new(0), + total_duration_us: AtomicU64::new(0), + min_duration_us: AtomicU64::new(u64::MAX), + max_duration_us: AtomicU64::new(0), + } + } + + pub fn record(&self, duration: Duration) { + let us = duration.as_micros() as u64; + self.count.fetch_add(1, Ordering::Relaxed); + self.total_duration_us.fetch_add(us, Ordering::Relaxed); + self.min_duration_us.fetch_min(us, Ordering::Relaxed); + self.max_duration_us.fetch_max(us, Ordering::Relaxed); + } + + pub fn snapshot(&self) -> (u64, u64, u64, u64) { + let count = self.count.load(Ordering::Relaxed); + let total = self.total_duration_us.load(Ordering::Relaxed); + let min = self.min_duration_us.load(Ordering::Relaxed); + let max = self.max_duration_us.load(Ordering::Relaxed); + (count, total, if min == u64::MAX { 0 } else { min }, max) + } +} + +/// Aggregated metrics for all KV channel operations. 
+pub struct KvChannelMetrics { + pub get: OpMetrics, + pub put: OpMetrics, + pub delete: OpMetrics, + pub delete_range: OpMetrics, + pub actor_open: OpMetrics, + pub actor_close: OpMetrics, + pub keys_total: AtomicU64, + pub requests_total: AtomicU64, + pub batch_atomic_commits: AtomicU64, + pub batch_atomic_pages: AtomicU64, +} + +impl KvChannelMetrics { + pub fn new() -> Self { + Self { + get: OpMetrics::new(), + put: OpMetrics::new(), + delete: OpMetrics::new(), + delete_range: OpMetrics::new(), + actor_open: OpMetrics::new(), + actor_close: OpMetrics::new(), + keys_total: AtomicU64::new(0), + requests_total: AtomicU64::new(0), + batch_atomic_commits: AtomicU64::new(0), + batch_atomic_pages: AtomicU64::new(0), + } + } + + fn record_op(&self, data: &RequestData, duration: Duration) { + match data { + RequestData::KvGetRequest(_) => self.get.record(duration), + RequestData::KvPutRequest(_) => self.put.record(duration), + RequestData::KvDeleteRequest(_) => self.delete.record(duration), + RequestData::KvDeleteRangeRequest(_) => self.delete_range.record(duration), + RequestData::ActorOpenRequest => self.actor_open.record(duration), + RequestData::ActorCloseRequest => self.actor_close.record(duration), + } + } +} + +// MARK: Error + +/// Errors returned by KV channel operations. +#[derive(Debug)] +pub enum ChannelError { + /// The WebSocket connection is not established. + ConnectionClosed, + /// The operation timed out (KV_EXPIRE exceeded). + Timeout, + /// Protocol serialization/deserialization error. + Protocol(String), + /// WebSocket transport error. + WebSocket(String), + /// Server returned an error response. + ServerError(ErrorResponse), + /// The channel has been shut down. 
+ Shutdown, +} + +impl fmt::Display for ChannelError { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + match self { + Self::ConnectionClosed => write!(f, "kv channel connection closed"), + Self::Timeout => write!(f, "kv channel operation timed out"), + Self::Protocol(msg) => write!(f, "kv channel protocol error: {msg}"), + Self::WebSocket(msg) => write!(f, "kv channel websocket error: {msg}"), + Self::ServerError(e) => { + write!(f, "kv channel server error: {} - {}", e.code, e.message) + } + Self::Shutdown => write!(f, "kv channel shut down"), + } + } +} + +impl std::error::Error for ChannelError {} + +// MARK: Config + +/// Configuration for connecting to the KV channel endpoint. +#[derive(Debug, Clone)] +pub struct KvChannelConfig { + /// Base WebSocket endpoint URL (e.g., "ws://localhost:6420"). + pub url: String, + /// Authentication token. Engine uses admin_token, manager uses config.token. + pub token: Option<String>, + /// Namespace for actor scoping. + pub namespace: String, +} + +// MARK: KvChannel + +/// A persistent WebSocket connection to the KV channel server. +/// +/// One channel per process, shared across all actors. Handles reconnection +/// with exponential backoff and re-opens actors after reconnect. +pub struct KvChannel { + inner: Arc<Inner>, +} + +struct Inner { + config: KvChannelConfig, + + /// Sender for outgoing WebSocket binary frames. None when disconnected. + outgoing_tx: Mutex<Option<mpsc::UnboundedSender<Vec<u8>>>>, + + /// In-flight requests awaiting responses, keyed by requestId. + in_flight: Mutex<HashMap<u32, oneshot::Sender<Result<ResponseData, ChannelError>>>>, + + /// Next requestId to allocate. Resets to 0 on reconnect. + next_request_id: Mutex<u32>, + + /// Actor IDs that are currently open. Re-opened on reconnect. + open_actors: Mutex<HashSet<String>>, + + /// Actors pending re-open on reconnect. KV requests for these actors + /// wait until the watch value becomes true (ActorOpenResponse received). + /// Empty during initial connection (optimistic open). + reconnect_ready: Mutex<HashMap<String, watch::Sender<bool>>>, + + /// Request IDs of reconnect ActorOpenRequests.
Maps request_id -> actor_id + /// so the response handler can mark actors as ready. + reconnect_request_ids: Mutex<HashMap<u32, String>>, + + /// Signal to shut down background tasks. + shutdown_tx: watch::Sender<bool>, + + /// Fires once the first WebSocket connection is established. Allows + /// `send_request` to wait for the initial connection instead of + /// returning `ConnectionClosed` immediately. + connected_notify: tokio::sync::Notify, + + /// Whether we have ever successfully connected. + ever_connected: std::sync::atomic::AtomicBool, + + /// Per-operation-type metrics. + metrics: KvChannelMetrics, +} + +impl KvChannel { + /// Create a new KV channel and spawn the background connection loop. + /// + /// The initial WebSocket connection is established asynchronously in the + /// background. KV operations wait (up to KV_EXPIRE) for the first + /// connection before sending. + pub fn connect(config: KvChannelConfig) -> Self { + let (shutdown_tx, shutdown_rx) = watch::channel(false); + + let inner = Arc::new(Inner { + config, + outgoing_tx: Mutex::new(None), + in_flight: Mutex::new(HashMap::new()), + next_request_id: Mutex::new(0), + open_actors: Mutex::new(HashSet::new()), + reconnect_ready: Mutex::new(HashMap::new()), + reconnect_request_ids: Mutex::new(HashMap::new()), + shutdown_tx, + connected_notify: tokio::sync::Notify::new(), + ever_connected: std::sync::atomic::AtomicBool::new(false), + metrics: KvChannelMetrics::new(), + }); + + let inner_clone = inner.clone(); + tokio::spawn(async move { + connection_loop(inner_clone, shutdown_rx).await; + }); + + KvChannel { inner } + } + + /// Send a request and wait for the correlated response. + /// + /// Times out after KV_EXPIRE (30 seconds). + pub async fn send_request( + &self, + actor_id: &str, + data: RequestData, + ) -> Result<ResponseData, ChannelError> { + let start = Instant::now(); + + if *self.inner.shutdown_tx.borrow() { + return Err(ChannelError::Shutdown); + } + + // Wait for the initial WebSocket connection before attempting to send.
+ if !self.inner.ever_connected.load(std::sync::atomic::Ordering::SeqCst) { + match time::timeout( + Duration::from_millis(KV_EXPIRE_MS), + self.inner.connected_notify.notified(), + ) + .await + { + Ok(()) => {} + Err(_) => return Err(ChannelError::Timeout), + } + } + + // On reconnect, wait for ActorOpenResponse before sending KV requests. + // The initial open (first connection) has no reconnect_ready entries, + // so this is a no-op. See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md + // Finding 4 'Client-side change' section. + let pending_rx = { + let ready = self.inner.reconnect_ready.lock().await; + ready.get(actor_id).map(|tx| tx.subscribe()) + }; + if let Some(mut rx) = pending_rx { + match rx.wait_for(|v| *v).await { + Ok(_) => {} + Err(_) => return Err(ChannelError::ConnectionClosed), + } + } + + let (resp_tx, resp_rx) = oneshot::channel(); + + // Allocate a request ID. + let request_id = { + let mut id = self.inner.next_request_id.lock().await; + let rid = *id; + *id = rid.wrapping_add(1); + rid + }; + + // Save the operation type tag and key count before data is moved. + let key_count: u64 = match &data { + RequestData::KvGetRequest(body) => body.keys.len() as u64, + RequestData::KvPutRequest(body) => body.keys.len() as u64, + RequestData::KvDeleteRequest(body) => body.keys.len() as u64, + _ => 0, + }; + let op_tag = match &data { + RequestData::KvGetRequest(_) => 0u8, + RequestData::KvPutRequest(_) => 1, + RequestData::KvDeleteRequest(_) => 2, + RequestData::KvDeleteRangeRequest(_) => 3, + RequestData::ActorOpenRequest => 4, + RequestData::ActorCloseRequest => 5, + }; + + // Serialize the message. + let msg = ToServer::ToServerRequest(ToServerRequest { + request_id, + actor_id: actor_id.to_string(), + data, + }); + let bytes = + encode_to_server(&msg).map_err(|e| ChannelError::Protocol(e.to_string()))?; + + // Register in-flight before sending to avoid response racing ahead. 
+ self.inner + .in_flight + .lock() + .await + .insert(request_id, resp_tx); + + // Send via WebSocket. If not connected, fail immediately. + let send_result = { + let tx_guard = self.inner.outgoing_tx.lock().await; + match tx_guard.as_ref() { + Some(tx) => tx.send(bytes).map_err(|_| ChannelError::ConnectionClosed), + None => Err(ChannelError::ConnectionClosed), + } + }; + + if let Err(e) = send_result { + self.inner.in_flight.lock().await.remove(&request_id); + return Err(e); + } + + // Wait for correlated response with timeout. + let result = match time::timeout(Duration::from_millis(KV_EXPIRE_MS), resp_rx).await { + Ok(Ok(result)) => result, + Ok(Err(_)) => Err(ChannelError::ConnectionClosed), + Err(_) => { + self.inner.in_flight.lock().await.remove(&request_id); + Err(ChannelError::Timeout) + } + }; + + // Record round-trip duration by operation type. + let duration = start.elapsed(); + let m = &self.inner.metrics; + match op_tag { + 0 => m.get.record(duration), + 1 => m.put.record(duration), + 2 => m.delete.record(duration), + 3 => m.delete_range.record(duration), + 4 => m.actor_open.record(duration), + _ => m.actor_close.record(duration), + } + m.keys_total.fetch_add(key_count, Ordering::Relaxed); + m.requests_total.fetch_add(1, Ordering::Relaxed); + + result + } + + /// Get a reference to the channel metrics. + pub fn metrics(&self) -> &KvChannelMetrics { + &self.inner.metrics + } + + /// Open an actor, registering it for re-open on reconnect. + /// The actor is only added to `open_actors` after the server confirms the open. + pub async fn open_actor(&self, actor_id: &str) -> Result<ResponseData, ChannelError> { + let resp = self + .send_request(actor_id, RequestData::ActorOpenRequest) + .await?; + { + let mut open = self.inner.open_actors.lock().await; + open.insert(actor_id.to_string()); + } + Ok(resp) + } + + /// Close an actor, removing it from the re-open set. + /// The actor is only removed from `open_actors` after the server confirms the close.
+ pub async fn close_actor(&self, actor_id: &str) -> Result<ResponseData, ChannelError> { + let resp = self + .send_request(actor_id, RequestData::ActorCloseRequest) + .await?; + { + let mut open = self.inner.open_actors.lock().await; + open.remove(actor_id); + } + Ok(resp) + } + + /// Shut down the channel, closing the WebSocket and failing in-flight requests. + pub async fn disconnect(&self) { + let _ = self.inner.shutdown_tx.send(true); + *self.inner.outgoing_tx.lock().await = None; + fail_all_in_flight(&self.inner).await; + } +} + +// MARK: Connection Loop + +/// Main background loop that manages the WebSocket connection lifecycle. +/// +/// Connects to the server, runs read/write tasks, and reconnects with +/// exponential backoff on disconnect. +async fn connection_loop(inner: Arc<Inner>, mut shutdown_rx: watch::Receiver<bool>) { + let mut attempt: u32 = 0; + + loop { + if *shutdown_rx.borrow() { + return; + } + + let url = build_ws_url(&inner.config); + + // Use tungstenite's IntoClientRequest to build the request with proper + // headers, then add the "rivet" subprotocol. The engine's guard + // unconditionally adds Sec-WebSocket-Protocol: rivet to upgrade + // responses, so the client must request it to avoid a protocol error. + let ws_request = { + use tokio_tungstenite::tungstenite::client::IntoClientRequest; + let mut req = url.as_str().into_client_request().expect("failed to build websocket request"); + req.headers_mut().insert( + "Sec-WebSocket-Protocol", + "rivet".parse().unwrap(), + ); + req + }; + + match connect_async(ws_request).await { + Ok((ws_stream, _)) => { + // Reset backoff on successful connection. + attempt = 0; + + // Signal waiters that the initial connection is ready. + if !inner.ever_connected.swap(true, std::sync::atomic::Ordering::SeqCst) { + inner.connected_notify.notify_waiters(); + } + + let (ws_write, ws_read) = ws_stream.split(); + let (outgoing_tx, outgoing_rx) = mpsc::unbounded_channel::<Vec<u8>>(); + + // Reset request ID counter and reconnect state.
+ *inner.next_request_id.lock().await = 0; + inner.reconnect_ready.lock().await.clear(); + inner.reconnect_request_ids.lock().await.clear(); + + // Re-open all previously open actors. On reconnect, KV requests + // wait for each ActorOpenResponse before proceeding. On initial + // connection (actors empty), this is a no-op and open_actor + // proceeds optimistically. + // See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md Finding 4. + let actors: Vec<String> = + inner.open_actors.lock().await.iter().cloned().collect(); + let mut next_id = 0u32; + { + let mut ready = inner.reconnect_ready.lock().await; + let mut req_ids = inner.reconnect_request_ids.lock().await; + for actor_id in &actors { + let (tx, _rx) = watch::channel(false); + ready.insert(actor_id.clone(), tx); + req_ids.insert(next_id, actor_id.clone()); + + let msg = ToServer::ToServerRequest(ToServerRequest { + request_id: next_id, + actor_id: actor_id.clone(), + data: RequestData::ActorOpenRequest, + }); + if let Ok(bytes) = encode_to_server(&msg) { + let _ = outgoing_tx.send(bytes); + } + next_id = next_id.wrapping_add(1); + } + } + *inner.next_request_id.lock().await = next_id; + + // Store the outgoing sender so send_request can use it. + *inner.outgoing_tx.lock().await = Some(outgoing_tx); + + // Run read/write tasks until disconnect. + run_connection( + inner.clone(), + ws_read, + ws_write, + outgoing_rx, + &mut shutdown_rx, + ) + .await; + + // Connection ended. Clear sender and fail in-flight requests. + *inner.outgoing_tx.lock().await = None; + fail_all_in_flight(&inner).await; + } + Err(e) => { + tracing::warn!(%e, "kv channel connection failed"); + } + } + + if *shutdown_rx.borrow() { + return; + } + + // Exponential backoff before next reconnect attempt. + let delay = calculate_backoff(attempt); + attempt = attempt.saturating_add(1); + + tokio::select!
{ + _ = time::sleep(delay) => {} + _ = shutdown_rx.changed() => { return; } + } + } +} + +/// Run the read and write tasks for an active WebSocket connection. +/// +/// Returns when the connection is lost or a shutdown signal is received. +async fn run_connection<S, W>( + inner: Arc<Inner>, + mut ws_read: S, + mut ws_write: W, + mut outgoing_rx: mpsc::UnboundedReceiver<Vec<u8>>, + shutdown_rx: &mut watch::Receiver<bool>, +) where + S: StreamExt<Item = Result<Message, tokio_tungstenite::tungstenite::Error>> + Unpin + Send, + W: SinkExt<Message> + Unpin + Send + 'static, +{ + // Write task: forward outgoing messages from the mpsc channel to the WebSocket. + let write_shutdown_tx = inner.shutdown_tx.clone(); + let write_task = tokio::spawn(async move { + let mut write_shutdown_rx = write_shutdown_tx.subscribe(); + loop { + tokio::select! { + msg = outgoing_rx.recv() => { + match msg { + Some(bytes) => { + if ws_write + .send(Message::Binary(bytes.into())) + .await + .is_err() + { + return; + } + } + None => return, + } + } + _ = write_shutdown_rx.changed() => { return; } + } + } + }); + + // Read loop: dispatch responses, handle pings, and detect close. + loop { + tokio::select! { + msg = ws_read.next() => { + match msg { + Some(Ok(Message::Binary(data))) => { + match decode_to_client(&data) { + Ok(ToClient::ToClientResponse(response)) => { + // Check if this is a reconnect ActorOpenResponse. + let reconnect_actor = { + inner + .reconnect_request_ids + .lock() + .await + .remove(&response.request_id) + }; + + if let Some(actor_id) = reconnect_actor { + match response.data { + ResponseData::ActorOpenResponse => { + // Mark actor as ready for KV requests. + if let Some(tx) = inner + .reconnect_ready + .lock() + .await + .remove(&actor_id) + { + let _ = tx.send(true); + } + } + ResponseData::ErrorResponse(err) => { + // Re-open failed. Remove actor and drop + // the watch sender so waiters get + // RecvError -> ConnectionClosed.
+ inner + .open_actors + .lock() + .await + .remove(&actor_id); + inner + .reconnect_ready + .lock() + .await + .remove(&actor_id); + tracing::warn!( + %actor_id, + code = %err.code, + message = %err.message, + "kv channel reconnect open failed" + ); + } + _ => { + inner + .reconnect_ready + .lock() + .await + .remove(&actor_id); + } + } + } else { + let mut in_flight = + inner.in_flight.lock().await; + if let Some(tx) = + in_flight.remove(&response.request_id) + { + let result = match response.data { + ResponseData::ErrorResponse(err) => { + Err(ChannelError::ServerError( + err, + )) + } + data => Ok(data), + }; + let _ = tx.send(result); + } + // Ignore responses for unknown request IDs. + } + } + Ok(ToClient::ToClientPing(ping)) => { + // Respond with pong echoing the timestamp. + let pong = + ToServer::ToServerPong(ToServerPong { ts: ping.ts }); + if let Ok(bytes) = encode_to_server(&pong) { + let tx_guard = inner.outgoing_tx.lock().await; + if let Some(tx) = tx_guard.as_ref() { + let _ = tx.send(bytes); + } + } + } + Ok(ToClient::ToClientClose) => { + // Server requested close. Break to trigger reconnect. + break; + } + Err(e) => { + tracing::warn!(%e, "kv channel failed to decode message"); + } + } + } + Some(Ok(Message::Close(_))) | None => { + // Connection closed by server or stream ended. + break; + } + Some(Ok(_)) => { + // Ignore text, ping/pong frames. Tungstenite handles + // WebSocket-level ping/pong automatically. + } + Some(Err(_)) => { + // Read error. Connection is broken. + break; + } + } + } + _ = shutdown_rx.changed() => { break; } + } + } + + write_task.abort(); +} + +// MARK: Helpers + +/// Build the full WebSocket URL with query parameters. 
+fn build_ws_url(config: &KvChannelConfig) -> String { + let base = config.url.trim_end_matches('/'); + let ns_encoded = urlencoding::encode(&config.namespace); + let mut url = format!( + "{base}/kv/connect?namespace={ns_encoded}&protocol_version={PROTOCOL_VERSION}", + ); + if let Some(ref token) = config.token { + let token_encoded = urlencoding::encode(token); + url.push_str(&format!("&token={token_encoded}")); + } + url +} + +/// Calculate exponential backoff delay with jitter. +/// +/// Matches the runner protocol reconnect strategy from +/// engine/sdks/typescript/runner/src/utils.ts. +fn calculate_backoff(attempt: u32) -> Duration { + let delay = INITIAL_BACKOFF_MS as f64 * BACKOFF_MULTIPLIER.powi(attempt as i32); + let delay = delay.min(MAX_BACKOFF_MS as f64); + + // Add 0-25% jitter using nanosecond-based pseudo-random value. + let nanos = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap_or_default() + .subsec_nanos(); + let jitter_frac = (nanos as f64 / u32::MAX as f64) * JITTER_MAX; + let delay_with_jitter = delay * (1.0 + jitter_frac); + + Duration::from_millis(delay_with_jitter as u64) +} + +/// Fail all in-flight requests with a connection closed error. +async fn fail_all_in_flight(inner: &Inner) { + let mut in_flight = inner.in_flight.lock().await; + for (_, tx) in in_flight.drain() { + let _ = tx.send(Err(ChannelError::ConnectionClosed)); + } + // Clear reconnect state. Dropping watch senders wakes waiters with + // RecvError, which send_request maps to ConnectionClosed. 
+ inner.reconnect_ready.lock().await.clear(); + inner.reconnect_request_ids.lock().await.clear(); +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn build_ws_url_with_token() { + let config = KvChannelConfig { + url: "ws://localhost:6420".into(), + token: Some("secret123".into()), + namespace: "test-ns".into(), + }; + let url = build_ws_url(&config); + assert_eq!( + url, + "ws://localhost:6420/kv/connect?namespace=test-ns&protocol_version=1&token=secret123" + ); + } + + #[test] + fn build_ws_url_without_token() { + let config = KvChannelConfig { + url: "ws://localhost:6420".into(), + token: None, + namespace: "my-ns".into(), + }; + let url = build_ws_url(&config); + assert_eq!( + url, + "ws://localhost:6420/kv/connect?namespace=my-ns&protocol_version=1" + ); + } + + #[test] + fn build_ws_url_strips_trailing_slash() { + let config = KvChannelConfig { + url: "ws://example.com/".into(), + token: None, + namespace: "ns".into(), + }; + let url = build_ws_url(&config); + assert!(url.starts_with("ws://example.com/kv/connect?")); + } + + #[test] + fn backoff_attempt_zero() { + let delay = calculate_backoff(0); + // Initial delay is 1000ms with 0-25% jitter -> 1000..1250ms. + assert!(delay.as_millis() >= 1000); + assert!(delay.as_millis() <= 1250); + } + + #[test] + fn backoff_attempt_one() { + let delay = calculate_backoff(1); + // 1000 * 2^1 = 2000ms with jitter -> 2000..2500ms. + assert!(delay.as_millis() >= 2000); + assert!(delay.as_millis() <= 2500); + } + + #[test] + fn backoff_attempt_two() { + let delay = calculate_backoff(2); + // 1000 * 2^2 = 4000ms with jitter -> 4000..5000ms. + assert!(delay.as_millis() >= 4000); + assert!(delay.as_millis() <= 5000); + } + + #[test] + fn backoff_caps_at_max() { + let delay = calculate_backoff(100); + // Capped at 30000ms with 0-25% jitter -> 30000..37500ms. 
+ assert!(delay.as_millis() >= 30000); + assert!(delay.as_millis() <= 37500); + } + + #[test] + fn backoff_progression() { + // Verify delay increases with attempt number (ignoring jitter variance). + let d0_base = 1000u128; + let d5_base = 32000u128; + let d0 = calculate_backoff(0).as_millis(); + let d5 = calculate_backoff(5).as_millis(); + // d0 is in [1000, 1250], d5 is in [30000, 37500] (capped at 30s). + assert!(d0 >= d0_base); + assert!(d5 >= d5_base.min(30000)); + } + + #[test] + fn channel_error_display() { + assert_eq!( + ChannelError::ConnectionClosed.to_string(), + "kv channel connection closed" + ); + assert_eq!( + ChannelError::Timeout.to_string(), + "kv channel operation timed out" + ); + assert_eq!( + ChannelError::Shutdown.to_string(), + "kv channel shut down" + ); + assert_eq!( + ChannelError::Protocol("bad data".into()).to_string(), + "kv channel protocol error: bad data" + ); + assert_eq!( + ChannelError::WebSocket("connect failed".into()).to_string(), + "kv channel websocket error: connect failed" + ); + assert_eq!( + ChannelError::ServerError(ErrorResponse { + code: "actor_locked".into(), + message: "locked by another connection".into(), + }) + .to_string(), + "kv channel server error: actor_locked - locked by another connection" + ); + } + + #[test] + fn build_ws_url_encodes_special_chars() { + let config = KvChannelConfig { + url: "ws://localhost:6420".into(), + token: Some("tok&en=val?ue#frag".into()), + namespace: "ns with spaces&special".into(), + }; + let url = build_ws_url(&config); + assert_eq!( + url, + "ws://localhost:6420/kv/connect?namespace=ns%20with%20spaces%26special&protocol_version=1&token=tok%26en%3Dval%3Fue%23frag" + ); + } + + #[test] + fn protocol_version_is_one() { + assert_eq!(PROTOCOL_VERSION, 1); + } + + #[test] + fn kv_expire_matches_spec() { + assert_eq!(KV_EXPIRE_MS, 30_000); + } + + #[test] + fn backoff_constants_match_runner_protocol() { + // These must match engine/sdks/typescript/runner/src/utils.ts. 
+ assert_eq!(INITIAL_BACKOFF_MS, 1000); + assert_eq!(MAX_BACKOFF_MS, 30_000); + assert!((BACKOFF_MULTIPLIER - 2.0).abs() < f64::EPSILON); + assert!((JITTER_MAX - 0.25).abs() < f64::EPSILON); + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/src/integration_tests.rs b/rivetkit-typescript/packages/sqlite-native/src/integration_tests.rs new file mode 100644 index 0000000000..12b558edf7 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/src/integration_tests.rs @@ -0,0 +1,1123 @@ +//! Integration tests for the native VFS with a mock WebSocket KV server. +//! +//! These tests exercise the full VFS pipeline through SQLite operations, +//! verifying chunk mapping, boundary handling, metadata persistence, and +//! channel reconnection. They use a mock WebSocket server with an in-memory +//! KV store that implements the KV channel protocol. +//! +//! End-to-end tests (Layer 2) are in the driver test suite: +//! `rivetkit-typescript/packages/rivetkit/src/driver-test-suite/` + +use std::collections::{BTreeMap, HashMap}; +use std::ffi::{CStr, CString}; +use std::ptr; +use std::sync::atomic::{AtomicU64, Ordering}; +use std::sync::Arc; +use std::time::Duration; + +use futures_util::{SinkExt, StreamExt}; +use libsqlite3_sys::*; +use tokio::net::TcpListener; +use tokio::runtime::Runtime; +use tokio::sync::{broadcast, mpsc, Mutex, Semaphore}; +use tokio_tungstenite::{ + accept_hdr_async, + tungstenite::{ + handshake::server::{Request, Response}, + Message, + }, +}; + +use crate::channel::{ChannelError, KvChannel, KvChannelConfig}; +use crate::kv; +use crate::protocol::*; +use crate::vfs; +use crate::vfs::decode_file_meta; + +// MARK: VFS Name Counter + +static VFS_COUNTER: AtomicU64 = AtomicU64::new(0); + +fn unique_vfs_name(actor_id: &str) -> String { + let id = VFS_COUNTER.fetch_add(1, Ordering::Relaxed); + format!("test-vfs-{actor_id}-{id}") +} + +// MARK: Mock KV Server + +/// Operation recorded by the mock server for test verification. 
+#[derive(Debug, Clone)] +#[allow(dead_code)] +enum MockOp { + Open { actor_id: String }, + Close { actor_id: String }, + Get { actor_id: String, keys: Vec<Vec<u8>> }, + Put { actor_id: String, keys: Vec<Vec<u8>> }, + Delete { actor_id: String, keys: Vec<Vec<u8>> }, + DeleteRange { actor_id: String, start: Vec<u8>, end: Vec<u8> }, +} + +struct MockState { + /// Per-actor KV stores. BTreeMap for ordered range operations. + stores: Mutex<HashMap<String, BTreeMap<Vec<u8>, Vec<u8>>>>, + /// Single-writer locks: actor_id -> connection_id. + locks: Mutex<HashMap<String, u64>>, + /// Recorded operations for test assertions. + ops: Mutex<Vec<MockOp>>, + /// Connection ID counter. + next_conn_id: AtomicU64, + /// Broadcast to force-close all connections (for reconnection testing). + kill_tx: broadcast::Sender<()>, + /// Semaphore gate for ActorOpenResponse. When set with 0 permits, + /// open responses block until permits are added. + open_gate: Mutex<Option<Arc<Semaphore>>>, +} + +struct MockKvServer { + port: u16, + state: Arc<MockState>, +} + +impl MockKvServer { + async fn start() -> Self { + let listener = TcpListener::bind("127.0.0.1:0").await.unwrap(); + let port = listener.local_addr().unwrap().port(); + let (kill_tx, _) = broadcast::channel::<()>(16); + let state = Arc::new(MockState { + stores: Mutex::new(HashMap::new()), + locks: Mutex::new(HashMap::new()), + ops: Mutex::new(Vec::new()), + next_conn_id: AtomicU64::new(1), + kill_tx, + open_gate: Mutex::new(None), + }); + + let state_clone = state.clone(); + tokio::spawn(async move { + mock_accept_loop(listener, state_clone).await; + }); + + MockKvServer { port, state } + } + + fn url(&self) -> String { + format!("ws://127.0.0.1:{}", self.port) + } + + async fn get_store(&self, actor_id: &str) -> BTreeMap<Vec<u8>, Vec<u8>> { + self.state + .stores + .lock() + .await + .get(actor_id) + .cloned() + .unwrap_or_default() + } + + async fn ops(&self) -> Vec<MockOp> { + self.state.ops.lock().await.clone() + } + + async fn reset_ops(&self) { + self.state.ops.lock().await.clear(); + } + + async fn close_all_connections(&self) { + let _ = self.state.kill_tx.send(()); + } +} + 
+async fn mock_accept_loop(listener: TcpListener, state: Arc<MockState>) { + loop { + match listener.accept().await { + Ok((stream, _)) => { + let conn_id = state.next_conn_id.fetch_add(1, Ordering::Relaxed); + let state = state.clone(); + let mut kill_rx = state.kill_tx.subscribe(); + tokio::spawn(async move { + let ws = match accept_hdr_async(stream, |req: &Request, mut response: Response| { + if let Some(protocols) = req.headers().get("Sec-WebSocket-Protocol") { + if protocols + .to_str() + .ok() + .into_iter() + .flat_map(|value| value.split(',')) + .any(|value| value.trim() == "rivet") + { + response + .headers_mut() + .insert("Sec-WebSocket-Protocol", "rivet".parse().unwrap()); + } + } + Ok(response) + }) + .await + { + Ok(ws) => ws, + Err(_) => return, + }; + let (write, mut read) = ws.split(); + let open_actors: Arc<Mutex<Vec<String>>> = + Arc::new(Mutex::new(Vec::new())); + + // Write responses via mpsc channel so spawned tasks can send. + let (resp_tx, mut resp_rx) = mpsc::unbounded_channel::<Vec<u8>>(); + let write_handle = tokio::spawn(async move { + let mut write = write; + while let Some(bytes) = resp_rx.recv().await { + if write + .send(Message::Binary(bytes.into())) + .await + .is_err() + { + break; + } + } + }); + + loop { + tokio::select!
{
+                            msg = read.next() => {
+                                match msg {
+                                    Some(Ok(Message::Binary(data))) => {
+                                        if let Ok(ToServer::ToServerRequest(req)) = decode_to_server(&data) {
+                                            let state = state.clone();
+                                            let resp_tx = resp_tx.clone();
+                                            let open_actors = open_actors.clone();
+                                            let actor_id = req.actor_id.clone();
+                                            let request_id = req.request_id;
+                                            let data = req.data;
+                                            tokio::spawn(async move {
+                                                let resp_data = mock_handle_request(
+                                                    &state, conn_id, &actor_id, data, &open_actors,
+                                                ).await;
+                                                let resp = ToClient::ToClientResponse(ToClientResponse {
+                                                    request_id,
+                                                    data: resp_data,
+                                                });
+                                                if let Ok(bytes) = encode_to_client(&resp) {
+                                                    let _ = resp_tx.send(bytes);
+                                                }
+                                            });
+                                        }
+                                    }
+                                    Some(Ok(_)) => {}
+                                    _ => break,
+                                }
+                            }
+                            _ = kill_rx.recv() => break,
+                        }
+                    }
+
+                    // Release all locks held by this connection.
+                    let oa = open_actors.lock().await;
+                    let mut locks = state.locks.lock().await;
+                    for actor_id in oa.iter() {
+                        if locks.get(actor_id) == Some(&conn_id) {
+                            locks.remove(actor_id);
+                        }
+                    }
+
+                    // Clean up writer task.
+                    drop(resp_tx);
+                    write_handle.abort();
+                });
+            }
+            Err(_) => break,
+        }
+    }
+}
+
+async fn mock_handle_request(
+    state: &MockState,
+    conn_id: u64,
+    actor_id: &str,
+    data: RequestData,
+    open_actors: &Mutex<Vec<String>>,
+) -> ResponseData {
+    match data {
+        RequestData::ActorOpenRequest => {
+            // Wait for gate if set (for testing reconnect waiting).
+ { + let gate = state.open_gate.lock().await; + if let Some(sem) = gate.as_ref() { + let sem = sem.clone(); + drop(gate); + let _permit = sem.acquire().await.unwrap(); + } + } + let mut locks = state.locks.lock().await; + if let Some(&holder) = locks.get(actor_id) { + if holder != conn_id { + return ResponseData::ErrorResponse(ErrorResponse { + code: "actor_locked".into(), + message: "actor is locked by another connection".into(), + }); + } + } + locks.insert(actor_id.to_string(), conn_id); + open_actors.lock().await.push(actor_id.to_string()); + state.stores.lock().await.entry(actor_id.to_string()).or_default(); + state.ops.lock().await.push(MockOp::Open { actor_id: actor_id.to_string() }); + ResponseData::ActorOpenResponse + } + RequestData::ActorCloseRequest => { + let mut locks = state.locks.lock().await; + if locks.get(actor_id) == Some(&conn_id) { + locks.remove(actor_id); + } + open_actors.lock().await.retain(|a| a != actor_id); + state.ops.lock().await.push(MockOp::Close { actor_id: actor_id.to_string() }); + ResponseData::ActorCloseResponse + } + RequestData::KvGetRequest(req) => { + { + let locks = state.locks.lock().await; + if locks.get(actor_id) != Some(&conn_id) { + return ResponseData::ErrorResponse(ErrorResponse { + code: "actor_not_open".into(), + message: "actor is not open".into(), + }); + } + } + state.ops.lock().await.push(MockOp::Get { + actor_id: actor_id.to_string(), + keys: req.keys.clone(), + }); + let stores = state.stores.lock().await; + let store = stores.get(actor_id); + let mut found_keys = Vec::new(); + let mut found_values = Vec::new(); + for key in &req.keys { + if let Some(s) = store { + if let Some(v) = s.get(key) { + found_keys.push(key.clone()); + found_values.push(v.clone()); + } + } + } + ResponseData::KvGetResponse(KvGetResponse { + keys: found_keys, + values: found_values, + }) + } + RequestData::KvPutRequest(req) => { + { + let locks = state.locks.lock().await; + if locks.get(actor_id) != Some(&conn_id) { + return 
ResponseData::ErrorResponse(ErrorResponse {
+                        code: "actor_not_open".into(),
+                        message: "actor is not open".into(),
+                    });
+                }
+            }
+            state.ops.lock().await.push(MockOp::Put {
+                actor_id: actor_id.to_string(),
+                keys: req.keys.clone(),
+            });
+            let mut stores = state.stores.lock().await;
+            let store = stores.entry(actor_id.to_string()).or_default();
+            for (k, v) in req.keys.into_iter().zip(req.values) {
+                store.insert(k, v);
+            }
+            ResponseData::KvPutResponse
+        }
+        RequestData::KvDeleteRequest(req) => {
+            {
+                let locks = state.locks.lock().await;
+                if locks.get(actor_id) != Some(&conn_id) {
+                    return ResponseData::ErrorResponse(ErrorResponse {
+                        code: "actor_not_open".into(),
+                        message: "actor is not open".into(),
+                    });
+                }
+            }
+            state.ops.lock().await.push(MockOp::Delete {
+                actor_id: actor_id.to_string(),
+                keys: req.keys.clone(),
+            });
+            let mut stores = state.stores.lock().await;
+            if let Some(store) = stores.get_mut(actor_id) {
+                for k in &req.keys {
+                    store.remove(k);
+                }
+            }
+            ResponseData::KvDeleteResponse
+        }
+        RequestData::KvDeleteRangeRequest(req) => {
+            {
+                let locks = state.locks.lock().await;
+                if locks.get(actor_id) != Some(&conn_id) {
+                    return ResponseData::ErrorResponse(ErrorResponse {
+                        code: "actor_not_open".into(),
+                        message: "actor is not open".into(),
+                    });
+                }
+            }
+            state.ops.lock().await.push(MockOp::DeleteRange {
+                actor_id: actor_id.to_string(),
+                start: req.start.clone(),
+                end: req.end.clone(),
+            });
+            let mut stores = state.stores.lock().await;
+            if let Some(store) = stores.get_mut(actor_id) {
+                let to_remove: Vec<Vec<u8>> = store
+                    .range(req.start.clone()..req.end.clone())
+                    .map(|(k, _)| k.clone())
+                    .collect();
+                for k in to_remove {
+                    store.remove(&k);
+                }
+            }
+            ResponseData::KvDeleteResponse
+        }
+    }
+}
+
+// MARK: Test Helpers
+
+fn create_runtime() -> Runtime {
+    tokio::runtime::Builder::new_multi_thread()
+        .enable_all()
+        .build()
+        .unwrap()
+}
+
+/// Set up a mock server, connect channel, and open an actor.
+async fn setup_server_and_channel(actor_id: &str) -> (MockKvServer, Arc<KvChannel>) {
+    let server = MockKvServer::start().await;
+    let channel = KvChannel::connect(KvChannelConfig {
+        url: server.url(),
+        token: None,
+        namespace: "test".into(),
+    });
+    tokio::time::sleep(Duration::from_millis(300)).await;
+    let channel = Arc::new(channel);
+    channel.open_actor(actor_id).await.unwrap();
+    (server, channel)
+}
+
+/// Open a SQLite database via the KV VFS.
+fn open_test_db(rt: &Runtime, channel: Arc<KvChannel>, actor_id: &str) -> vfs::NativeDatabase {
+    let vfs_name = unique_vfs_name(actor_id);
+    let kv_vfs = vfs::KvVfs::register(
+        &vfs_name,
+        channel,
+        actor_id.to_string(),
+        rt.handle().clone(),
+    )
+    .unwrap();
+    vfs::open_database(kv_vfs, actor_id).unwrap()
+}
+
+/// Execute a SQL statement, panicking on failure.
+unsafe fn exec_sql(db: *mut sqlite3, sql: &str) {
+    let c_sql = CString::new(sql).unwrap();
+    let rc = sqlite3_exec(db, c_sql.as_ptr(), None, ptr::null_mut(), ptr::null_mut());
+    if rc != SQLITE_OK {
+        let msg = CStr::from_ptr(sqlite3_errmsg(db)).to_string_lossy();
+        panic!("SQL '{}' failed (rc={}): {}", sql, rc, msg);
+    }
+}
+
+/// Query a SQL statement and return rows as Vec<Vec<String>>.
+unsafe fn query_rows(db: *mut sqlite3, sql: &str) -> Vec<Vec<String>> {
+    let c_sql = CString::new(sql).unwrap();
+    let mut stmt: *mut sqlite3_stmt = ptr::null_mut();
+    let rc = sqlite3_prepare_v2(db, c_sql.as_ptr(), -1, &mut stmt, ptr::null_mut());
+    if rc != SQLITE_OK {
+        let msg = CStr::from_ptr(sqlite3_errmsg(db)).to_string_lossy();
+        panic!("Prepare '{}' failed: {}", sql, msg);
+    }
+    let col_count = sqlite3_column_count(stmt);
+    let mut rows = Vec::new();
+    loop {
+        let rc = sqlite3_step(stmt);
+        if rc == SQLITE_DONE {
+            break;
+        }
+        assert_eq!(rc, SQLITE_ROW, "step failed with rc={rc}");
+        let mut row = Vec::new();
+        for i in 0..col_count {
+            let ptr = sqlite3_column_text(stmt, i);
+            if ptr.is_null() {
+                row.push("NULL".to_string());
+            } else {
+                row.push(
+                    CStr::from_ptr(ptr as *const _)
+                        .to_string_lossy()
+                        .into_owned(),
+                );
+            }
+        }
+        rows.push(row);
+    }
+    sqlite3_finalize(stmt);
+    rows
+}
+
+fn key_targets_file_tag(key: &[u8], file_tag: u8) -> bool {
+    key.len() >= 4
+        && key[0] == kv::SQLITE_PREFIX
+        && (key[2] == kv::META_PREFIX || key[2] == kv::CHUNK_PREFIX)
+        && key[3] == file_tag
+}
+
+fn op_targets_file_tag(op: &MockOp, file_tag: u8) -> bool {
+    match op {
+        MockOp::Get { keys, .. } | MockOp::Put { keys, .. } | MockOp::Delete { keys, .. } => {
+            keys.iter().any(|key| key_targets_file_tag(key, file_tag))
+        }
+        MockOp::DeleteRange { start, end, .. } => {
+            key_targets_file_tag(start, file_tag) || key_targets_file_tag(end, file_tag)
+        }
+        MockOp::Open { .. } | MockOp::Close { ..
} => false, + } +} + +// MARK: Tests + +#[test] +fn test_basic_sql_through_vfs() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-basic")); + let db = open_test_db(&rt, channel.clone(), "actor-basic"); + + unsafe { + exec_sql(db.as_ptr(), "CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)"); + exec_sql(db.as_ptr(), "INSERT INTO test VALUES (1, 'hello')"); + exec_sql(db.as_ptr(), "INSERT INTO test VALUES (2, 'world')"); + + let rows = query_rows(db.as_ptr(), "SELECT id, value FROM test ORDER BY id"); + assert_eq!(rows.len(), 2); + assert_eq!(rows[0], vec!["1", "hello"]); + assert_eq!(rows[1], vec!["2", "world"]); + } + + // Verify KV store has main file metadata and at least chunk 0. + let store = rt.block_on(server.get_store("actor-basic")); + let meta_key = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + assert!(store.contains_key(&meta_key), "metadata key missing"); + let chunk0_key = kv::get_chunk_key(kv::FILE_TAG_MAIN, 0).to_vec(); + assert!(store.contains_key(&chunk0_key), "chunk 0 missing"); + + // Verify metadata decodes to a valid file size. 
+ let meta = store.get(&meta_key).unwrap(); + let file_size = decode_file_meta(meta).expect("metadata decode failed"); + assert!(file_size > 0, "file size should be positive"); + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_open_prewrites_empty_main_page() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-empty-page")); + let db = open_test_db(&rt, channel.clone(), "actor-empty-page"); + + let store = rt.block_on(server.get_store("actor-empty-page")); + let meta_key = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + let chunk0_key = kv::get_chunk_key(kv::FILE_TAG_MAIN, 0).to_vec(); + + let meta = store.get(&meta_key).expect("main metadata key missing"); + assert_eq!(decode_file_meta(meta).unwrap(), kv::CHUNK_SIZE as i64); + + let chunk0 = store.get(&chunk0_key).expect("main chunk 0 missing"); + assert_eq!(chunk0.len(), kv::CHUNK_SIZE); + assert_eq!(&chunk0[..16], b"SQLite format 3\0"); + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_warm_update_uses_batch_atomic_put_without_journal() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-batch-atomic")); + let db = open_test_db(&rt, channel.clone(), "actor-batch-atomic"); + + unsafe { + exec_sql(db.as_ptr(), "CREATE TABLE counter (value INTEGER)"); + exec_sql(db.as_ptr(), "INSERT INTO counter VALUES (0)"); + } + + rt.block_on(server.reset_ops()); + + unsafe { + exec_sql(db.as_ptr(), "UPDATE counter SET value = value + 1"); + } + + let ops = rt.block_on(server.ops()); + let put_ops: Vec<_> = ops + .iter() + .filter(|op| matches!(op, MockOp::Put { .. })) + .collect(); + let get_ops: Vec<_> = ops + .iter() + .filter(|op| matches!(op, MockOp::Get { .. 
}))
+        .collect();
+
+    assert_eq!(put_ops.len(), 1, "warm update should flush with a single put");
+    assert_eq!(get_ops.len(), 0, "warm update should not need KV reads");
+    assert!(
+        !ops.iter().any(|op| op_targets_file_tag(op, kv::FILE_TAG_JOURNAL)),
+        "warm update should not touch journal keys when BATCH_ATOMIC is active"
+    );
+
+    drop(db);
+    rt.block_on(channel.disconnect());
+}
+
+#[test]
+fn test_multi_chunk_data() {
+    let rt = create_runtime();
+    let (server, channel) = rt.block_on(setup_server_and_channel("actor-multi-chunk"));
+    let db = open_test_db(&rt, channel.clone(), "actor-multi-chunk");
+
+    unsafe {
+        exec_sql(db.as_ptr(), "CREATE TABLE big (id INTEGER PRIMARY KEY, data TEXT)");
+        // Insert enough data to span multiple 4 KiB chunks.
+        for i in 0..20 {
+            let data = "X".repeat(1000);
+            let sql = format!("INSERT INTO big VALUES ({i}, '{data}')");
+            exec_sql(db.as_ptr(), &sql);
+        }
+    }
+
+    let store = rt.block_on(server.get_store("actor-multi-chunk"));
+
+    // Count chunk keys for the main file.
+    let chunk_keys: Vec<_> = store
+        .keys()
+        .filter(|k| {
+            k.len() == 8
+                && k[0] == kv::SQLITE_PREFIX
+                && k[2] == kv::CHUNK_PREFIX
+                && k[3] == kv::FILE_TAG_MAIN
+        })
+        .collect();
+    assert!(
+        chunk_keys.len() >= 2,
+        "expected at least 2 chunks, got {}",
+        chunk_keys.len()
+    );
+
+    // Verify chunk indices are sequential starting from 0.
+    let mut indices: Vec<u32> = chunk_keys
+        .iter()
+        .map(|k| u32::from_be_bytes([k[4], k[5], k[6], k[7]]))
+        .collect();
+    indices.sort();
+    for (i, &idx) in indices.iter().enumerate() {
+        assert_eq!(idx, i as u32, "chunk indices should be sequential");
+    }
+
+    // Verify metadata shows file size spanning 2+ chunks.
+ let meta_key = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + let file_size = decode_file_meta(store.get(&meta_key).unwrap()).unwrap(); + assert!( + file_size >= (kv::CHUNK_SIZE * 2) as i64, + "file should span at least 2 chunks, size={file_size}" + ); + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_chunk_boundary_data_integrity() { + let rt = create_runtime(); + let (_server, channel) = rt.block_on(setup_server_and_channel("actor-boundary")); + let db = open_test_db(&rt, channel.clone(), "actor-boundary"); + + unsafe { + exec_sql( + db.as_ptr(), + "CREATE TABLE chunks (id INTEGER PRIMARY KEY, payload TEXT)", + ); + // Insert enough data to span chunk boundaries. + for i in 0..50 { + let payload = format!("{:0>500}", i); + let sql = format!("INSERT INTO chunks VALUES ({i}, '{payload}')"); + exec_sql(db.as_ptr(), &sql); + } + + // Verify all data reads back correctly despite chunk boundaries. + let rows = query_rows(db.as_ptr(), "SELECT id, payload FROM chunks ORDER BY id"); + assert_eq!(rows.len(), 50); + for (i, row) in rows.iter().enumerate() { + assert_eq!(row[0], i.to_string()); + assert_eq!(row[1], format!("{:0>500}", i)); + } + } + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_large_truncate_journal_fallback_produces_delete_batches() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-truncate")); + let db = open_test_db(&rt, channel.clone(), "actor-truncate"); + + unsafe { + exec_sql(db.as_ptr(), "PRAGMA journal_mode = truncate"); + exec_sql(db.as_ptr(), "CREATE TABLE trunc (x TEXT)"); + } + + rt.block_on(server.reset_ops()); + + unsafe { + exec_sql(db.as_ptr(), "BEGIN"); + for i in 0..200 { + let data = "Z".repeat(3500); + exec_sql( + db.as_ptr(), + &format!("INSERT INTO trunc VALUES ('{data}{i:03}')"), + ); + } + exec_sql(db.as_ptr(), "COMMIT"); + } + + let ops = rt.block_on(server.ops()); + let delete_ops: Vec<_> = ops + .iter() + .filter(|op| 
matches!(op, MockOp::Delete { .. }) && op_targets_file_tag(op, kv::FILE_TAG_JOURNAL)) + .collect(); + assert!( + !delete_ops.is_empty(), + "expected journal Delete operations from truncate fallback" + ); + + for op in &delete_ops { + if let MockOp::Delete { keys, .. } = op { + for key in keys { + assert_eq!(key[0], kv::SQLITE_PREFIX, "key should have SQLITE_PREFIX"); + assert_eq!(key[2], kv::CHUNK_PREFIX, "key should have CHUNK_PREFIX"); + } + } + } + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_small_default_transaction_avoids_journal_keys() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-del-journal")); + let db = open_test_db(&rt, channel.clone(), "actor-del-journal"); + + unsafe { + exec_sql(db.as_ptr(), "CREATE TABLE djtest (x TEXT)"); + exec_sql(db.as_ptr(), "INSERT INTO djtest VALUES ('seed')"); + } + + rt.block_on(server.reset_ops()); + + unsafe { + exec_sql(db.as_ptr(), "BEGIN"); + for i in 0..20 { + exec_sql(db.as_ptr(), &format!("INSERT INTO djtest VALUES ('row_{i}')")); + } + exec_sql(db.as_ptr(), "COMMIT"); + } + + let ops = rt.block_on(server.ops()); + let put_ops: Vec<_> = ops + .iter() + .filter(|op| matches!(op, MockOp::Put { .. 
})) + .collect(); + assert!( + !ops.iter().any(|op| op_targets_file_tag(op, kv::FILE_TAG_JOURNAL)), + "small transactions should avoid journal keys when BATCH_ATOMIC is active" + ); + assert_eq!( + put_ops.len(), + 1, + "small transactions should flush once at COMMIT_ATOMIC_WRITE" + ); + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_metadata_tracks_file_size() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-metadata")); + let db = open_test_db(&rt, channel.clone(), "actor-metadata"); + + unsafe { + exec_sql(db.as_ptr(), "CREATE TABLE meta_test (id INTEGER)"); + } + + let store = rt.block_on(server.get_store("actor-metadata")); + let meta_key = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + let meta = store.get(&meta_key).unwrap(); + let file_size = decode_file_meta(meta).unwrap(); + + // After CREATE TABLE, the file should be at least 1 page (4096 bytes). + assert!( + file_size >= 4096, + "file should be at least 1 page, got {file_size}" + ); + assert_eq!( + file_size % 4096, + 0, + "file size should be page-aligned, got {file_size}" + ); + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_close_flushes_and_reopen_preserves_data() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-reopen")); + + // Write data and close the database. + { + let db = open_test_db(&rt, channel.clone(), "actor-reopen"); + unsafe { + exec_sql(db.as_ptr(), "CREATE TABLE persist (id INTEGER, val TEXT)"); + exec_sql(db.as_ptr(), "INSERT INTO persist VALUES (1, 'saved')"); + exec_sql(db.as_ptr(), "INSERT INTO persist VALUES (2, 'data')"); + } + drop(db); // xClose flushes metadata + } + + // Verify metadata was flushed to the store. 
+ let store = rt.block_on(server.get_store("actor-reopen")); + let meta_key = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + assert!(store.contains_key(&meta_key), "metadata should be flushed on close"); + + // Close and reopen actor (release and reacquire lock). + rt.block_on(async { + channel.close_actor("actor-reopen").await.unwrap(); + channel.open_actor("actor-reopen").await.unwrap(); + }); + + // Reopen database and verify data persists. + { + let db = open_test_db(&rt, channel.clone(), "actor-reopen"); + unsafe { + let rows = query_rows(db.as_ptr(), "SELECT id, val FROM persist ORDER BY id"); + assert_eq!(rows.len(), 2); + assert_eq!(rows[0], vec!["1", "saved"]); + assert_eq!(rows[1], vec!["2", "data"]); + } + drop(db); + } + + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_file_tags_encoding() { + let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-tags")); + let db = open_test_db(&rt, channel.clone(), "actor-tags"); + + unsafe { + // A write transaction creates a journal file with a different file tag. + exec_sql(db.as_ptr(), "BEGIN"); + exec_sql(db.as_ptr(), "CREATE TABLE tag_test (x INTEGER)"); + exec_sql(db.as_ptr(), "INSERT INTO tag_test VALUES (1)"); + exec_sql(db.as_ptr(), "COMMIT"); + } + + let store = rt.block_on(server.get_store("actor-tags")); + + // Main file metadata and chunks should exist. + let main_meta = kv::get_meta_key(kv::FILE_TAG_MAIN).to_vec(); + assert!(store.contains_key(&main_meta), "main metadata should exist"); + let main_chunk0 = kv::get_chunk_key(kv::FILE_TAG_MAIN, 0).to_vec(); + assert!(store.contains_key(&main_chunk0), "main chunk 0 should exist"); + + // All chunk keys should have valid file tags. 
+ let chunk_keys: Vec<_> = store + .keys() + .filter(|k| k.len() == 8 && k[0] == kv::SQLITE_PREFIX && k[2] == kv::CHUNK_PREFIX) + .collect(); + for key in &chunk_keys { + assert!( + key[3] == kv::FILE_TAG_MAIN + || key[3] == kv::FILE_TAG_JOURNAL + || key[3] == kv::FILE_TAG_WAL + || key[3] == kv::FILE_TAG_SHM, + "unexpected file tag: {}", + key[3] + ); + } + + drop(db); + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_error_actor_not_open() { + let rt = create_runtime(); + let (_server, channel) = rt.block_on(async { + let server = MockKvServer::start().await; + let channel = KvChannel::connect(KvChannelConfig { + url: server.url(), + token: None, + namespace: "test".into(), + }); + tokio::time::sleep(Duration::from_millis(300)).await; + (server, Arc::new(channel)) + }); + + // Send a KV request without opening the actor. + let result = rt.block_on( + channel.send_request( + "unopened-actor", + RequestData::KvGetRequest(KvGetRequest { + keys: vec![vec![1]], + }), + ), + ); + + assert!( + matches!( + result, + Err(ChannelError::ServerError(ref e)) if e.code == "actor_not_open" + ), + "expected actor_not_open error, got: {result:?}" + ); + + rt.block_on(channel.disconnect()); +} + +#[test] +fn test_error_actor_locked() { + let rt = create_runtime(); + let (_server, ch1, ch2) = rt.block_on(async { + let server = MockKvServer::start().await; + let config = KvChannelConfig { + url: server.url(), + token: None, + namespace: "test".into(), + }; + let ch1 = KvChannel::connect(config.clone()); + let ch2 = KvChannel::connect(config); + tokio::time::sleep(Duration::from_millis(300)).await; + + let ch1 = Arc::new(ch1); + let ch2 = Arc::new(ch2); + + // First channel opens the actor. + ch1.open_actor("shared-actor").await.unwrap(); + (server, ch1, ch2) + }); + + // Second channel tries to open the same actor. 
+    let result = rt.block_on(ch2.open_actor("shared-actor"));
+    assert!(
+        matches!(
+            result,
+            Err(ChannelError::ServerError(ref e)) if e.code == "actor_locked"
+        ),
+        "expected actor_locked error, got: {result:?}"
+    );
+
+    rt.block_on(ch1.disconnect());
+    rt.block_on(ch2.disconnect());
+}
+
+#[test]
+fn test_optimistic_open_pipelining() {
+    let rt = create_runtime();
+    let (server, channel) = rt.block_on(setup_server_and_channel("actor-pipeline"));
+
+    // Fire off multiple KV requests concurrently (pipelined on the WebSocket).
+    let results: Vec<Result<ResponseData, ChannelError>> = rt.block_on(async {
+        let mut handles = Vec::new();
+        for i in 0..5u8 {
+            let ch = channel.clone();
+            handles.push(tokio::spawn(async move {
+                ch.send_request(
+                    "actor-pipeline",
+                    RequestData::KvPutRequest(KvPutRequest {
+                        keys: vec![vec![i]],
+                        values: vec![vec![i, i]],
+                    }),
+                )
+                .await
+            }));
+        }
+        let mut results = Vec::new();
+        for h in handles {
+            results.push(h.await.unwrap());
+        }
+        results
+    });
+
+    // All pipelined requests should succeed.
+    for (i, result) in results.iter().enumerate() {
+        assert!(
+            matches!(result, Ok(ResponseData::KvPutResponse)),
+            "pipelined request {i} failed: {result:?}"
+        );
+    }
+
+    // Verify all 5 keys were stored.
+    let store = rt.block_on(server.get_store("actor-pipeline"));
+    for i in 0..5u8 {
+        assert_eq!(store.get(&vec![i]), Some(&vec![i, i]));
+    }
+
+    rt.block_on(channel.disconnect());
+}
+
+#[test]
+fn test_reconnection_reopens_actors() {
+    let rt = create_runtime();
+    let (server, channel) = rt.block_on(setup_server_and_channel("actor-reconnect"));
+
+    // Verify initial connectivity.
+    let result = rt.block_on(channel.send_request(
+        "actor-reconnect",
+        RequestData::KvPutRequest(KvPutRequest {
+            keys: vec![vec![0x01]],
+            values: vec![vec![0xAA]],
+        }),
+    ));
+    assert!(result.is_ok(), "initial put failed: {result:?}");
+
+    // Force-close all connections to simulate network failure.
+ rt.block_on(async { + server.close_all_connections().await; + // Give the connection handlers time to release locks. + tokio::time::sleep(Duration::from_millis(200)).await; + }); + + // Wait for reconnect (initial backoff ~1s + connection time). + rt.block_on(async { + tokio::time::sleep(Duration::from_secs(3)).await; + }); + + // After reconnect, the channel should have re-opened the actor. + // Verify by reading back the data we stored before the disconnect. + let result = rt.block_on(channel.send_request( + "actor-reconnect", + RequestData::KvGetRequest(KvGetRequest { + keys: vec![vec![0x01]], + }), + )); + match &result { + Ok(ResponseData::KvGetResponse(resp)) => { + assert_eq!(resp.keys, vec![vec![0x01u8]]); + assert_eq!(resp.values, vec![vec![0xAAu8]]); + } + other => panic!("KV get after reconnect failed: {other:?}"), + } + + // Verify the actor was opened at least twice (initial + reconnect). + let ops = rt.block_on(server.ops()); + let open_count = ops + .iter() + .filter(|op| { + matches!(op, MockOp::Open { actor_id } if actor_id == "actor-reconnect") + }) + .count(); + assert!( + open_count >= 2, + "actor should have been opened at least twice (initial + reconnect), got {open_count}" + ); +} + +#[test] +fn test_reconnect_kv_waits_for_open_response() { + // Verify that on reconnect, KV requests block until ActorOpenResponse is + // received. Uses a semaphore gate on the mock server to hold the open + // response, then checks that a KV request hasn't completed (client is + // waiting), and finally releases the gate to confirm the request succeeds. + // + // The mock server processes messages concurrently (spawned tasks), so + // without client-side waiting, a KV request sent during the gate hold + // would hit actor_not_open (lock not yet acquired). With client-side + // waiting, the KV request is held on the client until the open completes. 
+ let rt = create_runtime(); + let (server, channel) = rt.block_on(setup_server_and_channel("actor-rwait")); + + // Write initial data. + rt.block_on( + channel.send_request( + "actor-rwait", + RequestData::KvPutRequest(KvPutRequest { + keys: vec![vec![0x01]], + values: vec![vec![0xEE]], + }), + ), + ) + .unwrap(); + + // Set up gate (0 permits = blocks open responses). + let gate = Arc::new(Semaphore::new(0)); + rt.block_on(async { + *server.state.open_gate.lock().await = Some(gate.clone()); + }); + + // Force disconnect. + rt.block_on(async { + server.close_all_connections().await; + tokio::time::sleep(Duration::from_millis(200)).await; + }); + + // Wait for WebSocket to reconnect (backoff ~1s + connection time). + // The reconnect ActorOpenRequest is sent and received by the mock server, + // but the response is held by the gate. + rt.block_on(async { + tokio::time::sleep(Duration::from_secs(2)).await; + }); + + // Spawn a task that sends a KV request. With reconnect waiting, this + // should block until the ActorOpenResponse arrives. + let ch = channel.clone(); + let kv_handle = rt.spawn(async move { + ch.send_request( + "actor-rwait", + RequestData::KvGetRequest(KvGetRequest { + keys: vec![vec![0x01]], + }), + ) + .await + }); + + // Give the KV task time to reach the wait point. + rt.block_on(async { + tokio::time::sleep(Duration::from_millis(500)).await; + }); + + // Verify the KV task is still pending (blocked by reconnect readiness). + // Without client-side waiting, the concurrent mock server would have + // already returned actor_not_open and the task would be finished. + assert!( + !kv_handle.is_finished(), + "KV request should be waiting for ActorOpenResponse" + ); + + // Release the gate so the mock server sends ActorOpenResponse. + gate.add_permits(1); + + // KV request should now complete successfully. 
+ let result = rt.block_on(kv_handle).unwrap(); + match &result { + Ok(ResponseData::KvGetResponse(resp)) => { + assert_eq!(resp.keys, vec![vec![0x01u8]]); + assert_eq!(resp.values, vec![vec![0xEEu8]]); + } + other => panic!("KV get after gated reconnect failed: {other:?}"), + } + + // Clean up gate. + rt.block_on(async { + *server.state.open_gate.lock().await = None; + }); + rt.block_on(channel.disconnect()); +} diff --git a/rivetkit-typescript/packages/sqlite-native/src/kv.rs b/rivetkit-typescript/packages/sqlite-native/src/kv.rs new file mode 100644 index 0000000000..8b23771022 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/src/kv.rs @@ -0,0 +1,200 @@ +//! KV key layout for SQLite-over-KV storage. +//! +//! This module must produce byte-identical keys to the TypeScript implementation +//! in `rivetkit-typescript/packages/sqlite-vfs/src/kv.ts`. +//! +//! Key layout: +//! Meta key: [SQLITE_PREFIX, SCHEMA_VERSION, META_PREFIX, file_tag] (4 bytes) +//! Chunk key: [SQLITE_PREFIX, SCHEMA_VERSION, CHUNK_PREFIX, file_tag, chunk_index_u32_be] (8 bytes) + +/// Size of each file chunk stored in KV. Matches CHUNK_SIZE in kv.ts. +pub const CHUNK_SIZE: usize = 4096; + +/// Top-level SQLite prefix byte. Matches SQLITE_PREFIX in kv.ts. +pub const SQLITE_PREFIX: u8 = 0x08; + +/// Schema version namespace byte after SQLITE_PREFIX. +pub const SQLITE_SCHEMA_VERSION: u8 = 0x01; + +/// Key prefix byte for file metadata (after SQLITE_PREFIX + version). +pub const META_PREFIX: u8 = 0x00; + +/// Key prefix byte for file chunks (after SQLITE_PREFIX + version). +pub const CHUNK_PREFIX: u8 = 0x01; + +/// File kind tag for the actor's main database file. +pub const FILE_TAG_MAIN: u8 = 0x00; + +/// File kind tag for the actor's rollback journal sidecar. +pub const FILE_TAG_JOURNAL: u8 = 0x01; + +/// File kind tag for the actor's WAL sidecar. +pub const FILE_TAG_WAL: u8 = 0x02; + +/// File kind tag for the actor's SHM sidecar. 
+pub const FILE_TAG_SHM: u8 = 0x03; + +/// Returns the 4-byte metadata key for the given file tag. +/// +/// Format: `[SQLITE_PREFIX, SCHEMA_VERSION, META_PREFIX, file_tag]` +pub fn get_meta_key(file_tag: u8) -> [u8; 4] { + [SQLITE_PREFIX, SQLITE_SCHEMA_VERSION, META_PREFIX, file_tag] +} + +/// Returns the 8-byte chunk key for the given file tag and chunk index. +/// +/// Format: `[SQLITE_PREFIX, SCHEMA_VERSION, CHUNK_PREFIX, file_tag, chunk_index_u32_be]` +/// +/// The chunk index is derived from byte offset as `offset / CHUNK_SIZE`. +pub fn get_chunk_key(file_tag: u8, chunk_index: u32) -> [u8; 8] { + let ci = chunk_index.to_be_bytes(); + [ + SQLITE_PREFIX, + SQLITE_SCHEMA_VERSION, + CHUNK_PREFIX, + file_tag, + ci[0], + ci[1], + ci[2], + ci[3], + ] +} + +/// Maximum file size in bytes before chunk index overflow. +/// +/// Chunk indices are u32, so the maximum addressable byte is +/// (u32::MAX as u64 + 1) * CHUNK_SIZE. Writes or truncates beyond this would +/// wrap the chunk index. This matches MAX_FILE_SIZE_BYTES in the WASM VFS. +pub const MAX_FILE_SIZE: u64 = (u32::MAX as u64 + 1) * CHUNK_SIZE as u64; + +/// Returns a 4-byte key that is lexicographically just past all chunk keys for +/// the given file tag. Useful as the exclusive end bound for deleteRange. +/// +/// Format: `[SQLITE_PREFIX, SCHEMA_VERSION, CHUNK_PREFIX, file_tag + 1]` +/// +/// This is shorter than a chunk key but lexicographically greater than any +/// 8-byte chunk key with the same file_tag prefix. 
+pub fn get_chunk_key_range_end(file_tag: u8) -> [u8; 4] { + [SQLITE_PREFIX, SQLITE_SCHEMA_VERSION, CHUNK_PREFIX, file_tag + 1] +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn constants_match_typescript() { + assert_eq!(CHUNK_SIZE, 4096); + assert_eq!(SQLITE_PREFIX, 8); + assert_eq!(SQLITE_SCHEMA_VERSION, 1); + assert_eq!(META_PREFIX, 0); + assert_eq!(CHUNK_PREFIX, 1); + assert_eq!(FILE_TAG_MAIN, 0); + assert_eq!(FILE_TAG_JOURNAL, 1); + assert_eq!(FILE_TAG_WAL, 2); + assert_eq!(FILE_TAG_SHM, 3); + } + + #[test] + fn meta_key_main() { + // TypeScript: getMetaKey(FILE_TAG_MAIN) => [8, 1, 0, 0] + assert_eq!(get_meta_key(FILE_TAG_MAIN), [0x08, 0x01, 0x00, 0x00]); + } + + #[test] + fn meta_key_journal() { + // TypeScript: getMetaKey(FILE_TAG_JOURNAL) => [8, 1, 0, 1] + assert_eq!(get_meta_key(FILE_TAG_JOURNAL), [0x08, 0x01, 0x00, 0x01]); + } + + #[test] + fn meta_key_wal() { + assert_eq!(get_meta_key(FILE_TAG_WAL), [0x08, 0x01, 0x00, 0x02]); + } + + #[test] + fn meta_key_shm() { + assert_eq!(get_meta_key(FILE_TAG_SHM), [0x08, 0x01, 0x00, 0x03]); + } + + #[test] + fn chunk_key_zero_index() { + // TypeScript: getChunkKey(FILE_TAG_MAIN, 0) => [8, 1, 1, 0, 0, 0, 0, 0] + assert_eq!( + get_chunk_key(FILE_TAG_MAIN, 0), + [0x08, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00] + ); + } + + #[test] + fn chunk_key_index_one() { + // TypeScript: getChunkKey(FILE_TAG_MAIN, 1) => [8, 1, 1, 0, 0, 0, 0, 1] + assert_eq!( + get_chunk_key(FILE_TAG_MAIN, 1), + [0x08, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x01] + ); + } + + #[test] + fn chunk_key_large_index() { + // TypeScript: getChunkKey(FILE_TAG_MAIN, 256) => [8, 1, 1, 0, 0, 0, 1, 0] + assert_eq!( + get_chunk_key(FILE_TAG_MAIN, 256), + [0x08, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00] + ); + } + + #[test] + fn chunk_key_max_index() { + // TypeScript: getChunkKey(FILE_TAG_MAIN, 0xFFFFFFFF) => [8, 1, 1, 0, 255, 255, 255, 255] + assert_eq!( + get_chunk_key(FILE_TAG_MAIN, u32::MAX), + [0x08, 0x01, 0x01, 0x00, 0xFF, 0xFF, 0xFF, 0xFF] 
+ ); + } + + #[test] + fn chunk_key_journal_tag() { + assert_eq!( + get_chunk_key(FILE_TAG_JOURNAL, 42), + [0x08, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 42] + ); + } + + #[test] + fn chunk_key_big_endian_encoding() { + // 0x01020304 => bytes [1, 2, 3, 4] + assert_eq!( + get_chunk_key(FILE_TAG_MAIN, 0x01020304), + [0x08, 0x01, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04] + ); + } + + #[test] + fn chunk_key_range_end_main() { + // TypeScript: getChunkKeyRangeEnd(FILE_TAG_MAIN) => [8, 1, 1, 1] + assert_eq!( + get_chunk_key_range_end(FILE_TAG_MAIN), + [0x08, 0x01, 0x01, 0x01] + ); + } + + #[test] + fn chunk_key_range_end_journal() { + // TypeScript: getChunkKeyRangeEnd(FILE_TAG_JOURNAL) => [8, 1, 1, 2] + assert_eq!( + get_chunk_key_range_end(FILE_TAG_JOURNAL), + [0x08, 0x01, 0x01, 0x02] + ); + } + + #[test] + fn range_end_is_past_all_chunk_keys() { + // The range end key must be lexicographically greater than any chunk key for the same tag. + let max_chunk = get_chunk_key(FILE_TAG_MAIN, u32::MAX); + let range_end = get_chunk_key_range_end(FILE_TAG_MAIN); + // Compare as slices. The range end [8,1,1,1] > [8,1,1,0,FF,FF,FF,FF] + // because at byte index 3, 1 > 0. + assert!(range_end.as_slice() > max_chunk.as_slice()); + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/src/lib.rs b/rivetkit-typescript/packages/sqlite-native/src/lib.rs new file mode 100644 index 0000000000..6a75310271 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/src/lib.rs @@ -0,0 +1,965 @@ +//! Native SQLite addon for RivetKit. +//! +//! Routes SQLite page-level KV operations over a WebSocket KV channel protocol. +//! This is the native Rust counterpart to the WASM implementation in `@rivetkit/sqlite-vfs`. +//! +//! The native VFS and WASM VFS must match 1:1 in behavior: +//! - KV key layout and encoding (see `kv.rs` and `sqlite-vfs/src/kv.ts`) +//! - Chunk size (4 KiB) +//! - PRAGMA settings +//! - VFS callback-to-KV-operation mapping +//! - Delete and truncate behavior +//! 
- Journal and BATCH_ATOMIC behavior + +use std::ffi::{c_char, c_int, c_void, CStr, CString}; +use std::num::NonZeroUsize; +use std::ptr; +use std::slice; +use std::sync::{Arc, Mutex}; + +use libsqlite3_sys::{ + sqlite3, sqlite3_bind_blob, sqlite3_bind_double, sqlite3_bind_int64, + sqlite3_bind_null, sqlite3_bind_text, sqlite3_changes, sqlite3_clear_bindings, + sqlite3_column_blob, sqlite3_column_bytes, sqlite3_column_count, sqlite3_column_double, + sqlite3_column_int64, sqlite3_column_name, sqlite3_column_text, sqlite3_column_type, + sqlite3_errmsg, sqlite3_finalize, sqlite3_prepare_v2, sqlite3_reset, sqlite3_step, + sqlite3_stmt, SQLITE_BLOB, SQLITE_DONE, SQLITE_FLOAT, SQLITE_INTEGER, SQLITE_NULL, + SQLITE_OK, SQLITE_ROW, +}; +use lru::LruCache; +use napi::bindgen_prelude::*; +use napi_derive::napi; +use serde_json::Value as JsonValue; +use tokio::runtime::{Handle, Runtime}; + +/// Typed bind parameter passed from JavaScript. +/// +/// Replaces `Vec<JsonValue>` for statement parameters, avoiding 20x +/// serialization overhead for blob data. Instead of JSON arrays of numbers, +/// blobs are passed as `Buffer` (a single memcpy from JS heap to Rust). +/// +/// See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md M7. +#[napi(object)] +pub struct BindParam { + /// One of: "null", "int", "float", "text", "blob" + pub kind: String, + pub int_value: Option<i64>, + pub float_value: Option<f64>, + pub text_value: Option<String>, + pub blob_value: Option<Buffer>, +} + +/// KV key layout. Mirrors `rivetkit-typescript/packages/sqlite-vfs/src/kv.ts`. +pub mod kv; + +/// BARE serialization/deserialization for KV channel protocol messages. +/// Types generated from `engine/sdks/schemas/kv-channel-protocol/v1.bare`. +pub use rivet_kv_channel_protocol as protocol; + +/// WebSocket KV channel client with reconnection and request correlation. +pub mod channel; + +/// Custom SQLite VFS that maps VFS callbacks to KV operations via the channel.
+pub mod vfs; + +#[cfg(test)] +mod integration_tests; + +use channel::{KvChannel, KvChannelConfig, OpMetrics}; + +// MARK: SQL Metrics + +/// Per-SQL-statement-type timing for diagnosing napi + spawn_blocking overhead. +pub struct SqlMetrics { + pub execute: OpMetrics, + pub query: OpMetrics, + pub exec: OpMetrics, + pub spawn_blocking_wait: OpMetrics, + pub sqlite_step: OpMetrics, + pub stmt_cache: OpMetrics, + pub result_serialize: OpMetrics, +} + +impl SqlMetrics { + pub fn new() -> Self { + Self { + execute: OpMetrics::new(), + query: OpMetrics::new(), + exec: OpMetrics::new(), + spawn_blocking_wait: OpMetrics::new(), + sqlite_step: OpMetrics::new(), + stmt_cache: OpMetrics::new(), + result_serialize: OpMetrics::new(), + } + } +} + +// MARK: Statement Cache + +/// Default number of prepared statements to cache per database. +const STMT_CACHE_CAPACITY: usize = 128; + +/// Wrapper around a raw `sqlite3_stmt` pointer that finalizes on drop. +/// Used as the value type in the LRU cache so evicted entries are +/// automatically cleaned up. +struct CachedStmt(*mut sqlite3_stmt); + +unsafe impl Send for CachedStmt {} + +impl Drop for CachedStmt { + fn drop(&mut self) { + if !self.0.is_null() { + unsafe { + sqlite3_finalize(self.0); + } + } + } +} + +// MARK: Runtime + +/// Initialize a tracing subscriber for log output (stderr). +/// Uses RUST_LOG env var for filtering (defaults to warn). try_init() +/// is a no-op if a subscriber is already set by the host process. +fn init_tracing() { + let _ = tracing_subscriber::fmt() + .with_env_filter( + tracing_subscriber::EnvFilter::try_from_default_env() + .unwrap_or_else(|_| tracing_subscriber::EnvFilter::new("warn")), + ) + .try_init(); +} + +// MARK: JS Types + +/// Configuration for connecting to the KV channel endpoint. +#[napi(object)] +pub struct ConnectConfig { + pub url: String, + pub token: Option, + pub namespace: String, +} + +/// Result of an execute() call. 
+#[napi(object)] +pub struct ExecuteResult { + pub changes: i64, +} + +/// Result of a query() call. +#[napi(object)] +pub struct QueryResult { + pub columns: Vec<String>, + pub rows: Vec<Vec<JsonValue>>, +} + +/// A shared WebSocket connection to the KV channel server. +/// One per process, shared across all actors. +/// +/// The tokio runtime is owned here so it is dropped when the channel is dropped, +/// ensuring clean process exit after disconnect. The runtime MUST NOT be dropped +/// before all actors have closed their databases. +#[napi(js_name = "KvChannel")] +pub struct JsKvChannel { + rt: Runtime, + channel: Arc<KvChannel>, + sql_metrics: Arc<SqlMetrics>, +} + +/// An open SQLite database backed by KV storage via the channel. +/// +/// The `db` field is wrapped in `Arc<Mutex<Option<NativeDatabase>>>` so that +/// `close_database` can atomically take the handle while concurrent +/// `execute`/`query`/`exec` closures hold an Arc clone. Any operation +/// that finds `None` returns a "database is closed" error. This prevents +/// use-after-free if `close_database` runs between pointer extraction +/// and `spawn_blocking` task execution. +/// +/// Field order matters for drop safety: `stmt_cache` is declared before `db` +/// so cached statements are finalized before the database connection is closed. +#[napi(js_name = "NativeDatabase")] +pub struct JsNativeDatabase { + stmt_cache: Arc<Mutex<LruCache<String, CachedStmt>>>, + db: Arc<Mutex<Option<vfs::NativeDatabase>>>, + rt_handle: Handle, + channel: Arc<KvChannel>, + sql_metrics: Arc<SqlMetrics>, + actor_id: String, +} + +// MARK: Exported Functions + +/// Open the shared KV channel WebSocket connection. +/// +/// In production, token is the engine's admin_token (RIVET__AUTH__ADMIN_TOKEN). +/// In local dev, token is config.token (RIVET_TOKEN), optional in dev mode. +#[napi] +pub fn connect(config: ConnectConfig) -> JsKvChannel { + init_tracing(); + + let rt = Runtime::new().expect("failed to create tokio runtime"); + // Enter the runtime context so KvChannel::connect can call tokio::spawn.
+ let _guard = rt.enter(); + let channel = KvChannel::connect(KvChannelConfig { + url: config.url, + token: config.token, + namespace: config.namespace, + }); + JsKvChannel { + rt, + channel: Arc::new(channel), + sql_metrics: Arc::new(SqlMetrics::new()), + } +} + +/// Open a database for an actor. Sends ActorOpenRequest and waits for the +/// response so the server-side actor lock is held before any VFS I/O. +/// +/// VFS registration and sqlite3_open_v2 run inside `spawn_blocking` because +/// they trigger synchronous VFS callbacks that call `Handle::block_on()` for +/// KV I/O. This is safe from a blocking thread but would deadlock or freeze +/// the Node.js main thread if called via `rt.block_on()`. +#[napi(js_name = "openDatabase")] +pub async fn open_database( + channel: &JsKvChannel, + actor_id: String, +) -> Result<JsNativeDatabase> { + // Send ActorOpenRequest and wait for the response to ensure the + // server-side actor lock is acquired before VFS operations begin. + let ch = channel.channel.clone(); + let aid = actor_id.clone(); + ch.open_actor(&aid) + .await + .map_err(|e| Error::from_reason(e.to_string()))?; + + // Register VFS and open database inside spawn_blocking since VFS + // callbacks use Handle::block_on() which is safe from blocking threads + // but not from the Node.js main thread. + let rt_handle = channel.rt.handle().clone(); + let ch2 = channel.channel.clone(); + let aid2 = actor_id.clone(); + let rt_handle2 = rt_handle.clone(); + let native_db = channel + .rt + .spawn_blocking(move || { + let vfs_name = format!("kv-{aid2}"); + let kv_vfs = + vfs::KvVfs::register(&vfs_name, ch2, aid2.clone(), rt_handle2)?; + vfs::open_database(kv_vfs, &aid2) + }) + .await + .map_err(|e| Error::from_reason(e.to_string()))?
+ .map_err(Error::from_reason)?; + + Ok(JsNativeDatabase { + stmt_cache: Arc::new(Mutex::new(LruCache::new( + NonZeroUsize::new(STMT_CACHE_CAPACITY).unwrap(), + ))), + db: Arc::new(std::sync::Mutex::new(Some(native_db))), + rt_handle, + channel: channel.channel.clone(), + sql_metrics: channel.sql_metrics.clone(), + actor_id, + }) +} + +/// Execute a statement (INSERT, UPDATE, DELETE, CREATE, etc.). +/// +/// SQLite operations run on tokio's blocking thread pool via `spawn_blocking`. +/// VFS callbacks call `Handle::block_on()` from blocking threads (not tokio +/// worker threads), which is safe. The Node.js main thread is never blocked. +/// +/// Three threading approaches were considered: +/// +/// 1. **spawn_blocking** (chosen): napi `async fn` dispatches to tokio's +/// blocking thread pool (default cap 512). Simplest, idiomatic, tokio +/// manages the pool. Minor downside: thread may change between queries +/// (slightly worse cache locality). +/// +/// 2. **Dedicated thread per actor**: One `std::thread` per actor, receives +/// SQL via mpsc, sends results via oneshot. Best cache locality, but +/// requires manual lifecycle management and one idle thread per open actor. +/// +/// 3. **Channel + block-in-place**: Sync napi function, VFS callbacks send +/// requests via `std::sync::mpsc` and block on `recv()`. Does NOT solve +/// the core problem because the Node.js main thread is still blocked. +/// +/// See docs-internal/engine/NATIVE_SQLITE_REVIEW_FINDINGS.md Finding 1. 
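The dedicated-thread alternative (approach 2 above) can be sketched with std-only primitives. This is a minimal illustration of the message-passing shape only, not the actual sqlite-native wiring: `SqlRequest` and `spawn_sql_thread` are hypothetical names, and the SQLite prepare/step call is stubbed out.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type: one variant per SQL operation plus shutdown.
enum SqlRequest {
    Execute {
        sql: String,
        reply: mpsc::Sender<Result<u64, String>>,
    },
    Shutdown,
}

fn spawn_sql_thread() -> (mpsc::Sender<SqlRequest>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel::<SqlRequest>();
    let handle = thread::spawn(move || {
        // All SQLite calls would run on this single thread, so VFS callbacks
        // could block on KV I/O without ever stalling the JS main thread.
        for req in rx {
            match req {
                SqlRequest::Execute { sql, reply } => {
                    // Placeholder for sqlite3_prepare_v2 / sqlite3_step.
                    let _ = reply.send(Ok(sql.len() as u64));
                }
                SqlRequest::Shutdown => break,
            }
        }
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_sql_thread();
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(SqlRequest::Execute {
        sql: "SELECT 1".into(),
        reply: reply_tx,
    })
    .unwrap();
    // The caller blocks on recv(); in the chosen spawn_blocking design this
    // wait happens on a tokio blocking thread instead of a per-actor thread.
    let result = reply_rx.recv().unwrap();
    assert_eq!(result, Ok(8));
    tx.send(SqlRequest::Shutdown).unwrap();
    handle.join().unwrap();
}
```

The trade-off the doc comment notes applies here: every open actor pins one idle OS thread, which is why the spawn_blocking approach was chosen instead.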
+#[napi] +pub async fn execute( + db: &JsNativeDatabase, + sql: String, + params: Option<Vec<BindParam>>, +) -> Result<ExecuteResult> { + let outer_start = std::time::Instant::now(); + let db_arc = db.db.clone(); + let cache = db.stmt_cache.clone(); + let sql_metrics = db.sql_metrics.clone(); + let trace_sql = std::env::var("RIVET_TRACE_SQL").is_ok(); + + let result = db.rt_handle + .spawn_blocking(move || { + let blocking_wait = outer_start.elapsed(); + sql_metrics.spawn_blocking_wait.record(blocking_wait); + if trace_sql { + eprintln!("[sql-trace] execute spawn_wait={}us", blocking_wait.as_micros()); + } + let guard = db_arc.lock().unwrap(); + let native_db = guard + .as_ref() + .ok_or_else(|| Error::from_reason("database is closed"))?; + let db_ptr = native_db.as_ptr(); + + // Phase 1: Check cache for existing statement, then drop the lock. + // The mutex must not be held during sqlite3_step because VFS + // callbacks call block_on(WebSocket I/O). + // See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md L3. + let cached_stmt = { + let mut cache_guard = cache.lock().unwrap(); + pop_cached_stmt(&mut cache_guard, &sql) + }; + + let stmt = if let Some(s) = cached_stmt { + s + } else { + prepare_stmt(db_ptr, &sql)? + }; + + if let Some(ref p) = params { + if let Err(e) = bind_params(db_ptr, stmt, p) { + unsafe { sqlite3_finalize(stmt) }; + return Err(e); + } + } + + // Execute with no cache mutex held. VFS I/O happens here. + let step_start = std::time::Instant::now(); + let rc = unsafe { sqlite3_step(stmt) }; + let step_elapsed = step_start.elapsed(); + sql_metrics.sqlite_step.record(step_elapsed); + if trace_sql { + eprintln!("[sql-trace] execute sqlite_step={}us", step_elapsed.as_micros()); + } + if rc != SQLITE_DONE && rc != SQLITE_ROW { + let msg = unsafe { sqlite_errmsg(db_ptr) }; + unsafe { sqlite3_finalize(stmt) }; + return Err(Error::from_reason(msg)); + } + + let changes = unsafe { sqlite3_changes(db_ptr) } as i64; + + // Phase 2: Return statement to cache.
+ let cache_start = std::time::Instant::now(); + { + let mut cache_guard = cache.lock().unwrap(); + cache_guard.put(sql, CachedStmt(stmt)); + } + sql_metrics.stmt_cache.record(cache_start.elapsed()); + + Ok(ExecuteResult { changes }) + }) + .await + .map_err(|e| Error::from_reason(e.to_string()))??; + db.sql_metrics.execute.record(outer_start.elapsed()); + Ok(result) +} + +/// Run a query (SELECT, PRAGMA, etc.). +/// +/// See `execute` for threading model documentation. +#[napi] +pub async fn query( + db: &JsNativeDatabase, + sql: String, + params: Option<Vec<BindParam>>, +) -> Result<QueryResult> { + let outer_start = std::time::Instant::now(); + let db_arc = db.db.clone(); + let cache = db.stmt_cache.clone(); + let sql_metrics = db.sql_metrics.clone(); + + let result = db.rt_handle + .spawn_blocking(move || { + sql_metrics.spawn_blocking_wait.record(outer_start.elapsed()); + let guard = db_arc.lock().unwrap(); + let native_db = guard + .as_ref() + .ok_or_else(|| Error::from_reason("database is closed"))?; + let db_ptr = native_db.as_ptr(); + + // Phase 1: Check cache for existing statement, then drop the lock. + // The mutex must not be held during sqlite3_step because VFS + // callbacks call block_on(WebSocket I/O). + // See docs-internal/engine/NATIVE_SQLITE_REVIEW_FIXES.md L3. + let cached_stmt = { + let mut cache_guard = cache.lock().unwrap(); + pop_cached_stmt(&mut cache_guard, &sql) + }; + + let stmt = if let Some(s) = cached_stmt { + s + } else { + prepare_stmt(db_ptr, &sql)? + }; + + if let Some(ref p) = params { + if let Err(e) = bind_params(db_ptr, stmt, p) { + unsafe { sqlite3_finalize(stmt) }; + return Err(e); + } + } + + // Read column names. + let col_count = unsafe { sqlite3_column_count(stmt) }; + let columns: Vec<String> = (0..col_count) + .map(|i| unsafe { + let name = sqlite3_column_name(stmt, i); + if name.is_null() { + String::new() + } else { + CStr::from_ptr(name).to_string_lossy().into_owned() + } + }) + .collect(); + + // Read rows.
No cache mutex held; VFS I/O happens during step. + let step_start = std::time::Instant::now(); + let mut rows: Vec<Vec<JsonValue>> = Vec::new(); + loop { + let rc = unsafe { sqlite3_step(stmt) }; + if rc == SQLITE_DONE { + break; + } + if rc != SQLITE_ROW { + let msg = unsafe { sqlite_errmsg(db_ptr) }; + unsafe { sqlite3_finalize(stmt) }; + return Err(Error::from_reason(msg)); + } + + let row: Vec<JsonValue> = (0..col_count) + .map(|i| unsafe { extract_column_value(stmt, i) }) + .collect(); + rows.push(row); + } + sql_metrics.sqlite_step.record(step_start.elapsed()); + + // Phase 2: Return statement to cache. + let cache_start = std::time::Instant::now(); + { + let mut cache_guard = cache.lock().unwrap(); + cache_guard.put(sql, CachedStmt(stmt)); + } + sql_metrics.stmt_cache.record(cache_start.elapsed()); + + Ok(QueryResult { columns, rows }) + }) + .await + .map_err(|e| Error::from_reason(e.to_string()))??; + db.sql_metrics.query.record(outer_start.elapsed()); + Ok(result) +} + +/// Execute multi-statement SQL without parameters. +/// Uses sqlite3_prepare_v2 in a loop with tail pointer tracking to handle +/// multiple statements (e.g., migrations). Returns columns and rows from +/// the last statement that produced results. +/// +/// See `execute` for threading model documentation.
+#[napi] +pub async fn exec(db: &JsNativeDatabase, sql: String) -> Result<QueryResult> { + let outer_start = std::time::Instant::now(); + let db_arc = db.db.clone(); + let sql_metrics = db.sql_metrics.clone(); + + let result = db.rt_handle + .spawn_blocking(move || { + sql_metrics.spawn_blocking_wait.record(outer_start.elapsed()); + let guard = db_arc.lock().unwrap(); + let native_db = guard + .as_ref() + .ok_or_else(|| Error::from_reason("database is closed"))?; + let db_ptr = native_db.as_ptr(); + + let c_sql = + CString::new(sql.as_str()).map_err(|e| Error::from_reason(e.to_string()))?; + let sql_bytes = c_sql.to_bytes(); + let sql_ptr = c_sql.as_ptr(); + let sql_end = unsafe { sql_ptr.add(sql_bytes.len()) }; + + let mut tail: *const c_char = sql_ptr; + let mut all_rows: Vec<Vec<JsonValue>> = Vec::new(); + let mut last_columns: Vec<String> = Vec::new(); + + while tail < sql_end && !tail.is_null() { + let mut stmt: *mut sqlite3_stmt = ptr::null_mut(); + let mut next_tail: *const c_char = ptr::null(); + let remaining = (sql_end as usize - tail as usize) as c_int; + + let rc = unsafe { + sqlite3_prepare_v2(db_ptr, tail, remaining, &mut stmt, &mut next_tail) + }; + if rc != SQLITE_OK { + return Err(Error::from_reason(unsafe { sqlite_errmsg(db_ptr) })); + } + + // No more statements.
+ if stmt.is_null() { + break; + } + + let col_count = unsafe { sqlite3_column_count(stmt) }; + if col_count > 0 { + last_columns = (0..col_count) + .map(|i| unsafe { + let name = sqlite3_column_name(stmt, i); + if name.is_null() { + String::new() + } else { + CStr::from_ptr(name).to_string_lossy().into_owned() + } + }) + .collect(); + } + + loop { + let rc = unsafe { sqlite3_step(stmt) }; + if rc == SQLITE_DONE { + break; + } + if rc != SQLITE_ROW { + let msg = unsafe { sqlite_errmsg(db_ptr) }; + unsafe { sqlite3_finalize(stmt) }; + return Err(Error::from_reason(msg)); + } + let row: Vec<JsonValue> = (0..col_count) + .map(|i| unsafe { extract_column_value(stmt, i) }) + .collect(); + all_rows.push(row); + } + + unsafe { sqlite3_finalize(stmt) }; + tail = next_tail; + } + + Ok(QueryResult { + columns: last_columns, + rows: all_rows, + }) + }) + .await + .map_err(|e| Error::from_reason(e.to_string()))??; + db.sql_metrics.exec.record(outer_start.elapsed()); + Ok(result) +} + +/// Close the database connection and release the actor lock. +/// Sends ActorCloseRequest to the server. +/// +/// Locks the db mutex and takes the Option, so concurrent/subsequent +/// execute/query/exec operations see None and return "database is closed". +#[napi(js_name = "closeDatabase")] +pub async fn close_database(db: &JsNativeDatabase) -> Result<()> { + // Finalize all cached statements before closing the database. + db.stmt_cache.lock().unwrap().clear(); + + // Lock the mutex and take the database handle. Any concurrent + // spawn_blocking closures that haven't acquired the lock yet will + // find None and return an error instead of using a freed pointer. + { + let mut guard = db.db.lock().unwrap(); + let _ = guard.take(); + } + + // Send ActorCloseRequest to release the server-side lock. + let ch = db.channel.clone(); + let aid = db.actor_id.clone(); + ch.close_actor(&aid) + .await + .map_err(|e| Error::from_reason(e.to_string()))?; + + Ok(()) +} + +/// Close the KV channel WebSocket connection.
+#[napi] +pub async fn disconnect(channel: &JsKvChannel) -> Result<()> { + channel.channel.disconnect().await; + Ok(()) +} + +/// Per-operation metrics snapshot. +#[napi(object)] +pub struct OpMetricsSnapshot { + pub count: i64, + pub total_duration_us: i64, + pub min_duration_us: i64, + pub max_duration_us: i64, + pub avg_duration_us: f64, +} + +/// All KV channel metrics (Layer 1). +#[napi(object)] +pub struct KvChannelMetricsSnapshot { + pub get: OpMetricsSnapshot, + pub put: OpMetricsSnapshot, + pub delete: OpMetricsSnapshot, + pub delete_range: OpMetricsSnapshot, + pub actor_open: OpMetricsSnapshot, + pub actor_close: OpMetricsSnapshot, + pub keys_total: i64, + pub requests_total: i64, + pub batch_atomic_commits: i64, + pub batch_atomic_pages: i64, +} + +/// SQL execution metrics (Layer 0). +#[napi(object)] +pub struct SqlMetricsSnapshot { + pub execute: OpMetricsSnapshot, + pub query: OpMetricsSnapshot, + pub exec: OpMetricsSnapshot, + pub spawn_blocking_wait: OpMetricsSnapshot, + pub sqlite_step: OpMetricsSnapshot, + pub stmt_cache: OpMetricsSnapshot, + pub result_serialize: OpMetricsSnapshot, +} + +/// VFS callback metrics. +#[napi(object)] +pub struct VfsMetricsSnapshot { + pub xread_count: i64, + pub xread_us: i64, + pub xwrite_count: i64, + pub xwrite_us: i64, + pub xwrite_buffered_count: i64, + pub xsync_count: i64, + pub xsync_us: i64, + pub commit_atomic_count: i64, + pub commit_atomic_us: i64, + pub commit_atomic_pages: i64, +} + +/// All metrics across all layers. 
+#[napi(object)] +pub struct AllMetricsSnapshot { + pub kv_channel: KvChannelMetricsSnapshot, + pub sql: SqlMetricsSnapshot, + pub vfs: VfsMetricsSnapshot, +} + +fn snapshot_op(op: &channel::OpMetrics) -> OpMetricsSnapshot { + let (count, total, min, max) = op.snapshot(); + OpMetricsSnapshot { + count: count as i64, + total_duration_us: total as i64, + min_duration_us: min as i64, + max_duration_us: max as i64, + avg_duration_us: if count > 0 { total as f64 / count as f64 } else { 0.0 }, + } +} + +/// Get a snapshot of all metrics across all layers. +#[napi(js_name = "getMetrics")] +pub fn get_metrics(channel: &JsKvChannel) -> AllMetricsSnapshot { + let m = channel.channel.metrics(); + let s = &*channel.sql_metrics; + + AllMetricsSnapshot { + kv_channel: KvChannelMetricsSnapshot { + get: snapshot_op(&m.get), + put: snapshot_op(&m.put), + delete: snapshot_op(&m.delete), + delete_range: snapshot_op(&m.delete_range), + actor_open: snapshot_op(&m.actor_open), + actor_close: snapshot_op(&m.actor_close), + keys_total: m.keys_total.load(std::sync::atomic::Ordering::Relaxed) as i64, + requests_total: m.requests_total.load(std::sync::atomic::Ordering::Relaxed) as i64, + batch_atomic_commits: m.batch_atomic_commits.load(std::sync::atomic::Ordering::Relaxed) as i64, + batch_atomic_pages: m.batch_atomic_pages.load(std::sync::atomic::Ordering::Relaxed) as i64, + }, + sql: SqlMetricsSnapshot { + execute: snapshot_op(&s.execute), + query: snapshot_op(&s.query), + exec: snapshot_op(&s.exec), + spawn_blocking_wait: snapshot_op(&s.spawn_blocking_wait), + sqlite_step: snapshot_op(&s.sqlite_step), + stmt_cache: snapshot_op(&s.stmt_cache), + result_serialize: snapshot_op(&s.result_serialize), + }, + vfs: VfsMetricsSnapshot { + xread_count: 0, + xread_us: 0, + xwrite_count: 0, + xwrite_us: 0, + xwrite_buffered_count: 0, + xsync_count: 0, + xsync_us: 0, + commit_atomic_count: 0, + commit_atomic_us: 0, + commit_atomic_pages: 0, + }, + } +} + +// MARK: Internal Helpers + +/// Pop a 
prepared statement from the cache if available. +/// +/// Uses `pop` to remove the statement from the cache during use, so the +/// mutex can be released before `sqlite3_step` triggers VFS I/O. The +/// caller must return the statement to the cache via `put` after execution. +fn pop_cached_stmt( + cache: &mut LruCache<String, CachedStmt>, + sql: &str, +) -> Option<*mut sqlite3_stmt> { + cache.pop(sql).map(|cs| { + let stmt = cs.0; + std::mem::forget(cs); // Prevent Drop from calling sqlite3_finalize. + unsafe { + sqlite3_reset(stmt); + sqlite3_clear_bindings(stmt); + } + stmt + }) +} + +/// Prepare a new statement via sqlite3_prepare_v2. +fn prepare_stmt(db_ptr: *mut sqlite3, sql: &str) -> Result<*mut sqlite3_stmt> { + let c_sql = CString::new(sql).map_err(|e| Error::from_reason(e.to_string()))?; + let mut stmt: *mut sqlite3_stmt = ptr::null_mut(); + let rc = + unsafe { sqlite3_prepare_v2(db_ptr, c_sql.as_ptr(), -1, &mut stmt, ptr::null_mut()) }; + if rc != SQLITE_OK { + return Err(Error::from_reason(unsafe { sqlite_errmsg(db_ptr) })); + } + Ok(stmt) +} + +/// Get the last SQLite error message. +unsafe fn sqlite_errmsg(db: *mut sqlite3) -> String { + let msg = sqlite3_errmsg(db); + if msg.is_null() { + "unknown SQLite error".into() + } else { + CStr::from_ptr(msg).to_string_lossy().into_owned() + } +} + +/// SQLITE_TRANSIENT tells SQLite to immediately copy bound parameter data. +fn sqlite_transient() -> Option<unsafe extern "C" fn(*mut c_void)> { + Some(unsafe { std::mem::transmute(-1isize) }) +} + +/// SQLite column type constant for TEXT. +/// Defined locally because libsqlite3-sys exports vary between SQLITE3_TEXT and SQLITE_TEXT. +const SQLITE_TYPE_TEXT: c_int = 3; + +/// Bind typed parameters to a prepared statement.
+fn bind_params( + db: *mut sqlite3, + stmt: *mut sqlite3_stmt, + params: &[BindParam], +) -> Result<()> { + for (i, param) in params.iter().enumerate() { + let idx = (i + 1) as c_int; + let rc = match param.kind.as_str() { + "null" => unsafe { sqlite3_bind_null(stmt, idx) }, + "int" => { + let v = param.int_value.ok_or_else(|| { + Error::from_reason(format!("missing int_value at param {idx}")) + })?; + unsafe { sqlite3_bind_int64(stmt, idx, v) } + } + "float" => { + let v = param.float_value.ok_or_else(|| { + Error::from_reason(format!("missing float_value at param {idx}")) + })?; + unsafe { sqlite3_bind_double(stmt, idx, v) } + } + "text" => { + let s = param.text_value.as_ref().ok_or_else(|| { + Error::from_reason(format!("missing text_value at param {idx}")) + })?; + let c_str = CString::new(s.as_str()) + .map_err(|e| Error::from_reason(e.to_string()))?; + unsafe { + sqlite3_bind_text( + stmt, + idx, + c_str.as_ptr(), + s.len() as c_int, + sqlite_transient(), + ) + } + } + "blob" => { + let buf = param.blob_value.as_ref().ok_or_else(|| { + Error::from_reason(format!("missing blob_value at param {idx}")) + })?; + unsafe { + sqlite3_bind_blob( + stmt, + idx, + buf.as_ptr() as *const c_void, + buf.len() as c_int, + sqlite_transient(), + ) + } + } + other => { + return Err(Error::from_reason(format!( + "unsupported bind param kind '{other}' at param {idx}" + ))); + } + }; + if rc != SQLITE_OK { + let msg = unsafe { sqlite_errmsg(db) }; + return Err(Error::from_reason(format!( + "bind error at param {idx}: {msg}" + ))); + } + } + Ok(()) +} + +/// Extract a column value from the current row as a JSON value. 
+unsafe fn extract_column_value(stmt: *mut sqlite3_stmt, col: c_int) -> JsonValue { + match sqlite3_column_type(stmt, col) { + SQLITE_NULL => JsonValue::Null, + SQLITE_INTEGER => { + let v = sqlite3_column_int64(stmt, col); + JsonValue::Number(v.into()) + } + SQLITE_FLOAT => { + let v = sqlite3_column_double(stmt, col); + serde_json::Number::from_f64(v) + .map(JsonValue::Number) + .unwrap_or(JsonValue::Null) + } + SQLITE_TYPE_TEXT => { + let ptr = sqlite3_column_text(stmt, col); + if ptr.is_null() { + JsonValue::Null + } else { + let s = CStr::from_ptr(ptr as *const c_char) + .to_string_lossy() + .into_owned(); + JsonValue::String(s) + } + } + SQLITE_BLOB => { + let ptr = sqlite3_column_blob(stmt, col) as *const u8; + let len = sqlite3_column_bytes(stmt, col) as usize; + if ptr.is_null() || len == 0 { + JsonValue::Array(vec![]) + } else { + let bytes = slice::from_raw_parts(ptr, len); + JsonValue::Array( + bytes + .iter() + .map(|&b| JsonValue::Number(b.into())) + .collect(), + ) + } + } + _ => JsonValue::Null, + } +} + +#[cfg(test)] +mod stmt_cache_tests { + use super::*; + + use libsqlite3_sys::{sqlite3_close, sqlite3_exec, sqlite3_open}; + + fn open_memory_db() -> *mut sqlite3 { + let mut db: *mut sqlite3 = ptr::null_mut(); + let path = CString::new(":memory:").unwrap(); + let rc = unsafe { sqlite3_open(path.as_ptr(), &mut db) }; + assert_eq!(rc, SQLITE_OK); + db + } + + #[test] + fn test_stmt_cache_pop_and_put() { + let db = open_memory_db(); + let mut cache = LruCache::new(NonZeroUsize::new(STMT_CACHE_CAPACITY).unwrap()); + + // Create a table so SELECT has something to prepare against. + unsafe { + let sql = CString::new("CREATE TABLE cache_test (id INTEGER, value TEXT)").unwrap(); + sqlite3_exec(db, sql.as_ptr(), None, ptr::null_mut(), ptr::null_mut()); + } + + let select_sql = "SELECT id, value FROM cache_test WHERE id = ?"; + + // First lookup - cache miss, prepare manually. 
+ let popped = pop_cached_stmt(&mut cache, select_sql); + assert!(popped.is_none(), "first call should not be cached"); + let stmt1 = prepare_stmt(db, select_sql).unwrap(); + cache.put(select_sql.to_string(), CachedStmt(stmt1)); + + // Second lookup - cache hit via pop (removes from cache). + let popped = pop_cached_stmt(&mut cache, select_sql); + assert!(popped.is_some(), "second call should be cached"); + let stmt2 = popped.unwrap(); + assert_eq!(stmt1, stmt2, "cached statement pointer should match"); + // After pop, cache is empty for this key. + assert!(pop_cached_stmt(&mut cache, select_sql).is_none()); + // Put it back. + cache.put(select_sql.to_string(), CachedStmt(stmt2)); + + // Third lookup - still cached after put. + let popped = pop_cached_stmt(&mut cache, select_sql); + assert!(popped.is_some(), "third call should still be cached"); + let stmt3 = popped.unwrap(); + assert_eq!(stmt1, stmt3); + // Return to cache for cleanup. + cache.put(select_sql.to_string(), CachedStmt(stmt3)); + + // Different SQL - cache miss. + let other_sql = "SELECT id FROM cache_test"; + let popped = pop_cached_stmt(&mut cache, other_sql); + assert!(popped.is_none(), "different SQL should not be cached"); + + cache.clear(); + unsafe { sqlite3_close(db) }; + } + + #[test] + fn test_stmt_cache_eviction() { + let db = open_memory_db(); + // Tiny cache of size 2 to force eviction. + let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap()); + + // Fill cache with 2 statements. + let sql1 = "SELECT 1"; + let s1 = prepare_stmt(db, sql1).unwrap(); + cache.put(sql1.to_string(), CachedStmt(s1)); + + let sql2 = "SELECT 2"; + let s2 = prepare_stmt(db, sql2).unwrap(); + cache.put(sql2.to_string(), CachedStmt(s2)); + + assert_eq!(cache.len(), 2); + + // Third statement evicts LRU (sql1). The evicted CachedStmt's + // Drop impl calls sqlite3_finalize automatically. 
+ let sql3 = "SELECT 3"; + let s3 = prepare_stmt(db, sql3).unwrap(); + cache.put(sql3.to_string(), CachedStmt(s3)); + + assert_eq!(cache.len(), 2); + + // sql1 should be evicted. + let popped = pop_cached_stmt(&mut cache, sql1); + assert!(popped.is_none(), "evicted statement should not be cached"); + // sql2 should still be cached. + let popped = pop_cached_stmt(&mut cache, sql2); + assert!(popped.is_some(), "sql2 should still be cached"); + // Return sql2 to cache for cleanup. + cache.put(sql2.to_string(), CachedStmt(popped.unwrap())); + + cache.clear(); + unsafe { sqlite3_close(db) }; + } +} diff --git a/rivetkit-typescript/packages/sqlite-native/src/vfs.rs b/rivetkit-typescript/packages/sqlite-native/src/vfs.rs new file mode 100644 index 0000000000..5853a93900 --- /dev/null +++ b/rivetkit-typescript/packages/sqlite-native/src/vfs.rs @@ -0,0 +1,1388 @@ +//! Custom SQLite VFS backed by KV operations over the KV channel. +//! +//! Keep this file behaviorally aligned with +//! `rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts`. + +use std::collections::{BTreeMap, HashMap}; +use std::ffi::{c_char, c_int, c_void, CStr, CString}; +use std::ptr; +use std::slice; +use std::sync::atomic::{AtomicU64, Ordering}; +use std::sync::Arc; + +use libsqlite3_sys::*; +use tokio::runtime::Handle; + +use crate::channel::KvChannel; +use crate::kv; +use crate::protocol::*; + +// MARK: Panic Guard + +fn panic_message(payload: &Box<dyn std::any::Any + Send>) -> String { + if let Some(s) = payload.downcast_ref::<&str>() { + s.to_string() + } else if let Some(s) = payload.downcast_ref::<String>() { + s.clone() + } else { + "unknown panic".to_string() + } +} + +macro_rules! vfs_catch_unwind { + ($err_val:expr, $body:expr) => { + match std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| $body)) { + Ok(result) => result, + Err(panic) => { + tracing::error!( + message = panic_message(&panic), + "vfs callback panicked" + ); + $err_val + } + } + }; +} + +// MARK: Constants + +/// File metadata version.
Must match CURRENT_VERSION in the WASM VFS schema. +const META_VERSION: u16 = 1; + +/// Encoded metadata size. This is 2 bytes of version plus 8 bytes of size. +const META_ENCODED_SIZE: usize = 10; + +/// Maximum pathname length reported to SQLite. +const MAX_PATHNAME: c_int = 64; + +/// Maximum number of keys accepted by a single KV put or delete request. +const KV_MAX_BATCH_KEYS: usize = 128; + +/// First 108 bytes of a valid empty page-1 SQLite database. +/// +/// This must match `HEADER_PREFIX` in +/// `rivetkit-typescript/packages/sqlite-vfs/src/generated/empty-db-page.ts`. +const EMPTY_DB_PAGE_HEADER_PREFIX: [u8; 108] = [ + 83, 81, 76, 105, 116, 101, 32, 102, 111, 114, 109, 97, 116, 32, 51, 0, 16, 0, + 1, 1, 0, 64, 32, 32, 0, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 3, 0, 46, 138, 17, 13, 0, 0, 0, 0, 16, 0, 0, +]; + +fn empty_db_page() -> Vec<u8> { + let mut page = vec![0u8; kv::CHUNK_SIZE]; + page[..EMPTY_DB_PAGE_HEADER_PREFIX.len()].copy_from_slice(&EMPTY_DB_PAGE_HEADER_PREFIX); + page +} + +// MARK: Metadata Encoding + +pub fn encode_file_meta(size: i64) -> Vec<u8> { + let mut buf = Vec::with_capacity(META_ENCODED_SIZE); + buf.extend_from_slice(&META_VERSION.to_le_bytes()); + buf.extend_from_slice(&(size as u64).to_le_bytes()); + buf +} + +pub fn decode_file_meta(data: &[u8]) -> Option<i64> { + if data.len() < META_ENCODED_SIZE { + return None; + } + let version_bytes: [u8; 2] = data[0..2].try_into().ok()?; + if u16::from_le_bytes(version_bytes) != META_VERSION { + return None; + } + let size_bytes: [u8; 8] = data[2..10].try_into().ok()?; + let size = u64::from_le_bytes(size_bytes); + if size > i64::MAX as u64 { + return None; + } + Some(size as i64) +} + +fn is_valid_file_size(size: i64) -> bool { + size >= 0 && (size as u64) <= kv::MAX_FILE_SIZE +} + +// MARK: VFS Metrics + +/// Per-VFS-callback
operation metrics for diagnosing native vs WASM performance. +pub struct VfsMetrics { + pub xread_count: AtomicU64, + pub xread_us: AtomicU64, + pub xwrite_count: AtomicU64, + pub xwrite_us: AtomicU64, + pub xwrite_buffered_count: AtomicU64, + pub xsync_count: AtomicU64, + pub xsync_us: AtomicU64, + pub commit_atomic_count: AtomicU64, + pub commit_atomic_us: AtomicU64, + pub commit_atomic_pages: AtomicU64, +} + +impl VfsMetrics { + pub fn new() -> Self { + Self { + xread_count: AtomicU64::new(0), + xread_us: AtomicU64::new(0), + xwrite_count: AtomicU64::new(0), + xwrite_us: AtomicU64::new(0), + xwrite_buffered_count: AtomicU64::new(0), + xsync_count: AtomicU64::new(0), + xsync_us: AtomicU64::new(0), + commit_atomic_count: AtomicU64::new(0), + commit_atomic_us: AtomicU64::new(0), + commit_atomic_pages: AtomicU64::new(0), + } + } +} + +// MARK: VFS Context + +struct VfsContext { + channel: Arc<KvChannel>, + actor_id: String, + main_file_name: String, + rt_handle: Handle, + io_methods: Box<sqlite3_io_methods>, + vfs_metrics: Arc<VfsMetrics>, +} + +impl VfsContext { + fn resolve_file_tag(&self, path: &str) -> Option<u8> { + if path == self.main_file_name { + return Some(kv::FILE_TAG_MAIN); + } + + if let Some(suffix) = path.strip_prefix(&self.main_file_name) { + match suffix { + "-journal" => Some(kv::FILE_TAG_JOURNAL), + "-wal" => Some(kv::FILE_TAG_WAL), + "-shm" => Some(kv::FILE_TAG_SHM), + _ => None, + } + } else { + None + } + } + + fn send_sync(&self, data: RequestData) -> Result<ResponseData, String> { + let op_name = match &data { + RequestData::KvGetRequest(r) => format!("get({}keys)", r.keys.len()), + RequestData::KvPutRequest(r) => format!("put({}keys)", r.keys.len()), + RequestData::KvDeleteRequest(r) => format!("del({}keys)", r.keys.len()), + RequestData::KvDeleteRangeRequest(_) => "delRange".to_string(), + RequestData::ActorOpenRequest => "open".to_string(), + RequestData::ActorCloseRequest => "close".to_string(), + }; + let start = std::time::Instant::now(); + let result = self + .rt_handle +
.block_on(self.channel.send_request(&self.actor_id, data)) + .map_err(|err| err.to_string()); + let elapsed = start.elapsed(); + if std::env::var("RIVET_TRACE_SQL").is_ok() { + eprintln!("[sql-trace] kv_roundtrip op={} duration={}us", op_name, elapsed.as_micros()); + } + tracing::debug!( + op = %op_name, + duration_us = elapsed.as_micros() as u64, + "kv round-trip" + ); + result + } + + fn kv_get(&self, keys: Vec<Vec<u8>>) -> Result<KvGetResponse, String> { + match self.send_sync(RequestData::KvGetRequest(KvGetRequest { keys }))? { + ResponseData::KvGetResponse(resp) => Ok(resp), + other => Err(format!("expected KvGetResponse, got {other:?}")), + } + } + + fn kv_put(&self, keys: Vec<Vec<u8>>, values: Vec<Vec<u8>>) -> Result<(), String> { + match self.send_sync(RequestData::KvPutRequest(KvPutRequest { keys, values }))? { + ResponseData::KvPutResponse => Ok(()), + other => Err(format!("expected KvPutResponse, got {other:?}")), + } + } + + fn kv_delete(&self, keys: Vec<Vec<u8>>) -> Result<(), String> { + match self.send_sync(RequestData::KvDeleteRequest(KvDeleteRequest { keys }))? { + ResponseData::KvDeleteResponse => Ok(()), + other => Err(format!("expected KvDeleteResponse, got {other:?}")), + } + } + + fn kv_delete_range(&self, start: Vec<u8>, end: Vec<u8>) -> Result<(), String> { + match self.send_sync(RequestData::KvDeleteRangeRequest(KvDeleteRangeRequest { + start, + end, + }))?
{ + ResponseData::KvDeleteResponse => Ok(()), + other => Err(format!("expected KvDeleteResponse, got {other:?}")), + } + } + + fn delete_file(&self, file_tag: u8) -> Result<(), String> { + let meta_key = kv::get_meta_key(file_tag); + let resp = self.kv_get(vec![meta_key.to_vec()])?; + let value_map = build_value_map(&resp); + if !value_map.contains_key(meta_key.as_slice()) { + return Ok(()); + } + + self.kv_delete_range( + kv::get_chunk_key(file_tag, 0).to_vec(), + kv::get_chunk_key_range_end(file_tag).to_vec(), + )?; + self.kv_delete(vec![meta_key.to_vec()]) + } +} + +// MARK: File State + +struct KvFileState { + batch_mode: bool, + dirty_buffer: BTreeMap<u32, Vec<u8>>, + saved_file_size: i64, + /// Read cache: maps chunk keys to their data. Populated on KV gets, + /// updated on writes, cleared on truncate/delete. This avoids + /// redundant KV round-trips for pages SQLite reads multiple times. + read_cache: HashMap<Vec<u8>, Vec<u8>>, +} + +impl KvFileState { + fn new() -> Self { + Self { + batch_mode: false, + dirty_buffer: BTreeMap::new(), + saved_file_size: 0, + read_cache: HashMap::new(), + } + } +} + +#[repr(C)] +struct KvFile { + base: sqlite3_file, + ctx: *const VfsContext, + state: *mut KvFileState, + file_tag: u8, + meta_key: [u8; 4], + size: i64, + meta_dirty: bool, + flags: c_int, +} + +// MARK: Helpers + +unsafe fn get_file(p: *mut sqlite3_file) -> &'static mut KvFile { + &mut *(p as *mut KvFile) +} + +unsafe fn get_file_state(state: *mut KvFileState) -> &'static mut KvFileState { + &mut *state +} + +unsafe fn free_file_state(file: &mut KvFile) { + if !file.state.is_null() { + drop(Box::from_raw(file.state)); + file.state = ptr::null_mut(); + } +} + +unsafe fn get_vfs_ctx(p: *mut sqlite3_vfs) -> &'static VfsContext { + &*((*p).pAppData as *const VfsContext) +} + +fn build_value_map(resp: &KvGetResponse) -> HashMap<&[u8], &[u8]> { + resp.keys + .iter() + .zip(resp.values.iter()) + .filter(|(_, value)| !value.is_empty()) + .map(|(key, value)| (key.as_slice(), value.as_slice()))
+ .collect() +} + +fn split_entries(entries: Vec<(Vec<u8>, Vec<u8>)>) -> (Vec<Vec<u8>>, Vec<Vec<u8>>) { + let mut keys = Vec::with_capacity(entries.len()); + let mut values = Vec::with_capacity(entries.len()); + for (key, value) in entries { + keys.push(key); + values.push(value); + } + (keys, values) +} + +// MARK: IO Callbacks + +unsafe extern "C" fn kv_io_close(p_file: *mut sqlite3_file) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, { + let file = get_file(p_file); + let ctx = &*file.ctx; + + let result = if file.flags & SQLITE_OPEN_DELETEONCLOSE != 0 { + ctx.delete_file(file.file_tag) + } else if file.meta_dirty { + ctx.kv_put( + vec![file.meta_key.to_vec()], + vec![encode_file_meta(file.size)], + ) + } else { + Ok(()) + }; + + free_file_state(file); + + match result { + Ok(()) => SQLITE_OK, + Err(err) => { + tracing::error!(%err, file_tag = file.file_tag, "failed to close file"); + SQLITE_IOERR + } + } + }) +} + +unsafe extern "C" fn kv_io_read( + p_file: *mut sqlite3_file, + p_buf: *mut c_void, + i_amt: c_int, + i_offset: sqlite3_int64, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_READ, { + if i_amt <= 0 { + return SQLITE_OK; + } + + let file = get_file(p_file); + let state = get_file_state(file.state); + let ctx = &*file.ctx; + let read_start = std::time::Instant::now(); + ctx.vfs_metrics.xread_count.fetch_add(1, Ordering::Relaxed); + let requested_length = i_amt as usize; + let buf = slice::from_raw_parts_mut(p_buf as *mut u8, requested_length); + + if i_offset < 0 { + return SQLITE_IOERR_READ; + } + + let offset = i_offset as usize; + let file_size = file.size as usize; + if offset >= file_size { + buf.fill(0); + return SQLITE_IOERR_SHORT_READ; + } + + let start_chunk = offset / kv::CHUNK_SIZE; + let end_chunk = (offset + requested_length - 1) / kv::CHUNK_SIZE; + + let mut chunk_keys_to_fetch = Vec::new(); + let mut buffered_chunks: HashMap<usize, Vec<u8>> = HashMap::new(); + for chunk_idx in start_chunk..=end_chunk { + // Check dirty buffer first (batch mode writes).
+ if state.batch_mode { + if let Some(buffered) = state.dirty_buffer.get(&(chunk_idx as u32)) { + buffered_chunks.insert(chunk_idx, buffered.clone()); + continue; + } + } + // Check read cache. + let key = kv::get_chunk_key(file.file_tag, chunk_idx as u32); + if let Some(cached) = state.read_cache.get(key.as_slice()) { + buffered_chunks.insert(chunk_idx, cached.clone()); + continue; + } + chunk_keys_to_fetch.push(key.to_vec()); + } + + let resp = if chunk_keys_to_fetch.is_empty() { + KvGetResponse { + keys: Vec::new(), + values: Vec::new(), + } + } else { + match ctx.kv_get(chunk_keys_to_fetch) { + Ok(resp) => { + // Populate read cache with fetched values. + for (key, value) in resp.keys.iter().zip(resp.values.iter()) { + if !value.is_empty() { + state.read_cache.insert(key.clone(), value.clone()); + } + } + resp + } + Err(_) => return SQLITE_IOERR_READ, + } + }; + let value_map = build_value_map(&resp); + + for chunk_idx in start_chunk..=end_chunk { + let chunk_data: Option<&[u8]> = buffered_chunks.get(&chunk_idx).map(|v| v.as_slice()).or_else(|| { + let key = kv::get_chunk_key(file.file_tag, chunk_idx as u32); + value_map.get(key.as_slice()).copied() + }); + let chunk_offset = chunk_idx * kv::CHUNK_SIZE; + let read_start = offset.saturating_sub(chunk_offset); + let read_end = std::cmp::min( + kv::CHUNK_SIZE, + offset + requested_length - chunk_offset, + ); + let dest_start = chunk_offset + read_start - offset; + + if let Some(chunk_data) = chunk_data { + let source_end = std::cmp::min(read_end, chunk_data.len()); + if source_end > read_start { + let dest_end = dest_start + (source_end - read_start); + buf[dest_start..dest_end] + .copy_from_slice(&chunk_data[read_start..source_end]); + } + if source_end < read_end { + let zero_start = dest_start + (source_end - read_start); + let zero_end = dest_start + (read_end - read_start); + buf[zero_start..zero_end].fill(0); + } + } else { + let dest_end = dest_start + (read_end - read_start); + 
buf[dest_start..dest_end].fill(0); + } + } + + let actual_bytes = std::cmp::min(requested_length, file_size - offset); + if actual_bytes < requested_length { + buf[actual_bytes..].fill(0); + ctx.vfs_metrics.xread_us.fetch_add(read_start.elapsed().as_micros() as u64, Ordering::Relaxed); + return SQLITE_IOERR_SHORT_READ; + } + + ctx.vfs_metrics.xread_us.fetch_add(read_start.elapsed().as_micros() as u64, Ordering::Relaxed); + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_write( + p_file: *mut sqlite3_file, + p_buf: *const c_void, + i_amt: c_int, + i_offset: sqlite3_int64, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_WRITE, { + if i_amt <= 0 { + return SQLITE_OK; + } + + let file = get_file(p_file); + let ctx = &*file.ctx; + let write_start = std::time::Instant::now(); + ctx.vfs_metrics.xwrite_count.fetch_add(1, Ordering::Relaxed); + let data = slice::from_raw_parts(p_buf as *const u8, i_amt as usize); + + if i_offset < 0 { + return SQLITE_IOERR_WRITE; + } + + let offset = i_offset as usize; + let write_length = i_amt as usize; + let write_end_offset = match offset.checked_add(write_length) { + Some(end) => end, + None => return SQLITE_IOERR_WRITE, + }; + if write_end_offset as u64 > kv::MAX_FILE_SIZE { + return SQLITE_IOERR_WRITE; + } + + let start_chunk = offset / kv::CHUNK_SIZE; + let end_chunk = (offset + write_length - 1) / kv::CHUNK_SIZE; + + { + let state = get_file_state(file.state); + if state.batch_mode { + for chunk_idx in start_chunk..=end_chunk { + let chunk_offset = chunk_idx * kv::CHUNK_SIZE; + let source_start = std::cmp::max(0isize, chunk_offset as isize - offset as isize) + as usize; + let source_end = std::cmp::min( + write_length, + chunk_offset + kv::CHUNK_SIZE - offset, + ); + state + .dirty_buffer + .insert(chunk_idx as u32, data[source_start..source_end].to_vec()); + } + + let new_size = std::cmp::max(file.size, write_end_offset as i64); + if new_size != file.size { + file.size = new_size; + file.meta_dirty = true; + } + + 
ctx.vfs_metrics.xwrite_buffered_count.fetch_add(1, Ordering::Relaxed); + ctx.vfs_metrics.xwrite_us.fetch_add(write_start.elapsed().as_micros() as u64, Ordering::Relaxed); + return SQLITE_OK; + } + } + + struct WritePlan { + chunk_key: Vec<u8>, + chunk_offset: usize, + write_start: usize, + write_end: usize, + existing_chunk_index: Option<usize>, + } + + let mut plans = Vec::new(); + let mut chunk_keys_to_fetch = Vec::new(); + for chunk_idx in start_chunk..=end_chunk { + let chunk_offset = chunk_idx * kv::CHUNK_SIZE; + let write_start = offset.saturating_sub(chunk_offset); + let write_end = std::cmp::min( + kv::CHUNK_SIZE, + offset + write_length - chunk_offset, + ); + let existing_bytes_in_chunk = if file.size as usize > chunk_offset { + std::cmp::min(kv::CHUNK_SIZE, file.size as usize - chunk_offset) + } else { + 0 + }; + let needs_existing = write_start > 0 || existing_bytes_in_chunk > write_end; + let chunk_key = kv::get_chunk_key(file.file_tag, chunk_idx as u32).to_vec(); + let existing_chunk_index = if needs_existing { + let idx = chunk_keys_to_fetch.len(); + chunk_keys_to_fetch.push(chunk_key.clone()); + Some(idx) + } else { + None + }; + + plans.push(WritePlan { + chunk_key, + chunk_offset, + write_start, + write_end, + existing_chunk_index, + }); + } + + let existing_chunks = if chunk_keys_to_fetch.is_empty() { + Vec::new() + } else { + match ctx.kv_get(chunk_keys_to_fetch.clone()) { + Ok(resp) => { + let value_map = build_value_map(&resp); + chunk_keys_to_fetch + .iter() + .map(|key| value_map.get(key.as_slice()).map(|value| value.to_vec())) + .collect::<Vec<_>>() + } + Err(_) => return SQLITE_IOERR_WRITE, + } + }; + + let mut entries_to_write = Vec::with_capacity(plans.len() + 1); + for plan in &plans { + let existing_chunk = plan + .existing_chunk_index + .and_then(|idx| existing_chunks.get(idx)) + .and_then(|value| value.as_ref()); + + let mut new_chunk = if let Some(existing_chunk) = existing_chunk { + let mut chunk = vec![0u8; std::cmp::max(existing_chunk.len(),
plan.write_end)]; + chunk[..existing_chunk.len()].copy_from_slice(existing_chunk); + chunk + } else { + vec![0u8; plan.write_end] + }; + + let source_start = plan.chunk_offset + plan.write_start - offset; + let source_end = source_start + (plan.write_end - plan.write_start); + new_chunk[plan.write_start..plan.write_end] + .copy_from_slice(&data[source_start..source_end]); + + entries_to_write.push((plan.chunk_key.clone(), new_chunk)); + } + + let previous_size = file.size; + let previous_meta_dirty = file.meta_dirty; + let new_size = std::cmp::max(file.size, write_end_offset as i64); + if new_size != previous_size { + file.size = new_size; + file.meta_dirty = true; + } + if file.meta_dirty { + entries_to_write.push((file.meta_key.to_vec(), encode_file_meta(file.size))); + } + + // Update read cache with the entries we're about to write. + { + let state = get_file_state(file.state); + for (key, value) in &entries_to_write { + // Only cache chunk keys, not metadata. + if key.len() == 8 { + state.read_cache.insert(key.clone(), value.clone()); + } + } + } + + let (keys, values) = split_entries(entries_to_write); + if ctx.kv_put(keys, values).is_err() { + file.size = previous_size; + file.meta_dirty = previous_meta_dirty; + return SQLITE_IOERR_WRITE; + } + file.meta_dirty = false; + + ctx.vfs_metrics.xwrite_us.fetch_add(write_start.elapsed().as_micros() as u64, Ordering::Relaxed); + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_truncate( + p_file: *mut sqlite3_file, + size: sqlite3_int64, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_TRUNCATE, { + let file = get_file(p_file); + let ctx = &*file.ctx; + + if size < 0 || size as u64 > kv::MAX_FILE_SIZE { + return SQLITE_IOERR_TRUNCATE; + } + + if size >= file.size { + if size > file.size { + let previous_size = file.size; + let previous_meta_dirty = file.meta_dirty; + file.size = size; + file.meta_dirty = true; + if ctx + .kv_put( + vec![file.meta_key.to_vec()], + vec![encode_file_meta(file.size)], + ) + .is_err() + { + 
file.size = previous_size; + file.meta_dirty = previous_meta_dirty; + return SQLITE_IOERR_TRUNCATE; + } + file.meta_dirty = false; + } + return SQLITE_OK; + } + + // Invalidate read cache entries for truncated chunks. + { + let state = get_file_state(file.state); + let truncate_from_chunk = if size == 0 { 0u32 } else { (size as u32 / kv::CHUNK_SIZE as u32) + 1 }; + state.read_cache.retain(|key, _| { + // Chunk keys are 8 bytes: [prefix, version, CHUNK_PREFIX, file_tag, idx_be32] + if key.len() == 8 && key[3] == file.file_tag { + let chunk_idx = u32::from_be_bytes([key[4], key[5], key[6], key[7]]); + chunk_idx < truncate_from_chunk + } else { + true + } + }); + } + + let last_chunk_to_keep = if size == 0 { + -1 + } else { + (size - 1) / kv::CHUNK_SIZE as i64 + }; + let last_existing_chunk = if file.size == 0 { + -1 + } else { + (file.size - 1) / kv::CHUNK_SIZE as i64 + }; + + let previous_size = file.size; + let previous_meta_dirty = file.meta_dirty; + file.size = size; + file.meta_dirty = true; + if ctx + .kv_put( + vec![file.meta_key.to_vec()], + vec![encode_file_meta(file.size)], + ) + .is_err() + { + file.size = previous_size; + file.meta_dirty = previous_meta_dirty; + return SQLITE_IOERR_TRUNCATE; + } + file.meta_dirty = false; + + if size > 0 && size as usize % kv::CHUNK_SIZE != 0 { + let last_chunk_key = kv::get_chunk_key(file.file_tag, last_chunk_to_keep as u32); + let resp = match ctx.kv_get(vec![last_chunk_key.to_vec()]) { + Ok(resp) => resp, + Err(_) => return SQLITE_IOERR_TRUNCATE, + }; + let value_map = build_value_map(&resp); + if let Some(last_chunk_data) = value_map.get(last_chunk_key.as_slice()) { + let truncated_len = size as usize % kv::CHUNK_SIZE; + if last_chunk_data.len() > truncated_len { + if ctx + .kv_put( + vec![last_chunk_key.to_vec()], + vec![last_chunk_data[..truncated_len].to_vec()], + ) + .is_err() + { + return SQLITE_IOERR_TRUNCATE; + } + } + } + } + + let mut keys_to_delete = Vec::new(); + let mut chunk_idx = last_chunk_to_keep + 1; 
+ while chunk_idx <= last_existing_chunk { + keys_to_delete.push(kv::get_chunk_key(file.file_tag, chunk_idx as u32).to_vec()); + chunk_idx += 1; + } + + for chunk in keys_to_delete.chunks(KV_MAX_BATCH_KEYS) { + if ctx.kv_delete(chunk.to_vec()).is_err() { + return SQLITE_IOERR_TRUNCATE; + } + } + + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_sync( + p_file: *mut sqlite3_file, + _flags: c_int, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_FSYNC, { + let file = get_file(p_file); + if !file.meta_dirty { + return SQLITE_OK; + } + + let ctx = &*file.ctx; + if ctx + .kv_put( + vec![file.meta_key.to_vec()], + vec![encode_file_meta(file.size)], + ) + .is_err() + { + return SQLITE_IOERR_FSYNC; + } + file.meta_dirty = false; + + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_file_size( + p_file: *mut sqlite3_file, + p_size: *mut sqlite3_int64, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_FSTAT, { + let file = get_file(p_file); + *p_size = file.size; + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_lock(_p_file: *mut sqlite3_file, _level: c_int) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_LOCK, SQLITE_OK) +} + +unsafe extern "C" fn kv_io_unlock(_p_file: *mut sqlite3_file, _level: c_int) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_UNLOCK, SQLITE_OK) +} + +unsafe extern "C" fn kv_io_check_reserved_lock( + _p_file: *mut sqlite3_file, + p_res_out: *mut c_int, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, { + *p_res_out = 0; + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_io_file_control( + p_file: *mut sqlite3_file, + op: c_int, + _p_arg: *mut c_void, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, { + let file = get_file(p_file); + if file.state.is_null() { + return SQLITE_NOTFOUND; + } + let state = get_file_state(file.state); + + match op { + SQLITE_FCNTL_BEGIN_ATOMIC_WRITE => { + state.saved_file_size = file.size; + state.batch_mode = true; + file.meta_dirty = false; + state.dirty_buffer.clear(); + SQLITE_OK + } + SQLITE_FCNTL_COMMIT_ATOMIC_WRITE => { + let ctx = 
&*file.ctx; + let commit_start = std::time::Instant::now(); + let dirty_page_count = state.dirty_buffer.len() as u64; + let max_dirty_pages = if file.meta_dirty { + KV_MAX_BATCH_KEYS - 1 + } else { + KV_MAX_BATCH_KEYS + }; + + if state.dirty_buffer.len() > max_dirty_pages { + state.dirty_buffer.clear(); + file.size = state.saved_file_size; + file.meta_dirty = false; + state.batch_mode = false; + return SQLITE_IOERR; + } + + let mut entries = Vec::with_capacity(state.dirty_buffer.len() + 1); + for (chunk_index, data) in &state.dirty_buffer { + entries.push(( + kv::get_chunk_key(file.file_tag, *chunk_index).to_vec(), + data.clone(), + )); + } + if file.meta_dirty { + entries.push((file.meta_key.to_vec(), encode_file_meta(file.size))); + } + + let (keys, values) = split_entries(entries); + if ctx.kv_put(keys, values).is_err() { + state.dirty_buffer.clear(); + file.size = state.saved_file_size; + file.meta_dirty = false; + state.batch_mode = false; + return SQLITE_IOERR; + } + + // Move dirty buffer entries into the read cache so subsequent + // reads can serve them without a KV round-trip. + let flushed: Vec<_> = std::mem::take(&mut state.dirty_buffer).into_iter().collect(); + for (chunk_index, data) in flushed { + let key = kv::get_chunk_key(file.file_tag, chunk_index); + state.read_cache.insert(key.to_vec(), data); + } + file.meta_dirty = false; + state.batch_mode = false; + ctx.vfs_metrics.commit_atomic_count.fetch_add(1, Ordering::Relaxed); + ctx.vfs_metrics.commit_atomic_pages.fetch_add(dirty_page_count, Ordering::Relaxed); + ctx.vfs_metrics.commit_atomic_us.fetch_add(commit_start.elapsed().as_micros() as u64, Ordering::Relaxed); + // Also record in the channel-level batch metrics. 
+ ctx.channel.metrics().batch_atomic_commits.fetch_add(1, Ordering::Relaxed); + ctx.channel.metrics().batch_atomic_pages.fetch_add(dirty_page_count, Ordering::Relaxed); + SQLITE_OK + } + SQLITE_FCNTL_ROLLBACK_ATOMIC_WRITE => { + if !state.batch_mode { + return SQLITE_OK; + } + state.dirty_buffer.clear(); + file.size = state.saved_file_size; + file.meta_dirty = false; + state.batch_mode = false; + SQLITE_OK + } + _ => SQLITE_NOTFOUND, + } + }) +} + +unsafe extern "C" fn kv_io_sector_size(_p_file: *mut sqlite3_file) -> c_int { + vfs_catch_unwind!(kv::CHUNK_SIZE as c_int, kv::CHUNK_SIZE as c_int) +} + +unsafe extern "C" fn kv_io_device_characteristics(_p_file: *mut sqlite3_file) -> c_int { + vfs_catch_unwind!(0, SQLITE_IOCAP_BATCH_ATOMIC) +} + +// MARK: VFS Callbacks + +unsafe extern "C" fn kv_vfs_open( + p_vfs: *mut sqlite3_vfs, + z_name: *const c_char, + p_file: *mut sqlite3_file, + flags: c_int, + p_out_flags: *mut c_int, +) -> c_int { + vfs_catch_unwind!(SQLITE_CANTOPEN, { + if z_name.is_null() { + return SQLITE_CANTOPEN; + } + + let ctx = get_vfs_ctx(p_vfs); + let path = match CStr::from_ptr(z_name).to_str() { + Ok(path) => path, + Err(_) => return SQLITE_CANTOPEN, + }; + let file_tag = match ctx.resolve_file_tag(path) { + Some(file_tag) => file_tag, + None => return SQLITE_CANTOPEN, + }; + let meta_key = kv::get_meta_key(file_tag); + + let resp = match ctx.kv_get(vec![meta_key.to_vec()]) { + Ok(resp) => resp, + Err(_) => return SQLITE_CANTOPEN, + }; + let value_map = build_value_map(&resp); + + let size = if let Some(size_data) = value_map.get(meta_key.as_slice()) { + let size = match decode_file_meta(size_data) { + Some(size) => size, + None => return SQLITE_IOERR, + }; + if !is_valid_file_size(size) { + return SQLITE_IOERR; + } + size + } else if flags & SQLITE_OPEN_CREATE != 0 { + if file_tag == kv::FILE_TAG_MAIN { + let size = kv::CHUNK_SIZE as i64; + let entries = vec![ + (kv::get_chunk_key(file_tag, 0).to_vec(), empty_db_page()), + (meta_key.to_vec(), 
encode_file_meta(size)), + ]; + let (keys, values) = split_entries(entries); + if ctx.kv_put(keys, values).is_err() { + return SQLITE_CANTOPEN; + } + size + } else { + let size = 0i64; + if ctx + .kv_put( + vec![meta_key.to_vec()], + vec![encode_file_meta(size)], + ) + .is_err() + { + return SQLITE_CANTOPEN; + } + size + } + } else { + return SQLITE_CANTOPEN; + }; + + let state = Box::into_raw(Box::new(KvFileState::new())); + let base = sqlite3_file { + pMethods: ctx.io_methods.as_ref() as *const sqlite3_io_methods, + }; + ptr::write( + p_file as *mut KvFile, + KvFile { + base, + ctx: ctx as *const VfsContext, + state, + file_tag, + meta_key, + size, + meta_dirty: false, + flags, + }, + ); + + if !p_out_flags.is_null() { + *p_out_flags = flags; + } + + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_vfs_delete( + p_vfs: *mut sqlite3_vfs, + z_name: *const c_char, + _sync_dir: c_int, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_DELETE, { + if z_name.is_null() { + return SQLITE_IOERR_DELETE; + } + + let ctx = get_vfs_ctx(p_vfs); + let path = match CStr::from_ptr(z_name).to_str() { + Ok(path) => path, + Err(_) => return SQLITE_IOERR_DELETE, + }; + let file_tag = match ctx.resolve_file_tag(path) { + Some(file_tag) => file_tag, + None => return SQLITE_IOERR_DELETE, + }; + + match ctx.delete_file(file_tag) { + Ok(()) => SQLITE_OK, + Err(_) => SQLITE_IOERR_DELETE, + } + }) +} + +unsafe extern "C" fn kv_vfs_access( + p_vfs: *mut sqlite3_vfs, + z_name: *const c_char, + _flags: c_int, + p_res_out: *mut c_int, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR_ACCESS, { + if z_name.is_null() { + *p_res_out = 0; + return SQLITE_OK; + } + + let ctx = get_vfs_ctx(p_vfs); + let path = match CStr::from_ptr(z_name).to_str() { + Ok(path) => path, + Err(_) => { + *p_res_out = 0; + return SQLITE_OK; + } + }; + let file_tag = match ctx.resolve_file_tag(path) { + Some(file_tag) => file_tag, + None => { + *p_res_out = 0; + return SQLITE_OK; + } + }; + let meta_key = kv::get_meta_key(file_tag); 
+ let resp = match ctx.kv_get(vec![meta_key.to_vec()]) { + Ok(resp) => resp, + Err(_) => return SQLITE_IOERR_ACCESS, + }; + let value_map = build_value_map(&resp); + *p_res_out = if value_map.contains_key(meta_key.as_slice()) { 1 } else { 0 }; + + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_vfs_full_pathname( + _p_vfs: *mut sqlite3_vfs, + z_name: *const c_char, + n_out: c_int, + z_out: *mut c_char, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, { + if z_name.is_null() || z_out.is_null() || n_out <= 0 { + return SQLITE_IOERR; + } + + let name = CStr::from_ptr(z_name); + let bytes = name.to_bytes_with_nul(); + if bytes.len() >= n_out as usize { + return SQLITE_IOERR; + } + + ptr::copy_nonoverlapping(bytes.as_ptr() as *const c_char, z_out, bytes.len()); + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_vfs_randomness( + _p_vfs: *mut sqlite3_vfs, + n_byte: c_int, + z_out: *mut c_char, +) -> c_int { + vfs_catch_unwind!(0, { + let buf = slice::from_raw_parts_mut(z_out as *mut u8, n_byte as usize); + match getrandom::getrandom(buf) { + Ok(()) => n_byte, + Err(_) => 0, + } + }) +} + +unsafe extern "C" fn kv_vfs_sleep( + _p_vfs: *mut sqlite3_vfs, + microseconds: c_int, +) -> c_int { + vfs_catch_unwind!(0, { + std::thread::sleep(std::time::Duration::from_micros(microseconds as u64)); + microseconds + }) +} + +unsafe extern "C" fn kv_vfs_current_time( + _p_vfs: *mut sqlite3_vfs, + p_time_out: *mut f64, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, { + let now = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap_or_default(); + *p_time_out = 2440587.5 + (now.as_secs_f64() / 86400.0); + SQLITE_OK + }) +} + +unsafe extern "C" fn kv_vfs_get_last_error( + _p_vfs: *mut sqlite3_vfs, + _n_byte: c_int, + _z_err_msg: *mut c_char, +) -> c_int { + vfs_catch_unwind!(SQLITE_IOERR, SQLITE_OK) +} + +// MARK: KvVfs + +pub struct KvVfs { + vfs_ptr: *mut sqlite3_vfs, + _name: CString, + ctx_ptr: *mut VfsContext, +} + +unsafe impl Send for KvVfs {} +unsafe impl Sync 
for KvVfs {} + +impl KvVfs { + pub fn register( + name: &str, + channel: Arc<KvChannel>, + actor_id: String, + rt_handle: Handle, + ) -> Result<Self, String> { + let mut io_methods: sqlite3_io_methods = unsafe { std::mem::zeroed() }; + io_methods.iVersion = 1; + io_methods.xClose = Some(kv_io_close); + io_methods.xRead = Some(kv_io_read); + io_methods.xWrite = Some(kv_io_write); + io_methods.xTruncate = Some(kv_io_truncate); + io_methods.xSync = Some(kv_io_sync); + io_methods.xFileSize = Some(kv_io_file_size); + io_methods.xLock = Some(kv_io_lock); + io_methods.xUnlock = Some(kv_io_unlock); + io_methods.xCheckReservedLock = Some(kv_io_check_reserved_lock); + io_methods.xFileControl = Some(kv_io_file_control); + io_methods.xSectorSize = Some(kv_io_sector_size); + io_methods.xDeviceCharacteristics = Some(kv_io_device_characteristics); + + let vfs_metrics = Arc::new(VfsMetrics::new()); + let ctx = Box::new(VfsContext { + channel, + actor_id: actor_id.clone(), + main_file_name: actor_id, + rt_handle, + io_methods: Box::new(io_methods), + vfs_metrics, + }); + let ctx_ptr = Box::into_raw(ctx); + + let name_cstring = CString::new(name).map_err(|err| err.to_string())?; + + let mut vfs: sqlite3_vfs = unsafe { std::mem::zeroed() }; + vfs.iVersion = 1; + vfs.szOsFile = std::mem::size_of::<KvFile>() as c_int; + vfs.mxPathname = MAX_PATHNAME; + vfs.zName = name_cstring.as_ptr(); + vfs.pAppData = ctx_ptr as *mut c_void; + vfs.xOpen = Some(kv_vfs_open); + vfs.xDelete = Some(kv_vfs_delete); + vfs.xAccess = Some(kv_vfs_access); + vfs.xFullPathname = Some(kv_vfs_full_pathname); + vfs.xRandomness = Some(kv_vfs_randomness); + vfs.xSleep = Some(kv_vfs_sleep); + vfs.xCurrentTime = Some(kv_vfs_current_time); + vfs.xGetLastError = Some(kv_vfs_get_last_error); + + let vfs_ptr = Box::into_raw(Box::new(vfs)); + + let rc = unsafe { sqlite3_vfs_register(vfs_ptr, 0) }; + if rc != SQLITE_OK { + unsafe { + drop(Box::from_raw(vfs_ptr)); + drop(Box::from_raw(ctx_ptr)); + } + return Err(format!("sqlite3_vfs_register failed with
code {rc}")); + } + + Ok(Self { + vfs_ptr, + _name: name_cstring, + ctx_ptr, + }) + } + + pub fn name_ptr(&self) -> *const c_char { + self._name.as_ptr() + } +} + +impl Drop for KvVfs { + fn drop(&mut self) { + unsafe { + sqlite3_vfs_unregister(self.vfs_ptr); + drop(Box::from_raw(self.vfs_ptr)); + drop(Box::from_raw(self.ctx_ptr)); + } + } +} + +// MARK: NativeDatabase + +pub struct NativeDatabase { + db: *mut sqlite3, + _vfs: KvVfs, +} + +unsafe impl Send for NativeDatabase {} + +impl NativeDatabase { + pub fn as_ptr(&self) -> *mut sqlite3 { + self.db + } +} + +impl Drop for NativeDatabase { + fn drop(&mut self) { + if !self.db.is_null() { + unsafe { + sqlite3_close(self.db); + } + } + } +} + +pub fn open_database(vfs: KvVfs, file_name: &str) -> Result<NativeDatabase, String> { + let c_name = CString::new(file_name).map_err(|err| err.to_string())?; + let mut db: *mut sqlite3 = ptr::null_mut(); + + let rc = unsafe { + sqlite3_open_v2( + c_name.as_ptr(), + &mut db, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, + vfs.name_ptr(), + ) + }; + if rc != SQLITE_OK { + if !db.is_null() { + unsafe { + sqlite3_close(db); + } + } + return Err(format!("sqlite3_open_v2 failed with code {rc}")); + } + + for pragma in &[ + "PRAGMA page_size = 4096;", + "PRAGMA journal_mode = DELETE;", + "PRAGMA synchronous = NORMAL;", + "PRAGMA temp_store = MEMORY;", + "PRAGMA auto_vacuum = NONE;", + "PRAGMA locking_mode = EXCLUSIVE;", + ] { + let c_sql = CString::new(*pragma).map_err(|err| err.to_string())?; + let rc = unsafe { + sqlite3_exec( + db, + c_sql.as_ptr(), + None, + ptr::null_mut(), + ptr::null_mut(), + ) + }; + if rc != SQLITE_OK { + unsafe { + sqlite3_close(db); + } + return Err(format!("{pragma} failed with code {rc}")); + } + } + + Ok(NativeDatabase { db, _vfs: vfs }) +} + +// MARK: Tests + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn encode_decode_round_trip() { + for size in [0i64, 1, 4096, 1_000_000, i64::MAX / 2] { + let encoded = encode_file_meta(size); + assert_eq!(encoded.len(),
META_ENCODED_SIZE); + assert_eq!(&encoded[0..2], &META_VERSION.to_le_bytes()); + let decoded = decode_file_meta(&encoded).unwrap(); + assert_eq!(decoded, size); + } + } + + #[test] + fn encode_zero_size() { + let encoded = encode_file_meta(0); + assert_eq!(encoded, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]); + } + + #[test] + fn encode_known_size() { + let encoded = encode_file_meta(4096); + assert_eq!( + encoded, + [1, 0, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00] + ); + } + + #[test] + fn decode_invalid_version() { + let data = [2u8, 0, 0, 0, 0, 0, 0, 0, 0, 0]; + assert!(decode_file_meta(&data).is_none()); + } + + #[test] + fn decode_too_short() { + assert!(decode_file_meta(&[]).is_none()); + assert!(decode_file_meta(&[1]).is_none()); + assert!(decode_file_meta(&[1, 0]).is_none()); + assert!(decode_file_meta(&[1, 0, 0, 0, 0]).is_none()); + } + + #[test] + fn kv_file_struct_is_larger_than_sqlite3_file() { + assert!(std::mem::size_of::() > std::mem::size_of::()); + } + + #[test] + fn meta_encoded_size_constant() { + assert_eq!(META_ENCODED_SIZE, 10); + } + + #[test] + fn meta_version_matches_wasm_vfs() { + assert_eq!(META_VERSION, 1); + } + + #[test] + fn encode_matches_vbare_format() { + let encoded = encode_file_meta(42); + assert_eq!(encoded[0], 0x01); + assert_eq!(encoded[1], 0x00); + assert_eq!(&encoded[2..], &42u64.to_le_bytes()); + } + + #[test] + fn empty_db_page_matches_generated_prefix() { + let page = empty_db_page(); + assert_eq!(page.len(), kv::CHUNK_SIZE); + assert_eq!( + &page[..EMPTY_DB_PAGE_HEADER_PREFIX.len()], + &EMPTY_DB_PAGE_HEADER_PREFIX + ); + assert!(page[EMPTY_DB_PAGE_HEADER_PREFIX.len()..] 
+ .iter() + .all(|byte| *byte == 0)); + } +} diff --git a/rivetkit-typescript/packages/sqlite-vfs-test/tests/sqlite-vfs.test.ts b/rivetkit-typescript/packages/sqlite-vfs-test/tests/sqlite-vfs.test.ts index 90ae1f797f..2e6d7765c1 100644 --- a/rivetkit-typescript/packages/sqlite-vfs-test/tests/sqlite-vfs.test.ts +++ b/rivetkit-typescript/packages/sqlite-vfs-test/tests/sqlite-vfs.test.ts @@ -35,6 +35,15 @@ function createKvStore(): KvVfsOptions { store.delete(keyToString(key)); } }, + deleteRange: async (start, end) => { + const startHex = keyToString(start); + const endHex = keyToString(end); + for (const key of store.keys()) { + if (key >= startHex && key < endHex) { + store.delete(key); + } + } + }, }; } diff --git a/rivetkit-typescript/packages/sqlite-vfs/src/kv.ts b/rivetkit-typescript/packages/sqlite-vfs/src/kv.ts index a6f24ba4f6..6dcfc5ff72 100644 --- a/rivetkit-typescript/packages/sqlite-vfs/src/kv.ts +++ b/rivetkit-typescript/packages/sqlite-vfs/src/kv.ts @@ -3,6 +3,9 @@ * * This module contains constants and utilities for building keys used in the * key-value store for SQLite file storage. + * + * Keep in sync with rivetkit-typescript/packages/sqlite-native/src/kv.rs + * (native VFS). Both must produce byte-identical keys. */ /** @@ -74,7 +77,7 @@ export function getMetaKey(fileTag: SqliteFileTag): Uint8Array { /** * Gets the key for one chunk of file data. - * Format: [SQLITE_PREFIX, CHUNK_PREFIX, file tag, chunk index (u32 big-endian)] + * Format: [SQLITE_PREFIX, SCHEMA_VERSION, CHUNK_PREFIX, file tag, chunk index (u32 big-endian)] * * The chunk index is derived from byte offset as floor(offset / CHUNK_SIZE), * which is how SQLite byte ranges map onto KV keys. @@ -94,3 +97,20 @@ export function getChunkKey( key[7] = chunkIndex & 0xff; return key; } + +/** + * Returns a key that is lexicographically just past all chunk keys for the + * given file tag. Useful as the exclusive end bound for deleteRange. 
+ *
+ * The key is [SQLITE_PREFIX, SCHEMA_VERSION, CHUNK_PREFIX, fileTag + 1],
+ * which is shorter than a chunk key but lexicographically greater than any
+ * 8-byte chunk key with the same fileTag prefix.
+ */
+export function getChunkKeyRangeEnd(fileTag: SqliteFileTag): Uint8Array {
+	const key = new Uint8Array(4);
+	key[0] = SQLITE_PREFIX;
+	key[1] = SQLITE_SCHEMA_VERSION;
+	key[2] = CHUNK_PREFIX;
+	key[3] = fileTag + 1;
+	return key;
+}
diff --git a/rivetkit-typescript/packages/sqlite-vfs/src/types.ts b/rivetkit-typescript/packages/sqlite-vfs/src/types.ts
index 3b0c107793..cd34b6fbc8 100644
--- a/rivetkit-typescript/packages/sqlite-vfs/src/types.ts
+++ b/rivetkit-typescript/packages/sqlite-vfs/src/types.ts
@@ -15,4 +15,6 @@ export interface KvVfsOptions {
 	 * is lost unless the caller captures it through this callback.
 	 */
 	onError?: (error: unknown) => void;
+	/** Delete all keys in the half-open range [start, end). */
+	deleteRange: (start: Uint8Array, end: Uint8Array) => Promise<void>;
 }
diff --git a/rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts b/rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts
index 1473ce9110..7639a41e94 100644
--- a/rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts
+++ b/rivetkit-typescript/packages/sqlite-vfs/src/vfs.ts
@@ -33,6 +33,7 @@ import {
 	FILE_TAG_SHM,
 	FILE_TAG_WAL,
 	getChunkKey,
+	getChunkKeyRangeEnd,
 	getMetaKey,
 	type SqliteFileTag,
 } from "./kv";
@@ -1364,13 +1365,16 @@
 	}
 
 	/**
-	 * Internal delete implementation
+	 * Internal delete implementation.
+	 * Uses deleteRange to remove all chunks in one range operation instead
+	 * of enumerating individual chunk keys. The chunk keys for a file tag
+	 * are lexicographically contiguous, so range deletion is always safe.
 	 */
 	async #delete(path: string): Promise<void> {
 		const { options, fileTag } = this.#resolveFileOrThrow(path);
 		const metaKey = getMetaKey(fileTag);
 
-		// Get file size to find out how many chunks to delete
+		// Get file size to check if the file exists
 		const sizeData = await options.get(metaKey);
 		if (!sizeData) {
@@ -1378,20 +1382,12 @@
 			return;
 		}
 
-		const size = decodeFileMeta(sizeData);
-
-		// Delete all chunks
-		const keysToDelete: Uint8Array[] = [metaKey];
-		const numChunks = Math.ceil(size / CHUNK_SIZE);
-		for (let i = 0; i < numChunks; i++) {
-			keysToDelete.push(getChunkKey(fileTag, i));
-		}
-
-		for (let b = 0; b < keysToDelete.length; b += KV_MAX_BATCH_KEYS) {
-			await options.deleteBatch(
-				keysToDelete.slice(b, b + KV_MAX_BATCH_KEYS),
-			);
-		}
+		// Delete all chunks via range delete and the metadata key.
+		await options.deleteRange(
+			getChunkKey(fileTag, 0),
+			getChunkKeyRangeEnd(fileTag),
+		);
+		await options.deleteBatch([metaKey]);
 	}
 
 	async xAccess(
@@ -1522,6 +1518,12 @@
 		}
 	}
 
+	// Return CHUNK_SIZE so SQLite aligns journal I/O to chunk boundaries.
+	// Must match the native VFS (kv_io_sector_size in sqlite-native/src/vfs.rs).
+	xSectorSize(_fileId: number): number {
+		return CHUNK_SIZE;
+	}
+
 	xDeviceCharacteristics(_fileId: number): number {
 		return SQLITE_IOCAP_BATCH_ATOMIC;
 	}
diff --git a/scripts/release/sdk.ts b/scripts/release/sdk.ts
index 44e9695d85..98893207c7 100644
--- a/scripts/release/sdk.ts
+++ b/scripts/release/sdk.ts
@@ -1,5 +1,5 @@
 import { $ } from "execa";
-import { readFile } from "node:fs/promises";
+import { readFile, readdir } from "node:fs/promises";
 import { join } from "node:path";
 import type { ReleaseOpts } from "./main";
@@ -14,6 +14,12 @@ export const EXCLUDED_RIVETKIT_PACKAGES = [
 	"example-agent-os-e2e",
 ] as const;
+
+// Packages excluded from the turbo build step but still published.
+// These have native/Rust dependencies that require separate build steps (e.g., napi-rs cross-compilation).
+const BUILD_EXCLUDED_RIVETKIT_PACKAGES = [
+	"@rivetkit/sqlite-native",
+] as const;
+
 async function npmVersionExists(
 	packageName: string,
 	version: string,
@@ -69,7 +75,10 @@ export async function publishSdk(opts: ReleaseOpts) {
 	console.log("==> Building rivetkit packages");
 
 	// Build exclusion filters for packages that shouldn't be built
-	const excludeFilters = EXCLUDED_RIVETKIT_PACKAGES.flatMap((pkg) => [
+	const excludeFilters = [
+		...EXCLUDED_RIVETKIT_PACKAGES,
+		...BUILD_EXCLUDED_RIVETKIT_PACKAGES,
+	].flatMap((pkg) => [
 		"-F",
 		`!${pkg}`,
 	]);
@@ -161,4 +170,52 @@ export async function publishSdk(opts: ReleaseOpts) {
 			cwd: opts.root,
 		})`pnpm --filter ${name} publish --access public --tag ${tag} --no-git-checks`;
 	}
+
+	// Publish sqlite-native platform packages.
+	// These are not in the pnpm workspace (nested under npm/) and need explicit publishing.
+	const sqliteNativeNpmDir = join(
+		opts.root,
+		"rivetkit-typescript/packages/sqlite-native/npm",
+	);
+	let platformDirs: string[];
+	try {
+		platformDirs = await readdir(sqliteNativeNpmDir);
+	} catch {
+		platformDirs = [];
+		console.log(
+			"==> sqlite-native npm/ directory not found, skipping platform packages",
+		);
+	}
+
+	const isRc = opts.version.includes("-rc.");
+	const rcTag = isRc ? "rc" : "latest";
+
+	for (const dir of platformDirs) {
+		const platformPkgPath = join(sqliteNativeNpmDir, dir, "package.json");
+		let platformPkg: { name: string };
+		try {
+			platformPkg = JSON.parse(await readFile(platformPkgPath, "utf-8"));
+		} catch {
+			continue;
+		}
+
+		const versionExists = await npmVersionExists(
+			platformPkg.name,
+			opts.version,
+		);
+		if (versionExists) {
+			console.log(
+				`Version ${opts.version} of ${platformPkg.name} already exists. Skipping...`,
+			);
+			continue;
+		}
+
+		console.log(
+			`==> Publishing to NPM: ${platformPkg.name}@${opts.version}`,
+		);
+		await $({
+			stdio: "inherit",
+			cwd: join(sqliteNativeNpmDir, dir),
+		})`npm publish --access public --tag ${rcTag}`;
+	}
 }
diff --git a/scripts/release/update_version.ts b/scripts/release/update_version.ts
index 6c4df90f15..2003d164d6 100644
--- a/scripts/release/update_version.ts
+++ b/scripts/release/update_version.ts
@@ -33,6 +33,21 @@ export async function updateVersion(opts: ReleaseOpts) {
 		find: /"version": ".*"/,
 		replace: `"version": "${opts.version}"`,
 	},
+	{
+		path: "rivetkit-typescript/packages/sqlite-native/npm/*/package.json",
+		find: /"version": ".*"/,
+		replace: `"version": "${opts.version}"`,
+	},
+	{
+		path: "rivetkit-typescript/packages/sqlite-native/package.json",
+		find: /("@rivetkit\/sqlite-native-[^"]+": )"[^"]+"/g,
+		replace: `$1"${opts.version}"`,
+	},
+	{
+		path: "rivetkit-typescript/packages/sqlite-native/Cargo.toml",
+		find: /^version = ".*"/m,
+		replace: `version = "${opts.version}"`,
+	},
 	{
 		path: "examples/**/package.json",
 		find: /"(@rivetkit\/[^"]+|rivetkit)": "\^?[0-9]+\.[0-9]+\.[0-9]+(?:-[^"]+)?"/g,
diff --git a/website/src/content/docs/actors/limits.mdx b/website/src/content/docs/actors/limits.mdx
index ae38a40c86..da6cac7810 100644
--- a/website/src/content/docs/actors/limits.mdx
+++ b/website/src/content/docs/actors/limits.mdx
@@ -91,6 +91,17 @@ These limits apply to the [SQLite database](/docs/actors/state#sqlite-database)
 |------|------------|------------|-------------|
 | Max storage size per actor | — | 10 GiB | Maximum total storage size for a single actor. This limit is shared with KV storage. |
 
+### KV Preloading
+
+When an actor starts, the engine can pre-fetch KV data declared in the actor name metadata and deliver it alongside the start command. This removes round-trips to storage during actor startup. RivetKit emits the preload manifest from its own key layout and exposes per-actor overrides via `options`. Operators can still enforce a global cap in the [engine config](/docs/self-hosting/configuration) with `pegboard.preload_max_total_bytes`.
+
+| Name | Soft Limit | Hard Limit | Description |
+|------|------------|------------|-------------|
+| Max total preload size | 1 MiB | — | Maximum total size of all preloaded KV data sent with the start command. Configurable via `pegboard.preload_max_total_bytes`. Setting to 0 disables all preloading. |
+| Max SQLite preload size | 768 KiB | — | Default maximum size of preloaded SQLite VFS data for RivetKit actors. Configurable per actor via `options.preloadMaxSqliteBytes`. Setting to 0 disables SQLite preloading for that actor. |
+| Max workflow preload size | 128 KiB | — | Default maximum size of preloaded workflow data for RivetKit actors. Configurable per actor via `options.preloadMaxWorkflowBytes`. Setting to 0 disables workflow preloading for that actor. |
+| Max connections preload size | 64 KiB | — | Default maximum size of preloaded connection data for RivetKit actors. Configurable per actor via `options.preloadMaxConnectionsBytes`. Setting to 0 disables connections preloading for that actor. |
+
 ### Actor Input
 
 See [Actor Input](/docs/actors/input) for details.
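A standalone sketch of the key-ordering invariant that the `deleteRange` change in `vfs.ts` relies on: every chunk key for a file tag must fall inside the half-open range `[getChunkKey(tag, 0), getChunkKeyRangeEnd(tag))`, and no key for the next file tag may leak in. The key builders mirror the ones in `kv.ts`; the concrete prefix/version constant values here are assumptions for illustration, not the values defined in the repository.

```typescript
// Illustrative placeholders — the real constants live in sqlite-vfs/src/kv.ts.
const SQLITE_PREFIX = 0x01;
const SQLITE_SCHEMA_VERSION = 0x01;
const CHUNK_PREFIX = 0x02;

// [prefix, version, chunkPrefix, fileTag, chunk index (u32 big-endian)]
function getChunkKey(fileTag: number, chunkIndex: number): Uint8Array {
	const key = new Uint8Array(8);
	key[0] = SQLITE_PREFIX;
	key[1] = SQLITE_SCHEMA_VERSION;
	key[2] = CHUNK_PREFIX;
	key[3] = fileTag;
	key[4] = (chunkIndex >>> 24) & 0xff;
	key[5] = (chunkIndex >>> 16) & 0xff;
	key[6] = (chunkIndex >>> 8) & 0xff;
	key[7] = chunkIndex & 0xff;
	return key;
}

// Shorter 4-byte key that sorts just past every chunk key for fileTag.
function getChunkKeyRangeEnd(fileTag: number): Uint8Array {
	const key = new Uint8Array(4);
	key[0] = SQLITE_PREFIX;
	key[1] = SQLITE_SCHEMA_VERSION;
	key[2] = CHUNK_PREFIX;
	key[3] = fileTag + 1;
	return key;
}

// Byte-wise lexicographic compare; a strict prefix sorts before longer keys.
function compareKeys(a: Uint8Array, b: Uint8Array): number {
	const len = Math.min(a.length, b.length);
	for (let i = 0; i < len; i++) {
		if (a[i] !== b[i]) return a[i] - b[i];
	}
	return a.length - b.length;
}

const tag = 0x01;
const end = getChunkKeyRangeEnd(tag);
// All chunk indices for this tag, including the u32 maximum, stay in range.
for (const idx of [0, 1, 255, 0xffffffff]) {
	if (compareKeys(getChunkKey(tag, idx), end) >= 0) {
		throw new Error("chunk key escaped the delete range");
	}
}
// The next file tag's first chunk key is at or past the exclusive end bound.
if (compareKeys(getChunkKey(tag + 1, 0), end) < 0) {
	throw new Error("next file tag leaked into the delete range");
}
```

Because the fileTag byte precedes the chunk-index bytes, chunk keys for one file are lexicographically contiguous, which is what makes replacing per-key batch deletes with a single range delete safe in both the WASM and native VFS.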