Merged
5 changes: 2 additions & 3 deletions README.md
@@ -200,7 +200,6 @@ Run with config file:
| `DUCKGRES_PROCESS_ISOLATION` | Enable process isolation (`1` or `true`) | `false` |
| `DUCKGRES_IDLE_TIMEOUT` | Connection idle timeout (e.g., `30m`, `1h`, `-1` to disable) | `24h` |
| `DUCKGRES_K8S_SHARED_WARM_TARGET` | Neutral shared warm-worker target for K8s multi-tenant mode (`0` disables prewarm) | `0` |
-| `DUCKGRES_K8S_SHARED_WARM_WORKERS` | Enable the reserve -> activate -> hot shared warm-worker path in K8s multi-tenant mode | `false` |
| `DUCKGRES_DUCKLAKE_METADATA_STORE` | DuckLake metadata connection string | - |
| `POSTHOG_API_KEY` | PostHog project API key (`phc_...`); enables log export | - |
| `POSTHOG_HOST` | PostHog ingest host | `us.i.posthog.com` |
@@ -601,7 +600,7 @@ kill -USR2 <control-plane-pid>

In Kubernetes environments, `--worker-backend remote` is now the multitenant path only. It requires `--config-store`, and the control plane then spawns worker pods via the Kubernetes API, communicates with them over gRPC (Arrow Flight SQL), and uses owner references for automatic garbage collection when the control plane pod is deleted.

-The shared warm-worker activation path is gated by `--k8s-shared-warm-workers` / `k8s.shared_warm_workers`. Its default is `false`, which keeps the existing remote behavior; when enabled, newly reserved warm workers must receive tenant runtime over the activation RPC before they can serve sessions.
+When a shared warm-worker target is configured (`--k8s-shared-warm-target`), the pool keeps workers neutral at startup, reserves them per org, activates tenant runtime over the activation RPC, and retires them after use. The full lifecycle is: idle → reserved → activating → hot → draining → retired.
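The lifecycle in the added paragraph can be pictured as a small state machine. This is an illustrative sketch only; the state and function names are assumptions, not types from this PR:

```go
package main

import "fmt"

// WorkerState models the shared warm-worker lifecycle:
// idle → reserved → activating → hot → draining → retired.
// All identifiers here are hypothetical.
type WorkerState int

const (
	Idle WorkerState = iota
	Reserved
	Activating
	Hot
	Draining
	Retired
)

// next encodes the single legal forward transition from each state.
var next = map[WorkerState]WorkerState{
	Idle:       Reserved,
	Reserved:   Activating,
	Activating: Hot,
	Hot:        Draining,
	Draining:   Retired,
}

// Advance moves a worker one step along the lifecycle; Retired is terminal.
func Advance(s WorkerState) (WorkerState, error) {
	n, ok := next[s]
	if !ok {
		return s, fmt.Errorf("no transition from state %d", s)
	}
	return n, nil
}

func main() {
	s := Idle
	for {
		n, err := Advance(s)
		if err != nil {
			break
		}
		s = n
	}
	fmt.Println(s == Retired) // walking the chain always ends at retired
}
```

The point of the sketch is that activation is a distinct step between reservation and serving: a reserved worker is not usable until tenant runtime has been delivered over the activation RPC.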

```bash
# Local multitenant K8s workflow
@@ -612,7 +611,7 @@ See [`k8s/README.md`](k8s/README.md) for the full architecture, configuration re

On the multi-tenant path, the config store now keeps per-team managed-warehouse metadata in addition to team/user auth and limits. That team-scoped contract is intended to become the source of truth for the tenant warehouse DB, the tenant DuckLake metadata store (which may live on shared Aurora or a dedicated RDS instance), object-store settings, worker identity, secret references, and provisioning state. The older config-store `DuckLakeConfig` singleton remains only as a legacy cluster-wide setting and should not be treated as authoritative for multi-tenant runtime wiring.

-When `DUCKGRES_K8S_SHARED_WARM_WORKERS=true`, the shared K8s pool keeps workers neutral at startup, reserves them per team, activates tenant runtime over the control-plane RPC channel, and retires them after use. Leave it disabled to keep the compatibility path during rollout.
+The shared K8s pool keeps workers neutral at startup, reserves them per org, activates tenant runtime over the control-plane RPC channel, and retires them after use.

Managed-warehouse contract notes:

18 changes: 0 additions & 18 deletions config_resolution.go
@@ -52,7 +52,6 @@ type configCLIInputs struct {
K8sWorkerServiceAccount string
K8sMaxWorkers int
K8sSharedWarmTarget int
-K8sSharedWarmWorkers bool
QueryLog bool
}

@@ -74,7 +73,6 @@ type resolvedConfig struct {
K8sWorkerServiceAccount string
K8sMaxWorkers int
K8sSharedWarmTarget int
-K8sSharedWarmWorkers bool
ConfigStoreConn string
ConfigPollInterval time.Duration
AdminToken string
@@ -131,7 +129,6 @@ func resolveEffectiveConfig(fileCfg *FileConfig, cli configCLIInputs, getenv fun
var k8sWorkerPort int
var k8sWorkerSecret, k8sWorkerConfigMap, k8sWorkerImagePullPolicy, k8sWorkerServiceAccount string
var k8sMaxWorkers, k8sSharedWarmTarget int
-var k8sSharedWarmWorkers bool
var configStoreConn string
var configPollInterval time.Duration
var adminToken string
@@ -385,9 +382,6 @@ func resolveEffectiveConfig(fileCfg *FileConfig, cli configCLIInputs, getenv fun
if fileCfg.K8s.SharedWarmTarget != 0 {
k8sSharedWarmTarget = fileCfg.K8s.SharedWarmTarget
}
-if fileCfg.K8s.SharedWarmWorkers {
-k8sSharedWarmWorkers = true
-}
}

if v := getenv("DUCKGRES_HOST"); v != "" {
@@ -635,14 +629,6 @@ func resolveEffectiveConfig(fileCfg *FileConfig, cli configCLIInputs, getenv fun
warn("Invalid DUCKGRES_K8S_SHARED_WARM_TARGET: " + err.Error())
}
}
-if v := getenv("DUCKGRES_K8S_SHARED_WARM_WORKERS"); v != "" {
-if b, err := strconv.ParseBool(v); err == nil {
-k8sSharedWarmWorkers = b
-} else {
-warn("Invalid DUCKGRES_K8S_SHARED_WARM_WORKERS: " + err.Error())
-}
-}
-
// Query log env vars
if v := getenv("DUCKGRES_QUERY_LOG_ENABLED"); v != "" {
if b, err := strconv.ParseBool(v); err == nil {
@@ -839,9 +825,6 @@ func resolveEffectiveConfig(fileCfg *FileConfig, cli configCLIInputs, getenv fun
if cli.Set["k8s-shared-warm-target"] {
k8sSharedWarmTarget = cli.K8sSharedWarmTarget
}
-if cli.Set["k8s-shared-warm-workers"] {
-k8sSharedWarmWorkers = cli.K8sSharedWarmWorkers
-}
if cli.Set["query-log"] {
cfg.QueryLog.Enabled = cli.QueryLog
}
@@ -911,7 +894,6 @@ func resolveEffectiveConfig(fileCfg *FileConfig, cli configCLIInputs, getenv fun
K8sWorkerServiceAccount: k8sWorkerServiceAccount,
K8sMaxWorkers: k8sMaxWorkers,
K8sSharedWarmTarget: k8sSharedWarmTarget,
-K8sSharedWarmWorkers: k8sSharedWarmWorkers,
ConfigStoreConn: configStoreConn,
ConfigPollInterval: configPollInterval,
AdminToken: adminToken,
1 change: 0 additions & 1 deletion controlplane/control.go
@@ -80,7 +80,6 @@ type K8sConfig struct {
ServiceAccount string // ServiceAccount name for worker pods (default: "default")
MaxWorkers int // Global cap for the shared K8s worker pool (0 = auto-derived)
SharedWarmTarget int // Neutral shared warm-worker target for K8s multi-tenant mode (0 = disabled)
-SharedWarmWorkers bool // Enable reserve->activate->hot lifecycle on the shared warm pool
}

// ControlPlane manages the TCP listener and routes connections to Flight SQL workers.