
Sync andrelandgraf main into upstream main #63

Open
andrelandgraf wants to merge 24 commits into databricks-solutions:main from andrelandgraf:main

Conversation

@andrelandgraf
Contributor

Summary

  • Syncs the latest changes currently on andrelandgraf/caspers-kitchens main into databricks-solutions/caspers-kitchens main.
  • Includes support workflow updates, support console app additions, and related pipeline/job/stage updates present on the fork's main branch.

Test plan

  • Review changed files and commit history in GitHub PR UI
  • Validate Databricks bundle deploy for relevant targets in a staging workspace
  • Run smoke checks for support console and support agent flows

andrelandgraf and others added 24 commits February 18, 2026 10:36
Replace complaint/refund assets with support-request and support-console resources, update pipeline transformation, and refresh bundle configuration for the new app layout.

Co-authored-by: Cursor <cursoragent@cursor.com>
… latest Lakebase-backed agent workflow.

Co-authored-by: Cursor <cursoragent@cursor.com>
Persist a fallback regenerated report instead of returning 403 for service principal permission errors, and surface a clear warning in the UI.

Co-authored-by: Cursor <cursoragent@cursor.com>
…vior.

This aligns support feature materialization, eval scheduling, and Lakebase app integration with the deployed runtime state, while documenting the current online-table deprecation path and follow-up Synced Tables migration.
…ior.

This aligns streaming request generation and agent inference around a structured payload contract while keeping the demo endpoint name stable for non-versioned deployment.
This handles JobSettings serialization safely so workspace bootstrap can create simulator generator jobs without failing on SDK object type differences.
This avoids SDK type coercion issues in new workspaces by creating simulator jobs with a stable API payload path.
This switches job creation to the Jobs REST payload path so setup works consistently in new blank workspaces.
This uses Jobs REST payload creation to avoid SDK object serialization mismatches in blank workspace bootstraps.
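The Jobs REST payload approach described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the job name, notebook path, and cluster sizing are assumptions, and the field names follow the public Jobs REST API. The point is that a plain dict posted to `POST /api/2.2/jobs/create` is trivially JSON-serializable, unlike a tree of SDK `JobSettings` objects whose shapes can drift between SDK versions.

```python
import json

def build_generator_job_payload(name: str, notebook_path: str, node_type_id: str) -> dict:
    """Build a plain-dict Jobs API payload for POST /api/2.2/jobs/create.

    Using a dict instead of SDK JobSettings objects sidesteps object
    serialization mismatches when bootstrapping blank workspaces.
    Cluster sizing and Spark version here are illustrative assumptions.
    """
    return {
        "name": name,
        "tasks": [
            {
                "task_key": "generate",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    "spark_version": "15.4.x-scala2.12",
                    "node_type_id": node_type_id,
                    "num_workers": 1,
                },
            }
        ],
        "max_concurrent_runs": 1,
    }

payload = build_generator_job_payload(
    "simulator-generator", "/Workspace/sim/generate", "i3.xlarge"
)
body = json.dumps(payload)  # serializes cleanly, no SDK dataclass handling needed
```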
This ensures blank workspaces can create UC functions under <catalog>.ai instead of failing before agent model registration.
This bootstraps required support tables before function registration so missing-table errors do not block initial environment bring-up.
This makes blank-workspace bootstrap resilient by waiting for endpoint readiness when another run is already updating the same serving endpoint.
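The wait-for-readiness behavior above can be sketched as a generic polling loop. Everything here is hypothetical scaffolding: `get_state` stands in for whatever SDK call reports the serving endpoint's status, and the `"READY"`/`"UPDATING"` strings are illustrative states, not necessarily the SDK's exact enum values.

```python
import time

def wait_until_ready(get_state, timeout_s=600, poll_s=5, sleep=time.sleep):
    """Poll an endpoint's state until it reports READY.

    Waiting instead of failing lets a bootstrap run coexist with another
    run that is already updating the same serving endpoint. `get_state`
    is any callable returning a state string; `sleep` is injectable so
    the loop can be tested without real delays.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == "READY":
            return True
        sleep(poll_s)
    return False  # caller decides whether a timeout is fatal

# Usage with a stub endpoint that becomes ready on the third poll:
states = iter(["UPDATING", "UPDATING", "READY"])
became_ready = wait_until_ready(lambda: next(states), sleep=lambda _: None)
```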
This reduces blank-workspace provisioning stalls by allowing the endpoint to initialize with on-demand compute during initial deployment.
This allows the app bundle to deploy cleanly to any workspace selected by profile, including fresh blank environments.
Keep TLS enabled while allowing strict hostname verification to be opt-in, preventing app startup failures on regional Lakebase endpoint domains.
This aligns app connection settings with the endpoint created in the target workspace so app startup can authenticate and access support tables.
Support_Lakebase now grants both service principal and OAuth app client IDs, and includes public schema permissions required by AppKit persistent cache.
Support_Lakebase now rolls back safely on role creation errors and falls back to native role creation when databricks_auth cannot resolve an app identity.
Include service_principal_name in the grant identity set to cover runtime principals that authenticate with name-based identities.
This makes principal mismatches diagnosable in app logs so grants can target the exact identity used by app runtime.
Support plugin initializes support-owned tables at startup, so app principals need CREATE in addition to USAGE on the support schema.
App startup now skips idempotent table/index DDL when the app principal lacks ownership, preventing false-fail initialization in provisioned environments.
This removes static Lakebase host wiring by resolving PGHOST from LAKEBASE_ENDPOINT at runtime and prevents queued run buildup by only triggering the order-flow generator when no active run is already in progress.
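The runtime PGHOST resolution described above might look roughly like this. The `LAKEBASE_ENDPOINT` variable name comes from the commit message, but the accepted formats (bare hostname or full URL, optional port) are assumptions for illustration, not the PR's exact contract.

```python
import os
from urllib.parse import urlparse

def resolve_pghost(env=os.environ):
    """Derive PGHOST from LAKEBASE_ENDPOINT at runtime.

    Resolving the host from the environment replaces static Lakebase
    host wiring, so the same app bundle works across workspaces.
    Accepts either a bare hostname or a URL; a trailing :port is
    stripped (these formats are an illustrative assumption).
    """
    endpoint = env.get("LAKEBASE_ENDPOINT", "")
    if not endpoint:
        raise RuntimeError("LAKEBASE_ENDPOINT is not set")
    # urlparse only populates .netloc when a scheme is present.
    host = urlparse(endpoint).netloc or endpoint
    return host.split(":", 1)[0]
```

For example, `resolve_pghost({"LAKEBASE_ENDPOINT": "https://db.eu-west.example.com:5432"})` yields the bare hostname for use as PGHOST.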
Synced `support.*_sync` tables are created outside app-owned DDL, so default privileges can miss them; this re-grants SELECT during Support_Lakebase provisioning to keep clean-slate app deployments from failing to load support data.
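The re-grant step above could be sketched as a small statement generator. The schema name `support` matches the commit message, but the table list and grantee role are hypothetical; real provisioning would execute these statements against the Lakebase instance during Support_Lakebase setup.

```python
def regrant_select_statements(tables, grantee):
    """Emit GRANT SELECT statements for synced tables.

    Synced tables are created outside app-owned DDL, so Postgres
    default privileges can miss them; re-granting during provisioning
    keeps clean-slate app deployments able to read support data.
    Identifiers are double-quoted to survive mixed-case names.
    """
    def q(ident):
        return '"' + ident.replace('"', '""') + '"'
    return [
        f'GRANT SELECT ON TABLE {q("support")}.{q(t)} TO {q(grantee)};'
        for t in tables
    ]

stmts = regrant_select_statements(
    ["support_agent_reports_sync"], "support_console_app"
)
```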
nkarpov added a commit that referenced this pull request Mar 19, 2026
… scoring stream

Add new `support` target to databricks.yml with full DAG:
- Support request generator (deterministic synthetic data from delivered events)
- Support triage agent (DSPy ReAct with UC function tools, policy-based guardrails)
- Scoring stream with fake-it-till-up pattern and inference capping

Based on work from PR #63. Lakebase + App stages to follow.

Co-authored-by: Andre Landgraf <andre@landgraf.dev>
Co-authored-by: Isaac
nkarpov added a commit that referenced this pull request Mar 20, 2026
- Add support_lakebase stage: Lakebase instance, synced table for
  support_agent_reports, OLTP tables (operator_actions, support_replies,
  request_status, response_ratings), warehouse, app deployment
- Port supportconsolek React/TS app from PR #63
- Dynamic app.yaml generation with computed warehouse/endpoint values
- Add sync exclusions for supportconsolek build artifacts
- Wire Support_Lakebase_And_App into DAG (depends on agent stream)

Co-authored-by: Andre Landgraf <andre@landgraf.dev>
Co-authored-by: Isaac