Add cloud clickpipe commands for managing ClickPipes #130
Open
markdawson wants to merge 23 commits into
Force-pushed from 78cfcfb to 66520ab
The OpenAPI spec describes several ClickPipe request fields as required for some pipe types and not others, but the schemas have no `required` array so the optionality heuristic infers required for everyone. The live API rejects the resulting empty defaults — cdc mode rejects empty publicationName / replicationSlotName, and database pipes (Postgres/MySQL/BigQuery) reject the destination's table/managedTable/columns/tableDefinition. Flip these to Option / skip-when-empty so callers can omit them, update build_destination to emit a database-pipe-shaped destination when called with an empty table, and record the matching exemptions in spec_coverage_test::OPTIONALITY_EXEMPTIONS. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
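The skip-when-empty change to `build_destination` can be sketched std-only. This is a hypothetical shape, not the crate's actual code, and the JSON key names here (`database`, `table`) are assumptions based on the field names the commit mentions:

```rust
// Hypothetical std-only sketch of the build_destination idea: with a table
// name we emit the streaming-pipe destination; with an empty table we emit
// a database-pipe-shaped destination that omits table/columns entirely, so
// the live API never sees the empty defaults it rejects.
fn build_destination(database: &str, table: &str) -> String {
    if table.is_empty() {
        // Database pipes (Postgres/MySQL/BigQuery) manage tables server-side.
        format!(r#"{{"database":"{database}"}}"#)
    } else {
        format!(r#"{{"database":"{database}","table":"{table}"}}"#)
    }
}
```

In the real request structs the equivalent would be `Option` fields with a skip-when-`None` serialization rule, so callers can simply omit the keys.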
Provisions a Postgres service and a ClickHouse service in parallel, sets up a publication on the source, creates a ClickPipe with Postgres source and replicationMode=cdc, polls until the pipe is Running, then verifies both the initial snapshot rows and follow-up inserts replicate end-to-end. The cloud-api CleanupRegistry grows ClickPipe tracking so a hung pipe is torn down before its parent service. CI runs the test on the existing nightly cron and on PRs that touch ClickPipe library or CLI source. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
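The poll-until-Running step can be sketched as a small std-only helper. The function name and the injected state fetcher are hypothetical; the real test would fetch pipe state through the cloud-api HTTP client:

```rust
use std::{thread, time::{Duration, Instant}};

// Hypothetical polling helper: calls `get_state` until it reports "Running"
// or the deadline passes. The fetcher is injected as a closure so the sketch
// stays dependency-free; the E2E test would call the ClickPipes GET endpoint.
fn wait_until_running(
    mut get_state: impl FnMut() -> String,
    timeout: Duration,
    interval: Duration,
) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        let state = get_state();
        if state == "Running" {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("pipe never reached Running; last state: {state}"));
        }
        thread::sleep(interval);
    }
}
```

Returning the last observed state in the error keeps a hung-pipe failure diagnosable before the CleanupRegistry tears the pipe down.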
Apply the same underscore convention to the clickpipe_run_tags helpers introduced on this branch, plus the existing run_tags / postgres_run_tags helpers, so all suites tag and filter consistently. Service names keep dashes since they are DNS-style identifiers, not tags. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
8bd9994 to
e9715f9
Compare
Collaborator
@markdawson looking good; one thing I notice about the new tests: the cloud integration tests should directly invoke the HTTP client (see the existing tests) and not the CLI.
Resolves conflicts in:
- .gitignore: keep both .env entries
- crates/clickhousectl/Cargo.toml: combine base64 (from this branch) with bollard/crossterm/rand (from main's local postgres work)
- crates/clickhouse-cloud-api/tests/integration/support.rs: combine ClickPipes/tables cleanup with API key cleanup; cleanup() now takes ch_query
- crates/clickhouse-cloud-api/tests/spec_coverage_test.rs: union OPTIONALITY_EXEMPTIONS (ClickPipe entries + ApiKeyPostRequest roles/hashData)
- crates/clickhousectl/src/cloud/commands.rs: API key build path uses Option for hash_data/roles (matches main's spec drift fix)
- Cargo.lock: regenerated; bumps rpassword 7.5.0 -> 7.5.2 so macOS builds (7.5.0 unconditionally calls libc::__errno_location, Linux-only)
…-support

Stacks this branch on top of issue-149, which brings:
- rpassword 7.5.2 (fixes macOS build broken by 7.5.0 on main)
- Scaling-schedule API coverage + OpenAPI spec drift resolution
- `cloud postgres` Beta label anchored to spec metadata
- Removal of deprecated `cloud service client` command

Conflict resolutions:
- crates/clickhousectl/src/cloud/cli.rs: dropped stale `service client` write-command tests (command was removed on issue-149)
- crates/clickhousectl/src/cloud/commands.rs: dropped unused `ServiceEndpointProtocol` import (residual from removed `service client` command)
The five tunable integers (syncIntervalSeconds, pullBatchSize, initialLoadParallelism, snapshotNumRowsPerPartition, snapshotNumberOfParallelTables) all carry `minimum: 1` in the spec, but the schema has no `required` array — the optionality heuristic emitted them as bare i64, so `..Default::default()` shipped 0 and the API rejected with "Value must be >= 1". Confirmed via the EC2 + managed-PG CDC E2E tests that omitting the keys entirely is accepted; the API picks server-side defaults. Drop the CLI's hard-coded values too so behaviour matches. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
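The fix shape can be sketched std-only. Struct and function names here are hypothetical; in the real request struct this would be serde's `skip_serializing_if = "Option::is_none"` on `Option<i64>` fields:

```rust
// Hypothetical sketch of the fix: tunables become Option<i64>, and only Some
// values are serialized, so `..Default::default()` now omits the keys (the
// API picks server-side defaults) instead of sending 0, which violates the
// spec's `minimum: 1` and is rejected with "Value must be >= 1".
#[derive(Default)]
struct CdcSettings {
    sync_interval_seconds: Option<i64>,
    pull_batch_size: Option<i64>,
    initial_load_parallelism: Option<i64>,
}

// Stand-in for serde serialization: emits only explicitly set fields.
fn to_json(s: &CdcSettings) -> String {
    let mut parts = Vec::new();
    for (key, val) in [
        ("syncIntervalSeconds", s.sync_interval_seconds),
        ("pullBatchSize", s.pull_batch_size),
        ("initialLoadParallelism", s.initial_load_parallelism),
    ] {
        if let Some(v) = val {
            parts.push(format!(r#""{key}":{v}"#));
        }
    }
    format!("{{{}}}", parts.join(","))
}
```

With the defaults dropped from the CLI too, a bare create request serializes to `{}` for this section and the server chooses its own values.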
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`clickpipe create object-storage --service-account-key` previously required users to manually base64-encode the GCP service-account JSON file before passing it. BigQuery already accepted a file path and did the encoding for them — this brings GCS in line so both GCP sources work the same way.
- Renames the flag to `--service-account-file <PATH>` on object-storage
- New `read_gcp_service_account_file` helper, shared by both handlers
- Wiremock request-shape test pins the new contract: file contents must be base64-encoded into `source.objectStorage.serviceAccountKey`

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
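The helper's shape can be sketched as read-then-encode. This is not the crate's actual code: the real implementation would use the `base64` crate already in Cargo.toml, and a tiny std-only encoder is inlined here only so the sketch stays dependency-free:

```rust
use std::fs;

// Minimal RFC 4648 base64 encoder, inlined so this sketch needs no crates.
fn base64_encode(data: &[u8]) -> String {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for (i, &x) in idx.iter().enumerate() {
            if i <= chunk.len() {
                out.push(ALPHABET[x as usize] as char);
            } else {
                out.push('='); // pad short final chunk
            }
        }
    }
    out
}

// Hypothetical sketch of the read_gcp_service_account_file idea: read the
// service-account JSON from disk and hand back its base64-encoded bytes,
// ready to drop into source.objectStorage.serviceAccountKey.
fn read_gcp_service_account_file(path: &str) -> std::io::Result<String> {
    Ok(base64_encode(&fs::read(path)?))
}
```

Sharing one helper between the object-storage and BigQuery handlers keeps the two GCP sources behaviourally identical, which is the point of the change.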
`tests/integration/` previously mixed generic infra (TestContext, CleanupRegistry, ClickHouse provisioning) with ClickPipes-specific helpers (AWS, EC2, Kinesis, Redpanda). With the ClickPipes E2E suite grown to nine binaries, the shared module had become large and the boundary fuzzy.
- `tests/common/support.rs` — generic infra, used by every integration binary (cloud-service CRUD, Postgres-service CRUD, all ClickPipes).
- `tests/clickpipes/support.rs` — ClickPipes-specific helpers; `pub use`-re-exports `crate::common::support::*` so callers still get both surfaces from `use crate::support::*;`.
- Per-source binaries move from `tests/integration_clickpipe_<src>_test.rs` to `tests/clickpipes/<src>_test.rs` and are now named `clickpipe_<src>_test`. Cargo doesn't auto-discover `.rs` files in subdirectories of `tests/`, so each is declared as an explicit `[[test]]` entry in Cargo.toml.
- CI workflow, cloud-api README, and CLAUDE.md updated to reflect the new binary names and layout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
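One such explicit target entry might look like the following, using the Kafka source's stated naming pattern `clickpipe_<src>_test` (the exact Cargo.toml contents are an assumption, not taken from the diff):

```
[[test]]
name = "clickpipe_kafka_test"
path = "tests/clickpipes/kafka_test.rs"
```

One `[[test]]` table per binary restores what Cargo's top-level auto-discovery would otherwise have provided.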
Summary
Adds a `cloud clickpipe` command group for managing ClickPipes in ClickHouse Cloud, built on top of the `clickhouse-cloud-api` library client.

Commands
- `list` — list ClickPipes on a service
- `get` — get ClickPipe details (state, scaling, source, destination)
- `create` — create a ClickPipe; source type is a nested subcommand
- `delete`, `start`, `stop`, `resync` — lifecycle operations (resync is CDC-only)
- `scale` — update scaling configuration
- `settings get|update` — manage per-pipe settings

Create source types

`clickpipe create` uses nested subcommands, one per source:
- `object-storage` — S3, GCS, Azure Blob
- `kafka` — Kafka and Kafka-compatible, with TLS, SCRAM, and MSK IAM auth
- `kinesis` — Amazon Kinesis
- `postgres`, `mysql`, `mongodb` — CDC sources
- `bigquery` — BigQuery source

Other
- `--json` output supported for every subcommand