
Release 1.0 #39

Merged

alvagante merged 51 commits into main from uno on Apr 10, 2026

Conversation

@alvagante
Member

No description provided.

Copilot AI review requested due to automatic review settings April 4, 2026 06:48
Contributor

Copilot AI left a comment


Pull request overview

This PR prepares the codebase for the v1.0.0 release by expanding integrations (AWS/Proxmox), improving multi-source inventory linking, and refactoring the database layer to support both SQLite and PostgreSQL with a migration-first schema approach.

Changes:

  • Added AWS integration types and health-check tests; updated inventory ID prefixing for Bolt/Ansible and improved node linking with per-source sourceData.
  • Refactored database initialization to use a DatabaseAdapter abstraction (SQLite + Postgres) and migration-first schema management (removing legacy schema files).
  • Updated docs, Dockerfiles, environment templates, and release metadata to reflect the v1.0.0 configuration model (.env as source of truth).
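The DatabaseAdapter abstraction mentioned above is not spelled out in this summary; a minimal sketch of what such a contract might look like, with a toy in-memory implementation (method names and the `InMemoryAdapter` are assumptions for illustration, not the PR's actual interface):

```typescript
// Hypothetical sketch of a DatabaseAdapter contract; the real interface in
// backend/src/database/DatabaseAdapter.ts may differ in names and signatures.
interface DatabaseAdapter {
  readonly dialect: "sqlite" | "postgres";
  initialize(): Promise<void>;
  query<T>(sql: string, params?: unknown[]): Promise<T[]>;
  execute(sql: string, params?: unknown[]): Promise<void>;
  close(): Promise<void>;
}

// Toy implementation that only records the SQL it receives.
class InMemoryAdapter implements DatabaseAdapter {
  readonly dialect = "sqlite" as const;
  readonly executed: string[] = [];
  async initialize(): Promise<void> {}
  async query<T>(sql: string): Promise<T[]> {
    this.executed.push(sql);
    return []; // a real adapter would return rows here
  }
  async execute(sql: string): Promise<void> {
    this.executed.push(sql);
  }
  async close(): Promise<void> {}
}
```

With this shape, repositories and the migration runner can depend on the interface alone, and the factory decides at startup which concrete adapter to hand out.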

Reviewed changes

Copilot reviewed 89 out of 313 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| backend/src/integrations/bolt/BoltPlugin.ts | Clarifies node ID prefixing behavior in groups. |
| backend/src/integrations/aws/types.ts | Adds AWS integration type definitions + auth error type. |
| backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts | Adds unit tests for AWS plugin health check behavior. |
| backend/src/integrations/ansible/AnsibleService.ts | Prefixes Ansible group host node IDs with ansible: consistently. |
| backend/src/integrations/NodeLinkingService.ts | Extends linked nodes with per-source sourceData; adjusts identifier extraction. |
| backend/src/integrations/IntegrationManager.ts | Switches aggregated inventory nodes to LinkedNode[]; adds provisioning capability aggregation; adds per-source inventory timeout. |
| backend/src/database/rbac-schema.sql | Removes legacy RBAC schema file (migration-first). |
| backend/src/database/migrations/010_drop_integration_configs.sql | Adds migration to drop integration_configs table. |
| backend/src/database/migrations/009_integration_configs.sql | Adds historical migration that creates integration_configs. |
| backend/src/database/migrations/008_journal_entries.sql | Adds journal_entries table migration. |
| backend/src/database/migrations/007_permissions_and_provisioner_role.sql | Seeds new permissions and Provisioner role. |
| backend/src/database/migrations/002_seed_rbac_data.sql | Makes seeding timestamps consistent via CURRENT_TIMESTAMP. |
| backend/src/database/migrations/000_initial_schema.sql | Adds migration header documentation. |
| backend/src/database/migrations.sql | Removes legacy monolithic migration file. |
| backend/src/database/errors.ts | Adds typed DB error classes. |
| backend/src/database/audit-schema.sql | Removes legacy audit schema file (migration-first). |
| backend/src/database/SQLiteAdapter.ts | Introduces SQLite adapter implementing DatabaseAdapter. |
| backend/src/database/PostgresAdapter.ts | Introduces Postgres adapter implementing DatabaseAdapter. |
| backend/src/database/MigrationRunner.ts | Refactors migration runner to use DatabaseAdapter + dialect-aware migration selection. |
| backend/src/database/ExecutionRepository.ts | Refactors repository to use dialect placeholders via DatabaseAdapter. |
| backend/src/database/DatabaseService.ts | Refactors DB initialization to adapter factory + migrations only. |
| backend/src/database/DatabaseAdapter.ts | Adds unified DB adapter interface. |
| backend/src/database/AdapterFactory.ts | Adds adapter factory driven by DB_TYPE / DATABASE_URL. |
| backend/src/config/schema.ts | Adds Proxmox + AWS config schemas and provisioning safety config. |
| backend/src/config/ConfigService.ts | Parses Proxmox/AWS/provisioning env vars; adds getters. |
| backend/package.json | Bumps backend version to 1.0.0; updates build to copy migrations; adds pg + AWS SDK deps. |
| backend/.env.example | Updates v1.0.0 env template (integrations, provisioning safety, defaults). |
| README.md | Updates positioning/features/docs links/version history for v1.0.0. |
| Dockerfile.ubuntu | Adds prod-deps stage; copies migrations; uses shared entrypoint; updates env defaults. |
| Dockerfile.alpine | Adds prod-deps stage; copies migrations; uses shared entrypoint; updates env defaults. |
| Dockerfile | Updates version label; adjusts Bolt/OpenBolt install; copies migrations; adds env defaults. |
| CLAUDE.md | Updates database documentation to migration-first approach. |
| CHANGELOG.md | Adds v1.0.0 release notes including breaking changes. |
| .pre-commit-config.yaml | Tightens duplicate/backup filename detection regex. |
| .kirograph/config.json | Adds KiroGraph indexing config. |
| .kiro/todo/proxmox-restart-required.md | Documents cached build issue requiring restart (internal note). |
| .kiro/todo/expert-mode-prototype-pollution.md | Documents prototype pollution risk (internal note). |
| .kiro/todo/database-schema-cleanup-task.md | Documents DB cleanup task completion (internal note). |
| .kiro/todo/REMAINING_TODOS_REPORT.md | Adds prioritized TODO report (internal note). |
| .kiro/steering/tech.md | Adds tech stack steering doc. |
| .kiro/steering/structure.md | Adds repository structure steering doc. |
| .kiro/steering/security-best-practices.md | Adds allowlist-secret guidance for detect-secrets. |
| .kiro/steering/product.md | Adds product summary steering doc. |
| .kiro/steering/kirograph.md | Adds KiroGraph usage steering doc. |
| .kiro/steering/git-best-practices.md | Expands pre-commit hook documentation. |
| .kiro/specs/v1-release-prep/requirements.md | Adds release-prep requirements spec. |
| .kiro/specs/v1-release-prep/design.md | Adds release-prep design spec. |
| .kiro/specs/v1-release-prep/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/pabawi-release-1-0-0/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/missing-lifecycle-actions/bugfix.md | Adds lifecycle actions bugfix spec. |
| .kiro/specs/missing-lifecycle-actions/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/azure-support/requirements.md | Adds Azure integration requirements spec (future work). |
| .kiro/specs/azure-support/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/puppet-pabawi-refactoring/tasks.md | Adds Puppet module refactoring plan. |
| .kiro/specs/090/puppet-pabawi-refactoring/requirements.md | Adds Puppet module refactoring requirements. |
| .kiro/specs/090/puppet-pabawi-refactoring/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/proxmox-integration/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/proxmox-frontend-ui/requirements.md | Adds Proxmox frontend UI requirements spec. |
| .kiro/specs/090/proxmox-frontend-ui/.config.kiro | Adds Kiro spec config. |
| .kiro/settings/mcp.json | Adds MCP server config for KiroGraph tools. |
| .kiro/hooks/kirograph-sync-on-save.json | Adds KiroGraph sync hook (save). |
| .kiro/hooks/kirograph-sync-on-delete.json | Adds KiroGraph sync hook (delete). |
| .kiro/hooks/kirograph-sync-on-create.json | Adds KiroGraph sync hook (create). |
| .kiro/done/proxmox-ssl-fix.md | Marks Proxmox SSL fix as done (internal note). |
| .kiro/done/provisioning-endpoint-fix.md | Marks provisioning endpoint fix as done (internal note). |
| .kiro/done/node-linking-redesign.md | Marks node linking redesign as done (internal note). |
| .kiro/done/docker-missing-schema-files.md | Marks Docker schema copy fix as done (internal note). |
| .kiro/done/database-schema-cleanup-task.md | Marks DB schema cleanup as done (internal note). |
| .kiro/database-cleanup-prompt.md | Adds DB cleanup prompt (internal note). |
| .kiro/analysis/ManageTab_Lifecycle_Flow.md | Adds ManageTab lifecycle flow analysis (internal note). |
| .github/copilot-instructions.md | Updates DB schema guidance to migration-first. |
| .env.docker | Updates Docker env template for v1.0.0 configuration model. |

Copilot AI review requested due to automatic review settings April 4, 2026 07:14
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 89 out of 313 changed files in this pull request and generated 6 comments.

Comment on lines 157 to 175
```typescript
// Split into statements (handle multi-statement migrations)
const statements = sql
  .split(";")
  .map((s) => s.trim())
  .filter((s) => {
    if (s.length === 0) return false;
    // Remove single-line comments and check if anything remains
    const withoutComments = s
      .split("\n")
      .map((line) => line.replace(/--.*$/, "").trim())
      .filter((line) => line.length > 0)
      .join("\n");
    return withoutComments.length > 0;
  });

// Execute each statement individually via the adapter
for (const statement of statements) {
  await this.db.execute(statement);
}
```

Copilot AI Apr 4, 2026


The migration runner now claims PostgreSQL support, but splitting SQL by ; is not safe for many valid Postgres migrations (e.g., functions/procedures with $$...;...$$, trigger bodies, complex statements containing semicolons). This can cause migrations to fail or be applied partially. A concrete improvement is to execute the entire migration file as a single execute(sql) when dialect === "postgres", while keeping per-statement execution for SQLite (or alternatively, add a more robust SQL statement parser/marker-based splitter).

Comment on lines 23 to 27
```typescript
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}
```

Copilot AI Apr 4, 2026


This directory creation is now duplicated with SQLiteAdapter.initialize() (which already creates the directory when the path isn’t :memory:). To keep responsibilities clear (and avoid doing filesystem work when DB_TYPE=postgres), consider removing this block from DatabaseService and letting the SQLite adapter handle it internally, or conditionally running it only when the selected adapter dialect is SQLite.

Comment on lines +247 to +260
```typescript
getAllProvisioningCapabilities(): {
  source: string;
  capabilities: {
    name: string;
    description: string;
    operation: "create" | "destroy";
    parameters: {
      name: string;
      type: string;
      required: boolean;
      default?: unknown;
    }[];
  }[];
}[] {
```

Copilot AI Apr 4, 2026


getAllProvisioningCapabilities() introduces a large inline return type and relies on runtime duck-typing + no-unsafe-* suppressions. This makes the API harder to maintain and refactor safely. A more robust approach is to define a shared exported type (e.g., ProvisioningCapability) and a small interface like ProvisioningCapableExecutionTool { listProvisioningCapabilities(): ProvisioningCapability[] }, then narrow via a typed type-guard—removing the need for ESLint suppressions and making the contract explicit for plugin implementers.

Comment on lines +247 to +289
```typescript
getAllProvisioningCapabilities(): {
  source: string;
  capabilities: {
    name: string;
    description: string;
    operation: "create" | "destroy";
    parameters: {
      name: string;
      type: string;
      required: boolean;
      default?: unknown;
    }[];
  }[];
}[] {
  const result: {
    source: string;
    capabilities: {
      name: string;
      description: string;
      operation: "create" | "destroy";
      parameters: {
        name: string;
        type: string;
        required: boolean;
        default?: unknown;
      }[];
    }[];
  }[] = [];

  for (const [name, tool] of this.executionTools) {
    // Check if the plugin has listProvisioningCapabilities method
    if (
      "listProvisioningCapabilities" in tool &&
      typeof tool.listProvisioningCapabilities === "function"
    ) {
      try {
        // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment, @typescript-eslint/no-unsafe-call
        const capabilities = tool.listProvisioningCapabilities();
        // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
        if (capabilities && capabilities.length > 0) {
          result.push({
            source: name,
            // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
```

Copilot AI Apr 4, 2026


getAllProvisioningCapabilities() introduces a large inline return type and relies on runtime duck-typing + no-unsafe-* suppressions. This makes the API harder to maintain and refactor safely. A more robust approach is to define a shared exported type (e.g., ProvisioningCapability) and a small interface like ProvisioningCapableExecutionTool { listProvisioningCapabilities(): ProvisioningCapability[] }, then narrow via a typed type-guard—removing the need for ESLint suppressions and making the contract explicit for plugin implementers.

Suggested change
```typescript
export interface ProvisioningCapabilityParameter {
  name: string;
  type: string;
  required: boolean;
  default?: unknown;
}

export interface ProvisioningCapability {
  name: string;
  description: string;
  operation: "create" | "destroy";
  parameters: ProvisioningCapabilityParameter[];
}

export interface ProvisioningCapabilitySource {
  source: string;
  capabilities: ProvisioningCapability[];
}

export interface ProvisioningCapableExecutionTool {
  listProvisioningCapabilities(): ProvisioningCapability[];
}

private isProvisioningCapableExecutionTool(
  tool: ExecutionToolPlugin
): tool is ExecutionToolPlugin & ProvisioningCapableExecutionTool {
  return (
    "listProvisioningCapabilities" in tool &&
    typeof tool.listProvisioningCapabilities === "function"
  );
}

getAllProvisioningCapabilities(): ProvisioningCapabilitySource[] {
  const result: ProvisioningCapabilitySource[] = [];
  for (const [name, tool] of this.executionTools) {
    if (this.isProvisioningCapableExecutionTool(tool)) {
      try {
        const capabilities = tool.listProvisioningCapabilities();
        if (capabilities.length > 0) {
          result.push({
            source: name,
```

Comment on lines +180 to +182
```typescript
// Set source (singular) to the primary source for backward compatibility
// This ensures code that reads node.source still works correctly
linkedNode.source = linkedNode.sources[0];
```

Copilot AI Apr 4, 2026


Setting linkedNode.source to linkedNode.sources[0] can be order-dependent (based on discovery/iteration order), which may result in inconsistent “primary source” selection across runs and potentially incorrect behavior for code paths that still rely on node.source. To make this deterministic, pick the primary source using a consistent rule (e.g., highest configured plugin priority, or a stable predefined ordering), and then set linkedNode.source from that chosen source.
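A deterministic selection rule might look like this; the priority list is purely illustrative, not Pabawi's actual ordering:

```typescript
// Illustrative only: pick a primary source by a fixed priority list, falling
// back to alphabetical order for unknown sources so the choice is stable
// regardless of discovery order.
const SOURCE_PRIORITY = ["puppetdb", "bolt", "ansible", "proxmox", "aws"]; // hypothetical ordering

function pickPrimarySource(sources: string[]): string {
  const rank = (s: string): number => {
    const i = SOURCE_PRIORITY.indexOf(s);
    return i === -1 ? SOURCE_PRIORITY.length : i;
  };
  return [...sources].sort(
    (a, b) => rank(a) - rank(b) || a.localeCompare(b)
  )[0];
}
```

With such a helper, setting the singular source from the chosen element yields the same answer on every run, whatever order the plugins reported in.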

Copilot AI review requested due to automatic review settings April 4, 2026 07:23
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 82 out of 303 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (1)

backend/src/integrations/IntegrationManager.ts:1

  • Using Promise.race for timeouts here can create unhandled rejections: if timeoutPromise wins, the Promise.all([source.getInventory(), ...]) continues running in the background, and if getInventory() later rejects it may surface as an unhandled rejection (and still consume resources). Consider wrapping each source call in a timeout helper that attaches a .catch(...) to the losing branch, or use AbortController-based cancellation (if supported by the integrations) so timed-out work is actually stopped/handled.
/**

```typescript
// Get hosts (direct members) and prefix with source
const hosts = Array.isArray(group.hosts) ? group.hosts.map((h: string) => `ansible:${h}`) : [];
```

Copilot AI Apr 4, 2026


This maps every group.hosts entry into a string template without validating the runtime type. If the Ansible inventory contains non-string host entries (or mixed types), this will produce IDs like ansible:[object Object], breaking node lookups/linking. Filter to typeof h === "string" (and consider trimming/ignoring empty strings) before prefixing.

Suggested change
```typescript
const hosts = Array.isArray(group.hosts)
  ? group.hosts
      .filter((h: unknown): h is string => typeof h === "string")
      .map((h) => h.trim())
      .filter((h) => h.length > 0)
      .map((h) => `ansible:${h}`)
  : [];
```

```typescript
sources: ["ansible"],
linked: false,
nodes: hosts,
```

Copilot AI Apr 4, 2026


This maps every group.hosts entry into a string template without validating the runtime type. If the Ansible inventory contains non-string host entries (or mixed types), this will produce IDs like ansible:[object Object], breaking node lookups/linking. Filter to typeof h === "string" (and consider trimming/ignoring empty strings) before prefixing.

Comment on lines +445 to +447
```typescript
// Fetch nodes and groups in parallel with per-source timeout
// Prevents a single slow source from blocking the entire inventory
const SOURCE_TIMEOUT_MS = 15_000;
```

Copilot AI Apr 4, 2026


Using Promise.race for timeouts here can create unhandled rejections: if timeoutPromise wins, the Promise.all([source.getInventory(), ...]) continues running in the background, and if getInventory() later rejects it may surface as an unhandled rejection (and still consume resources). Consider wrapping each source call in a timeout helper that attaches a .catch(...) to the losing branch, or use AbortController-based cancellation (if supported by the integrations) so timed-out work is actually stopped/handled.
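One possible shape for such a timeout helper (names are hypothetical; a real version might also wire in AbortController when the integration supports cancellation):

```typescript
// Sketch of a timeout wrapper that attaches a no-op catch to the racing work,
// so a rejection that arrives after the timeout wins cannot surface as an
// unhandled rejection. Names and error message are illustrative.
async function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  label: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
  });
  // Handle the losing branch: if "work" rejects after the timeout has already
  // won, this catch swallows it instead of letting it go unhandled.
  work.catch(() => {});
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer); // avoid keeping the event loop alive on success
  }
}
```

Note this only stops waiting; the underlying getInventory() call keeps running until it settles, which is why AbortController-based cancellation is the stronger fix where available.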

Comment on lines +154 to +160
```typescript
  await this.execute("COMMIT");
  this._inTransaction = false;
}

async rollback(): Promise<void> {
  await this.execute("ROLLBACK");
  this._inTransaction = false;
```

Copilot AI Apr 4, 2026


If COMMIT/ROLLBACK fails (e.g., connection issue), _inTransaction will remain true, leaving the adapter in an inconsistent state and potentially blocking future transactions. Set _inTransaction = false in a finally block (and consider surfacing a structured DB error type for these cases, similar to query/connection failures).

Suggested change
```typescript
  try {
    await this.execute("COMMIT");
  } finally {
    this._inTransaction = false;
  }
}

async rollback(): Promise<void> {
  try {
    await this.execute("ROLLBACK");
  } finally {
    this._inTransaction = false;
  }
```

Comment on lines +23 to +31
```typescript
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}

// Create adapter via factory
this.adapter = await createDatabaseAdapter({ databasePath: this.databasePath });
await this.adapter.initialize();
```

Copilot AI Apr 4, 2026


Directory creation is now duplicated (SQLiteAdapter.initialize already creates the parent dir when appropriate). More importantly, when running with DB_TYPE=postgres, DATABASE_PATH may not represent a filesystem path, and dirname(...)/mkdirSync(...) can behave unexpectedly or fail. Prefer moving the directory creation responsibility fully into SQLiteAdapter.initialize() and only doing it when the selected dialect is SQLite.
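A dialect-gated version of that block might look like this (the function name and dialect parameter are assumptions for illustration, not the service's actual code):

```typescript
import { dirname } from "node:path";
import { existsSync, mkdirSync } from "node:fs";

// Illustrative sketch: only touch the filesystem when the selected adapter is
// SQLite and the path is a real file (not ":memory:"). For Postgres, the
// configured value is a connection string, not a path, so nothing is created.
function ensureSqliteDirectory(
  dialect: "sqlite" | "postgres",
  databasePath: string
): void {
  if (dialect !== "sqlite" || databasePath === ":memory:") return;
  const dbDir = dirname(databasePath);
  if (!existsSync(dbDir)) {
    mkdirSync(dbDir, { recursive: true });
  }
}
```

Moving this into SQLiteAdapter.initialize() entirely, as the comment suggests, keeps DatabaseService free of dialect-specific filesystem logic.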

alvagante and others added 24 commits April 4, 2026 19:36
…a file

- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by avoiding need to update Dockerfile when new database files are added
…tern

- Add new Proxmox integration with ProxmoxClient, ProxmoxService, and ProxmoxIntegration classes
- Implement Proxmox API client with authentication and resource management capabilities
- Add comprehensive test coverage for Proxmox integration and service layer
- Update IntegrationManager to register and manage Proxmox integration
- Add dedicated Proxmox routes handler for API endpoints
- Update integration types to include Proxmox configuration schema
- Refactor ConfigService and schema to support Proxmox settings
- Update server configuration to include Proxmox routes
- Add Kiro specification files for puppet-pabawi refactoring workflow
- Update vitest configuration for improved test execution
- Improves infrastructure flexibility by adding virtualization platform support alongside existing integrations
…ycle management

- Add ProvisionPage with dynamic form generation for VM and LXC creation
- Add ManageTab component for node lifecycle actions (start, stop, reboot, destroy)
- Add ProxmoxProvisionForm and ProxmoxSetupGuide components for integration setup
- Add formGenerator utility for dynamic form rendering based on capability metadata
- Add permissions system for RBAC-aware UI element visibility
- Add comprehensive validation and error handling for provisioning operations
- Add test utilities and generators for provisioning-related tests
- Add documentation for Proxmox setup, provisioning, permissions, and management workflows
- Add Kiro specification files for Proxmox frontend UI and integration features
- Update Navigation component to include new Provision page route
- Update IntegrationSetupPage to support Proxmox configuration
- Update API client with provisioning endpoints and type definitions
- Update package.json with required dependencies
- Move 7 completed task documents from .kiro/todo to .kiro/done directory
- Add comprehensive REMAINING_TODOS_REPORT.md with prioritized task breakdown
- Include test failure analysis, RBAC issues, and environment configuration items
- Add SQLite test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update frontend logger with minor improvements
- Consolidate task tracking and provide clear roadmap for remaining work
- Add getNodes() method to retrieve PVE cluster nodes with status and resource info
- Add getNextVMID() method to fetch next available VM ID from cluster
- Add getISOImages() method to list ISO images available on node storage
- Add getTemplates() method to list OS templates available on node storage
- Implement caching for node and resource queries (60-120 second TTL)
- Add corresponding integration layer methods to expose Proxmox service capabilities
- Update frontend navigation and routing to support new provisioning workflows
- Enhance ProxmoxProvisionForm with node selection and resource discovery UI
- Update API client and type definitions for provisioning operations
- Improve error handling and logging across Proxmox integration layer
- Update foundFilter initial state from 'all' to 'found' for better UX
- Aligns with filterMode default behavior to show relevant data by default
- Reduces noise by filtering out not-found entries on initial load
…ved API handling

- Fix ProxmoxClient to use form-urlencoded encoding for POST/PUT/DELETE requests instead of JSON, matching Proxmox API requirements
- Add detailed error messages in API responses by including response body text for better diagnostics
- Add getStorages() method to ProxmoxService and ProxmoxIntegration to query available storage devices on nodes with optional content type filtering
- Add getNetworkBridges() method to ProxmoxService and ProxmoxIntegration to query network interfaces on nodes with bridge type filtering
- Implement caching for both storage and network queries with 120-second TTL to reduce API calls
- Update ProxmoxProvisionForm frontend component to use new storage and network discovery endpoints
- Extend provisioning types to support storage and network configuration options
- Update API client to expose new storage and network discovery endpoints
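The caching pattern described above (60–120 second TTLs for node, storage, and network queries) can be as simple as this generic sketch with an injectable clock; it is not the PR's actual implementation:

```typescript
// Generic TTL cache sketch; the clock is injectable so expiry is testable
// without real waiting. Eviction is lazy (on read).
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = () => Date.now()
  ) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // expired, drop it
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

A 120-second cache for storage queries would then be constructed as new TtlCache(120_000), keyed by something like node + content-type filter.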
… and code quality improvements

Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- DatabaseAdapter interface with query, execute, transaction support
- SQLiteAdapter: full implementation with WAL mode, foreign keys
- PostgresAdapter: full implementation with pg pool, transaction client
- AdapterFactory: creates correct adapter based on DB_TYPE env var
- Error types: DatabaseQueryError, DatabaseConnectionError
- Tests for all three components (39 tests passing)
Backend:
- AWS plugin with EC2 inventory, provisioning, lifecycle, health check
- Journal service with timeline aggregation and note support
- Integration config service with AES-256-GCM encryption and merge
- Proxmox VM/LXC compute type routing and separate create methods
- API routes for journal, integration config, and AWS endpoints
- ConfigService and schema updated for AWS env vars
- Database migrations for journal_entries and integration_configs

Frontend:
- AWS provisioning form and setup guide
- Proxmox VM and LXC provision forms (split from single form)
- Journal timeline component with note form
- Integration config management page
- RBAC UI updated for new permission types
- Navigation and routing updates

Fixes:
- Markdown table formatting in docs/integrations/aws.md
- Allowlisted example AWS keys in AWSSetupGuide.svelte
- Updated .secrets.baseline
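The AES-256-GCM encryption mentioned for the integration config service typically follows Node's crypto API as below; this is a generic round-trip sketch, and the key management and payload layout here are assumptions, not the service's actual scheme:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Generic AES-256-GCM round-trip using node:crypto. Packs iv (12 bytes) +
// auth tag (16 bytes) + ciphertext into one base64 string for storage.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the GCM recommendation
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decrypt(payload: string, key: Buffer): string {
  const raw = Buffer.from(payload, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ciphertext = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption fails if the payload was tampered with
  return Buffer.concat([
    decipher.update(ciphertext),
    decipher.final(),
  ]).toString("utf8");
}
```

The GCM auth tag is what makes this authenticated encryption: a modified ciphertext causes decipher.final() to throw rather than return garbage.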
…sment

- Add Azure support design and requirements specifications
- Add ManageTab lifecycle flow analysis documenting end-to-end error tracing
- Add missing lifecycle actions bugfix documentation and Kiro config
- Add workspace analysis summary and product/tech steering documents
- Update ProxmoxClient with improved integration handling
- Update inventory routes with enhanced node lifecycle actions support
- Update ManageTab component with better state management
- Update API client with refined lifecycle actions fetching
- Update NodeDetailPage with improved node detail rendering
- Add security assessment documentation for v0.10.0
- Add ALLOW_DESTRUCTIVE_PROVISIONING environment variable to control whether destructive provisioning actions (destroy VM/LXC, terminate EC2) are allowed
- Add ProvisioningConfig schema and provisioning safety configuration to ConfigService
- Add isDestructiveProvisioningAllowed() method to ConfigService for checking if destructive actions are enabled
- Add /api/config/provisioning endpoint to expose provisioning safety settings to frontend
- Add allowDestructiveActions guard to AWS EC2 terminate endpoint, returning 403 DESTRUCTIVE_ACTION_DISABLED when disabled
- Add allowDestructiveActions guard to Proxmox destroy endpoints, returning 403 when disabled
- Pass provisioning safety options through integration routers (AWS, Proxmox) from main integrations router
- Update ManageTab.svelte frontend to respect provisioning safety configuration
- Update configuration documentation and provisioning guide with safety settings
- Update setup.sh to include provisioning safety configuration
- Defaults to false (destructive actions disabled) for safer out-of-box experience
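The 403 DESTRUCTIVE_ACTION_DISABLED guard described above can be sketched as framework-agnostic middleware; the types and error shape here are assumptions, not the routers' actual code:

```typescript
// Minimal sketch of a guard that blocks destructive provisioning actions when
// ALLOW_DESTRUCTIVE_PROVISIONING is off. The Res shape mimics Express-style
// response objects but is defined locally to keep the sketch self-contained.
type Res = {
  status(code: number): Res;
  json(body: unknown): void;
};

function destructiveActionGuard(allowDestructive: boolean) {
  return (_req: unknown, res: Res, next: () => void): void => {
    if (!allowDestructive) {
      res.status(403).json({ error: "DESTRUCTIVE_ACTION_DISABLED" });
      return; // never reach the destroy/terminate handler
    }
    next();
  };
}
```

Mounting one guard on every destroy/terminate route keeps the safe-by-default behavior in one place instead of repeating the check per handler.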
- Add AWS_REGIONS configuration to support querying multiple regions via JSON array or comma-separated string
- Update ConfigService to parse and validate multi-region configuration
- Add regions field to AWSConfigSchema for type safety
- Improve AWS credential validation to support default credential chain (env vars, ~/.aws/credentials, instance profile)
- Remove requirement for explicit credentials or profile when using default chain
- Update AWSPlugin health check to include regions in diagnostic output
- Fix NodeLinkingService to skip AWS URIs when extracting hostname identifiers
- Add source field to linked nodes for backward compatibility with code reading node.source
- Update .env.example with comprehensive AWS and Proxmox configuration documentation
- Clean up test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update integration and performance tests to reflect configuration changes
- Remove source-specific prefixes (ansible:, bolt:) from node identifiers in AnsibleService and BoltPlugin
- Replace node.id with node.name in AWS region, VPC, and tag grouping methods for consistency
- Update Proxmox service to use node.name instead of node.id across all grouping operations
- Add dual-mode AMI input to AWSProvisionForm with search and direct ID entry options
- Implement switchAMIMode and onDirectAMIInput functions for flexible AMI selection
- Remove redundant describeAllInstances method from AWSService
- Ensures consistent node naming convention across all integration services and improves user experience with flexible AMI input methods
…lution

- Add comprehensive AWS instance metadata collection including CPU options, network interfaces, and hardware details
- Implement node ID resolution in AWSPlugin.getNodeFacts() to support both aws:region:instanceId format and node name/id lookup
- Expand command whitelist with system inspection commands and change matching mode from exact to prefix
- Add JournalCollectors service for centralized journal event collection
- Update IntegrationManager.getLinkedInventory() to support cache control parameter
- Remove terminate_instance action from AWS provisioning operations for safety
- Fix AWS availability zone regex pattern to correctly extract region identifier
- Enhance network interface data collection with DNS names and detailed NIC information
- Add instance name tag extraction and platform metadata to AWS facts
- Update frontend components to reflect improved inventory and journal data handling
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Alessandro Franceschi <al@example42.com>
alvagante and others added 9 commits April 4, 2026 19:43
- 8.1 Verify all IntegrationConfigService references removed from source code
- 8.1 Clean stale entries from .secrets.baseline
- 8.2 Audit dependencies — all production deps actively used
- 9.1 Update package.json (root, backend, frontend) to 1.0.0
- 9.2 Update product.md steering, health endpoint, verify docker-compose
- 10.1 Fix trailing whitespace, detect-secrets false positives, ESLint warnings
- All pre-commit hooks pass (except hadolint-docker which requires Docker Desktop)
- 11.1 Update Dockerfile version labels and descriptions
- 11.2 Align integration ENV defaults across all Dockerfiles
- 11.3 Fix migration copy in alpine/ubuntu Dockerfiles
- 11.4 Add backend-deps stage for production deps in alpine/ubuntu
- 11.5 Rewrite .env.docker with comprehensive integration examples
- 11.6 Update docker-compose.yml with volume mounts and health check
- 11.7 Standardize docker-entrypoint.sh across all Dockerfiles
- 11.8 Update Docker deployment documentation
- 12.1 Sync .env.example with ConfigService parsing (add missing vars, fix gaps)
- 12.2 Fix setup guide env var mismatches (BoltSetupGuide, PuppetdbSetupGuide)
- 12.3 Verify .env.docker matches ConfigService and Docker ENV defaults
- 12.4 Reconcile Dockerfile ENV defaults with ConfigService Zod schema
- 13.1 npm test: 2893 passed (1 pre-existing ProxmoxService failure)
- 13.1 npm run lint: zero errors
- 13.1 tsc --noEmit: zero type errors
- 13.1 pre-commit run --all-files: all hooks pass
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Alessandro Franceschi <al@example42.com>
…ements

- Add connection timeout handling to PostgresAdapter for faster failure detection
- Fix Proxmox node mapping to use node IDs instead of names for consistency
- Update SSH config path resolution to properly expand home directory shortcuts
- Correct @types/ssh2 dependency placement in package-lock.json
- Update version to 1.0.0 across all package manifests and UI components
- Bump smol-toml dependency to 1.6.1 for improved stability
Copilot AI review requested due to automatic review settings April 10, 2026 18:35
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 82 out of 304 changed files in this pull request and generated 5 comments.

Comment on lines +455 to +461
let timeoutHandle: ReturnType<typeof setTimeout> | undefined;
const timeoutPromise = new Promise<never>((_, reject) => {
timeoutHandle = setTimeout(
() => { reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)); },
SOURCE_TIMEOUT_MS,
);
});
Copilot AI Apr 10, 2026

Promise.race() with a timeout leaves the "losing" branch running. If timeoutPromise rejects first, the in-flight source.getInventory() (and the Promise.all) can later reject without being awaited/handled, which can trigger unhandled promise rejections in Node and create noisy logs or process termination depending on settings. A concrete fix is to attach a .catch(...) handler to the work promise used in the race (or wrap getInventory() with a catch that logs and returns []), so late rejections are always handled; optionally, consider adding AbortController support per source if the plugin APIs can accept it.

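The fix the review describes can be sketched as a small helper. This is purely illustrative — raceWithTimeout, the fallback value, and the logging are hypothetical names, not the PR's code — assuming the goal is that a rejection arriving after the timeout wins is still handled:

```typescript
// Illustrative sketch: race work against a timeout without leaving an
// unhandled rejection behind. All names here are hypothetical.
async function raceWithTimeout<T>(
  work: Promise<T>,
  ms: number,
  fallback: T,
): Promise<T> {
  let handle: ReturnType<typeof setTimeout> | undefined;
  // Attach a catch BEFORE racing: if the timeout wins, a later rejection
  // of `work` is handled here instead of surfacing as unhandled.
  const safeWork = work.catch((error: unknown) => {
    console.warn("source failed (possibly after timeout):", error);
    return fallback;
  });
  const timeout = new Promise<never>((_, reject) => {
    handle = setTimeout(() => {
      reject(new Error(`timed out after ${String(ms)}ms`));
    }, ms);
  });
  try {
    return await Promise.race([safeWork, timeout]);
  } finally {
    // Always clear the timer so it cannot fire (and reject) later.
    clearTimeout(handle);
  }
}
```

A caller would then treat the timeout rejection as the per-source failure path, while a late rejection of the work promise only produces a log line.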
Comment on lines +26 to +33
const timeout = new Promise<never>((_, reject) => {
setTimeout(() => {
reject(new Error("Connection timed out"));
}, 3000);
});

try {
client = await Promise.race([this._pool.connect(), timeout]);
Copilot AI Apr 10, 2026

The connection timeout promise uses setTimeout() but the timer is never cleared. If _pool.connect() resolves quickly, the timeout will still fire later and reject without a handler, which can surface as an unhandled rejection. Track the timeout handle and clearTimeout() once the race completes; also consider ensuring a connect that resolves after a timeout doesn't leak a client (e.g., by adding a .then(release) continuation on the connect promise if the timeout wins).

Suggested change (original, then replacement):
const timeout = new Promise<never>((_, reject) => {
setTimeout(() => {
reject(new Error("Connection timed out"));
}, 3000);
});
try {
client = await Promise.race([this._pool.connect(), timeout]);
let timeoutHandle: ReturnType<typeof setTimeout> | undefined;
let timedOut = false;
const connectPromise = this._pool.connect();
const timeout = new Promise<never>((_, reject) => {
timeoutHandle = setTimeout(() => {
timedOut = true;
reject(new Error("Connection timed out"));
}, 3000);
});
void connectPromise
.then((connectedClient) => {
if (timedOut) {
connectedClient.release();
}
})
.catch(() => {});
try {
try {
client = await Promise.race([connectPromise, timeout]);
} finally {
if (timeoutHandle) {
clearTimeout(timeoutHandle);
}
}

Comment on lines +7 to +17
CREATE TABLE IF NOT EXISTS integration_configs (
id TEXT PRIMARY KEY,
userId TEXT NOT NULL,
integrationName TEXT NOT NULL,
config TEXT NOT NULL, -- JSON, sensitive fields encrypted
isActive INTEGER NOT NULL DEFAULT 1,
createdAt TEXT NOT NULL,
updatedAt TEXT NOT NULL,
FOREIGN KEY (userId) REFERENCES users(id) ON DELETE CASCADE,
UNIQUE(userId, integrationName)
);
Copilot AI Apr 10, 2026

This PR (and the v1.0.0 changelog) states .env is the single source of truth and migration 010 drops integration_configs, but this PR also introduces migration 009 that creates the same table. On a fresh database, this means the table is created then immediately dropped, which contradicts the intended “removed” model and adds avoidable churn. If v1.0.0 should never create this table, remove migration 009 from the migrations set shipped with 1.0.0 (keeping only the drop, which is idempotent), or otherwise clarify via dialect/branching why 009 must exist in 1.0.0.

- Add mapping logic to track original source-specific node IDs to linked node IDs during deduplication
- Remap group node references from original source IDs to linked node IDs to maintain consistency
- Deduplicate node IDs within groups after remapping to prevent duplicate references
- Add comprehensive mocking of Proxmox API functions in property tests (getProxmoxNodes, getProxmoxNextVMID, getProxmoxISOs, getProxmoxTemplates, getProxmoxStorages, getProxmoxNetworks)
- Add TEST_NODES constant to provide known test data for form validation
- Update property test arbitraries to use known test nodes instead of arbitrary strings
- Add explicit return type annotations to all arbitrary generator functions for type safety
- Remove invalid test case for empty node field (now handled by constrained node selection)
- Update API client to properly handle Proxmox node selection in forms
…tions

- Update response type definitions to use array syntax for results field
- Remove unnecessary optional chaining on response.result object
- Simplify property access for status and error fields in executeNodeAction
- Simplify property access for status and error fields in destroyNode
- Improve type safety by clarifying that result object is always present
Copilot AI review requested due to automatic review settings April 10, 2026 19:27
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 82 out of 304 changed files in this pull request and generated 9 comments.

Comments suppressed due to low confidence (1)

Dockerfile:1

  • The Docker build previously installed Bolt via gem install openbolt ..., but the updated Dockerfile now attempts to install openbolt via apt. On Debian/Ubuntu images, openbolt is typically a RubyGem (not an apt package), so this is likely to fail the image build with 'Unable to locate package openbolt'. Consider reverting to gem install openbolt (pinning a version if needed), or verify/replace with the correct OS package name if one exists in the chosen base distro repositories.
# Stage 1: Build frontend with Vite

// Use the first node's name as the primary identifier
// (all related nodes should have the same name)
// eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
const primaryName = node.name?.trim() || node.id || node.uri;
Copilot AI Apr 10, 2026

Linking now builds linkedNode without spreading the chosen 'primary' node (previously ...node). This can drop any additional Node fields that callers may rely on (e.g., top-level status/metadata/custom properties). Also, linkedNode.source = linkedNode.sources[0] depends on iteration order, which can make the 'primary source' nondeterministic between runs and inconsistent with whichever source's fields you copied into linkedNode.

A concrete fix is to construct the linked node from a deterministic 'primary' node (ideally picked by integration priority) and then override id/name to the common identifier and add sources/sourceData. If priority selection isn't available inside NodeLinkingService, consider passing a priority resolver in, or doing the 'choose primary node by priority' step in IntegrationManager before/after calling linkNodes().

Comment on lines +114 to +118
id: primaryName, // Use name (or stable fallback) as primary ID for lookups
name: primaryName,
uri: node.uri, // Will be overwritten with combined URIs
transport: node.transport,
config: node.config,
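A deterministic primary-source choice along the lines the review suggests could look roughly like this — a sketch only, where the interface, the priority table, and the tie-break rule are all assumptions (the priority numbers mirror values mentioned later in this thread, not necessarily the shipped ones):

```typescript
// Hypothetical priority-based primary selection, sketching the reviewer's
// suggestion. Interface, table, and tie-break are assumptions.
interface SourcedNode {
  source: string;
  name: string;
}

const SOURCE_PRIORITY: Record<string, number> = {
  ssh: 50,
  puppetserver: 20,
  bolt: 10,
  puppetdb: 10,
  ansible: 8,
  hiera: 6,
};

// Deterministic: sort descending by priority, break ties alphabetically,
// so the same input set always yields the same primary node.
function pickPrimary(nodes: SourcedNode[]): SourcedNode {
  return [...nodes].sort(
    (a, b) =>
      (SOURCE_PRIORITY[b.source] ?? 0) - (SOURCE_PRIORITY[a.source] ?? 0) ||
      a.source.localeCompare(b.source),
  )[0];
}
```

The linked node would then spread pickPrimary(relatedNodes) before overriding id/name and attaching sources/sourceData.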
Comment on lines +158 to +161
const statements = sql
  .split(";")
  .map((s) => s.trim())
  .filter((s) => {
    // Filter out empty statements and comment-only statements
Copilot AI Apr 10, 2026

Splitting migration files on ; is not safe for PostgreSQL migrations that contain procedural blocks (e.g., CREATE FUNCTION ... LANGUAGE plpgsql, DO $$ ... $$) or even some complex statements where semicolons can appear inside dollar-quoted bodies/strings. This can cause valid Postgres migrations to be executed as broken fragments.

If Postgres support is a goal here, prefer executing the SQL file as a whole (single query) for Postgres, or introduce an adapter method specifically for executing scripts (e.g., executeScript(sql: string)) so SQLite can use db.exec semantics and Postgres can send the entire script. Alternatively, enforce 'one statement per migration file' and remove the splitting logic entirely.

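To see why naive splitting breaks, here is a minimal illustration — the SQL is a made-up DO block, not one of the project's migrations. Splitting on ; tears the dollar-quoted body into fragments that are not individually valid SQL:

```typescript
// A made-up PL/pgSQL block: the semicolons inside $$...$$ belong to the
// procedural body, not statement separators.
const migration = `DO $$
BEGIN
  RAISE NOTICE 'step one';
  RAISE NOTICE 'step two';
END
$$;`;

// The naive approach from the diff: split on ";", trim, drop empties.
const fragments = migration
  .split(";")
  .map((s) => s.trim())
  .filter((s) => s.length > 0);

// The single DO statement has been torn into three invalid fragments.
console.log(fragments.length);
```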
return;
}

db.run("PRAGMA journal_mode = WAL;", (walErr) => {
Copilot AI Apr 10, 2026

The previous SQLite initialization path set several performance-related PRAGMAs (synchronous=NORMAL, cache_size, temp_store=MEMORY, mmap_size, etc.). The new SQLiteAdapter.initialize() only enables WAL and foreign keys, which can be a measurable regression under load.

To preserve current behavior, move the missing PRAGMAs into SQLiteAdapter.initialize() (ideally with comments and possibly making them configurable), so switching to the adapter layer doesn't change runtime characteristics.

Comment on lines +38 to +64
db.run("PRAGMA journal_mode = WAL;", (walErr) => {
if (walErr) {
reject(
new DatabaseConnectionError(
`Failed to enable WAL mode: ${walErr.message}`,
this._databasePath,
),
);
return;
}

db.run("PRAGMA foreign_keys = ON;", (fkErr) => {
if (fkErr) {
reject(
new DatabaseConnectionError(
`Failed to enable foreign keys: ${fkErr.message}`,
this._databasePath,
),
);
return;
}

this._db = db;
this._connected = true;
resolve();
});
});
Copilot AI Apr 10, 2026


Suggested change (original, then replacement):
db.run("PRAGMA journal_mode = WAL;", (walErr) => {
if (walErr) {
reject(
new DatabaseConnectionError(
`Failed to enable WAL mode: ${walErr.message}`,
this._databasePath,
),
);
return;
}
db.run("PRAGMA foreign_keys = ON;", (fkErr) => {
if (fkErr) {
reject(
new DatabaseConnectionError(
`Failed to enable foreign keys: ${fkErr.message}`,
this._databasePath,
),
);
return;
}
this._db = db;
this._connected = true;
resolve();
});
});
// Preserve the previous SQLite runtime characteristics when using the
// adapter layer so initialization does not regress performance under load.
// These settings keep WAL enabled, relax fsync cost to NORMAL, use an
// in-memory temp store, increase the page cache, and enable mmap reads.
const pragmas = [
{
sql: "PRAGMA journal_mode = WAL;",
description: "enable WAL mode",
},
{
sql: "PRAGMA synchronous = NORMAL;",
description: "set synchronous mode",
},
{
sql: "PRAGMA cache_size = -20000;",
description: "set cache size",
},
{
sql: "PRAGMA temp_store = MEMORY;",
description: "set temp store",
},
{
sql: "PRAGMA mmap_size = 268435456;",
description: "set mmap size",
},
{
sql: "PRAGMA foreign_keys = ON;",
description: "enable foreign keys",
},
];
const applyPragma = (index: number): void => {
if (index >= pragmas.length) {
this._db = db;
this._connected = true;
resolve();
return;
}
const pragma = pragmas[index];
db.run(pragma.sql, (pragmaErr) => {
if (pragmaErr) {
reject(
new DatabaseConnectionError(
`Failed to ${pragma.description}: ${pragmaErr.message}`,
this._databasePath,
),
);
return;
}
applyPragma(index + 1);
});
};
applyPragma(0);

Comment on lines +6 to +12
public readonly params: unknown[] | undefined;

constructor(message: string, query: string, params?: unknown[]) {
super(message);
this.name = "DatabaseQueryError";
this.query = query;
this.params = params;
Copilot AI Apr 10, 2026

Storing the raw SQL text and parameter values on DatabaseQueryError can lead to sensitive data exposure if these errors are ever serialized into API responses or logged at info/warn levels (e.g., credentials/tokens in params, or user-supplied values). A safer pattern is to keep params out of the error object by default (or store only a redacted/typed summary), and ensure any logging of queries/params is explicit, redacted, and restricted to debug-level logs.

Suggested change (original, then replacement):
public readonly params: unknown[] | undefined;
constructor(message: string, query: string, params?: unknown[]) {
super(message);
this.name = "DatabaseQueryError";
this.query = query;
this.params = params;
public readonly paramCount: number;
constructor(message: string, query: string, params?: unknown[]) {
super(message);
this.name = "DatabaseQueryError";
this.query = query;
this.paramCount = params?.length ?? 0;

Comment on lines +2 to +22
-- Add integration_configs table for storing per-user integration configurations
-- with encrypted sensitive fields and unique constraint per user/integration.
-- Requirements: 32.1, 32.2, 32.3, 32.4

-- Integration configs table: Stores per-user integration settings
CREATE TABLE IF NOT EXISTS integration_configs (
id TEXT PRIMARY KEY,
userId TEXT NOT NULL,
integrationName TEXT NOT NULL,
config TEXT NOT NULL, -- JSON, sensitive fields encrypted
isActive INTEGER NOT NULL DEFAULT 1,
createdAt TEXT NOT NULL,
updatedAt TEXT NOT NULL,
FOREIGN KEY (userId) REFERENCES users(id) ON DELETE CASCADE,
UNIQUE(userId, integrationName)
);

-- Performance indexes for integration config queries
CREATE INDEX IF NOT EXISTS idx_integration_configs_user ON integration_configs(userId);
CREATE INDEX IF NOT EXISTS idx_integration_configs_name ON integration_configs(integrationName);
CREATE INDEX IF NOT EXISTS idx_integration_configs_active ON integration_configs(isActive);
Copilot AI Apr 10, 2026

This PR adds both migration 009_integration_configs.sql (creates integration_configs) and 010_drop_integration_configs.sql (drops it) while the v1.0.0 direction is explicitly '.env as single source of truth'. If these migrations are newly introduced together, fresh installs will create then immediately drop the table, adding complexity/noise to the migration history.

If integration_configs never shipped in a released migration set, consider dropping migration 009 entirely and keeping only the 'drop' behavior out of history. If it did ship previously, it would be helpful to add a short comment in 010 explaining that 009 existed in earlier versions and 010 is intentionally removing it for upgrades.

Suggested change (original, then replacement):
-- Add integration_configs table for storing per-user integration configurations
-- with encrypted sensitive fields and unique constraint per user/integration.
-- Requirements: 32.1, 32.2, 32.3, 32.4
-- Integration configs table: Stores per-user integration settings
CREATE TABLE IF NOT EXISTS integration_configs (
id TEXT PRIMARY KEY,
userId TEXT NOT NULL,
integrationName TEXT NOT NULL,
config TEXT NOT NULL, -- JSON, sensitive fields encrypted
isActive INTEGER NOT NULL DEFAULT 1,
createdAt TEXT NOT NULL,
updatedAt TEXT NOT NULL,
FOREIGN KEY (userId) REFERENCES users(id) ON DELETE CASCADE,
UNIQUE(userId, integrationName)
);
-- Performance indexes for integration config queries
CREATE INDEX IF NOT EXISTS idx_integration_configs_user ON integration_configs(userId);
CREATE INDEX IF NOT EXISTS idx_integration_configs_name ON integration_configs(integrationName);
CREATE INDEX IF NOT EXISTS idx_integration_configs_active ON integration_configs(isActive);
-- Intentionally left as a no-op.
--
-- The v1.0.0 configuration direction is ".env as single source of truth",
-- so the integration_configs table is not part of the schema for fresh installs.
-- This migration number is retained to preserve migration ordering/history
-- without creating schema that is removed by a subsequent migration.

*
* @returns Array of provisioning capabilities from all plugins
*/
getAllProvisioningCapabilities(): {
Copilot AI Apr 10, 2026

Capability discovery uses a runtime 'listProvisioningCapabilities' in tool check with multiple no-unsafe-* suppressions and assumes the method is synchronous. This makes the API harder to evolve (e.g., if a plugin needs async discovery) and weakens type safety.

A more maintainable approach is to extend the execution-tool plugin typing with an explicit optional method (e.g., listProvisioningCapabilities?: () => ProvisioningCapability[]) and then call it without unsafe casts/suppressions. If async is needed, consider making getAllProvisioningCapabilities() async and supporting Promise<ProvisioningCapability[]>.

Comment on lines +278 to +284
if (
"listProvisioningCapabilities" in tool &&
typeof tool.listProvisioningCapabilities === "function"
) {
try {
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment, @typescript-eslint/no-unsafe-call
const capabilities = tool.listProvisioningCapabilities();
Copilot AI Apr 10, 2026

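The typed alternative could be sketched like this — interface and function names are illustrative (the thread later reports an isProvisioningCapableExecutionTool guard was added, whose exact shape may differ):

```typescript
// Sketch of the typed alternative: declare the method as optional on the
// plugin interface and narrow with a type guard, instead of an untyped
// `in` check plus eslint suppressions. All names are illustrative.
interface ProvisioningCapability {
  action: string;
}

interface ExecutionToolPlugin {
  name: string;
  listProvisioningCapabilities?: () => ProvisioningCapability[];
}

function isProvisioningCapable(
  tool: ExecutionToolPlugin,
): tool is ExecutionToolPlugin & {
  listProvisioningCapabilities: () => ProvisioningCapability[];
} {
  return typeof tool.listProvisioningCapabilities === "function";
}

// No casts or suppressions needed: the guard narrows the type.
function getAllCapabilities(
  tools: ExecutionToolPlugin[],
): ProvisioningCapability[] {
  return tools.flatMap((t) =>
    isProvisioningCapable(t) ? t.listProvisioningCapabilities() : [],
  );
}
```

If a plugin later needs async discovery, the optional method's return type can widen to ProvisioningCapability[] | Promise<ProvisioningCapability[]> and the aggregator become async.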
…ures

- Update API mock return values to match actual function signatures (arrays instead of wrapped objects)
- Change getProxmoxNodes mock from { nodes: [...] } to [...]
- Change getProxmoxNextVMID mock from { vmid: 100 } to 100
- Change getProxmoxISOs, getProxmoxTemplates, getProxmoxStorages, getProxmoxNetworks mocks to return arrays directly
- Add import of api module at top level for use in beforeEach
- Remove redundant require() calls in individual test cases
- Update tab switching test to check hostname field instead of VMID (which may be auto-populated)
- Add clarifying comments for button state assertions during submission
- Improve test assertions to verify API calls instead of just button state changes
@alvagante
Member Author

@copilot apply changes based on the comments in this thread, then fix all the failing tests in CI

Copilot AI review requested due to automatic review settings April 10, 2026 21:18
@alvagante alvagante review requested due to automatic review settings April 10, 2026 21:18
Copilot AI review requested due to automatic review settings April 10, 2026 21:23
@alvagante alvagante review requested due to automatic review settings April 10, 2026 21:24
Contributor

Copilot AI commented Apr 10, 2026

@copilot apply changes based on the comments in this thread, then fix all the failing tests in CI

Applied all changes from the review threads and fixed the CI test failures. Here's a summary:

CI test fixes (commit 3ddcc4b):

  • Increased frontend vitest testTimeout to 30,000 ms — property-based tests (100 fast-check runs each) were exceeding the 5,000 ms default in CI

Review comment fixes (same commit, plus 1c2ba4d for follow-up):

  • MigrationRunner: PostgreSQL now executes the full SQL file in one execute() call, avoiding broken splitting on ; inside dollar-quoted PL/pgSQL bodies
  • DatabaseService: removed the directory-creation block entirely — SQLiteAdapter.initialize() already handles it, and it would break when DB_TYPE=postgres
  • IntegrationManager provisioning types: replaced inline anonymous return types and eslint-disable suppressions with exported ProvisioningCapability* interfaces and a typed isProvisioningCapableExecutionTool() type-guard
  • NodeLinkingService: primary source is now selected deterministically by plugin priority (SSH:50 > Puppetserver:20 > Bolt/PuppetDB:10 > Ansible:8 > Hiera:6); primary node fields are spread into LinkedNode; extracted getNodeSource() helper; loop caches priorities to avoid redundant lookups
  • AnsibleService: group.hosts entries are filtered to typeof h === "string", trimmed, and empty strings removed before prefixing with ansible:
  • IntegrationManager timeout: .catch() attached to the work Promise.all so late rejections after a timeout are always handled
  • SQLiteAdapter: _inTransaction reset in finally blocks; restored synchronous=NORMAL, cache_size=-20000, temp_store=MEMORY, mmap_size=268435456 PRAGMAs
  • DatabaseQueryError: replaced raw params with paramCount to prevent sensitive data exposure; updated test accordingly
  • Migration 009: replaced the CREATE TABLE DDL with a no-op comment block
  • PostgresAdapter: timeout handle cleared in finally; client released if it resolves after the timeout wins

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • httpstat.us
    • Triggering command: vitest worker processes (node .../node_modules/vitest/dist/workers/forks.js) during test and lint runs (dns block)
  • puppetdb.example.com
    • Triggering command: vitest worker processes (node .../node_modules/vitest/dist/workers/forks.js) during test and typecheck runs (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

@alvagante alvagante merged commit 777eb7e into main Apr 10, 2026
5 checks passed
@alvagante alvagante deleted the uno branch April 10, 2026 21:43