Updating SKE Nodepools shows false plan #1348
Description
When adding a new node pool to an existing SKE cluster, the Terraform plan incorrectly shows changes to all existing node pools, even though only a single new pool is being added. The provider appears to be matching node pools by array position rather than by a stable identifier (like pool name or ID), causing it to misidentify all pools when the array order changes or a new element is inserted.
Steps to reproduce

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    stackit = {
      source  = "stackitcloud/stackit"
      version = "= 0.70"
    }
  }
}

provider "stackit" {
  default_region           = "eu01"
  service_account_key_path = "key.json"
}

locals {
  # Dynamically load node pools from JSON files
  client_dirs = fileset("${path.module}/clients", "*.json")
  all_node_pools = flatten([
    for f in local.client_dirs : jsondecode(file("${path.module}/clients/${f}")).node_pools
  ])
}

resource "stackit_ske_cluster" "prod" {
  project_id             = "<PROJECT_ID>"
  name                   = "my-cluster"
  kubernetes_version_min = "1.32"
  node_pools             = local.all_node_pools

  maintenance = {
    enable_kubernetes_version_updates    = false
    enable_machine_image_version_updates = false
    start                                = "01:00:00Z"
    end                                  = "02:00:00Z"
  }
}
```

- Deploy an SKE cluster with multiple node pools (e.g., 10 pools) using the configuration above
- Add a new JSON file to the `clients/` directory containing a new node pool configuration (e.g., `new-client.json`)
- Run `terraform plan`
- Observe that instead of showing only the addition of the new pool, all existing pools show changes
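For reference, each file in `clients/` is assumed to carry a `node_pools` array matching the shape consumed by `jsondecode(...).node_pools` above. A hypothetical `clients/client-a.json` (attribute names are illustrative; check the provider's node pool schema):

```json
{
  "node_pools": [
    {
      "name": "client-a-pool",
      "machine_type": "g1.2",
      "availability_zones": ["eu01-3"],
      "minimum": 1,
      "maximum": 3,
      "labels": { "client": "client-a" }
    }
  ]
}
```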
Actual behavior
When running terraform plan after adding a new node pool configuration file, the output shows modifications to all existing node pools, including changes to:
- Pool names (renaming from one pool to another)
- Labels (changing client labels)
- Machine types
- Availability zones
- Min/max scaling values
- Volume sizes
Example from the plan output:

```
  ~ resource "stackit_ske_cluster" "prod" {
      ~ node_pools = [
          ~ {
              ~ labels = {
                  ~ "client" = "client-b" -> "client-c"
                }
              ~ name   = "client-b-pool" -> "client-c-pool"
              # ...
            },
          ~ {
              ~ labels = {
                  ~ "client" = "client-a" -> "client-b"
                }
              ~ name   = "client-a-pool" -> "client-b-pool"
              # ...
            },
          # ... (9 more pools showing spurious changes)
          + {
              + name = "new-pool"
              # ... (actual new pool being added)
            },
        ]
    }
```
This suggests the provider is matching pools by position in the array rather than by a stable identifier, causing a cascading effect where all pools appear to shift when a new one is inserted.
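The cascading effect can be reproduced with a minimal sketch (plain Python over hypothetical pool names, not provider code): a positional diff compares index `i` of the prior state against index `i` of the new configuration, so inserting one element at the front makes every existing pair look modified.

```python
# Hypothetical pool lists: the same three pools, with one new pool inserted.
old = ["client-a-pool", "client-b-pool", "client-c-pool"]
new = ["new-pool", "client-a-pool", "client-b-pool", "client-c-pool"]

# Positional diff: compare index i of the old state against index i of the
# new configuration. Every zipped pair differs, so all existing pools
# appear "modified", and the trailing element appears to be the addition.
positional_changes = [(o, n) for o, n in zip(old, new) if o != n]
print(positional_changes)
```

Even though no existing pool changed, all three pairs are reported as modifications, which mirrors the noisy plan output above.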
Expected behavior
The Terraform plan should show only the addition of the new node pool:
```
  ~ resource "stackit_ske_cluster" "prod" {
      ~ node_pools = [
          # (10 unchanged elements hidden)
          + {
              + allow_system_components = true
              + availability_zones      = ["eu01-3"]
              + labels                  = {
                  + "client" = "new-client"
                }
              + name                    = "new-pool"
              # ... (other new pool attributes)
            },
        ]
    }
```
All existing node pools should remain unchanged and matched correctly by their name or unique identifier, regardless of their position in the array.
Environment

- OS: macOS
- Terraform version (see `terraform --version`): v1.5.0+
- Version of the STACKIT Terraform provider: v0.70
Additional information
This issue makes it extremely difficult to manage clusters with dynamic node pool configurations, as any addition or removal causes Terraform to report modifications to every pool. This creates:
- Risk of unintended changes: the plan diff is so noisy that it is hard to verify what will actually change
- Potential downtime: if applied, this could recreate all node pools unnecessarily
- State drift concerns: the large diff makes it unclear whether the state correctly tracks the actual infrastructure
The provider should use a stable identifier (such as the name field) to match node pools between the desired state and current state, similar to how other Terraform providers handle array elements with unique identifiers.
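Matching by a stable key instead of by position can be sketched as follows (plain Python over hypothetical pool attributes, not provider internals): index both sides by the pool's `name` and diff the keyed maps, so only genuine additions, removals, and attribute changes are reported.

```python
# Hypothetical pools keyed by name; attribute dicts are illustrative.
old_pools = {
    "client-a-pool": {"machine_type": "g1.2", "maximum": 3},
    "client-b-pool": {"machine_type": "g1.2", "maximum": 3},
}
new_pools = {
    "client-a-pool": {"machine_type": "g1.2", "maximum": 3},
    "client-b-pool": {"machine_type": "g1.2", "maximum": 3},
    "new-pool":      {"machine_type": "g1.2", "maximum": 3},
}

# Diff by key rather than by position.
added = sorted(set(new_pools) - set(old_pools))
removed = sorted(set(old_pools) - set(new_pools))
changed = sorted(
    name for name in set(old_pools) & set(new_pools)
    if old_pools[name] != new_pools[name]
)
print(added, removed, changed)
```

With this approach, the same scenario as above yields a single addition (`new-pool`) and no spurious modifications, regardless of where the new pool lands in the list.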