Conversation

@jvpasinatto
Contributor

@jvpasinatto jvpasinatto commented Nov 17, 2025

CLOUD-940

CHANGE DESCRIPTION

Problem
The operator-testing S3 bucket is now private, so its objects can no longer be accessed through unauthenticated object URLs. As a result, tests that verify the existence of backups in this bucket are failing.

Solution
Use the respective cloud CLIs, authenticated via the pipeline credentials, to verify the existence of backups.

Also, use PBM 2.11.0 as the default in e2e-tests until K8SPSMDB-1522 is fixed.
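
A minimal sketch of the idea (the helper name, storage keys, and the GCS/Azure paths below are illustrative assumptions, not the exact code added in this PR):

check_backup_exists() {
	local storage_name=$1 backup_dest=$2
	case "$storage_name" in
		aws-s3)
			# authenticated via AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the pipeline
			aws s3 ls "s3://operator-testing/${backup_dest}" >/dev/null
			;;
		gcp-cs)
			# authenticated via the service account activated in the pipeline
			gsutil ls "gs://operator-testing/${backup_dest}" >/dev/null
			;;
		azure-blob)
			# authenticated via AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_KEY
			az storage blob list -c operator-testing --prefix "${backup_dest}" -o tsv | grep -q .
			;;
	esac
}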

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support the oldest and newest supported MongoDB versions?
  • Does the change support the oldest and newest supported Kubernetes versions?

sleep 10
((retry += 1))
done


[shfmt] reported by reviewdog 🐶

Suggested change

check_backup_deletion_gcs "$logical_dest"
}



[shfmt] reported by reviewdog 🐶

Suggested change

@jvpasinatto jvpasinatto marked this pull request as draft November 18, 2025 02:20
}

function check_backup_deletion_gcs() {
backup_dest_gcp=$1

[shfmt] reported by reviewdog 🐶

Suggested change
backup_dest_gcp=$1

Comment on lines +1600 to +1609
while gsutil ls "$gcs_path" >/dev/null 2>&1; do
if [ $retry -ge 15 ]; then
echo "max retry count $retry reached. something went wrong with operator or kubernetes cluster"
echo "Backup $gcs_path still exists in $storage_name (expected it to be deleted)"
exit 1
fi
echo "waiting for backup to be deleted from $storage_name"
sleep 10
((retry += 1))
done

[shfmt] reported by reviewdog 🐶

Suggested change
while gsutil ls "$gcs_path" >/dev/null 2>&1; do
if [ $retry -ge 15 ]; then
echo "max retry count $retry reached. something went wrong with operator or kubernetes cluster"
echo "Backup $gcs_path still exists in $storage_name (expected it to be deleted)"
exit 1
fi
echo "waiting for backup to be deleted from $storage_name"
sleep 10
((retry += 1))
done
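
For reference, a hypothetical S3 analogue of the same wait loop (the aws s3 ls call and the $s3_path variable are assumptions for illustration, not lines from this PR):

retry=0
while aws s3 ls "$s3_path" >/dev/null 2>&1; do
	if [ $retry -ge 15 ]; then
		echo "max retry count $retry reached. something went wrong with operator or kubernetes cluster"
		echo "Backup $s3_path still exists in $storage_name (expected it to be deleted)"
		exit 1
	fi
	echo "waiting for backup to be deleted from $storage_name"
	sleep 10
	((retry += 1))
done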

((retry += 1))
done

echo "Backup $gcs_path not found in $storage_name"

[shfmt] reported by reviewdog 🐶

Suggested change
echo "Backup $gcs_path not found in $storage_name"
echo "Backup $gcs_path not found in $storage_name"

sleep 10
((retry += 1))
done


[shfmt] reported by reviewdog 🐶

Suggested change

Comment on lines +1656 to +1688
local secret_name="aws-s3-secret"

if [[ -n "$AWS_ACCESS_KEY_ID" ]] && [[ -n "$AWS_SECRET_ACCESS_KEY" ]]; then
echo "AWS credentials already set in environment"
return 0
fi

echo "Setting up AWS credentials from secret: $secret_name"

# Disable tracing for the entire credential section
local trace_was_on=0
if [[ $- == *x* ]]; then
trace_was_on=1
set +x
fi

AWS_ACCESS_KEY_ID=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' 2>/dev/null | base64 -d 2>/dev/null)
AWS_SECRET_ACCESS_KEY=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' 2>/dev/null | base64 -d 2>/dev/null)

if [[ -z "$AWS_ACCESS_KEY_ID" ]] || [[ -z "$AWS_SECRET_ACCESS_KEY" ]]; then
# Re-enable tracing before error message if it was on
[[ $trace_was_on -eq 1 ]] && set -x
echo "Failed to extract AWS credentials from secret"
return 1
fi

export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x

echo "AWS credentials configured successfully"

[shfmt] reported by reviewdog 🐶

Suggested change

Original:

local secret_name="aws-s3-secret"
if [[ -n "$AWS_ACCESS_KEY_ID" ]] && [[ -n "$AWS_SECRET_ACCESS_KEY" ]]; then
echo "AWS credentials already set in environment"
return 0
fi
echo "Setting up AWS credentials from secret: $secret_name"
# Disable tracing for the entire credential section
local trace_was_on=0
if [[ $- == *x* ]]; then
trace_was_on=1
set +x
fi
AWS_ACCESS_KEY_ID=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' 2>/dev/null | base64 -d 2>/dev/null)
AWS_SECRET_ACCESS_KEY=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' 2>/dev/null | base64 -d 2>/dev/null)
if [[ -z "$AWS_ACCESS_KEY_ID" ]] || [[ -z "$AWS_SECRET_ACCESS_KEY" ]]; then
# Re-enable tracing before error message if it was on
[[ $trace_was_on -eq 1 ]] && set -x
echo "Failed to extract AWS credentials from secret"
return 1
fi
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x
echo "AWS credentials configured successfully"
Suggested (shfmt):

local secret_name="aws-s3-secret"
if [[ -n $AWS_ACCESS_KEY_ID ]] && [[ -n $AWS_SECRET_ACCESS_KEY ]]; then
echo "AWS credentials already set in environment"
return 0
fi
echo "Setting up AWS credentials from secret: $secret_name"
# Disable tracing for the entire credential section
local trace_was_on=0
if [[ $- == *x* ]]; then
trace_was_on=1
set +x
fi
AWS_ACCESS_KEY_ID=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' 2>/dev/null | base64 -d 2>/dev/null)
AWS_SECRET_ACCESS_KEY=$(kubectl get secret "$secret_name" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' 2>/dev/null | base64 -d 2>/dev/null)
if [[ -z $AWS_ACCESS_KEY_ID ]] || [[ -z $AWS_SECRET_ACCESS_KEY ]]; then
# Re-enable tracing before error message if it was on
[[ $trace_was_on -eq 1 ]] && set -x
echo "Failed to extract AWS credentials from secret"
return 1
fi
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x
echo "AWS credentials configured successfully"

Comment on lines +1762 to +1763
# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x

[shfmt] reported by reviewdog 🐶

Suggested change
# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x

# Re-enable tracing if it was on
[[ $trace_was_on -eq 1 ]] && set -x

echo "Azure credentials configured successfully"

[shfmt] reported by reviewdog 🐶

Suggested change
echo "Azure credentials configured successfully"
echo "Azure credentials configured successfully"

sleep 10
((retry += 1))
done


[shfmt] reported by reviewdog 🐶

Suggested change

sleep 10
((retry += 1))
done


[shfmt] reported by reviewdog 🐶

Suggested change

sleep 10
((retry += 1))
done


[shfmt] reported by reviewdog 🐶

Suggested change

@jvpasinatto jvpasinatto marked this pull request as ready for review November 19, 2025 13:38
@JNKPercona
Collaborator

Test Name Result Time
arbiter passed 00:11:35
balancer passed 00:18:18
cross-site-sharded passed 00:18:21
custom-replset-name passed 00:10:16
custom-tls passed 00:13:45
custom-users-roles passed 00:10:37
custom-users-roles-sharded passed 00:11:18
data-at-rest-encryption passed 00:12:53
data-sharded passed 00:23:03
demand-backup passed 00:16:00
demand-backup-eks-credentials-irsa passed 00:00:07
demand-backup-fs passed 00:23:32
demand-backup-if-unhealthy passed 00:08:30
demand-backup-incremental passed 00:44:34
demand-backup-incremental-sharded passed 00:55:00
demand-backup-physical-parallel passed 00:08:29
demand-backup-physical-aws passed 00:11:48
demand-backup-physical-azure passed 00:11:26
demand-backup-physical-gcp-s3 passed 00:12:10
demand-backup-physical-gcp-native passed 00:12:21
demand-backup-physical-minio passed 00:20:17
demand-backup-physical-sharded-parallel passed 00:10:49
demand-backup-physical-sharded-aws passed 00:17:57
demand-backup-physical-sharded-azure passed 00:17:28
demand-backup-physical-sharded-gcp-native passed 00:17:35
demand-backup-physical-sharded-minio passed 00:18:29
demand-backup-sharded passed 00:25:08
expose-sharded passed 00:32:53
finalizer passed 00:10:11
ignore-labels-annotations passed 00:07:31
init-deploy passed 00:12:45
ldap passed 00:09:07
ldap-tls passed 00:12:34
limits passed 00:06:17
liveness passed 00:08:11
mongod-major-upgrade passed 00:12:11
mongod-major-upgrade-sharded passed 00:20:54
monitoring-2-0 passed 00:24:37
monitoring-pmm3 passed 00:29:22
multi-cluster-service passed 00:15:27
multi-storage passed 00:18:29
non-voting-and-hidden passed 00:16:50
one-pod passed 00:07:40
operator-self-healing-chaos passed 00:12:36
pitr passed 00:31:45
pitr-physical passed 01:00:46
pitr-sharded passed 00:20:15
pitr-to-new-cluster passed 00:25:31
pitr-physical-backup-source passed 00:53:47
preinit-updates passed 00:05:04
pvc-resize passed 00:12:13
recover-no-primary passed 00:27:07
replset-overrides passed 00:15:50
rs-shard-migration passed 00:13:25
scaling passed 00:11:05
scheduled-backup passed 00:17:41
security-context passed 00:06:46
self-healing-chaos passed 00:15:17
service-per-pod passed 00:18:52
serviceless-external-nodes passed 00:08:00
smart-update passed 00:08:07
split-horizon passed 00:07:47
stable-resource-version passed 00:04:49
storage passed 00:07:38
tls-issue-cert-manager passed 00:29:39
upgrade passed 00:09:47
upgrade-consistency passed 00:06:10
upgrade-consistency-sharded-tls passed 00:55:11
upgrade-sharded passed 00:20:02
upgrade-partial-backup passed 00:16:32
users passed 00:17:27
version-service passed 00:25:29
Summary Value
Tests Run 72/72
Job Duration 03:06:42
Total Test Time 21:11:57

commit: a1bead7
image: perconalab/percona-server-mongodb-operator:PR-2113-a1bead7a

@jvpasinatto jvpasinatto requested review from egegunes and hors November 19, 2025 22:40
@hors hors merged commit 4b9c22b into main Nov 20, 2025
19 checks passed
@hors hors deleted the cloud-940 branch November 20, 2025 10:12

Labels

size/L (100-499 lines), tests
