OADP-2785: Add PriorityClassName support to PodConfig in DataProtectionApplication #2010
base: oadp-dev
Conversation
- Introduced PriorityClassName field in PodConfig struct to allow specification of pod priority.
- Updated CRD and related manifests to include PriorityClassName description.
- Modified NodeAgent and Velero deployment builders to utilize PriorityClassName from DataProtectionApplication spec.
- Enhanced tests to validate PriorityClassName functionality in NodeAgent and Velero deployments.
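For illustration, a DPA sketch showing where the new field sits (the field path follows this PR; the class name `high-priority` is an illustrative assumption and would need to exist as a PriorityClass in the cluster):

```yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa
spec:
  configuration:
    velero:
      podConfig:
        priorityClassName: high-priority  # assumed PriorityClass; must exist in-cluster
    nodeAgent:
      enable: true
      podConfig:
        priorityClassName: high-priority
```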
@kaovilai: This pull request references OADP-2785 which is a valid jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough

Adds an optional PriorityClassName to PodConfig, updates CRD schemas to expose the field, and propagates the value into NodeAgent DaemonSet and Velero Deployment pod specs with accompanying tests.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kaovilai

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Actionable comments posted: 2
🧹 Nitpick comments (3)
internal/controller/nodeagent.go (2)
362-367: Mirror the safer pattern used for Velero

Prefer computing the string with guards and passing it without an IIFE, for readability and consistency.
Apply this diff:
```diff
-		install.WithPriorityClassName(func() string {
-			if dpa.Spec.Configuration.NodeAgent != nil && dpa.Spec.Configuration.NodeAgent.PodConfig != nil {
-				return dpa.Spec.Configuration.NodeAgent.PodConfig.PriorityClassName
-			}
-			return ""
-		}()),
+		func() install.Option {
+			pc := ""
+			if dpa.Spec.Configuration != nil &&
+				dpa.Spec.Configuration.NodeAgent != nil &&
+				dpa.Spec.Configuration.NodeAgent.PodConfig != nil {
+				pc = dpa.Spec.Configuration.NodeAgent.PodConfig.PriorityClassName
+			}
+			return install.WithPriorityClassName(pc)
+		}(),
```
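The guarded lookup above can be sketched self-contained; the struct names below are simplified stand-ins for the DPA API types, not the operator's actual definitions:

```go
package main

import "fmt"

// Simplified stand-ins for the DPA API types (illustrative names,
// not the real oadp-operator definitions).
type PodConfig struct {
	PriorityClassName string
}

type NodeAgentConfig struct {
	PodConfig *PodConfig
}

type Configuration struct {
	NodeAgent *NodeAgentConfig
}

// priorityClassFor walks the optional chain with nil guards and
// falls back to "" when any link is missing.
func priorityClassFor(cfg *Configuration) string {
	if cfg != nil && cfg.NodeAgent != nil && cfg.NodeAgent.PodConfig != nil {
		return cfg.NodeAgent.PodConfig.PriorityClassName
	}
	return ""
}

func main() {
	fmt.Printf("%q\n", priorityClassFor(nil))             // ""
	fmt.Printf("%q\n", priorityClassFor(&Configuration{})) // ""
	cfg := &Configuration{NodeAgent: &NodeAgentConfig{PodConfig: &PodConfig{PriorityClassName: "high-priority"}}}
	fmt.Printf("%q\n", priorityClassFor(cfg)) // "high-priority"
}
```

The benefit of the suggested shape is that every nil check lives in one place and the empty string falls out naturally as the default.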
303-307: Nit: event message says "deployment" for a DaemonSet

Use "daemonset" to avoid confusion in events.

```diff
-		fmt.Sprintf("performed %s on NodeAgent deployment %s/%s", op, ds.Namespace, ds.Name),
+		fmt.Sprintf("performed %s on NodeAgent daemonset %s/%s", op, ds.Namespace, ds.Name),
```

internal/controller/nodeagent_test.go (1)
571-573: PriorityClassName assignment OK; consider simplifying

You can set PriorityClassName unconditionally (the zero value is "") to reduce branching. Behavior is identical.
```diff
-	if len(options.priorityClassName) > 0 {
-		testBuiltNodeAgentDaemonSet.Spec.Template.Spec.PriorityClassName = options.priorityClassName
-	}
+	testBuiltNodeAgentDaemonSet.Spec.Template.Spec.PriorityClassName = options.priorityClassName
```
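The simplification relies on the string zero value: assigning "" unconditionally leaves the field exactly as the guarded branch would. A minimal illustration with plain structs (not the real Kubernetes types):

```go
package main

import "fmt"

type podSpec struct {
	PriorityClassName string
}

// setGuarded mirrors the original test helper: assign only when non-empty.
func setGuarded(name string) podSpec {
	spec := podSpec{}
	if len(name) > 0 {
		spec.PriorityClassName = name
	}
	return spec
}

// setAlways assigns unconditionally; for name == "" this writes the
// string zero value, which is what the field already holds.
func setAlways(name string) podSpec {
	spec := podSpec{}
	spec.PriorityClassName = name
	return spec
}

func main() {
	for _, name := range []string{"", "high-priority"} {
		fmt.Println(setGuarded(name) == setAlways(name)) // true for both
	}
}
```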
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (7)
- api/v1alpha1/dataprotectionapplication_types.go (1 hunks)
- bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml (3 hunks)
- config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (3 hunks)
- internal/controller/nodeagent.go (1 hunks)
- internal/controller/nodeagent_test.go (3 hunks)
- internal/controller/velero.go (1 hunks)
- internal/controller/velero_test.go (3 hunks)
🔇 Additional comments (7)
api/v1alpha1/dataprotectionapplication_types.go (1)
365-367: API change is correct

Optional PriorityClassName on PodConfig with the expected json tag aligns with the CRD.
config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (1)
497-499: Bundle is in sync with base CRD

Verification confirms priorityClassName is present at all three locations in the bundle (497–499, 854–856, 1374–1376) with consistent field definitions. The change has been properly propagated.
internal/controller/nodeagent_test.go (1)
258-259: New test option field looks good

The addition aligns with PodSpec's string field. No issues.
bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml (1)
497-500: CRD schema additions are correct and consistent

All verification checks passed:

- API type has the correct json tag: api/v1alpha1/dataprotectionapplication_types.go:367
- Base and bundle CRDs are in sync with 3 priorityClassName occurrences each (nodeAgent, restic, velero)
- Velero controller correctly applies the field from dpa.Spec.Configuration.Velero.PodConfig.PriorityClassName (velero.go:204-209)
- NodeAgent controller correctly applies the field from dpa.Spec.Configuration.NodeAgent.PodConfig.PriorityClassName (nodeagent.go:362-367)
- Restic field exists in CRD for backwards compatibility despite being deprecated in favor of nodeAgent
internal/controller/velero_test.go (3)
634-634: LGTM! Clean addition of the test option field.

The priorityClassName field addition follows the existing naming conventions and patterns in the test helper struct.
774-777: LGTM! Correct implementation of PriorityClassName assignment.

The conditional logic properly checks for a non-empty string before setting the field on the deployment's pod template spec, following the same pattern as other optional fields in this helper function.
1038-1061: LGTM! Comprehensive test coverage for PriorityClassName feature.

The test case properly validates that a PriorityClassName specified in the DPA's PodConfig is correctly propagated to the Velero deployment. The test follows established patterns and uses realistic test data.
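The propagation those tests verify can be sketched as a minimal table-driven check; the types below are simplified stand-ins, not the operator's actual builders:

```go
package main

import "fmt"

type podTemplateSpec struct {
	PriorityClassName string
}

// applyPriorityClass mirrors what the deployment builder is expected to do:
// copy the configured class name from the DPA PodConfig onto the pod
// template spec (the empty string leaves the default behavior intact).
func applyPriorityClass(tpl *podTemplateSpec, name string) {
	tpl.PriorityClassName = name
}

func main() {
	cases := []struct{ in, want string }{
		{"", ""},
		{"high-priority", "high-priority"},
	}
	for _, c := range cases {
		tpl := podTemplateSpec{}
		applyPriorityClass(&tpl, c.in)
		if tpl.PriorityClassName != c.want {
			panic(fmt.Sprintf("got %q, want %q", tpl.PriorityClassName, c.want))
		}
	}
	fmt.Println("ok")
}
```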
@coderabbitai help
Chat

There are 3 ways to chat with CodeRabbit:
CodeRabbit commands
Other keywords and placeholders
CodeRabbit configuration file
/retest

ai-retester: The e2e test failed because the "Mongo application CSI" test timed out after 540 seconds.
/retest

/retest-required
/retest

ai-retester: The OADP e2e test failed because the "Mongo application CSI" test timed out after 540 seconds, and the "todolist" application pod failed to initialize.
/retest

ai-retester: It was a "pod-not-ready" failure: the Mongo CSI restore brought up the test apps, but the restored pod stayed stuck in the PodInitializing state and never became ready, so the restore test failed.
/retest

ai-retester: The e2e tests failed because the "Mongo application CSI" test timed out while waiting for a Pod to succeed, leading to "ContainerFailed" and a failure within the "e2e-test-aws-e2e" step. Specifically, the pod "todolist-6d7bb9554c-x7q2q" in namespace mongo-persistant didn't achieve success.
/retest

ai-retester: The e2e tests failed because the todolist Pod in the mongo-persistent namespace never succeeded, timing out after a significant period. This likely indicates an issue with the application deployment or its connectivity within the test environment.
/retest

ai-retester: The failing step was the "Mongo application CSI" test. In this run the pod created from the test's manifest did not start, so the eventual consistency checks inside the test never passed and the test blew up.
/retest

1 similar comment

/retest
@kaovilai: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
Why the changes were made
How to test the changes made