Description
Hi Trident team and community,
I’m currently working on a Trident deployment (v25.06.0) on RKE2 and could use some advice on a specific storage strategy.
Background: We are aiming to achieve storage overcommitment at the Kubernetes level. Due to company policy, our storage team manages strict Quotas at the SVM level. However, we want to allow users to declare large numbers of PVCs (Qtrees) as long as the actual physical data usage does not exceed the SVM limit.
Issue Description:
Environment:
Trident Version: 25.06.0
Storage Driver: ontap-nas-economy
Backend Config:
- denyNewVolumePools: "true"
- storagePrefix: "(SVM NAME)_VolumePools"
FlexVol created manually:
- trident_qtree_pool_storagePrefix_(10 random characters)
Orchestrator: Kubernetes / RKE2
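For reference, a minimal sketch of the backend configuration described above, expressed as a TridentBackendConfig (the SVM name, LIF address, and Secret name are placeholders; field types should be checked against the Trident docs for this version):

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-ontap-nas-economy   # placeholder name
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas-economy
  managementLIF: 10.0.0.1           # placeholder
  svm: svm1                         # placeholder
  storagePrefix: svm1_VolumePools   # "(SVM NAME)_VolumePools" pattern
  denyNewVolumePools: "true"        # prevent Trident from creating new FlexVols
  credentials:
    name: backend-credentials       # Secret holding username/password
```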
Our Proposed Strategy:
- Manually create a large FlexVol (with no specific size limit, or a very large limit) on the ONTAP SVM.
- Set denyNewVolumePools: "true" in the Trident backend to prevent Trident from automatically creating new FlexVols.
- Force Trident to use this manually created FlexVol as the primary "storage pool" for all subsequent Qtree (PVC) provisioning.
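For the first step, the idea is to create the FlexVol thin-provisioned and named to match the pattern Trident expects for economy pools. A sketch in ONTAP CLI terms (vserver, aggregate, size, and the 10-character suffix are all placeholders, and the exact naming convention should be verified against what Trident generates in this environment):

```
volume create -vserver svm1 \
  -volume trident_qtree_pool_svm1_VolumePools_abcdefghij \
  -aggregate aggr1 -size 50t \
  -space-guarantee none \
  -security-style unix
```

The `-space-guarantee none` setting is what makes the FlexVol thin-provisioned at the aggregate level, which is the premise of the overcommitment question below.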
Questions:
- Discovery Logic: Besides matching the storagePrefix (e.g., trident_), which specific attributes (export policy, snapshot policy, security style) must the manual FlexVol have for Trident to recognize and "adopt" it as a valid pool?
- Overcommitment: In an ontap-nas-economy setup, if the underlying FlexVol is thin-provisioned, will Trident allow provisioning of Qtrees whose aggregate "logical" size exceeds the FlexVol's size, provided physical space is available?
- Stability: Are there any known side effects of using denyNewVolumePools: "true" with a single, manually managed large FlexVol, specifically in how Trident calculates "available" space for the backend?
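On the stability question, we have been inspecting what Trident itself reports for the backend, since that should reflect how it sees the pool's capacity. A sketch of the commands we use (the backend name is a placeholder):

```
# List backends, then dump one as JSON to inspect per-pool capacity fields
tridentctl get backend -n trident
tridentctl get backend backend-ontap-nas-economy -n trident -o json
```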
We would greatly appreciate any insights or shared experiences. Thank you!