
Commit 7ff2de5

Red Hat OpenShift Service on AWS 1.33 AI Conformance
Signed-off-by: Timothy Williams <tiwillia@redhat.com>
1 parent 629fea8 commit 7ff2de5

File tree: 1 file changed (+89, -0 lines)

v1.33/rosa/PRODUCT.yaml

Lines changed: 89 additions & 0 deletions
@@ -0,0 +1,89 @@
# Kubernetes AI Conformance Checklist
# Notes: This checklist is based on the Kubernetes AI Conformance document.
# Participants should fill in the 'status', 'evidence', and 'notes' fields for each requirement.

metadata:
  kubernetesVersion: v1.33
  platformName: "Red Hat OpenShift Service on AWS"
  platformVersion: "4.20"
  vendorName: "Red Hat"
  websiteUrl: "https://www.redhat.com/en/technologies/cloud-computing/openshift/aws"
  documentationUrl: "https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4"
  productLogoUrl: "https://www.redhat.com/rhdc/managed-files/Logo-Red_Hat-OpenShift-A-Standard-RGB.svg"
  description: "Red Hat OpenShift Service on AWS offers a reduced-cost solution to create a managed Red Hat OpenShift Service on AWS cluster with a focus on efficiency and security."
  contactEmailAddress: "[Contact Email Address]"

spec:
  accelerators:
    - id: dra_support
      description: "Support Dynamic Resource Allocation (DRA) APIs to enable more flexible and fine-grained resource requests beyond simple counts."
      level: SHOULD
      status: "N/A"
      evidence: []
      notes: ""
  networking:
    - id: ai_inference
      description: "Support the Kubernetes Gateway API with an implementation for advanced traffic management for inference services, which enables capabilities like weighted traffic splitting, header-based routing (for OpenAI protocol headers), and optional integration with service meshes."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/network_apis/gateway-gateway-networking-k8s-io-v1"
      notes: "ROSA exposes this feature from OCP 4.20 directly without modification."
  schedulingOrchestration:
    - id: gang_scheduling
      description: "The platform must allow for the installation and successful operation of at least one gang scheduling solution that ensures all-or-nothing scheduling for distributed AI workloads (e.g. Kueue, Volcano, etc.). To be conformant, the vendor must demonstrate that their platform can successfully run at least one such solution."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/ai_workloads/red-hat-build-of-kueue#gangscheduling"
      notes: "Red Hat build of Kueue enables gang admission."
    - id: cluster_autoscaling
      description: "If the platform provides a cluster autoscaler or an equivalent mechanism, it must be able to scale up/down node groups containing specific accelerator types based on pending pods requesting those accelerators."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/hardware_accelerators/about-hardware-accelerators"
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/machine_management/applying-autoscaling"
        - "https://www.redhat.com/en/blog/autoscaling-nvidia-gpus-on-red-hat-openshift"
      notes: "The OpenShift cluster autoscaler implementation satisfies this requirement. We have tested with several different models of GPU, and users are able to control how their workloads are matched with specific hardware needs."
    - id: pod_autoscaling
      description: "If the platform supports the HorizontalPodAutoscaler, it must function correctly for pods utilizing accelerators. This includes the ability to scale these Pods based on custom metrics relevant to AI/ML workloads."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator#nodes-cma-autoscaling-custom-trigger-prom-gpu_nodes-cma-autoscaling-custom-trigger"
        - "https://developers.redhat.com/articles/2025/08/12/boost-ai-efficiency-gpu-autoscaling-openshift#custom_metrics_autoscaler__keda__and_prometheus"
      notes: "Inherited from OCP 4.20."
  observability:
    - id: accelerator_metrics
      description: "For supported accelerator types, the platform must allow for the installation and successful operation of at least one accelerator metrics solution that exposes fine-grained performance metrics via a standardized, machine-readable metrics endpoint. This must include a core set of metrics for per-accelerator utilization and memory usage. Additionally, other relevant metrics such as temperature, power draw, and interconnect bandwidth should be exposed if the underlying hardware or virtualization layer makes them available. The list of metrics should align with emerging standards, such as OpenTelemetry metrics, to ensure interoperability. The platform may provide a managed solution, but this is not required for conformance."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/enable-gpu-monitoring-dashboard.html"
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/hardware_accelerators/nvidia-gpu-architecture"
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/hardware_accelerators/amd-gpu-operator"
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/red_hat_build_of_opentelemetry/index"
      notes: "As part of the OpenShift observability solution, OpenShift provides comprehensive support for AI accelerators (e.g. NVIDIA, AMD) through dedicated GPU operators that enable standardized metrics collection and monitoring. The NVIDIA GPU Operator integrates DCGM-based monitoring, exposing GPU utilization (percent), power consumption (watts), temperature (Celsius), and memory metrics. The AMD GPU Operator with ROCm integration provides equivalent AI accelerator monitoring capabilities. GPU telemetry is exposed via DCGM Exporter for Prometheus consumption through /metrics endpoints. The OpenShift observability solution also provides native integration with OpenTelemetry standards via the Red Hat build of OpenTelemetry."
    - id: ai_service_metrics
      description: "Provide a monitoring system capable of discovering and collecting metrics from workloads that expose them in a standard format (e.g. Prometheus exposition format). This ensures easy integration for collecting key metrics from common AI frameworks and servers."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/pdf/monitoring/OpenShift_Container_Platform-4.20-Monitoring-en-US.pdf"
      notes: "OpenShift provides a fully integrated monitoring system based on Prometheus, which automatically discovers and scrapes metrics endpoints exposed by workloads in the standard Prometheus exposition format, ensuring seamless integration for collecting and displaying key metrics from common AI frameworks and servers."
  security:
    - id: secure_accelerator_access
      description: "Ensure that access to accelerators from within containers is properly isolated and mediated by the Kubernetes resource management framework (device plugin or DRA) and container runtime, preventing unauthorized access or interference between workloads."
      level: MUST
      status: "Implemented"
      evidence: []
      notes: "TODO: This is verified supported via OCP 4.20 and needs to be verified on the ROSA platform."
  operator:
    - id: robust_controller
      description: "The platform must prove that at least one complex AI operator with a CRD (e.g., Ray, Kubeflow) can be installed and functions reliably. This includes verifying that the operator's pods run correctly, its webhooks are operational, and its custom resources can be reconciled."
      level: MUST
      status: "Implemented"
      evidence: []
      notes: "TODO: This is verified supported via OCP 4.20 and needs to be verified on the ROSA platform."
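The weighted traffic splitting called out in the ai_inference requirement can be sketched with a standard Gateway API HTTPRoute. This is a minimal illustration, not taken from the evidence links; the Gateway, Service names, namespace, and port are all hypothetical.

```yaml
# Hypothetical HTTPRoute splitting inference traffic 90/10 between two
# model-server Services behind a Gateway named "inference-gateway".
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: inference-route
  namespace: demo
spec:
  parentRefs:
    - name: inference-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1/completions
      backendRefs:
        - name: model-v1   # current model server, receives 90% of requests
          port: 8080
          weight: 90
        - name: model-v2   # canary model server, receives 10% of requests
          port: 8080
          weight: 10
```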
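Gang admission via Kueue, as referenced in the gang_scheduling entry, works by creating workloads suspended and letting Kueue unsuspend them only when the whole group can be admitted at once. A minimal sketch, assuming a LocalQueue named `gpu-queue` already exists and with a hypothetical trainer image:

```yaml
# Hypothetical 4-pod training Job; Kueue admits all 4 pods or none.
apiVersion: batch/v1
kind: Job
metadata:
  name: distributed-train
  labels:
    kueue.x-k8s.io/queue-name: gpu-queue  # LocalQueue assumed to exist
spec:
  parallelism: 4
  completions: 4
  suspend: true  # Kueue unsuspends the Job only once quota for all pods is reserved
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: quay.io/example/trainer:latest  # illustrative image
          resources:
            limits:
              nvidia.com/gpu: "1"
```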
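The cluster_autoscaling behavior is typically wired up on OpenShift by attaching a MachineAutoscaler to a GPU MachineSet, so pending pods requesting `nvidia.com/gpu` trigger new accelerator nodes. A sketch with a hypothetical MachineSet name:

```yaml
# Hypothetical MachineAutoscaler scaling a GPU MachineSet between 1 and 4 nodes.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: gpu-machineset-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: example-gpu-machineset  # hypothetical GPU MachineSet name
```

A ClusterAutoscaler resource must also exist cluster-wide for the MachineAutoscaler to take effect.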
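For pod_autoscaling, the cited evidence uses the Custom Metrics Autoscaler (KEDA) with a Prometheus trigger. The following is a rough sketch only; the Deployment name, query, threshold, and server address are illustrative (a real OpenShift setup also needs a TriggerAuthentication against the monitoring stack), and `DCGM_FI_DEV_GPU_UTIL` is the DCGM exporter's GPU utilization metric.

```yaml
# Hypothetical ScaledObject scaling an inference Deployment on mean GPU utilization.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: gpu-inference-scaler
spec:
  scaleTargetRef:
    name: inference-deployment  # hypothetical Deployment
  minReplicaCount: 1
  maxReplicaCount: 8
  triggers:
    - type: prometheus
      metadata:
        serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
        query: avg(DCGM_FI_DEV_GPU_UTIL)  # scale out when mean GPU utilization is high
        threshold: "70"
```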
