
Add docs for 26.3.0 release #353

Open
a-mccarthy wants to merge 2 commits into NVIDIA:main from a-mccarthy:dev-26.3.0

Conversation

@a-mccarthy
Collaborator

No description provided.

@github-actions

Documentation preview

https://nvidia.github.io/cloud-native-docs/review/pr-353


.. _example-custom-mig-configuration:

Example: Custom MIG Configuration


Is this example making a distinction between a custom MIG configuration applied after installation and the previous example?

Collaborator Author

@rajathagasthya Yes, I believe that was the original intention of the 2 examples. It feels a bit redundant though. Do you see these as separate use cases where customers would need different examples?

@rajathagasthya Mar 4, 2026

It's not too important, but maybe worth keeping. We can make it more obvious in the example title. I suggest the following:

  1. Move the step Optional: Monitor the MIG Manager logs to confirm the new MIG geometry is applied from this example to the previous example.
  2. Rename this example to something like Configure Custom MIG Configuration to an Existing GPU Operator Deployment and say:

To configure a running GPU Operator deployment with a custom MIG config:

(keep steps 1 through 4; step 5 isn't necessary)
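A custom MIG config of the kind discussed here might look like the following sketch; the ConfigMap name ``custom-mig-config``, the profile name ``custom-config``, and the device counts are illustrative assumptions, not taken from the docs under review:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-mig-config        # illustrative name; must match the name referenced by MIG Manager
  namespace: gpu-operator
data:
  config.yaml: |
    version: v1
    mig-configs:
      custom-config:             # selected by labeling nodes nvidia.com/mig.config=custom-config
        - devices: all
          mig-enabled: true
          mig-devices:
            "1g.10gb": 2
            "3g.40gb": 1
```

The ``mig-configs`` layout follows the ``nvidia-mig-parted`` configuration format that MIG Manager consumes.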

Signed-off-by: Abigail McCarthy <20771501+a-mccarthy@users.noreply.github.com>

Co-authored-by: Rajath Agasthya <rajathagasthya@gmail.com>

* Added support for including extra manifests with the Helm chart.

* Added the ``sandboxWorkloads.mode`` field, with ``kubevirt`` and ``kata`` as valid values, to help manage sandbox workloads.
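Assuming the field lands under the top-level ``sandboxWorkloads`` key in the Helm values (an assumption based on the bullet above, not a confirmed chart path), usage might look like:

```yaml
sandboxWorkloads:
  enabled: true
  mode: kubevirt   # assumed valid values per the note above: "kubevirt" or "kata"
```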
Collaborator Author

need to add more context to this

Signed-off-by: Abigail McCarthy <20771501+a-mccarthy@users.noreply.github.com>
| `570.195.03 <https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-570-195-03/index.html>`_
| `550.163.01 <https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-550-163-01/index.html>`_
| `535.274.03 <https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-535-274-03/index.html>`_
- | `590.48.01 <https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-590-48-01/index.html>`_
Collaborator Author

@tariq1890, what driver versions do we want to list for 26.3.0?


* Improved the GPU Operator to deploy on heterogeneous clusters with different operating systems on GPU nodes.

* Fixed issues where the GPU Operator was not detecting the correct operating system on heterogeneous clusters. Now the GPU Operator uses the OS version labels from GPU worker nodes, added by NFD, when determining what OS-specific paths to use for repository configuration files. (`Issue #562 <https://github.com/NVIDIA/gpu-operator/issues/562>`_, `PR #2138 <https://github.com/NVIDIA/gpu-operator/pull/2138>`_)
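For reference, the NFD OS-release labels involved look like the following on a GPU worker node (label keys are the standard NFD ones; the values are examples):

```yaml
# Node labels published by Node Feature Discovery (values are illustrative)
feature.node.kubernetes.io/system-os_release.ID: "ubuntu"
feature.node.kubernetes.io/system-os_release.VERSION_ID: "22.04"
```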
Contributor

"was not using getting" doesn't look right


* Fixed an issue on OpenShift clusters where the ``dcgm-exporter`` pod was bound to a Security Context Constraint (SCC) other than the ``nvidia-dcgm-exporter`` SCC that the GPU Operator creates. (`PR #2122 <https://github.com/NVIDIA/gpu-operator/pull/2122>`_)

* Fixed an issue where the GPU Operator was not correctly cleaning up daemonsets. (`PR #2081 <https://github.com/NVIDIA/gpu-operator/pull/2081>`_)
Contributor

This probably needs to be a link
