Conversation
- Consolidate documentation guide into AGENTS.md with Quick Reference
- Update deployment path to reflect acp-docs integration (/immutable-infra)
- Add `yarn up @alauda/doom` command for framework upgrades
- Update .gitignore to exclude .claude/, .agents/, skills-lock.json
- Remove previously tracked agent/skill files from git

Co-Authored-By: chaozhou@alauda.io
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings and use review commands to manage reviews.
Walkthrough

This PR reorganizes documentation into provider-specific sections, adds AI agent guidelines, updates site metadata and README, removes some legacy docs, and introduces four new Huawei Cloud Stack CRD YAMLs and several provider API and workflow MDX pages.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Deploying alauda-immutable-infra with Cloudflare Pages

| Latest commit: | 74db66f |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://3fce3cf2.alauda-immutable-infra.pages.dev |
| Branch Preview URL: | https://ait-67390.alauda-immutable-infra.pages.dev |
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 9: The document AGENTS.md currently mixes two different example version
values which causes confusion; update the text to explicitly distinguish the
integrated acp-docs/sites.yaml example (referenced by "base: /immutable-infra"
and 'version: "1.0"') from the local example (the container_platform/sites.yaml
example that uses 'version: "4.1"'), by adding a short clarifying sentence near
the existing mentions of sites.yaml (and the container_platform example) that
states which file each version applies to (integrated acp-docs/sites.yaml =
"1.0" for immutable-infra, local/example sites.yaml/container_platform = "4.1"),
so readers can tell the two configs apart.
- Line 32: The line in AGENTS.md that documents Deployment ends with a dangling
pipe character after "/immutable-infra/main/"; remove the trailing '|' or
complete the markdown table/row so the line reads cleanly (e.g., "*
**Deployment**: Integrated into acp-docs; path = `/immutable-infra/{version}/`
(release) or `/immutable-infra/main/` (non-production)") ensuring no stray pipe
remains.
In `@skills-lock.json`:
- Around line 1-10: Resolve the inconsistency between skills-lock.json and
.gitignore by deciding whether the lock file should be versioned; if it should
be tracked (recommended for reproducible builds), remove the skills-lock.json
entry from .gitignore and commit that change, otherwise keep it ignored and
remove the tracked file from the repository index so .gitignore can take effect
(ensure the file remains locally if needed), then commit the removal and verify
the repository no longer tracks skills-lock.json; reference the file name
skills-lock.json and the .gitignore entry to locate the lines to change.
skills-lock.json (Outdated)

```json
{
  "version": 1,
  "skills": {
    "doom-doc-assistant": {
      "source": "alauda/agent-skills",
      "sourceType": "github",
      "computedHash": "9d207d6a8a39fd07122fba04138553da21632776c580f6585bc0f75279760138"
    }
  }
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if skills-lock.json would be ignored by git
git check-ignore -v skills-lock.json
```

Repository: alauda/immutable-infra-docs
Length of output: 53
🏁 Script executed:

```bash
#!/bin/bash
# Check the .gitignore file
if [ -f .gitignore ]; then
  echo "=== .gitignore content (first 20 lines) ==="
  head -20 .gitignore
  echo ""
  echo "=== Searching for skills-lock.json pattern ==="
  grep -n "skills-lock" .gitignore || echo "Pattern not found"
fi
```

Repository: alauda/immutable-infra-docs
Length of output: 402
🏁 Script executed:

```bash
#!/bin/bash
# Check if skills-lock.json is tracked by git
echo "=== Checking git tracking status ==="
git ls-files --stage | grep skills-lock.json || echo "File not in git index"
```

Repository: alauda/immutable-infra-docs
Length of output: 174
Critical: skills-lock.json is both committed to git and listed in .gitignore.
The file appears on line 7 of .gitignore but is currently tracked in the git repository (present in the git index). This contradiction should be resolved:
- If this lock file should be version-controlled (like package-lock.json), remove it from .gitignore
- If it should remain local to each developer, remove this file from the repository and let .gitignore prevent it from being added
Similar lock files (package-lock.json, yarn.lock) are typically version-controlled to ensure reproducible builds.
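If the decision is to stop tracking the file, the untrack-but-keep-locally flow can be sketched as below. A throwaway repository is used purely for illustration; in the real repository only the `git rm --cached` and commit steps apply:

```shell
# Demonstrate untracking a committed file so .gitignore can take effect.
# Uses a scratch repository; only the `git rm --cached` step is needed in practice.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo '{"version":1}' > skills-lock.json
git add skills-lock.json
git -c user.email=ci@example.com -c user.name=ci commit -qm "track lock file"

echo 'skills-lock.json' > .gitignore
# Still tracked: .gitignore has no effect on files already in the index.
git ls-files | grep -q '^skills-lock.json$' && echo "tracked"

# Remove from the index only; the working-tree copy is kept.
git rm -q --cached skills-lock.json
git -c user.email=ci@example.com -c user.name=ci commit -qm "stop tracking lock file"
git ls-files | grep -q '^skills-lock.json$' || echo "untracked"
test -f skills-lock.json && echo "still on disk"
```

Tracking the file (the reviewer's recommendation) is the opposite move: delete the `skills-lock.json` line from `.gitignore` and commit the file.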
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills-lock.json` around lines 1 - 10, Resolve the inconsistency between
skills-lock.json and .gitignore by deciding whether the lock file should be
versioned; if it should be tracked (recommended for reproducible builds), remove
the skills-lock.json entry from .gitignore and commit that change, otherwise
keep it ignored and remove the tracked file from the repository index so
.gitignore can take effect (ensure the file remains locally if needed), then
commit the removal and verify the repository no longer tracks skills-lock.json;
reference the file name skills-lock.json and the .gitignore entry to locate the
lines to change.
- Fix malformed heading (title concatenation)
- Add clear project description
- Update requirements (Yarn 4.9.2, not npm)
- Simplify quick start to essential commands
- Add link to AGENTS.md for detailed guidelines

Co-Authored-By: chaozhou@alauda.io
Actionable comments posted: 1
🧹 Nitpick comments (1)
README.md (1)
7-8: Specify the Node.js version requirement.

The requirements list "Node.js" without a specific version. Consider specifying a minimum or recommended Node.js version (e.g., "Node.js 18.x or later") to prevent compatibility issues.

📝 Suggested improvement

```diff
-- Node.js
+- Node.js 18.x or later
 - Yarn 4.9.2
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 7 - 8, Update the README requirements entry that currently just says "Node.js" to specify a concrete minimum/recommended version (for example "Node.js 18.x or later"); edit the README.md line where "Node.js" appears so it clearly states the version constraint and, optionally, add a short note about why that version is required (e.g., modern features or compatibility).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 8: The README currently lists an invalid Yarn version string "Yarn
4.9.2"; update that reference to a valid, available version (replace "Yarn
4.9.2" with "Yarn 2.4.3") and ensure any related install or setup instructions
that reference the old version are updated to match the new "Yarn 2.4.3" value
for consistency.
---
Nitpick comments:
In `@README.md`:
- Around line 7-8: Update the README requirements entry that currently just says
"Node.js" to specify a concrete minimum/recommended version (for example
"Node.js 18.x or later"); edit the README.md line where "Node.js" appears so it
clearly states the version constraint and, optionally, add a short note about
why that version is required (e.g., modern features or compatibility).
README.md (Outdated)

```markdown
## Requirements

- Node.js
- Yarn 4.9.2
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify Yarn 4.9.2 availability
# Check if Yarn 4.9.2 exists on npm registry
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | grep -q "4.9.2" && echo "✓ Yarn 4.9.2 exists" || echo "✗ Yarn 4.9.2 not found"

# Get all Yarn 4.9.x versions
echo ""
echo "Available Yarn 4.9.x versions:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | grep "^4\.9\." | sort -V

# Get latest Yarn 4.x version
echo ""
echo "Latest Yarn 4.x versions:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | grep "^4\." | sort -V | tail -10
```

Repository: alauda/immutable-infra-docs
Length of output: 152
🏁 Script executed:

```bash
#!/bin/bash
# Get comprehensive Yarn version information
echo "Total Yarn versions available:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | wc -l
echo ""
echo "Latest 20 Yarn versions:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | sort -V | tail -20
echo ""
echo "Check for Yarn 4.x versions:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | grep "^4\."
echo ""
echo "Latest version from dist-tags:"
curl -s https://registry.npmjs.org/yarn | jq -r '.dist-tags'
```

Repository: alauda/immutable-infra-docs
Length of output: 466
🏁 Script executed:

```bash
#!/bin/bash
# Get latest Yarn versions and dist-tags more carefully
echo "Latest dist-tags:"
curl -s https://registry.npmjs.org/yarn | jq '.dist-tags'
echo ""
echo "All Yarn versions (grouped):"
echo "Latest versions:"
curl -s https://registry.npmjs.org/yarn | jq -r '.versions | keys[]' | sort -V | tail -5
```

Repository: alauda/immutable-infra-docs
Length of output: 292
Yarn 4.9.2 does not exist and should be updated to a valid version.
Yarn 4.9.2 is not available on the npm registry. The latest available Yarn version is 2.4.3. Update the README to specify a valid Yarn version that users can actually install.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 8, The README currently lists an invalid Yarn version
string "Yarn 4.9.2"; update that reference to a valid, available version
(replace "Yarn 4.9.2" with "Yarn 2.4.3") and ensure any related install or setup
instructions that reference the old version are updated to match the new "Yarn
2.4.3" value for consistency.
- AGENTS.md: remove hardcoded version numbers, clarify that version maps to branch name
- AGENTS.md: fix sites.yaml description (used by `<AcpApisOverview />` for ACP APIs Guide links)
- sites.yaml: update version to 4.2

Co-Authored-By: chaozhou@alauda.io
- Remove specific version reference to avoid maintenance overhead
- Yarn version is managed by .yarnrc.yml

Co-Authored-By: chaozhou@alauda.io
🧹 Nitpick comments (1)
AGENTS.md (1)
62-62: Use the fully qualified config path for consistency.

Line 62 says `sites.yaml`, while Line 32 explicitly uses `acp-docs/sites.yaml`. Use the same explicit path here to avoid editing the wrong file during agent-driven changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` at line 62, Update the reference in AGENTS.md so it uses the fully qualified config path `acp-docs/sites.yaml` instead of the short `sites.yaml` (match the explicit usage on line 32); modify the table entry that currently reads "`sites.yaml` | Doom template file; enables `<AcpApisOverview />` component to link to ACP APIs Guide`" to use `acp-docs/sites.yaml` so agent-driven edits consistently target the correct config file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@AGENTS.md`:
- Line 62: Update the reference in AGENTS.md so it uses the fully qualified
config path `acp-docs/sites.yaml` instead of the short `sites.yaml` (match the
explicit usage on line 32); modify the table entry that currently reads
"`sites.yaml` | Doom template file; enables `<AcpApisOverview />` component to
link to ACP APIs Guide`" to use `acp-docs/sites.yaml` so agent-driven edits
consistently target the correct config file.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- AGENTS.md
- README.md
- sites.yaml
✅ Files skipped from review due to trivial changes (1)
- sites.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
## Directory Structure
- Reorganize docs by platform (create-cluster, install, manage-nodes, upgrade-cluster)
- Create platform-specific subdirectories for each provider (huawei-dcs, huawei-cloud-stack, vmware-vsphere, bare-metal)
- Add index.mdx files as navigation-only entry points

## Content Improvements
- vmware-vsphere: Update to Static IP (was DHCP), remove provider comparison table
- bare-metal: Correct In-place Upgrade description (small-scale environments, 3-5 machines)
- os-support-matrix: Simplify description, remove platform-specific references
- install/index: Remove duplicate provider list, link to overview/providers instead
- install/huawei-dcs: Remove DCS platform credentials from Prerequisites
- create-cluster/index: Simplify to navigation-only (remove platform-specific details)
- manage-nodes/index: Simplify to navigation-only (remove platform-specific details)
- upgrade-cluster/index: Simplify to navigation-only (remove platform-specific details)

## Bug Fixes
- manage-nodes/huawei-dcs: Fix weight from 60 to 10 to match other directories

Co-Authored-By: chaozhou@alauda.io
- Fix typo: subenetName → subnetName in CRD and docs
- Add missing controlPlaneEndpoint to HCSCluster configuration
- Align HCS docs with CRD definitions

Co-Authored-By: chaozhou@alauda.io
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/en/overview/os-support-matrix.mdx (1)
8-12: ⚠️ Potential issue | 🟡 Minor

Align terminology between intro text and table header.

Line 8 says "MicroOS images," but Line 12 still says "VM Image Version." Use one term consistently to avoid scope confusion.

✏️ Proposed fix

```diff
-| ACP Version | VM Image Version | Kubernetes Version | coredns | etcd | pause |
+| ACP Version | MicroOS Image Version | Kubernetes Version | coredns | etcd | pause |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/os-support-matrix.mdx` around lines 8 - 12, The intro uses "MicroOS images" but the table header still reads "VM Image Version"; update the table header string "VM Image Version" to "MicroOS Image Version" (or alternatively change the intro to "VM images" if you prefer that convention) so the terminology is consistent; locate the header row containing the literal column name "VM Image Version" in the markdown table and adjust it to match the intro phrase "MicroOS images" (also scan for any other occurrences of "VM Image Version" to keep all mentions consistent).
🧹 Nitpick comments (3)
docs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdx (1)
6-8: Refresh query keywords to match current resource naming.

Line 8 uses a legacy-sounding phrase (`hcs ip hostname pool`) that may reduce search precision for this page.

✏️ Proposed fix

```diff
 queries:
   - hcsmachineconfigpool crd
   - hcs config pool
-  - hcs ip hostname pool
+  - hcs machine config pool
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdx` around lines 6 - 8, The page's keyword list uses an outdated term "hcs ip hostname pool" which reduces search precision; update the keywords in the header (the list containing "hcsmachineconfigpool", "hcs config pool", "hcs ip hostname pool") to use the current resource naming—replace "hcs ip hostname pool" with the modern canonical phrase (e.g., "hcsmachineconfigpool" or the official "HCS MachineConfig Pool" term used across docs) and ensure the other entries match the exact, current resource token casing/spelling so search and indexing pick up the correct resource.

docs/en/install/huawei-dcs.mdx (1)
38-49: Make verification commands provider-specific to avoid false positives.

Line 43 currently matches any `infrastructure.cluster.x-k8s.io` CRD, so verification can pass even when DCS CRDs are incomplete.

🔧 Proposed fix

```diff
 # Check if the DCS provider pods are running
-kubectl get pods -n cpaas-system | grep dcs
+kubectl get pods -n cpaas-system | grep -E '(^|-)dcs(-|$)'

 # Verify CRDs are registered
-kubectl get crd | grep infrastructure.cluster.x-k8s.io
+kubectl get crd | grep -E '^dcs.*\.infrastructure\.cluster\.x-k8s\.io$'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/en/install/huawei-dcs.mdx` around lines 38 - 49, The verification commands are too broad and can yield false positives; update the pod and CRD checks to match DCS-specific names. Replace the kubectl CRD check (currently using grep infrastructure.cluster.x-k8s.io) with explicit checks for the DCS CRDs (e.g., kubectl get crd | grep dcsclusters.infrastructure.cluster.x-k8s.io && kubectl get crd | grep dcsmachines.infrastructure.cluster.x-k8s.io && kubectl get crd | grep dcsmachinetemplates.infrastructure.cluster.x-k8s.io && kubectl get crd | grep dcsiphostnamepools.infrastructure.cluster.x-k8s.io) and make the pod check target the DCS controller/pods more specifically (e.g., grep for the dcs controller deployment name or a DCS-specific label instead of a generic "dcs" if available) so the commands in the docs reliably verify DCS components.

docs/en/install/huawei-cloud-stack.mdx (1)
49-56: Narrow CRD verification to HCS resources for clearer output.

Line 50 currently matches all infra CRDs; scoping to HCS CRDs makes verification less noisy.

Suggested fix

```diff
-# Verify CRDs are registered
-kubectl get crd | grep infrastructure.cluster.x-k8s.io
+# Verify HCS CRDs are registered
+kubectl get crd | grep -E '^hcs.*\.infrastructure\.cluster\.x-k8s\.io$'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/install/huawei-cloud-stack.mdx` around lines 49 - 56, Narrow the CRD verification command to only show Huawei Cloud Stack CRDs by changing the current general grep to filter HCS CRD names (e.g., match hcsclusters.infrastructure.cluster.x-k8s.io, hcsmachines.infrastructure.cluster.x-k8s.io, hcsmachinetemplates.infrastructure.cluster.x-k8s.io, hcsmachineconfigpools.infrastructure.cluster.x-k8s.io); update the line with the kubectl command (currently using "kubectl get crd | grep ...") to use a single grep/egrep pattern that matches those HCS resource names so the output is limited to the HCS CRDs listed.
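Both narrowed grep patterns can be sanity-checked offline against sample CRD names before updating the docs. The candidate names below are assumptions based on the provider naming in this PR, not output from a live cluster:

```shell
# Feed candidate CRD names through the proposed patterns and confirm
# each provider pattern matches only its own CRDs.
candidates='dcsclusters.infrastructure.cluster.x-k8s.io
dcsmachines.infrastructure.cluster.x-k8s.io
hcsclusters.infrastructure.cluster.x-k8s.io
hcsmachinetemplates.infrastructure.cluster.x-k8s.io
clusters.cluster.x-k8s.io'

echo "--- DCS matches ---"
printf '%s\n' "$candidates" | grep -E '^dcs.*\.infrastructure\.cluster\.x-k8s\.io$'

echo "--- HCS matches ---"
printf '%s\n' "$candidates" | grep -E '^hcs.*\.infrastructure\.cluster\.x-k8s\.io$'
```

The anchored `^dcs`/`^hcs` prefix plus the escaped group suffix excludes unrelated CRDs such as `clusters.cluster.x-k8s.io`, which the original unanchored `grep infrastructure.cluster.x-k8s.io` would not.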
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/apis/providers/index.mdx`:
- Around line 13-18: The Documentation column currently lists plain text (for
the rows Huawei DCS, Huawei Cloud Stack, VMware vSphere, and Bare Metal) but is
labeled as "Documentation"; update each cell in that column to be an actionable
link (e.g., [DCS Provider APIs](./huawei-dcs/...), [HCS Provider
APIs](./huawei-cloud-stack/...), [vSphere Provider APIs](./vmware-vsphere/...),
[Bare Metal Provider APIs](./bare-metal/...)) pointing to the appropriate
documentation pages, or if links aren't available, rename the "Documentation"
header to something like "Notes" to avoid implying links; modify the table
header "Documentation" and the corresponding row cells in the table to reflect
this change.
In `@docs/en/create-cluster/bare-metal.mdx`:
- Around line 7-20: Update the compound-adjective usage to "bare-metal" where it
modifies nouns: change the header sentence phrase "creating Kubernetes clusters
on bare metal servers" to "creating Kubernetes clusters on bare-metal servers"
and change the bullet "Network configuration for bare metal environments" to
"Network configuration for bare-metal environments" so both instances use the
hyphenated form for consistency.
In `@docs/en/create-cluster/index.mdx`:
- Around line 18-19: Replace the plain text entries "VMware vSphere (In
Development)" and "Bare Metal (Planned)" with proper markdown/MDX links pointing
to their existing documentation pages so the platform list matches others;
locate the entries in docs/en/create-cluster/index.mdx (the two lines containing
VMware vSphere and Bare Metal) and convert each into an <a> or MDX link using
the correct target page slugs (e.g., linking VMware vSphere to its existing
vSphere doc and Bare Metal to its Bare Metal doc) while preserving the status
text in parentheses.
In `@docs/en/overview/providers/index.mdx`:
- Around line 19-33: Change the compound modifiers to use hyphens: update "Bare
metal servers without virtualization" to "Bare-metal servers without
virtualization" (in the table under "Bare Metal") and change "A single provider
version can be compatible with multiple ACP versions" to "A single-provider
version can be compatible with multiple ACP versions" (in the "Provider
Independence" paragraph) so both compound modifiers read cleanly.
In `@docs/en/upgrade-cluster/bare-metal.mdx`:
- Around line 7-17: Change the compound-adjective phrasing for clarity:
hyphenate "bare-metal" where it modifies a noun (e.g., "bare-metal servers") and
make "in-place upgrade" plural to match "upgrades" (or hyphenate "in-place
upgrades") so the sentence reads smoothly (adjust in the sentence containing
"bare metal servers" and the sentence "bare metal upgrades use **in-place
upgrade**" by updating to "bare-metal servers" and "in-place upgrades" or
"in-place upgrade(s)" as appropriate).
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsclusters.yaml`:
- Around line 163-177: The CRD schema currently allows identityRef.name to be
empty by providing a default "" and no required constraint; update the
identityRef object schema (the identityRef property in the provider CRD) to make
the name property required and remove the empty default — e.g., remove the
default: "" for identityRef.properties.name and add a required: ["name"] entry
inside the identityRef schema (and/or add a minLength: 1 for name) so
identityRef.name is validated at admission; apply the same change to the other
occurrence of identityRef (the block referenced at the other location).
- Around line 96-106: Rename the misspelled CRD field key "liseners" to
"listeners" everywhere in the CRD schema (the YAML mapping that currently
defines "liseners: items: properties: ... type: array") so the API shape matches
clients; update any nested references or validation blocks that refer to
"liseners" (e.g., where "liseners" defines items/properties/id and protocolPort)
to use "listeners" instead and run a quick schema validation to ensure no
remaining references to the old key remain.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachineconfigpools.yaml`:
- Around line 62-93: The configs array and its nested objects allow empty
entries; update the schema for configs (MachineConfig) and its nested
MachineConfigNetworkConfig to enforce required fields and non-empty arrays: add
minItems: 1 on configs and on networks, add a required list on MachineConfig
including "hostname" and "networks", and add a required list on
MachineConfigNetworkConfig including at least one of "ipAddress" or "subnetId"
(and/or "subnetName" if you require it), and optionally set minProperties: 1 on
those objects so empty objects cannot be accepted; modify the properties blocks
for hostname, networks, ipAddress, subnetName, and subnetId to reflect these
required constraints.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml`:
- Around line 71-73: The CRD currently exposes adminPassword as a plain string
field (adminPassword) in the HCSMachine CRD and machineTemplate CRD; replace
this with a Secret reference field (e.g., adminPasswordSecretRef or
adminPasswordRef) of type core/v1 ObjectReference (or SecretKeySelector) so
controllers consume credentials from Kubernetes Secrets rather than inline
plaintext. Update the schema entries that define adminPassword in
infrastructure.cluster.x-k8s.io_hcsmachines.yaml and
infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml: remove/deprecate the
string adminPassword, add the new adminPasswordSecretRef property, and ensure
validation/type is an object reference; then update any parsing/usage code in
the HCSMachine controller/operator that reads adminPassword to instead read the
referenced Secret (e.g., where the controller previously accessed
spec.adminPassword, switch to spec.adminPasswordSecretRef and fetch the Secret
key). Ensure OpenAPI schema and example docs are updated to show the Secret
reference pattern.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml`:
- Around line 166-194: Remove the instance-identity fields from the machine
template spec by deleting the providerID and serverId properties from the CRD
schema (the entries labeled providerID and serverId under the machine template
spec), since templates must not contain cloud-assigned unique IDs; keep
volume-related fields (e.g., rootVolume) intact; ensure any usage of
providerID/serverId remains in runtime-assigned places such as Machine.status or
provider-specific machine resources rather than in the template spec and remove
them from any "required" lists if present.
---
Outside diff comments:
In `@docs/en/overview/os-support-matrix.mdx`:
- Around line 8-12: The intro uses "MicroOS images" but the table header still
reads "VM Image Version"; update the table header string "VM Image Version" to
"MicroOS Image Version" (or alternatively change the intro to "VM images" if you
prefer that convention) so the terminology is consistent; locate the header row
containing the literal column name "VM Image Version" in the markdown table and
adjust it to match the intro phrase "MicroOS images" (also scan for any other
occurrences of "VM Image Version" to keep all mentions consistent).
---
Nitpick comments:
In `@docs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdx`:
- Around line 6-8: The page's keyword list uses an outdated term "hcs ip
hostname pool" which reduces search precision; update the keywords in the header
(the list containing "hcsmachineconfigpool", "hcs config pool", "hcs ip hostname
pool") to use the current resource naming—replace "hcs ip hostname pool" with
the modern canonical phrase (e.g., "hcsmachineconfigpool" or the official "HCS
MachineConfig Pool" term used across docs) and ensure the other entries match
the exact, current resource token casing/spelling so search and indexing pick up
the correct resource.
In `@docs/en/install/huawei-cloud-stack.mdx`:
- Around line 49-56: Narrow the CRD verification command to only show Huawei
Cloud Stack CRDs by changing the current general grep to filter HCS CRD names
(e.g., match hcsclusters.infrastructure.cluster.x-k8s.io,
hcsmachines.infrastructure.cluster.x-k8s.io,
hcsmachinetemplates.infrastructure.cluster.x-k8s.io,
hcsmachineconfigpools.infrastructure.cluster.x-k8s.io); update the line with the
kubectl command (currently using "kubectl get crd | grep ...") to use a single
grep/egrep pattern that matches those HCS resource names so the output is
limited to the HCS CRDs listed.
In `@docs/en/install/huawei-dcs.mdx`:
- Around line 38-49: The verification commands are too broad and can yield false
positives; update the pod and CRD checks to match DCS-specific names. Replace
the kubectl CRD check (currently using grep infrastructure.cluster.x-k8s.io)
with explicit checks for the DCS CRDs (e.g., kubectl get crd | grep
dcsclusters.infrastructure.cluster.x-k8s.io && kubectl get crd | grep
dcsmachines.infrastructure.cluster.x-k8s.io && kubectl get crd | grep
dcsmachinetemplates.infrastructure.cluster.x-k8s.io && kubectl get crd | grep
dcsiphostnamepools.infrastructure.cluster.x-k8s.io) and make the pod check
target the DCS controller/pods more specifically (e.g., grep for the dcs
controller deployment name or a DCS-specific label instead of a generic "dcs" if
available) so the commands in the docs reliably verify DCS components.
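Several of the CRD comments above ask for stricter OpenAPI v3 validation (a required `identityRef.name`, non-empty `configs`, and Secret-based credentials instead of a plaintext `adminPassword`). A minimal sketch of what those constraints could look like is below; the field names are taken from the review comments, but the exact shape is illustrative, not the project's actual schema:

```yaml
# Illustrative only — field names assumed from the review comments above.
identityRef:
  type: object
  required: ["name"]        # reject empty identity references at admission
  properties:
    name:
      type: string
      minLength: 1          # no default "", no empty string
configs:
  type: array
  minItems: 1               # an empty machine config pool is rejected
  items:
    type: object
    required: ["hostname", "networks"]
    properties:
      hostname:
        type: string
      networks:
        type: array
        minItems: 1
adminPasswordSecretRef:     # replaces the plaintext adminPassword string
  type: object
  required: ["name", "key"]
  properties:
    name:
      type: string          # Secret name in the machine's namespace
    key:
      type: string          # key within the Secret holding the password
```

With `required` and `minLength`/`minItems` in the schema, invalid objects are rejected by the API server at admission time rather than surfacing later as controller errors.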
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- docs/en/overview/assets/architecture.jpeg is excluded by `!**/*.jpeg`
📒 Files selected for processing (57)
- docs/en/apis/kubernetes_apis/index.mdx
- docs/en/apis/kubernetes_apis/infrastructure.cluster.x-k8s.io/index.mdx
- docs/en/apis/providers/bare-metal/index.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcscluster.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcsmachine.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcsmachinetemplate.mdx
- docs/en/apis/providers/huawei-cloud-stack/index.mdx
- docs/en/apis/providers/huawei-dcs/dcscluster.mdx
- docs/en/apis/providers/huawei-dcs/dcsiphostnamepool.mdx
- docs/en/apis/providers/huawei-dcs/dcsmachine.mdx
- docs/en/apis/providers/huawei-dcs/dcsmachinetemplate.mdx
- docs/en/apis/providers/huawei-dcs/index.mdx
- docs/en/apis/providers/index.mdx
- docs/en/apis/providers/vmware-vsphere/index.mdx
- docs/en/create-cluster/bare-metal.mdx
- docs/en/create-cluster/huawei-cloud-stack.mdx
- docs/en/create-cluster/huawei-dcs.mdx
- docs/en/create-cluster/index.mdx
- docs/en/create-cluster/vmware-vsphere.mdx
- docs/en/how-to/machine-health-check.mdx
- docs/en/how-to/node-configuration.mdx
- docs/en/install.mdx
- docs/en/install/bare-metal.mdx
- docs/en/install/huawei-cloud-stack.mdx
- docs/en/install/huawei-dcs.mdx
- docs/en/install/index.mdx
- docs/en/install/vmware-vsphere.mdx
- docs/en/machine-configuration/index.mdx
- docs/en/machine-configuration/managing.mdx
- docs/en/machine-configuration/node_disruption.mdx
- docs/en/machine-configuration/overview.mdx
- docs/en/manage-nodes/bare-metal.mdx
- docs/en/manage-nodes/huawei-cloud-stack.mdx
- docs/en/manage-nodes/huawei-dcs.mdx
- docs/en/manage-nodes/index.mdx
- docs/en/manage-nodes/vmware-vsphere.mdx
- docs/en/overview/architecture.mdx
- docs/en/overview/os-support-matrix.mdx
- docs/en/overview/providers/bare-metal.mdx
- docs/en/overview/providers/huawei-cloud-stack.mdx
- docs/en/overview/providers/huawei-dcs.mdx
- docs/en/overview/providers/index.mdx
- docs/en/overview/providers/vmware-vsphere.mdx
- docs/en/upgrade-cluster/bare-metal.mdx
- docs/en/upgrade-cluster/huawei-cloud-stack.mdx
- docs/en/upgrade-cluster/huawei-dcs.mdx
- docs/en/upgrade-cluster/index.mdx
- docs/en/upgrade-cluster/vmware-vsphere.mdx
- docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsclusters.yaml
- docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachineconfigpools.yaml
- docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml
- docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml
- docs/shared/crds/providers/huawei-dcs/infrastructure.cluster.x-k8s.io_dcsclusters.yaml
- docs/shared/crds/providers/huawei-dcs/infrastructure.cluster.x-k8s.io_dcsiphostnamepools.yaml
- docs/shared/crds/providers/huawei-dcs/infrastructure.cluster.x-k8s.io_dcsmachines.yaml
- docs/shared/crds/providers/huawei-dcs/infrastructure.cluster.x-k8s.io_dcsmachinetemplates.yaml
💤 Files with no reviewable changes (3)
- docs/en/install.mdx
- docs/en/apis/kubernetes_apis/infrastructure.cluster.x-k8s.io/index.mdx
- docs/en/apis/kubernetes_apis/index.mdx
✅ Files skipped from review due to trivial changes (10)
- docs/en/upgrade-cluster/index.mdx
- docs/en/create-cluster/vmware-vsphere.mdx
- docs/en/overview/architecture.mdx
- docs/en/manage-nodes/vmware-vsphere.mdx
- docs/en/overview/providers/huawei-cloud-stack.mdx
- docs/en/install/index.mdx
- docs/en/apis/providers/vmware-vsphere/index.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcsmachinetemplate.mdx
- docs/en/manage-nodes/index.mdx
- docs/en/install/vmware-vsphere.mdx
```mdx
This document provides instructions for upgrading Kubernetes clusters on bare metal servers.

## Status

📋 **Planned**

Documentation will be available when development begins.

## Key Considerations

Unlike VM-based platforms, bare metal upgrades use **in-place upgrade** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.
```
Fix compound-adjective phrasing for clarity.
Line 7 and Line 17 read awkwardly; hyphenation/pluralization would improve readability.
✏️ Proposed fix

```diff
-This document provides instructions for upgrading Kubernetes clusters on bare metal servers.
+This document provides instructions for upgrading Kubernetes clusters on bare-metal servers.
@@
-Unlike VM-based platforms, bare metal upgrades use **in-place upgrade** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.
+Unlike VM-based platforms, bare-metal upgrades use **in-place upgrades** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.
```
+Unlike VM-based platforms, bare-metal upgrades use **in-place upgrades** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| This document provides instructions for upgrading Kubernetes clusters on bare metal servers. | |
| ## Status | |
| 📋 **Planned** | |
| Documentation will be available when development begins. | |
| ## Key Considerations | |
| Unlike VM-based platforms, bare metal upgrades use **in-place upgrade** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process. | |
| This document provides instructions for upgrading Kubernetes clusters on bare-metal servers. | |
| ## Status | |
| 📋 **Planned** | |
| Documentation will be available when development begins. | |
| ## Key Considerations | |
| Unlike VM-based platforms, bare-metal upgrades use **in-place upgrades** to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process. |
🧰 Tools
🪛 LanguageTool
[uncategorized] ~7-~7: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...ns for upgrading Kubernetes clusters on bare metal servers. ## Status 📋 Planned Do...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[uncategorized] ~17-~17: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...siderations Unlike VM-based platforms, bare metal upgrades use in-place upgrade to av...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/en/upgrade-cluster/bare-metal.mdx` around lines 7 - 17, Change the
compound-adjective phrasing for clarity: hyphenate "bare-metal" where it
modifies a noun (e.g., "bare-metal servers") and make "in-place upgrade" plural
to match "upgrades" (or hyphenate "in-place upgrades") so the sentence reads
smoothly (adjust in the sentence containing "bare metal servers" and the
sentence "bare metal upgrades use **in-place upgrade**" by updating to
"bare-metal servers" and "in-place upgrades" or "in-place upgrade(s)" as
appropriate).
```yaml
          liseners:
            items:
              properties:
                id:
                  type: string
                protocolPort:
                  format: int32
                  type: integer
              type: object
            type: array
          name:
```
Fix the misspelled CRD field key before release.
liseners is very likely a typo for listeners. Publishing this as-is hard-codes the wrong API shape and will reject/prune correctly spelled client payloads.
Proposed fix

```diff
-          liseners:
+          listeners:
             items:
               properties:
                 id:
                   type: string
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsclusters.yaml`
around lines 96 - 106, Rename the misspelled CRD field key "liseners" to
"listeners" everywhere in the CRD schema (the YAML mapping that currently
defines "liseners: items: properties: ... type: array") so the API shape matches
clients; update any nested references or validation blocks that refer to
"liseners" (e.g., where "liseners" defines items/properties/id and protocolPort)
to use "listeners" instead and run a quick schema validation to ensure no
remaining references to the old key remain.
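A quick way to catch misspellings like `liseners` before publishing is to enumerate every property key in the CRD's `openAPIV3Schema` and eyeball (or diff) the resulting list. This is a hypothetical helper, not part of the repo; the field names in the sample schema are abbreviated for illustration.

```python
# Walk a CRD openAPIV3Schema dict and list every property path, so
# misspelled keys such as "liseners" stand out during review.
def collect_property_keys(schema, path=""):
    keys = []
    for name, sub in schema.get("properties", {}).items():
        child = f"{path}.{name}" if path else name
        keys.append(child)
        keys.extend(collect_property_keys(sub, child))
        if sub.get("type") == "array":
            # descend into array item schemas as well
            keys.extend(collect_property_keys(sub.get("items", {}), child + "[]"))
    return keys

schema = {
    "properties": {
        "network": {
            "properties": {
                "liseners": {
                    "type": "array",
                    "items": {"properties": {"id": {"type": "string"}}},
                }
            }
        }
    }
}

print(collect_property_keys(schema))
# → ['network', 'network.liseners', 'network.liseners[].id']
```

Running this over the real YAML (after loading it with a YAML parser) would surface `liseners` next to correctly spelled neighbors.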
```yaml
          identityRef:
            description: IdentityRef is a reference to the identity used to provision
              the cluster.
            properties:
              name:
                default: ""
                description: |-
                  Name of the referent.
                  This field is effectively required, but due to backwards compatibility is
                  allowed to be empty. Instances of this type with an empty value here are
                  almost certainly wrong.
                  More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                type: string
            type: object
            x-kubernetes-map-type: atomic
```
Require identityRef.name at schema level.
identityRef is required, but identityRef.name is currently optional with an empty default. This allows invalid specs through admission and pushes failure to runtime reconciliation.
Proposed fix

```diff
           identityRef:
             description: IdentityRef is a reference to the identity used to provision
               the cluster.
             properties:
               name:
-                default: ""
                 description: |-
                   Name of the referent.
                   This field is effectively required, but due to backwards compatibility is
                   allowed to be empty. Instances of this type with an empty value here are
                   almost certainly wrong.
                   More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                 type: string
+            required:
+            - name
             type: object
             x-kubernetes-map-type: atomic
```

Also applies to: 283-286
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsclusters.yaml`
around lines 163 - 177, The CRD schema currently allows identityRef.name to be
empty by providing a default "" and no required constraint; update the
identityRef object schema (the identityRef property in the provider CRD) to make
the name property required and remove the empty default — e.g., remove the
default: "" for identityRef.properties.name and add a required: ["name"] entry
inside the identityRef schema (and/or add a minLength: 1 for name) so
identityRef.name is validated at admission; apply the same change to the other
occurrence of identityRef (the block referenced at the other location).
```yaml
          configs:
            description: |-
              Configs is the list of machine configurations available in this pool.
              Each config can be claimed by a HCSMachine.
            items:
              description: MachineConfig defines the configuration for a single
                machine in the pool.
              properties:
                hostname:
                  description: Hostname is the hostname to assign to the machine.
                  type: string
                networks:
                  description: Networks is the list of network configurations
                    for the machine.
                  items:
                    description: MachineConfigNetworkConfig defines the network
                      configuration for a machine.
                    properties:
                      ipAddress:
                        description: IpAddress is the static IP address to assign
                          to the machine.
                        type: string
                      subnetName:
                        description: SubnetName is the name of the subnet to use.
                        type: string
                      subnetId:
                        description: SubnetId is the ID of the subnet to use.
                        type: string
                    type: object
                  type: array
              type: object
            type: array
```
Require core config fields to prevent invalid pool entries.
The current schema allows empty configs items and empty network objects, which can admit unusable configurations.
🧩 Proposed fix

```diff
           configs:
@@
             items:
@@
               properties:
                 hostname:
@@
                 networks:
@@
                   items:
@@
                     properties:
                       ipAddress:
@@
                       subnetName:
@@
                       subnetId:
@@
+                    required:
+                    - ipAddress
+                    - subnetId
                     type: object
+                  minItems: 1
                   type: array
+              required:
+              - hostname
+              - networks
               type: object
+            minItems: 1
             type: array
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachineconfigpools.yaml`
around lines 62 - 93, The configs array and its nested objects allow empty
entries; update the schema for configs (MachineConfig) and its nested
MachineConfigNetworkConfig to enforce required fields and non-empty arrays: add
minItems: 1 on configs and on networks, add a required list on MachineConfig
including "hostname" and "networks", and add a required list on
MachineConfigNetworkConfig including at least one of "ipAddress" or "subnetId"
(and/or "subnetName" if you require it), and optionally set minProperties: 1 on
those objects so empty objects cannot be accepted; modify the properties blocks
for hostname, networks, ipAddress, subnetName, and subnetId to reflect these
required constraints.
```yaml
          adminPassword:
            description: AdminPassword is the admin password for the instance.
            type: string
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check for credential patterns in Huawei Cloud Stack CRDs
rg -n -C2 'adminPassword|adminPasswordSecret|secretRef|SecretRef' docs/shared/crds/providers/huawei-cloud-stack/ 2>/dev/null || echo "No matches in crds directory"
echo "---"
# Also check related API documentation
rg -n -C2 'adminPassword|adminPasswordSecret|secretRef|SecretRef' docs/en/apis/providers/huawei-cloud-stack/ 2>/dev/null || echo "No matches in apis directory"
```

Repository: alauda/immutable-infra-docs
Length of output: 1596
Avoid plaintext admin credentials in CRD specs.
Plaintext passwords appear in both infrastructure.cluster.x-k8s.io_hcsmachines.yaml (lines 71-73) and infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml (lines 74-76). This creates security exposure via API visibility, etcd storage, backups, and audit logs. No secret reference alternative currently exists. Replace with a Secret reference field instead.
Proposed approach

```diff
-          adminPassword:
-            description: AdminPassword is the admin password for the instance.
-            type: string
+          adminPasswordSecretRef:
+            description: Reference to a Secret key containing the admin password.
+            properties:
+              name:
+                type: string
+                minLength: 1
+              key:
+                type: string
+                minLength: 1
+            required:
+            - name
+            - key
+            type: object
```
+ type: object📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| adminPassword: | |
| description: AdminPassword is the admin password for the instance. | |
| type: string | |
| adminPasswordSecretRef: | |
| description: Reference to a Secret key containing the admin password. | |
| properties: | |
| name: | |
| type: string | |
| minLength: 1 | |
| key: | |
| type: string | |
| minLength: 1 | |
| required: | |
| - name | |
| - key | |
| type: object |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml`
around lines 71 - 73, The CRD currently exposes adminPassword as a plain string
field (adminPassword) in the HCSMachine CRD and machine template CRD; replace
this with a Secret reference field (e.g., adminPasswordSecretRef or
adminPasswordRef) of type core/v1 ObjectReference (or SecretKeySelector) so
controllers consume credentials from Kubernetes Secrets rather than inline
plaintext. Update the schema entries that define adminPassword in
infrastructure.cluster.x-k8s.io_hcsmachines.yaml and
infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml: remove/deprecate the
string adminPassword, add the new adminPasswordSecretRef property, and ensure
validation/type is an object reference; then update any parsing/usage code in
the HCSMachine controller/operator that reads adminPassword to instead read the
referenced Secret (e.g., where the controller previously accessed
spec.adminPassword, switch to spec.adminPasswordSecretRef and fetch the Secret
key). Ensure OpenAPI schema and example docs are updated to show the Secret
reference pattern.
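A lint like the one this comment asks for can be sketched as a schema walk that flags string-typed properties whose names look like credentials. This is an illustrative helper only (`SUSPECT`, `find_plaintext_credentials`, and the sample spec are invented for the example, not part of the repo tooling).

```python
# Flag schema properties whose name suggests a credential but whose type is a
# plain string rather than an object reference to a Secret.
SUSPECT = ("password", "token", "secret")

def find_plaintext_credentials(schema, path=""):
    hits = []
    for name, sub in schema.get("properties", {}).items():
        child = f"{path}.{name}" if path else name
        if sub.get("type") == "string" and any(s in name.lower() for s in SUSPECT):
            hits.append(child)
        hits.extend(find_plaintext_credentials(sub, child))
        if sub.get("type") == "array":
            hits.extend(find_plaintext_credentials(sub.get("items", {}), child + "[]"))
    return hits

spec = {
    "properties": {
        "adminPassword": {"type": "string"},  # flagged: plaintext credential
        "adminPasswordSecretRef": {            # not flagged: object reference
            "type": "object",
            "properties": {"name": {"type": "string"}},
        },
    }
}

print(find_plaintext_credentials(spec))
# → ['adminPassword']
```

Note the object-typed `adminPasswordSecretRef` passes, which is exactly the shape the proposed fix introduces.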
```yaml
          providerID:
            description: ProviderID is the unique identifier as specified
              by the cloud provider.
            type: string
          rootVolume:
            description: RootVolume is the root disk volume configuration.
            properties:
              format:
                description: Format is the format of the volume.
                type: string
              mountPath:
                description: Path is the path to the volume.
                type: string
              size:
                description: Size is the volume size in GB.
                minimum: 1
                type: integer
              type:
                description: Type is the volume type (e.g., SSD, SAS,
                  SATA).
                minLength: 1
                type: string
            required:
            - size
            - type
            type: object
          serverId:
            description: ServerId is the HCS ECS server instance ID.
            type: string
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify the presence and context of providerID and serverId fields
rg -n -C2 'providerID|serverId' docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml
echo "---"
rg -n -C2 'providerID|serverId' docs/en/apis/providers/huawei-cloud-stack
```

Repository: alauda/immutable-infra-docs
Length of output: 584

🏁 Script executed:

```bash
#!/bin/bash
# Get more context about the file structure and where these fields sit
head -200 docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml | tail -100 | cat -n
```

Repository: alauda/immutable-infra-docs
Length of output: 5823

🏁 Script executed:

```bash
#!/bin/bash
# Find the "template:" key and surrounding structure
grep -n "template:" docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml | head -5
# Also show the beginning of the file to understand structure
head -100 docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml | cat -n
```

Repository: alauda/immutable-infra-docs
Length of output: 5280

🏁 Script executed:

```bash
#!/bin/bash
# Find the HCSMachine CRD file and check for providerID/serverId there
find docs/shared/crds/providers/huawei-cloud-stack -name "*hcsmachine.yaml" -o -name "*hcsmachines.yaml" | head -5
# Also search for any HCSMachine related files
fd -e yaml docs/shared/crds/providers/huawei-cloud-stack | grep -i hcsmachine
```

Repository: alauda/immutable-infra-docs
Length of output: 668

🏁 Script executed:

```bash
#!/bin/bash
# Search for providerID and serverId in the HCSMachine CRD
echo "=== providerID and serverId in HCSMachine ==="
rg -n 'providerID|serverId' docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml
```

Repository: alauda/immutable-infra-docs
Length of output: 210

🏁 Script executed:

```bash
#!/bin/bash
# Get context around providerID and serverId in HCSMachine
echo "=== Context around providerID in HCSMachine ==="
sed -n '155,170p' docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml | cat -n
echo ""
echo "=== Context around serverId in HCSMachine ==="
sed -n '180,195p' docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachines.yaml | cat -n
```

Repository: alauda/immutable-infra-docs
Length of output: 1619
Remove instance identity fields from machine template spec.
providerID and serverId are cloud-provider-assigned instance identifiers that must be unique per machine. These should not be statically defined in template.spec, as a template creates multiple machines and each requires its own unique identity—typically assigned by the HCS controller at runtime.
Proposed fix

```diff
-          providerID:
-            description: ProviderID is the unique identifier as specified
-              by the cloud provider.
-            type: string
@@
-          serverId:
-            description: ServerId is the HCS ECS server instance ID.
-            type: string
```
- type: string📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| providerID: | |
| description: ProviderID is the unique identifier as specified | |
| by the cloud provider. | |
| type: string | |
| rootVolume: | |
| description: RootVolume is the root disk volume configuration. | |
| properties: | |
| format: | |
| description: Format is the format of the volume. | |
| type: string | |
| mountPath: | |
| description: Path is the path to the volume. | |
| type: string | |
| size: | |
| description: Size is the volume size in GB. | |
| minimum: 1 | |
| type: integer | |
| type: | |
| description: Type is the volume type (e.g., SSD, SAS, | |
| SATA). | |
| minLength: 1 | |
| type: string | |
| required: | |
| - size | |
| - type | |
| type: object | |
| serverId: | |
| description: ServerId is the HCS ECS server instance ID. | |
| type: string | |
| rootVolume: | |
| description: RootVolume is the root disk volume configuration. | |
| properties: | |
| format: | |
| description: Format is the format of the volume. | |
| type: string | |
| mountPath: | |
| description: Path is the path to the volume. | |
| type: string | |
| size: | |
| description: Size is the volume size in GB. | |
| minimum: 1 | |
| type: integer | |
| type: | |
| description: Type is the volume type (e.g., SSD, SAS, | |
| SATA). | |
| minLength: 1 | |
| type: string | |
| required: | |
| - size | |
| - type | |
| type: object |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/shared/crds/providers/huawei-cloud-stack/infrastructure.cluster.x-k8s.io_hcsmachinetemplates.yaml`
around lines 166 - 194, Remove the instance-identity fields from the machine
template spec by deleting the providerID and serverId properties from the CRD
schema (the entries labeled providerID and serverId under the machine template
spec), since templates must not contain cloud-assigned unique IDs; keep
volume-related fields (e.g., rootVolume) intact; ensure any usage of
providerID/serverId remains in runtime-assigned places such as Machine.status or
provider-specific machine resources rather than in the template spec and remove
them from any "required" lists if present.
Align code blocks with list items by adding 2 spaces indentation Co-Authored-By: chaozhou@alauda.io
- Update API references and index files - Add new compound words to spell dictionary - Sync HCS with DCS documentation patterns Co-Authored-By: chaozhou@alauda.io
Actionable comments posted: 3
♻️ Duplicate comments (1)
docs/en/upgrade-cluster/bare-metal.mdx (1)
17-17: ⚠️ Potential issue | 🟡 Minor: Fix sentence grammar on Line 17.
Use an article before “in-place upgrade approach” to make the sentence read naturally.
✏️ Proposed wording tweak

```diff
-Unlike VM-based platforms, bare-metal upgrades use **in-place upgrade** approach to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.
+Unlike VM-based platforms, bare-metal upgrades use an **in-place upgrade** approach to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/upgrade-cluster/bare-metal.mdx` at line 17, The sentence uses "in-place upgrade approach" without an article; update the sentence containing "in-place upgrade approach" to read "an in-place upgrade approach" so it becomes: "Unlike VM-based platforms, bare-metal upgrades use an in-place upgrade approach to avoid lengthy physical machine reinstallation. Critical data persists on local disks during the upgrade process." Ensure only the article is added and punctuation/spacing is preserved.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/manage-nodes/huawei-cloud-stack.mdx`:
- Around line 27-29: Update the instructions that currently read "Only modify
values enclosed in `<>` brackets" to explicitly state that any concrete
hostnames/IPs shown in example blocks or configuration snippets (for example the
hard-coded hostnames and IP addresses in the Huawei Cloud Stack example section)
must also be replaced with environment-specific values to avoid IP/hostname
reuse and provisioning conflicts; edit the wording near that sentence and add a
short clarifying note referencing the example/configuration snippets in the
Huawei Cloud Stack section (the block that contains the concrete hostnames/IPs)
to ensure readers replace those values as well as any `<>` placeholders.
- Around line 314-321: The exported HCSMachineTemplate should be sanitized
before re-applying: after running the kubectl get > new-template.yaml step, edit
new-template.yaml to set metadata.name to a unique <new-template> and remove
server-assigned fields (metadata.resourceVersion, metadata.uid,
metadata.creationTimestamp, metadata.managedFields) and the top-level status
section; then run kubectl apply -f new-template.yaml -n cpaas-system. Apply the
same sanitization/rename instructions to the second occurrence of the
export/apply workflow in the docs (the block referenced after line 325) so users
do not attempt to re-create a resource with server-generated metadata.
- Around line 182-186: Replace the invalid null value for selector.matchLabels:
in the manifest where "selector.matchLabels: null" appears, remove the null and
either use an empty map "{}" or provide explicit key/value labels and ensure
those same labels are added to spec.template.metadata.labels so the Cluster API
selector and pod/template labels match (e.g., set selector.matchLabels to a
label set and mirror it in spec.template.metadata.labels).
---
Duplicate comments:
In `@docs/en/upgrade-cluster/bare-metal.mdx`:
- Line 17: The sentence uses "in-place upgrade approach" without an article;
update the sentence containing "in-place upgrade approach" to read "an in-place
upgrade approach" so it becomes: "Unlike VM-based platforms, bare-metal upgrades
use an in-place upgrade approach to avoid lengthy physical machine
reinstallation. Critical data persists on local disks during the upgrade
process." Ensure only the article is added and punctuation/spacing is preserved.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
.cspell/compound.txtdocs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdxdocs/en/apis/providers/index.mdxdocs/en/create-cluster/bare-metal.mdxdocs/en/create-cluster/index.mdxdocs/en/install/huawei-cloud-stack.mdxdocs/en/install/huawei-dcs.mdxdocs/en/manage-nodes/huawei-cloud-stack.mdxdocs/en/overview/os-support-matrix.mdxdocs/en/overview/providers/index.mdxdocs/en/upgrade-cluster/bare-metal.mdx
🚧 Files skipped from review as they are similar to previous changes (6)
- docs/en/install/huawei-cloud-stack.mdx
- docs/en/apis/providers/huawei-cloud-stack/hcsmachineconfigpool.mdx
- docs/en/create-cluster/index.mdx
- docs/en/install/huawei-dcs.mdx
- docs/en/apis/providers/index.mdx
- docs/en/overview/providers/index.mdx
```mdx
- Only modify values enclosed in `<>` brackets
- Replace placeholder values with your environment-specific settings
- Preserve all other default configurations unless explicitly required
```
Clarify the placeholder rule to avoid accidental IP/hostname reuse.
Line 27 says only <> values should be changed, but Lines 61-72 include concrete hostnames/IPs that users must also edit. This contradiction can cause failed provisioning or network conflicts.
Suggested doc fix

```diff
-When working with the configurations in this document:
-- Only modify values enclosed in `<>` brackets
-- Replace placeholder values with your environment-specific settings
+When working with the configurations in this document:
+- Modify values enclosed in `<>` brackets
+- Also replace environment-specific example literals (for example hostnames, IPs, and file names)
+- Do not reuse sample IP addresses/hostnames in production
 - Preserve all other default configurations unless explicitly required
```

```diff
-  - hostname: worker-1
+  - hostname: <worker-1-hostname>
     networks:
       - subnetName: <subnet-name>
-        ipAddress: 192.168.1.21
-  - hostname: worker-2
+        ipAddress: <worker-1-ip>
+  - hostname: <worker-2-hostname>
     networks:
       - subnetName: <subnet-name>
-        ipAddress: 192.168.1.22
-  - hostname: worker-3
+        ipAddress: <worker-2-ip>
+  - hostname: <worker-3-hostname>
     networks:
       - subnetName: <subnet-name>
-        ipAddress: 192.168.1.23
+        ipAddress: <worker-3-ip>
```

Also applies to: 61-72
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/en/manage-nodes/huawei-cloud-stack.mdx` around lines 27 - 29, Update the
instructions that currently read "Only modify values enclosed in `<>` brackets"
to explicitly state that any concrete hostnames/IPs shown in example blocks or
configuration snippets (for example the hard-coded hostnames and IP addresses in
the Huawei Cloud Stack example section) must also be replaced with
environment-specific values to avoid IP/hostname reuse and provisioning
conflicts; edit the wording near that sentence and add a short clarifying note
referencing the example/configuration snippets in the Huawei Cloud Stack section
(the block that contains the concrete hostnames/IPs) to ensure readers replace
those values as well as any `<>` placeholders.
```yaml
      selector:
        matchLabels: null
      template:
        spec:
          clusterName: <cluster-name>
```
🧩 Analysis chain
🌐 Web query:
Cluster API v1beta1 MachineDeployment selector.matchLabels null validation
💡 Result:
In cluster.x-k8s.io/v1beta1 MachineDeployment, .spec.selector.matchLabels is a map/object, not a nullable field. So setting it explicitly to null (as some provider docs/examples show) will fail Kubernetes schema validation because null is not an object. [1], [2], [3]
Fix: either omit matchLabels entirely or set it to an empty map ({}) or (recommended) set real labels that also appear in .spec.template.metadata.labels (because the selector must match the template labels). [1], [3]
Example:

```yaml
spec:
  selector:
    matchLabels: {}
  template:
    metadata:
      labels: {}
```

Or with explicit labels:

```yaml
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: md-0
  template:
    metadata:
      labels:
        cluster.x-k8s.io/deployment-name: md-0
```

Sources:
[1] MachineDeployment schema (v1beta1): matchLabels is type object and selector must match template labels (manifests.fyi)
[2] CAPA docs example showing matchLabels: null
[3] CAPZ docs example showing matchLabels: null
🌐 Web query:
Kubernetes Cluster API MachineDeployment spec.selector.matchLabels required labels
💡 Result:
In Cluster API, a MachineDeployment does not require any specific label keys in `spec.selector.matchLabels`. What's required is the relationship between the selector and the template:

- `spec.selector` is the label selector for the Machines/MachineSets the MachineDeployment will manage, and it must match the machine template's labels (i.e., the labels you set in `spec.template.metadata.labels`). This means every key/value you put in `spec.selector.matchLabels` must also exist with the same value in `spec.template.metadata.labels`. [1]
- Cluster API defines/uses well-known labels like `cluster.x-k8s.io/cluster-name` (user-managed) and `cluster.x-k8s.io/deployment-name` (set by Cluster API on Machines controlled by a MachineDeployment), but these are not "required labels" for the MachineDeployment selector; rather, they're commonly used/recommended for identification and grouping. [2]
Practical guidance: pick a selector that’s unique within the namespace (commonly include cluster.x-k8s.io/cluster-name plus a pool/role label) to avoid accidentally adopting unrelated MachineSets/Machines. [1]
Sources: [1] [2]
🌐 Web query:
Cluster API MachineDeployment template metadata labels selector matching
💡 Result:
In Cluster API, a MachineDeployment’s .spec.selector must match the Machine template’s labels (.spec.template.metadata.labels). In other words, the Machines created by the template must satisfy the selector, or the deployment can’t correctly target/adopt MachineSets/Machines. [1]
Practical rules:

- If you use `selector.matchLabels`, every `key: value` in `matchLabels` must appear identically in `.spec.template.metadata.labels`. (The template may have extra labels beyond the selector.)
- If you use `selector.matchExpressions`, the template's labels must satisfy the expression requirements (they are ANDed). [2]

Minimal example:

```yaml
spec:
  selector:
    matchLabels:
      nodepool: pool-a
  template:
    metadata:
      labels:
        nodepool: pool-a  # must match selector
        extra: anything   # allowed (not selected on)
```

Also note: template labels propagate down to the generated MachineSet labels and its template labels. [3]
Sources: [1] [2] [3]
Remove null from selector.matchLabels and set explicit matching labels.
At Line 183, selector.matchLabels: null is invalid in Cluster API v1beta1—matchLabels is a map/object type and will fail Kubernetes schema validation. The example must include either an empty map {} or explicit labels that also appear in spec.template.metadata.labels (selector and template labels must match).
Suggested doc fix

```diff
       selector:
-        matchLabels: null
+        matchLabels:
+          nodepool: <worker-nodepool-label>
       template:
+        metadata:
+          labels:
+            nodepool: <worker-nodepool-label>
         spec:
           clusterName: <cluster-name>
```
Verify each finding against the current code and only fix it if needed.
In `@docs/en/manage-nodes/huawei-cloud-stack.mdx` around lines 182 - 186, Replace
the invalid null value for selector.matchLabels: in the manifest where
"selector.matchLabels: null" appears, remove the null and either use an empty
map "{}" or provide explicit key/value labels and ensure those same labels are
added to spec.template.metadata.labels so the Cluster API selector and
pod/template labels match (e.g., set selector.matchLabels to a label set and
mirror it in spec.template.metadata.labels).
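The selector/template rule above boils down to two checks: `matchLabels` must be a map (an explicit `null` fails schema validation), and every selector entry must appear identically in the template labels. A minimal sketch in Python (hypothetical helper names; this mimics the validation, it is not the Cluster API implementation, and real defaulting may treat an omitted map differently than this sketch does):

```python
# Check the MachineDeployment rule: selector.matchLabels is a non-null map
# whose entries are a subset of spec.template.metadata.labels.
def selector_matches_template(spec):
    match = (spec.get("selector") or {}).get("matchLabels")
    if match is None:  # explicit null (or missing map) — rejected in this sketch
        return False
    labels = spec.get("template", {}).get("metadata", {}).get("labels", {})
    return all(labels.get(k) == v for k, v in match.items())

bad = {"selector": {"matchLabels": None}, "template": {"spec": {}}}
good = {
    "selector": {"matchLabels": {"nodepool": "pool-a"}},
    "template": {"metadata": {"labels": {"nodepool": "pool-a", "extra": "x"}}},
}

print(selector_matches_template(bad), selector_matches_template(good))
# → False True
```

Extra template labels (like `extra: x` above) are fine; only the selector keys must be mirrored.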
```bash
kubectl get hcsmachinetemplate <current-template> -n cpaas-system -o yaml > new-template.yaml
```

2. **Deploy New Template**

   ```bash
   kubectl apply -f new-template.yaml -n cpaas-system
   ```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, check if the file exists and read the relevant lines
fd huawei-cloud-stack.mdx
```

Repository: alauda/immutable-infra-docs
Length of output: 296
🏁 Script executed:
```shell
# Read the specific lines around 314-321
if [ -f "docs/en/manage-nodes/huawei-cloud-stack.mdx" ]; then
  sed -n '300,340p' docs/en/manage-nodes/huawei-cloud-stack.mdx | cat -n
fi
```

Repository: alauda/immutable-infra-docs
Length of output: 1526
🌐 Web query:
Kubernetes kubectl get -o yaml resource cloning metadata fields required removal resourceVersion uid creationTimestamp managedFields status
💡 Result:
When you "clone" a live object by doing `kubectl get <type> <name> -o yaml` and re-applying it as a new resource, you must remove fields that are server-assigned / runtime-only (otherwise you'll get validation/conflict errors, or you'll accidentally carry over cluster-specific state).
Remove these (almost always)

- `.metadata.resourceVersion` (optimistic concurrency version) (thanhtunguet.info)
- `.metadata.uid` (cluster-assigned unique ID) (thanhtunguet.info)
- `.metadata.creationTimestamp` (runtime timestamp) (thanhtunguet.info)
- `.metadata.managedFields` (server-side apply field ownership tracking) (kubernetes.io)
- `status` (runtime state; generally not valid/meaningful to submit on create) (thanhtunguet.info)
Often also removed for "clean" declarative manifests:

- `.metadata.generation` (tracks spec changes; cluster/runtime-derived) (thanhtunguet.info)
Common resource-specific removals

- Service: `.spec.clusterIP`, `.spec.clusterIPs` (cluster-assigned) (thanhtunguet.info)
Practical one-liner (yq v4)

```shell
kubectl get <type> <name> -o yaml \
  | yq 'del(
      .metadata.resourceVersion,
      .metadata.uid,
      .metadata.creationTimestamp,
      .metadata.managedFields,
      .metadata.generation,
      .status
    )' \
  > cloned.yaml
```

Then edit `metadata.name` (and `metadata.namespace` if needed) and `kubectl apply -f cloned.yaml`.
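The same sanitization can be expressed without `yq`. A minimal Python sketch of the idea (the field list comes from the discussion above; the sample manifest is illustrative, not a real exported object):

```python
SERVER_ASSIGNED_METADATA = {
    "resourceVersion", "uid", "creationTimestamp", "managedFields", "generation",
}

def sanitize(manifest: dict) -> dict:
    """Strip server-assigned fields so the object can be re-applied as a new resource."""
    cleaned = {k: v for k, v in manifest.items() if k != "status"}  # drop runtime status
    cleaned["metadata"] = {
        k: v for k, v in cleaned.get("metadata", {}).items()
        if k not in SERVER_ASSIGNED_METADATA
    }
    return cleaned

# Hypothetical exported object, as `kubectl get ... -o yaml` would return it
exported = {
    "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
    "kind": "HCSMachineTemplate",
    "metadata": {
        "name": "old-template", "namespace": "cpaas-system",
        "uid": "abc-123", "resourceVersion": "42",
        "creationTimestamp": "2024-01-01T00:00:00Z",
    },
    "spec": {"template": {"spec": {}}},
    "status": {"ready": True},
}

cleaned = sanitize(exported)
cleaned["metadata"]["name"] = "new-template"  # rename before applying
print(sorted(cleaned["metadata"]))  # → ['name', 'namespace']
```

Only `name`/`namespace` survive in metadata, which is exactly the state a manifest should be in before `kubectl apply` creates it as a new object.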
Citations:
- 1: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
- 2: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
- 3: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
- 4: https://kubernetes.io/docs/reference/using-api/server-side-apply?utm_source=openai
- 5: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
- 6: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
- 7: https://thanhtunguet.info/posts/backup-and-restore-kubernetes-namespace/?utm_source=openai
The workflow for creating new machine templates is missing required metadata sanitization and renaming steps.
The documentation instructs users to export an existing HCSMachineTemplate (lines 314-321) and apply it directly without editing. This will fail or cause unintended updates because exported resources retain server-assigned metadata fields. Per Kubernetes best practices, before applying an exported resource as a new object, you must:
- Change `metadata.name` to a unique name
- Remove server-generated fields: `metadata.resourceVersion`, `metadata.uid`, `metadata.creationTimestamp`, `metadata.managedFields`, and `status`
Without these steps, users will encounter validation or conflict errors.
Suggested doc fix

````diff
   kubectl get hcsmachinetemplate <current-template> -n cpaas-system -o yaml > new-template.yaml
   ```
+
+  Then edit `new-template.yaml` before applying:
+  - Change `metadata.name` to `<new-template>`
+  - Remove server-generated fields such as:
+    - `metadata.resourceVersion`
+    - `metadata.uid`
+    - `metadata.creationTimestamp`
+    - `metadata.managedFields`
+    - `status`
````

Also applies to: 325-330
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `docs/en/manage-nodes/huawei-cloud-stack.mdx` around lines 314-321: the exported HCSMachineTemplate should be sanitized before re-applying. After running the `kubectl get ... > new-template.yaml` step, edit `new-template.yaml` to set `metadata.name` to a unique `<new-template>` and remove server-assigned fields (`metadata.resourceVersion`, `metadata.uid`, `metadata.creationTimestamp`, `metadata.managedFields`) and the top-level `status` section; then run `kubectl apply -f new-template.yaml -n cpaas-system`. Apply the same sanitization/rename instructions to the second occurrence of the export/apply workflow in the docs (the block referenced after line 325) so users do not attempt to re-create a resource with server-generated metadata.
Summary by CodeRabbit
New Features
Documentation
Chores