docs(install): add documentation for generic Kubernetes deployment #408
Conversation
📝 Walkthrough

Renames the Kubernetes install index to be distribution-agnostic, adds a Talos-recommended subsection, and introduces a new comprehensive "Generic Kubernetes" guide (k3s, kubeadm, RKE2, etc.) describing prerequisites, sysctl and Kubernetes configuration, install steps, examples, troubleshooting, and references.
Summary of Changes

Hello @lexfrei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances Cozystack's documentation by providing a dedicated guide for deploying the platform on generic Kubernetes distributions. This expansion allows users to integrate Cozystack with existing Kubernetes clusters or choose non-Talos environments, offering greater flexibility and accessibility beyond the previously recommended Talos Linux setup. The new guide ensures users have all necessary information, from prerequisites to troubleshooting, for a successful deployment.
Code Review
This pull request adds valuable documentation for deploying Cozystack on generic Kubernetes distributions. The new guide is comprehensive, covering prerequisites, host configuration, Kubernetes setup for different distributions, and installation steps, complete with Ansible examples and a troubleshooting section. The changes are well-structured and clear. I've identified a few areas for improvement in the new documentation to enhance correctness and prevent potential user issues, including a likely version typo, potentially risky sysctl values, and inconsistent network configuration examples.
```
fs.inotify.max_user_watches = 2099999999
fs.inotify.max_user_instances = 2099999999
fs.inotify.max_queued_events = 2099999999
```
The inotify limits are set to an extremely high and identical value of 2099999999. This is likely a copy-paste error and poses a risk of kernel memory exhaustion, as each watch consumes non-swappable kernel memory. The values for max_user_watches, max_user_instances, and max_queued_events serve different purposes and should not be identical.
It is recommended to use more conventional and safer values, for example:
```
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
fs.inotify.max_queued_events = 16384
```
Please update these values in both the 99-cozystack.conf example and the Ansible playbook example (lines 266-268).
```diff
-fs.inotify.max_user_watches = 2099999999
-fs.inotify.max_user_instances = 2099999999
-fs.inotify.max_queued_events = 2099999999
+fs.inotify.max_user_watches = 524288
+fs.inotify.max_user_instances = 512
+fs.inotify.max_queued_events = 16384
```
These values are intentional and required for Cozystack.
Cozystack runs FluxCD, multiple controllers, LINSTOR, KubeOVN/Cilium, and nested Kubernetes clusters — all creating massive inotify watch counts. Standard values (524288) cause too many open files errors.
These values were validated in production. They work because:
- Watches are allocated on-demand, not pre-allocated
- Kernel only uses memory for actual watches
- Cozystack architecture genuinely requires high counts
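For reference, assembled as a sysctl drop-in file the disputed settings look like this (a sketch; the `99-cozystack.conf` file name comes from the review comment above, and a later commit in this thread replaces these values with 524288 / 8192 / 65536):

```
# /etc/sysctl.d/99-cozystack.conf (sketch)
# Values as defended in this reply; a later commit in this PR lowers them
# to 524288 / 8192 / 65536.
fs.inotify.max_user_watches = 2099999999
fs.inotify.max_user_instances = 2099999999
fs.inotify.max_queued_events = 2099999999
```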
```yaml
ipv4-pod-cidr: "10.42.0.0/16"
ipv4-pod-gateway: "10.42.0.1"
ipv4-svc-cidr: "10.43.0.0/16"
```
The CIDR ranges in the cozystack-config.yaml example are inconsistent with those provided in the kubeadm configuration example. The documentation correctly states that these values must match the Kubernetes cluster configuration. To prevent deployment failures for users following the guide, these examples should be consistent.
The kubeadm config uses `podSubnet: "10.244.0.0/16"` and `serviceSubnet: "10.96.0.0/16"`, while the `cozystack-config.yaml` example uses `ipv4-pod-cidr: "10.42.0.0/16"` and `ipv4-svc-cidr: "10.43.0.0/16"`.
The same inconsistency is present in the Ansible deployment playbook variables (lines 295-296). I suggest aligning the cozystack-config.yaml and Ansible playbook examples with the kubeadm configuration.
```diff
-ipv4-pod-cidr: "10.42.0.0/16"
-ipv4-pod-gateway: "10.42.0.1"
-ipv4-svc-cidr: "10.43.0.0/16"
+ipv4-pod-cidr: "10.244.0.0/16"
+ipv4-pod-gateway: "10.244.0.1"
+ipv4-svc-cidr: "10.96.0.0/16"
```
Fixed in 95848e8. Added explicit warning about distribution-specific CIDR defaults:

- k3s: `10.42.0.0/16`, `10.43.0.0/16`
- kubeadm: `10.244.0.0/16`, `10.96.0.0/16`
- RKE2: `10.42.0.0/16`, `10.43.0.0/16`
The example now clearly states it's for k3s, with a note to adjust for other distributions.
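For illustration, the k3s-default variant of that snippet would look like this (a sketch using the key names from the excerpt above; swap in the kubeadm or RKE2 CIDRs listed in the reply when targeting those distributions):

```yaml
# cozystack-config.yaml networking keys for k3s defaults (sketch).
# For kubeadm, use 10.244.0.0/16 / 10.244.0.1 / 10.96.0.0/16 instead.
ipv4-pod-cidr: "10.42.0.0/16"
ipv4-pod-gateway: "10.42.0.1"
ipv4-svc-cidr: "10.43.0.0/16"
```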
```markdown
Cozystack has been tested on:

- **k3s** v1.32+ (recommended for single-node and edge deployments)
```
The supported k3s version v1.32+ appears to be a typo, as this version of Kubernetes (and therefore k3s) has not been released. The latest stable Kubernetes version is v1.30. Please verify and correct the minimum supported k3s version. For consistency with kubeadm and RKE2 in this document, v1.28+ would be a reasonable value.
```diff
-- **k3s** v1.32+ (recommended for single-node and edge deployments)
+- **k3s** v1.28+ (recommended for single-node and edge deployments)
```
k3s v1.32+ is correct. Kubernetes 1.32 was released in December 2024, and k3s versions follow Kubernetes releases.
This documentation was written and tested with k3s v1.35 which is current. The v1.32+ requirement reflects the minimum tested version for the isp-full-generic bundle.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@content/en/docs/install/kubernetes/_index.md`:
- Around line 27-29: The phrase "command line" in the third list item should be
hyphenated as "command-line"; update the text in the list item that currently
reads "[Using talosctl]({{% ref \"./talosctl\" %}}), a specialized command line
tool for managing Talos" to use "command-line" so the item reads that talosctl
is "a specialized command-line tool for managing Talos."
In `@content/en/docs/install/kubernetes/generic.md`:
- Around line 97-104: The table separator rows use compact pipes (e.g.
|-----------|-------------|) and trigger MD060; update the separator lines for
both Markdown tables so there are spaces around each pipe (e.g. | ----------- |
------------- |) to satisfy markdownlint. Locate the header/separator rows for
the two tables (the lines immediately under the header row with dashes) and add
a single space before and after each pipe character; keep the dash counts
unchanged so column widths remain the same.
🧹 Nitpick comments (2)
content/en/docs/install/kubernetes/generic.md (2)
166-168: Pin Cozystack manifests to a specific version for reproducible installs. Using `releases/latest` can change behavior without notice. Consider pinning to a version and documenting how to upgrade.

♻️ Proposed change (example)

```diff
-kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
+kubectl apply -f https://github.com/cozystack/cozystack/releases/download/vX.Y.Z/cozystack-crds.yaml
-kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-operator-generic.yaml
+kubectl apply -f https://github.com/cozystack/cozystack/releases/download/vX.Y.Z/cozystack-operator-generic.yaml
-  kubernetes.core.k8s:
-    src: https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
+  kubernetes.core.k8s:
+    src: https://github.com/cozystack/cozystack/releases/download/vX.Y.Z/cozystack-crds.yaml
-  kubernetes.core.k8s:
-    src: https://github.com/cozystack/cozystack/releases/latest/download/cozystack-operator-generic.yaml
+  kubernetes.core.k8s:
+    src: https://github.com/cozystack/cozystack/releases/download/vX.Y.Z/cozystack-operator-generic.yaml
```

Also applies to: 217-219, 299-301, 331-333
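As a usage sketch of that pinned approach (the `vX.Y.Z` placeholder comes from the suggestion above and is not a real release tag):

```bash
# Pin the Cozystack manifests to one release so installs are reproducible.
COZYSTACK_VERSION="vX.Y.Z"   # placeholder: substitute an actual release tag
kubectl apply -f "https://github.com/cozystack/cozystack/releases/download/${COZYSTACK_VERSION}/cozystack-crds.yaml"
kubectl apply -f "https://github.com/cozystack/cozystack/releases/download/${COZYSTACK_VERSION}/cozystack-operator-generic.yaml"
```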
322-327: Document the Ansible collection dependency for `ansible.utils.ipaddr`. The filter used to compute `ipv4-pod-gateway` requires the `ansible.utils` collection. If it isn't present, the playbook fails. Please add a note in prerequisites or replace the computation with a static value.
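For context, the kind of filter expression this refers to might look like the following (illustrative only; the variable names are made up here, and the `first_usable` query assumes the `ansible.utils` collection and the `netaddr` Python library are installed):

```yaml
# Illustrative sketch: derive the pod gateway (first usable address) from the pod CIDR.
vars:
  ipv4_pod_cidr: "10.42.0.0/16"
  ipv4_pod_gateway: "{{ ipv4_pod_cidr | ansible.utils.ipaddr('first_usable') }}"  # -> 10.42.0.1
```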
Is it possible to automate everything with k3s server manifests? Running a sequence of commands isn't the k3s way.
@cheese In progress. The Ansible collection will be published later.
A configmap named `cozystack-operator-config` seems to be missing from the guide.
@cheese Thanks for catching this! Added the missing `cozystack-operator-config` ConfigMap step to both the manual instructions and the Ansible playbook example.
Add comprehensive guide for deploying Cozystack on non-Talos Kubernetes distributions (k3s, kubeadm, RKE2) using the isp-full-generic bundle.

Documentation includes:
- Host and sysctl requirements
- Kubernetes configuration for each distribution
- Step-by-step installation instructions
- Example Ansible playbooks
- Troubleshooting section

Related: cozystack/cozystack#1950

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

Add explicit warning that pod/service CIDRs must match the Kubernetes distribution defaults:
- k3s: 10.42.0.0/16, 10.43.0.0/16
- kubeadm: 10.244.0.0/16, 10.96.0.0/16
- RKE2: 10.42.0.0/16, 10.43.0.0/16

Note: inotify limits (2099999999) are intentional and required for Cozystack operation - standard values are insufficient.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

- Hyphenate "command-line" in _index.md
- Add spaces to table separators (MD060)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

…l guide

The generic operator manifest reads KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT from a ConfigMap named cozystack-operator-config. Without this ConfigMap, the operator pod fails to start with CreateContainerConfigError. Add the missing step to both the manual instructions and the Ansible playbook example.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
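Based on the commit description above, such a ConfigMap might look roughly like this (a sketch: the `cozy-system` namespace is an assumption, `<YOUR_NODE_IP>` is the guide's placeholder, and 6443 is the API-server port used in examples elsewhere in this thread):

```yaml
# Sketch of the ConfigMap the generic operator manifest reads (per the commit above).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack-operator-config
  namespace: cozy-system        # assumed namespace; use the one the guide specifies
data:
  KUBERNETES_SERVICE_HOST: "<YOUR_NODE_IP>"   # API server host only, no scheme or port
  KUBERNETES_SERVICE_PORT: "6443"
```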
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@content/en/docs/install/kubernetes/generic.md`:
- Around line 65-68: The inotify sysctl values in the Kubernetes install doc are
set to extremely high numbers (fs.inotify.max_user_watches,
fs.inotify.max_user_instances, fs.inotify.max_queued_events); either lower them
to industry-proven defaults (e.g., 524288 for max_user_watches, 8192 for
max_user_instances, 65536 for max_queued_events) and run deployment tests, or
add a clear justification in the documentation next to those keys explaining why
Cozystack needs ~2,099,999,999 values and the expected memory/behavior
tradeoffs; update the section around the inotify entries to reflect the chosen
approach and include testing notes if you keep nonstandard values.
🧹 Nitpick comments (2)
content/en/docs/install/kubernetes/generic.md (2)
239-239: Clarify the description for `KUBERNETES_SERVICE_HOST`. The current wording is confusing: "the same address used in `api-server-endpoint` above, without the `https://` prefix and port" suggests stripping components from a URL, but `KUBERNETES_SERVICE_HOST` should contain only the hostname or IP address. Consider rewording to be more explicit:

📝 Suggested clarification

```diff
-Replace `<YOUR_NODE_IP>` with the IP address of your Kubernetes API server (the same address used in `api-server-endpoint` above, without the `https://` prefix and port).
+Replace `<YOUR_NODE_IP>` with the IP address or hostname of your Kubernetes API server (extract just the host part from the `api-server-endpoint` URL above, e.g., `10.0.0.1` from `https://10.0.0.1:6443`).
```
271-384: Verify Ansible collection dependencies are documented. The playbook examples use filters and modules from multiple Ansible collections:

- Line 293: `ansible.posix.sysctl` requires the `ansible.posix` collection
- Line 336: `kubernetes.core.k8s` requires the `kubernetes.core` collection
- Line 363: `ansible.utils.ipaddr` requires the `ansible.utils` collection

Consider adding a prerequisites note before the playbooks to inform users they need to install these collections:

📦 Suggested addition

Add before line 273:

```markdown
{{% alert color="info" %}}
**Note**: The playbooks below require the following Ansible collections:

- `ansible.posix`
- `kubernetes.core`
- `ansible.utils`

Install them with: `ansible-galaxy collection install ansible.posix kubernetes.core ansible.utils`
{{% /alert %}}
```
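The same dependencies could also be captured in a `requirements.yml`, a common alternative to a one-off install command (a sketch; the collection names are the ones listed above):

```yaml
# requirements.yml (sketch)
# Install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: ansible.posix
  - name: kubernetes.core
  - name: ansible.utils
```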
Replace extreme inotify values (2099999999) with proven industry defaults: max_user_watches=524288, max_user_instances=8192, max_queued_events=65536. Updated both the sysctl configuration example and the Ansible playbook. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@content/en/docs/install/kubernetes/generic.md`:
- Around line 335-338: The play task named "Apply Cozystack CRDs" uses
kubernetes.core.k8s with a remote URL in src which may not be supported; update
this task (and the similar block referenced at lines 380-383) to first fetch the
YAML to a local temporary file using ansible.builtin.uri (or fetch into a
variable) and then apply it with kubernetes.core.k8s via the definition
parameter, or alternatively replace the task with an
ansible.builtin.command/ansible.builtin.shell step that runs kubectl apply -f
<downloaded-file-or-URL>—modify the task referencing "Apply Cozystack CRDs" and
the kubernetes.core.k8s usage accordingly.
- Around line 443-445: Update the broken link in the ordered list item that
currently points to
"/docs/getting-started/install-cozystack#4-access-the-dashboard": change its
anchor to the correct dashboard section anchor for "5.3 Access the Cozystack
Dashboard" (i.e., use the slug for section 5.3) so the second item in the list
correctly reads [Set up the root tenant]({{% ref
"/docs/getting-started/install-cozystack#5-3-access-the-cozystack-dashboard"
%}}) and leaves the other two links unchanged; edit the line containing that
list item in content/en/docs/install/kubernetes/generic.md.
🧹 Nitpick comments (4)
content/en/docs/install/kubernetes/generic.md (4)
119-119: Consider documenting the max-pods value. The `--kubelet-arg=max-pods=220` flag sets a specific limit without explanation. Consider adding a brief comment explaining why 220 is recommended (e.g., to accommodate Cozystack's resource requirements).

201-201: Consider adding a note about calculating the gateway IP. The `ipv4-pod-gateway` is shown as `"10.42.0.1"`, which is the first IP in the pod CIDR. Consider adding a brief note that this should be the first IP of your pod CIDR (e.g., for `10.244.0.0/16`, use `10.244.0.1`). The Ansible playbook at line 363 demonstrates automatic calculation.

271-384: Add Ansible collection requirements for the playbooks. The playbooks use modules and filters from collections that must be installed:

- `kubernetes.core` (for the k8s module)
- `ansible.utils` (for the ipaddr filter at line 363)
- `ansible.posix` (for the sysctl module at line 293)

📦 Recommended addition

Add a requirements note before the playbook examples:

### Prerequisites

Install required Ansible collections:

```bash
ansible-galaxy collection install kubernetes.core ansible.utils ansible.posix
```

Ensure `kubectl` is configured with access to your cluster.

402-411: **Clarify what a Platform Package is and how to edit it.** The troubleshooting solution references "Platform Package" configuration without explaining what it is, where to find it, or how to edit it. Users encountering this issue won't know how to apply the fix.

📝 Suggested improvement

Add a brief explanation before the YAML snippet:

**Solution**: Edit your Platform custom resource to include explicit `MASTER_NODES` configuration.

Find your Platform resource:

```bash
kubectl get platform -A
```

Edit it to add the configuration:

```bash
kubectl edit platform <platform-name> -n <namespace>
```

Add the following under `spec:`

This helps users understand what to edit and how to do it.

Also applies to: 419-429
```yaml
- name: Apply Cozystack CRDs
  kubernetes.core.k8s:
    src: https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
    state: present
```
The kubernetes.core.k8s module may not support remote URLs in the src parameter.
The src parameter typically expects a local file path. Loading remote YAML files requires either:
- Using `ansible.builtin.uri` to fetch and then apply with `definition`
- Using `kubectl apply -f` via `ansible.builtin.command`
🔧 Proposed fix using kubectl command
```diff
-      - name: Apply Cozystack CRDs
-        kubernetes.core.k8s:
-          src: https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
-          state: present
+      - name: Apply Cozystack CRDs
+        ansible.builtin.command:
+          cmd: kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
+        changed_when: true
```

Also applies to: 380-383
🤖 Prompt for AI Agents
In `@content/en/docs/install/kubernetes/generic.md` around lines 335 - 338, The
play task named "Apply Cozystack CRDs" uses kubernetes.core.k8s with a remote
URL in src which may not be supported; update this task (and the similar block
referenced at lines 380-383) to first fetch the YAML to a local temporary file
using ansible.builtin.uri (or fetch into a variable) and then apply it with
kubernetes.core.k8s via the definition parameter, or alternatively replace the
task with an ansible.builtin.command/ansible.builtin.shell step that runs
kubectl apply -f <downloaded-file-or-URL>—modify the task referencing "Apply
Cozystack CRDs" and the kubernetes.core.k8s usage accordingly.
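A sketch of the fetch-then-apply alternative described above, staging the manifest locally before handing it to `kubernetes.core.k8s` (module choices and paths are illustrative; `ansible.builtin.get_url` is used here instead of `ansible.builtin.uri`, either works for the download step):

```yaml
# Illustrative alternative: download the manifest first, then apply the local copy.
- name: Download Cozystack CRDs manifest
  ansible.builtin.get_url:
    url: https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
    dest: /tmp/cozystack-crds.yaml
    mode: "0644"

- name: Apply Cozystack CRDs from the downloaded file
  kubernetes.core.k8s:
    src: /tmp/cozystack-crds.yaml
    state: present
```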
```markdown
1. [Configure storage with LINSTOR]({{% ref "/docs/getting-started/install-cozystack#3-configure-storage" %}})
2. [Set up the root tenant]({{% ref "/docs/getting-started/install-cozystack#4-access-the-dashboard" %}})
3. [Deploy your first application]({{% ref "/docs/applications" %}})
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify internal Hugo ref link targets
echo "=== Checking for install-cozystack documentation ==="
find content/en/docs -name "*install-cozystack*" -type f 2>/dev/null | head -20
echo "=== Checking for applications documentation ==="
find content/en/docs -name "applications*" -type d 2>/dev/null | head -10
echo "=== Looking for any markdown files in getting-started ==="
find content/en/docs/getting-started -name "*.md" -type f 2>/dev/null
echo "=== Looking for any markdown files in applications ==="
find content/en/docs/applications -name "*.md" -type f 2>/dev/null
```

Repository: cozystack/website
Length of output: 1659
🏁 Script executed:

```bash
#!/bin/bash
# Check the content of install-cozystack.md to verify anchor sections
echo "=== Checking install-cozystack.md for section headings ==="
cat -n content/en/docs/getting-started/install-cozystack.md | head -100
echo ""
echo "=== Searching for 'Configure storage' and 'dashboard' sections ==="
rg -i "##.*configure.*storage|##.*dashboard|##.*access" content/en/docs/getting-started/install-cozystack.md -A 2
```

Repository: cozystack/website
Length of output: 5369
🏁 Script executed:

```bash
#!/bin/bash
# Get the complete section structure of install-cozystack.md
echo "=== Full section headings in install-cozystack.md ==="
rg "^## |^### " content/en/docs/getting-started/install-cozystack.md
echo ""
echo "=== Counting total lines in the file ==="
wc -l content/en/docs/getting-started/install-cozystack.md
echo ""
echo "=== Last 50 lines to see section 5 and beyond ==="
tail -50 content/en/docs/getting-started/install-cozystack.md
```

Repository: cozystack/website
Length of output: 2442
Fix broken documentation link in step 2.
The link to /docs/getting-started/install-cozystack#4-access-the-dashboard is incorrect. Section 4 covers "Configure Networking", not dashboard access. The dashboard access section is "5.3 Access the Cozystack Dashboard". Update the link to point to the correct section anchor.
The other two links are valid:
- `/docs/getting-started/install-cozystack#3-configure-storage` ✓
- `/docs/applications` ✓
🤖 Prompt for AI Agents
In `@content/en/docs/install/kubernetes/generic.md` around lines 443 - 445, Update
the broken link in the ordered list item that currently points to
"/docs/getting-started/install-cozystack#4-access-the-dashboard": change its
anchor to the correct dashboard section anchor for "5.3 Access the Cozystack
Dashboard" (i.e., use the slug for section 5.3) so the second item in the list
correctly reads [Set up the root tenant]({{% ref
"/docs/getting-started/install-cozystack#5-3-access-the-cozystack-dashboard"
%}}) and leaves the other two links unchanged; edit the line containing that
list item in content/en/docs/install/kubernetes/generic.md.
I tried. The operator and Flux are running but not deploying anything. Operator log:
The Node Preparation Playbook was missing fs.file-max, fs.aio-max-nr, and vm.swappiness sysctl values that are listed in the Sysctl Configuration section of the same document. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
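An illustrative sketch of how those keys could be appended to the playbook's sysctl tasks (the `ansible.posix.sysctl` module and the key names come from the commit above; the values shown are assumptions, so use the ones from the guide's Sysctl Configuration section):

```yaml
# Illustrative: ensure the previously missing sysctl keys are also managed by the playbook.
- name: Apply additional Cozystack sysctl settings
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_file: /etc/sysctl.d/99-cozystack.conf
    state: present
    reload: true
  loop:
    - { name: fs.file-max, value: "1048576" }     # assumed value
    - { name: fs.aio-max-nr, value: "1048576" }   # assumed value
    - { name: vm.swappiness, value: "0" }         # assumed value
```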
The documentation was missing the step to create the Package resource after deploying the operator. Without this step, Cozystack components are never installed. Added step 6 to the manual instructions and corresponding tasks to the Ansible playbook example. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
@cheese Thanks for testing this! The issue you're seeing (operator + flux running but nothing deploying) is because a `Package` resource hasn't been created. In v1.0.0, the operator reconciles the `Package` resource:

```yaml
apiVersion: cozystack.io/v1alpha1
kind: Package
metadata:
  name: cozystack.cozystack-platform
spec:
  variant: isp-full-generic
```

The name must match the PackageSource name, which you can check with `kubectl`. I've already added this as step 6 in the docs (plus a corresponding task in the Ansible playbook example). After applying this, you should see HelmReleases being created and Cilium starting to install.
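A quick way to follow what the reply describes, assuming the manifest above is saved locally (the file name is illustrative):

```bash
# Apply the Package resource, then watch Flux create HelmReleases for the platform components.
kubectl apply -f package.yaml
kubectl get helmreleases --all-namespaces --watch
```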
Cozystack deploys its own metrics-server. The k3s built-in metrics-server addon controller overwrites the RBAC RoleBinding, breaking Cozystack's metrics-server with a permission error on the extension-apiserver-authentication configmap in kube-system. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
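One way to avoid that conflict is to disable the bundled addon when installing k3s; a sketch of the k3s config-file approach is shown below (whether the guide uses this exact mechanism is not shown in this thread):

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
# Disable the built-in metrics-server addon so it does not overwrite the RBAC
# of the metrics-server that Cozystack deploys.
disable:
  - metrics-server
```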
Fix incorrect deployment name in monitoring step (deploy/cozystack → deploy/cozystack-operator) and add info alert about expected transient errors during initial Cilium reconciliation. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
Replace vague "all releases will show READY: True" with an actionable kubectl wait command to verify Cilium deployment and node networking. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
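A readiness check in that spirit might look like the following (a sketch only: the deployment name and namespace are assumptions, not taken verbatim from the guide):

```bash
# Wait for the Cilium operator deployment to become available and for all nodes to report Ready.
kubectl wait deployment/cilium-operator -n cozy-cilium --for=condition=Available --timeout=10m
kubectl wait node --all --for=condition=Ready --timeout=10m
```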
…y text Add missing net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables to the Ansible playbook sysctl entries to match the manual configuration section. Update _index.md "Further Steps" to be distribution-agnostic instead of referencing Talos Linux specifically. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Aleksei Sviridkin <f@lex.la>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@content/en/docs/install/kubernetes/generic.md`:
- Around line 423-430: The until check on the registered variable pkg_source
dereferences the Ready condition
unsafely—(pkg_source.resources[0].status.conditions | selectattr('type',
'equalto', 'Ready') | first).status—causing failures if Ready isn't present;
update the until expression to guard against missing values by providing a safe
default before accessing .status (for example, use first | default({}) or check
that the filtered conditions list has an element and that .status exists) so the
play will retry instead of aborting when the Ready condition is absent.
```yaml
- name: Wait for PackageSource to be ready
  kubernetes.core.k8s_info:
    api_version: cozystack.io/v1alpha1
    kind: PackageSource
    name: cozystack.cozystack-platform
  register: pkg_source
  until: pkg_source.resources | length > 0 and (pkg_source.resources[0].status.conditions | selectattr('type', 'equalto', 'Ready') | first).status == "True"
  retries: 30
```
Guard against missing Ready condition to avoid first/.status failures.
If Ready isn’t present yet, first returns undefined and .status will raise an error, aborting the play early. Add a safe default before dereferencing.
Suggested fix

```diff
-        until: pkg_source.resources | length > 0 and (pkg_source.resources[0].status.conditions | selectattr('type', 'equalto', 'Ready') | first).status == "True"
+        until: >
+          pkg_source.resources | length > 0 and
+          (
+            (pkg_source.resources[0].status.conditions
+             | selectattr('type', 'equalto', 'Ready')
+             | map(attribute='status')
+             | first
+             | default('False')
+            ) == "True"
+          )
```

🤖 Prompt for AI Agents
In `@content/en/docs/install/kubernetes/generic.md` around lines 423 - 430, The
until check on the registered variable pkg_source dereferences the Ready
condition unsafely—(pkg_source.resources[0].status.conditions |
selectattr('type', 'equalto', 'Ready') | first).status—causing failures if Ready
isn't present; update the until expression to guard against missing values by
providing a safe default before accessing .status (for example, use first |
default({}) or check that the filtered conditions list has an element and that
.status exists) so the play will retry instead of aborting when the Ready
condition is absent.
Summary

Add comprehensive documentation for deploying Cozystack on non-Talos Kubernetes distributions (k3s, kubeadm, RKE2) using the `isp-full-generic` bundle introduced in PR cozystack/cozystack#1939.

Changes

- New guide: `content/en/docs/install/kubernetes/generic.md`
- Updated `content/en/docs/install/kubernetes/_index.md` to include a link to the generic guide

Documentation Includes

- Host and sysctl requirements
- Kubernetes configuration for each distribution
- Step-by-step installation instructions
- Example Ansible playbooks
- Troubleshooting section

Related

- `isp-full-generic` bundle (cozystack/cozystack#1939)

Test Plan

Summary by CodeRabbit