Conversation

tangledbytes (Member) commented Sep 16, 2025

Explain the changes

Add support for the noobaa bench warp command. Example:

$ nb bench warp -n noobaa mixed --bucket first.bucket --warp-args "--concurrent 1 --objects 100 --obj.size 10MiB"
❯ nb bench warp --help                             
Run warp benchmark

Options:
    --access-key='':
        Access Key to access the S3 bucket

    --bucket='first.bucket':
        Bucket to use for benchmark data. ALL DATA WILL BE DELETED IN BUCKET!

    --clients=0:
        Number of warp instances

    --endpoint-type='internal':
        Endpoint type could be internal,podip,nodeport,loadbalancer,manual

    --image='minio/warp:latest':
        Warp image

    --secret-key='':
        Secret Key to access the S3 bucket

    --use-https=false:
        Use HTTPS endpoints for benchmark

    --warp-args='':
        Arguments to be passed directly to warp CLI

Usage:
  noobaa bench warp [flags] [options]

Use "noobaa options" for a list of global command-line options (applies to all commands).
Sample Run
❯ nb bench warp -n noobaa mixed --bucket first.bucket --endpoint-type podip --warp-args "--concurrent 1 --objects 10 --obj.size 1MiB"
INFO[0000] Starting warp benchmark                      
INFO[0000] ✅ Exists: Secret "noobaa-admin"              
INFO[0000] ✅ Created: StatefulSet "warp"                
INFO[0000] ✅ Exists:  "warp"                            
INFO[0002] ✅ Exists:  "warp"                            
INFO[0004] ✅ Exists:  "warp"                            
INFO[0006] ✅ Exists:  "warp"                            
INFO[0006] ✅ Created: Service "warp"                    
INFO[0006] ✅ Exists: Service "s3"                       
INFO[0006] ✅ Created: Job "warp-job"                    
INFO[0006] Benchmark started - you can check the warp client logs by "kubectl logs -f -n noobaa -l app=warp" 
INFO[0006] Benchmark still running                      
INFO[0011] Benchmark still running                      
... (the "Benchmark still running" line repeats every 5 seconds) ...
INFO[0326] Benchmark still running                      

Report: DELETE. Concurrency: 1. Ran: 5m1s
 * Average: 2.66 obj/s
 * Reqs: Avg: 3.0ms, 50%: 2.3ms, 90%: 6.4ms, 99%: 12.5ms, Fastest: 1.2ms, Slowest: 96.1ms, StdDev: 2.6ms

Throughput, split into 297 x 1s:
 * Fastest: 11.00 obj/s
 * 50% Median: 3.00 obj/s
 * Slowest: 0.00 obj/s

──────────────────────────────────

Report: GET. Concurrency: 1. Ran: 5m3s
 * Average: 11.96 MiB/s, 11.96 obj/s
 * Reqs: Avg: 41.3ms, 50%: 8.2ms, 90%: 134.3ms, 99%: 651.2ms, Fastest: 3.5ms, Slowest: 2437.1ms, StdDev: 103.8ms
 * TTFB: Avg: 41ms, Best: 3ms, 25th: 5ms, Median: 8ms, 75th: 36ms, 90th: 134ms, 99th: 651ms, Worst: 2.437s StdDev: 104ms

Throughput, split into 299 x 1s:
 * Fastest: 42.1MiB/s, 42.11 obj/s
 * 50% Median: 11.2MiB/s, 11.22 obj/s
 * Slowest: 0.00 obj/s

──────────────────────────────────

Report: PUT. Concurrency: 1. Ran: 5m1s
 * Average: 3.99 MiB/s, 3.99 obj/s
 * Reqs: Avg: 159.4ms, 50%: 86.5ms, 90%: 419.2ms, 99%: 1243.4ms, Fastest: 22.6ms, Slowest: 3263.5ms, StdDev: 238.6ms

Throughput, split into 297 x 1s:
 * Fastest: 10.7MiB/s, 10.72 obj/s
 * 50% Median: 4.1MiB/s, 4.05 obj/s
 * Slowest: 0.00 obj/s

──────────────────────────────────

Report: STAT. Concurrency: 1. Ran: 5m1s
 * Average: 7.97 obj/s
 * Reqs: Avg: 2.3ms, 50%: 1.9ms, 90%: 3.1ms, 99%: 12.3ms, Fastest: 1.0ms, Slowest: 73.6ms, StdDev: 1.6ms

Throughput, split into 297 x 1s:
 * Fastest: 38.00 obj/s
 * 50% Median: 6.00 obj/s
 * Slowest: 0.00 obj/s


──────────────────────────────────

Report: Total. Concurrency: 1. Ran: 5m3s
 * Average: 15.94 MiB/s, 26.54 obj/s

Throughput, split into 299 x 1s:
 * Fastest: 50.2MiB/s, 97.19 obj/s
 * 50% Median: 15.8MiB/s, 25.80 obj/s
 * Slowest: 313.8KiB/s, 0.31 obj/s

INFO[0331] Cleaning up Warp                             
INFO[0331] 🗑️  Deleting: Job "warp-job"                 
INFO[0332] 🗑️  Deleted : Job "warp-job"                 
INFO[0332] 🗑️  Deleting: Service "warp"                 
INFO[0332] 🗑️  Deleted : Service "warp"                 
INFO[0332] 🗑️  Deleting: StatefulSet "warp"             
INFO[0332] 🗑️  Deleted : StatefulSet "warp" 

Issues: Fixed #xxx / Gap #xxx

Testing Instructions:

  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • New Features

    • Added a "bench warp" CLI to run MinIO Warp benchmarks with multiple endpoint modes, TLS option, client auto-detection, custom args, live logs, and automatic cleanup.
    • Included bundled Warp deployment assets (StatefulSet, headless Service, Job) for one-command in-cluster provisioning and execution.
  • Chores

    • Registered the new bench command in the CLI Advanced group and added small runtime utilities to improve signal handling.


coderabbitai bot commented Sep 16, 2025

Walkthrough

Adds Kubernetes manifests and bundled assets for MinIO Warp, a new bench warp CLI workflow that provisions Warp resources and runs benchmark jobs, integrates the bench command into the main CLI, adds Map and OnSignal utilities, and replaces manual signal handling in the admission server.

Changes

Cohort / File(s) — Summary

  • Warp Kubernetes Manifests — deploy/warp/warp.yaml, deploy/warp/warp-svc.yaml, deploy/warp/warp-job.yaml: New StatefulSet (1 replica, Parallel) running minio/warp:latest, a headless Service warp (clusterIP: None, publishNotReadyAddresses) exposing port 7761, and a Job to run the Warp client with WARP_ACCESS_KEY/WARP_SECRET_KEY env vars, restartPolicy Never, and backoffLimit: 0.
  • Bundled Warp Assets — pkg/bundle/deploy.go: Adds SHA256 and file constants embedding the three new Warp YAML manifests into the binary (Sha256_deploy_warp_* and File_deploy_warp_*).
  • Bench Warp Orchestration — pkg/bench/warp.go: New Cobra CLI commands (Cmd, CmdWarp) and the RunBenchWarp implementation: flag parsing, credential retrieval, endpoint-type handling, provisioning of the StatefulSet/Service, starting and polling a Warp Job, streaming logs, and cleanup. Adds an EndpointType alias, constants (internal, podip, nodeport, loadbalancer, manual), and helper functions (prepareWarpHostList, getClientNums, etc.).
  • CLI Integration — pkg/cli/cli.go: Imports bench and registers bench.Cmd() in the Advanced group; initializes controller-runtime logging with zap in Run().
  • Signal Handling Utility + Adoption — pkg/util/util.go, pkg/admission/server.go: Adds Map[T any, U any](...) and OnSignal(cb func(), signals ...os.Signal) utilities; replaces manual os/signal handling in pkg/admission/server.go with util.OnSignal for SIGINT/SIGTERM. A usage sketch follows below.
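
For context, here is a minimal sketch of how a blocking OnSignal helper with this signature is typically adopted (a sketch only; the actual call site in pkg/admission/server.go may differ in detail):

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// OnSignal blocks until one of the given signals arrives, then runs cb.
// This mirrors the signature summarized above; the real implementation
// lives in pkg/util/util.go.
func OnSignal(cb func(), signals ...os.Signal) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, signals...)
	<-ch
	cb()
}

func main() {
	// Typical adoption: block the shutdown path on SIGINT/SIGTERM.
	OnSignal(func() {
		fmt.Println("got shutdown signal, cleaning up...")
	}, syscall.SIGINT, syscall.SIGTERM)
}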

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as noobaa CLI
  participant Bench as bench.warp
  participant K8s as Kubernetes API
  participant SS as Warp StatefulSet/Service
  participant Job as Warp Job Pods
  participant Target as Object Store / Endpoints

  User->>CLI: noobaa bench warp [flags]
  CLI->>Bench: RunBenchWarp()
  Bench->>K8s: Create Service + StatefulSet
  K8s-->>Bench: Pods created / ready
  Bench->>K8s: Create Warp Job (env, args)
  K8s-->>Bench: Job & Pod events
  Job->>Target: Run warp client workload (HTTP/S)
  Bench->>K8s: Stream Job pod logs / monitor status
  alt Job Succeeds
    K8s-->>Bench: Job Succeeded
  else Job Fails
    K8s-->>Bench: Job Failed
  end
  Bench->>K8s: Cleanup Job, Pods, StatefulSet, Service
  Bench-->>CLI: Exit with status/logs
sequenceDiagram
  autonumber
  participant Adm as Admission Server
  participant Util as util.OnSignal
  participant OS as OS Signals

  Adm->>Util: Register OnSignal(cb, SIGINT, SIGTERM)
  OS-->>Util: Deliver signal
  Util-->>Adm: Invoke callback
  Adm->>Adm: server.Shutdown(ctx)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Pay attention to: pkg/bench/warp.go (large new file with control flow, Kubernetes API interactions, signal/cleanup logic, credential handling).
  • Verify correctness and security of credential retrieval and usage (no exposed secrets).
  • Review bundling constants in pkg/bundle/deploy.go for accurate SHA256 values and manifest content.
  • Confirm pkg/util.OnSignal behavior (blocking vs. non-blocking) and its integration in pkg/admission/server.go.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 40.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check — ❓ Inconclusive: The description provides good context with command examples and sample output, but the Issues and Testing Instructions sections are incomplete with placeholder content. Resolution: complete the Issues section with referenced issue/gap numbers if applicable, and add detailed Testing Instructions explaining how to validate the 'bench warp' command functionality.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed: The title clearly and concisely summarizes the main change: adding support for the 'bench warp' command to the CLI.

coderabbitai bot left a comment

Actionable comments posted: 10

🧹 Nitpick comments (17)
deploy/warp/warp-job.yaml (1)

3-15: Provide labels and wire credentials via Secret refs (not empty envs)

Empty env entries are brittle. Add labels for lifecycle ops and source keys from a Secret.

 metadata:
   name: warp-job
+  labels:
+    app: warp
 spec:
   template:
     spec:
       containers:
       - name: warp-job
-        env:
-          - name: WARP_ACCESS_KEY
-          - name: WARP_SECRET_KEY
+        env:
+          - name: WARP_ACCESS_KEY
+            valueFrom:
+              secretKeyRef:
+                name: REPLACE_ME_WARP_CREDS_SECRET
+                key: access_key
+          - name: WARP_SECRET_KEY
+            valueFrom:
+              secretKeyRef:
+                name: REPLACE_ME_WARP_CREDS_SECRET
+                key: secret_key

Confirm the bench workflow patches args/creds before apply; otherwise this Job will run with defaults.

pkg/util/util.go (1)

2422-2428: Stop signal notifications to avoid leaks; consider a context‑based variant

Call signal.Stop to unregister and prevent goroutine/channel leaks. Optional: offer OnSignalCtx that cancels a context.

 func OnSignal(cb func(), signals ...os.Signal) {
-	signalChan := make(chan os.Signal, 1)
-	signal.Notify(signalChan, signals...)
-	<-signalChan
-
-	cb()
+	signalChan := make(chan os.Signal, 1)
+	signal.Notify(signalChan, signals...)
+	defer signal.Stop(signalChan)
+	<-signalChan
+	cb()
 }

If useful, I can add OnSignalCtx(ctx context.Context, cb func(), signals ...os.Signal) that returns when either ctx is done or a signal arrives.
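
For reference, a minimal sketch of what that hypothetical OnSignalCtx variant could look like (not part of this PR; names are illustrative):

import (
	"context"
	"os"
	"os/signal"
)

// OnSignalCtx returns when ctx is done or when one of the signals arrives,
// running cb only in the signal case. Hypothetical variant, not in the PR.
func OnSignalCtx(ctx context.Context, cb func(), signals ...os.Signal) {
	signalChan := make(chan os.Signal, 1)
	signal.Notify(signalChan, signals...)
	defer signal.Stop(signalChan)

	select {
	case <-ctx.Done():
		return
	case <-signalChan:
		cb()
	}
}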

pkg/admission/server.go (1)

69-77: Graceful shutdown: use timeout context; fix minor typos/severity

Use a bounded context for Shutdown and warn on real errors. Also “singal” → “signal”.

-	// listening shutdown singal
-	util.OnSignal(func() {
-		log.Info("Got shutdown signal, shutting down webhook server gracefully...")
-		err = server.Shutdown(context.Background())
-		if err != nil {
-			log.Info("Failed to Shutdown admission server")
-		}
-	}, syscall.SIGINT, syscall.SIGTERM)
+	// listening shutdown signal
+	util.OnSignal(func() {
+		log.Info("Got shutdown signal, shutting down webhook server gracefully...")
+		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+		defer cancel()
+		if err := server.Shutdown(ctx); err != nil && err != http.ErrServerClosed {
+			log.Warnf("Failed to shutdown admission server gracefully: %v", err)
+		}
+	}, syscall.SIGINT, syscall.SIGTERM)

Additional import needed:

 import (
   "context"
   "crypto/tls"
   "fmt"
   "net/http"
   "os"
   "syscall"
+  "time"
deploy/warp/warp-svc.yaml (2)

11-12: Align port names with the StatefulSet

The container port is named “http”; align Service port name for consistency and discovery.

   ports:
-    - port: 7761
-      name: warp
+    - port: 7761
+      name: http

14-14: Add newline at EOF

Minor YAML lint nit.

deploy/warp/warp.yaml (2)

15-18: Drop template.metadata.name (ignored in Pod templates)

Name in a Pod template is ignored and can confuse reviewers.

   template:
     metadata:
-      name: warp
       labels:
         app: warp

20-29: Prefer soft anti‑affinity to avoid unschedulable replicas

Required anti‑affinity can block scaling when node count < replicas. Prefer preferredDuringScheduling.

-      affinity:
-        podAntiAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-            - labelSelector:
-                matchExpressions:
-                - key: app
-                  operator: In
-                  values:
-                  - warp
-              topologyKey: "kubernetes.io/hostname"
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 100
+              podAffinityTerm:
+                topologyKey: "kubernetes.io/hostname"
+                labelSelector:
+                  matchExpressions:
+                  - key: app
+                    operator: In
+                    values: ["warp"]
pkg/bundle/deploy.go (1)

6950-6964: Nit: Port name consistency and labels.

Service port is named "warp" while the container exposes "http". Prefer consistent names to simplify targeting by name and add app labels for discoverability.

Apply this diff (pairs with the StatefulSet port rename above):

 spec:
   publishNotReadyAddresses: true
   clusterIP: None
   ports:
-    - port: 7761
-      name: warp
+    - port: 7761
+      name: http
   selector:
     app: warp

Optional: consider adding standard labels (app.kubernetes.io/name, app.kubernetes.io/part-of: bench-warp).

pkg/bench/warp.go (9)

241-254: Polling loop can run forever — bound it or make it user-configurable

Consider a timeout (e.g., flag) and stop with a warning when exceeded.

- for true {
+ deadline := time.Now().Add(24 * time.Hour)
+ for time.Now().Before(deadline) {
   ...
-   time.Sleep(5 * time.Second)
+   time.Sleep(5 * time.Second)
 }
+ if time.Now().After(deadline) {
+   log.Warn("Benchmark timed out after 24h; proceeding to cleanup")
+ }

272-275: Close log streams to avoid leaking readers

Readers returned by GetPodLogs must be closed after use.

- for _, container := range logs {
-   io.Copy(os.Stdout, container)
- }
+ for _, rc := range logs {
+   _, _ = io.Copy(os.Stdout, rc)
+   _ = rc.Close()
+ }

216-218: Robust arg parsing

strings.Split breaks on multiple spaces and can’t handle quoted args. At minimum use Fields; ideally use a shellwords parser.

Minimal fix:

- args = append(args, strings.Split(warpArgs, " ")...)
+ args = append(args, strings.Fields(warpArgs)...)

Optional (better): parse with shellwords (github.com/mattn/go-shellwords) and append parsed tokens.
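
For illustration, a sketch of the shellwords-based parsing (assumes github.com/mattn/go-shellwords; the helper name is hypothetical):

import (
	shellwords "github.com/mattn/go-shellwords"
)

// appendWarpArgs tokenizes warpArgs like a shell, so quoted values such as
// --warp-args '--host "10.0.0.1:80" --objects 100' survive intact.
func appendWarpArgs(args []string, warpArgs string) ([]string, error) {
	if warpArgs == "" {
		return args, nil
	}
	parsed, err := shellwords.Parse(warpArgs)
	if err != nil {
		return nil, err
	}
	return append(args, parsed...), nil
}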


98-105: Validate endpoint-type early

Reject invalid values to avoid producing an empty --host list.

 endpointType := util.GetFlagStringOrPrompt(cmd, "endpoint-type")
 
+ switch endpointType {
+ case EndpointInternal, EndpointPodIP, EndpointNodeport, EndpointLoadbalancer:
+ default:
+   log.Fatalf("❌ Invalid --endpoint-type %q. Valid: %s,%s,%s,%s",
+     endpointType, EndpointInternal, EndpointPodIP, EndpointNodeport, EndpointLoadbalancer)
+ }

128-131: Deduplicate cleanup on signal and normal exit

cleanupWarp can run twice concurrently. Guard with sync.Once.

@@
-import (
+import (
   "context"
   "fmt"
   "io"
   "os"
   "strings"
   "syscall"
   "time"
+  "sync"
@@
-go util.OnSignal(func() {
-  cleanupWarp()
-}, syscall.SIGINT, syscall.SIGTERM)
+var cleanupOnce sync.Once
+go util.OnSignal(func() {
+  cleanupOnce.Do(cleanupWarp)
+}, syscall.SIGINT, syscall.SIGTERM)
@@
 pollWarp()
-cleanupWarp()
+cleanupOnce.Do(cleanupWarp)
@@
-func cleanupWarp() {
+func cleanupWarp() {
   util.Logger().Info("Cleaning up Warp")
@@
   util.KubeDeleteAllOf(&corev1.Pod{}, client.InNamespace(options.Namespace), client.MatchingLabels{
     "job-name": "warp-job",
   })
 }

Also applies to: 139-140, 278-296


148-148: Typo in log message

“Unexepected” → “Unexpected”.

- log.Fatal("Unexepected number of containers in the Warp STS")
+ log.Fatal("Unexpected number of containers in the Warp STS")

331-343: PodIP mode: rely on Service Endpoints/EndpointSlices instead of hardcoded labels

Selector value may differ from "noobaa"; query via Endpoints/EndpointSlices of the s3 Service to collect Pod IPs directly.
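
For illustration, a sketch of collecting Pod IPs via EndpointSlices of the s3 Service (controller-runtime style listing; the actual helper names in pkg/util may differ):

import (
	"context"

	discoveryv1 "k8s.io/api/discovery/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// s3PodIPs lists the EndpointSlices that Kubernetes maintains for the "s3"
// Service and collects the endpoint addresses, avoiding any assumption
// about the pods' own labels.
func s3PodIPs(ctx context.Context, c client.Client, namespace string) ([]string, error) {
	var slices discoveryv1.EndpointSliceList
	err := c.List(ctx, &slices,
		client.InNamespace(namespace),
		client.MatchingLabels{discoveryv1.LabelServiceName: "s3"},
	)
	if err != nil {
		return nil, err
	}
	var ips []string
	for _, es := range slices.Items {
		for _, ep := range es.Endpoints {
			ips = append(ips, ep.Addresses...)
		}
	}
	return ips, nil
}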


298-316: Defensive checks: ensure non-empty host list and non-zero port

If no IPs or port=0, fail fast with a clear error.

   getPort := func(svc *corev1.Service, portname string, fn func(corev1.ServicePort) int32) int32 {
@@
     return 0
   }
@@
- return strings.Join(util.Map(ips, func(ip string) string {
+ if len(ips) == 0 || port == 0 {
+   log.Fatalf("❌ Failed to resolve endpoint(s) for %q (https=%v); ips=%v port=%d", endpointType, https, ips, port)
+ }
+ return strings.Join(util.Map(ips, func(ip string) string {
     return fmt.Sprintf("%s:%d", ip, port)
   }), ",")

Also applies to: 377-380


45-50: CLI UX: document expected bench arg and provide examples

Add ValidArgs or Example to guide users (e.g., mixed/put/get/list).

 cmd := &cobra.Command{
   Use:   "warp",
   Short: "Run warp benchmark",
   Run:   RunBenchWarp,
   Args:  cobra.ExactArgs(1),
+  Example: `  noobaa bench warp mixed --bucket first.bucket --warp-args "--concurrent 1 --objects 100 --obj.size 10MiB"`,
+  ValidArgs: []string{"mixed", "put", "get", "delete", "stat", "list"},
 }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e4f93c9 and a326d74.

📒 Files selected for processing (8)
  • deploy/warp/warp-job.yaml (1 hunks)
  • deploy/warp/warp-svc.yaml (1 hunks)
  • deploy/warp/warp.yaml (1 hunks)
  • pkg/admission/server.go (2 hunks)
  • pkg/bench/warp.go (1 hunks)
  • pkg/bundle/deploy.go (1 hunks)
  • pkg/cli/cli.go (2 hunks)
  • pkg/util/util.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
pkg/admission/server.go (1)
pkg/util/util.go (1)
  • OnSignal (2422-2428)
pkg/cli/cli.go (1)
pkg/bench/warp.go (1)
  • Cmd (32-42)
pkg/bench/warp.go (4)
pkg/cli/cli.go (2)
  • Cmd (70-167)
  • Run (62-67)
pkg/util/util.go (13)
  • Logger (904-906)
  • GetFlagStringOrPrompt (1316-1328)
  • KubeObject (284-294)
  • KubeCheck (570-591)
  • OnSignal (2422-2428)
  • KubeApply (298-334)
  • Context (909-911)
  • KubeCheckQuiet (619-632)
  • KubeList (647-668)
  • GetPodLogs (715-754)
  • KubeDelete (445-498)
  • KubeDeleteAllOf (522-536)
  • Map (2413-2420)
pkg/bundle/deploy.go (5)
  • File_deploy_internal_secret_empty_yaml (4902-4909)
  • File_deploy_warp_warp_yaml (6967-7006)
  • File_deploy_warp_warp_svc_yaml (6950-6963)
  • File_deploy_warp_warp_job_yaml (6929-6946)
  • File_deploy_internal_service_s3_yaml (4966-4992)
pkg/options/options.go (1)
  • Namespace (57-57)
🪛 YAMLlint (1.37.1)
deploy/warp/warp-svc.yaml

[error] 14-14: no new line character at the end of file

(new-line-at-end-of-file)

🪛 Checkov (3.2.334)
deploy/warp/warp.yaml

[medium] 1-38: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-38: Minimize the admission of root containers

(CKV_K8S_23)

deploy/warp/warp-job.yaml

[medium] 1-16: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-16: Minimize the admission of root containers

(CKV_K8S_23)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: run-admission-test
  • GitHub Check: run-operator-tests
  • GitHub Check: run-kms-tls-token-test
  • GitHub Check: cnpg-deployment-test
  • GitHub Check: run-hac-test
  • GitHub Check: run-cli-tests
  • GitHub Check: run-core-config-map-tests
  • GitHub Check: run-azure-vault-test
  • GitHub Check: run-kms-key-rotate-test
  • GitHub Check: run-kms-kmip-test
  • GitHub Check: run-kms-dev-test
  • GitHub Check: run-kms-tls-sa-test
  • GitHub Check: golangci-lint
🔇 Additional comments (4)
pkg/util/util.go (1)

2413-2420: Generic Map helper looks good

Preallocated result and simple loop are fine.
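
For reference, a generic Map of that shape looks like this (a sketch matching the signature referenced above; the committed code may differ cosmetically):

// Map applies fn to each element of in and returns the results.
// The output slice is preallocated to len(in) capacity.
func Map[T any, U any](in []T, fn func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, fn(v))
	}
	return out
}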

pkg/cli/cli.go (1)

11-11: CLI wiring for bench command looks correct

pkg/cli/cli.go imports github.com/noobaa/noobaa-operator/v5/pkg/bench and registers bench.Cmd() in the "Advanced:" group; pkg/bench/warp.go implements the "bench" command with the "warp" subcommand (flags + RunBenchWarp, Args: cobra.ExactArgs(1)).

pkg/bench/warp.go (2)

201-211: Resolved — MinIO Warp supports brace expansion in --warp-client. Verified: Warp does its own brace expansion, so passing a brace pattern directly is valid and shell expansion isn't required; the current code is fine.


213-215: No change required — Warp accepts scheme-less host:port and uses --tls/--insecure for TLS

Warp expects "host:port" (not a full URL); supplying "https://..." causes endpoint validation errors, so the current code appending --tls/--insecure is correct.

tangledbytes force-pushed the utkarsh/feat/benchmark-cli branch from a326d74 to 6ca0199 on September 16, 2025, 13:35
coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
deploy/warp/warp-job.yaml (1)

13-16: Avoid :latest; add minimal security context, resources, and TTL

Even though the CLI overrides image/env/args at runtime, it does not touch security context, resources, or ttlSecondsAfterFinished. Hardening here sets sane defaults and ensures finished Jobs get garbage‑collected.

 spec:
   template:
     spec:
       containers:
       - name: warp-job
         env:
           - name: WARP_ACCESS_KEY
           - name: WARP_SECRET_KEY
-        image: "minio/warp:latest"
-        imagePullPolicy: Always
+        image: "minio/warp:vX.Y.Z" # pin an immutable tag; CLI can still override
+        imagePullPolicy: IfNotPresent
+        securityContext:
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
+          runAsNonRoot: true
+        resources:
+          requests:
+            cpu: "1"
+            memory: "1Gi"
+          limits:
+            cpu: "2"
+            memory: "2Gi"
       restartPolicy: Never
   backoffLimit: 0
+  ttlSecondsAfterFinished: 3600
deploy/warp/warp.yaml (1)

31-38: Pin image and add security context/resources

Mirror the Job hardening in the StatefulSet. The CLI only overrides the image tag, not security/resource fields.

         - name: warp
-          image: "minio/warp:latest"
-          imagePullPolicy: Always
+          image: "minio/warp:vX.Y.Z" # pin; CLI can override
+          imagePullPolicy: IfNotPresent
           args:
             - client
           ports:
             - name: http
               containerPort: 7761
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            runAsNonRoot: true
+          resources:
+            requests:
+              cpu: "500m"
+              memory: "512Mi"
+            limits:
+              cpu: "2"
+              memory: "2Gi"
pkg/bench/warp.go (1)

371-406: NodePort discovery collects only one IP; broaden and add fallback

Gather all ExternalIPs, and if none exist, all InternalIPs, without early breaks. This improves robustness on diverse clusters.

- for _, node := range nodes.Items {
-   for _, address := range node.Status.Addresses {
-     if address.Type == corev1.NodeExternalIP {
-       ips = append(ips, address.Address)
-     }
-   }
-   // Use the first external IP we find on the node
-   if len(ips) > 0 {
-     break
-   }
- }
- // Fallback to interalIP if no external found
- if len(ips) == 0 {
-   for _, node := range nodes.Items {
-     for _, a := range node.Status.Addresses {
-       if a.Type == corev1.NodeInternalIP {
-         ips = append(ips, a.Address)
-       }
-     }
-     if len(ips) > 0 {
-       break
-     }
-   }
- }
+ // Prefer all ExternalIPs; fallback to all InternalIPs if none
+ for _, node := range nodes.Items {
+   for _, a := range node.Status.Addresses {
+     if a.Type == corev1.NodeExternalIP {
+       ips = append(ips, a.Address)
+     }
+   }
+ }
+ if len(ips) == 0 {
+   for _, node := range nodes.Items {
+     for _, a := range node.Status.Addresses {
+       if a.Type == corev1.NodeInternalIP {
+         ips = append(ips, a.Address)
+       }
+     }
+   }
+ }
🧹 Nitpick comments (10)
deploy/warp/warp-svc.yaml (2)

14-14: Add trailing newline to satisfy linters/tools

File lacks a newline at EOF; add one to quiet YAMLlint and keep diffs clean.


10-12: Optional: align port name with container’s named port (“http”)

Not required, but renaming the Service port from “warp” to “http” matches the containerPort name in deploy/warp/warp.yaml and aids consistency in tooling.

Apply:

 ports:
-  - port: 7761
-    name: warp
+  - port: 7761
+    name: http
pkg/bench/warp.go (8)

67-69: Defaulting to :latest is risky

Consider defaulting to a tested, immutable tag (and document how to override with --image). This avoids accidental upgrades changing benchmark behavior.

-cmd.Flags().String(
-  "image", "minio/warp:latest",
-  "Warp image",
-)
+cmd.Flags().String(
+  "image", "minio/warp:vX.Y.Z",
+  "Warp image (override with a pinned tag)",
+)

126-129: Redundant zero check after getClientNums

getClientNums() already validates nodes and sets a non‑zero default. The extra check is dead code.

-clients = getClientNums(clients)
-if clients == 0 {
-  log.Fatal("❌ Number of clients cannot be 0")
-}
+clients = getClientNums(clients)

171-172: Typo in fatal message

“Unexepected” → “Unexpected”.

-  log.Fatal("❌ Unexepected number of containers in the Warp STS")
+  log.Fatal("❌ Unexpected number of containers in the Warp STS")

190-193: Create the headless Service before the StatefulSet

Best practice is to ensure the headless Service exists so pod DNS (warp-0.warp.<namespace>.svc) is resolvable as pods start. Here it’s created after the STS is Ready; move it earlier.

Add before applying the StatefulSet (after setting namespace/replicas):

 warpSts.Namespace = options.Namespace
 warpSts.Spec.Replicas = &clients
 if image != "" {
   containers[0].Image = image
 }
+// Ensure headless Service exists for stable DNS names
+warpSvc := util.KubeObject(bundle.File_deploy_warp_warp_svc_yaml).(*corev1.Service)
+warpSvc.Namespace = options.Namespace
+util.KubeApply(warpSvc)
 util.KubeApply(warpSts)

Remove the later block:

- warpSvc := util.KubeObject(bundle.File_deploy_warp_warp_svc_yaml).(*corev1.Service)
- warpSvc.Namespace = options.Namespace
- util.KubeApply(warpSvc)

Also applies to: 174-181


291-303: Close pod log streams to avoid leaks

Release ReadClosers after copying.

- for _, container := range logs {
-   if _, err := io.Copy(os.Stdout, container); err != nil {
+ for name, r := range logs {
+   if _, err := io.Copy(os.Stdout, r); err != nil {
      log.Warn("encountered error while copying logs -", err)
    }
+   if err := r.Close(); err != nil {
+     log.Debugf("failed closing log stream for %s: %v", name, err)
+   }
  }

407-422: Optional: include all LoadBalancer ingress entries

Append all IPs/hostnames, not just index 0.

- if len(s3svc.Status.LoadBalancer.Ingress) == 0 {
+ if len(s3svc.Status.LoadBalancer.Ingress) == 0 {
    log.Fatal("❌ Failed to find loadbalancer ingress")
  }
-
- if s3svc.Status.LoadBalancer.Ingress[0].IP != "" {
-   ips = append(ips, s3svc.Status.LoadBalancer.Ingress[0].IP)
- } else if s3svc.Status.LoadBalancer.Ingress[0].Hostname != "" {
-   ips = append(ips, s3svc.Status.LoadBalancer.Ingress[0].Hostname)
- } else {
-   log.Fatal("❌ Failed to find loadbalancer IP/Hostname")
- }
+ for _, ing := range s3svc.Status.LoadBalancer.Ingress {
+   if ing.IP != "" {
+     ips = append(ips, ing.IP)
+   } else if ing.Hostname != "" {
+     ips = append(ips, ing.Hostname)
+   }
+ }
+ if len(ips) == 0 {
+   log.Fatal("❌ Failed to find loadbalancer IP/Hostname")
+ }

424-427: Guard against unresolved port names

If the named port isn’t found, port remains 0 and host list becomes invalid; fail fast.

   getPort := func(svc *corev1.Service, portname string, fn func(corev1.ServicePort) int32) int32 {
     for _, port := range svc.Spec.Ports {
       if port.Name == portname {
         return fn(port)
       }
     }
-
-    return 0
+    return 0
   }
@@
- return strings.Join(util.Map(ips, func(ip string) string {
+ if port == 0 {
+   log.Fatalf("❌ failed to resolve port %q on S3 service", portname)
+ }
+ return strings.Join(util.Map(ips, func(ip string) string {
   return fmt.Sprintf("%s:%d", ip, port)
 }), ",")

Also applies to: 335-343


439-444: Unreachable warning after reassignment

The warn branch can never trigger after setting clients = nodeCount. Preserve original input to decide whether to warn.

-func getClientNums(clients int32) int32 {
+func getClientNums(clients int32) int32 {
   ...
-  if clients > int32(nodeCount) || clients == 0 {
-    clients = int32(nodeCount)
-    if clients > int32(nodeCount) {
-      log.Warn("Number of clients cannot exceed number of nodes")
-    }
-  }
+  if clients > int32(nodeCount) || clients == 0 {
+    if clients > int32(nodeCount) {
+      log.Warn("Number of clients cannot exceed number of nodes")
+    }
+    clients = int32(nodeCount)
+  }
   return clients
 }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a326d74 and 6ca0199.

📒 Files selected for processing (8)
  • deploy/warp/warp-job.yaml (1 hunks)
  • deploy/warp/warp-svc.yaml (1 hunks)
  • deploy/warp/warp.yaml (1 hunks)
  • pkg/admission/server.go (2 hunks)
  • pkg/bench/warp.go (1 hunks)
  • pkg/bundle/deploy.go (1 hunks)
  • pkg/cli/cli.go (4 hunks)
  • pkg/util/util.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • pkg/bundle/deploy.go
  • pkg/admission/server.go
  • pkg/util/util.go
  • pkg/cli/cli.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: tangledbytes
PR: noobaa/noobaa-operator#1702
File: deploy/warp/warp-job.yaml:9-16
Timestamp: 2025-09-16T11:55:50.631Z
Learning: In the noobaa-operator warp benchmark feature, the warp-job.yaml template serves as a base that gets modified by the CLI code in pkg/bench/warp.go. The CLI dynamically overwrites the container image, environment variables, and args fields, but does not modify security context, resource limits, or TTL settings.
Learnt from: tangledbytes
PR: noobaa/noobaa-operator#1702
File: pkg/bundle/deploy.go:6929-6947
Timestamp: 2025-09-16T12:37:45.701Z
Learning: The bench CLI (pkg/bench/warp.go) dynamically overrides the Warp Job’s image, env, and args; it does not modify ttlSecondsAfterFinished or resource requests. Keeping TTL/resources in the embedded YAML is the intended pattern.
📚 Learning: 2025-09-16T11:55:50.631Z
Learnt from: tangledbytes
PR: noobaa/noobaa-operator#1702
File: deploy/warp/warp-job.yaml:9-16
Timestamp: 2025-09-16T11:55:50.631Z
Learning: In the noobaa-operator warp benchmark feature, the warp-job.yaml template serves as a base that gets modified by the CLI code in pkg/bench/warp.go. The CLI dynamically overwrites the container image, environment variables, and args fields, but does not modify security context, resource limits, or TTL settings.

Applied to files:

  • pkg/bench/warp.go
  • deploy/warp/warp-job.yaml
  • deploy/warp/warp-svc.yaml
  • deploy/warp/warp.yaml
📚 Learning: 2025-09-16T12:37:45.701Z
Learnt from: tangledbytes
PR: noobaa/noobaa-operator#1702
File: pkg/bundle/deploy.go:6929-6947
Timestamp: 2025-09-16T12:37:45.701Z
Learning: The bench CLI (pkg/bench/warp.go) dynamically overrides the Warp Job’s image, env, and args; it does not modify ttlSecondsAfterFinished or resource requests. Keeping TTL/resources in the embedded YAML is the intended pattern.

Applied to files:

  • pkg/bench/warp.go
  • deploy/warp/warp-job.yaml
  • deploy/warp/warp.yaml
🧬 Code graph analysis (1)
pkg/bench/warp.go (3)
pkg/util/util.go (12)
  • Logger (904-906)
  • GetFlagStringOrPrompt (1316-1328)
  • KubeObject (284-294)
  • KubeCheck (570-591)
  • OnSignal (2422-2428)
  • KubeApply (298-334)
  • Context (909-911)
  • KubeCheckQuiet (619-632)
  • KubeList (647-668)
  • GetPodLogs (715-754)
  • KubeDelete (445-498)
  • Map (2413-2420)
pkg/bundle/deploy.go (5)
  • File_deploy_internal_secret_empty_yaml (4902-4909)
  • File_deploy_warp_warp_yaml (6967-7006)
  • File_deploy_warp_warp_svc_yaml (6950-6963)
  • File_deploy_warp_warp_job_yaml (6929-6946)
  • File_deploy_internal_service_s3_yaml (4966-4992)
pkg/options/options.go (1)
  • Namespace (57-57)
🪛 Checkov (3.2.334)
deploy/warp/warp-job.yaml

[medium] 1-16: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-16: Minimize the admission of root containers

(CKV_K8S_23)

deploy/warp/warp.yaml

[medium] 1-38: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-38: Minimize the admission of root containers

(CKV_K8S_23)

🪛 YAMLlint (1.37.1)
deploy/warp/warp-svc.yaml

[error] 14-14: no new line character at the end of file

(new-line-at-end-of-file)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: cnpg-deployment-test
  • GitHub Check: run-kms-kmip-test
  • GitHub Check: run-kms-tls-token-test
  • GitHub Check: run-kms-tls-sa-test
  • GitHub Check: run-admission-test
  • GitHub Check: run-azure-vault-test
  • GitHub Check: golangci-lint
  • GitHub Check: run-core-config-map-tests
  • GitHub Check: run-kms-dev-test
  • GitHub Check: run-cli-tests
  • GitHub Check: run-hac-test
  • GitHub Check: run-operator-tests
  • GitHub Check: run-kms-key-rotate-test
🔇 Additional comments (5)
deploy/warp/warp-job.yaml (1)

10-12: Env var names are correct for Warp

Warp supports WARP_ACCESS_KEY/WARP_SECRET_KEY and corresponding CLI flags. Keeping envs is fine.

pkg/bench/warp.go (4)

62-65: Good: clients flag defined as Int32 to match GetInt32

This fixes the earlier type mismatch concern.


254-255: LGTM: no fatal on KubeApply(false)

Treating “updated” as success avoids erroneous exits when the Job already exists.


330-355: LGTM: internal DNS uses fixed “s3” with HTTPS handled via port

This avoids constructing non-existent DNS names like s3-https.<namespace>.svc; selecting the HTTPS port by name is correct.


205-216: Credentials envs align with Warp docs; flags also supported

Using envs is fine; alternatively you could pass --access-key/--secret-key instead of setting envs.

If you’d like, I can switch to flags for credentials to keep all configuration visible in args; say the word.
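
For illustration, the flag-based alternative could look like this (a sketch; the helper name and variables are hypothetical, while --access-key/--secret-key are Warp's documented flags):

// withCredFlags passes credentials as Warp CLI flags instead of setting
// the WARP_ACCESS_KEY/WARP_SECRET_KEY env vars. Hypothetical helper.
func withCredFlags(args []string, accessKey, secretKey string) []string {
	return append(args, "--access-key", accessKey, "--secret-key", secretKey)
}

Note that container args are visible in the Pod spec (e.g. via kubectl describe), so env vars sourced from a Secret are generally the safer place for credentials.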

Comment on lines +241 to +247
if useHttps {
args = append(args, "--insecure", "--tls")
}
if warpArgs != "" {
args = append(args, strings.Split(warpArgs, " ")...)
}

⚠️ Potential issue

warp-args splitting breaks quotes; use shlex

strings.Split cannot handle quoted args or escaped spaces, causing incorrect Warp invocations. Parse like a shell.

Import:

 import (
   "context"
   "fmt"
   "io"
   "os"
   "slices"
   "strings"
   "syscall"
   "time"
+  "github.com/google/shlex"

Replace splitting:

- if warpArgs != "" {
-   args = append(args, strings.Split(warpArgs, " ")...)
- }
+ if warpArgs != "" {
+   parsed, err := shlex.Split(warpArgs)
+   if err != nil {
+     util.Logger().Fatalf("❌ failed to parse --warp-args: %v", err)
+   }
+   args = append(args, parsed...)
+ }

Also applies to: 3-23


tangledbytes requested review from a team, alphaprinz, dannyzaken, jackyalbo, liranmauda, and shirady, and removed the review request for a team and shirady on September 16, 2025, 14:19
dannyzaken (Member) left a comment

Finally got the time to go through this PR :)
LGTM. We can start using it and improve where needed.

Signed-off-by: Utkarsh Srivastava <srivastavautkarsh8097@gmail.com>
tangledbytes force-pushed the utkarsh/feat/benchmark-cli branch from 6ca0199 to 6e96e4f on December 3, 2025, 03:46
coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (4)
deploy/warp/warp-job.yaml (1)

1-16: Add security context, resource limits, and TTL for cleanup.

The image is CLI-controlled (per prior feedback), but security hardening, resource constraints, and job cleanup should be added to the template. The CLI only overrides image, env, and args—not security context, resources, or TTL.

Apply this diff to harden the Job:

 apiVersion: batch/v1
 kind: Job
 metadata:
   name: warp-job
 spec:
   template:
     spec:
       containers:
       - name: warp-job
         env:
           - name: WARP_ACCESS_KEY
           - name: WARP_SECRET_KEY
         image: "minio/warp:latest"
         imagePullPolicy: Always
+        securityContext:
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
+          runAsNonRoot: true
+        resources:
+          requests:
+            cpu: "1"
+            memory: "1Gi"
+          limits:
+            cpu: "2"
+            memory: "2Gi"
       restartPolicy: Never
   backoffLimit: 0
+  ttlSecondsAfterFinished: 3600

Based on learnings, the CLI pattern in pkg/bench/warp.go leaves security, resources, and TTL untouched.

deploy/warp/warp.yaml (1)

1-38: Add security context and resource limits to the Warp StatefulSet.

The image is CLI-controlled, but container hardening and resource constraints should be added to match the Job manifest and prevent resource contention. The pod anti-affinity rule (lines 20–29) expects resource limits to be effective.

Apply this diff to harden the container:

       containers:
         - name: warp
           image: "minio/warp:latest"
           imagePullPolicy: Always
           args:
             - client
           ports:
             - name: http
               containerPort: 7761
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            runAsNonRoot: true
+          resources:
+            requests:
+              cpu: "500m"
+              memory: "512Mi"
+            limits:
+              cpu: "2"
+              memory: "2Gi"

Based on learnings, the CLI does not modify security or resource fields in the StatefulSet template.

pkg/bench/warp.go (2)

244-246: warp-args splitting breaks on quoted arguments.

strings.Split(warpArgs, " ") cannot handle arguments with quoted spaces. For example, --warp-args '--name "foo bar"' incorrectly splits to ["--name", "\"foo", "bar\""].

Consider using a shell-style tokenizer like github.com/google/shlex:

+import "github.com/google/shlex"
+
 if warpArgs != "" {
-    args = append(args, strings.Split(warpArgs, " ")...)
+    parsed, err := shlex.Split(warpArgs)
+    if err != nil {
+        util.Logger().Fatalf("❌ failed to parse --warp-args: %v", err)
+    }
+    args = append(args, parsed...)
 }

257-304: Job failure not surfaced as non-zero exit code.

When the Warp job fails, the code logs an error but pollWarp doesn't propagate this to the caller, so CI pipelines won't detect benchmark failures.

-func pollWarp() {
+func pollWarp() bool {
     // ... existing polling logic ...
 
     if warpJob.Status.Failed != 0 {
         log.Error("❌ Warp Job Failed")
     }
 
     // ... log streaming ...
+    return warpJob.Status.Failed == 0
 }

Then in RunBenchWarp:

-    pollWarp()
-    cleanupWarp()
+    ok := pollWarp()
+    cleanupWarp()
+    if !ok {
+        util.Logger().Fatal("❌ Benchmark failed")
+    }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ca0199 and 6e96e4f.

📒 Files selected for processing (8)
  • deploy/warp/warp-job.yaml (1 hunks)
  • deploy/warp/warp-svc.yaml (1 hunks)
  • deploy/warp/warp.yaml (1 hunks)
  • pkg/admission/server.go (2 hunks)
  • pkg/bench/warp.go (1 hunks)
  • pkg/bundle/deploy.go (1 hunks)
  • pkg/cli/cli.go (4 hunks)
  • pkg/util/util.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • pkg/admission/server.go
  • deploy/warp/warp-svc.yaml
  • pkg/cli/cli.go
  • pkg/bundle/deploy.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: tangledbytes
Repo: noobaa/noobaa-operator PR: 1702
File: deploy/warp/warp-job.yaml:9-16
Timestamp: 2025-09-16T11:55:50.673Z
Learning: In the noobaa-operator warp benchmark feature, the warp-job.yaml template serves as a base that gets modified by the CLI code in pkg/bench/warp.go. The CLI dynamically overwrites the container image, environment variables, and args fields, but does not modify security context, resource limits, or TTL settings.
Learnt from: tangledbytes
Repo: noobaa/noobaa-operator PR: 1702
File: pkg/bundle/deploy.go:6929-6947
Timestamp: 2025-09-16T12:37:45.725Z
Learning: The bench CLI (pkg/bench/warp.go) dynamically overrides the Warp Job’s image, env, and args; it does not modify ttlSecondsAfterFinished or resource requests. Keeping TTL/resources in the embedded YAML is the intended pattern.
📚 Learning: 2025-09-16T11:55:50.673Z
Learnt from: tangledbytes
Repo: noobaa/noobaa-operator PR: 1702
File: deploy/warp/warp-job.yaml:9-16
Timestamp: 2025-09-16T11:55:50.673Z
Learning: In the noobaa-operator warp benchmark feature, the warp-job.yaml template serves as a base that gets modified by the CLI code in pkg/bench/warp.go. The CLI dynamically overwrites the container image, environment variables, and args fields, but does not modify security context, resource limits, or TTL settings.

Applied to files:

  • pkg/bench/warp.go
  • deploy/warp/warp-job.yaml
  • deploy/warp/warp.yaml
📚 Learning: 2025-09-16T12:37:45.725Z
Learnt from: tangledbytes
Repo: noobaa/noobaa-operator PR: 1702
File: pkg/bundle/deploy.go:6929-6947
Timestamp: 2025-09-16T12:37:45.725Z
Learning: The bench CLI (pkg/bench/warp.go) dynamically overrides the Warp Job’s image, env, and args; it does not modify ttlSecondsAfterFinished or resource requests. Keeping TTL/resources in the embedded YAML is the intended pattern.

Applied to files:

  • pkg/bench/warp.go
  • deploy/warp/warp-job.yaml
  • deploy/warp/warp.yaml
📚 Learning: 2025-09-02T11:05:25.671Z
Learnt from: Neon-White
Repo: noobaa/noobaa-operator PR: 0
File: :0-0
Timestamp: 2025-09-02T11:05:25.671Z
Learning: In the noobaa-operator repository, the golangci-lint timeout is configured in `.golangci.yml` under `run.timeout` and is set to 5 minutes.

Applied to files:

  • pkg/bench/warp.go
🪛 Checkov (3.2.334)
deploy/warp/warp-job.yaml

[medium] 1-16: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-16: Minimize the admission of root containers

(CKV_K8S_23)

deploy/warp/warp.yaml

[medium] 1-38: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-38: Minimize the admission of root containers

(CKV_K8S_23)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: run-admission-test
  • GitHub Check: run-kms-dev-test
  • GitHub Check: run-operator-tests
  • GitHub Check: run-hac-test
  • GitHub Check: run-kms-key-rotate-test
  • GitHub Check: run-core-config-map-tests
  • GitHub Check: golangci-lint
  • GitHub Check: run-kms-kmip-test
  • GitHub Check: run-kms-tls-token-test
  • GitHub Check: run-kms-tls-sa-test
  • GitHub Check: run-azure-vault-test
  • GitHub Check: run-cli-tests
  • GitHub Check: cnpg-deployment-test
🔇 Additional comments (4)
pkg/util/util.go (1)

2415-2430: Clean utility additions.

The Map function correctly pre-allocates the result slice and the OnSignal helper appropriately encapsulates signal handling for CLI use cases.

pkg/bench/warp.go (3)

35-92: Command and flag setup looks correct.

The CLI structure and flag definitions are well organized. The Int32 flag type now matches the GetInt32 usage.


151-162: Signal cleanup and main workflow are acceptable.

The signal handler goroutine and main flow could theoretically both call cleanupWarp concurrently if a signal arrives during the cleanup phase, but since KubeDelete handles already-deleted resources gracefully, this is acceptable for a CLI tool.


306-323: Cleanup logic is correct.

The foreground deletion propagation for the Job and ordered cleanup of resources is appropriate.

warpSts := util.KubeObject(bundle.File_deploy_warp_warp_yaml).(*appsv1.StatefulSet)
containers := warpSts.Spec.Template.Spec.Containers
if len(containers) != 1 {
log.Fatal("❌ Unexepected number of containers in the Warp STS")
⚠️ Potential issue | 🟡 Minor

Typo: "Unexepected" → "Unexpected"

-		log.Fatal("❌ Unexepected number of containers in the Warp STS")
+		log.Fatal("❌ Unexpected number of containers in the Warp STS")

break
}
}
// Fallback to interalIP if no external found
⚠️ Potential issue | 🟡 Minor

Typo in comment: "interalIP" → "internalIP"

-	// Fallback to interalIP if no external found
+	// Fallback to internalIP if no external found

Comment on lines +439 to +444
if clients > int32(nodeCount) || clients == 0 {
clients = int32(nodeCount)
if clients > int32(nodeCount) {
log.Warn("Number of clients cannot exceed number of nodes")
}
}
⚠️ Potential issue | 🟡 Minor

Logic bug: warning message is dead code.

The inner condition clients > int32(nodeCount) is always false because clients was just assigned int32(nodeCount) on the previous line. The warning never fires.

-	if clients > int32(nodeCount) || clients == 0 {
-		clients = int32(nodeCount)
-		if clients > int32(nodeCount) {
-			log.Warn("Number of clients cannot exceed number of nodes")
-		}
-	}
+	if clients > int32(nodeCount) {
+		log.Warn("Number of clients cannot exceed number of nodes")
+		clients = int32(nodeCount)
+	} else if clients == 0 {
+		clients = int32(nodeCount)
+	}

tangledbytes merged commit 3831c69 into noobaa:master on December 3, 2025
16 checks passed