Conversation
Force-pushed from 16cc7d8 to 37daf75
cmd/root.go (Outdated)

```go
func runStart(ctx context.Context, rt runtime.Runtime) error {
	tel := telemetry.New()
	defer tel.Flush()
```
TODO: create one telemetry client per run/command.
For this we need to re-work our dependency injection (coming in a subsequent PR).
Then we can easily flush after a command finishes, in PersistentPostRun.
However we need to investigate if/how to still send the event when calling os.Exit(1) (which we do on error)
since it kills the process before any post-run hooks fire.
Once we have this, we would also have a single sessionID per command which all events for that command would send.
Rebased and addressed the TODO:
- We now create only one instance of the telemetry client at root and pass only the required env vars.
- We no longer execute `os.Exit(1)` from the commands themselves but in one central place (main), after the "flushing" has happened: no more risk of exiting before the events have been sent. To achieve that we now use cobra's `RunE` instead of `Run`. `RunE` returns an error, which cobra propagates back.
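The central-exit flow described above can be sketched without cobra as follows; `runE`, `flush`, and the printed exit code are illustrative stand-ins for the real `RunE` handler, the telemetry flush, and the final `os.Exit` call:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// runE stands in for a cobra RunE handler: commands return errors
// instead of calling os.Exit themselves.
func runE() error { return errors.New("boom") }

// flush stands in for draining the telemetry client.
func flush() { fmt.Println("telemetry flushed") }

func main() {
	err := runE()
	flush() // always runs before the single, central exit point
	exitCode := 0
	if err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		exitCode = 1
	}
	fmt.Println("exit code:", exitCode) // the real CLI would call os.Exit(exitCode) here
}
```

Because errors propagate up instead of terminating mid-command, the flush is guaranteed to run before the process exits.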
> Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.

Use the checkboxes below for quick actions:
📝 Walkthrough

Adds an asynchronous telemetry subsystem: a telemetry.Client (New, Emit, Flush), a persistent anonymized machine ID, Env fields for the analytics endpoint and disable flag, README env docs, CLI wiring to emit telemetry on start, and unit/integration tests for telemetry behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (Execute / runStart)
    participant TEL as Telemetry Client
    participant FS as File System
    participant NET as Analytics Server
    CLI->>TEL: New(endpoint, disabled)
    TEL->>FS: LoadOrCreateMachineID()
    FS-->>TEL: machine_id
    CLI->>TEL: Emit("cli_cmd", payload)
    TEL->>TEL: Enrich payload (version, os/arch, is_ci, session_id, machine_id)
    TEL->>NET: POST /analytics (async)
    NET-->>TEL: HTTP response / error
    CLI->>TEL: Flush()
    TEL-->>CLI: All events flushed
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (1)
README.md (1)
21-24: Normalize env var naming in the table and document the endpoint override. Use only variable names in the first column (Line 24), and add `LSTK_ANALYTICS_ENDPOINT` to reflect the new config surface.

Proposed README tweak:

```diff
 | Variable | Description |
 |---|---|
 | `LOCALSTACK_AUTH_TOKEN` | Auth token; for CI only |
-| `LOCALSTACK_DISABLE_EVENTS=1` | Disables telemetry event reporting |
+| `LOCALSTACK_DISABLE_EVENTS` | Set to `1` to disable telemetry event reporting |
+| `LSTK_ANALYTICS_ENDPOINT` | Override telemetry ingestion endpoint (advanced/testing) |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `README.md` around lines 21-24, update the environment variable table so the first column contains only variable names (e.g., replace `LOCALSTACK_DISABLE_EVENTS=1` with `LOCALSTACK_DISABLE_EVENTS`), keep `LOCALSTACK_AUTH_TOKEN` described as "Auth token; for CI only", and add a new row for `LSTK_ANALYTICS_ENDPOINT` with a short description like "Overrides analytics endpoint" to document the endpoint override surface.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/telemetry/client_test.go`:
- Around line 41-43: The test mutates the global env.Vars (setting
env.Vars.DisableEvents and env.Vars.AnalyticsEndpoint) without restoring it,
causing config leakage; capture the original env.Vars value at the start of the
test (e.g. orig := env.Vars) and defer restoring it (defer func() { env.Vars =
orig }()), then proceed to set env.Vars.DisableEvents and
env.Vars.AnalyticsEndpoint for the test and allow the deferred restore to reset
global state after the test finishes.
- Line 48: The test currently does a blocking receive "got := <-ch" which can
hang if telemetry isn't sent; change the receive in the test (in the test
function using variable ch in internal/telemetry/client_test.go) to use a select
with a timeout (e.g., time.After) so the test fails fast on timeout and reports
an error instead of hanging; ensure you import time and call t.Fatalf or t.Error
when the timeout case fires to make the failure explicit.
- Around line 26-34: The httptest server handler (the anonymous http.HandlerFunc
created for srv := httptest.NewServer(...)) currently calls require.* (e.g.,
require.NoError and require.Len) inside the handler goroutine which calls
FailNow from a non-test goroutine; instead capture errors and validation results
in local variables (or send them on a channel) within the handler (e.g., store
body read error, json.Unmarshal error, and len of req.Events into variables or
an error value), close the request body, and return normally; then, in the test
goroutine after making the request (and after srv.Close()), assert those
captured values using require.* (referencing the same variables used in the
handler such as bodyErr, unmarshalErr, eventsLen or a returned handlerErr) so
all require.* calls execute in the test goroutine.
In `@internal/telemetry/client.go`:
- Around line 103-107: The code calls c.httpClient.Do(req) and then discards the
error from resp.Body.Close() with "_ = resp.Body.Close()"; change this to check
and handle the Close error instead of ignoring it—e.g. capture closeErr :=
resp.Body.Close(); if closeErr != nil { return closeErr } or log the error using
the client's logger depending on the surrounding function's error flow—update
the handling in the function containing c.httpClient.Do(req) so
resp.Body.Close() is not silently ignored.
ℹ️ Review info
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (10)
- README.md
- cmd/root.go
- go.mod
- internal/env/env.go
- internal/telemetry/client.go
- internal/telemetry/client_test.go
- internal/telemetry/machine_id.go
- test/integration/env/env.go
- test/integration/go.mod
- test/integration/telemetry_test.go
Thanks for taking a first stab at this @carole-lavillonniere! Looks like we got the event schema down, so from an analytics-backend side this should work no problem (thanks for validating!).
It makes sense that we're filling the sessionId with a newly generated UUID. As we've spoken about, this is not used for anything else but localstack sessions at the moment, but at least we have clear semantics for the CLI for now (one command = new session id). I don't have much to add there.
My main ask is around the machine ID, but I think it would be great if we could rework this using a producer/consumer pattern for reasons I explain inline. Happy to merge this as a first iteration though so we can get going.
internal/telemetry/client.go (Outdated)

```go
		return
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Client", clientHeader)
```
not sure what we would use this for at the moment. i like the idea of setting the user agent header though.
for example, we could build a typical user agent header:

```
localstack lstk/<version> <os>/<kernel>/<arch>
^^^^^^^^^^ ^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
namespace  software       system info
```
and send that. this looks future proof, standard conform, and i can see that being super useful in the analytics backend at some point :)
i would love to introduce this in the emulator also
Changed to the pattern you suggested. Skipped the kernel though, since it's not directly available in Go's standard library; we would need to make non-cross-platform system calls.
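A minimal sketch of such a user agent builder, skipping the kernel segment as discussed; the `userAgent` function and the version string are illustrative, not the actual lstk code:

```go
package main

import (
	"fmt"
	"runtime"
)

// userAgent builds a header in the "namespace software system-info"
// shape, using runtime.GOOS and runtime.GOARCH since the kernel version
// is not available from the standard library.
func userAgent(version string) string {
	return fmt.Sprintf("localstack lstk/%s %s/%s", version, runtime.GOOS, runtime.GOARCH)
}

func main() {
	fmt.Println(userAgent("0.3.0")) // e.g. "localstack lstk/0.3.0 linux/amd64"
}
```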
internal/telemetry/client.go (Outdated)

```go
	SessionID string `json:"session_id"`
}

func (c *Client) Track(name string, payload map[string]any) {
```
nit on naming: i would prefer something like Log or Emit as method name here. to me "Track" implies some sort of long-living process, a lifecycle, or a correlation of some sort. whereas this is really just a fire-and-forget logging method, and Log or Emit matches that mental model more.
✔️ Emit it is.
internal/telemetry/client.go (Outdated)

```go
	SessionID string `json:"session_id"`
}

func (c *Client) Track(name string, payload map[string]any) {
```
i think we should pass a context object here, and hand it down to the http request, so we have lifecycle control. though not necessarily needed if we go with a producer/consumer pattern.
💯 Added. I think it's good to pass the context regardless of the architecture. It's idiomatic go, it could be used in the future for propagation of values, trace contexts etc.
However I think it would be wrong behaviour to cancel the http requests for analytics. The events should go through even if:
- the user cancels the command with Ctrl+C
- the parent context had a timeout (not the case at the moment)
- the command exited early because of an error
That's why I made use of context.WithoutCancel(ctx).
internal/telemetry/client.go (Outdated)

```go
	}

	c.wg.Add(1)
	go func() {
```
i understand that we don't want Track to block to avoid making callers manage concurrency, but i think there are better ways to solve this than creating a new go routine for each call and tracking their invocations using a WaitGroup.
i see a few drawbacks here:
- there's unbounded go routine creation. if we use this client to emit events in a loop, we may end up creating lots of go routines without any backpressure. nothing is slowing requests down, so this may end up creating resource spikes if used incorrectly.
- if memory serves, calling `WaitGroup.Add` with `WaitGroup.Wait` concurrently is not safe
- there's no cancellation or lifecycle control

my preference would be creating a producer/worker pattern:
- use an (optionally bounded) channel of events
- use a single go routine that reads from the channel and sends the data in a loop
- a `Close` or `Shutdown` that stops acceptance of new events and drains the channel

now, Track becomes just a write into the channel. it gives us controlled concurrency, predictable resource usage, simpler shutdown mechanics (👀 Flush), the option to create drop policies when the queue is full, and better encapsulation between concerns (marshalling data vs loading data)
I was about to fight back with these arguments:

> there's unbounded go routine creation. if we use this client to emit events in a loop, we may end up creating lots of go routines without any backpressure. nothing is slowing requests down, so this may end up creating resource spikes if used incorrectly.

For now we mostly have discrete commands (the only exception for now is `lstk logs --follow`), and likely each command emitting 1 event once we have analytics in place. I think this will become more relevant when we have a long-running interactive shell or REPL mode (which is planned), but even then I doubt we will be emitting so many events, even less so concurrently.

> if memory serves, calling WaitGroup.Add with WaitGroup.Wait concurrently is not safe

That's right, however we only Wait when calling Flush, which is deferred to when the command is finished running. So in practice at the moment we cannot call Wait concurrently with Add.

BUT the new producer/consumer pattern (aka fire-and-forget channel) does not really bring more complexity to the code, so I just went ahead and did it. It's a more solid design, I like it! Looking forward to your next round of review @thrau, thanks for taking the time to review this!
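A minimal sketch of the channel-based design discussed in this thread, with illustrative names (`event`, `client`, a buffered queue with a drop policy); the real client POSTs events instead of printing:

```go
package main

import (
	"fmt"
	"sync"
)

type event struct {
	name    string
	payload map[string]any
}

type client struct {
	events    chan event
	done      chan struct{}
	closeOnce sync.Once
	sent      []string // written only by the worker; safe to read after Close returns
}

func newClient() *client {
	c := &client{events: make(chan event, 64), done: make(chan struct{})}
	go func() {
		defer close(c.done)
		for ev := range c.events { // single consumer drains the queue
			c.sent = append(c.sent, ev.name)
			fmt.Println("sending", ev.name) // stand-in for the HTTP POST
		}
	}()
	return c
}

// Emit is a non-blocking write into the bounded queue.
func (c *client) Emit(name string, payload map[string]any) {
	select {
	case c.events <- event{name, payload}:
	default: // drop policy when the queue is full
	}
}

// Close stops accepting new events and waits for the worker to drain.
func (c *client) Close() {
	c.closeOnce.Do(func() { close(c.events) })
	<-c.done
}

func main() {
	c := newClient()
	c.Emit("cli_cmd", map[string]any{"cmd": "start"})
	c.Close()
	fmt.Println("events sent:", len(c.sent))
}
```

Note the sketch omits guarding `Emit` against a closed channel (an Emit after Close would panic); a production client would track a closed flag or stop handing the client out after shutdown.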
internal/telemetry/machine_id.go (Outdated)

```go
		return ""
	}

	id := uuid.NewString()
```
it would be great if we could attempt to use a salt hashed version of the machine id, or docker daemon id if available, so we have a predictable machine id. here's a reference in localstack: https://github.com/localstack/localstack/blob/ada377c2c63ac89f968cb31627a0d1a1dabb8284/localstack-core/localstack/utils/analytics/metadata.py#L172-L200
if we get it in the same way, we could ideally link the generated machine IDs to the ones generated in localstack.
Thanks for the reference!
Now we use these sources, in order:
- docker daemon ID (`dkr_` prefix)
- `/etc/machine-id` (`sys_` prefix)
- persisted random ID (`gen_` prefix)
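For illustration, a salted-hash helper in the spirit of the localstack reference above, so the raw identifier never leaves the machine while staying stable across runs; the salt value, prefix scheme, and `anonymize` name are assumptions, not the actual implementation:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// anonymize hashes a raw identifier with a fixed salt, producing a
// deterministic, non-reversible machine ID with a source prefix.
func anonymize(prefix, raw string) string {
	sum := md5.Sum([]byte("lstk:" + raw)) // "lstk:" acts as an illustrative salt
	return prefix + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(anonymize("sys_", "3f2a-machine-id"))
	fmt.Println(anonymize("sys_", "3f2a-machine-id")) // same input, same ID
}
```

Determinism is what lets the CLI's IDs line up with IDs derived from the same source elsewhere.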
internal/telemetry/client.go (Outdated)

```go
// before process exit to avoid dropping telemetry events. It returns quickly
// when no events are pending, and is bounded by the HTTP client's timeout in
// the worst case.
func (c *Client) Flush() {
```
nit on naming: as a caller, i would expect Flush to force buffered items to be emptied to the underlying IO writer. this method looks more like a WaitForShutdown? though we could avoid this if we go with the producer/consumer pattern.
Renamed to Close with the new producer/consumer pattern.
cc @vittoriopolverino in case you are interested

Force-pushed from 4bf0c52 to 4187957
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/integration/telemetry_test.go (1)
23-46: Assert `X-Client: lstk/v2` in the integration path. The helper verifies event body parsing, but it doesn't enforce the header contract introduced by this PR. Capture the header in `mockAnalyticsServer` and assert it in `TestStartCommandSendsTelemetryEvent`.

Proposed test enhancement:

```diff
-func mockAnalyticsServer(t *testing.T) (*httptest.Server, <-chan map[string]any) {
+type receivedTelemetry struct {
+	event   map[string]any
+	xClient string
+}
+
+func mockAnalyticsServer(t *testing.T) (*httptest.Server, <-chan receivedTelemetry) {
 	t.Helper()
-	ch := make(chan map[string]any, 10)
+	ch := make(chan receivedTelemetry, 10)
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@
 		for _, event := range req.Events {
-			ch <- event
+			ch <- receivedTelemetry{
+				event:   event,
+				xClient: r.Header.Get("X-Client"),
+			}
 		}
 		w.WriteHeader(http.StatusOK)
 	}))
@@
-	case event := <-events:
+	case received := <-events:
+		assert.Equal(t, "lstk/v2", received.xClient)
+		event := received.event
 		assert.Equal(t, "cli_cmd", event["name"])
```

Also applies to: 70-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/telemetry_test.go` around lines 23 - 46, The mockAnalyticsServer helper currently parses and forwards event bodies but doesn't assert the required header; update mockAnalyticsServer to read r.Header.Get("X-Client") and send that value on a new channel (or include it with each event) so tests can assert the header; then modify TestStartCommandSendsTelemetryEvent to receive and assert the header equals "lstk/v2" (and apply the same header-check change for the other test path referenced around the similar helper usage), referencing the mockAnalyticsServer function and the TestStartCommandSendsTelemetryEvent test to locate changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/telemetry/machine_id.go`:
- Around line 17-49: LoadOrCreateMachineID currently resolves the storage path
inside machineIDPath (calling config.ConfigDir()) which couples IO to global
config; refactor by splitting path resolution from IO: add a new function like
LoadOrCreateMachineIDAt(path string) (or a constructor that accepts a storage
path) that performs the read/create/write logic, keep machineIDPath as a helper
for callers that want default behavior, and change callers/tests to call the
injectable LoadOrCreateMachineIDAt with a temp dir so tests can control
persistence; ensure LoadOrCreateMachineID delegates to the new function for
backwards compatibility.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 3dd8c4d4-de15-45f9-a949-88ebaebec417
📒 Files selected for processing (10)
- README.md
- cmd/root.go
- go.mod
- internal/env/env.go
- internal/telemetry/client.go
- internal/telemetry/client_test.go
- internal/telemetry/machine_id.go
- test/integration/env/env.go
- test/integration/go.mod
- test/integration/telemetry_test.go
🚧 Files skipped from review as they are similar to previous changes (7)
- internal/env/env.go
- test/integration/go.mod
- README.md
- internal/telemetry/client_test.go
- internal/telemetry/client.go
- cmd/root.go
- go.mod
```go
func LoadOrCreateMachineID() string {
	path, err := machineIDPath()
	if err != nil {
		return ""
	}

	data, err := os.ReadFile(path)
	if err == nil {
		id := strings.TrimSpace(string(data))
		if id != "" {
			return id
		}
	} else if !os.IsNotExist(err) {
		return ""
	}

	id := uuid.NewString()
	if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
		return ""
	}
	if err := os.WriteFile(path, []byte(id), 0600); err != nil {
		return ""
	}
	return id
}

func machineIDPath() (string, error) {
	dir, err := config.ConfigDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, machineIDFileName), nil
}
```
🛠️ Refactor suggestion | 🟠 Major
Inject machine ID storage path instead of hard-coding config resolution.
LoadOrCreateMachineID() currently binds persistence directly to config.ConfigDir() (Line 44), which makes side effects harder to isolate and test. Please split path resolution from IO so callers can inject a temp path when needed.
Proposed refactor:

```diff
 func LoadOrCreateMachineID() string {
 	path, err := machineIDPath()
 	if err != nil {
 		return ""
 	}
+	return LoadOrCreateMachineIDAt(path)
+}
+
+func LoadOrCreateMachineIDAt(path string) string {
 	data, err := os.ReadFile(path)
 	if err == nil {
 		id := strings.TrimSpace(string(data))
 		if id != "" {
 			return id
@@
 	if err := os.WriteFile(path, []byte(id), 0600); err != nil {
 		return ""
 	}
 	return id
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func LoadOrCreateMachineID() string {
	path, err := machineIDPath()
	if err != nil {
		return ""
	}
	return LoadOrCreateMachineIDAt(path)
}

func LoadOrCreateMachineIDAt(path string) string {
	data, err := os.ReadFile(path)
	if err == nil {
		id := strings.TrimSpace(string(data))
		if id != "" {
			return id
		}
	} else if !os.IsNotExist(err) {
		return ""
	}

	id := uuid.NewString()
	if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
		return ""
	}
	if err := os.WriteFile(path, []byte(id), 0600); err != nil {
		return ""
	}
	return id
}

func machineIDPath() (string, error) {
	dir, err := config.ConfigDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, machineIDFileName), nil
}
```
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/root.go`:
- Around line 64-67: Replace the direct stderr write in the root.ExecuteContext
error path with the project’s typed output emitter: instead of
fmt.Fprintf(os.Stderr, "Error: %v\n", err) call the appropriate internal/output
emitter (e.g., output.EmitWarning or output.EmitStatus, or the project’s
error-specific emitter) to emit the error message, using the same context and
formatted text only if !output.IsSilent(err); update the block that checks
output.IsSilent(err) around root.ExecuteContext to call output.Emit... with the
formatted error string so all command errors go through the typed output
pipeline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 544f66eb-6b6e-4278-912e-680c068ca486
📒 Files selected for processing (4)
- cmd/help_test.go
- cmd/root.go
- cmd/start.go
- cmd/stop.go
```go
if err := root.ExecuteContext(ctx); err != nil {
	if !output.IsSilent(err) {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
	}
```
Use internal/output typed emitters instead of direct stderr writes.
Line 66 prints directly via fmt.Fprintf(os.Stderr, ...), which bypasses the typed output pathway used by command handlers.
As per coding guidelines, "Emit typed events through internal/output (EmitInfo, EmitSuccess, EmitNote, EmitWarning, EmitStatus, EmitProgress, etc.) instead of printing from domain/command handlers".
Actionable comments posted: 2
♻️ Duplicate comments (2)

internal/telemetry/client.go (1)

106-110: ⚠️ Potential issue | 🟡 Minor. Do not discard `resp.Body.Close()` errors. `resp.Body.Close()` is still ignored in non-test code. Based on learnings, `!(test_*)/**/*.go`: "Errors returned by functions should always be checked unless in test files".

Suggested fix:

```diff
 resp, err := c.httpClient.Do(req)
 if err != nil {
 	return
 }
-_ = resp.Body.Close()
+if err := resp.Body.Close(); err != nil {
+	return
+}
 }()
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `internal/telemetry/client.go` around lines 106-110: the `resp.Body.Close()` call right after `c.httpClient.Do(req)` currently discards its error; update the function containing `c.httpClient.Do(req)` to check the error returned by `resp.Body.Close()` and handle it appropriately rather than ignoring it, either returning/wrapping the error if the enclosing method returns an error or logging it with the package's logger, so the close failure is not silently dropped.

cmd/root.go (1)

64-67: ⚠️ Potential issue | 🟡 Minor. Avoid direct stderr writes in the root execution error path. This bypasses the typed output pipeline and also ignores the `fmt.Fprintf` write error. Route command errors through the project's typed output emitters. As per coding guidelines, `{cmd,internal}/**/*.go`: "Emit typed events through internal/output (EmitInfo, EmitSuccess, EmitNote, EmitWarning, EmitStatus, EmitProgress, etc.) instead of printing from domain/command handlers".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `cmd/root.go` around lines 64-67: when `root.ExecuteContext(ctx)` returns an error and `output.IsSilent(err)` is false, route the error through the project's typed output emitter instead of `fmt.Fprintf`; keep the `output.IsSilent(err)` check and ensure you import and use the internal output emitter used elsewhere in the codebase.
🧹 Nitpick comments (1)
internal/telemetry/client.go (1)
114-117: Update stale `Flush` doc wording (Track → Emit). The comment still references "Track goroutines", but the public API method is now `Emit`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `internal/telemetry/client.go` around lines 114-117: update the doc comment for Flush to replace the outdated "Track goroutines" phrasing with "Emit goroutines" (or "in-flight Emit goroutines") so it accurately references the `Emit` method, keeping the rest of the comment intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/telemetry/machine_id.go`:
- Line 45: The defer c.Close() and the os.WriteFile(...) calls currently ignore
returned errors; update the code to capture and handle both errors: in the defer
replace the ignored call with a closure that assigns err := c.Close() and if err
!= nil log or return a wrapped error (e.g. fmt.Errorf("close failed: %w", err))
appropriate to the surrounding function, and for os.WriteFile(...) check its
error result and handle it the same way (log/return a wrapped error) so neither
c.Close() nor os.WriteFile() errors are silently dropped; reference the
c.Close() defer and the os.WriteFile(...) call when making the changes.
- Around line 40-47: dockerDaemonID() calls c.Info with context.Background() and
can hang startup; update dockerDaemonID to create a cancellable context with a
short timeout (e.g. 1-3s) via context.WithTimeout, defer the cancel, and pass
that timed context into c.Info so an unresponsive Docker daemon won't block
LoadOrCreateMachineID(); also handle context deadline/cancel errors similarly to
other c.Info errors and return "" on timeout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: c272ca46-1abb-4bf4-9af4-4c9afe198059
📒 Files selected for processing (4)
- cmd/root.go
- internal/telemetry/client.go
- internal/telemetry/client_test.go
- internal/telemetry/machine_id.go
✅ Files skipped from review due to trivial changes (1)
- internal/telemetry/client_test.go
```go
func dockerDaemonID() string {
	c, err := dockerclient.NewClientWithOpts(dockerclient.FromEnv, dockerclient.WithAPIVersionNegotiation())
	if err != nil {
		return ""
	}
	defer c.Close()
	info, err := c.Info(context.Background())
	if err != nil {
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "machine_id.go" | head -20
```

Repository: localstack/lstk
Length of output: 93
🏁 Script executed:

```shell
cat -n internal/telemetry/machine_id.go
```

Repository: localstack/lstk
Length of output: 2715
🏁 Script executed:

```shell
# Search for usage of LoadOrCreateMachineID and dockerDaemonID to understand context
rg "LoadOrCreateMachineID|dockerDaemonID" -A 3 -B 3
```

Repository: localstack/lstk
Length of output: 1922
Add a timeout for the Docker daemon ID lookup to prevent startup delays.
dockerDaemonID() calls c.Info(context.Background()) without a timeout. Since LoadOrCreateMachineID() runs during telemetry client initialization, an unresponsive Docker daemon can block CLI startup.
Suggested fix

```diff
 import (
 	"context"
 	"crypto/md5"
 	"encoding/hex"
 	"os"
 	"path/filepath"
 	"strings"
+	"time"
@@
-	info, err := c.Info(context.Background())
+	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
+	defer cancel()
+	info, err := c.Info(ctx)
 	if err != nil {
 		return ""
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func dockerDaemonID() string {
	c, err := dockerclient.NewClientWithOpts(dockerclient.FromEnv, dockerclient.WithAPIVersionNegotiation())
	if err != nil {
		return ""
	}
	defer c.Close()
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()
	info, err := c.Info(ctx)
	if err != nil {
```
🧰 Tools
🪛 GitHub Check: Lint
[failure] 45-45:
Error return value of c.Close is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/telemetry/machine_id.go` around lines 40 - 47, dockerDaemonID()
calls c.Info with context.Background() and can hang startup; update
dockerDaemonID to create a cancellable context with a short timeout (e.g. 1-3s)
via context.WithTimeout, defer the cancel, and pass that timed context into
c.Info so an unresponsive Docker daemon won't block LoadOrCreateMachineID();
also handle context deadline/cancel errors similarly to other c.Info errors and
return "" on timeout.
internal/telemetry/machine_id.go
Outdated
```go
	if err != nil {
		return ""
	}
	defer c.Close()
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n internal/telemetry/machine_id.go
```

Repository: localstack/lstk
Length of output: 2715
Check error returns from Close() and WriteFile() in non-test code.
Both c.Close() (line 45) and os.WriteFile() (line 81) have their errors ignored, violating the requirement that all functions should have their errors checked in non-test files.
Suggested fix (a deferred `func()` cannot return a value, so the Close error is checked and explicitly discarded rather than returned):

```diff
-	defer c.Close()
+	defer func() {
+		if err := c.Close(); err != nil {
+			// a Close failure here is non-fatal for ID derivation
+		}
+	}()
@@
-	_ = os.WriteFile(path, []byte(id), 0600)
+	if err := os.WriteFile(path, []byte(id), 0600); err != nil {
+		return id // best-effort cache write failed; still return the in-memory ID
+	}
 	return id
```

Per the coding guideline `!(test_*)/**/*.go`: "Errors returned by functions should always be checked unless in test files".
🧰 Tools
🪛 GitHub Check: Lint
[failure] 45-45:
Error return value of c.Close is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/telemetry/machine_id.go` at line 45, The defer c.Close() and the
os.WriteFile(...) calls currently ignore returned errors; update the code to
capture and handle both errors: in the defer replace the ignored call with a
closure that assigns err := c.Close() and if err != nil log or return a wrapped
error (e.g. fmt.Errorf("close failed: %w", err)) appropriate to the surrounding
function, and for os.WriteFile(...) check its error result and handle it the
same way (log/return a wrapped error) so neither c.Close() nor os.WriteFile()
errors are silently dropped; reference the c.Close() defer and the
os.WriteFile(...) call when making the changes.
8eec0d8 to 6a5b719 Compare
6a5b719 to f7a5bab Compare
silv-io left a comment:
LGTM! Just some minor nitpicks
| enriched["version"] = version.Version() | ||
| enriched["os"] = runtime.GOOS | ||
| enriched["arch"] = runtime.GOARCH | ||
| enriched["is_ci"] = os.Getenv("CI") != "" |
nit: small detail here. In the emulator and old CLI we actually test purely for the existence of this variable, not whether it is non-empty.
`os.LookupEnv` would then be the option here to maintain parity.
```go
	SessionID string `json:"session_id"`
}

func (c *Client) Emit(ctx context.Context, name string, payload map[string]any) {
```
thought: I wonder if this naming won't be confusing with the other events we have. But I raised the idea of merging those into the same concept in the future either way, so it makes sense.
Adds a telemetry client that fires a single `cli_cmd` event when `lstk start` runs (this is merely to not have dead code; defining telemetry is still WIP, PRO-210).

- Payload: `version`, `os`, `arch`, `is_ci`, `machine_id` (persistent UUID stored in the lstk config dir)
- A per-session UUID (ultimately one session ID per command)
- An `X-Client` header with value `lstk/v2`
- Opt-out via `LOCALSTACK_DISABLE_EVENTS=1`

✔️ Tested
Ran analytics-backend locally and observed:

```
{"timestamp": "2026-03-03T12:08:38.280", "level": "INFO", "threadName": "Dummy-8", "process": 1312488, "name": "analytics.core", "message": "record_events: Event(name='cli_cmd', metadata=EventMetadata(session_id='6ec424b8-0196-4815-83b7-2ae45e2d6146', client_time='2026-03-03 11:08:28.019701'), payload={'arch': 'amd64', 'cmd': 'lstk start', 'is_ci': False, 'machine_id': '5df95f6a-6dc0-4449-9ea8-f0f1b23b8f7c', 'os': 'linux', 'params': [], 'version': 'dev'})"}
```