Bug: language_server_linux_x64 memory leak causes system OOM on Remote SSH
Environment
- Windsurf version: Latest (Feb 2026)
- Connection: Remote SSH to AWS EC2
- Remote OS: Ubuntu 24.04 LTS (kernel 6.14.0-1018-aws)
- Remote Instance: AWS r7i.2xlarge (8 vCPU, 64 GB RAM, 16 GB swap)
- Local OS: macOS
- Language server binary: language_server_linux_x64 (Go binary, located in .windsurf-server/bin/.../extensions/windsurf/bin/)
Description
When using Windsurf via Remote SSH, the language_server_linux_x64 process has an unbounded memory leak that grows from ~3.5 GB at startup to 60+ GB within hours, consuming all available RAM and swap, ultimately freezing the entire remote system.
This has been consistently reproducible across multiple days (Feb 11–15, 2026), happening every time the IDE is used for a few hours.
Reproduction
- Connect Windsurf to a remote Linux machine via Remote SSH
- Open a workspace and use Cascade normally
- After several hours of usage, language_server_linux_x64 will have consumed all available RAM
- The remote machine becomes completely unresponsive (hard freeze, requires power cycle)
Diagnostic Evidence
A custom system monitor service logging every 60 seconds captured the memory growth leading up to the crash:
TIME language_server RSS System Memory Used Load Average
───────────── ────────────────── ────────────────── ────────────
21:11 UTC 33 GB (51% of RAM) 38 GB 1,038
21:12 UTC 42 GB (66% of RAM) 47 GB 877
21:13 UTC 58 GB (89% of RAM) 62 GB 830
21:14 UTC 59 GB (91% of RAM) 62 GB 838
21:15 UTC 60 GB (92% of RAM) 62 GB 781
21:17 UTC 60 GB (93% of RAM) 62 GB (swap: 3.3 GB) 702
... SYSTEM FREEZE — required instance stop/start to recover
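For anyone wanting to reproduce this logging, a minimal sketch of such a monitor (service name, log path, and field layout are illustrative, not the reporter's actual setup) reads RSS and system memory straight from /proc:

```shell
#!/bin/sh
# Minimal sketch of a 60-second memory monitor (illustrative names/paths).

# RSS of a pid in MB, read from /proc/<pid>/status.
rss_mb() { awk '/VmRSS/ {print int($2/1024)}' "/proc/$1/status" 2>/dev/null; }

# System memory in use (MemTotal - MemAvailable), in MB.
sys_used_mb() { awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {print int((t-a)/1024)}' /proc/meminfo; }

monitor_loop() {
  while :; do
    pid=$(pgrep -o -f language_server_linux_x64)
    [ -n "$pid" ] && echo "$(date -u +%H:%M) UTC rss=$(rss_mb "$pid")MB used=$(sys_used_mb)MB load=$(cut -d' ' -f1 /proc/loadavg)"
    sleep 60
  done
}

# Invoke as `ls-monitor.sh run`, e.g. from a systemd unit:
if [ "${1-}" = "run" ]; then monitor_loop >> /var/log/ls-monitor.log; fi
```

Logging to a file (rather than the journal) matters here, since the journal was corrupted by the unclean shutdown.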
Key observations:
- Only language_server_linux_x64 grows — all other Windsurf node processes remain under 2 GB
- Memory growth is not linear — it accelerates dramatically (33 GB → 60 GB in 6 minutes)
- CPU idle drops to 0.0% as the system thrashes
- Load average exceeds 1,000 due to processes blocked on I/O
- The system freeze is a hard hang — no OOM killer logs, no kernel panic, journal shows corruption from unclean shutdown
- This happened every day across Feb 11–15 with consistent patterns
- Previous instance type was r7i.xlarge (4 vCPU, 32 GB) — upgraded to r7i.2xlarge (8 vCPU, 64 GB) thinking it was CPU, but the language server just consumed the additional memory
What was ruled out:
- CPU saturation — occurred with 50% CPU idle; the CPU spike is a consequence of OOM, not the cause
- sshd configuration — hardened keepalive settings, MaxSessions, MaxStartups; sshd itself is fine
- pam_systemd — disabled to prevent logind timeout; didn't fix the root cause
- Network/ENA driver — no errors in dmesg; network adapter is healthy
- Other processes — only language_server_linux_x64 shows unbounded growth
Workaround
A watchdog service monitors the language server's RSS every 10 seconds and sends it SIGKILL (kill -9) when it exceeds 16 GB. Windsurf auto-restarts the language server, resetting the leak. This prevents the system freeze but causes periodic brief reconnections.
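The workaround can be sketched as follows (script and unit names are assumed; the 16 GB limit and 10 s interval mirror the description above):

```shell
#!/bin/sh
# Watchdog sketch: kill -9 the language server when RSS exceeds 16 GB.
# Windsurf respawns it automatically, resetting the leak.
LIMIT_MB=16384

# True when pid $1's RSS exceeds $2 MB.
over_limit() {
  rss=$(awk '/VmRSS/ {print int($2/1024)}' "/proc/$1/status" 2>/dev/null)
  [ -n "$rss" ] && [ "$rss" -gt "$2" ]
}

watchdog() {
  while :; do
    pid=$(pgrep -o -f language_server_linux_x64)
    if [ -n "$pid" ] && over_limit "$pid" "$LIMIT_MB"; then
      logger -t ls-watchdog "RSS over ${LIMIT_MB} MB, killing pid $pid"
      kill -9 "$pid"
    fi
    sleep 10
  done
}

if [ "${1-}" = "run" ]; then watchdog; fi
```

Running this as a systemd service with `Restart=always` keeps the watchdog itself alive across the forced reconnections.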
Expected Behavior
language_server_linux_x64 should have stable memory usage and not grow unboundedly over time.
Suggested Investigation Areas
- The Go binary may have a goroutine leak or unbounded cache/buffer growth
- Possibly related to gRPC streaming state accumulation (see also issue #284, "Windsurf IDE Freeze Bug Report", which reports 70+ duplicate gRPC calls)
- May be specific to or exacerbated by Remote SSH connections where network latency causes state accumulation
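Since the leaking binary is Go, one low-effort way to distinguish a live-object leak (goroutines, caches) from off-heap growth, even without source access, is the runtime's built-in GC trace. This is a sketch under two assumptions: that the binary can be relaunched through a wrapper script, and that Windsurf does not strip GODEBUG from the environment:

```shell
#!/bin/sh
# Hypothetical wrapper: rename the real binary to *.real and install this
# script in its place so every GC cycle is logged:
#
#   export GODEBUG=gctrace=1
#   exec /path/to/language_server_linux_x64.real "$@" 2>>/tmp/ls-gctrace.log
#
# Each gctrace line contains "A->B->C MB", where C is the live heap after
# collection, e.g. (illustrative line):
#   gc 5 @12.3s 0%: 0.1+2+0.3 ms clock, 1+2/3/4+5 ms cpu, 4->5->3 MB, 8 MB goal, 8 P
# A steadily rising C means objects are being retained (goroutine/cache
# leak); a flat C with rising RSS points at off-heap buffers/fragmentation.

# Extract the live-heap value from each trace line:
heap_mb() { sed -n 's/.*->\([0-9][0-9]*\) MB,.*/\1/p'; }

# Usage: tail -f /tmp/ls-gctrace.log | heap_mb
```

As a complement, sending SIGQUIT to the process dumps all goroutine stacks to stderr before it exits (standard Go runtime behavior), which would directly confirm or rule out a goroutine leak — worth capturing once right before the watchdog's kill threshold is hit.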