From 05757ca1fcd8ae4c38110c60beb7a145fd8d1c79 Mon Sep 17 00:00:00 2001 From: sh Date: Wed, 18 Mar 2026 08:48:21 +0000 Subject: [PATCH 1/4] smp-server: memory usage analysis --- docs/plans/memory-analysis.md | 118 ++++++++++++++++++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 docs/plans/memory-analysis.md diff --git a/docs/plans/memory-analysis.md b/docs/plans/memory-analysis.md new file mode 100644 index 000000000..574a10197 --- /dev/null +++ b/docs/plans/memory-analysis.md @@ -0,0 +1,118 @@ +## Root Cause Analysis: SMP Server Memory Growth (23.5GB) + +### Log Summary + +- **Duration**: ~22 hours (Mar 16 12:12 → Mar 17 10:20) +- **92,277 proxy connection errors** out of 92,656 total log lines (99.6%) +- **292 unique failing destination servers**, top offender: `nowhere.moe` (12,875 errors) +- Only **145 successful proxy connections** + +--- + +### Root Cause #1 (PRIMARY): PostgreSQL Queue Cache Never Evicts + +**Files**: `src/Simplex/Messaging/Server/QueueStore/Postgres.hs` + +The Postgres queue store has `useCache = True` hard-coded (via `Journal.hs:429`). Every queue accessed or created gets inserted into `queues :: TMap RecipientId q` and **is never removed**, even after deletion. + +**Evidence** — `deleteStoreQueue` at Postgres.hs:448-465: +```haskell +deleteStoreQueue st sq = ... do + atomically $ writeTVar qr Nothing -- QueueRec set to Nothing... + when (useCache st) $ do + atomically $ TM.delete (senderId q) $ senders st -- ✅ cleaned + forM_ (notifier q) $ \NtfCreds {notifierId} -> do + atomically $ TM.delete notifierId $ notifiers st -- ✅ cleaned + atomically $ TM.delete notifierId $ notifierLocks st -- ✅ cleaned + -- ❌ NO TM.delete rId $ queues st — zombie entry stays forever! +``` + +Similarly, `links :: TMap LinkId RecipientId` is never cleaned on queue deletion. + +**Impact**: Every queue created and then deleted leaves a zombie in the `queues` map (~200 bytes minimum per entry). On a busy server like smp19 running for days/weeks: +- Millions of queues created/deleted → millions of zombie cache entries +- Active queues also stay cached forever once loaded from Postgres +- **This is the primary unbounded growth mechanism** + +The `loadedQueueCount` Prometheus metric would confirm this — it shows the size of this cache (Prometheus.hs:500). + +--- + +### Root Cause #2 (AMPLIFIER): GHC Heap Sizing with `-A16m -N` + +**RTS flags**: `+RTS -N -A16m -I0.01 -Iw15 -s -RTS` + +With 16 cores: +- **Nursery**: 16 × 16MB = **256MB baseline** +- GHC's default major GC threshold = **2× live data** — if live data is 10GB, GHC allows the heap to grow to **~20GB before triggering major GC** +- VIRT = 1031GB is normal for GHC (address space reservation, not actual memory) + +The `-I0.01` setting is aggressive for idle GC (10ms), but with 22K clients the server is rarely idle, so major GC is deferred. + +--- + +### Root Cause #3 (CONTRIBUTOR): No Cache Eviction by Design + +Looking at `getQueue_` in Postgres.hs:193-244: +- When `useCache = True`, queues are loaded from Postgres on first access +- They are cached in the `queues` TMap **forever** +- There is no LRU eviction, no TTL, no max cache size +- The comment at line 233 acknowledges interaction with `withAllMsgQueues` but no eviction mechanism exists + +--- + +### What is NOT the Main Cause + +1. **Proxy connection state (`smpClients`)**: Properly managed — one entry per destination server, cleaned after 30-second `persistErrorInterval`. Max 292 entries. 
Code at `Client/Agent.hs:196-242` and `Session.hs:24-39` is correct. + +2. **SubscribedClients**: Despite the misleading comment at `Env/STM.hs:376` saying "subscriptions are never removed," they ARE cleaned up on both client disconnect (`Server.hs:1112`) and queue deletion (`Server.hs:308` via `lookupDeleteSubscribedClient`). + +3. **Client structures**: 22K × ~3KB = ~66MB — negligible. + +4. **TBQueues**: Bounded (`TBQueue` with `tbqSize = 128`), properly sized. + +5. **Thread management**: `forkClient` uses weak references, `finally` blocks ensure cleanup. Well-managed. + +--- + +### Memory Budget Estimate + +| Component | Estimated Size | +|-----------|---------------| +| Postgres queue cache (growing) | **5-15+ GB** | +| TLS connection state (22K × ~30KB) | ~660 MB | +| GHC nursery (16 × 16MB) | 256 MB | +| Client structures | ~66 MB | +| Proxy agent state | ~50 KB | +| **GHC old gen headroom (2× live)** | **doubles effective usage** | + +--- + +### Recommended Fixes (Priority Order) + +**1. Remove zombie queue entries from cache on deletion** (immediate fix) + +In `deleteStoreQueue` in `Postgres.hs`, add `TM.delete rId $ queues st` and `forM_ (queueData q) $ \(lnkId, _) -> TM.delete lnkId $ links st`: + +```haskell +-- After the existing cleanup: +when (useCache st) $ do + atomically $ TM.delete rId $ queues st -- ADD THIS + atomically $ TM.delete (senderId q) $ senders st + forM_ (queueData q) $ \(lnkId, _) -> + atomically $ TM.delete lnkId $ links st -- ADD THIS + forM_ (notifier q) $ \NtfCreds {notifierId} -> do + atomically $ TM.delete notifierId $ notifiers st + atomically $ TM.delete notifierId $ notifierLocks st +``` + +**2. Add configuration option to disable cache** — allow `useCache = False` via INI config for Postgres deployments where the DB is fast enough. + +**3. Add cache eviction** — periodically evict queues not accessed recently (LRU or TTL-based). + +**4. Tune GHC RTS**: +- Add `-F1.5` (or lower) to reduce the major GC threshold multiplier from default 2.0 +- Consider `-M` to set a hard heap cap +- Consider `-N8` instead of `-N` to halve the nursery size and reduce per-capability overhead + +**5. Add observability** — log `loadedQueueCount` periodically to track cache growth. This metric exists in Prometheus but should also appear in regular stats logs. 
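To make fix #3 above concrete, here is a minimal sketch of TTL-based eviction, assuming a simplified stand-in for the cache (a plain `Map` in a `TVar` with a last-access timestamp instead of the store's `queues :: TMap RecipientId q`); the names `Cached`, `touchQueue` and `evictionLoop`, and the interval/TTL values, are illustrative only, not existing code or settings.

```haskell
{-# LANGUAGE NumericUnderscores #-}

import Control.Concurrent (threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever)
import Data.Int (Int64)
import qualified Data.Map.Strict as M
import Data.Time.Clock.System (SystemTime (..), getSystemTime)

-- Stand-in for the store's queue cache: the real cache is a TMap keyed by
-- RecipientId; here it is a plain Map in a TVar, with a last-access time.
type RecipientId = String

data Cached q = Cached {cachedValue :: q, lastAccess :: !Int64}

type QueueCache q = TVar (M.Map RecipientId (Cached q))

-- Record a cache hit or insert, refreshing the entry's last-access time.
touchQueue :: QueueCache q -> RecipientId -> q -> IO ()
touchQueue cache rId q = do
  now <- systemSeconds <$> getSystemTime
  atomically $ modifyTVar' cache $ M.insert rId (Cached q now)

-- Periodically drop entries that have not been touched within the TTL;
-- evicted queues are simply reloaded from Postgres on the next access.
evictionLoop :: QueueCache q -> Int64 -> IO ()
evictionLoop cache ttlSeconds = forever $ do
  threadDelay 600_000_000 -- sweep every 10 minutes (illustrative)
  now <- systemSeconds <$> getSystemTime
  atomically $ modifyTVar' cache $
    M.filter (\c -> now - lastAccess c <= ttlSeconds)
```

An LRU with a hard size bound is the stricter alternative; a periodic TTL sweep is simpler, O(n) per pass, and is enough to stop deleted or idle queues from pinning heap indefinitely. A real implementation would also need to coordinate eviction with in-flight delivery for the evicted queue.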
From 6d85ab5d6cc91b1fc398a7a8fd1f862276f3ce26 Mon Sep 17 00:00:00 2001 From: sh Date: Wed, 18 Mar 2026 11:23:35 +0000 Subject: [PATCH 2/4] further analysis --- docs/plans/memory-analysis.md | 247 ++++++++++++++++++++++++---------- 1 file changed, 177 insertions(+), 70 deletions(-) diff --git a/docs/plans/memory-analysis.md b/docs/plans/memory-analysis.md index 574a10197..9c45005fb 100644 --- a/docs/plans/memory-analysis.md +++ b/docs/plans/memory-analysis.md @@ -1,5 +1,12 @@ ## Root Cause Analysis: SMP Server Memory Growth (23.5GB) +### Environment + +- **Server**: smp19.simplex.im, ~21,927 connected clients +- **Storage**: PostgreSQL backend with `useCache = False` +- **RTS flags**: `+RTS -N -A16m -I0.01 -Iw15 -s -RTS` (16 cores) +- **Memory**: 23.5GB RES / 1031GB VIRT (75% of available RAM) + ### Log Summary - **Duration**: ~22 hours (Mar 16 12:12 → Mar 17 10:20) @@ -9,110 +16,210 @@ --- -### Root Cause #1 (PRIMARY): PostgreSQL Queue Cache Never Evicts +### Known Factor: GHC Heap Sizing -**Files**: `src/Simplex/Messaging/Server/QueueStore/Postgres.hs` +With 16 cores and `-A16m`: +- **Nursery**: 16 × 16MB = **256MB baseline** +- GHC default major GC threshold = **2× live data** — if live data is 10GB, heap grows to ~20GB before major GC +- The server is rarely idle with 22K clients, so major GC is deferred despite `-I0.01` +- This is an amplifier — whatever the actual live data size is, GHC roughly doubles it -The Postgres queue store has `useCache = True` hard-coded (via `Journal.hs:429`). Every queue accessed or created gets inserted into `queues :: TMap RecipientId q` and **is never removed**, even after deletion. +--- -**Evidence** — `deleteStoreQueue` at Postgres.hs:448-465: -```haskell -deleteStoreQueue st sq = ... do - atomically $ writeTVar qr Nothing -- QueueRec set to Nothing... - when (useCache st) $ do - atomically $ TM.delete (senderId q) $ senders st -- ✅ cleaned - forM_ (notifier q) $ \NtfCreds {notifierId} -> do - atomically $ TM.delete notifierId $ notifiers st -- ✅ cleaned - atomically $ TM.delete notifierId $ notifierLocks st -- ✅ cleaned - -- ❌ NO TM.delete rId $ queues st — zombie entry stays forever! -``` +### Candidate Structures That Could Grow Unboundedly -Similarly, `links :: TMap LinkId RecipientId` is never cleaned on queue deletion. +Analysis of the full codebase identified these structures that either grow without bound or have uncertain cleanup: -**Impact**: Every queue created and then deleted leaves a zombie in the `queues` map (~200 bytes minimum per entry). On a busy server like smp19 running for days/weeks: -- Millions of queues created/deleted → millions of zombie cache entries -- Active queues also stay cached forever once loaded from Postgres -- **This is the primary unbounded growth mechanism** +#### 1. `SubscribedClients` maps — `Env/STM.hs:378` -The `loadedQueueCount` Prometheus metric would confirm this — it shows the size of this cache (Prometheus.hs:500). +Both `subscribers.queueSubscribers` and `ntfSubscribers.queueSubscribers` (and their `serviceSubscribers`) use `SubscribedClients (TMap EntityId (TVar (Maybe (Client s))))`. ---- +Comment at line 376: *"The subscriptions that were made at any point are not removed"* -### Root Cause #2 (AMPLIFIER): GHC Heap Sizing with `-A16m -N` +`deleteSubcribedClient` IS called on disconnect (Server.hs:1112) and DOES call `TM.delete`. But it only deletes if the current stored client matches — if another client already re-subscribed, the old client's disconnect won't remove the entry. 
This is by design for mobile client continuity, but the net effect on map size over time is unclear without measurement. -**RTS flags**: `+RTS -N -A16m -I0.01 -Iw15 -s -RTS` +#### 2. ProxyAgent's subscription TMaps — `Client/Agent.hs:145-151` -With 16 cores: -- **Nursery**: 16 × 16MB = **256MB baseline** -- GHC's default major GC threshold = **2× live data** — if live data is 10GB, GHC allows the heap to grow to **~20GB before triggering major GC** -- VIRT = 1031GB is normal for GHC (address space reservation, not actual memory) +The `SMPClientAgent` has 4 TMaps that accumulate one top-level entry per unique destination server and **never remove** them: -The `-I0.01` setting is aggressive for idle GC (10ms), but with 22K clients the server is rarely idle, so major GC is deferred. +- `activeServiceSubs :: TMap SMPServer (TVar ...)` (line 145) +- `activeQueueSubs :: TMap SMPServer (TMap QueueId ...)` (line 146) +- `pendingServiceSubs :: TMap SMPServer (TVar ...)` (line 149) +- `pendingQueueSubs :: TMap SMPServer (TMap QueueId ...)` (line 150) ---- +Comment at line 262: *"these vars are never removed, they are only added"* -### Root Cause #3 (CONTRIBUTOR): No Cache Eviction by Design +These are only used for the proxy agent (SParty 'Sender), so they grow with each unique destination SMP server proxied to. With 292 unique servers in this log period, these are likely small — but long-running servers may accumulate thousands. -Looking at `getQueue_` in Postgres.hs:193-244: -- When `useCache = True`, queues are loaded from Postgres on first access -- They are cached in the `queues` TMap **forever** -- There is no LRU eviction, no TTL, no max cache size -- The comment at line 233 acknowledges interaction with `withAllMsgQueues` but no eviction mechanism exists +`closeSMPClientAgent` (line 369) does NOT clear these 4 maps. ---- +#### 3. `NtfStore` — `NtfStore.hs:26` -### What is NOT the Main Cause +`NtfStore (TMap NotifierId (TVar [MsgNtf]))` — one entry per NotifierId. -1. **Proxy connection state (`smpClients`)**: Properly managed — one entry per destination server, cleaned after 30-second `persistErrorInterval`. Max 292 entries. Code at `Client/Agent.hs:196-242` and `Session.hs:24-39` is correct. +`deleteExpiredNtfs` (line 47) filters expired notifications from lists but does **not remove entries with empty lists** from the TMap. Over time, NotifierIds that no longer receive notifications leave zombie `TVar []` entries. -2. **SubscribedClients**: Despite the misleading comment at `Env/STM.hs:376` saying "subscriptions are never removed," they ARE cleaned up on both client disconnect (`Server.hs:1112`) and queue deletion (`Server.hs:308` via `lookupDeleteSubscribedClient`). +`deleteNtfs` (line 44) does remove the full entry via `TM.lookupDelete` — but only called when a notifier is explicitly deleted. -3. **Client structures**: 22K × ~3KB = ~66MB — negligible. +#### 4. `serviceLocks` in PostgresQueueStore — `Postgres.hs:112,469` + +`serviceLocks :: TMap CertFingerprint Lock` — one Lock per unique certificate fingerprint. + +`getCreateService` (line 469) calls `withLockMap (serviceLocks st) fp` which calls `getMapLock` (Agent/Client.hs:1029-1032) — this **unconditionally inserts** a Lock into the TMap. There is **no cleanup code** for serviceLocks anywhere. This is NOT guarded by `useCache`. + +#### 5. `sentCommands` per proxy client connection — `Client.hs:580` + +Each `PClient` has `sentCommands :: TMap CorrId (Request err msg)`. 
Entries are added per command sent (line 1369) and only removed when a response arrives (line 698). If a connection drops before all responses arrive, entries remain until the `PClient` is GC'd. Since `PClient` is captured by the connection thread which terminates on error, the `PClient` should become GC-eligible — but GC timing depends on heap pressure. + +#### 6. `subQ :: TQueue (ClientSub, ClientId)` — `Env/STM.hs:363` + +Unbounded `TQueue` for subscription changes. If the subscriber thread (`serverThread`) can't process changes fast enough, this queue grows without backpressure. With 22K clients subscribing/unsubscribing, sustained bursts could cause this queue to bloat. -4. **TBQueues**: Bounded (`TBQueue` with `tbqSize = 128`), properly sized. +--- + +### Ruled Out -5. **Thread management**: `forkClient` uses weak references, `finally` blocks ensure cleanup. Well-managed. +1. **PostgreSQL queue cache**: `useCache = False` — `queues`, `senders`, `links`, `notifiers` TMaps are empty. +2. **`notifierLocks`**: Guarded by `useCache` (Postgres.hs:377,405) — not used with `useCache = False`. +3. **Client structures**: 22K × ~3KB = ~66MB — negligible. +4. **TBQueues**: Bounded (`tbqSize = 128`). +5. **Thread management**: `forkClient` uses weak refs + `finally` blocks. `endThreads` cleared on disconnect. +6. **Proxy `smpClients`/`smpSessions`**: Properly cleaned on disconnect/expiry. +7. **`smpSubWorkers`**: Properly cleaned on worker completion; also cleared in `closeSMPClientAgent`. +8. **`pendingEvents`**: Atomically swapped empty every `pendingENDInterval`. +9. **Stats IORef counters**: Fixed number, bounded. +10. **DB connection pool**: Bounded `TBQueue` with bracket-based return. --- -### Memory Budget Estimate +### Insufficient Data to Determine Root Cause -| Component | Estimated Size | -|-----------|---------------| -| Postgres queue cache (growing) | **5-15+ GB** | -| TLS connection state (22K × ~30KB) | ~660 MB | -| GHC nursery (16 × 16MB) | 256 MB | -| Client structures | ~66 MB | -| Proxy agent state | ~50 KB | -| **GHC old gen headroom (2× live)** | **doubles effective usage** | +Without measuring the actual sizes of these structures at runtime, we cannot determine which (if any) is the primary contributor. The following exact logging changes will identify the root cause. --- -### Recommended Fixes (Priority Order) +### EXACT LOGS TO ADD -**1. Remove zombie queue entries from cache on deletion** (immediate fix) +Add a new periodic logging thread in `src/Simplex/Messaging/Server.hs`. -In `deleteStoreQueue` in `Postgres.hs`, add `TM.delete rId $ queues st` and `forM_ (queueData q) $ \(lnkId, _) -> TM.delete lnkId $ links st`: +Insert at `Server.hs:197` (after `prometheusMetricsThread_`): ```haskell --- After the existing cleanup: -when (useCache st) $ do - atomically $ TM.delete rId $ queues st -- ADD THIS - atomically $ TM.delete (senderId q) $ senders st - forM_ (queueData q) $ \(lnkId, _) -> - atomically $ TM.delete lnkId $ links st -- ADD THIS - forM_ (notifier q) $ \NtfCreds {notifierId} -> do - atomically $ TM.delete notifierId $ notifiers st - atomically $ TM.delete notifierId $ notifierLocks st + <> memoryDiagThread_ cfg ``` -**2. Add configuration option to disable cache** — allow `useCache = False` via INI config for Postgres deployments where the DB is fast enough. 
+Then define: + +```haskell + memoryDiagThread_ :: ServerConfig s -> [M s ()] + memoryDiagThread_ ServerConfig {prometheusInterval = Just _} = + [memoryDiagThread] + memoryDiagThread_ _ = [] + + memoryDiagThread :: M s () + memoryDiagThread = do + labelMyThread "memoryDiag" + Env { ntfStore = NtfStore ntfMap + , server = srv@Server {subscribers, ntfSubscribers} + , proxyAgent = ProxyAgent {smpAgent = pa} + , msgStore_ = ms + } <- ask + let interval = 300_000_000 -- 5 minutes + liftIO $ forever $ do + threadDelay interval + -- GHC RTS stats + rts <- getRTSStats + let liveBytes = gcdetails_live_bytes $ gc rts + heapSize = gcdetails_mem_in_use_bytes $ gc rts + gcCount = gcs rts + -- Server structures + clientCount <- IM.size <$> getServerClients srv + -- SubscribedClients (queue and service subscribers for both SMP and NTF) + smpQSubs <- M.size <$> getSubscribedClients (queueSubscribers subscribers) + smpSSubs <- M.size <$> getSubscribedClients (serviceSubscribers subscribers) + ntfQSubs <- M.size <$> getSubscribedClients (queueSubscribers ntfSubscribers) + ntfSSubs <- M.size <$> getSubscribedClients (serviceSubscribers ntfSubscribers) + -- Pending events + smpPending <- IM.size <$> readTVarIO (pendingEvents subscribers) + ntfPending <- IM.size <$> readTVarIO (pendingEvents ntfSubscribers) + -- NtfStore + ntfStoreSize <- M.size <$> readTVarIO ntfMap + -- ProxyAgent maps + let SMPClientAgent {smpClients, smpSessions, activeServiceSubs, activeQueueSubs, pendingServiceSubs, pendingQueueSubs, smpSubWorkers} = pa + paClients <- M.size <$> readTVarIO smpClients + paSessions <- M.size <$> readTVarIO smpSessions + paActSvc <- M.size <$> readTVarIO activeServiceSubs + paActQ <- M.size <$> readTVarIO activeQueueSubs + paPndSvc <- M.size <$> readTVarIO pendingServiceSubs + paPndQ <- M.size <$> readTVarIO pendingQueueSubs + paWorkers <- M.size <$> readTVarIO smpSubWorkers + -- Loaded queue counts + lc <- loadedQueueCounts $ fromMsgStore ms + -- Log everything + logInfo $ + "MEMORY " + <> "rts_live=" <> tshow liveBytes + <> " rts_heap=" <> tshow heapSize + <> " rts_gc=" <> tshow gcCount + <> " clients=" <> tshow clientCount + <> " smpQSubs=" <> tshow smpQSubs + <> " smpSSubs=" <> tshow smpSSubs + <> " ntfQSubs=" <> tshow ntfQSubs + <> " ntfSSubs=" <> tshow ntfSSubs + <> " smpPending=" <> tshow smpPending + <> " ntfPending=" <> tshow ntfPending + <> " ntfStore=" <> tshow ntfStoreSize + <> " paClients=" <> tshow paClients + <> " paSessions=" <> tshow paSessions + <> " paActSvc=" <> tshow paActSvc + <> " paActQ=" <> tshow paActQ + <> " paPndSvc=" <> tshow paPndSvc + <> " paPndQ=" <> tshow paPndQ + <> " paWorkers=" <> tshow paWorkers + <> " loadedQ=" <> tshow (loadedQueueCount lc) + <> " loadedNtf=" <> tshow (loadedNotifierCount lc) + <> " ntfLocks=" <> tshow (notifierLockCount lc) +``` + +Note: `smpSubs.subsCount` (queueSubscribers size) and `smpSubs.subServicesCount` (serviceSubscribers size) are **already logged** in Prometheus (lines 475-496). The log above adds all other candidate structures plus GHC RTS memory stats. 
+ +This produces a single log line every 5 minutes: + +``` +[INFO] MEMORY rts_live=10737418240 rts_heap=23488102400 rts_gc=4521 clients=21927 smpQSubs=1847233 smpSSubs=42 ntfQSubs=982112 ntfSSubs=31 smpPending=0 ntfPending=0 ntfStore=512844 paClients=12 paSessions=12 paActSvc=0 paActQ=0 paPndSvc=0 paPndQ=0 paWorkers=3 loadedQ=0 loadedNtf=0 ntfLocks=0 +``` + +### What Each Metric Tells Us + +| Metric | What it reveals | If growing = suspect | +|--------|----------------|---------------------| +| `rts_live` | Actual live data after last major GC | Baseline — everything else should add up to this | +| `rts_heap` | Total heap (should be ~2× rts_live) | If >> 2× live, fragmentation issue | +| `clients` | Connected client count | Known: ~22K | +| `smpQSubs` | SubscribedClients map size (queue subs) | If >> clients × avg_subs, entries not cleaned | +| `smpSSubs` | SubscribedClients map size (service subs) | Should be small | +| `ntfQSubs` | NTF SubscribedClients map (queue subs) | Same concern as smpQSubs | +| `ntfSSubs` | NTF SubscribedClients map (service subs) | Should be small | +| `smpPending` / `ntfPending` | Pending END/DELD events per client | If large, subscriber thread lagging | +| `ntfStore` | NotifierId count in NtfStore | If growing monotonically, zombie entries | +| `paClients` | Proxy connections to other servers | Should be <= unique dest servers | +| `paSessions` | Active proxy sessions | Should match paClients | +| `paActSvc` / `paActQ` | Proxy active subscriptions | If growing, entries never removed | +| `paPndSvc` / `paPndQ` | Proxy pending subscriptions | If growing, resubscription stuck | +| `paWorkers` | Active reconnect workers | If growing, workers stuck in retry | +| `loadedQ` | Cached queues in store (0 with useCache=False) | Should be 0 | +| `ntfLocks` | Notifier locks in store | Should be 0 with useCache=False | + +### Interpretation Guide + +**If `smpQSubs` is in the millions**: SubscribedClients is the primary leak. Entries accumulate for every queue ever subscribed to. + +**If `ntfStore` grows monotonically**: Zombie notification entries (empty lists after expiration). Fix: `deleteExpiredNtfs` should remove entries with empty lists. + +**If `paActSvc` + `paActQ` grow**: Proxy agent subscription maps are the leak. Fix: add cleanup when no active/pending subs exist for a server. -**3. Add cache eviction** — periodically evict queues not accessed recently (LRU or TTL-based). +**If `rts_live` is much smaller than `rts_heap`**: GHC heap fragmentation. Fix: tune `-F` flag (GC trigger factor) or use `-c` (compacting GC). -**4. Tune GHC RTS**: -- Add `-F1.5` (or lower) to reduce the major GC threshold multiplier from default 2.0 -- Consider `-M` to set a hard heap cap -- Consider `-N8` instead of `-N` to halve the nursery size and reduce per-capability overhead +**If `rts_live` ~ 10-12GB**: The live data is genuinely large. Look at which metric is the largest contributor. -**5. Add observability** — log `loadedQueueCount` periodically to track cache growth. This metric exists in Prometheus but should also appear in regular stats logs. +**If nothing above is large but `rts_live` is large**: The leak is in a structure not measured here — likely TLS connection buffers, ByteString retention from Postgres queries, or GHC runtime overhead. Next step would be heap profiling with `-hT`. 
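To illustrate the `ntfStore` fix suggested above (removing entries whose notification list has become empty), here is a minimal sketch against a simplified stand-in for `NtfStore (TMap NotifierId (TVar [MsgNtf]))`; `dropEmptyNtfEntries` is an illustrative name, and in practice this logic would be folded into the existing `deleteExpiredNtfs` sweep.

```haskell
import Control.Concurrent.STM
import Control.Monad (forM_, when)
import qualified Data.Map.Strict as M

-- Stand-in mirroring NtfStore (TMap NotifierId (TVar [MsgNtf])).
type NotifierId = String

type NtfStore ntf = TVar (M.Map NotifierId (TVar [ntf]))

-- After expired notifications have been filtered out of the inner lists,
-- drop map entries whose list is now empty, so NotifierIds that no longer
-- receive notifications do not leave zombie `TVar []` entries behind.
dropEmptyNtfEntries :: NtfStore ntf -> IO ()
dropEmptyNtfEntries store = do
  entries <- M.toList <$> readTVarIO store
  forM_ entries $ \(nId, ntfsVar) ->
    -- check and delete in a single transaction per key, so a concurrent
    -- writer that looks up the entry and appends atomically cannot lose
    -- a notification to the sweep
    atomically $ do
      ntfs <- readTVar ntfsVar
      when (null ntfs) $ modifyTVar' store $ M.delete nId
```

Checking the inner list and deleting the outer key in one transaction per entry keeps the sweep safe, provided writers also look up the entry and append within a single STM transaction: either the writer commits first and the entry is kept, or the sweep commits first and the writer re-creates the entry via its lookup-or-create path.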
From 37b94c52726c8bb574df1fe9a52b00d0f055e9bc Mon Sep 17 00:00:00 2001 From: sh Date: Thu, 19 Mar 2026 07:11:56 +0000 Subject: [PATCH 3/4] smp-server: add periodic memory diagnostics logging Log sizes of all in-memory data structures every 5 minutes to help identify memory growth root cause on busy servers. --- src/Simplex/Messaging/Server.hs | 57 ++++++++++++++++++++++++++++++++- 1 file changed, 56 insertions(+), 1 deletion(-) diff --git a/src/Simplex/Messaging/Server.hs b/src/Simplex/Messaging/Server.hs index 3d977dc8c..a6d63e76d 100644 --- a/src/Simplex/Messaging/Server.hs +++ b/src/Simplex/Messaging/Server.hs @@ -91,7 +91,7 @@ import qualified Data.X509 as X import qualified Data.X509.Validation as XV import GHC.Conc.Signal import GHC.IORef (atomicSwapIORef) -import GHC.Stats (getRTSStats) +import GHC.Stats (RTSStats (..), GCDetails (..), getRTSStats) import GHC.TypeLits (KnownNat) import Network.Socket (ServiceName, Socket, socketToHandle) import qualified Network.TLS as TLS @@ -198,6 +198,7 @@ smpServer started cfg@ServerConfig {transports, transportConfig = tCfg, startOpt <> serverStatsThread_ cfg <> prometheusMetricsThread_ cfg <> controlPortThread_ cfg + <> [memoryDiagThread] ) `finally` stopServer s where @@ -719,6 +720,60 @@ smpServer started cfg@ServerConfig {transports, transportConfig = tCfg, startOpt Nothing -> acc Just (_, ts) -> (cnt + 1, updateTimeBuckets ts ts' times) + memoryDiagThread :: M s () + memoryDiagThread = do + labelMyThread "memoryDiag" + Env + { ntfStore = NtfStore ntfMap, + server = srv@Server {subscribers, ntfSubscribers}, + proxyAgent = ProxyAgent {smpAgent = pa}, + msgStore_ = ms + } <- ask + let SMPClientAgent {smpClients, smpSessions, activeServiceSubs, activeQueueSubs, pendingServiceSubs, pendingQueueSubs, smpSubWorkers} = pa + liftIO $ forever $ do + threadDelay 300_000_000 -- 5 minutes + rts <- getRTSStats + let GCDetails {gcdetails_live_bytes, gcdetails_mem_in_use_bytes} = gc rts + clientCount <- IM.size <$> getServerClients srv + smpQSubs <- M.size <$> getSubscribedClients (queueSubscribers subscribers) + smpSSubs <- M.size <$> getSubscribedClients (serviceSubscribers subscribers) + ntfQSubs <- M.size <$> getSubscribedClients (queueSubscribers ntfSubscribers) + ntfSSubs <- M.size <$> getSubscribedClients (serviceSubscribers ntfSubscribers) + smpPending <- IM.size <$> readTVarIO (pendingEvents subscribers) + ntfPending <- IM.size <$> readTVarIO (pendingEvents ntfSubscribers) + ntfStoreSize <- M.size <$> readTVarIO ntfMap + paClients' <- M.size <$> readTVarIO smpClients + paSessions' <- M.size <$> readTVarIO smpSessions + paActSvc <- M.size <$> readTVarIO activeServiceSubs + paActQ <- M.size <$> readTVarIO activeQueueSubs + paPndSvc <- M.size <$> readTVarIO pendingServiceSubs + paPndQ <- M.size <$> readTVarIO pendingQueueSubs + paWorkers <- M.size <$> readTVarIO smpSubWorkers + lc <- loadedQueueCounts $ fromMsgStore ms + logInfo $ + "MEMORY" + <> " rts_live=" <> tshow gcdetails_live_bytes + <> " rts_heap=" <> tshow gcdetails_mem_in_use_bytes + <> " rts_gc=" <> tshow (gcs rts) + <> " clients=" <> tshow clientCount + <> " smpQSubs=" <> tshow smpQSubs + <> " smpSSubs=" <> tshow smpSSubs + <> " ntfQSubs=" <> tshow ntfQSubs + <> " ntfSSubs=" <> tshow ntfSSubs + <> " smpPending=" <> tshow smpPending + <> " ntfPending=" <> tshow ntfPending + <> " ntfStore=" <> tshow ntfStoreSize + <> " paClients=" <> tshow paClients' + <> " paSessions=" <> tshow paSessions' + <> " paActSvc=" <> tshow paActSvc + <> " paActQ=" <> tshow paActQ + <> " paPndSvc=" <> tshow 
paPndSvc + <> " paPndQ=" <> tshow paPndQ + <> " paWorkers=" <> tshow paWorkers + <> " loadedQ=" <> tshow (loadedQueueCount lc) + <> " loadedNtf=" <> tshow (loadedNotifierCount lc) + <> " ntfLocks=" <> tshow (notifierLockCount lc) + runClient :: Transport c => X.CertificateChain -> C.APrivateSignKey -> TProxy c 'TServer -> c 'TServer -> M s () runClient srvCert srvSignKey tp h = do ms <- asks msgStore From 0012f695819d028c78d4f5223684a2f12ec8f232 Mon Sep 17 00:00:00 2001 From: sh Date: Thu, 19 Mar 2026 13:53:39 +0000 Subject: [PATCH 4/4] add memory-analysis-results based on the produced logs --- docs/plans/memory-analysis-results.md | 37 +++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) create mode 100644 docs/plans/memory-analysis-results.md diff --git a/docs/plans/memory-analysis-results.md b/docs/plans/memory-analysis-results.md new file mode 100644 index 000000000..98bffffb5 --- /dev/null +++ b/docs/plans/memory-analysis-results.md @@ -0,0 +1,37 @@ +## Memory Diagnostics Results (5.5 hours, 07:49 - 13:19, Mar 19) + +**rts_heap (total process heap) — grows monotonically, never shrinks:** +``` +07:49 10.1 GB +09:24 14.1 GB +11:24 18.7 GB +13:19 20.7 GB +``` +Growth: **+10.6 GB in 5.5 hours** (~1.9 GB/hour). GHC never returns memory to the OS. + +**rts_live (actual live data) — sawtooth pattern, minimums growing:** +``` +07:54 5.5 GB (post-GC valley) +08:54 6.2 GB +09:44 6.6 GB +11:24 6.6 GB +13:14 9.1 GB +``` +The post-GC floor is rising: **+3.6 GB over 5.5 hours**. This confirms a genuine leak. + +**But smpQSubs is NOT the cause** — it oscillates between 1.2M-1.4M, not growing monotonically. At ~130 bytes/entry, 1.4M entries = ~180MB. Can't explain 9GB. + +**clients** oscillates 14K-20K, also not monotonically growing. + +**Everything else is tiny**: ntfStore ~7K entries (<1MB), paClients ~350 (~50KB), all other metrics near 0. + +**The leak is in something we're not measuring.** ~6-9GB of live data is unaccounted for by all tracked structures. The most likely candidates are: + +1. **Per-client state we didn't measure** — the *contents* of TBQueues (buffered messages), per-client `subscriptions` TMap contents (Sub records with TVars) +2. **TLS connection buffers** — the `tls` library allocates internal state per connection +3. **Pinned ByteStrings** from PostgreSQL queries — these aren't collected by normal GC +4. **GHC heap fragmentation** — pinned objects cause block-level fragmentation + +The next step is either: +- **Add more metrics**: measure total TBQueue fill across all clients, total subscription count, and pinned byte count from RTS stats +- **Run with `-hT`**: heap profiling by type to see exactly what's consuming memory
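Of the follow-up metrics listed above, total TBQueue fill is easy to sample from the same diagnostics thread. A minimal sketch, assuming simplified per-client types (the `Client` record and its `sndQ`/`msgQ` fields here are illustrative stand-ins, not the server's actual definitions):

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Control.Concurrent.STM
import Numeric.Natural (Natural)

-- Illustrative stand-in for the server's per-client record; the real
-- Client keeps several bounded TBQueues (tbqSize = 128 per the analysis).
data Client msg = Client
  { sndQ :: TBQueue msg,
    msgQ :: TBQueue msg
  }

-- Total buffered messages across all connected clients. One transaction
-- per client keeps this cheap for ~22K clients; the total is therefore a
-- slightly inconsistent snapshot, which is fine for a diagnostic metric.
totalBufferedMsgs :: [Client msg] -> IO Natural
totalBufferedMsgs clients = sum <$> mapM clientBuffered clients
  where
    clientBuffered Client {sndQ, msgQ} =
      atomically $ (+) <$> lengthTBQueue sndQ <*> lengthTBQueue msgQ
```

The per-client transactions avoid a single large STM transaction over ~22K clients; the result could be logged alongside the existing MEMORY line to show how much of `rts_live` is messages sitting in bounded queues.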