
Conversation

@gabsuren (Collaborator) commented Oct 28, 2025

TODO - Will remove these comments after the review (left in for easier review).

Description

This PR fixes critical memory leaks and crashes in the ESP WebSocket client that occur during reconnection scenarios (with CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK=y). It addresses three issues:

  • Double-free crashes: Heap corruption during abort/reconnect scenarios
  • Data loss: First packet after reconnection not received
  • Error buffer accumulation: 2KB memory leak on disconnect

Changes Made:

  • Add state check in abort_connection to prevent double-close
  • Fix memory leak: free errormsg_buffer on disconnect
  • Reset connection state on reconnect to prevent stale data
  • Implement lock ordering for separate TX lock mode
  • Add sdkconfig.ci.tx_lock CI config

Related

#898

Checklist

Before submitting a Pull Request, please ensure the following:

  • [ ] 🚨 This PR does not introduce breaking changes.
  • [✓] All CI checks (GH Actions) pass.
  • [✓] Documentation is updated as needed.
  • [ ] Tests are updated or added as necessary.
  • [✓] Code is well-commented, especially in complex areas.
  • [✓] Git history is clean; commits are squashed to the minimum necessary.

Note

Prevents double-close races and error-buffer leaks, improves TX lock ordering and PONG handling, resets state on connect, and tweaks recv/poll flow; adds tx-lock CI config.

  • esp_websocket_client.c:
    • Abort/Disconnect:
      • Add state guard in esp_websocket_client_abort_connection() to skip the abort when the connection is already closing/closed, and set the error type (sketched below).
      • Free and null client->errormsg_buffer to fix the leak; add debug logging.
    • Locking/Send path:
      • On send errors, coordinate tx_lock and lock (when CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK) before calling esp_websocket_client_abort_connection().
      • PONG handling: release/reacquire locks with a timeout, check state/transport before sending, and free the RX buffer on early return.
    • On connect:
      • Reset payload_len/offset, last_fin, and last_opcode.
      • Perform an immediate non-blocking read poll and invoke recv to capture early data (sketched below).
    • Recv/poll loop:
      • Acquire client->lock before recv when read_select > 0; avoid duplicate lock operations (sketched below).
    • Cleanup:
      • After destroying the transport list, null client->transport_list and client->transport.
  • Examples/Config:
    • Add examples/target/sdkconfig.ci.tx_lock enabling CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK and the TX lock timeout (sketched below).
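
To make the Abort/Disconnect bullets concrete, here is a minimal sketch of the guarded abort path. The field and state names (client->state, client->errormsg_buffer, WEBSOCKET_STATE_CLOSING, WEBSOCKET_STATE_UNKNOW) are modeled on the public esp_websocket_client sources; this illustrates the idea, not the literal diff:

    static void esp_websocket_client_abort_connection(esp_websocket_client_handle_t client,
                                                      esp_websocket_error_type_t error_type)
    {
        // Double-close guard: if another task already started (or finished)
        // closing, closing the transport again would free resources twice and
        // corrupt the heap. Record the error type and return instead.
        if (client->state == WEBSOCKET_STATE_CLOSING || client->state == WEBSOCKET_STATE_UNKNOW) {
            client->error_handle.error_type = error_type;
            ESP_LOGD(TAG, "Abort skipped: connection already closing/closed");
            return;
        }

        esp_transport_close(client->transport);

        // Plug the ~2 KB leak: release the per-session error-message buffer
        // on disconnect instead of letting it accumulate across reconnects.
        free(client->errormsg_buffer);
        client->errormsg_buffer = NULL;

        client->error_handle.error_type = error_type;
        client->state = WEBSOCKET_STATE_WAIT_TIMEOUT;
        esp_websocket_client_dispatch_event(client, WEBSOCKET_EVENT_DISCONNECTED, NULL, 0);
    }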
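
Similarly, a sketch of the On connect reset and the immediate poll that captures early data. esp_transport_poll_read() is the public esp-tcp-transport call; the recv helper name is an assumption:

    // On (re)connect: clear frame-parser state left over from the previous
    // session so stale values cannot mis-frame the first packet, then do one
    // non-blocking poll so data the server sent right after the handshake is
    // consumed rather than lost.
    client->payload_len = 0;
    client->payload_offset = 0;
    client->last_fin = false;
    client->last_opcode = WS_TRANSPORT_OPCODES_NONE;

    // A 0 ms timeout makes this a pure poll; > 0 means bytes are already buffered.
    if (esp_transport_poll_read(client->transport, 0) > 0) {
        esp_websocket_client_recv(client);   // internal recv helper (name assumed)
    }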
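
The Recv/poll loop change follows the same pattern; a sketch, again with assumed surrounding names:

    // Only take client->lock once select reports readable data, then recv
    // under the lock; this avoids redundant lock/unlock cycles on idle wakeups.
    int read_select = esp_transport_poll_read(client->transport, timeout_ms);
    if (read_select > 0) {
        xSemaphoreTakeRecursive(client->lock, portMAX_DELAY);
        esp_websocket_client_recv(client);
        xSemaphoreGiveRecursive(client->lock);
    }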
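
The CI fragment would look roughly like this; only CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK is confirmed by the thread, so the sketch stops there:

    # examples/target/sdkconfig.ci.tx_lock (sketch)
    CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK=y
    # The PR also sets a TX lock timeout option; its exact symbol name is not
    # shown in this thread, so it is omitted here.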

Written by Cursor Bugbot for commit 52abfc0. This will update automatically on new commits.

@CLAassistant commented Oct 28, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

cursor[bot]

This comment was marked as outdated.

@gabsuren changed the title from "Fix/ws race on abort" to "fix(websocket): Fix websocket client race on abort and memory leak (IDFGH-16555)" on Oct 28, 2025
@gabsuren force-pushed the fix/ws_race_on_abort branch 3 times, most recently from 67bd7e3 to 46871bf on October 28, 2025 13:09
#else
// When separate TX lock is not configured, we already hold client->lock
// which protects the transport, so we can send PONG directly
esp_transport_ws_send_raw(client->transport, WS_TRANSPORT_OPCODES_PONG | WS_TRANSPORT_OPCODES_FIN, data, client->payload_len,

Check warning (Code scanning / clang-tidy):

The value '138' provided to the cast expression is not in the valid range of values for 'ws_transport_opcodes' [clang-analyzer-optin.core.EnumCastOutOfRange]
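
For context, the flagged value is just the OR of two enumerators, as this standalone sketch shows (enumerator values copied from esp_transport_ws.h):

    #include <stdio.h>

    // Excerpt of the opcode enum from esp_transport_ws.h (values as declared).
    typedef enum {
        WS_TRANSPORT_OPCODES_PONG = 0x0a,
        WS_TRANSPORT_OPCODES_FIN  = 0x80,
    } ws_transport_opcodes_t;

    int main(void)
    {
        // 0x0a | 0x80 == 0x8a == 138: a valid WebSocket header byte (FIN bit
        // plus PONG opcode) but not a declared enumerator, which is what the
        // clang-analyzer EnumCastOutOfRange check objects to when the result
        // is passed where ws_transport_opcodes_t is expected.
        int header = WS_TRANSPORT_OPCODES_PONG | WS_TRANSPORT_OPCODES_FIN;
        printf("header byte = %d (0x%02x)\n", header, header);
        return 0;
    }
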
@gabsuren requested a review from david-cermak on October 29, 2025 09:09
@gabsuren force-pushed the fix/ws_race_on_abort branch from 46871bf to 5577e03 on October 29, 2025 10:54
cursor[bot]

This comment was marked as outdated.

@gabsuren force-pushed the fix/ws_race_on_abort branch 3 times, most recently from ca2956e to 0e58789 on October 30, 2025 10:53
}
ESP_LOGD(TAG, "Calling abort_connection due to send error");
#ifdef CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK
xSemaphoreGiveRecursive(client->tx_lock);
@euripedesrocha (Collaborator) commented:

It would be better to move this verification into the abort_connection function.

@gabsuren (Collaborator, Author) replied:

@euripedesrocha I am not sure about that: abort_connection is used in 5 different places, and moving this inside abort_connection would add complex detection logic ("which lock am I holding?"). Only one place (the send-error path) needs lock switching (tx_lock → client->lock).
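
For reference, the lock switch stays local to the send-error path shown above. A minimal sketch of the sequence, assuming the surrounding field names follow the client sources (the FreeRTOS calls themselves are real):

    // Send-error path under CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK: the sender
    // holds tx_lock, but abort_connection must run under client->lock. Release
    // tx_lock first so the two locks are always taken in one order
    // (client->lock before tx_lock), which rules out the abort/send deadlock.
    #ifdef CONFIG_ESP_WS_CLIENT_SEPARATE_TX_LOCK
    xSemaphoreGiveRecursive(client->tx_lock);                 // 1. drop TX lock
    xSemaphoreTakeRecursive(client->lock, portMAX_DELAY);     // 2. take main lock
    esp_websocket_client_abort_connection(client, WEBSOCKET_ERROR_TYPE_TCP_TRANSPORT);
    xSemaphoreGiveRecursive(client->lock);                    // 3. release main lock
    xSemaphoreTakeRecursive(client->tx_lock, portMAX_DELAY);  // 4. reacquire so the
                                                              //    caller's unlock balances
    #else
    esp_websocket_client_abort_connection(client, WEBSOCKET_ERROR_TYPE_TCP_TRANSPORT);
    #endif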

@gabsuren force-pushed the fix/ws_race_on_abort branch from 0e58789 to 62925a5 on November 10, 2025 10:13
@gabsuren force-pushed the fix/ws_race_on_abort branch 2 times, most recently from 15dcb35 to f474654 on November 10, 2025 10:27
@gabsuren (Collaborator, Author) commented:

#898 (comment)

@gabsuren force-pushed the fix/ws_race_on_abort branch from 50e3068 to 22eb17e on November 19, 2025 11:44
- Add state check in abort_connection to prevent double-close
- Fix memory leak: free errormsg_buffer on disconnect
- Reset connection state on reconnect to prevent stale data
- Implement lock ordering for separate TX lock mode
- Read buffered data immediately after connection to prevent data loss
- Added sdkconfig.ci.tx_lock config
@gabsuren force-pushed the fix/ws_race_on_abort branch from 22eb17e to 52abfc0 on November 19, 2025 11:50
