
Conversation

@lucasssvaz
Member

Description of Change

This pull request adds robust support for multi-device tests in the CI build and run scripts. The changes introduce new functions for detecting, building, and running multi-device tests, and ensure that these tests are handled separately from regular single-device tests. The updates also improve argument handling, error reporting, and test selection logic for both build and run workflows.

Multi-device test support:

  • Added is_multi_device_test functions to both .github/scripts/tests_build.sh and .github/scripts/tests_run.sh to detect tests configured for multiple devices via multi_device in ci.yml (see the sketch after this list).
  • Implemented build_multi_device_sketch and build_multi_device_test functions in tests_build.sh to build all required device sketches for multi-device tests, using custom build directories and handling errors per device.
  • Added run_multi_device_test function in tests_run.sh to run multi-device tests with proper argument construction, port assignment, and result aggregation, including retries and platform checks.
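
To make the detection step concrete, here is a minimal, hypothetical sketch of such a helper. It assumes each test directory carries a ci.yml with an optional multi_device key; the actual functions in tests_build.sh and tests_run.sh may parse the file differently.

```bash
#!/usr/bin/env bash
# Hypothetical detection helper; the real is_multi_device_test may differ.
is_multi_device_test() {
    local test_dir="$1"
    local ci_file="$test_dir/ci.yml"

    # Not a configurable test directory if there is no ci.yml.
    [ -f "$ci_file" ] || return 1

    # Treat the test as multi-device if ci.yml declares a multi_device entry.
    grep -qE '^[[:space:]]*multi_device[[:space:]]*:' "$ci_file"
}

# Example usage:
# is_multi_device_test "tests/validation/wifi_ap" && echo "multi-device test"
```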

Build and run workflow changes:

  • Updated both scripts to search for and handle multi-device test directories when a sketch name is provided, falling back to regular sketch lookup if not found.
  • Modified chunked and regular build/run logic to process all multi-device tests first, then proceed with regular single-device tests, ensuring proper error handling and reporting for each (sketched below).
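
As a rough illustration of that ordering, the chunked loop could look like the following sketch. The helper names run_multi_device_test and run_single_device_test mirror the description above (the latter stands in for the existing single-device path); the surrounding loop and error aggregation are assumptions, not the script's actual code.

```bash
# Illustrative two-pass loop: multi-device tests first, then regular ones.
process_test_chunk() {
    local test_root="$1"
    local exit_code=0

    # Pass 1: multi-device tests (directories whose ci.yml declares multi_device).
    for dir in "$test_root"/*/; do
        if is_multi_device_test "$dir"; then
            run_multi_device_test "$dir" || exit_code=1
        fi
    done

    # Pass 2: regular single-device tests.
    for dir in "$test_root"/*/; do
        if ! is_multi_device_test "$dir"; then
            run_single_device_test "$dir" || exit_code=1
        fi
    done

    return "$exit_code"
}
```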

General improvements and bug fixes:

  • Improved argument parsing and error messages for missing targets and ports, including default port assignment for multi-device scenarios.
  • Refactored test invocation to consistently use arrays for pytest command construction, improving quoting and argument handling (see the sketch after this list).
  • Updated count_sketches in sketch_utils.sh to skip sketches that are part of multi-device tests, preventing duplicate builds.
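
For the array-based pytest invocation, a sketch along these lines shows the idea. The file paths and port list are placeholders, and the pipe-joined --port value follows pytest-embedded's multi-DUT convention; the script's real arguments may differ.

```bash
# Placeholder values for illustration only.
report_file="results.xml"
test_file="tests/validation/wifi_ap/test_wifi_ap.py"
ports=("/dev/ttyUSB0" "/dev/ttyUSB1")   # assumed default port assignment

# Build the command as an array so each argument stays a single, properly
# quoted word even if a value contains spaces.
pytest_args=(
    --junit-xml="$report_file"
    --count "${#ports[@]}"
    --port "$(IFS='|'; echo "${ports[*]}")"
)

pytest "${pytest_args[@]}" "$test_file"
```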

Test Scenarios

Tested locally

Related links

Requires espressif/pytest-embedded#393

@lucasssvaz lucasssvaz requested a review from me-no-dev November 24, 2025 23:55
@lucasssvaz lucasssvaz self-assigned this Nov 24, 2025
@lucasssvaz lucasssvaz added the Type: CI & Testing (related to continuous integration, automated testing, or test infrastructure) and Status: Blocked upstream 🛑 (PR is waiting on upstream changes to be merged first) labels Nov 24, 2025
@github-actions
Contributor

github-actions bot commented Nov 24, 2025

Messages
📖 This PR seems to be quite large (total lines of code: 1582); you might consider splitting it into smaller PRs.

👋 Hello lucasssvaz, we appreciate your contribution to this project!


📘 Please review the project's Contributions Guide for key guidelines on code, documentation, testing, and more.

🖊️ Please also make sure you have read and signed the Contributor License Agreement for this project.

Click to see more instructions ...


This automated output is generated by the PR linter DangerJS, which checks if your Pull Request meets the project's requirements and helps you fix potential issues.

DangerJS is triggered with each push event to a Pull Request and modifies the contents of this comment.

Please consider the following:
- Danger mainly focuses on the PR structure and formatting and can't understand the meaning behind your code or changes.
- Danger is not a substitute for human code reviews; it's still important to request a code review from your colleagues.
- Addressing info messages (📖) is strongly recommended; they're less critical but valuable.
- To manually retry these Danger checks, please navigate to the Actions tab and re-run the last Danger workflow.

Review and merge process you can expect ...


We do welcome contributions in the form of bug reports, feature requests and pull requests.

1. An internal issue has been created for the PR, and we assign it to the relevant engineer.
2. They review the PR and either approve it or ask you for changes or clarifications.
3. Once the GitHub PR is approved, we do the final review, collect approvals from core owners, and make sure all the automated tests are passing.
- At this point we may make some adjustments to the proposed change, or extend it by adding tests or documentation.
4. If the change is approved and passes the tests, it is merged into the default branch.

Generated by 🚫 dangerJS against 0bea36f

@github-actions
Contributor

github-actions bot commented Nov 25, 2025

Test Results

0 tests   0 ✅  0s ⏱️
0 suites  0 💤
0 files    0 ❌

Results for commit 0bea36f.

♻️ This comment has been updated with latest results.

@github-actions
Contributor

github-actions bot commented Nov 25, 2025

Memory usage test (comparing PR against master branch)

The table below summarizes the memory usage change (decrease or increase) in bytes and as a percentage for each target.

| Target | FLASH [bytes] DEC | FLASH [bytes] INC | FLASH [%] DEC | FLASH [%] INC | RAM [bytes] DEC | RAM [bytes] INC | RAM [%] DEC | RAM [%] INC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ESP32C5 | 0 | ⚠️ +76 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32P4 | 0 | ⚠️ +76 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32S3 | 0 | ⚠️ +64 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32S2 | 0 | 0 | 0.00 | 0.00 | 0 | 0 | 0.00 | 0.00 |
| ESP32C3 | 0 | ⚠️ +76 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32C6 | 0 | ⚠️ +76 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32H2 | 0 | ⚠️ +76 | 0.00 | ⚠️ +0.01 | 0 | 0 | 0.00 | 0.00 |
| ESP32 | 0 | ⚠️ +20 | 0.00 | 0.00 | 0 | 0 | 0.00 | 0.00 |
Click to expand the detailed deltas report [usage change in BYTES]
Each cell shows the FLASH / RAM change in bytes for that example and target.

| Example | ESP32C5 | ESP32P4 | ESP32S3 | ESP32S2 | ESP32C3 | ESP32C6 | ESP32H2 | ESP32 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| libraries/BLE/examples/Server | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +64 / 0 | -- | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +20 / 0 |
| libraries/BLE/examples/Server_secure_authorization | ⚠️ +76 / 0 | -- | ⚠️ +64 / 0 | -- | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +76 / 0 | -- |
| libraries/BLE/examples/Server_secure_static_passkey | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +64 / 0 | -- | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +76 / 0 | ⚠️ +20 / 0 |
| libraries/Insights/examples/MinimalDiagnostics | 0 / 0 | -- | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | -- | 0 / 0 |
| libraries/NetworkClientSecure/examples/WiFiClientSecure | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | -- | 0 / 0 |
| libraries/ESP32/examples/Camera/CameraWebServer | -- | -- | 0 / 0 | 0 / 0 | -- | -- | -- | 0 / 0 |
| ESP32/examples/Camera/CameraWebServer (2) | -- | -- | 0 / 0 | 0 / 0 | -- | -- | -- | 0 / 0 |
| ESP32/examples/Camera/CameraWebServer (3) | -- | -- | 0 / 0 | -- | -- | -- | -- | -- |

@lucasssvaz lucasssvaz added the CI Failure Expected (for PRs where CI failure is expected) label Nov 25, 2025
@me-no-dev me-no-dev requested a review from Copilot November 26, 2025 10:13
Copilot finished reviewing on behalf of me-no-dev November 26, 2025 10:16
Contributor

Copilot AI left a comment


Pull request overview

This pull request adds comprehensive support for multi-device testing in the CI infrastructure, enabling tests that require communication between multiple ESP32 devices (e.g., WiFi AP/client, BLE server/client). The changes introduce detection, building, and running capabilities for multi-device tests while maintaining backward compatibility with single-device tests.

Key Changes

  • Added multi-device test infrastructure with functions to detect, build, and run tests requiring multiple devices (a per-device build sketch follows this list)
  • Implemented two example multi-device test suites: WiFi AP/client and BLE server/client communication tests
  • Enhanced documentation with comprehensive guide for creating, building, and running multi-device tests
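
To picture the build side, here is a hedged sketch of building each device sketch of a multi-device test into its own build directory, as described above. The directory layout, the build path under $HOME/.arduino, and the compile_sketch helper are assumptions; the actual build_multi_device_sketch/build_multi_device_test functions in tests_build.sh may be structured differently.

```bash
# Hypothetical per-device build loop for a multi-device test such as
# tests/validation/wifi_ap (with ap/ and client/ device sketches).
build_all_device_sketches() {
    local test_dir="$1"     # e.g. tests/validation/wifi_ap
    local target="$2"       # e.g. esp32
    local result=0
    local test_name
    test_name="$(basename "$test_dir")"

    for device_dir in "$test_dir"/*/; do
        local device_name
        device_name="$(basename "$device_dir")"
        local sketch="$device_dir$device_name.ino"
        [ -f "$sketch" ] || continue

        # Separate build directory per device so artifacts do not collide.
        local build_dir="$HOME/.arduino/tests/$target/$test_name/$device_name"
        mkdir -p "$build_dir"

        # compile_sketch stands in for whatever the script uses to call arduino-cli.
        if ! compile_sketch "$sketch" "$target" "$build_dir"; then
            echo "ERROR: build failed for device '$device_name' of test '$test_name'" >&2
            result=1
        fi
    done

    return "$result"
}
```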

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 18 comments.

Show a summary per file
| File | Description |
| --- | --- |
| tests/validation/wifi_ap/test_wifi_ap.py | Python test orchestrating WiFi AP and client communication |
| tests/validation/wifi_ap/ap/ap.ino | Arduino sketch for WiFi Access Point device |
| tests/validation/wifi_ap/client/client.ino | Arduino sketch for WiFi client device |
| tests/validation/wifi_ap/ci.yml | Configuration defining multi-device setup and platform support |
| tests/validation/ble/test_ble.py | Python test coordinating BLE server/client pairing and secure communication |
| tests/validation/ble/server/server.ino | BLE server sketch with secure characteristic support |
| tests/validation/ble/client/client.ino | BLE client sketch with service discovery and authentication |
| tests/validation/ble/ci.yml | BLE test configuration with multi-device specification |
| tests/conftest.py | Added helper functions for IP validation and random string generation |
| .github/scripts/tests_run.sh | Enhanced with multi-device test detection, port management, and execution logic |
| .github/scripts/tests_build.sh | Updated with multi-device build functions and custom build directory handling |
| .github/scripts/sketch_utils.sh | Modified sketch counting to skip multi-device test sketches |
| docs/en/contributing.rst | Comprehensive documentation for creating and running multi-device tests |

