Commit 4b0c88b

add fast retries to browser testing (#32694)
1 parent 5071fde commit 4b0c88b

2 files changed: 13 additions, 1 deletion

content/en/synthetics/browser_tests/_index.md

Lines changed: 8 additions & 0 deletions
@@ -183,9 +183,17 @@ You can customize alert conditions to define the circumstances under which you w
  {{< img src="synthetics/browser_tests/alerting_rules.png" alt="Browser test alerting rule" style="width:80%" >}}

+ #### Alerting rule
+
  * An alert is triggered if any assertion fails for `X` minutes from any `n` of `N` locations. This alerting rule lets you specify how long and in how many locations a test needs to fail before a notification is triggered.
  * Retry `X` times before a location is marked as failed. This lets you define how many consecutive test failures must occur before a location is considered failed. By default, there is a 300 ms wait before retrying a failed test. This interval can be configured with the [API][6].

+ #### Fast retry
+
+ When a test fails, fast retry allows you to retry the test `X` times after `Y` ms before marking it as failed. Customizing the retry interval helps reduce false positives and improves your alerting accuracy.
+
+ Since location uptime is computed based on the final test result after retries complete, fast retry intervals directly impact what appears in your total uptime graph. The total uptime is computed based on the configured alert conditions, and notifications are sent based on the total uptime.
+
  ### Configure the test monitor

  A notification is sent according to the set of alerting conditions. Use this section to define how and what to message your teams.
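In the Datadog Synthetics API, the retry count and wait interval described above are set in the test's `options.retry` object. A minimal sketch of the relevant fragment of a test payload, assuming the public API field names (`count` for the number of retries, `interval` in milliseconds) and illustrative values:

```json
{
  "options": {
    "retry": {
      "count": 2,
      "interval": 300
    }
  }
}
```

Here `count` is how many fast retries are attempted per location before that location is marked as failed, and `interval` is the wait between attempts (300 ms by default, per the text above).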

content/en/synthetics/browser_tests/test_results.md

Lines changed: 5 additions & 1 deletion
@@ -36,7 +36,7 @@ In the **Properties** section, you can see the test ID, test creation and edit d

  In the **History** section, you can see three graphs:

- - The **Global Uptime** graph displays the total uptime of all test locations in a given time interval. The global uptime visualization displays red only if the [alert conditions][20] configured for a test are triggered in the given time interval.
+ - The **Global Uptime** graph displays the total uptime of all test locations in a given time interval. The global uptime visualization displays red only if the [alert conditions][20] configured for a test are triggered in the given time interval. Since location uptime is computed based on the final test result after retries complete, [fast retry][24] intervals directly impact what appears in your total uptime graph.
  - The **Time-to-interactive by location and device** graph displays the amount of time, in seconds, until a page can be interacted with. For more information about uptime monitoring, see the [Website Uptime Monitoring with SLOs][14] guide.
  - The **Test duration by location and device** graph displays the amount of time in minutes each location and device takes to complete in a given time interval.

@@ -154,8 +154,11 @@ The step duration represents the amount of time the step takes to execute with t
  A test result is considered `FAILED` if it does not satisfy its assertions or if a step failed for another reason. You can troubleshoot failed runs by looking at their screenshots, checking for potential [errors](#errors-and-warnings) at the step level, and looking into [resources][17] and [backend traces](#backend-traces) generated by their steps.

  ### Compare screenshots
+
  To help during the investigation, click **Compare Screenshots** to receive side-by-side screenshots of the failed result and the last successful execution. The comparison helps you spot any differences that could have caused the test to fail.
+
  {{< img src="synthetics/browser_tests/test_results/compare_screenshots.png" alt="Compare screenshots between your failed and successful runs" style="width:90%;" >}}
+
  **Note**: Comparison is performed between two test runs with the same version, start URL, device, browser, and run type (scheduled, manual trigger, CI/CD). If there is no successful prior run with the same parameters, no comparison is offered.
  ### Common browser test errors

@@ -205,3 +208,4 @@ Alerts from your Synthetic test monitors appear in the **Events** tab under **Te
  [21]: /logs/guide/ease-troubleshooting-with-cross-product-correlation/#leverage-trace-correlation-to-troubleshoot-synthetic-tests
  [22]: /real_user_monitoring/explorer
  [23]: /real_user_monitoring/session_replay
+ [24]: /synthetics/browser_tests/?tab=requestoptions#fast-retry
