`content/en/synthetics/browser_tests/_index.md` (8 additions & 0 deletions)
@@ -183,9 +183,17 @@ You can customize alert conditions to define the circumstances under which you w
{{< img src="synthetics/browser_tests/alerting_rules.png" alt="Browser test alerting rule" style="width:80%" >}}

#### Alerting rule

* An alert is triggered if any assertion fails for `X` minutes from any `n` of `N` locations. This alerting rule lets you specify how long, and in how many locations, a test needs to fail before a notification is triggered.
* Retry `X` times before a location is marked as failed. This lets you define how many consecutive test failures need to happen for a location to be considered failed. By default, there is a 300 ms wait before retrying a failed test. This interval can be configured with the [API][6], as shown in the sketch after this list.
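
The retry count and wait can be set on the test's `options` through the Synthetics API. The following Python sketch is only an illustration: it assumes the v1 browser-test endpoint (`/api/v1/synthetics/tests/browser/{public_id}`), an `options.retry` object with `count` and `interval` (in milliseconds) fields, a placeholder test ID, and API and application keys in environment variables. Check the API reference before relying on any of these details.

```python
import os
import requests

# Placeholder public ID of an existing browser test; replace with your own.
PUBLIC_ID = "abc-def-ghi"

URL = f"https://api.datadoghq.com/api/v1/synthetics/tests/browser/{PUBLIC_ID}"
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

# Fetch the current test definition so only the retry settings change.
test = requests.get(URL, headers=HEADERS).json()

# Retry up to 2 times, waiting 1000 ms between attempts instead of the default 300 ms.
test.setdefault("options", {})["retry"] = {"count": 2, "interval": 1000}

# Push the updated definition back. Depending on the API version, read-only fields
# returned by the GET (such as monitor metadata) may need to be stripped first.
resp = requests.put(URL, headers=HEADERS, json=test)
resp.raise_for_status()
print(resp.json().get("options", {}).get("retry"))
```

If the retry options are exposed this way, `count` and `interval` correspond to the `X` retries and `Y` ms wait described under fast retry below.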
#### Fast retry

When a test fails, fast retry allows you to retry the test `X` times after `Y` ms before marking it as failed. Customizing the retry interval helps reduce false positives and improves your alerting accuracy.

Since location uptime is computed based on the final test result after retries complete, fast retry intervals directly impact what appears in your total uptime graph. The total uptime is computed based on the configured alert conditions, and notifications are sent based on the total uptime.

### Configure the test monitor

A notification is sent according to the set of alerting conditions. Use this section to define how and what to message your teams.
`content/en/synthetics/browser_tests/test_results.md` (5 additions & 1 deletion)
@@ -36,7 +36,7 @@ In the **Properties** section, you can see the test ID, test creation and edit d
In the **History** section, you can see three graphs:
- The **Global Uptime** graph displays the total uptime of all test locations in a given time interval. The global uptime visualization displays red only if the [alert conditions][20] configured for a test are triggered in the given time interval. Since location uptime is computed based on the final test result after retries complete, [fast retry][24] intervals directly impact what appears in your total uptime graph.
- The **Time-to-interactive by location and device** graph displays the amount of time, in seconds, until a page can be interacted with. For more information about uptime monitoring, see the [Website Uptime Monitoring with SLOs][14] guide.
- The **Test duration by location and device** graph displays the amount of time, in minutes, that each location and device takes to complete the test in a given time interval.
@@ -154,8 +154,11 @@ The step duration represents the amount of time the step takes to execute with t
A test result is considered `FAILED` if it does not satisfy its assertions or if a step failed for another reason. You can troubleshoot failed runs by looking at their screenshots, checking for potential [errors](#errors-and-warnings) at the step level, and looking into [resources][17] and [backend traces](#backend-traces) generated by their steps.
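
As a complement to the UI workflow above, recent results can also be pulled over the API to spot failed runs. The Python sketch below is hedged: it assumes a v1 endpoint of the form `/api/v1/synthetics/tests/browser/{public_id}/results` taking `from_ts` and `to_ts` millisecond timestamps, a placeholder test ID, and a `results` array in the response; verify these against the API reference before using it.

```python
import os
import time
import requests

# Placeholder public ID of the browser test to inspect; replace with your own.
PUBLIC_ID = "abc-def-ghi"

HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

now_ms = int(time.time() * 1000)
params = {
    "from_ts": now_ms - 24 * 60 * 60 * 1000,  # look back over the last 24 hours
    "to_ts": now_ms,
}

resp = requests.get(
    f"https://api.datadoghq.com/api/v1/synthetics/tests/browser/{PUBLIC_ID}/results",
    headers=HEADERS,
    params=params,
)
resp.raise_for_status()

# Print the raw result summaries; each entry carries the run's status, location,
# and timings, which can point you to the failing step.
for result in resp.json().get("results", []):
    print(result)
```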
### Compare screenshots
To help during the investigation, click **Compare Screenshots** to view side-by-side screenshots of the failed result and the last successful execution. The comparison helps you spot any differences that could have caused the test to fail.

{{< img src="synthetics/browser_tests/test_results/compare_screenshots.png" alt="Compare screenshots between your failed and successful runs" style="width:90%;" >}}

**Note**: Comparison is performed between two test runs with the same version, start URL, device, browser, and run type (scheduled, manual trigger, CI/CD). If there is no successful prior run with the same parameters, no comparison is offered.
### Common browser test errors
@@ -205,3 +208,4 @@ Alerts from your Synthetic test monitors appear in the **Events** tab under **Te