diff --git a/docs/environment-variables.md b/docs/environment-variables.md
index fe605386..03a0ca0d 100644
--- a/docs/environment-variables.md
+++ b/docs/environment-variables.md
@@ -15,6 +15,11 @@ OpenObserve is configured using the following environment variables.
| ZO_LOCAL_MODE | true | If local mode is set to true, OpenObserve becomes single node deployment.If it is set to false, it indicates cluster mode deployment which supports multiple nodes with different roles. For local mode one needs to configure SQLite DB, for cluster mode one needs to configure PostgreSQL (recommended) or MySQL. |
| ZO_LOCAL_MODE_STORAGE | disk | Applicable only for local mode. By default, local disk is used as storage. OpenObserve supports both disk and S3 in local mode. |
| ZO_NODE_ROLE | all | Node role assignment. Possible values are ingester, querier, router, compactor, alertmanager, and all. A single node can have multiple roles by specifying them as a comma-separated list. For example, compactor, alertmanager. |
+| ZO_NODE_ROLE_GROUP | "" | Assigns each query-processing node to a specific group. Possible values:
+**interactive**: Handles queries triggered directly by users through the UI.
+**background**: Handles automated or scheduled queries, such as alerts and reports.
+**empty string** (default): Handles all query types.
+In high-load environments, alerts or reports might run large, resource-intensive queries. Assigning dedicated groups prevents such queries from blocking or slowing down real-time user searches. |
| ZO_NODE_HEARTBEAT_TTL | 30 | Time-to-live (TTL) for node heartbeats in seconds. |
| ZO_INSTANCE_NAME | - | In the cluster mode, each node has a instance name. Default is instance hostname. |
| ZO_CLUSTER_COORDINATOR | nats | Defines how nodes in the cluster discover each other. |
@@ -82,7 +87,19 @@ OpenObserve is configured using the following environment variables.
| ZO_FILE_MOVE_FIELDS_LIMIT | 2000 | Field count threshold per WAL file. If exceeded, merging is skipped on the ingester. |
| ZO_MEM_TABLE_MAX_SIZE | 0 | Total size limit of all memtables. Multiple memtables exist for different organizations and stream types. Each memtable cannot exceed ZO_MAX_FILE_SIZE_IN_MEMORY, and the combined size cannot exceed this limit. If exceeded, the system returns a MemoryTableOverflowError to prevent out-of-memory conditions. Default is 50 percent of total memory. |
| ZO_MEM_PERSIST_INTERVAL | 5 | Interval in seconds at which immutable memtables are persisted from memory to disk. Default is 5 seconds. |
-
+| ZO_FEATURE_SHARED_MEMTABLE_ENABLED | false | When set to true, enables the shared memtable feature: multiple organizations share the same in-memory table instead of each organization creating its own. This reduces memory use when many organizations ingest data at the same time. The feature also works with older non-shared write-ahead log (WAL) files. |
+| ZO_MEM_TABLE_BUCKET_NUM | 1 | Controls how many in-memory tables OpenObserve creates. The behavior depends on whether the shared memtable feature is enabled.
+**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is true (shared memtable enabled)**:
+
+OpenObserve creates the specified number of shared in-memory tables that all organizations use together.
+**A higher value**: OpenObserve creates more shared tables, and each table holds data from fewer organizations. This can make writes faster because each table handles less data, but it uses more memory.
+**A lower value**: OpenObserve creates fewer shared tables, and each table holds data from more organizations. This saves memory but can make writes slightly slower when many organizations send data at the same time.
+
+**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is false (shared memtable disabled)**:
+
+Each organization creates its own set of in-memory tables based on the ZO_MEM_TABLE_BUCKET_NUM value.
+
+For example, if ZO_MEM_TABLE_BUCKET_NUM is set to 4, each organization will create 4 separate in-memory tables.
+This is particularly useful when you have only one organization, because creating multiple in-memory tables for that single organization can improve ingestion performance. |
+
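+
+For example, a single-organization deployment that relies on non-shared memtables might combine these settings as follows. The values are illustrative, not recommendations:
+
+```
+ZO_FEATURE_SHARED_MEMTABLE_ENABLED=false
+ZO_MEM_TABLE_BUCKET_NUM=4
+ZO_MEM_PERSIST_INTERVAL=5
+```
+
+With this sketch, the single organization writes into 4 in-memory tables, and immutable memtables are persisted to disk every 5 seconds.
+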
## Indexing
| Environment Variable | Default Value | Description |
@@ -573,9 +590,9 @@ OpenObserve is configured using the following environment variables.
| Environment Variable | Default Value | Description |
| -------------------------------- | --------- | ------------------------------------------------------- |
| ZO_QUICK_MODE_ENABLED | false | Indicates if quick mode is enabled. |
-| ZO_QUICK_MODE_NUM_FIELDS | 500 | The number of fields to consider for quick mode. |
| ZO_QUICK_MODE_STRATEGY | | Possible values are `first`, `last`, `both`. |
-| ZO_QUICK_MODE_FORCE_ENABLED | true | |
+| ZO_QUICK_MODE_FORCE_ENABLED | true | Enables automatic activation of Quick Mode from the backend. When set to true, OpenObserve applies Quick Mode automatically if the number of fields in a stream exceeds the limit defined by `ZO_QUICK_MODE_NUM_FIELDS`, even when the Quick Mode toggle in the UI is turned off.|
+| ZO_QUICK_MODE_NUM_FIELDS | 500 | The field-count threshold beyond which Quick Mode is force-enabled when ZO_QUICK_MODE_FORCE_ENABLED is true. |
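+
+For example, the following combination (values illustrative) force-enables Quick Mode on any stream with more than 500 fields, even when the UI toggle is off:
+
+```
+ZO_QUICK_MODE_ENABLED=true
+ZO_QUICK_MODE_FORCE_ENABLED=true
+ZO_QUICK_MODE_NUM_FIELDS=500
+```
+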
## Miscellaneous
| Environment Variable | Default Value | Description |
diff --git a/docs/images/add-new-fields.png b/docs/images/add-new-fields.png
new file mode 100644
index 00000000..28016b3e
Binary files /dev/null and b/docs/images/add-new-fields.png differ
diff --git a/docs/images/analyze-results.png b/docs/images/analyze-results.png
new file mode 100644
index 00000000..942c1f1e
Binary files /dev/null and b/docs/images/analyze-results.png differ
diff --git a/docs/images/built-in-patterns-import.png b/docs/images/built-in-patterns-import.png
new file mode 100644
index 00000000..eef795ce
Binary files /dev/null and b/docs/images/built-in-patterns-import.png differ
diff --git a/docs/images/cached-ratio-dashboard.png b/docs/images/cached-ratio-dashboard.png
new file mode 100644
index 00000000..01d67b71
Binary files /dev/null and b/docs/images/cached-ratio-dashboard.png differ
diff --git a/docs/images/chart-align-options.png b/docs/images/chart-align-options.png
new file mode 100644
index 00000000..7ac01095
Binary files /dev/null and b/docs/images/chart-align-options.png differ
diff --git a/docs/images/dashboard-query-example.png b/docs/images/dashboard-query-example.png
new file mode 100644
index 00000000..6db0bdd6
Binary files /dev/null and b/docs/images/dashboard-query-example.png differ
diff --git a/docs/images/delete-fields.png b/docs/images/delete-fields.png
new file mode 100644
index 00000000..d29842a9
Binary files /dev/null and b/docs/images/delete-fields.png differ
diff --git a/docs/images/example-1-query-recommendations.png b/docs/images/example-1-query-recommendations.png
index 26a945ed..7cd454cf 100644
Binary files a/docs/images/example-1-query-recommendations.png and b/docs/images/example-1-query-recommendations.png differ
diff --git a/docs/images/example-2-query-recommendations.png b/docs/images/example-2-query-recommendations.png
index c9c7e707..da56e9b1 100644
Binary files a/docs/images/example-2-query-recommendations.png and b/docs/images/example-2-query-recommendations.png differ
diff --git a/docs/images/execution-tree.png b/docs/images/execution-tree.png
new file mode 100644
index 00000000..7394f708
Binary files /dev/null and b/docs/images/execution-tree.png differ
diff --git a/docs/images/explain-query.png b/docs/images/explain-query.png
new file mode 100644
index 00000000..4e1419bc
Binary files /dev/null and b/docs/images/explain-query.png differ
diff --git a/docs/images/field-name-data-type.png b/docs/images/field-name-data-type.png
new file mode 100644
index 00000000..14221924
Binary files /dev/null and b/docs/images/field-name-data-type.png differ
diff --git a/docs/images/filter-by-tags.png b/docs/images/filter-by-tags.png
new file mode 100644
index 00000000..f3dc66a0
Binary files /dev/null and b/docs/images/filter-by-tags.png differ
diff --git a/docs/images/gridlines-off.png b/docs/images/gridlines-off.png
new file mode 100644
index 00000000..af42d659
Binary files /dev/null and b/docs/images/gridlines-off.png differ
diff --git a/docs/images/import-selected-patterns.png b/docs/images/import-selected-patterns.png
new file mode 100644
index 00000000..4f0b442a
Binary files /dev/null and b/docs/images/import-selected-patterns.png differ
diff --git a/docs/images/legend-height-config.png b/docs/images/legend-height-config.png
new file mode 100644
index 00000000..0be6fddc
Binary files /dev/null and b/docs/images/legend-height-config.png differ
diff --git a/docs/images/legend-position-bottom.png b/docs/images/legend-position-bottom.png
new file mode 100644
index 00000000..35bb6a30
Binary files /dev/null and b/docs/images/legend-position-bottom.png differ
diff --git a/docs/images/legend-position-right.png b/docs/images/legend-position-right.png
new file mode 100644
index 00000000..77431296
Binary files /dev/null and b/docs/images/legend-position-right.png differ
diff --git a/docs/images/legend-type-plain.png b/docs/images/legend-type-plain.png
new file mode 100644
index 00000000..cd761a5b
Binary files /dev/null and b/docs/images/legend-type-plain.png differ
diff --git a/docs/images/legend-type-scroll.png b/docs/images/legend-type-scroll.png
new file mode 100644
index 00000000..59714755
Binary files /dev/null and b/docs/images/legend-type-scroll.png differ
diff --git a/docs/images/legend-width-config.png b/docs/images/legend-width-config.png
new file mode 100644
index 00000000..87b65008
Binary files /dev/null and b/docs/images/legend-width-config.png differ
diff --git a/docs/images/logical-plan.png b/docs/images/logical-plan.png
new file mode 100644
index 00000000..3c623b4e
Binary files /dev/null and b/docs/images/logical-plan.png differ
diff --git a/docs/images/manage-patterns.png b/docs/images/manage-patterns.png
new file mode 100644
index 00000000..55cc0761
Binary files /dev/null and b/docs/images/manage-patterns.png differ
diff --git a/docs/images/pattern-detail-view.png b/docs/images/pattern-detail-view.png
new file mode 100644
index 00000000..8df7e04d
Binary files /dev/null and b/docs/images/pattern-detail-view.png differ
diff --git a/docs/images/physical-plan.png b/docs/images/physical-plan.png
new file mode 100644
index 00000000..17c54641
Binary files /dev/null and b/docs/images/physical-plan.png differ
diff --git a/docs/images/search-patterns.png b/docs/images/search-patterns.png
new file mode 100644
index 00000000..ef791793
Binary files /dev/null and b/docs/images/search-patterns.png differ
diff --git a/docs/images/select-patterns.png b/docs/images/select-patterns.png
new file mode 100644
index 00000000..abb0c8dc
Binary files /dev/null and b/docs/images/select-patterns.png differ
diff --git a/docs/images/select-query-recommendations.png b/docs/images/select-query-recommendations.png
index b273c9e4..ea165406 100644
Binary files a/docs/images/select-query-recommendations.png and b/docs/images/select-query-recommendations.png differ
diff --git a/docs/images/update-settings.png b/docs/images/update-settings.png
new file mode 100644
index 00000000..bcf6d005
Binary files /dev/null and b/docs/images/update-settings.png differ
diff --git a/docs/images/use-query-recommendations.png b/docs/images/use-query-recommendations.png
index a63d0b79..827bfb58 100644
Binary files a/docs/images/use-query-recommendations.png and b/docs/images/use-query-recommendations.png differ
diff --git a/docs/user-guide/.pages b/docs/user-guide/.pages
index acb26453..88829c92 100644
--- a/docs/user-guide/.pages
+++ b/docs/user-guide/.pages
@@ -1,6 +1,6 @@
nav:
- Concepts: concepts.md
- - Log Search: logs
+ - Logs: logs
- Metrics: metrics
- Streams: streams
- Ingestion: ingestion
diff --git a/docs/user-guide/dashboards/.pages b/docs/user-guide/dashboards/.pages
index 5f1199be..bfd99d13 100644
--- a/docs/user-guide/dashboards/.pages
+++ b/docs/user-guide/dashboards/.pages
@@ -9,6 +9,7 @@ nav:
- Filters: filters
- Comparison Against in Dashboards: comparison-against-in-dashboards.md
- Custom Charts: custom-charts
+ - Histogram Caching: histogram-caching.md
diff --git a/docs/user-guide/dashboards/config/legend-and-gridline.md b/docs/user-guide/dashboards/config/legend-and-gridline.md
new file mode 100644
index 00000000..6187be4a
--- /dev/null
+++ b/docs/user-guide/dashboards/config/legend-and-gridline.md
@@ -0,0 +1,170 @@
+This document describes the legend and gridline configuration options available in OpenObserve dashboard panels.
+
+## Overview
+Dashboard panels include configuration options for controlling legend display and gridline visibility. These options help optimize chart readability and space usage.
+
+## Legend configuration options
+The following options are available in the panel configuration sidebar under the legend section.
+
+### Legends position
+
+Controls the placement of the legend relative to the chart.
+
+| Option | Description |
+|--------|-------------|
+| Auto | Places the legend at the bottom of the chart (default) |
+| Right | Places the legend on the right side of the chart |
+| Bottom | Places the legend below the chart |
+
+**Example: Legend positioned on the right**
+
+
+
+When set to **Right**, the legend appears vertically alongside the chart. This configuration is useful for charts with many data series.
+
+**Example: Legend positioned at the bottom**
+
+
+
+When set to **Bottom**, the legend appears horizontally below the chart. This is the default position and works well for most chart types.
+
+### Legends type
+
+Controls the legend display behavior.
+
+| Option | Description |
+|--------|-------------|
+| Auto | Automatically selects scroll behavior (default) |
+| Plain | Displays all legend items in a static, non-scrollable layout |
+| Scroll | Enables scrolling through legend items when space is limited |
+
+**Plain behavior:**
+
+- All legend items are visible simultaneously
+- Legend height adjusts to fit all items
+- No pagination controls
+
+**Example: Plain legend type**
+
+
+
+The plain type displays all legend items in a fixed layout. All series are visible at once without scrolling.
+
+**Scroll behavior:**
+
+- Legend items can be scrolled
+- Pagination indicator shows current page
+- Navigation arrows allow moving between pages
+- Conserves vertical space
+
+**Example: Scroll legend type**
+
+
+
+The scroll type includes navigation arrows and a page indicator (shown as "1/2" in the bottom right). Users can navigate between pages of legend items.
+
+### Legend height
+
+- **Input:** Enter a numeric value.
+- **Units:** Choose between px (pixels) or % (percentage).
+- **Availability:** Enabled only when Legends Position is set to Auto or Bottom and Legend Type is set to Plain.
+- **Purpose:** Overrides the default height of the legend area.
+
+**Example: Legend height configuration**
+
+
+
+
+### Legend width
+
+- **Input:** Enter a numeric value.
+- **Units:** Choose between px (pixels) or % (percentage).
+- **Availability:** Enabled only when Legends Position is set to Right and Legend Type is set to Plain.
+- **Purpose:** Overrides the default width of the legend area.
+
+**Example: Legend width configuration**
+
+
+
+
+### Chart align
+
+Controls the horizontal alignment of the chart within its container.
+
+| Option | Description |
+|--------|-------------|
+| Auto | Centers the chart (default) |
+| Left | Aligns the chart to the left |
+| Center | Centers the chart |
+
+**Note:** This option is only available for pie and donut charts when **Legends Position** is set to **Right**.
+
+**Example: Chart align options for pie chart**
+
+
+
+When **Legends Position** is set to **Right** and **Legend Type** is set to **Plain**, you can select how the chart is horizontally aligned within the remaining space.
+
+
+## Gridline configuration
+
+### Show gridlines
+
+A toggle switch that controls gridline visibility on the chart.
+
+- **Type:** Boolean toggle (on/off)
+- **Default:** On
+- **Effect:** When enabled, displays horizontal and vertical reference lines on the chart.
+
+**Example: Gridlines disabled**
+
+
+
+When gridlines are disabled, the chart displays without reference lines, providing a cleaner appearance.
+
+
+## Chart type support
+
+### Legend options
+
+Legend configuration is available for:
+
+- Line chart
+- Area chart (stacked and unstacked)
+- Bar chart (vertical and horizontal)
+- Scatter plots
+- Stacked chart (vertical and horizontal)
+- Pie chart
+- Donut chart
+
+Legend configuration is not available for:
+
+- Table chart
+- Heatmaps
+- Metric chart
+- Gauge chart
+- Geomap chart
+- Sankey chart
+- Map chart
+
+### Gridlines option
+
+The show gridlines option is available for:
+
+- Line chart
+- Area chart (stacked and unstacked)
+- Bar chart (vertical and horizontal)
+- Scatter plots
+- Stacked chart (vertical and horizontal)
+
+The show gridlines option is not available for:
+
+- Pie chart
+- Donut chart
+- Heatmaps
+- Table chart
+- Metric chart
+- Gauge chart
+- Geomap chart
+- Sankey chart
+- Map chart
diff --git a/docs/user-guide/dashboards/histogram-caching.md b/docs/user-guide/dashboards/histogram-caching.md
new file mode 100644
index 00000000..a21bcb50
--- /dev/null
+++ b/docs/user-guide/dashboards/histogram-caching.md
@@ -0,0 +1,343 @@
+---
+title: Histogram Caching in Dashboards
+description: Learn how histogram caching in OpenObserve dashboards reuses results for overlapping time ranges and improves query performance.
+---
+
+## Overview
+When dashboard panels run queries over relative time ranges such as `Past 15 minutes` or `Past 1 hour`, the queried data often overlaps between refreshes.
+To avoid scanning the same data repeatedly, OpenObserve uses **histogram caching**.
+
+### Why caching is needed
+Most dashboard queries overlap in time when refreshed.
+
+
+**For example**:
+
+- First query: `07:00` to `07:15`
+- Next query: `07:01` to `07:16`
+
+The two ranges share 14 minutes of the same data.
+Only the one new minute from `07:15` to `07:16` needs fresh scanning.
+Histogram caching reuses data for that overlapping part to prevent redundant processing.
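+
+As a sketch, the two refreshes can be written with absolute time filters; the timestamp literal format here is illustrative:
+
+```sql
+-- First refresh: scans the full 07:00-07:15 range and populates the cache.
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01T07:00:00' AND '2025-10-01T07:15:00'
+GROUP BY x_axis_1;
+
+-- Next refresh: 07:01-07:15 overlaps and is served from cache;
+-- only the new 07:15-07:16 minute is freshly scanned.
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01T07:01:00' AND '2025-10-01T07:16:00'
+GROUP BY x_axis_1;
+```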
+
+### How caching works
+Histogram caching operates automatically for dashboard panels that visualize time-series data.
+When a new query overlaps with the previous time window, OpenObserve:
+
+- Fetches the cached portion of results for the shared range.
+- Scans and appends only the new slice of time.
+- Updates metadata such as start and end timestamps.
+
+### Example query
+```sql
+SELECT histogram(_timestamp) AS x_axis_1,
+ count(k8s_node_name) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+
+In this query:
+
+- `x_axis_1` represents time intervals on the X-axis, created by the `histogram()` function.
+- `y_axis_1` represents the count of log entries for each interval.
+
+When run repeatedly over relative time ranges, only the new data beyond the last cached timestamp is scanned, while older intervals are reused.
+
+
+## Inspecting caching behavior
+You can observe histogram caching in your browser’s developer tools by viewing the query metadata and response stream.
+
+### View query response in the browser
+
+Open the dashboard panel that already contains a runnable query.
+
+1. Right-click anywhere on the page and select **Inspect**.
+2. Select the **Network** tab.
+3. Keep the **Network** tab open and click **Run query** in the OpenObserve dashboard.
+4. Select the latest request.
+5. Open the **Response** tab to see the event stream returned by the server.
+
+
+### Key metadata fields
+| Field name | Meaning | Typical observation |
+| --------------------- | ---------------------------------------- | ------------------------------------------------- |
+| `cached_ratio` | Percentage of results fetched from cache | `0` for cold query, `100` for full cache reuse |
+| `order_by_metadata` | Field and direction used for sorting | `[["x_axis_1","asc"]]` or `[["y_axis_1","desc"]]` |
+| `scan_records` | Number of records read from disk | Lower on cache hits |
+| `took`, `took_detail` | Total query time and breakdown | Shorter when cache reuse occurs |
+| `is_histogram_eligible` | Eligibility for histogram caching | True for time-based histogram queries |
+| `search_response_hits` | Actual rows returned | Sorted consistently with order metadata |
+| `progress` | Query progress percentage | Moves from `0` to `100` |
+
+
+### Understanding cache ratios
+
+`cached_ratio` indicates how much of a query’s output is reused.
+
+- `0`: first run (cold cache)
+- `100`: complete cache reuse
+- Partial values (`40`–`70`): partial reuse for extended ranges
+- Slightly less than `100`: recent data freshly scanned for accuracy
+
+
+
+## Histogram caching scenarios
+
+The following examples describe how histogram caching behaves across various query types and ordering patterns.
+Each scenario includes the SQL query and what happens when executed in OpenObserve dashboards.
+
+### Basic histogram with COUNT ORDER BY DESC
+
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(_timestamp) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+
+**What happens**: Results are cached with data ordered by count descending. Later queries reuse the cached data while preserving order.
+
+### Basic histogram with COUNT ORDER BY ASC
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(_timestamp) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY y_axis_1 ASC
+```
+
+**What happens**: Results are cached with data ordered by count ascending. A separate cache entry exists for ascending order.
+
+### Histogram with SUM ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket, SUM(bytes) AS total_bytes
+FROM "default"
+GROUP BY time_bucket
+ORDER BY total_bytes DESC
+```
+
+**What happens**: Results are cached with records ordered by the sum of bytes. Cache preserves both ordering field and direction.
+
+### Histogram with AVG ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket, AVG(response_time) AS avg_response
+FROM "default"
+GROUP BY time_bucket
+ORDER BY avg_response DESC
+```
+
+**What happens**: Results are cached with data ordered by average response time descending. Cached order metadata ensures correct sorting.
+
+### Histogram with multiple aggregations, non-timestamp ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket,
+ count(*) AS event_count,
+ SUM(bytes) AS total_bytes
+FROM "default"
+GROUP BY time_bucket
+ORDER BY total_bytes DESC
+```
+
+**What happens**: Results are cached with records ordered by total bytes. Both aggregations are stored; ordering follows total_bytes.
+
+### Histogram with MAX ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket, MAX(value) AS max_val
+FROM "default"
+GROUP BY time_bucket
+ORDER BY max_val DESC
+```
+
+**What happens**: Results are cached with data ordered by maximum value. Order metadata is preserved for consistent sorting.
+
+### Multiple histogram columns with non-timestamp ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket,
+ status_code,
+ count(*) AS request_count
+FROM "default"
+GROUP BY time_bucket, status_code
+ORDER BY request_count DESC
+```
+
+**What happens**: Results are cached with grouping on both time_bucket and status_code. Cache preserves correct grouping and order.
+
+### Standard histogram with timestamp ORDER BY (default ascending)
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(_timestamp) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY x_axis_1 ASC
+```
+
+**What happens**: Results are cached in chronological order. Fast first-and-last logic improves reuse.
+
+### Standard histogram with timestamp ORDER BY DESC
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(_timestamp) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY x_axis_1 DESC
+```
+
+**What happens**: Results are cached in reverse chronological order. Cache reuse follows the same efficiency path.
+
+### Histogram with no ORDER BY (default time order)
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(_timestamp) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+```
+
+**What happens**: Results are cached in default ascending timestamp order. Overlapping time windows reuse cache data.
+
+### Histogram with explicit _timestamp ORDER BY
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+GROUP BY time_bucket
+ORDER BY _timestamp DESC
+```
+
+**What happens**: Results are cached using timestamp ordering identical to the default histogram order.
+
+### Non-histogram aggregate query
+```sql
+SELECT count(*) AS total_count, SUM(bytes) AS total_bytes
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01' AND '2025-10-06'
+```
+
+**What happens**: Cached using standard mechanisms, not histogram caching.
+
+### Raw log query (no aggregation)
+```sql
+SELECT * FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01' AND '2025-10-06'
+ORDER BY _timestamp DESC
+LIMIT 1000
+```
+
+**What happens**: Cached normally through standard query caching, not histogram caching.
+
+### Non-histogram with non-timestamp ORDER BY
+```sql
+SELECT method, count(*) AS cnt
+FROM "default"
+GROUP BY method
+ORDER BY cnt DESC
+```
+
+**What happens**: Not eligible for histogram caching since no histogram function is used.
+
+### Histogram with mixed ORDER BY (timestamp first, then count)
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+GROUP BY time_bucket
+ORDER BY time_bucket ASC, cnt DESC
+```
+
+**What happens**: Treated as timestamp-ordered. Cache reuse continues for overlapping windows.
+
+### Histogram with empty result set
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+WHERE log_level = 'NONEXISTENT'
+GROUP BY time_bucket
+ORDER BY cnt DESC
+```
+
+**What happens**: Cache handles empty results correctly and stores metadata for the query range.
+
+### Very large time-range histogram
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+WHERE _timestamp BETWEEN '2025-01-01' AND '2025-12-31'
+GROUP BY time_bucket
+ORDER BY cnt DESC
+```
+**What happens**: Cache stores complete range metadata for long durations. Later queries reuse relevant parts efficiently.
+
+### Histogram with recent data (within cache delay)
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+WHERE _timestamp >= now() - INTERVAL '10 minutes'
+GROUP BY time_bucket
+ORDER BY cnt DESC
+```
+**What happens**: Recent buckets within the cache-delay window are freshly scanned, while older buckets are reused from cache.
+
+### Histogram with multiple non-timestamp ORDER BY columns
+```sql
+SELECT histogram(_timestamp) AS time_bucket,
+ status_code,
+ count(*) AS cnt,
+ SUM(bytes) AS total_bytes
+FROM "default"
+GROUP BY time_bucket, status_code
+ORDER BY cnt DESC, total_bytes DESC
+```
+**What happens**: Cache preserves ordering across both fields. Results are reused across refreshes with identical ordering.
+
+### Large dataset histogram (test scan performance)
+```sql
+SELECT histogram(_timestamp) AS time_bucket, count(*) AS cnt
+FROM "default"
+WHERE _timestamp BETWEEN '2025-01-01' AND '2025-10-31'
+GROUP BY time_bucket
+ORDER BY cnt DESC
+```
+**What happens**: Cache efficiently handles large scans. On later runs, reused buckets prevent redundant disk scans.
+
+### High-cardinality histogram (many buckets)
+```sql
+SELECT histogram(_timestamp, '1m') AS time_bucket, count(*) AS cnt
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01' AND '2025-10-06'
+GROUP BY time_bucket
+ORDER BY cnt DESC
+```
+**What happens**: Minute-level histograms are cached after the first scan. Future runs reuse cached buckets, minimizing scan time.
+
+### First query (cold cache)
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+
+**What happens**: This initial query builds the cache. `result_cache_ratio` is `0`, and cache files are created with correct timestamps.
+
+### Second query (warm cache)
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+**What happens**: Cache is reused. `result_cache_ratio` rises to `100`. Only new data is scanned, improving performance.
+
+### Query with overlapping time range
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01' AND '2025-10-05'
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+**What happens**: Cache reuses overlapping results. Only the subset is returned with no duplicates.
+
+### Query with extended time range
+```sql
+SELECT histogram(_timestamp) AS x_axis_1, count(*) AS y_axis_1
+FROM "default"
+WHERE _timestamp BETWEEN '2025-10-01' AND '2025-10-10'
+GROUP BY x_axis_1
+ORDER BY y_axis_1 DESC
+```
+**What happens**: Cache partially reuses existing data and merges new results for the extended range. Duplicates are avoided, and ordering is preserved.
\ No newline at end of file
diff --git a/docs/user-guide/dashboards/panels/partial-data-error.png b/docs/user-guide/dashboards/panels/partial-data-error.png
new file mode 100644
index 00000000..ec664d9f
Binary files /dev/null and b/docs/user-guide/dashboards/panels/partial-data-error.png differ
diff --git a/docs/user-guide/dashboards/panels/troubleshooting.md b/docs/user-guide/dashboards/panels/troubleshooting.md
index e8868ae3..12d45c42 100644
--- a/docs/user-guide/dashboards/panels/troubleshooting.md
+++ b/docs/user-guide/dashboards/panels/troubleshooting.md
@@ -1,49 +1,67 @@
This page provides instructions on the warning or error icons displayed on the panel toolbar, explains how to identify and troubleshoot these warnings.
+??? "1. Error: When the query duration is modified due to the query range is restricted"
+    ## Error: The query duration is modified because the query range is restricted
+ 
-## Error: When the query duration is modified due to the query range is restricted
-
+ ### Cause
+ This occurs when the time range of your query exceeds the limit set at the **Max Query Range (in hours)** field in the stream settings.
+ ### Resolution
+ 1. Go to the **Streams** page.
+ 2. Find the stream you are working with and select **Stream Details** under the **Actions** column.
+ 
-### Cause
-This occurs when the time range of your query exceeds the limit set at the **Max Query Range (in hours)** field in the stream settings.
-### Resolution
-1. Go to the **Streams** page.
-2. Find the stream you are working with and select **Stream Details** under the **Actions** column.
-
+ 3. In the stream settings, update the **Max Query Range (in hours)** to a value that supports your query duration.
-3. In the stream settings, update the **Max Query Range (in hours)** to a value that supports your query duration.
+??? "Warning: The data shown is cached and is different from the selected time range"
-## Warning: The data shown is cached and is different from the selected time range
+ ## Warning: The data shown is cached and is different from the selected time range
-
+ 
-### Cause
-This warning appears when the selected time range has changed, but the panel did not automatically refresh.
+ ### Cause
+ This warning appears when the selected time range has changed, but the panel did not automatically refresh.
-### Resolution
-Click the **Refresh** button in the panel toolbar (the circular arrow icon) to reload the panel with data that matches the current time range.
+ ### Resolution
+ Click the **Refresh** button in the panel toolbar (the circular arrow icon) to reload the panel with data that matches the current time range.
-## Warning: Limiting the displayed series to ensure optimal performance
+??? "Warning: Limiting the displayed series to ensure optimal performance"
+    ## Warning: Limiting the displayed series to ensure optimal performance
-
+
-### Cause
-When the `ZO_MAX_DASHBOARD_SERIES` variable is set, the panel will display only the specified number of series. A warning message will appear on the panel indicating that the displayed data has been limited for optimal performance.
+    ### Cause
+    When the `ZO_MAX_DASHBOARD_SERIES` variable is set, the panel will display only the specified number of series. A warning message will appear on the panel indicating that the displayed data has been limited for optimal performance.
-!!! Note
- - If a panel includes multiple Y-axes, the limit will apply **individually per Y-axis**.
- - The **Compare Against** feature will not count toward the series limit. However, if enabled, it may not fully display its comparison results due to this cap. The exact behavior is subject to confirmation.
+    !!! Note
+        - If a panel includes multiple Y-axes, the limit will apply **individually per Y-axis**.
+        - The **Compare Against** feature will not count toward the series limit. However, if enabled, it may not fully display its comparison results due to this cap. The exact behavior is subject to confirmation.
-### Resolution:
+    ### Resolution
-Update the following environment variable:
-```
-ZO_MAX_DASHBOARD_SERIES=
-```
-Replace with the maximum number of time series you want each panel to display.
+    Update the following environment variable:
+    ```
+    ZO_MAX_DASHBOARD_SERIES=<number>
+    ```
+    Replace `<number>` with the maximum number of time series you want each panel to display.
-**Example**
-If you set ZO_MAX_DASHBOARD_SERIES=50:
+    **Example**
+    If you set `ZO_MAX_DASHBOARD_SERIES=50`:
-- A panel with a single Y-axis will display up to 50 series.
-- A panel with two Y-axes will display up to 50 series on each axis.
\ No newline at end of file
+    - A panel with a single Y-axis will display up to 50 series.
+    - A panel with two Y-axes will display up to 50 series on each axis.
+
+??? "Warning: The data shown is incomplete because the loading was interrupted"
+ ## Warning: The data shown is incomplete because the loading was interrupted
+ 
+
+ ### Cause
+ This warning appears when the query execution was interrupted or cancelled. It results in the panel displaying partial or no data while indicating that the data is not fully loaded.
+
+ Typical causes include:
+
+ - Query cancellation due to user navigation or timeout
+ - Backend failed to send the complete result set
+
+ ### Resolution
+    Click the **Refresh** button in the panel toolbar to rerun the query. This attempts to fetch the complete data and remove the warning.
diff --git a/docs/user-guide/logs/.pages b/docs/user-guide/logs/.pages
index 353f5684..c0357db4 100644
--- a/docs/user-guide/logs/.pages
+++ b/docs/user-guide/logs/.pages
@@ -3,4 +3,5 @@ nav:
- Logs in OpenObserve: logs.md
- Search Around: search-around.md
- Quick Mode and Interesting Fields: quickmode.md
+ - Explain and Analyze Query: explain-analyze-query.md
diff --git a/docs/user-guide/logs/explain-analyze-query.md b/docs/user-guide/logs/explain-analyze-query.md
new file mode 100644
index 00000000..f98c915c
--- /dev/null
+++ b/docs/user-guide/logs/explain-analyze-query.md
@@ -0,0 +1,175 @@
+---
+title: Explain and Analyze Query in OpenObserve
+description: Learn how to use Explain Query and Analyze Query to view execution plans, understand query performance, and identify bottlenecks in OpenObserve.
+---
+This document explains how to use the Explain Query and Analyze Query features in OpenObserve to view and understand your SQL query execution plans.
+
+## Overview
+OpenObserve provides Explain Query and Analyze Query tools to help users understand how SQL queries are processed internally. These options display the logical and physical query plans generated by OpenObserve's query engine and show how data flows through different execution stages. The Analyze view additionally provides execution time, row counts, and memory usage for each operation.
+
+??? "How to access Explain Query"
+ ## How to access Explain Query
+
+ 1. From the left navigation panel, click **Logs**.
+ 2. In the SQL editor at the top, write your SQL query.
+ 3. Select the time range in the time range selector.
+ 4. Click the **Run query** button to execute your query.
+ 5. After the query completes and results are displayed, click the **More Actions menu** (≡) at the top-right corner.
+
+ 
+ 6. Select **Explain Query** from the dropdown menu.
+    7. A **Query Plan** window opens. The left panel shows your original SQL query for reference. The right panel shows two tabs: **Logical Plan** and **Physical Plan**.
+ 
+
+
+ 
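+
+    For example, you can run a simple aggregation query like the following before opening **Explain Query**. The stream and field names used here are illustrative:
+
+    ```sql
+    SELECT log_level, COUNT(*) AS total
+    FROM "default"
+    GROUP BY log_level
+    ```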
+
+??? "How to access Analyze Query"
+ ## How to access Analyze Query
+
+ 1. First, open the Explain Query page following the steps above.
+    2. Click the **Analyze** button in the top-right corner of the Query Plan window.
+    3. The **Analyze Results** section appears, showing:
+ 
+
+ - **Execution Summary**: Overall performance statistics at the top
+ - **Execution Tree**: Detailed per-operator performance data below
+
+
+## Understanding query plans
+
+### How to read the execution tree
+
+
+The execution tree displays operators from top to bottom on your screen, but you should **read it from bottom to top** to understand the flow of execution.
+
+!!! note "Why bottom to top?"
+
+ - The **bottom** of the tree shows where execution **starts**.
+ - The **top** of the tree shows where execution **ends**.
+ - Data flows **upward** through each operator.
+
+**Visual guide:**
+
+```
+┌─────────────────────────────────┐
+│ SortPreservingMergeExec │ ← Final results
+└─────────────────────────────────┘
+ ▲
+ │
+ │
+┌─────────────────────────────────┐
+│ ProjectionExec │
+└─────────────────────────────────┘
+ ▲
+ │
+┌─────────────────────────────────┐
+│ FilterExec │
+└─────────────────────────────────┘
+ ▲
+ │
+┌─────────────────────────────────┐
+│ DataSourceExec │ ← Starts here
+└─────────────────────────────────┘
+```
+
+!!! note "Tree navigation"
+
+ - Click arrow icons (▼ ▶) to expand or collapse sections.
+ - Indentation shows parent-child relationships.
+
+> **Note:** For detailed technical explanations about operators and query execution, refer to:
+> - [DataFusion EXPLAIN documentation](https://datafusion.apache.org/user-guide/sql/explain.html)
+> - [Reading Explain Plans guide](https://datafusion.apache.org/user-guide/explain-usage.html)
+
+---
+
+## Explain Query
+
+The Explain Query option shows how OpenObserve interprets and executes a SQL query. It displays two views:
+
+- Logical Plan
+- Physical Plan
+
+### Logical Plan
+
+The Logical Plan shows the sequence of operations your query follows. It represents the high-level structure of your query.
+
+
+!!! note "Common operators you will see:"
+
+ - **Projection**: Related to columns selected in the `SELECT` clause
+ - **Aggregate**: Related to GROUP BY and aggregate functions (`SUM`, `COUNT`, `AVG`, etc.)
+ - **Filter**: Related to `WHERE` clause conditions
+ - **TableScan**: Related to the data source in the `FROM` clause
+    - **Sort**: Related to the `ORDER BY` clause
+    - **Limit**: Related to the `LIMIT` clause
+
+!!! note "You can use the logical plan to:"
+
+ - Verify that the operations match your SQL query structure.
+ - Confirm that your dataset and time range are being applied.
+
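+As an illustration, a query such as the following (the stream and field names are hypothetical):
+
+```sql
+SELECT kubernetes_namespace_name, COUNT(*) AS error_count
+FROM "default"
+WHERE log_level = 'error'
+GROUP BY kubernetes_namespace_name
+ORDER BY error_count DESC
+LIMIT 10
+```
+
+might produce a logical plan similar to this sketch. The exact operators and their order depend on the engine version and the optimizations applied:
+
+```
+Limit: skip=0, fetch=10
+  Sort: error_count DESC
+    Projection: kubernetes_namespace_name, count(*) AS error_count
+      Aggregate: groupBy=[[kubernetes_namespace_name]], aggr=[[count(*)]]
+        Filter: log_level = Utf8("error")
+          TableScan: default
+```
+
+Reading from the bottom up, the plan scans the stream, filters rows, aggregates per group, and only then sorts and limits the output, matching the clauses in the SQL query.
+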
+### Physical Plan
+
+The Physical Plan shows how OpenObserve executes your query, including the specific execution operators used.
+
+
+!!! note "Common operators you will see:"
+
+ - **DataSourceExec**: Reads data from storage
+ - **RemoteScanExec**: Reads data from distributed partitions or remote nodes
+ - **FilterExec**: Applies filtering operations
+ - **ProjectionExec**: Handles column selection and expression computation
+ - **AggregateExec**: Performs aggregation operations
+ - May show `mode=Partial` or `mode=FinalPartitioned`
+ - **RepartitionExec**: Redistributes data across partitions
+ - May show `Hash([column], N)` or `RoundRobinBatch(N)`
+ - **CoalesceBatchesExec**: Combines data batches
+ - **SortExec**: Sorts data
+ - May show `TopK(fetch=N)` for optimized sorting
+ - **SortPreservingMergeExec**: Merges sorted data streams
+ - **CooperativeExec**: Coordinates distributed execution
+
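+As an illustrative sketch, a typical aggregation query produces a physical plan shaped like the following, read bottom to top. The operator modes, partition counts, and column names here are hypothetical; the actual plan depends on your data layout and cluster topology:
+
+```
+SortPreservingMergeExec: [error_count DESC], fetch=10
+  SortExec: TopK(fetch=10), expr=[error_count DESC]
+    ProjectionExec: expr=[kubernetes_namespace_name, count(*) AS error_count]
+      AggregateExec: mode=FinalPartitioned, gby=[kubernetes_namespace_name], aggr=[count(*)]
+        CoalesceBatchesExec: target_batch_size=8192
+          RepartitionExec: partitioning=Hash([kubernetes_namespace_name], 4)
+            AggregateExec: mode=Partial, gby=[kubernetes_namespace_name], aggr=[count(*)]
+              FilterExec: log_level = error
+                DataSourceExec: partitions=4
+```
+
+Note the two `AggregateExec` stages: each partition computes a partial aggregate, the data is repartitioned by the group-by key, and a final aggregate merges the partial results.
+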
+---
+
+## Analyze Query
+
+The Analyze Query option displays execution details for each operator in the physical plan.
+
+
+### Execution Summary
+
+Shows overall query execution information:
+
+- **Total Time**: Time taken to execute the entire query
+- **Total Rows**: Total number of rows processed
+- **Peak Memory**: Memory usage during execution
+
+### Execution Tree
+
+Displays each operator involved in the query execution as a hierarchical tree with execution details.
+
+**Every operator node shows:**
+
+- **Operator name**: Such as ProjectionExec, AggregateExec, or FilterExec
+- **Mode or parameters**: Such as aggregation mode (Partial, FinalPartitioned) or partition count
+- **Rows processed**: Number of rows output by the operator
+- **Execution time**: Time taken by that operator
+
+!!! note "Visual indicators:"
+
+ - Row counts are shown with an icon next to each operator.
+ - Execution times are shown in microseconds with an icon next to each operator.
+ - The tree uses indentation to show parent-child relationships.
+
+### Using analyze results
+
+The analyze results help you understand query performance by showing:
+
+- How many rows each operator processes
+- How much time each operator takes
+- Where in the execution tree operations occur
+
+Compare row counts and execution times across operators to understand your query's behavior.
+
diff --git a/docs/user-guide/logs/index.md b/docs/user-guide/logs/index.md
index 0f524bf3..6d535969 100644
--- a/docs/user-guide/logs/index.md
+++ b/docs/user-guide/logs/index.md
@@ -1,4 +1,4 @@
-## What are Logs
+## Overview
Logs are a type of stream in OpenObserve that record structured event data from applications, systems, or services. Each log entry includes a timestamp, message, and optional metadata fields such as severity, service name, or container details.
You can use the Logs page to:
@@ -9,12 +9,14 @@ You can use the Logs page to:
- Save views and schedule recurring searches
- Export logs for offline analysis
-## Access
+!!! note "Who can access"
-- **Enterprise** and **Cloud** editions support Role-Based Access Control (RBAC) to restrict log access per stream and role.
-- **Open Source** edition provides full access to all logs for all users.
+ - **Enterprise** and **Cloud** editions support Role-Based Access Control (RBAC) to restrict log access per stream and role.
+ - **Open Source** edition provides full access to all logs for all users.
Learn more:
- [Logs in OpenObserve](logs.md)
-- [Search Around](search-around.md)
\ No newline at end of file
+- [Search Around](search-around.md)
+- [Quick Mode and Interesting Fields](quickmode.md)
+- [Explain and Analyze Query](explain-analyze-query.md)
\ No newline at end of file
diff --git a/docs/user-guide/management/aggregation-cache.md b/docs/user-guide/management/aggregation-cache.md
index 1303f012..cee14c93 100644
--- a/docs/user-guide/management/aggregation-cache.md
+++ b/docs/user-guide/management/aggregation-cache.md
@@ -3,7 +3,8 @@ title: Streaming Aggregation in OpenObserve
description: Learn how streaming aggregation works in OpenObserve Enterprise.
---
-This page explains what streaming aggregation is and shows how to use it to improve query performance with aggregation cache in OpenObserve.
+This page explains what streaming aggregation is and how it improves query performance in OpenObserve.
+
!!! info "Availability"
This feature is available in Enterprise Edition.
@@ -12,7 +13,7 @@ This page explains what streaming aggregation is and shows how to use it to impr
## What is streaming aggregation?
- Streaming aggregation in OpenObserve enables **aggregation cache**. When streaming aggregation is enabled, OpenObserve begins caching the factors required to compute aggregates for each time partition during query execution. These cached values can then be reused for later queries that cover the same or overlapping time ranges.
+ Streaming aggregation is an Enterprise feature that enables **aggregation cache**. By default, the streaming aggregation feature is enabled. It allows OpenObserve to cache the factors required to compute aggregates for each time partition during query execution. These cached values can then be reused for later queries that cover the same or overlapping time ranges.
??? "Why aggregation cache matters"
@@ -28,18 +29,14 @@ This page explains what streaming aggregation is and shows how to use it to impr
??? "Relationship between streaming aggregation and aggregation cache"
- - **Streaming aggregation** is the feature toggle in Enterprise settings.
- - **Aggregation cache** is the mechanism that becomes active when streaming aggregation is enabled.
+ - **Streaming aggregation** is the underlying technology that enables aggregation cache.
+ - **Aggregation cache** is the mechanism that stores and reuses your query results.
!!! Note "Who can use it"
All Enterprise users.
!!! Note "Where to find it"
- To enable aggregation cache:
-
- 1. Go to **Management > General Settings**.
- 2. Turn on the **Enable Streaming Aggregation** toggle.
- 3. Select **Save**.
+ It is **enabled by default** in Enterprise Edition. No manual configuration is required.
!!! Note "Environment variables"
@@ -74,7 +71,7 @@ This page explains what streaming aggregation is and shows how to use it to impr
---
## Query Behavior
- When aggregation cache is enabled, OpenObserve writes intermediate results into disk as Arrow IPC files. These files store values that can be safely combined later instead of full raw logs.
+ OpenObserve writes intermediate results into disk as Arrow IPC files. These files store values that can be safely combined later instead of full raw logs.
Example query
```sql
SELECT avg(response_time), sum(bytes), count(*)
@@ -161,9 +158,9 @@ This page explains what streaming aggregation is and shows how to use it to impr
| `zo_query_aggregation_cache_bytes` | Monitor memory consumption to ensure the cache stays within acceptable limits and doesn't exhaust system resources |
---
-=== "How to use"
+=== "Verifying aggregation cache"
- ## How to use streaming aggregation
+ ## Verifying aggregation cache
**Example query**
```sql
SELECT
@@ -226,8 +223,8 @@ This page explains what streaming aggregation is and shows how to use it to impr
---
## Performance benefits
-Streaming aggregation is enabled in all the following test runs:
-
+The following test runs demonstrate aggregation cache performance improvements:
+
**Test run 1**:
- Time range: `2025-08-13 00:00:00 - 2025-08-20 00:00:00`
diff --git a/docs/user-guide/management/query-management.md b/docs/user-guide/management/query-management.md
index 8eb92233..6c7b91e2 100644
--- a/docs/user-guide/management/query-management.md
+++ b/docs/user-guide/management/query-management.md
@@ -5,7 +5,8 @@ description: >-
---
This page explains what Query Management is and shows how to use it.
-> This feature is available only in [high-availability (HA)](../../openobserve-enterprise-edition-installation-guide.md) deployments.
+!!! info "Availability"
+    This feature is available in Enterprise Edition. It is not available in the Open Source or Cloud editions.
=== "Overview"
## What is Query Management?
@@ -37,7 +38,7 @@ This page explains what Query Management is and shows how to use it.
The **Running Queries** table displays the following fields:
- **Email**: The email ID of the user who initiated the queries.
- - **Search Type**: The origin of the query (for example, dashboards, alerts, or others).
+ - **Search Type**: The origin of the query. For example, dashboards, alerts, or others.
- **Number of Queries**: Total active queries for that user.
- **Total Exec. Duration**: Combined time spent executing all active queries.
- **Total Query Range**: Total log duration the queries are scanning.
@@ -48,7 +49,8 @@ This page explains what Query Management is and shows how to use it.
- **Email:** The email of the user who triggered the query.
- **Organization ID:** The organization context.
- - **Search Type:** The source of the query, such as dashboards, UI, or alerts.
+ - **Search Type:** The source of the query, such as dashboards, alerts, or others.
+ - **Query Source:** Displays the specific origin of the query. If the query originates from a **dashboard**, this field shows the dashboard name. If the query originates from an **alert**, it shows the alert name.
- **Execution Duration:** The total time the query has been running.
- **Query Range:** The time range being queried.
- **Query Type:** Whether the system classifies the query as Short or Long.
diff --git a/docs/user-guide/management/sensitive-data-redaction.md b/docs/user-guide/management/sensitive-data-redaction.md
index 4867a298..c1735470 100644
--- a/docs/user-guide/management/sensitive-data-redaction.md
+++ b/docs/user-guide/management/sensitive-data-redaction.md
@@ -163,7 +163,7 @@ Once your patterns are created and tested, you can apply them to specific fields
**Hash**:
- - Replaces the matched sensitive value with a **deterministic hashed token** while keeping its position within the field.
+ - Replaces the matched sensitive value with a **searchable hash** while keeping its position within the field.
**Drop**:
@@ -214,14 +214,6 @@ Once your patterns are created and tested, you can apply them to specific fields
## Test Redact, Hash and Drop operations
-The following regex patterns are applied to the `message` field of the `pii_test` stream:
-
-| Pattern Name | Action | Timing | Description |
-|--------------|--------|--------|-------------|
-| Full Name | Redact | Ingestion | Masks names like "John Doe" |
-| Email | Redact | Query | Masks email addresses at query time |
-| IP Address | Drop | Ingestion | Removes IP addresses before storage |
-| Credit Card | Drop | Query | Excludes credit card numbers from results |
??? "Test 1: Redact at ingestion time"
### Redact at ingestion time
@@ -362,19 +354,83 @@ The following regex patterns are applied to the `message` field of the `pii_test
6. Verify results:

-## Search hashed values uUsing `match_all_hash`
+## Search hashed values using `match_all_hash`
The `match_all_hash` user-defined function (UDF) complements the SDR Hash feature. It allows you to search for logs that contain the hashed equivalent of a specific sensitive value.
-When data is hashed using Sensitive Data Redaction, the original value is replaced with a deterministic hash. You can use `match_all_hash()` to find all records that contain the hashed token, even though the original value no longer exists in storage.
-Example:
+When data is hashed using Sensitive Data Redaction, the original value is replaced with a searchable hash. You can use `match_all_hash()` to find all records that contain the hashed token, even though the original value no longer exists in storage.
+**Example**:
```sql
match_all_hash('4111-1111-1111-1111')
```
This query returns all records where the SDR Hash of the provided value exists in any field.
In the example below, it retrieves the log entry containing
[REDACTED:907fe4882defa795fa74d530361d8bfb], the hashed version of the given card number.
+

+## Import patterns from built-in library
+OpenObserve provides a built-in library of 147+ pre-configured regex patterns that can be imported directly into your organization. These patterns cover common sensitive data types and security-related formats, allowing you to quickly implement data protection without writing regex patterns from scratch.
+
+
+**To import patterns from the built-in library:**
+
+??? "Step 1: Navigate to the Import section"
+ ### Step 1: Navigate to the Import section
+
+ 1. Go to **Management** > **Sensitive Data Redaction**.
+ 2. Click the **Import** button in the top-right corner.
+ 3. The **Import Pattern** screen opens with three tabs:
+ - **Built-in Patterns**: Pre-configured patterns from OpenObserve's pattern library
+ - **File Upload/JSON**: Import patterns from a JSON file
+ - **URL Import**: Import patterns from a URL
+ 4. Select the **Built-in Patterns** tab.
+
+ 
+
+??? "Step 2: Browse and search patterns"
+ ### Step 2: Browse and search patterns
+
+ The built-in patterns library displays 147 patterns. You can:
+
+ - **Search patterns**: Use the search bar to find patterns by name
+ 
+ - **Filter by tags**: Use the "Filter by Tag" dropdown to narrow patterns by category
+ 
+ - **Refresh**: Click the **Refresh** button to pull the latest patterns from the GitHub repository
+
+??? "Step 3: View pattern details"
+ ### Step 3: View pattern details
+
+ To view details about a pattern before importing:
+
+ 1. Click the **three dots (⋮)** icon next to any pattern in the list.
+ 2. A detail panel displays:
+
+ - **Description**: What the pattern detects
+ - **Pattern**: The actual regex expression
+ - **Tags**: Categories the pattern belongs to
+ - **Rarity**: How commonly this pattern is used
+ - **Valid Examples**: Sample data that matches this pattern
+
+ 
+
+ This helps you verify the pattern will match your expected data format before importing.
+
+??? "Step 4: Select and import patterns"
+ ### Step 4: Select and import patterns
+
+ 1. **Select patterns**: Check the box next to each pattern you want to import. You can select multiple patterns at once.
+ 
+ 2. **Import**: Click the **Import** button in the top-right.
+ 
+
+ After importing patterns, you can edit, export, and delete the patterns.
+ 
+
+ !!! warning "Duplicate handling"
+ The system does not allow you to import the same pattern more than once to avoid duplicates.
+
+
## Limitations
- **Pattern Matching Engine**: OpenObserve uses the Intel Hyperscan library for regex evaluation. All Hyperscan limitations apply to pattern syntax and matching behavior.
diff --git a/docs/user-guide/management/streaming-search.md b/docs/user-guide/management/streaming-search.md
index 04faab65..5ce8cf3f 100644
--- a/docs/user-guide/management/streaming-search.md
+++ b/docs/user-guide/management/streaming-search.md
@@ -2,20 +2,14 @@
title: OpenObserve Streaming Search
description: Learn how OpenObserve's Streaming Search delivers incremental query results using HTTP/2 partitioning for faster log analysis and real-time data processing.
---
-This user guide provides details on how to configure, and use the **Streaming Search** feature to improve query performance and responsiveness.
+This user guide provides details on how the **Streaming Search** feature improves query performance and responsiveness.
## What is Streaming Search?
Streaming Search allows OpenObserve to return query results through a single, persistent HTTP/2 connection. Instead of issuing one HTTP request per partition, the system streams the entire result set over one connection. This reduces overhead, improves dashboard performance, and provides faster response times during long-range queries.
!!! note "Where to Find"
- The **Streaming Search** toggle is located under **Management > General Settings**.
-
-!!! note "Who Can Access"
- The `Root` user and any other user with permission to **update** the **Settings** module can modify the **Streaming Search** setting. Access is controlled through role-based access control (RBAC).
-
- 
-
+ This feature is **enabled by default** and automatically optimizes query performance.
## Concepts
### Partition
@@ -39,7 +33,7 @@ The environment variable is enabled by default and it defines the duration of th
### HTTP/2 Streaming
-When **Streaming Search** is enabled, OpenObserve uses a single HTTP/2 connection to send the entire result set.
+OpenObserve uses a single HTTP/2 connection to send the entire result set.
This enables:
@@ -47,16 +41,7 @@ This enables:
- Fewer HTTP round trips
- Faster response and improved dashboard rendering
-> **Note:** When **Streaming Search is enabled**, the mini-partition output is included in the same `_search_stream` response. All data is delivered over a single HTTP/2 connection. You do not see a separate request for the mini-partition.
-
-## Enable or Disable Streaming Search
-
-1. Go to **Management**.
-2. Select **General Settings**.
-
-3. Locate the **Enable Streaming Search** option.
-4. Toggle this switch to **On** to enable streaming mode, or **Off** to disable it.
-5. Click **Save** to apply the changes.
+> **Note:** The mini-partition output is included in the same `_search_stream` response. All data is delivered over a single HTTP/2 connection. You do not see a separate request for the mini-partition.
## Without Streaming Search
@@ -100,5 +85,4 @@ Panel 2 triggers 1 `_search_stream` request and completes loading the data in 1.
## Considerations
- Requires HTTP/2 support in the network stack.
-- Falls back to standard query mode if disabled.
- Partitioning behavior is automatic. Mini-partitioning improves the time-to-first-result without affecting the accuracy of final results.
diff --git a/docs/user-guide/streams/query-recommendations.md b/docs/user-guide/streams/query-recommendations.md
index c3388e0b..80f36401 100644
--- a/docs/user-guide/streams/query-recommendations.md
+++ b/docs/user-guide/streams/query-recommendations.md
@@ -3,7 +3,7 @@ title: Query Recommendations Stream in OpenObserve
description: Understand the purpose, structure, and usage of the query_recommendations stream in the _meta organization in OpenObserve.
---
-This document explains the function and application of the query_recommendations stream within the _meta organization in OpenObserve. It provides guidance for users who want to optimize query performance using system-generated recommendations based on observed query patterns.
+This document explains the function and application of the `query_recommendations` stream within the `_meta` organization in OpenObserve. It provides guidance for users who want to optimize query performance using system-generated recommendations based on observed query patterns.
!!! info "Availability"
This feature is available in Enterprise Edition.
@@ -26,6 +26,24 @@ OpenObserve continuously analyzes user queries across streams to identify optimi
- You are planning schema-level optimizations.
- You want to validate whether frequently queried fields would benefit from indexing.
+## How recommendations are generated
+OpenObserve periodically analyzes recent query usage for each organization and stream. It examines which fields were queried, what operators were used, and how frequently. Based on this analysis, OpenObserve generates system recommendations that appear in the `query_recommendations` stream.
+
+The following scenarios can trigger a recommendation:
+
+| Recommendation | Trigger condition (When this recommendation is shown) | Why the recommendation is made |
+| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| **Use `str_match`** | The field already has a secondary index, but queries on that field do not use `=`, `IN`, or `str_match`. | These operators are optimized for indexed fields. Using them allows queries to take advantage of existing indexes. |
+| **Use `match_all`** | The field is configured for full-text search, but queries on that field do not use the `match_all` function. | The `match_all` function uses the full-text index efficiently. Without it, queries may perform slower scans. |
+| **Enable secondary index for col `field`** | Queries frequently use equality or membership operators (`=`, `IN`, or `str_match`) on a field that is not indexed. | Adding a secondary index on such fields can significantly reduce query latency. |
+| **Use full text search** | Queries frequently use pattern-based operators (`LIKE` or regex match) on a field. | This pattern suggests that the field would perform better with full-text search enabled. |
+
+
+!!! note "Additional details"
+ - Fields used as partition keys are excluded from recommendations.
+ - The engine estimates distinct value counts for the most active streams to help decide whether indexing will be effective.
+ - Each recommendation includes the observed operators, total occurrences, and reasoning.
+
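+The operator-based triggers above map to concrete query shapes. In the hypothetical sketch below, the stream and field names are illustrative:
+
+```sql
+-- Frequent equality filters on an un-indexed field can trigger
+-- "Enable secondary index for col k8s_pod_name":
+SELECT * FROM "default" WHERE k8s_pod_name = 'frontend-7d9c'
+
+-- If k8s_pod_name already has a secondary index, rewriting LIKE
+-- filters with str_match lets queries use that index:
+SELECT * FROM "default" WHERE str_match(k8s_pod_name, 'frontend')
+```
+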
## How to use it
1. Switch to the `_meta` organization in OpenObserve.
2. Go to the **Logs** section.
@@ -46,11 +64,12 @@ OpenObserve continuously analyzes user queries across streams to identify optimi
| `total_occurrences` | Total number of queries examined. |
| `num_distinct_values` | Count of distinct values seen in the field. |
| `duration_hrs` | Duration (in hours) over which this pattern was observed. |
-| `reason` | Explanation behind the recommendation. |
-| `recommendation` | Specific action suggested (typically, create secondary index). |
-| `type` | Always `SecondaryIndexStreamSettings` for this stream. |
+| `reason` | Explanation behind the recommendation, including observed operators and occurrence counts. |
+| `recommendation` | Specific suggestion such as **Use str_match**, **Use match_all**, **Enable secondary index**, or **Use full text search**. |
+| `type` | Indicates the type of optimization. Can be `SecondaryIndexStreamSettings`, `FTSStreamSettings`, or `QueryOptimisation`. |
-## Examples and how to interpret them
+## Examples and how to interpret recommendations
+The examples below show how OpenObserve surfaces query patterns and recommends indexing or operator changes based on the above logic.
**Example 1**

@@ -66,4 +85,5 @@ This recommendation indicates that across the last 360000000 hours of query data
This recommendation is for the `status` field in the `alert_test` stream. All 5 queries used `status` with an equality operator. Although the number is small, the uniform pattern indicates a potential for future optimization.
!!! note "Interpretation"
- Consider indexing status if query volume increases or performance becomes a concern.
\ No newline at end of file
+ Consider indexing status if query volume increases or performance becomes a concern.
+
diff --git a/docs/user-guide/streams/schema-settings.md b/docs/user-guide/streams/schema-settings.md
index 5ff52bf5..7a3fd8af 100644
--- a/docs/user-guide/streams/schema-settings.md
+++ b/docs/user-guide/streams/schema-settings.md
@@ -38,7 +38,7 @@ User-Defined Schema (UDS) allows you to select a subset of fields that are:
- Retained for storage
- Searchable and indexable
-All other fields will either be ignored or stored in a special `_raw` field if the **Store Original Data** toggle is enabled. These unselected fields will not be searchable.
+All other fields will either be ignored or stored in a special `_all` field if the **Store Original Data** toggle is enabled. These unselected fields will not be searchable.
To enable UDS support, set the following environment variable `ZO_ALLOW_USER_DEFINED_SCHEMAS` to `true` .
@@ -65,6 +65,45 @@ Sensitive Data Redaction (SDR) lets you redact or drop sensitive data during ing
For detailed steps to create and manage SDR rules, refer to the [Sensitive Data Redaction](https://openobserve.ai/docs/user-guide/management/sensitive-data-redaction/) guide.
+
+## Manage fields in Schema Settings
+The Schema Settings tab in the Stream Details page allows you to view, search, and manage fields in your stream schema. You can add, remove, or delete fields as needed to maintain accurate schema definitions for your data.
+
+### Search fields
+Use the search bar in the top-right corner to quickly find a **field by name** or **index type**.
+This feature helps locate specific fields efficiently, especially in large schemas.
+
+### Add field
+To add a new field manually:
+
+1. Select the **+** icon next to the search bar.
+
+2. The **Add Field(s)** section appears above the field list.
+3. Enter the **Field Name** and **Data Type**.
+
+4. Select **Update Settings** to save the new field to the schema.
+
+You can define multiple fields in this section before applying the changes. This option is used when creating or extending the User Defined Schema (UDS).
+
+### Remove defined schema
+To remove defined schema entries, first select one or more fields using the checkbox beside each field name.
+Then select **Remove Defined Schema** at the bottom of the panel.
+
+This action removes the selected fields from the **User Defined Schema** but does not delete them from the stream. The removed fields appear under **Other Fields**.
+
+
+### Delete fields
+To delete fields from the schema, first select one or more fields using the checkbox beside each field name.
+Then select **Delete** at the bottom of the panel.
+
+This action permanently deletes the selected fields from the schema. Use this action carefully, as it directly modifies the schema definition.
+
+
+
+### Bulk selection
+Each field includes a checkbox for selection. You can select multiple fields and apply **Add to Defined Schema**, **Remove Defined Schema**, or **Delete** actions in bulk to simplify schema management and save time.
+
+
+
## Next Steps
- [Extended Retention](extended-retention.md)