-
- Community chat
- The best way to contact our engineers and share your ideas with them is through our Gitter channel.
-
-
- Stack Overflow
- The ThingsBoard team actively monitors posts that are tagged with "thingsboard" on the user forum. If you can't find an existing question that addresses your issue, feel free to ask a new one. Our team will be happy to assist you.
-
+
If you are unable to find a solution to your problem from any of the guides provided above, please do not hesitate to contact the ThingsBoard team for further assistance.
-
Contact us
+
Contact us
diff --git a/_includes/docs/mqtt-broker/user-guide/backpressure.md b/_includes/docs/mqtt-broker/user-guide/backpressure.md
new file mode 100644
index 0000000000..0495ea105b
--- /dev/null
+++ b/_includes/docs/mqtt-broker/user-guide/backpressure.md
@@ -0,0 +1,188 @@
+* TOC
+{:toc}
+
+In high-throughput messaging systems like TBMQ, [backpressure](https://medium.com/@jayphelps/backpressure-explained-the-flow-of-data-through-software-2350b3e77ce7)
+handling is essential to ensure stability, maintain performance, and prevent out-of-memory (OOM) errors under heavy load.
+Backpressure can occur in two directions: **inbound**, when data is flowing into the broker from publishers, and **outbound**, when the broker delivers data to subscribers.
+TBMQ is designed to effectively handle both.
+
+TBMQ uses Netty as the backbone for its MQTT communication, handling all low-level networking and I/O operations.
+While Netty provides high performance and scalability, it also requires careful control of inbound and outbound message flow.
+
+## Inbound Backpressure
+
+TBMQ is architected to handle virtually unlimited load from publishers.
+Incoming messages are not stored in memory indefinitely — instead, they are immediately persisted to **Kafka**, which acts as the backbone for further processing and routing.
+This design ensures that even under extreme publisher throughput, memory usage remains stable and predictable.
+
+To support growing workloads, TBMQ can be **horizontally scaled** by deploying multiple broker instances, distributing the load and increasing throughput capacity.
+However, for users who prefer not to scale horizontally or to invest heavily in infrastructure or advanced configuration tuning,
+TBMQ offers additional mechanisms to manage incoming traffic effectively.
+
+These include TCP-level backpressure, controlled via Netty's socket receive buffer, and application-level rate limiting, which allows enforcing both per-client and cluster-wide message rate policies.
+Together, these options provide flexible, cost-efficient ways to protect the broker from overload and ensure stable performance under varying traffic conditions.
+
+### TCP-Level Backpressure
+
+One of the key mechanisms is TCP-level backpressure, enabled through the Netty socket receive buffer. This buffer can be configured using the `so_receive_buffer` parameter:
+
+```yaml
+# Socket receive buffer size for Netty in KB.
+# If the buffer limit is reached, TCP will trigger backpressure and notify the sender to slow down.
+# If set to 0 (default), the system's default buffer size will be used.
+so_receive_buffer: "${NETTY_SO_RECEIVE_BUFFER:0}"
+```
+
+When the receive buffer is filled and not being drained fast enough (e.g., due to high load or slow downstream processing), TCP will signal the remote sender to apply backpressure.
+This allows the broker to slow down inbound traffic naturally without immediately dropping connections or overloading memory.
+
+> In most cases, it is recommended to leave this value at 0, which lets the operating system apply an optimized default.
+> Only consider tuning this parameter in low-latency or high-throughput scenarios after profiling, or when you want tighter control over memory usage and backpressure behavior.
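The `so_receive_buffer` parameter ultimately maps to the standard `SO_RCVBUF` socket option. As an illustrative aside (not TBMQ code), the same knob can be inspected with Python's standard library; note that Linux typically reports roughly double the requested size to account for kernel bookkeeping:

```python
import socket

# SO_RCVBUF is the kernel-level option behind NETTY_SO_RECEIVE_BUFFER.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
requested = 32 * 1024  # ask for a 32 KB receive buffer
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# The kernel may adjust the value (Linux usually doubles it).
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

Once this buffer fills faster than the application drains it, the TCP window shrinks and the peer slows down, which is exactly the backpressure effect described above.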
+
+### Rate Limiting
+
+While not a reactive backpressure mechanism, rate limiting in TBMQ serves as an additional layer of protection by proactively controlling the volume of incoming messages.
+It complements true backpressure mechanisms by enforcing traffic constraints before the system becomes overloaded.
+TBMQ supports both cluster-wide rate limits (to control total incoming traffic) and per-client rate limits (to prevent individual publishers from overwhelming the broker).
+These settings allow operators to define message rate policies that help maintain system stability, ensure fairness across clients, and protect against traffic spikes.
+
+```yaml
+rate-limits:
+ total:
+ # Enable/disable total incoming and outgoing messages rate limits for the broker (per whole cluster)
+ enabled: "${MQTT_TOTAL_RATE_LIMITS_ENABLED:false}"
+ # Limit the total message rate across the cluster (e.g., 1000 messages per second, 50000 per minute)
+ config: "${MQTT_TOTAL_RATE_LIMITS_CONFIG:1000:1,50000:60}"
+
+ incoming-publish:
+ # Enable/disable per-client publish rate limits
+ enabled: "${MQTT_INCOMING_RATE_LIMITS_ENABLED:false}"
+ # Limit how many messages each client can send over time (e.g., 10 messages per second, 300 per minute)
+ client-config: "${MQTT_INCOMING_RATE_LIMITS_CLIENT_CONFIG:10:1,300:60}"
+```
+
+Together, TCP backpressure and configurable rate limits make TBMQ highly resilient and capable of self-regulating traffic before any internal processing bottlenecks or memory pressure occur.
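The `count:seconds` pairs in the configs above (e.g. `1000:1,50000:60`) each define an independent window, and a message is accepted only if every window still has spare capacity. TBMQ itself is a Java application; the following Python sketch merely models how such a multi-window limit can be parsed and enforced:

```python
import time
from collections import deque

def parse_rate_limits(config: str):
    """Parse 'count:seconds' pairs, e.g. '1000:1,50000:60'."""
    limits = []
    for part in config.split(","):
        count, seconds = part.split(":")
        limits.append((int(count), int(seconds)))
    return limits

class SlidingWindowLimiter:
    """Accepts a message only if every configured window has capacity."""
    def __init__(self, config: str):
        self.limits = parse_rate_limits(config)
        # one timestamp log per window
        self.events = [deque() for _ in self.limits]

    def try_acquire(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        for (count, seconds), log in zip(self.limits, self.events):
            while log and now - log[0] >= seconds:
                log.popleft()  # evict timestamps that fell out of the window
            if len(log) >= count:
                return False   # this window is exhausted, reject the message
        for log in self.events:
            log.append(now)    # record the accepted message in every window
        return True
```

This is only a model of the configuration semantics; the broker's actual implementation may differ in algorithm and granularity.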
+
+## Outbound Backpressure
+
+If a subscriber cannot keep up, the broker’s outbound channel buffer may become overwhelmed.
+Without backpressure control, this can lead to uncontrolled memory growth and eventually cause the broker to run out of memory.
+To address this, TBMQ introduces a backpressure-aware delivery mechanism that detects when a Netty channel becomes non-writable and temporarily pauses message delivery.
+Delivery resumes automatically once the channel becomes writable again.
+This ensures efficient memory usage and stable operation even under heavy load.
+
+### Netty Channel Writability Monitoring
+
+TBMQ uses Netty as the underlying network framework, which provides built-in support for monitoring the **writability** (`channelWritabilityChanged` event) of each channel.
+This allows TBMQ to detect when a subscriber’s connection becomes overwhelmed with outbound data and apply backpressure by pausing further writes to that channel.
+
+Netty determines writability based on **write buffer watermarks** — a pair of thresholds:
+
+- **High Watermark**: If the outbound buffer size exceeds this threshold, the channel is marked as **non-writable**. TBMQ will stop sending messages to that client until the buffer drains.
+- **Low Watermark**: When the buffer size drops below this value, the channel becomes **writable** again, and TBMQ resumes message delivery.
+
+These thresholds are configurable via environment variables:
+
+- `NETTY_WRITE_BUFFER_LOW_WATER_MARK` – defines the low watermark in bytes (default: `32768`, i.e. 32 KB)
+- `NETTY_WRITE_BUFFER_HIGH_WATER_MARK` – defines the high watermark in bytes (default: `65536`, i.e. 64 KB)
+
+These values are applied during Netty server bootstrap using the `WRITE_BUFFER_WATER_MARK` channel option.
+
+By leveraging this mechanism, TBMQ ensures that no client connection can consume excessive memory due to unchecked message delivery.
+Instead, delivery is paused and resumed dynamically based on channel health, preserving broker stability under load.
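The watermark pair described above is a classic hysteresis scheme: the channel only flips state when a boundary is crossed, which avoids rapid pause/resume toggling around a single threshold. The hypothetical Python model below illustrates the mechanism (TBMQ relies on Netty's built-in Java implementation, not code like this):

```python
class WatermarkChannel:
    """Minimal model of Netty-style write-buffer watermarks."""
    def __init__(self, low=32 * 1024, high=64 * 1024):
        self.low, self.high = low, high
        self.buffered = 0          # bytes queued for the client
        self.writable = True

    def enqueue(self, nbytes: int):
        self.buffered += nbytes
        if self.buffered > self.high:
            self.writable = False  # broker pauses delivery to this client

    def drain(self, nbytes: int):
        self.buffered = max(0, self.buffered - nbytes)
        if self.buffered < self.low:
            self.writable = True   # broker resumes delivery
```

Note that draining from just above the high watermark to just below it does not make the channel writable again; the buffer must fall under the low watermark first.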
+
+### Handling Non-Persistent and Persistent Clients
+
+TBMQ differentiates backpressure behavior based on whether the subscriber client is **persistent** or **non-persistent**, ensuring efficient use of memory and storage resources.
+
+#### Non-Persistent Clients
+
+For non-persistent clients, TBMQ does **not store messages** if the channel becomes non-writable. Instead, when backpressure is detected:
+
+- The broker **skips** delivery of messages to that client.
+- These dropped messages are not retained or retried, which aligns with MQTT expectations for non-persistent sessions.
+- A global dropped message counter is maintained to track how many messages were skipped due to backpressure. This metric provides visibility into system behavior under load and helps identify bottlenecks.
+
+This approach avoids memory buildup for short-lived or unreliable clients that are not expected to maintain state.
+
+#### Persistent Clients
+
+Persistent clients have guaranteed message delivery, so skipping messages is not acceptable.
+TBMQ ensures durability even under backpressure by using persistent storage for message queuing and controlling delivery based on channel writability.
+
+For **Device clients**, messages are stored in **Redis** before delivery. If the channel becomes non-writable, message sending is paused.
+Once the channel becomes writable again, TBMQ resumes delivery by reading pending messages from Redis.
+
+- Redis has a **per-client message queue limit** (e.g., 10,000 messages). If this limit is exceeded before the client becomes writable, older messages may be dropped.
+- This limit is configurable via the `MQTT_PERSISTENT_SESSION_DEVICE_PERSISTED_MESSAGES_LIMIT` environment variable.
+- Additionally, each message stored in Redis has a **time-to-live (TTL)** to ensure stale messages are eventually cleaned up.
+The TTL is configurable via the environment variable `MQTT_PERSISTENT_SESSION_DEVICE_PERSISTED_MESSAGES_TTL`.
+For MQTT 5.0 clients that specify a `message expiry interval`, TBMQ respects the client-defined value and uses it in place of the configured default.
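The combined effect of the queue limit and the TTL can be modeled in a few lines. This is an illustrative Python sketch only, not the Redis data structures TBMQ actually uses:

```python
from collections import deque

class BoundedTtlQueue:
    """Toy per-client queue: drops the oldest entries past the size limit
    and expires entries older than the TTL, mirroring the limit/TTL
    settings described above."""
    def __init__(self, limit=10_000, ttl_seconds=604_800):
        self.limit, self.ttl = limit, ttl_seconds
        self.items = deque()  # (stored_at, payload)

    def add(self, payload, now):
        self.items.append((now, payload))
        while len(self.items) > self.limit:
            self.items.popleft()  # oldest messages are dropped first

    def drain(self, now):
        """Return non-expired messages in order, emptying the queue."""
        out = [p for (t, p) in self.items if now - t < self.ttl]
        self.items.clear()
        return out
```

In the real broker, the drain step corresponds to delivery resuming once the channel becomes writable again.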
+
+For **Application clients**, messages are stored in **Kafka**. If the channel to the client becomes non-writable,
+TBMQ temporarily **pauses the Kafka consumer** for that client to avoid polling and buffering unnecessary messages.
+Once the channel becomes writable, the consumer is resumed and message delivery continues.
+
+- Kafka's **retention policy** ensures that even when consumers are paused, messages remain available for a defined period:
+```properties
+retention.ms=604800000 (7 days)
+retention.bytes=1048576000 (1 GB)
+```
+- These settings can be customized via the `TB_KAFKA_APP_PERSISTED_MSG_TOPIC_PROPERTIES` environment variable.
+
+This mechanism ensures that persistent clients can reliably receive messages even under backpressure, without overloading the broker or losing data.
+
+### Shared Subscriptions and Backpressure Handling
+
+TBMQ also applies backpressure handling logic in the context of [shared subscriptions](/docs/{{docsPrefix}}mqtt-broker/user-guide/shared-subscriptions/), ensuring reliable and efficient message delivery across all subscription types.
+A shared subscription group may contain one or more subscribers, and messages are distributed among them according to MQTT 5.0 rules.
+When backpressure is detected, the broker adjusts delivery based on the type and persistence level of the shared group.
+
+#### Non-Persistent Shared Subscription Group
+
+If a subscriber in the group becomes non-writable, TBMQ skips it and attempts to deliver the message to another writable subscriber in the group.
+If all subscribers in the group are non-writable, the message is dropped entirely and not queued or retained.
+This behavior matches the expectation for non-persistent clients, where message loss is acceptable under overload conditions.
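The delivery logic for a non-persistent shared group can be sketched as follows. This is an illustrative Python model, and picking the first writable subscriber is a simplification of the broker's real distribution strategy within a group:

```python
class Subscriber:
    """Toy subscriber: tracks writability and received messages."""
    def __init__(self, writable: bool):
        self.writable = writable
        self.received = []

    def send(self, message):
        self.received.append(message)

def deliver_to_shared_group(message, subscribers, stats) -> bool:
    """Deliver to the first writable subscriber; drop the message if none."""
    for sub in subscribers:
        if sub.writable:
            sub.send(message)
            return True
    stats["dropped"] += 1  # mirrors the global dropped-message counter
    return False
```

For the persistent variants described below, the final drop step would instead be replaced by queueing to Redis or relying on Kafka retention.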
+
+#### Persistent Device Shared Subscription Group
+
+If a subscriber is non-writable, it is skipped, and the message is routed to another writable subscriber in the same group.
+If none of the subscribers are writable, the message is saved to Redis, using a per-group queue associated with the shared subscription key.
+Redis ensures that once any subscriber in the group becomes writable again, delivery resumes from the stored messages.
+Queue size and TTL are controlled via the same configuration as for individual persistent device clients.
+
+#### Persistent Application Shared Subscription Group
+
+When a subscriber in the group becomes non-writable, TBMQ removes it from the Kafka consumer group associated with the shared subscription.
+Other writable subscribers continue polling messages from Kafka as usual.
+If all subscribers in the group become non-writable, the consumer group becomes temporarily empty, and no messages are polled.
+Kafka retains undelivered messages according to the topic’s configured retention policy (environment variable `TB_KAFKA_APP_PERSISTED_MSG_SHARED_TOPIC_PROPERTIES`),
+ensuring that once any subscriber becomes writable and rejoins the group, message delivery resumes.
+
+This approach ensures that TBMQ maintains performance, reliability, and resource efficiency even when handling shared subscriptions under pressure. Each strategy is tailored to the persistence level of the clients in the group.
+
+## Recommendations
+
+To maximize the effectiveness of TBMQ’s backpressure handling and ensure system resilience under variable load, we recommend the following:
+
+- **Monitor the number of non-writable clients**: Track the number of clients currently under outbound backpressure using the `nonWritableClients` counter.
+This metric is available both in logs and through the monitoring system (e.g., Prometheus).
+For production environments, it's recommended to set up alerts when the value increases unexpectedly or stays elevated over time.
+
+- **Start with Default Backpressure Settings**: For most deployments, the default Netty buffer thresholds — 32 KB low watermark and 64 KB high watermark — provide robust performance.
+These settings have been tested to support **around 10,000 messages per second per subscriber** under typical conditions.
+
+- **Ensure Sufficient Redis and Kafka Capacity**: Persistent client buffering relies on Redis and Kafka. Monitor their memory, disk, and throughput to avoid secondary bottlenecks.
+
+- **Use Horizontal Scaling**: For sustained high throughput, scale broker nodes horizontally. Backpressure is not a substitute for adequate compute and I/O resources.
+
+- **Test Under Load**: Perform load testing with simulated slow and fast consumers to validate how your configuration handles backpressure in real scenarios.
+
+By following these practices, you can take full advantage of TBMQ’s backpressure handling mechanisms, ensuring reliable operation, efficient resource usage, and high performance even in demanding MQTT workloads.
+
+## Conclusion
+
+The backpressure handling mechanisms in TBMQ significantly enhance the broker’s resilience and efficiency when dealing with varying client consumption rates.
+By dynamically monitoring channel writability, intelligently controlling message delivery, and integrating with both transport-level and application-level flow control,
+TBMQ ensures reliable performance and optimal resource utilization—even under sustained or bursty high-load conditions.
+This makes TBMQ well-suited for demanding MQTT workloads at scale.
diff --git a/_includes/docs/mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq.md b/_includes/docs/mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq.md
index febae2c237..10fd3d05e8 100644
--- a/_includes/docs/mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq.md
+++ b/_includes/docs/mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq.md
@@ -7,7 +7,7 @@ In this guide, we integrate the TBMQ with the ThingsBoard using MQTT integration
We utilize TBMQ client credentials with the type **APPLICATION** to connect ThingsBoard integration as an APPLICATION client.
APPLICATION clients specialize in subscribing to topics with high message rates.
The messages will be persisted when the client is offline and will be delivered once it goes online, ensuring the availability of crucial data.
-Read more about the APPLICATION client [here](https://thingsboard.io/docs/mqtt-broker/user-guide/mqtt-client-type/).
+Read more about the APPLICATION client [here](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/).
ThingsBoard MQTT Integration acts as an MQTT client. It subscribes to topics and converts the received data into telemetry and attribute updates.
In case of a downlink message, MQTT integration converts it to the device-suitable format and pushes it to TBMQ.
@@ -18,8 +18,8 @@ ThingsBoard instance that is running in the cloud can’t connect to the TBMQ de
In this tutorial, we will use:
- - The instance of [ThingsBoard Professional Edition](https://thingsboard.io/docs/user-guide/install/pe/installation-options/) installed **locally**;
- - [TBMQ](https://thingsboard.io/docs/mqtt-broker/install/installation-options/) installed **locally** and accessible by ThingsBoard PE instance;
+ - An instance of [ThingsBoard Professional Edition](/docs/user-guide/install/pe/installation-options/) installed **locally**;
+ - [TBMQ](/docs/{{docsPrefix}}mqtt-broker/install/installation-options/) installed **locally** and accessible by the ThingsBoard PE instance;
- mosquitto_pub MQTT client to send messages.
## TBMQ setup
@@ -33,7 +33,7 @@ To do this, login to your TBMQ user interface and follow the next steps.
{% capture difference %}
**Please note**:
-The Basic authenticaion must be [enabled](/docs/mqtt-broker/security/authentication/basic/).
+The Basic authentication must be [enabled](/docs/{{docsPrefix}}mqtt-broker/security/authentication/basic/).
{% endcapture %}
{% include templates/info-banner.md content=difference %}
diff --git a/_includes/docs/mqtt-broker/user-guide/keep-alive.md b/_includes/docs/mqtt-broker/user-guide/keep-alive.md
index fd9b4b3861..0e3ad3708f 100644
--- a/_includes/docs/mqtt-broker/user-guide/keep-alive.md
+++ b/_includes/docs/mqtt-broker/user-guide/keep-alive.md
@@ -27,7 +27,7 @@ The Keep Alive interval is set when a client connects to the broker, in the `CON
If the Keep Alive interval **is set to 60 seconds**, the client must send any MQTT control packet (e.g., a `PINGREQ`, `PUBLISH`, `SUBSCRIBE`, etc.) within **90 seconds** (60 * 1.5) to inform the broker that it’s still connected.
-If the client fails to do so, the broker assumes the client is disconnected, and it will **terminate the connection** and trigger a **[Last Will message](/docs/mqtt-broker/user-guide/last-will)** (if set) to inform about unexpected disconnection.
+If the client fails to do so, the broker assumes the client is disconnected, and it will **terminate the connection** and trigger a **[Last Will message](/docs/{{docsPrefix}}mqtt-broker/user-guide/last-will)** (if set) to inform about unexpected disconnection.
Normally, when the client sends a `PINGREQ` to maintain the connection, the broker responds with a `PINGRESP`, confirming that the connection is still alive and the client is functioning properly.
This exchange ensures the connection remains healthy even when no data is being transmitted.
diff --git a/_includes/docs/mqtt-broker/user-guide/last-will.md b/_includes/docs/mqtt-broker/user-guide/last-will.md
index 6339053050..6ead81fc9c 100644
--- a/_includes/docs/mqtt-broker/user-guide/last-will.md
+++ b/_includes/docs/mqtt-broker/user-guide/last-will.md
@@ -11,7 +11,7 @@ Similarly, in **agriculture**, devices monitor field conditions to optimize irri
## How the Last Will works
-1. The Last Will message and its [parameters](/docs/mqtt-broker/user-guide/last-will/#parameters-of-the-last-will) are set when a client connects to the broker, in the `CONNECT` packet. The broker stores a Last Will message data in the session state.
+1. The Last Will message and its [parameters](/docs/{{docsPrefix}}mqtt-broker/user-guide/last-will/#parameters-of-the-last-will) are set when a client connects to the broker, in the `CONNECT` packet. The broker stores the Last Will message data in the session state.
2. If an unexpected (ungraceful) disconnection occurs, the Last Will message is sent to the clients that are subscribed to the Will Topic.
## Ungraceful disconnection - publish
@@ -39,7 +39,7 @@ The Last Will message will be removed from the session state when:
The feature “Last Will and Testament” was introduced in [MQTT 3.1](https://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect) with the following parameters:
* **Will Topic**. The MQTT topic where the Last Will message will be published.
* **Will Message**. The content of the Last Will. This can be an empty message.
-* **Will QoS**. The Quality of Service level for the Last Will message ([which QoS to use?](/docs/mqtt-broker/user-guide/qos)).
+* **Will QoS**. The Quality of Service level for the Last Will message ([which QoS to use?](/docs/{{docsPrefix}}mqtt-broker/user-guide/qos)).
* **Will Retain**. Use the Retain flag if the Will Message should be available to any new clients subscribing to the topic after the message is published. This is useful for status messages that need to stay available until updated.
In [MQTT 5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901040), the feature gained additional properties:
@@ -54,7 +54,7 @@ In [MQTT 5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_T
* **User Properties**. Custom metadata in the form of key-value pairs that the client can include in the Will Message.
{% capture difference %}
-Unsure how to set up a session with the Last Will? Check out the WebSocket Client [documentation](/docs/mqtt-broker/user-guide/ui/websocket-client/#last-will) for detailed step-by-step instructions.
+Unsure how to set up a session with the Last Will? Check out the WebSocket Client [documentation](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/websocket-client/#last-will) for detailed step-by-step instructions.
{% endcapture %}
{% include templates/info-banner.md content=difference %}
diff --git a/_includes/docs/mqtt-broker/user-guide/mqtt-broker.md b/_includes/docs/mqtt-broker/user-guide/mqtt-broker.md
new file mode 100644
index 0000000000..9f5c2cc595
--- /dev/null
+++ b/_includes/docs/mqtt-broker/user-guide/mqtt-broker.md
@@ -0,0 +1,214 @@
+* TOC
+{:toc}
+
+
+[MQTT](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-protocol/) is a lightweight, publish/subscribe messaging protocol widely used in the Internet of Things (IoT) and other distributed systems.
+It is specifically designed to work efficiently in environments with limited resources, low bandwidth, high latency, or unreliable network connections.
+
+At the heart of every MQTT architecture lies the **MQTT broker** — the central server that manages all communication between clients.
+The broker is responsible for receiving messages from publishers, determining which subscribers are interested in those messages, and reliably delivering them according to the protocol’s rules.
+In this way, the broker acts as the backbone of the MQTT system, ensuring seamless, secure, and efficient message exchange.
+
+
+
+### MQTT Clients
+
+An **MQTT client** is any device, application, or service that connects to the broker to send or receive messages.
+Clients can range from small, resource-constrained IoT sensors to complex enterprise applications.
+Common examples include:
+
+* **IoT devices** that publish sensor data, such as temperature, humidity, or GPS location.
+* **Mobile or web applications** that subscribe to updates from connected devices.
+* **Back-end services** that collect, process, or visualize incoming data streams.
+
+Each client can take on one or more roles:
+
+* **Publisher**: Sends (publishes) messages to a defined topic.
+* **Subscriber**: Receives (subscribes to) messages from one or more topics.
+* **Hybrid**: Acts as both a publisher and a subscriber, depending on the use case.
+
+By separating publishers and subscribers through the broker, MQTT clients remain loosely coupled, making systems more flexible, scalable, and easier to maintain.
+
+### MQTT Topics
+
+An **MQTT topic** is a structured string used by the broker to **route messages** between publishers and subscribers.
+Topics define the subject or channel of communication and are the backbone of the publish/subscribe model.
+
+Key characteristics:
+
+* **Hierarchical structure**: Topics are organized in levels separated by slashes (`/`).
+ Example: `home/livingroom/temperature`
+* **Message flow**:
+
+ * **Publishers** send messages to a specific topic.
+ * **Subscribers** register their interest in one or more topics and receive all messages published to them.
+
+#### Wildcards in Topics
+
+MQTT supports special characters called **MQTT wildcards** to simplify subscription patterns:
+
+* `+` (single-level wildcard) — matches exactly one level in the topic hierarchy.
+ Example: `home/+/temperature` → matches `home/livingroom/temperature` and `home/kitchen/temperature`.
+* `#` (multi-level wildcard) — matches all remaining levels in the topic hierarchy.
+ Example: `home/#` → matches `home/livingroom/temperature`, `home/kitchen/humidity`, and anything else under `home/`.
+
+This flexible topic system allows clients to filter messages with precision, making MQTT highly efficient for large-scale, event-driven communication.
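The matching rules above can be captured in a few lines. This simplified Python helper (not broker code; it ignores edge cases such as `$`-prefixed system topics and shared-subscription prefixes) shows how a filter with wildcards is compared against a concrete topic:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic against a subscription filter with + and # wildcards."""
    flevels = filter_.split("/")
    tlevels = topic.split("/")
    for i, f in enumerate(flevels):
        if f == "#":
            return True               # matches this and all remaining levels
        if i >= len(tlevels):
            return False              # topic is shorter than the filter
        if f != "+" and f != tlevels[i]:
            return False              # literal level must match exactly
    return len(flevels) == len(tlevels)
```

For example, `home/+/temperature` matches `home/kitchen/temperature` but not `home/kitchen/humidity`, and `home/#` matches everything under `home/`.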
+
+### Role of the MQTT Broker
+
+The **MQTT broker** is the central component that enables communication in an MQTT system.
+Clients never communicate directly with each other — all messages flow through the broker.
+By acting as the trusted intermediary, the broker guarantees that messages are delivered securely, reliably, and according to the rules of the protocol.
+
+Key responsibilities of the broker include:
+
+* **Managing client connections**: Establishing, monitoring, and maintaining sessions with MQTT clients.
+* **Authentication and authorization**: Validating client identities and enforcing access control policies to ensure only authorized clients can publish or subscribe.
+* **Message routing**: Receiving published messages and efficiently distributing them to all clients subscribed to the relevant topics.
+* **Session and state management**: Tracking client subscriptions and, if configured, storing undelivered messages for clients that are offline.
+* **Quality of Service (QoS)**: Guaranteeing message delivery according to the selected **MQTT QoS level** — *At most once (QoS 0)*, *At least once (QoS 1)*, or *Exactly once (QoS 2)*.
+
+In short, the broker serves as the **backbone of the MQTT network**, ensuring that communication between clients is scalable, secure, and dependable.
+
+### How It Works
+
+The operation of an MQTT system can be broken down into distinct stages — from the moment a client connects to the broker, through authentication and authorization, to message publishing and distribution.
+
+#### Client Connection
+
+* A client (device, app, or service) initiates a connection to the broker using the **CONNECT** packet.
+* This packet typically includes:
+
+ * Client identifier (`clientId`)
+ * Protocol version (e.g., MQTT 3.1.1 or MQTT 5.0)
+ * Optional username and password
+ * Clean session flag or session expiry interval (for session persistence)
+ * Last Will and Testament (LWT) message, if defined
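To make the `CONNECT` packet concrete, here is a minimal MQTT 3.1.1 encoder for the simplest case: clean session, no credentials, no Will. It is a sketch for illustration; real clients should use an MQTT library rather than hand-rolled packets:

```python
def encode_connect(client_id: str, keep_alive: int = 60,
                   clean_session: bool = True) -> bytes:
    """Encode a minimal MQTT 3.1.1 CONNECT packet (no auth, no Will)."""
    def utf8(s: str) -> bytes:
        b = s.encode("utf-8")
        return len(b).to_bytes(2, "big") + b  # length-prefixed string

    flags = 0x02 if clean_session else 0x00
    variable_header = (utf8("MQTT")                 # protocol name
                       + bytes([0x04, flags])       # protocol level 4, flags
                       + keep_alive.to_bytes(2, "big"))
    payload = utf8(client_id)
    body = variable_header + payload

    # Remaining Length uses a variable-length encoding, 7 bits per byte
    remaining, length_bytes = len(body), bytearray()
    while True:
        byte, remaining = remaining % 128, remaining // 128
        length_bytes.append(byte | (0x80 if remaining else 0x00))
        if not remaining:
            break
    return bytes([0x10]) + bytes(length_bytes) + body  # 0x10 = CONNECT
```

The broker parses exactly these fields (protocol name and level, connect flags, keep alive, client identifier) before deciding whether to answer with `CONNACK`.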
+
+#### Authentication & Authorization
+
+* The broker validates the connection request by checking credentials (username/password, certificates for SSL/TLS, or token-based mechanisms).
+* Once authenticated, the broker enforces **authorization policies**, determining which topics the client is allowed to **publish** to and **subscribe** from.
+* If the connection is accepted, the broker replies with a **CONNACK** packet confirming session parameters. If not, the connection is refused.
+
+#### Subscribing to Topics
+
+* To receive messages, the client sends a **SUBSCRIBE** packet specifying one or more topics (with optional wildcards) and the desired **QoS level**.
+* The broker registers the client’s subscription and replies with a **SUBACK** packet that confirms which QoS levels were granted.
+
+#### Publishing Messages
+
+* When a client wants to send data, it sends a **PUBLISH** packet to the broker.
+* The packet contains:
+
+ * The topic name
+ * The message payload
+ * The QoS level for delivery reliability
+ * Retain flag (if the message should be stored as the last known good value for that topic)
+* Depending on QoS, the broker and client may exchange acknowledgment packets (**PUBACK**, **PUBREC**, **PUBREL**, **PUBCOMP**) to guarantee delivery.
+
+#### Message Distribution
+
+* The broker receives the published message and looks up all active subscriptions that match the topic.
+* For each matching subscriber, the broker forwards the message:
+
+ * Respecting the **QoS level** agreed upon with each subscriber.
+ * Delivering retained messages where applicable.
+ * Storing messages for offline subscribers if persistent sessions are enabled.
+
+#### Receiving Messages
+
+* Subscribers receive the message in a **PUBLISH** packet from the broker.
+* Based on QoS, the subscriber may need to send back acknowledgment packets to confirm receipt.
+* Once processed, subscribers can act on the message — logging it, storing it, visualizing it, or triggering actions.
+
+#### Disconnecting
+
+* When a client no longer needs the connection, it sends a **DISCONNECT** packet.
+* If the client disconnects unexpectedly, the broker triggers the **Last Will and Testament (LWT)** message (if configured) and may keep the session alive based on the persistence settings.
+
+This end-to-end lifecycle — from connection and authentication to message delivery and disconnection —
+makes MQTT a lightweight but **robust messaging protocol** for everything from simple IoT gadgets to massive distributed systems.
+
+### Key Features of an MQTT Broker
+
+An MQTT broker combines **protocol-level features** of MQTT with **system-level capabilities** to ensure efficient, secure, and reliable messaging.
+
+#### MQTT Protocol Features Supported by the Broker
+
+* **Quality of Service (QoS)**: Guarantees message delivery at different levels — *at most once (0)*, *at least once (1)*, or *exactly once (2)*.
+* **Keep Alive mechanism**: Ensures the connection between client and broker stays active by requiring periodic communication, helping detect broken connections quickly.
+* **Last Will and Testament (LWT)**: Sends a predefined message if a client disconnects unexpectedly, helping detect failures automatically.
+* **Retained messages**: Stores the last message on a topic so new subscribers receive the most recent state instantly.
+* **Topic-based routing**: Efficiently matches published messages to subscribers using hierarchical topics and wildcards.
+* **Session persistence**: Maintains subscriptions and undelivered messages for clients that reconnect, allowing reliable communication even after temporary disconnections.
+* **Shared subscriptions** (MQTT 5.0): Distributes messages among a group of subscribers to balance load.
+
+These are some of the most important MQTT features supported by brokers.
+Depending on the version of the protocol (MQTT 3.1.1 or 5.0) and the specific broker implementation, many more features may be available to enhance reliability, efficiency, and security.
+
+> TBMQ supports the full range of MQTT 3.x and MQTT 5.0 protocol features.
+
+#### Broker Capabilities
+
+* **Scalability**: Handles thousands or millions of simultaneous client connections and messages with consistent reliability.
+* **Performance**: Optimized for low latency and high throughput, even in large distributed systems.
+* **Durability**: Ensures that critical messages and session data are stored persistently (e.g., in databases or disk-backed queues), so they survive restarts or crashes.
+* **Security**: Provides TLS/SSL encryption, authentication, and fine-grained access control to ensure safe communication.
+* **High availability & clustering**: Supports clustering, load balancing, and fault tolerance for production-grade deployments.
+* **Integration**: Connects seamlessly with external systems such as databases, Kafka, or cloud services for data processing and analytics.
+
+> TBMQ provides all of these capabilities out of the box: horizontal scalability to millions of clients, high throughput with low latency, persistence and durability powered by Redis/Kafka,
+> built-in TLS/SSL security, clustering with fault tolerance, and integration with external systems like Kafka, other MQTT brokers, and HTTP-based services.
+
+### Types of MQTT Brokers
+
+MQTT brokers come in different forms depending on how they are deployed, licensed, and used. The main categories are:
+
+1. **Open-source brokers**
+
+ * Free to use and highly customizable, with active developer communities.
+ * Suitable for prototyping, self-hosted deployments, and integration into larger systems.
+
+2. **Commercial brokers**
+
+ * Provide enterprise-grade features such as clustering, monitoring dashboards, advanced security, and SLA-backed support.
+ * Ideal for organizations that need guaranteed reliability, high availability, and professional support.
+
+3. **Cloud-based brokers (MQTT-as-a-Service)**
+
+ * Fully managed services where the provider handles deployment, scaling, maintenance, and uptime.
+ * Great for rapid adoption and use cases where infrastructure management should be outsourced.
+
+4. **Embedded brokers**
+
+ * Extremely lightweight brokers that run directly on edge devices, gateways, or inside applications.
+ * Useful for local processing, offline-first scenarios, or edge computing environments where low latency is critical.
+
+### How to Choose the Right MQTT Broker
+
+Selecting the right MQTT broker depends on your project’s scale, requirements, and long-term goals. The following criteria can help guide the decision:
+
+* **Scalability**: Ensure the broker can handle your projected number of client connections and message throughput, with room to grow as your system expands.
+* **High availability & clustering**: Look for features like clustering, replication, and load balancing to guarantee uptime and fault tolerance in production environments.
+* **Performance**: Evaluate latency, throughput, and resource efficiency under real-world load conditions to ensure the broker meets your responsiveness needs.
+* **Security**: Check for support of TLS/SSL encryption, authentication, authorization, and fine-grained access controls to protect data and devices.
+* **Persistence**: Consider whether the broker provides durable message storage — including retained messages, offline queues, or integration with external databases.
+* **Integration capabilities**: Verify compatibility with your ecosystem, such as Kafka, SQL/NoSQL databases, monitoring tools, or cloud platforms.
+* **Community & support**: An active open-source community or available enterprise support can make a big difference in troubleshooting and long-term maintenance.
+* **Cost**: Balance your budget against needs — choosing between open-source (free, DIY), commercial (license + support), or cloud (subscription-based, managed) options.
+
+By weighing these factors, you can select a broker that not only meets your current needs but also scales with your system as it evolves.
+
+> TBMQ is built to meet all these criteria —
+> it offers enterprise-level scalability, clustering, persistence, strong security, and deep integration options while remaining easy to operate and cost-efficient.
+> This makes it a strong choice for both open-source adopters and enterprises looking for a production-ready MQTT platform.
+
+### Final Words
+
+The MQTT broker is the backbone of any MQTT-based system, enabling efficient and reliable communication between distributed devices and services.
+It plays a critical role in diverse domains such as IoT ecosystems, smart homes, industrial automation, connected vehicles, and large-scale data infrastructures.
+
+By offloading responsibilities like message routing, delivery guarantees, and connection management to the broker, client devices remain simple, lightweight, and resource-efficient.
+This not only reduces device complexity but also improves scalability, security, and overall system reliability — making the MQTT broker a cornerstone of modern connected applications.
diff --git a/_includes/docs/mqtt-broker/user-guide/mqtt-client-type.md b/_includes/docs/mqtt-broker/user-guide/mqtt-client-type.md
index d7f2ccd498..eba07a567c 100644
--- a/_includes/docs/mqtt-broker/user-guide/mqtt-client-type.md
+++ b/_includes/docs/mqtt-broker/user-guide/mqtt-client-type.md
@@ -20,13 +20,13 @@ requirements and performance expectations of each use case.
This segregation of clients simplifies the implementation of different IoT scenarios, thereby optimizing overall system performance.
The determination of client type occurs during the processing of the _CONNECT_ packet, with client authentication playing
-a pivotal role in identifying the client type. Further details regarding client authentication can be found in the [security](/docs/mqtt-broker/security/overview/) guide,
+a pivotal role in identifying the client type. Further details regarding client authentication can be found in the [security](/docs/{{docsPrefix}}mqtt-broker/security/overview/) guide,
which provides comprehensive information on securing client connections.
If both Basic and TLS authentications are disabled, the connecting client will always be assigned the DEVICE type.
However, when Basic or TLS authentication is enabled, the client type is determined by the MQTT credentials used during the authentication process.
Each MQTT client credential incorporates a `clientType` field that explicitly defines the client type.
-For step-by-step instructions on creating MQTT credentials, please refer to the designated [guide](/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials/).
+For step-by-step instructions on creating MQTT credentials, please refer to the designated [guide](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/mqtt-client-credentials/).
## Client persistence
@@ -45,9 +45,9 @@ These properties, defined in the respective MQTT specifications, provide insight
TBMQ employs Kafka consumer(s) that actively polls messages from the `tbmq.msg.all` topic, subsequently forwarding these messages to their intended recipients.
However, the processing logic differs between persistent and non-persistent clients.
-* For [non-persistent clients](/docs/mqtt-broker/architecture/#non-persistent-client), messages are directly published to the subscribed clients.
+* For [non-persistent clients](/docs/{{docsPrefix}}mqtt-broker/architecture/#non-persistent-client), messages are directly published to the subscribed clients.
-* [Persistent clients](/docs/mqtt-broker/architecture/#persistent-client) maintain a session state that persists beyond individual connections, allowing them to receive messages even when they were offline.
+* [Persistent clients](/docs/{{docsPrefix}}mqtt-broker/architecture/#persistent-client) maintain a session state that persists beyond individual connections, allowing them to receive messages published while they were offline.
This persistence enables TBMQ to ensure message delivery to the client once it reconnects. Consequently, a distinct approach is employed for message processing intended for such clients.
However, **please note**, that if the subscribing client is both persistent and subscribed with a Quality of Service (QoS) level of _0_ (_AT_MOST_ONCE_),
all the messages associated with that subscription will be delivered to the client without any supplementary steps.
diff --git a/_includes/docs/mqtt-broker/user-guide/mqtt-over-ws.md b/_includes/docs/mqtt-broker/user-guide/mqtt-over-ws.md
index f5c4b28b3d..04c87c2937 100644
--- a/_includes/docs/mqtt-broker/user-guide/mqtt-over-ws.md
+++ b/_includes/docs/mqtt-broker/user-guide/mqtt-over-ws.md
@@ -41,7 +41,7 @@ This feature, combined with MQTT's publish/subscribe model, enables dynamic and
## MQTT over WebSocket in TBMQ
TBMQ utilizes two listeners, WS (WebSocket) and WSS (WebSocket Secure), to facilitate communication over WebSocket.
-You can refer to the overview of these listeners provided [here](/docs/mqtt-broker/security/#ws-listener).
+You can refer to the overview of these listeners provided [here](/docs/{{docsPrefix}}mqtt-broker/security/#ws-listener).
{% capture difference %}
**Note:** For existing deployments prior v1.3.0, it's essential to update configuration files to enable WebSocket communication.
@@ -50,7 +50,7 @@ To address this, pull the latest configuration files or update existing ones to
{% endcapture %}
{% include templates/info-banner.md content=difference %}
-For detailed WebSocket-related parameters, please refer to the provided [link](/docs/mqtt-broker/install/config/#mqtt-listeners-parameters)
+For detailed WebSocket-related parameters, please refer to the provided [link](/docs/{{docsPrefix}}mqtt-broker/install/config/#mqtt-listeners-parameters)
(locate `LISTENER_WS_ENABLED` and related environment variables).
## Getting started
@@ -60,7 +60,7 @@ In this guide, we present an illustrative example of how to establish MQTT over
### Installing TBMQ
Before we delve in, make sure the TBMQ is successfully installed.
-To obtain detailed instructions on how to install TBMQ on different platforms, we recommend exploring the [Installation options](/docs/mqtt-broker/install/installation-options/) documentation.
+To obtain detailed instructions on how to install TBMQ on different platforms, we recommend exploring the [Installation options](/docs/{{docsPrefix}}mqtt-broker/install/installation-options/) documentation.
For this guide, we will follow the below instructions for quick TBMQ installation.
@@ -234,8 +234,13 @@ Packet send... { cmd: 'disconnect' }
Closing client...
```
-Moreover, you can utilize the [WebSocket client](/docs/mqtt-broker/user-guide/ui/websocket-client/) to subscribe to the topic and receive messages, allowing you to verify the result.
+Moreover, you can utilize the [WebSocket client](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/websocket-client/) to subscribe to the topic and receive messages, allowing you to verify the result.
+
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
### Connection details
@@ -261,7 +266,7 @@ This means clients might receive security warnings when connecting to the server
Self-signed certificates are cost-effective for development or private networks but are not recommended for public or production environments due to trust issues with end users.
{% capture difference %}
If you're utilizing a self-signed certificate for the broker, it's crucial to manually include it within the browser's trust store to ensure seamless connectivity.
-This step is essential for [WebSocket client](/docs/mqtt-broker/user-guide/ui/websocket-client/) functionality within the browser environment.
+This step is essential for [WebSocket client](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/websocket-client/) functionality within the browser environment.
{% endcapture %}
{% include templates/info-banner.md content=difference %}
@@ -286,10 +291,10 @@ like API keys or OAuth tokens, is commonly used to ensure secure communication.
In non-browser environments such as Node.js, and programming languages like Python and Java, when utilizing the appropriate MQTT library,
two-way authentication functions seamlessly and remains an exceptionally effective security measure.
-Let's review the example. Make sure WSS listener is [enabled and configured](/docs/mqtt-broker/security/#wss-listener) properly.
+Let's review the example. Make sure the WSS listener is [enabled and configured](/docs/{{docsPrefix}}mqtt-broker/security/#wss-listener) properly.
For establishing a two-way authenticated connection, ensure that the MQTT client credentials of type 'X.509 Certificate Chain' are created,
-with the client certificate Common Name (CN) specified. Refer to [this guide](/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials/#ssl-credentials) for detailed instructions.
+with the client certificate Common Name (CN) specified. Refer to [this guide](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/mqtt-client-credentials/#ssl-credentials) for detailed instructions.
Replace `example.com` with your actual DNS and replace `/path/to/your/client/key/file.pem`, `/path/to/your/client/cert/file.pem`,
and `/path/to/your/ca/cert/file.pem` with the respective paths to your certificate files.
diff --git a/_includes/docs/mqtt-broker/user-guide/shared-subscriptions.md b/_includes/docs/mqtt-broker/user-guide/shared-subscriptions.md
index de81d26efe..fd030cf92b 100644
--- a/_includes/docs/mqtt-broker/user-guide/shared-subscriptions.md
+++ b/_includes/docs/mqtt-broker/user-guide/shared-subscriptions.md
@@ -61,7 +61,7 @@ If strict message ordering is critical for your application, consider using a **
## Subscribing to Shared Subscriptions
-In this tutorial, we will be connecting [DEVICE](/docs/mqtt-broker/user-guide/mqtt-client-type/#device-client) non-persistent clients and using the [Mosquitto](https://mosquitto.org/download/) client library.
+In this tutorial, we will be connecting [DEVICE](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/#device-client) non-persistent clients and using the [Mosquitto](https://mosquitto.org/download/) client library.
For Ubuntu users, it can be installed using the following command:
```
sudo apt-get install mosquitto-clients
@@ -169,8 +169,8 @@ These considerations ensure that message distribution and persistence are handle
To utilize the shared subscription feature for APPLICATION clients in TBMQ, you need to follow an additional step.
First, you'll need to create an Application Shared Subscription entity in the PostgreSQL database.
-To do so follow the instructions from the following [guide](/docs/mqtt-broker/user-guide/ui/shared-subscriptions/).
-This can also be done through the REST API, and detailed instructions can be found in the next [documentation](/docs/mqtt-broker/application-shared-subscription/).
+To do so, follow the instructions from the following [guide](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/shared-subscriptions/).
+This can also be done through the REST API; detailed instructions can be found in the corresponding [documentation](/docs/{{docsPrefix}}mqtt-broker/application-shared-subscription/).
The entity creation process includes the automatic creation of a corresponding Kafka topic.
{% capture difference %}
diff --git a/_includes/docs/mqtt-broker/user-guide/topics.md b/_includes/docs/mqtt-broker/user-guide/topics.md
index f5726b1871..7d4534e30f 100644
--- a/_includes/docs/mqtt-broker/user-guide/topics.md
+++ b/_includes/docs/mqtt-broker/user-guide/topics.md
@@ -202,7 +202,7 @@ For example, if you set `MQTT_TOPIC_MAX_SEGMENTS_COUNT` to `2`, the broker will
## Shared subscription topics
-It is also worth mentioning feature [shared subscriptions](/docs/mqtt-broker/user-guide/shared-subscriptions/) that are used to distribute messages across multiple subscribers, allowing for load balancing and efficient use of resources.
+It is also worth mentioning the [shared subscriptions](/docs/{{docsPrefix}}mqtt-broker/user-guide/shared-subscriptions/) feature, which is used to distribute messages across multiple subscribers, allowing for load balancing and efficient use of resources.
Shared subscription topic has a **specific format** to differentiate it from regular topic.
@@ -221,4 +221,4 @@ Example of a shared subscription topic:
```
$share/group1/country/+/city/+/home/#
-```
\ No newline at end of file
+```
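The `$share/<group>/<filter>` format shown in the example above can be split into its group name and real topic filter with a small helper; the sketch below is illustrative only (the function name is invented and is not part of TBMQ's API):

```python
def parse_shared_subscription(topic_filter: str):
    """Split a shared subscription filter of the form
    '$share/<group>/<actual filter>' into (group, filter).
    Returns (None, topic_filter) for an ordinary (non-shared) filter."""
    if topic_filter.startswith("$share/"):
        # maxsplit=2 keeps any '/' inside the real filter intact
        _, group, real_filter = topic_filter.split("/", 2)
        return group, real_filter
    return None, topic_filter
```

For instance, `$share/group1/country/+/city/+/home/#` yields the group `group1` and the regular filter `country/+/city/+/home/#`, which the broker then matches like any other subscription.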
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/monitoring.md b/_includes/docs/mqtt-broker/user-guide/ui/monitoring.md
index 6dd722724e..0e75bddb94 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/monitoring.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/monitoring.md
@@ -3,7 +3,11 @@
TBMQ offers user-friendly tools that enable users to monitor broker activity and conveniently access features through the **Home** and **Monitoring** pages.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
## Charts
@@ -18,17 +22,21 @@ At the top of the **Home** page, you will find a set of six charts that display
Please note that on the Monitoring page, users have the ability to delve deeper into the chart data.
They can zoom in on specific sections, set custom date ranges to display data, or open the charts in full-screen mode.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
## Sessions
The Sessions card provides an overview of both connected and disconnected sessions.
-Users can access comprehensive information about these sessions, including their status, duration, and additional details by going to the [Sessions](/docs/mqtt-broker/user-guide/ui/sessions/) page.
+Users can access comprehensive information about these sessions, including their status, duration, and additional details by going to the [Sessions](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/sessions/) page.
## Credentials
The system displays the number of Client Credentials categorized into two types: **Device** and **Application**.
-For more information regarding the different types of Credentials, please refer to the [documentation](/docs/mqtt-broker/user-guide/mqtt-client-type/).
+For more information regarding the different types of Credentials, please refer to the [documentation](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/).

@@ -117,4 +125,8 @@ Services are automatically added to the registry on their first launch, and the
The key is not managed by TTL and entries are stored indefinitely. TBMQ does not automatically remove services from the registry, even if they stop running.
You can manually delete a service from the UI (or using REST API) using the "Delete" button that is available only when the service status is `Outdated`.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials.md b/_includes/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials.md
index 76ac10ac38..4c4a7c28af 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials.md
@@ -1,13 +1,13 @@
* TOC
{:toc}
-TBMQ offers various options for managing MQTT client credentials via both its Web UI and [REST API](/docs/mqtt-broker/mqtt-client-credentials-management/).
+TBMQ offers various options for managing MQTT client credentials via both its Web UI and [REST API](/docs/{{docsPrefix}}mqtt-broker/mqtt-client-credentials-management/).
TBMQ supports the following types of client credentials to authenticate client connections:
-- [Basic](/docs/mqtt-broker/security/#basic-authentication) - basic security measures based on combinations of client ID, username and password.
+- [Basic](/docs/{{docsPrefix}}mqtt-broker/security/#basic-authentication) - basic security measures based on combinations of client ID, username and password.
- **Advantages:** Simple and easy to implement. Widely supported by applications and services. Low network overhead.
- **Disadvantages:** Limited security.
-- [X.509 Certificate Chain](/docs/mqtt-broker/security/#tls-authentication) - advanced security measures based on X509 certificate chain that helps in verifying the identity of clients.
+- [X.509 Certificate Chain](/docs/{{docsPrefix}}mqtt-broker/security/#tls-authentication) - advanced security measures based on an X.509 certificate chain that helps verify the identity of clients.
- **Advantages:** Enhanced security compared to the basic client credentials type. With SSL client credentials, both the client and TBMQ can authenticate each other.
The SSL client credentials type provides more flexibility in terms of access control, as it allows for more granular access control policies based on the certificate subject name and other attributes.
- **Disadvantages:** Complexity and increased cost. Setting up and managing SSL client credentials can be more complex and requires more expertise. SSL encryption and decryption require more computing resources.
@@ -15,9 +15,9 @@ TBMQ supports the following types of client credentials to authenticate client c
- **Advantages:** Higher security level compared to basic authentication. It uses a challenge-response process to exchange hashed credentials, ensuring the password is never sent in plain text.
- **Disadvantages:** Requires additional computational resources to generate and validate the salted password hashes.
-Before using any of the client credential types mentioned above, please ensure that the appropriate _Authentication_ is [enabled](/docs/mqtt-broker/security/authentication/basic/).
+Before using any of the client credential types mentioned above, please ensure that the appropriate _Authentication_ is [enabled](/docs/{{docsPrefix}}mqtt-broker/security/authentication/basic/).
-For more information on security issues, please consult this [guide](/docs/mqtt-broker/security/overview/).
+For more information on security issues, please consult this [guide](/docs/{{docsPrefix}}mqtt-broker/security/overview/).
## Adding MQTT client credentials
@@ -29,7 +29,7 @@ To add new client credentials, please follow these steps:
- **Device**. Use for clients that usually publish a lot of messages, but subscribe to a few topics with low message rate, i.e. IoT devices.
- **Applications**. Use for clients that subscribe to topics with high message rates and require message persistence when the client is offline, such as applications like **ThingsBoard, AWS IoT Core** etc.
- For more information on client types, please refer to the [docs](/docs/mqtt-broker/user-guide/mqtt-client-type/).
+ For more information on client types, please refer to the [docs](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/).
4. Select the desired _Credentials Type_ and configure the authentication parameters and authorization rules.
@@ -54,9 +54,8 @@ MQTT Basic authentication is based on different combinations of the client ID, u
Broker administrators can modify the password for MQTT Basic client credentials. To do this, follow these instructions:
1. Go to _Authentication_ - _Client Credentials_ page.
2. Click on the corresponding row of the Credentials.
-3. Click the _Edit_ button.
-4. Click the _Change password_ button. Input your current password and set a new one.
-5. Confirm changes.
+3. Click the _Change password_ button.
+4. Input your current password, set a new one and confirm changes.
{% include images-gallery.html imageCollection="change-password-basic-credentials" %}
@@ -90,7 +89,11 @@ Please consider the following examples:
* If Subscribe authorization rule patterns is set to default value `.*` - client will be able to subscribe to any topic.
* If Publish/Subscribe authorization rules has no rules (field is empty) - client will be forbidden to publish/subscribe to any topics.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
### SCRAM
@@ -105,7 +108,7 @@ Please consider the following examples:
## Delete client credentials
-Broker administrators can remove client credentials from TBMQ system using the Web UI or [REST API](/docs/mqtt-broker/mqtt-client-credentials-management/).
+Broker administrators can remove client credentials from the TBMQ system using the Web UI or [REST API](/docs/{{docsPrefix}}mqtt-broker/mqtt-client-credentials-management/).
There are a few ways of deleting client credentials:
1. **Delete single**.
@@ -121,7 +124,7 @@ There are a few ways of deleting client credentials:
“Check Connectivity” is a useful tool that automatically generates commands to **subscribe to a topic** and **publish a message**.
This feature utilizes the user's host, port, and client credentials to generate the necessary commands.
-It is available only for [Basic](/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials/#mqtt-basic-credentials) credentials.
+It is available only for [Basic](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/mqtt-client-credentials/#mqtt-basic-credentials) credentials.
To open a window with commands, please follow the next steps:
1. Click “Check connectivity” button to open the corresponding window.
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/sessions.md b/_includes/docs/mqtt-broker/user-guide/ui/sessions.md
index 72cba83a35..921dbc6280 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/sessions.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/sessions.md
@@ -31,8 +31,8 @@ The **Details** tab contains the next information:
* **Session end** - for persistent disconnected clients, when the session info and messages will be removed.
* **Client ID** - the identifier of the client.
* **Client IP** - the IP address of the client.
- * [**Client type**](/docs/mqtt-broker/user-guide/mqtt-client-type) (Device/Application).
- * [**Client Credentials**](/docs/mqtt-broker/user-guide/ui/mqtt-client-credentials/) that authenticated current session.
+ * [**Client type**](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type) (Device/Application).
+ * [**Client Credentials**](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/mqtt-client-credentials/) that authenticated current session.
* **MQTT version** - determines which version of MQTT protocol to be used - MQTT 3.1 (3), MQTT 3.1.1 (4), or MQTT 5.0 (5).
## Subscriptions
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/settings.md b/_includes/docs/mqtt-broker/user-guide/ui/settings.md
index c945e27f75..d927d40920 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/settings.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/settings.md
@@ -2,8 +2,10 @@
* TOC
{:toc}
+{% if docsPrefix != "pe/" %}
{% assign sinceVersion = "2.0" %}
{% include templates/mqtt-broker/since.md %}
+{% endif %}
TBMQ provides a dedicated Settings page that allows administrators to manage key system configurations directly from the user interface.
The Settings page is divided into three tabs, each focused on a specific category of system configuration.
@@ -19,7 +21,7 @@ Define broker user password policies, including password strength requirements,
### Password policy
-To log into TBMQ, the [user](/docs/mqtt-broker/user-guide/ui/users/) uses an email and password.
+To log into TBMQ, the [user](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/users/) uses an email and password.
You can enhance the security of your account by updating your security settings, including the **password policy**.
For example, you can increase a minimum password length, require a mix of uppercase and lowercase letters, and specify the minimum number of digits and special characters.
@@ -45,7 +47,7 @@ When the password policy is updated, new users will be required to adhere to the
Note that if you have enabled the **Force to reset password if not valid** option, all users (not only new ones) who do not meet the new requirements will be forced to update their passwords.
{% capture securityDocumentation %}
-To see other security-related settings, please refer to our [Security documentation](/docs/mqtt-broker/security/overview/).
+To see other security-related settings, please refer to our [Security documentation](/docs/{{docsPrefix}}mqtt-broker/security/overview/).
{% endcapture %}
{% include templates/info-banner.md content=securityDocumentation %}
@@ -71,7 +73,7 @@ including activity logging options and message retention limits for the in-brows
### WebSocket client
-In this section, you can configure additional settings related to the [WebSocket Client](/docs/mqtt-broker/user-guide/ui/websocket-client/) - a browser-accessible tool that provides management of MQTT clients, subscription to topics, receiving messages, and publishing messages.
+In this section, you can configure additional settings related to the [WebSocket Client](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/websocket-client/) - a browser-accessible tool that provides management of MQTT clients, subscription to topics, receiving messages, and publishing messages.
* **Log MQTT client activity** feature can be helpful in debugging connection issues and monitoring message flows by providing real-time client activity logs.
If set to true, you will see logs for the following [MQTT.js](https://github.com/mqttjs/MQTT.js) events in the browser developer console:
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/shared-subscriptions.md b/_includes/docs/mqtt-broker/user-guide/ui/shared-subscriptions.md
index 121907d5e8..d51cbaeb90 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/shared-subscriptions.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/shared-subscriptions.md
@@ -2,18 +2,18 @@
* TOC
{:toc}
-The Application Shared Subscription entity provides the capability to leverage the [Shared Subscriptions](/docs/mqtt-broker/user-guide/shared-subscriptions/)
+The Application Shared Subscription entity provides the capability to leverage the [Shared Subscriptions](/docs/{{docsPrefix}}mqtt-broker/user-guide/shared-subscriptions/)
feature for **APPLICATION** clients. This feature enables multiple clients to subscribe and receive messages from a shared subscription.
## Usage Notes
In TBMQ Application shared subscriptions are entities that used for management of shared subscriptions.
-* Add Application shared subscriptions if you plan to use shared subscriptions feature with [Application clients](/docs/mqtt-broker/user-guide/mqtt-client-type/#application-client).
+* Add Application shared subscriptions if you plan to use the shared subscriptions feature with [Application clients](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/#application-client).
* After creation of the entity **Topic filter** and **Partitions** fields **can not be changed**.
* Application Shared Subscription feature **works with MQTT v5 and earlier versions**.
-Broker administrators are able to manage shared subscriptions via Web UI or [REST API](/docs/mqtt-broker/application-shared-subscription/).
+Broker administrators are able to manage shared subscriptions via the Web UI or [REST API](/docs/{{docsPrefix}}mqtt-broker/application-shared-subscription/).
## Adding Shared Subscription
@@ -45,7 +45,7 @@ To edit entity please do the following steps:
## Deleting Shared Subscriptions
-Shared Subscriptions entities can be removed from TBMQ system using the Web UI or [REST API](/docs/mqtt-broker/application-shared-subscription/).
+Shared Subscriptions entities can be removed from TBMQ system using the Web UI or [REST API](/docs/{{docsPrefix}}mqtt-broker/application-shared-subscription/).
There are a few ways of deleting:
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/subscriptions.md b/_includes/docs/mqtt-broker/user-guide/ui/subscriptions.md
index b4d9ccf0c9..f71a01835e 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/subscriptions.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/subscriptions.md
@@ -2,8 +2,10 @@
* TOC
{:toc}
+{% if docsPrefix != "pe/" %}
{% assign sinceVersion = "2.0" %}
{% include templates/mqtt-broker/since.md %}
+{% endif %}
In MQTT, a subscription is a mechanism that allows clients to receive messages directed to specific topics.
When a client subscribes to a topic, it expresses its interest in receiving all messages published to that topic.
@@ -21,15 +23,14 @@ The table contains the following information about each subscription:
You can easily add, remove or edit subscriptions from the 'Session details' window.
1. Open the 'Subscriptions' page in the left-hand menu.
-2. Click on the icon button 'Session details'.
-3. Click on the tab 'Subscriptions' to [manage session subscriptions](/docs/mqtt-broker/user-guide/ui/sessions/#subscriptions).
-4. Add, edit or delete client subscriptions.
-5. Click 'Update' to save changes.
+2. Click on the table row to open [session subscription](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/sessions/#subscriptions) details.
+3. Add, edit or delete client subscriptions.
+4. Click 'Update' to save changes.
{% include images-gallery.html imageCollection="subscription-session" %}
{% capture monitoringSubscriptions %}
-You can track the number of current subscriptions and other broker activity on the [Monitoring](/docs/mqtt-broker/user-guide/ui/monitoring/) and Home pages.
+You can track the number of current subscriptions and other broker activity on the [Monitoring](/docs/{{docsPrefix}}mqtt-broker/user-guide/ui/monitoring/) and Home pages.
{% endcapture %}
{% include templates/info-banner.md title="Subscriptions chart" content=monitoringSubscriptions %}
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/unauthorized-clients.md b/_includes/docs/mqtt-broker/user-guide/ui/unauthorized-clients.md
index 69f309bd3f..12c8ba4579 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/unauthorized-clients.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/unauthorized-clients.md
@@ -2,15 +2,17 @@
* TOC
{:toc}
+{% if docsPrefix != "pe/" %}
{% assign sinceVersion = "2.0" %}
{% include templates/mqtt-broker/since.md %}
+{% endif %}
**Unauthorized clients** in MQTT are those clients that attempted but failed to establish a connection with the MQTT broker due to various reasons such as bad credentials, incorrect TLS configuration etc.
Regularly reviewing and analyzing unauthorized client attempts can help identify potential security threats and misconfigured clients.
{% capture unauthorizedClientEnableAuth %}
-The Unauthorized Clients feature functions only if the corresponding authentication method is [enabled](/docs/mqtt-broker/security/authentication/basic/).
+The Unauthorized Clients feature functions only if the corresponding authentication method is [enabled](/docs/{{docsPrefix}}mqtt-broker/security/authentication/basic/).
{% endcapture %}
{% include templates/info-banner.md title="Check configuration" content=unauthorizedClientEnableAuth %}
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/users.md b/_includes/docs/mqtt-broker/user-guide/ui/users.md
index 0c2eb41c80..8b3b658e1e 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/users.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/users.md
@@ -1,6 +1,12 @@
-TBMQ presently offers a single tier of user roles, namely 'Administrator'. Administrators are authorized to create, modify, and remove user accounts.
+{% if docsPrefix == null %}
+TBMQ provides a single user role: **Administrator**. Administrators have full permissions to create, update, and delete user accounts.
+{% endif %}
-User management can be performed through TBMQ's Web UI or [REST API](/docs/mqtt-broker/user-management/), which enables users to modify user details.
+{% if docsPrefix == "pe/" %}
+TBMQ PE includes two predefined user roles: **Administrator** and **Viewer**. For a detailed explanation of role-based access control, see [RBAC](/docs/pe/mqtt-broker/security/rbac/).
+{% endif %}
+
+User management can be performed through TBMQ's Web UI or [REST API](/docs/{{docsPrefix}}mqtt-broker/user-management/).
* TOC
{:toc}
@@ -14,7 +20,7 @@ To add a new User, please follow these steps:
{% include images-gallery.html imageCollection="add-user-broker" %}
-Note that all new users are initially created with the default password `sysadmin`. Upon first logging in, users will be required to change default password.
+Note that all new users are initially created with the default password `sysadmin`. Upon logging in, users will be prompted to change the default password.
## Edit user
@@ -28,18 +34,25 @@ To edit the details of an existing administrator, please follow these steps:
## Delete user
-Logged-in user can delete other users, but not itself. To delete user follow these steps:
+A logged-in user can delete other users, but not themselves. To delete a user, follow these steps:
1. Find the user in the _Users_ table and click on the corresponding row.
2. Click the _Delete user_ button and confirm the action by selecting _Yes_.
{% include images-gallery.html imageCollection="delete-user-broker" %}
-## Login as admin user
+## Login as another user
-TBMQ allows administrators to securely log in as other users automatically, without requiring their credentials or manual authentication.
+TBMQ allows Admin users to securely log in as other users without requiring their credentials or manual authentication.
1. Find the user in the _Users_ table (you can only log in as other users).
2. Click the _Login_ button in the corresponding row.
{% include images-gallery.html imageCollection="login-as-user" %}
+
+{% if docsPrefix == "pe/" %}
+## User created via OAuth 2.0
+
+{% include templates/mqtt-broker/security/user-password.md %}
+
+{% endif %}
diff --git a/_includes/docs/mqtt-broker/user-guide/ui/websocket-client.md b/_includes/docs/mqtt-broker/user-guide/ui/websocket-client.md
index afac615147..438f4fa0c2 100644
--- a/_includes/docs/mqtt-broker/user-guide/ui/websocket-client.md
+++ b/_includes/docs/mqtt-broker/user-guide/ui/websocket-client.md
@@ -2,12 +2,16 @@
{:toc}
The TBMQ WebSocket Client is a browser-accessible tool aimed at simplifying the debugging process and testing of MQTT clients across various scenarios.
-Leveraging the [MQTT over WebSocket](/docs/mqtt-broker/user-guide/mqtt-over-ws/) feature, it's designed with principles of simplicity and ease of use in mind.
+Leveraging the [MQTT over WebSocket](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-over-ws/) feature, it's designed with principles of simplicity and ease of use in mind.
It offers seamless management of MQTT clients, subscription to topics, and message reception or publication.
TBMQ WebSocket Client utilizes the [MQTT.js](https://github.com/mqttjs/MQTT.js) library for communication between client and broker.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
## Connections
@@ -58,7 +62,7 @@ WebSocket connections allow users to establish and configure various parameters,
* **Authentication**. TBMQ allows to create websocket connection with different types of handling credentials details like clientID (required), username, password:
* **Auto-generated credentials**. Credentials with random Client ID, random Username and empty Password. Please note that corresponding Credentials will be created.
* **Custom authentication**. Credentials with custom Client ID, Username, Password.
- * **Use existing credentials**. User selects existing credentials of the [Basic](/docs/mqtt-broker/security/#basic-authentication) type and, if required, input Password.
+ * **Use existing credentials**. The user selects existing credentials of the [Basic](/docs/{{docsPrefix}}mqtt-broker/security/#basic-authentication) type and, if required, inputs the password.
Password input field appears when the selected credentials require password to establish the connection.
@@ -112,7 +116,11 @@ The status of the WebSocket Client may be one of the following:
3. **Reconnecting** This status is displayed when the client is in the process of re-establishing a connection with the broker. The reconnecting can be cancelled clicking on the button Cancel in the top right corner.
4. **Connection failed**. Indicates that the client was unable to establish a connection with the broker. This status may also include additional information such as the cause of the failure - for instance, authentication issues, session taken over, among others.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
## Subscriptions
@@ -183,7 +191,11 @@ The table messages can be filtered by:
* **Type 'All/Received/Published'** - click on the type label in the header of the Messages table.
* **Topic/QoS/Retain** - click on the _filter_ icon next to _Clear messages_ button.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
### Publish a message
@@ -208,7 +220,11 @@ Here is a list of basic options for publishing a message, along with brief expla
After filling out the necessary information and settings, locate and **click on the Send icon** to publish your message.
The message will now be dispatched to the broker and relayed to all clients who are subscribed to the given topic.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
For MQTT clients utilizing **MQTT Version 5**, there are also additional parameters available to further customize your message publishing experience.
The combination of these features provides a comprehensive and flexible environment for MQTT message handling.
@@ -223,4 +239,8 @@ Below is a brief explanation of each setting:
* **Response Topic**. String which is used as the Topic Name for a response message.
* **User Properties**. Allows user-defined metadata in form of key-value pairs.
+{% if docsPrefix == "pe/" %}
+
+{% else %}

+{% endif %}
diff --git a/_includes/docs/mqtt-broker/user-management.md b/_includes/docs/mqtt-broker/user-management.md
new file mode 100644
index 0000000000..7605d9cfc2
--- /dev/null
+++ b/_includes/docs/mqtt-broker/user-management.md
@@ -0,0 +1,71 @@
+* TOC
+{:toc}
+
+By default, the system is initialized with a single admin user, with username **sysadmin@thingsboard.org** and password **sysadmin**.
+
+However, when operating in a production environment, it is strongly advised to create a new admin user and either remove the default user entirely
+or change its password.
+
+Throughout this documentation, all provided examples use the **curl** command to execute REST requests.
+
+{% include templates/mqtt-broker/authentication.md %}
+
+## Get all users
+
+```bash
+curl --location --request GET "http://localhost:8083/api/admin?pageSize=50&page=0" \
+--header "X-Authorization: Bearer $ACCESS_TOKEN"
+```
+{: .copy-code}
+
+Within the system, every user entity has a unique identifier known as the **id**.
+This id serves as a reference point and can be used to perform operations such as _updating_ or _deleting_ users.
+
+## Create/update user
+
+{% if docsPrefix == null %}
+
+```bash
+curl --location --request POST 'http://localhost:8083/api/admin' \
+--header "X-Authorization: Bearer $ACCESS_TOKEN" \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "id":$USER_ID,
+ "email":"test@gmail.com",
+ "password":"test",
+ "firstName":"test",
+ "lastName":"test"
+}'
+```
+{: .copy-code}
+
+{% else %}
+
+```bash
+curl --location --request POST 'http://localhost:8083/api/admin' \
+--header "X-Authorization: Bearer $ACCESS_TOKEN" \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "id":$USER_ID,
+ "email":"test@gmail.com",
+ "password":"test",
+ "firstName":"test",
+ "lastName":"test",
+ "roleName": "ADMIN"
+}'
+```
+{: .copy-code}
+
+{% endif %}
+
+If _$USER_ID_ is _null_ or the _id_ field is absent from the request body, a new admin user will be created; otherwise, the user with the _$USER_ID_ identifier will be updated.
+
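The id rule above can be sketched in shell. The field values mirror the request bodies shown in this section; the `user_payload` helper name is ours, not part of the TBMQ API:

```shell
#!/usr/bin/env bash
# Build the request body: omit "id" to create a user, include it to update.
user_payload() {
  local user_id="$1" email="$2"
  if [ -z "$user_id" ]; then
    printf '{"email":"%s","password":"test","firstName":"test","lastName":"test"}' "$email"
  else
    printf '{"id":"%s","email":"%s","password":"test","firstName":"test","lastName":"test"}' "$user_id" "$email"
  fi
}

user_payload "" "test@gmail.com"          # no id -> POST creates a new admin user
echo
user_payload "8f9c2d10" "test@gmail.com"  # id present -> POST updates that user
```

The resulting string can be passed to `curl` via `--data-raw` as in the examples above.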
+## Delete user
+
+```bash
+curl --location --request DELETE 'http://localhost:8083/api/admin/$USER_ID' \
+--header "X-Authorization: Bearer $ACCESS_TOKEN"
+```
+{: .copy-code}
+
+Replace _$USER_ID_ with the actual ID of the user you want to delete.
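When scripting the call rather than pasting by hand, use double quotes so the shell expands the variable (the single-quoted example above is meant for manual substitution). The id below is hypothetical:

```shell
#!/usr/bin/env bash
USER_ID="8f9c2d10-1111-2222-3333-444455556666"   # hypothetical user id
DELETE_URL="http://localhost:8083/api/admin/${USER_ID}"
# curl --location --request DELETE "$DELETE_URL" \
#   --header "X-Authorization: Bearer $ACCESS_TOKEN"
echo "$DELETE_URL"
```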
diff --git a/_includes/docs/pe/mqtt-broker/domains.md b/_includes/docs/pe/mqtt-broker/domains.md
new file mode 100644
index 0000000000..923138dc42
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/domains.md
@@ -0,0 +1,180 @@
+* TOC
+{:toc}
+
+After installing **TBMQ PE**, as described in the [Installation Options guide](/docs/pe/mqtt-broker/install/installation-options/),
+your instance is accessible by default via its public **IP address** or the **DNS record of the cloud Load Balancer**.
+However, configuring a **custom domain name** provides several important advantages:
+
+* **Simplified access** - users can log in using an easy-to-remember hostname instead of an IP or Load Balancer DNS name.
+* **Secure SSL connections** - domains enable the use of trusted SSL/TLS certificates.
+* **White labeling** - TBMQ uses the domain to apply custom branding to the login page (logos, titles, and colors). The domain simplifies configuration management, as each TBMQ deployment supports only one login page branding configuration.
+* **OAuth 2.0 / SSO integration** - multiple domains allow separate login configurations for each authentication provider.
+
+## Domain Registration
+
+{% capture domain_owner_note %}
+**Note:** You must be the owner of the domain you are registering.
+{% endcapture %}
+{% include templates/info-banner.md content=domain_owner_note %}
+
+To use your own hostname with TBMQ, you must first configure DNS and then register the domain inside TBMQ.
+
+### Step 1. Configure DNS
+
+On your DNS provider’s website:
+
+* Add an **A record** (or **CNAME record**) to map your domain to the IP or hostname where TBMQ is hosted.
+
+ * See [How to Create an A Record](#how-to-create-an-a-record-for-your-domain)
+ * Or [How to Create a CNAME Record](#how-to-create-a-cname-record-for-your-domain)
+
+* Add a valid **SSL certificate** for the chosen domain.
+
+### Step 2. Register Domain in TBMQ
+
+* Log in to your **TBMQ PE** account.
+
+{% include images-gallery.html imageCollection="register-domain" showListImageTitles="true" %}
+
+## Logging in with Your Domain
+
+After successful registration, you can access your TBMQ instance using the configured domain name.
+Open a web browser and enter the domain in the address bar - you should see the TBMQ login page.
+
+{% include images-gallery.html imageCollection="login-with-domain" %}
+
+## Viewing Domain Details
+
+To view details about a registered domain, simply click on it to open the domain details dialog.
+
+{% include images-gallery.html imageCollection="domain-details" %}
+
+## Deleting a Domain
+
+To delete a domain, click the "trash" icon in the row of the domain you want to delete. In the confirmation dialog, click "Yes" to confirm.
+
+{% include images-gallery.html imageCollection="delete-domain" %}
+
+## How to Create an A Record for Your Domain {#how-to-create-an-a-record-for-your-domain}
+
+### What Is an A Record?
+
+An **A record (Address Record)** links a domain name directly to an **IPv4 address**.
+It tells DNS resolvers where to find your server.
+
+**Example:**
+
+```
+mqtt.mycompany.com → 203.0.113.45
+```
+
+### When to Use an A Record
+
+Use an **A record** when your TBMQ instance has a **fixed public IP address** - for example, a VM, Kubernetes service, or on-premise server.
+
+### How to Create an A Record
+
+The exact procedure depends on your DNS provider.
+Refer to their documentation for detailed instructions:
+
+* [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values.html){:target="_blank"}
+* [GoDaddy](https://www.godaddy.com/help/add-an-a-record-19238){:target="_blank"}
+* [Cloudflare](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/){:target="_blank"}
+* [ClouDNS](https://www.cloudns.net/wiki/article/10/){:target="_blank"}
+* [Google Cloud DNS](https://cloud.google.com/dns/docs/records){:target="_blank"}
+* [Name.com](https://www.name.com/support/articles/115004893508-adding-an-a-record){:target="_blank"}
+* [DNSimple](https://support.dnsimple.com/articles/manage-a-record/){:target="_blank"}
+* [Infoblox NIOS](https://docs.infoblox.com/space/BloxOneDDI/186811892/Creating+A+Record){:target="_blank"}
+* [Namecheap](https://www.namecheap.com/support/knowledgebase/article.aspx/434/2237/how-do-i-set-up-host-records-for-a-domain/){:target="_blank"}
+
+If your provider is not listed, check their documentation or contact their support team for assistance.
+
+## How to Create a CNAME Record for Your Domain {#how-to-create-a-cname-record-for-your-domain}
+
+### What Is a CNAME Record?
+
+A **CNAME (Canonical Name Record)** maps one domain name to another domain name.
+It acts as an **alias**, allowing several domains or subdomains to point to the same hostname.
+
+**Example:**
+
+```
+mqtt.mycompany.com → broker.mycompany.net
+```
+
+### When to Use a CNAME Record
+
+Use a **CNAME record** when:
+
+* You want multiple domains (e.g., `mqtt.mycompany.com`, `iot.mycompany.com`) to resolve to the same host.
+* Your server’s IP may change, but the target domain remains constant.
+* You want to simplify DNS management by maintaining only one A record (on the primary domain).
+
+### How to Create a CNAME Record
+
+Each DNS provider has its own interface for adding CNAME records.
+Below are direct links to their setup guides:
+
+* [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values.html){:target="_blank"}
+* [GoDaddy](https://www.godaddy.com/help/add-a-cname-record-19236){:target="_blank"}
+* [Cloudflare](https://community.cloudflare.com/t/how-do-i-add-a-cname-record/59){:target="_blank"}
+* [ClouDNS](https://www.cloudns.net/wiki/article/13/){:target="_blank"}
+* [Google Cloud DNS](https://cloud.google.com/dns/docs/records){:target="_blank"}
+* [Name.com](https://www.name.com/support/articles/115004895548-adding-a-cname-record){:target="_blank"}
+* [easyDNS](https://kb.easydns.com/knowledge/how-to-make-a-dns-entry/){:target="_blank"}
+* [DNSimple](https://support.dnsimple.com/articles/manage-cname-record/#adding-a-cname-record){:target="_blank"}
+* [DNSMadeEasy](https://support.dnsmadeeasy.com/hc/en-us/articles/34327195668507-CNAME-Record){:target="_blank"}
+* [No-IP.com](https://www.noip.com/support/knowledgebase/how-to-configure-your-no-ip-hostname/){:target="_blank"}
+* [Infoblox NIOS](https://docs.infoblox.com/display/BloxOneDDI/Creating+a+CNAME+Record){:target="_blank"}
+* [Namecheap](https://www.namecheap.com/support/knowledgebase/article.aspx/9646/2237/how-to-create-a-cname-record-for-your-domain){:target="_blank"}
+
+If your provider is not listed, check their documentation or contact their support team for assistance.
+
+## Troubleshooting
+
+If your domain does not resolve or TBMQ is not accessible, verify the DNS configuration.
+
+### Check DNS Record
+
+Use the [Google Admin Toolbox DIG](https://toolbox.googleapps.com/apps/dig/){:target="_blank"}
+or run the following command on Linux:
+
+```bash
+dig your-domain.com any
+```
+{: .copy-code}
+
+Replace `your-domain.com` with your actual domain name.
+Example:
+
+```bash
+dig mqtt.mycompany.com any
+```
+
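As a quick scripted alternative, `getent` queries the system resolver (including `/etc/hosts`); the helper name is ours and the hostname is a placeholder for your actual domain:

```shell
#!/usr/bin/env bash
# Print whether a hostname resolves on this machine.
check_resolves() {
  if getent hosts "$1" >/dev/null; then
    echo "resolves"
  else
    echo "does not resolve"
  fi
}

check_resolves localhost   # sanity check against a known-good name
```

Run `check_resolves your-domain.com` with your actual domain to verify the DNS record from the command line.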
+### Review the Output
+
+If no `ANSWER SECTION` appears, the record was not added correctly.
+For example, this output shows **no record found**:
+
+```bash
+;; ANSWER SECTION:
+mqtt.mycompany.com. 3600 IN HINFO "RFC8482" ""
+```
+
+A correct record should look like this:
+
+```bash
+;; ANSWER SECTION:
+mqtt.mycompany.com. 3600 IN CNAME broker.mycompany.net.
+```
+
+or, if using an A record:
+
+```bash
+;; ANSWER SECTION:
+mqtt.mycompany.com. 3600 IN A 203.0.113.45
+```
+
+### Contact Support
+
+If the configuration appears correct but the issue persists, please [contact us](https://thingsboard.io/docs/pe/mqtt-broker/help/){:target="_blank"} for further assistance.
diff --git a/_includes/docs/pe/mqtt-broker/image-gallery.md b/_includes/docs/pe/mqtt-broker/image-gallery.md
new file mode 100644
index 0000000000..ad69a56ced
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/image-gallery.md
@@ -0,0 +1,91 @@
+{% assign feature = "Image gallery" %}{% include templates/mqtt-broker/pe-tbmq-feature-banner.md %}
+
+* TOC
+{:toc}
+
+The Image Gallery serves as a centralized repository for managing images in the TBMQ application.
+It provides the source for logo and favicon images used in the **White labeling** feature on the [application](/docs/pe/mqtt-broker/white-labeling/#customize-tbmq-web-interface) and [login](/docs/pe/mqtt-broker/white-labeling/#customize-the-login-page) pages.
+
+
+
+## Add image
+
+Add your images to the Image Gallery in [image file format](#upload-image) (PNG, JPEG, GIF, etc.) or [JSON file format](#import-image-from-json).
+
+### Upload image
+
+To upload a new image in **image file format**, follow these steps:
+
+{% include images-gallery.html imageCollection="upload-image-1" showListImageTitles="true" %}
+
+### Import image from JSON
+
+To import your images in **JSON file format**, follow these steps:
+
+{% include images-gallery.html imageCollection="upload-image-2" showListImageTitles="true" %}
+
+## Change the image view mode
+
+You can view images in one of two modes: list or grid.
+To change the image viewing mode, simply select the mode that suits you in the top left corner of the Image gallery window.
+
+{% include images-gallery.html imageCollection="image-viewing-mode" %}
+
+## Image operations
+
+You can [download](#download-image), [export to JSON](#export-image-to-json), [edit](#edit-image), and [delete](#delete-image) an image using the corresponding icon next to the image's name.
+Let's take a closer look at each operation.
+
+### Download image
+
+Downloading an image in image file format can be done in two ways, depending on the selected image viewing format:
+
+- If you're using the list view of images, click the "Download image" icon next to the name of the image you want to download.
+- If you're using the grid view, hover your mouse pointer over the image you want to download and click the "Download image" icon.
+
+The image in image file format will be saved to your PC.
+
+{% include images-gallery.html imageCollection="download-image-1" %}
+
+### Export image to JSON
+
+Exporting an image to JSON can be done in two ways, depending on the selected image viewing format:
+
+- If you're using the list view of images, click the "Export image to JSON" icon next to the name of the image you want to export.
+- If you're using the grid view, hover your mouse pointer over the image you want to export and click the "Export image to JSON" icon.
+
+The image in JSON format will be saved to your PC.
+
+{% include images-gallery.html imageCollection="export-image-1" %}
+
+### Edit image
+
+To open the image editing window, click the "Edit image" icon next to the name of the image you want to edit (if you're using the grid view, hover your mouse pointer over the image and click the "Edit image" icon).
+An editing window will open. In this window, you can change the image's name, download it, export it to JSON, and also [update the image](#update-image).
+
+To change the name of the image, enter a new name and click the "Save" icon in the "Edit image" window.
+
+{% include images-gallery.html imageCollection="edit-image-1" %}
+
+#### Update image
+
+Updating the image can be useful, for example, when one picture serves as the background for multiple places.
+This allows you to make changes just once, and all places using that image will automatically receive the updated version, saving you the effort of editing each place individually.
+
+To update the image, click the "Update image" button in the editing window. Select a new image or simply drag it to the "Update image" window and click "Update".
+
+{% include images-gallery.html imageCollection="update-image-1" %}
+
+### Delete image
+
+To delete an image from the image list, follow these steps:
+
+{% include images-gallery.html imageCollection="delete-image-1" showListImageTitles="true" %}
+
+To delete an image that is displayed as an image grid, follow these steps:
+
+{% include images-gallery.html imageCollection="delete-image-2" showListImageTitles="true" %}
+
+You can also delete multiple images at once (only via the list view):
+
+{% include images-gallery.html imageCollection="delete-image-3" showListImageTitles="true" %}
diff --git a/_includes/docs/pe/mqtt-broker/install/config.md b/_includes/docs/pe/mqtt-broker/install/config.md
new file mode 100644
index 0000000000..833e4dcb17
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/install/config.md
@@ -0,0 +1,3193 @@
+
+
+## HTTP server parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | server.shutdown |
+ SERVER_SHUTDOWN |
+ graceful |
+ Shutdown type (graceful or immediate) |
+
+
+ | server.address |
+ HTTP_BIND_ADDRESS |
+ 0.0.0.0 |
+ HTTP Server bind address (has no effect if web-environment is disabled) |
+
+
+ | server.port |
+ HTTP_BIND_PORT |
+ 8083 |
+ HTTP Server bind port (has no effect if web-environment is disabled) |
+
+
+ | server.forward_headers_strategy |
+ HTTP_FORWARD_HEADERS_STRATEGY |
+ framework |
+ Server headers forwarding strategy. Required for SWAGGER UI when reverse proxy is used |
+
+
+ | server.http2.enabled |
+ HTTP2_ENABLED |
+ true |
+ Enable/disable HTTP/2 support |
+
+
+ | server.log_controller_error_stack_trace |
+ HTTP_LOG_CONTROLLER_ERROR_STACK_TRACE |
+ false |
+ Log errors with stacktrace when REST API throws exception |
+
+
+ | server.http.max_payload_size |
+ HTTP_MAX_PAYLOAD_SIZE_LIMIT_CONFIGURATION |
+ /api/image*/**=52428800;/api/resource/**=52428800;/api/**=16777216 |
+ Semicolon-separated list of urlPattern=maxPayloadSize pairs that define the max HTTP request size in bytes for the specified URL pattern. After the first match, all others are skipped |
+
+
+ | server.ssl.enabled |
+ SSL_ENABLED |
+ false |
+ Enable/disable SSL support |
+
+
+ | server.ssl.credentials.type |
+ SSL_CREDENTIALS_TYPE |
+ PEM |
+ Server credentials type (PEM - pem certificate file; KEYSTORE - java keystore) |
+
+
+ | server.ssl.credentials.pem.cert_file |
+ SSL_PEM_CERT |
+ server.pem |
+ Path to the server certificate file (holds server certificate or certificate chain, may include server private key) |
+
+
+ | server.ssl.credentials.pem.key_file |
+ SSL_PEM_KEY |
+ server_key.pem |
+ Path to the server certificate private key file. Optional by default. Required if the private key is not present in server certificate file |
+
+
+ | server.ssl.credentials.pem.key_password |
+ SSL_PEM_KEY_PASSWORD |
+ server_key_password |
+ Server certificate private key password (optional) |
+
+
+ | server.ssl.credentials.keystore.type |
+ SSL_KEY_STORE_TYPE |
+ PKCS12 |
+ Type of the key store (JKS or PKCS12) |
+
+
+ | server.ssl.credentials.keystore.store_file |
+ SSL_KEY_STORE |
+ classpath:keystore/keystore.p12 |
+ Path to the key store that holds the SSL certificate |
+
+
+ | server.ssl.credentials.keystore.store_password |
+ SSL_KEY_STORE_PASSWORD |
+ thingsboard_mqtt_broker |
+ Password used to access the key store |
+
+
+ | server.ssl.credentials.keystore.key_alias |
+ SSL_KEY_ALIAS |
+ tomcat |
+ Key alias |
+
+
+ | server.ssl.credentials.keystore.key_password |
+ SSL_KEY_PASSWORD |
+ thingsboard_mqtt_broker |
+ Password used to access the key |
+
+
+
+
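As a sketch, the HTTP server parameters above can be overridden through their environment variables before starting the broker process; the values below are illustrative, not recommendations:

```shell
#!/usr/bin/env bash
# Override the HTTP listener settings for this run (variable names
# are taken from the table above).
export HTTP_BIND_ADDRESS=0.0.0.0
export HTTP_BIND_PORT=8083
export SSL_ENABLED=false

echo "HTTP listener: ${HTTP_BIND_ADDRESS}:${HTTP_BIND_PORT}, SSL: ${SSL_ENABLED}"
```

The same pattern applies to the MQTT listener parameters in the next table.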
+
+## MQTT listeners parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | listener.proxy_enabled |
+ MQTT_PROXY_PROTOCOL_ENABLED |
+ false |
+ Enable proxy protocol support as a global setting for all listeners. Disabled by default. If enabled, supports both v1 and v2.
+ Useful for getting the client's real IP address in the logs, in session details, and for the unauthorized clients feature |
+
+
+ | listener.leak_detector_level |
+ NETTY_LEAK_DETECTOR_LVL |
+ DISABLED |
+ Netty leak detector level: DISABLED, SIMPLE, ADVANCED, PARANOID. It is set globally for all listeners |
+
+
+ | listener.write_buffer_high_water_mark |
+ NETTY_WRITE_BUFFER_HIGH_WATER_MARK |
+ 64 |
+ The threshold (in KB) at which Netty considers the channel non-writable. When the limit is reached, TBMQ stops delivering data to the subscriber until the channel is writable again.
+ Non-persistent clients lose data in this case |
+
+
+ | listener.write_buffer_low_water_mark |
+ NETTY_WRITE_BUFFER_LOW_WATER_MARK |
+ 32 |
+ The threshold (in KB) at which Netty considers the channel writable again. When the limit is reached, TBMQ resumes delivering data to the subscriber |
+
+
+ | listener.so_receive_buffer |
+ NETTY_SO_RECEIVE_BUFFER |
+ 0 |
+ Socket receive buffer size for Netty in KB. If the buffer limit is reached, TCP will trigger backpressure and notify the sender to slow down.
+ If set to 0 (default), the system's default buffer size will be used |
+
+
+ | listener.tcp.enabled |
+ LISTENER_TCP_ENABLED |
+ true |
+ Enable/disable MQTT TCP port listener |
+
+
+ | listener.tcp.bind_address |
+ LISTENER_TCP_BIND_ADDRESS |
+ 0.0.0.0 |
+ MQTT TCP listener bind address |
+
+
+ | listener.tcp.bind_port |
+ LISTENER_TCP_BIND_PORT |
+ 1883 |
+ MQTT TCP listener bind port |
+
+
+ | listener.tcp.proxy_enabled |
+ MQTT_TCP_PROXY_PROTOCOL_ENABLED |
+ |
+ Enable proxy protocol support for the MQTT TCP listener. Unset by default – in this case it inherits the global MQTT_PROXY_PROTOCOL_ENABLED value.
+ If explicitly set, supports both v1 and v2 and takes precedence over the global setting.
+ Useful for getting the client's real IP address in the logs, in session details, and for the unauthorized clients feature |
+
+
+ | listener.tcp.netty.boss_group_thread_count |
+ TCP_NETTY_BOSS_GROUP_THREADS |
+ 1 |
+ Netty boss group threads count |
+
+
+ | listener.tcp.netty.worker_group_thread_count |
+ TCP_NETTY_WORKER_GROUP_THREADS |
+ 12 |
+ Netty worker group threads count |
+
+
+ | listener.tcp.netty.max_payload_size |
+ TCP_NETTY_MAX_PAYLOAD_SIZE |
+ 65536 |
+ Max payload size in bytes |
+
+
+ | listener.tcp.netty.so_keep_alive |
+ TCP_NETTY_SO_KEEPALIVE |
+ true |
+ Enable/disable keep-alive mechanism to periodically probe the other end of a connection |
+
+
+ | listener.tcp.netty.shutdown_quiet_period |
+ TCP_NETTY_SHUTDOWN_QUIET_PERIOD |
+ 0 |
+ Period in seconds in graceful shutdown during which no new tasks are submitted |
+
+
+ | listener.tcp.netty.shutdown_timeout |
+ TCP_NETTY_SHUTDOWN_TIMEOUT |
+ 5 |
+ The max time in seconds to wait until the executor is stopped |
+
+
+ | listener.ssl.enabled |
+ LISTENER_SSL_ENABLED |
+ false |
+ Enable/disable MQTT SSL port listener |
+
+
+ | listener.ssl.bind_address |
+ LISTENER_SSL_BIND_ADDRESS |
+ 0.0.0.0 |
+ MQTT SSL listener bind address |
+
+
+ | listener.ssl.bind_port |
+ LISTENER_SSL_BIND_PORT |
+ 8883 |
+ MQTT SSL listener bind port |
+
+
+ | listener.ssl.proxy_enabled |
+ MQTT_SSL_PROXY_PROTOCOL_ENABLED |
+ |
+ Enable proxy protocol support for the MQTT TLS listener. Unset by default – in this case it inherits the global MQTT_PROXY_PROTOCOL_ENABLED value.
+ If explicitly set, supports both v1 and v2 and takes precedence over the global setting.
+ Useful for getting the client's real IP address in the logs, in session details, and for the unauthorized clients feature |
+
+
+ | listener.ssl.config.protocol |
+ LISTENER_SSL_PROTOCOL |
+ TLSv1.2 |
+ SSL protocol: see this link |
+
+
+ | listener.ssl.config.enabled_cipher_suites |
+ LISTENER_SSL_ENABLED_CIPHER_SUITES |
+ |
+ Sets the cipher suites enabled for use on the mqtts listener. The value is a comma-separated list of cipher suites (e.g. TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256).
+ Defaults to an empty list, meaning all cipher suites supported by the provider are used |
+
+
+ | listener.ssl.config.credentials.type |
+ LISTENER_SSL_CREDENTIALS_TYPE |
+ PEM |
+ Server credentials type (PEM - pem certificate file; KEYSTORE - java keystore) |
+
+
+ | listener.ssl.config.credentials.pem.cert_file |
+ LISTENER_SSL_PEM_CERT |
+ mqttserver.pem |
+ Path to the server certificate file (holds server certificate or certificate chain, may include server private key) |
+
+
+ | listener.ssl.config.credentials.pem.key_file |
+ LISTENER_SSL_PEM_KEY |
+ mqttserver_key.pem |
+ Path to the server certificate private key file. Optional by default. Required if the private key is not present in server certificate file |
+
+
+ | listener.ssl.config.credentials.pem.key_password |
+ LISTENER_SSL_PEM_KEY_PASSWORD |
+ server_key_password |
+ Server certificate private key password (optional) |
+
+
+ | listener.ssl.config.credentials.keystore.type |
+ LISTENER_SSL_KEY_STORE_TYPE |
+ JKS |
+ Type of the key store (JKS or PKCS12) |
+
+
+ | listener.ssl.config.credentials.keystore.store_file |
+ LISTENER_SSL_KEY_STORE |
+ mqttserver.jks |
+ Path to the key store that holds the SSL certificate |
+
+
+ | listener.ssl.config.credentials.keystore.store_password |
+ LISTENER_SSL_KEY_STORE_PASSWORD |
+ server_ks_password |
+ Password used to access the key store |
+
+
+ | listener.ssl.config.credentials.keystore.key_alias |
+ LISTENER_SSL_KEY_ALIAS |
+ |
+ Optional alias of the private key. If not set, the platform will load the first private key from the keystore |
+
+
+ | listener.ssl.config.credentials.keystore.key_password |
+ LISTENER_SSL_KEY_PASSWORD |
+ server_key_password |
+ Optional password to access the private key. If not set, the platform will attempt to load private keys that are not protected with a password |
+
+
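+ As an illustration, the PEM credential parameters above could be supplied as environment variables when deploying TBMQ. The file paths and the password below are placeholders, not shipped defaults:
+
+ ```shell
+ # Hypothetical example: configure the mqtts listener with PEM credentials.
+ # Certificate/key file paths and the password are illustrative placeholders.
+ export LISTENER_SSL_CREDENTIALS_TYPE=PEM
+ export LISTENER_SSL_PEM_CERT=/etc/tbmq/certs/mqttserver.pem
+ export LISTENER_SSL_PEM_KEY=/etc/tbmq/certs/mqttserver_key.pem
+ export LISTENER_SSL_PEM_KEY_PASSWORD=changeme
+ # Optionally restrict the cipher suites (comma-separated; empty = all supported):
+ export LISTENER_SSL_ENABLED_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ ```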
+ | listener.ssl.netty.boss_group_thread_count |
+ SSL_NETTY_BOSS_GROUP_THREADS |
+ 1 |
+ Netty boss group threads count |
+
+
+ | listener.ssl.netty.worker_group_thread_count |
+ SSL_NETTY_WORKER_GROUP_THREADS |
+ 12 |
+ Netty worker group threads count |
+
+
+ | listener.ssl.netty.max_payload_size |
+ SSL_NETTY_MAX_PAYLOAD_SIZE |
+ 65536 |
+ Max payload size in bytes |
+
+
+ | listener.ssl.netty.so_keep_alive |
+ SSL_NETTY_SO_KEEPALIVE |
+ true |
+ Enable/disable keep-alive mechanism to periodically probe the other end of a connection |
+
+
+ | listener.ssl.netty.shutdown_quiet_period |
+ SSL_NETTY_SHUTDOWN_QUIET_PERIOD |
+ 0 |
+ Period in seconds during graceful shutdown in which no new tasks are submitted |
+
+
+ | listener.ssl.netty.shutdown_timeout |
+ SSL_NETTY_SHUTDOWN_TIMEOUT |
+ 5 |
+ The max time in seconds to wait until the executor is stopped |
+
+
+ | listener.ws.enabled |
+ LISTENER_WS_ENABLED |
+ true |
+ Enable/disable MQTT WS port listener |
+
+
+ | listener.ws.bind_address |
+ LISTENER_WS_BIND_ADDRESS |
+ 0.0.0.0 |
+ MQTT WS listener bind address |
+
+
+ | listener.ws.bind_port |
+ LISTENER_WS_BIND_PORT |
+ 8084 |
+ MQTT WS listener bind port |
+
+
+ | listener.ws.proxy_enabled |
+ MQTT_WS_PROXY_PROTOCOL_ENABLED |
+ |
+ Enable proxy protocol support for the MQTT WS listener. Unset by default – in this case it inherits the global MQTT_PROXY_PROTOCOL_ENABLED value.
+ If explicitly set, supports both v1 and v2 and takes precedence over the global setting.
+ Useful for obtaining the real client IP address in the logs, in session details, and for the unauthorized clients feature |
+
+
+ | listener.ws.netty.sub_protocols |
+ WS_NETTY_SUB_PROTOCOLS |
+ mqttv3.1,mqtt |
+ Comma-separated list of subprotocols that the WebSocket can negotiate. The subprotocol setting `mqtt` represents MQTT 3.1.1 and MQTT 5 |
+
+
+ | listener.ws.netty.boss_group_thread_count |
+ WS_NETTY_BOSS_GROUP_THREADS |
+ 1 |
+ Netty boss group threads count |
+
+
+ | listener.ws.netty.worker_group_thread_count |
+ WS_NETTY_WORKER_GROUP_THREADS |
+ 12 |
+ Netty worker group threads count |
+
+
+ | listener.ws.netty.max_payload_size |
+ WS_NETTY_MAX_PAYLOAD_SIZE |
+ 65536 |
+ Max payload size in bytes |
+
+
+ | listener.ws.netty.so_keep_alive |
+ WS_NETTY_SO_KEEPALIVE |
+ true |
+ Enable/disable keep-alive mechanism to periodically probe the other end of a connection |
+
+
+ | listener.ws.netty.shutdown_quiet_period |
+ WS_NETTY_SHUTDOWN_QUIET_PERIOD |
+ 0 |
+ Period in seconds during graceful shutdown in which no new tasks are submitted |
+
+
+ | listener.ws.netty.shutdown_timeout |
+ WS_NETTY_SHUTDOWN_TIMEOUT |
+ 5 |
+ The max time in seconds to wait until the executor is stopped |
+
+
+ | listener.wss.enabled |
+ LISTENER_WSS_ENABLED |
+ false |
+ Enable/disable MQTT WSS port listener |
+
+
+ | listener.wss.bind_address |
+ LISTENER_WSS_BIND_ADDRESS |
+ 0.0.0.0 |
+ MQTT WSS listener bind address |
+
+
+ | listener.wss.bind_port |
+ LISTENER_WSS_BIND_PORT |
+ 8085 |
+ MQTT WSS listener bind port |
+
+
+ | listener.wss.proxy_enabled |
+ MQTT_WSS_PROXY_PROTOCOL_ENABLED |
+ |
+ Enable proxy protocol support for the MQTT WSS listener. Unset by default – in this case it inherits the global MQTT_PROXY_PROTOCOL_ENABLED value.
+ If explicitly set, supports both v1 and v2 and takes precedence over the global setting.
+ Useful for obtaining the real client IP address in the logs, in session details, and for the unauthorized clients feature |
+
+
+ | listener.wss.config.protocol |
+ LISTENER_WSS_PROTOCOL |
+ TLSv1.2 |
+ SSL protocol: see this link |
+
+
+ | listener.wss.config.enabled_cipher_suites |
+ LISTENER_WSS_ENABLED_CIPHER_SUITES |
+ |
+ Sets the cipher suites enabled for use on the wss listener. The value is a comma-separated list of cipher suites (e.g. TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256).
+ Defaults to an empty list, meaning all cipher suites supported by the provider in use are enabled |
+
+
+ | listener.wss.config.credentials.type |
+ LISTENER_WSS_CREDENTIALS_TYPE |
+ PEM |
+ Server credentials type (PEM - pem certificate file; KEYSTORE - java keystore) |
+
+
+ | listener.wss.config.credentials.pem.cert_file |
+ LISTENER_WSS_PEM_CERT |
+ ws_mqtt_server.pem |
+ Path to the server certificate file (holds server certificate or certificate chain, may include server private key) |
+
+
+ | listener.wss.config.credentials.pem.key_file |
+ LISTENER_WSS_PEM_KEY |
+ ws_mqtt_server_key.pem |
+ Path to the server certificate private key file. Optional by default. Required if the private key is not present in the server certificate file |
+
+
+ | listener.wss.config.credentials.pem.key_password |
+ LISTENER_WSS_PEM_KEY_PASSWORD |
+ ws_server_key_password |
+ Server certificate private key password (optional) |
+
+
+ | listener.wss.config.credentials.keystore.type |
+ LISTENER_WSS_KEY_STORE_TYPE |
+ JKS |
+ Type of the key store (JKS or PKCS12) |
+
+
+ | listener.wss.config.credentials.keystore.store_file |
+ LISTENER_WSS_KEY_STORE |
+ ws_mqtt_server.jks |
+ Path to the key store that holds the SSL certificate |
+
+
+ | listener.wss.config.credentials.keystore.store_password |
+ LISTENER_WSS_KEY_STORE_PASSWORD |
+ ws_server_ks_password |
+ Password used to access the key store |
+
+
+ | listener.wss.config.credentials.keystore.key_alias |
+ LISTENER_WSS_KEY_ALIAS |
+ |
+ Optional alias of the private key. If not set, the platform will load the first private key from the keystore |
+
+
+ | listener.wss.config.credentials.keystore.key_password |
+ LISTENER_WSS_KEY_PASSWORD |
+ ws_server_key_password |
+ Optional password to access the private key. If not set, the platform will attempt to load private keys that are not protected with a password |
+
+
+ | listener.wss.netty.sub_protocols |
+ WSS_NETTY_SUB_PROTOCOLS |
+ mqttv3.1,mqtt |
+ Comma-separated list of subprotocols that the WebSocket can negotiate. The subprotocol setting `mqtt` represents MQTT 3.1.1 and MQTT 5 |
+
+
+ | listener.wss.netty.boss_group_thread_count |
+ WSS_NETTY_BOSS_GROUP_THREADS |
+ 1 |
+ Netty boss group threads count |
+
+
+ | listener.wss.netty.worker_group_thread_count |
+ WSS_NETTY_WORKER_GROUP_THREADS |
+ 12 |
+ Netty worker group threads count |
+
+
+ | listener.wss.netty.max_payload_size |
+ WSS_NETTY_MAX_PAYLOAD_SIZE |
+ 65536 |
+ Max payload size in bytes |
+
+
+ | listener.wss.netty.so_keep_alive |
+ WSS_NETTY_SO_KEEPALIVE |
+ true |
+ Enable/disable keep-alive mechanism to periodically probe the other end of a connection |
+
+
+ | listener.wss.netty.shutdown_quiet_period |
+ WSS_NETTY_SHUTDOWN_QUIET_PERIOD |
+ 0 |
+ Period in seconds during graceful shutdown in which no new tasks are submitted |
+
+
+ | listener.wss.netty.shutdown_timeout |
+ WSS_NETTY_SHUTDOWN_TIMEOUT |
+ 5 |
+ The max time in seconds to wait until the executor is stopped |
+
+
+
+
+
+## Kafka parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | queue.msg-all.consumers-count |
+ TB_MSG_ALL_CONSUMERS_COUNT |
+ 4 |
+ Number of parallel consumers for 'tbmq.msg.all' topic. Should not be more than the number of partitions in the topic |
+
+
+ | queue.msg-all.threads-count |
+ TB_MSG_ALL_THREADS_COUNT |
+ 4 |
+ Number of threads in the pool to process consumer tasks. Should not be less than the number of consumers |
+
+
+ | queue.msg-all.poll-interval |
+ TB_MSG_ALL_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.all' topic |
+
+
+ | queue.msg-all.pack-processing-timeout |
+ TB_MSG_ALL_PACK_PROCESSING_TIMEOUT |
+ 20000 |
+ Timeout in milliseconds for processing the pack of messages from 'tbmq.msg.all' topic |
+
+
+ | queue.msg-all.ack-strategy.type |
+ TB_MSG_ALL_ACK_STRATEGY_TYPE |
+ SKIP_ALL |
+ Processing strategy for 'tbmq.msg.all' topic. Can be: SKIP_ALL, RETRY_ALL |
+
+
+ | queue.msg-all.ack-strategy.retries |
+ TB_MSG_ALL_ACK_STRATEGY_RETRIES |
+ 1 |
+ Number of retries; 0 means unlimited. Used for the RETRY_ALL processing strategy |
+
+
+ | queue.msg-all.msg-parallel-processing |
+ TB_MSG_ALL_PARALLEL_PROCESSING |
+ false |
+ Enable/disable processing of consumed messages in parallel (grouped by publishing client id to preserve order).
+ Helpful when the same client publishes lots of messages in a short amount of time.
+ It is recommended to assess the impact of this parameter before enabling it in production |
+
+
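+ For example, to make the main message consumer retry failed packs instead of skipping them, the ack-strategy parameters above could be combined as follows (the values are illustrative, not recommendations):
+
+ ```shell
+ # Hypothetical example: retry failed message packs up to 3 times
+ # instead of the default SKIP_ALL behavior.
+ export TB_MSG_ALL_ACK_STRATEGY_TYPE=RETRY_ALL
+ export TB_MSG_ALL_ACK_STRATEGY_RETRIES=3
+ # Keep the threads count at least equal to the consumers count:
+ export TB_MSG_ALL_CONSUMERS_COUNT=4
+ export TB_MSG_ALL_THREADS_COUNT=4
+ ```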
+ | queue.application-persisted-msg.poll-interval |
+ TB_APP_PERSISTED_MSG_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from Application topics |
+
+
+ | queue.application-persisted-msg.pack-processing-timeout |
+ TB_APP_PERSISTED_MSG_PACK_PROCESSING_TIMEOUT |
+ 20000 |
+ Timeout in milliseconds for processing the pack of messages |
+
+
+ | queue.application-persisted-msg.ack-strategy.type |
+ TB_APP_PERSISTED_MSG_ACK_STRATEGY_TYPE |
+ RETRY_ALL |
+ Processing strategy for Application topics. Can be: SKIP_ALL, RETRY_ALL |
+
+
+ | queue.application-persisted-msg.ack-strategy.retries |
+ TB_APP_PERSISTED_MSG_ACK_STRATEGY_RETRIES |
+ 3 |
+ Number of retries; 0 means unlimited. Used for the RETRY_ALL processing strategy |
+
+
+ | queue.application-persisted-msg.client-id-validation |
+ TB_APP_PERSISTED_MSG_CLIENT_ID_VALIDATION |
+ true |
+ Enable/disable check that application client id contains only alphanumeric chars for Kafka topic creation |
+
+
+ | queue.application-persisted-msg.shared-topic-validation |
+ TB_APP_PERSISTED_MSG_SHARED_TOPIC_VALIDATION |
+ true |
+ Enable/disable check that application shared subscription topic filter contains only alphanumeric chars or '+' or '#' for Kafka topic creation |
+
+
+ | queue.device-persisted-msg.consumers-count |
+ TB_DEVICE_PERSISTED_MSG_CONSUMERS_COUNT |
+ 3 |
+ Number of parallel consumers for 'tbmq.msg.persisted' topic. Should not be more than the number of partitions in the topic |
+
+
+ | queue.device-persisted-msg.threads-count |
+ TB_DEVICE_PERSISTED_MSG_THREADS_COUNT |
+ 3 |
+ Number of threads in the pool to process consumer tasks |
+
+
+ | queue.device-persisted-msg.poll-interval |
+ TB_DEVICE_PERSISTED_MSG_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.persisted' topic |
+
+
+ | queue.device-persisted-msg.pack-processing-timeout |
+ TB_DEVICE_PERSISTED_MSG_PACK_PROCESSING_TIMEOUT |
+ 20000 |
+ Timeout in milliseconds for processing the pack of messages from 'tbmq.msg.persisted' topic |
+
+
+ | queue.device-persisted-msg.ack-strategy.type |
+ TB_DEVICE_PERSISTED_MSG_ACK_STRATEGY_TYPE |
+ RETRY_ALL |
+ Queue processing strategy. Can be: SKIP_ALL, RETRY_ALL |
+
+
+ | queue.device-persisted-msg.ack-strategy.retries |
+ TB_DEVICE_PERSISTED_MSG_ACK_STRATEGY_RETRIES |
+ 3 |
+ Number of retries; 0 means unlimited. Used for the RETRY_ALL processing strategy |
+
+
+ | queue.device-persisted-msg.ack-strategy.pause-between-retries |
+ TB_DEVICE_PERSISTED_MSG_ACK_STRATEGY_PAUSE_BETWEEN_RETRIES |
+ 1 |
+ Time in seconds to wait in consumer thread before retries |
+
+
+ | queue.retained-msg.poll-interval |
+ TB_RETAINED_MSG_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.retained' topic |
+
+
+ | queue.retained-msg.acknowledge-wait-timeout-ms |
+ TB_RETAINED_MSG_ACK_WAIT_TIMEOUT_MS |
+ 500 |
+ Interval in milliseconds to wait for system messages to be delivered to 'tbmq.msg.retained' topic |
+
+
+ | queue.client-session.poll-interval |
+ TB_CLIENT_SESSION_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.session' topic |
+
+
+ | queue.client-session.acknowledge-wait-timeout-ms |
+ TB_CLIENT_SESSION_ACK_WAIT_TIMEOUT_MS |
+ 500 |
+ Interval in milliseconds to wait for system messages to be delivered to 'tbmq.client.session' topic |
+
+
+ | queue.client-subscriptions.poll-interval |
+ TB_CLIENT_SUBSCRIPTIONS_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.subscriptions' topic |
+
+
+ | queue.client-subscriptions.acknowledge-wait-timeout-ms |
+ TB_CLIENT_SUBSCRIPTIONS_ACK_WAIT_TIMEOUT_MS |
+ 500 |
+ Interval in milliseconds to wait for system messages to be delivered to 'tbmq.client.subscriptions' topic |
+
+
+ | queue.client-session-event.consumers-count |
+ TB_CLIENT_SESSION_EVENT_CONSUMERS_COUNT |
+ 2 |
+ Number of parallel consumers for `tbmq.client.session.event.request` topic |
+
+
+ | queue.client-session-event.max-pending-requests |
+ TB_CLIENT_SESSION_EVENT_MAX_PENDING_REQUESTS |
+ 10000 |
+ Maximum number of pending client session events |
+
+
+ | queue.client-session-event.poll-interval |
+ TB_CLIENT_SESSION_EVENT_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.session.event.request' topic |
+
+
+ | queue.client-session-event.batch-wait-timeout-ms |
+ TB_CLIENT_SESSION_EVENT_BATCH_WAIT_MS |
+ 2000 |
+ Max interval in milliseconds to process 'tbmq.client.session.event.request' messages after consuming them |
+
+
+ | queue.client-session-event-response.response-sender-threads |
+ TB_CLIENT_SESSION_EVENT_RESPONSE_SENDER_THREADS |
+ 8 |
+ Number of threads for sending event responses to session event requests |
+
+
+ | queue.client-session-event-response.poll-interval |
+ TB_CLIENT_SESSION_EVENT_RESPONSE_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.session.event.response' topics |
+
+
+ | queue.client-session-event-response.max-request-timeout |
+ TB_CLIENT_SESSION_EVENT_RESPONSE_MAX_REQUEST_TIMEOUT |
+ 100000 |
+ Max time in milliseconds for client session events before they are expired |
+
+
+ | queue.client-session-event-response.cleanup-interval |
+ TB_CLIENT_SESSION_EVENT_RESPONSE_CLEANUP_INTERVAL |
+ 100 |
+ Period in milliseconds to clean up stale client session events |
+
+
+ | queue.disconnect-client-command.poll-interval |
+ TB_DISCONNECT_CLIENT_COMMAND_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.disconnect' topics |
+
+
+ | queue.persisted-downlink-msg.consumers-count |
+ TB_PERSISTED_DOWNLINK_MSG_CONSUMERS_COUNT |
+ 2 |
+ Number of parallel consumers for `tbmq.msg.downlink.persisted` topics |
+
+
+ | queue.persisted-downlink-msg.threads-count |
+ TB_PERSISTED_DOWNLINK_MSG_THREADS_COUNT |
+ 2 |
+ Number of threads in the pool to process consumer tasks |
+
+
+ | queue.persisted-downlink-msg.poll-interval |
+ TB_PERSISTED_DOWNLINK_MSG_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.downlink.persisted' topics |
+
+
+ | queue.basic-downlink-msg.consumers-count |
+ TB_BASIC_DOWNLINK_MSG_CONSUMERS_COUNT |
+ 2 |
+ Number of parallel consumers for `tbmq.msg.downlink.basic` topics |
+
+
+ | queue.basic-downlink-msg.threads-count |
+ TB_BASIC_DOWNLINK_MSG_THREADS_COUNT |
+ 2 |
+ Number of threads in the pool to process consumer tasks |
+
+
+ | queue.basic-downlink-msg.poll-interval |
+ TB_BASIC_DOWNLINK_MSG_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.downlink.basic' topics |
+
+
+ | queue.application-removed-event.poll-interval |
+ TB_APPLICATION_REMOVED_EVENT_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.sys.app.removed' topic |
+
+
+ | queue.application-removed-event.processing.cron |
+ TB_APPLICATION_REMOVED_EVENT_PROCESSING_CRON |
+ 0 0 3 * * * |
+ Cron expression defining when to consume and process the messages |
+
+
+ | queue.application-removed-event.processing.zone |
+ TB_APPLICATION_REMOVED_EVENT_PROCESSING_ZONE |
+ UTC |
+ Timezone for the processing cron-job |
+
+
+ | queue.historical-data-total.poll-interval |
+ TB_HISTORICAL_DATA_TOTAL_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.sys.historical.data' topic |
+
+
+ | queue.integration-uplink.poll-interval |
+ TB_IE_UPLINK_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.ie.uplink' topic |
+
+
+ | queue.integration-uplink-notifications.poll-interval |
+ TB_IE_UPLINK_NOTIFICATIONS_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.ie.uplink.notifications' topics |
+
+
+ | queue.internode-notifications.poll-interval |
+ TB_NODE_NOTIFICATION_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.sys.internode.notifications' topics |
+
+
+ | queue.blocked-client.poll-interval |
+ TB_BLOCKED_CLIENT_POLL_INTERVAL |
+ 100 |
+ Interval in milliseconds to poll messages from 'tbmq.client.blocked' topic |
+
+
+ | queue.blocked-client.acknowledge-wait-timeout-ms |
+ TB_BLOCKED_CLIENT_ACK_WAIT_TIMEOUT_MS |
+ 500 |
+ Interval in milliseconds to wait for system messages to be delivered to 'tbmq.client.blocked' topic |
+
+
+ | queue.kafka.bootstrap.servers |
+ TB_KAFKA_SERVERS |
+ localhost:9092 |
+ List of Kafka bootstrap servers used to establish the connection |
+
+
+ | queue.kafka.enable-topic-deletion |
+ TB_KAFKA_ENABLE_TOPIC_DELETION |
+ true |
+ Controls whether TBMQ is allowed to delete Kafka topics that were created for
+ Application MQTT Clients or Application Shared subscriptions.
+ When set to 'true', TBMQ may automatically remove topics during cleanup
+ (for example, when an Application client or shared subscription is deleted).
+ When set to 'false', TBMQ will skip topic deletions and simply stop using them.
+ This helps prevent accidental data loss in production environments |
+
+
+ | queue.kafka.default.consumer.partition-assignment-strategy |
+ TB_KAFKA_DEFAULT_CONSUMER_PARTITION_ASSIGNMENT_STRATEGY |
+ org.apache.kafka.clients.consumer.StickyAssignor |
+ A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used |
+
+
+ | queue.kafka.default.consumer.session-timeout-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_SESSION_TIMEOUT_MS |
+ 10000 |
+ The timeout in milliseconds used to detect client failures when using Kafka's group management facility |
+
+
+ | queue.kafka.default.consumer.max-poll-interval-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_POLL_INTERVAL_MS |
+ 300000 |
+ The maximum delay in milliseconds between invocations of poll() when using consumer group management |
+
+
+ | queue.kafka.default.consumer.max-poll-records |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_POLL_RECORDS |
+ 2000 |
+ The maximum number of records returned in a single call to poll() |
+
+
+ | queue.kafka.default.consumer.max-partition-fetch-bytes |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_PARTITION_FETCH_BYTES |
+ 16777216 |
+ The maximum amount of data in bytes per-partition the server will return |
+
+
+ | queue.kafka.default.consumer.fetch-max-bytes |
+ TB_KAFKA_DEFAULT_CONSUMER_FETCH_MAX_BYTES |
+ 134217728 |
+ The maximum amount of data in bytes the server should return for a fetch request |
+
+
+ | queue.kafka.default.consumer.heartbeat-interval-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_HEARTBEAT_INTERVAL_MS |
+ 3000 |
+ The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities.
+ Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group.
+ The value must be set lower than TB_KAFKA_DEFAULT_CONSUMER_SESSION_TIMEOUT_MS, but typically should be set no higher than 1/3 of that value.
+ It can be adjusted even lower to control the expected time for normal rebalances. Value in milliseconds. Default is 3 sec |
+
+
+ | queue.kafka.default.producer.acks |
+ TB_KAFKA_DEFAULT_PRODUCER_ACKS |
+ 1 |
+ The number of acknowledgments the producer requires the leader to have received before considering a request complete |
+
+
+ | queue.kafka.default.producer.retries |
+ TB_KAFKA_DEFAULT_PRODUCER_RETRIES |
+ 1 |
+ Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error |
+
+
+ | queue.kafka.default.producer.batch-size |
+ TB_KAFKA_DEFAULT_PRODUCER_BATCH_SIZE |
+ 16384 |
+ The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. Size in bytes |
+
+
+ | queue.kafka.default.producer.linger-ms |
+ TB_KAFKA_DEFAULT_PRODUCER_LINGER_MS |
+ 5 |
+ The producer groups together any records that arrive in between request transmissions into a single batched request, set in milliseconds |
+
+
+ | queue.kafka.default.producer.buffer-memory |
+ TB_KAFKA_DEFAULT_PRODUCER_BUFFER_MEMORY |
+ 33554432 |
+ The total bytes of memory the producer can use to buffer records waiting to be sent to the server |
+
+
+ | queue.kafka.default.producer.compression-type |
+ TB_KAFKA_DEFAULT_COMPRESSION_TYPE |
+ none |
+ The compression type for all data generated by the producer. Valid values are `none`, `gzip`, `snappy`, `lz4`, or `zstd` |
+
+
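+ To trade a little latency for throughput, the default producer settings above can be tuned together: larger batches, a longer linger, and compression. A hypothetical sketch (values are illustrative, not recommendations):
+
+ ```shell
+ # Hypothetical throughput-oriented producer tuning.
+ export TB_KAFKA_DEFAULT_PRODUCER_BATCH_SIZE=65536   # 64 KB batches
+ export TB_KAFKA_DEFAULT_PRODUCER_LINGER_MS=20       # wait up to 20 ms to fill a batch
+ export TB_KAFKA_DEFAULT_COMPRESSION_TYPE=lz4        # one of none/gzip/snappy/lz4/zstd
+ ```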
+ | queue.kafka.admin.config |
+ TB_KAFKA_ADMIN_CONFIG |
+ retries:1 |
+ Semicolon-separated list of configs used for Kafka admin client creation |
+
+
+ | queue.kafka.admin.command-timeout |
+ TB_KAFKA_ADMIN_COMMAND_TIMEOUT_SEC |
+ 30 |
+ Kafka Admin client command timeout (in seconds). Applies to operations like describeCluster, listTopics, etc |
+
+
+ | queue.kafka.consumer-stats.enabled |
+ TB_KAFKA_CONSUMER_STATS_ENABLED |
+ true |
+ If enabled, prints the lag between the consumer group offset and the last message offset in Kafka topics |
+
+
+ | queue.kafka.consumer-stats.print-interval-ms |
+ TB_KAFKA_CONSUMER_STATS_PRINT_INTERVAL_MS |
+ 60000 |
+ Statistics printing interval in milliseconds for Kafka's consumer-groups stats |
+
+
+ | queue.kafka.consumer-stats.kafka-response-timeout-ms |
+ TB_KAFKA_CONSUMER_STATS_RESPONSE_TIMEOUT_MS |
+ 1000 |
+ Time to wait in milliseconds for the stats-loading requests to Kafka to finish |
+
+
+ | queue.kafka.consumer-stats.consumer-config |
+ TB_KAFKA_CONSUMER_STATS_CONSUMER_CONFIG |
+ |
+ Semicolon-separated list of configs used for the Kafka stats consumer |
+
+
+ | queue.kafka.home-page.consumer-config |
+ TB_KAFKA_HOME_PAGE_CONSUMER_CONFIG |
+ |
+ Semicolon-separated list of configs used for the Kafka admin client on the home page |
+
+
+ | queue.kafka.home-page.kafka-response-timeout-ms |
+ TB_KAFKA_HOME_PAGE_RESPONSE_TIMEOUT_MS |
+ 1000 |
+ Time to wait in milliseconds for the home page requests to Kafka to finish |
+
+
+ | queue.kafka.msg-all.topic |
+ TB_KAFKA_MSG_ALL_TOPIC |
+ tbmq.msg.all |
+ Topic for persisting incoming PUBLISH messages |
+
+
+ | queue.kafka.msg-all.topic-properties |
+ TB_KAFKA_MSG_ALL_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:2147483648;partitions:16;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.all` topic |
+
+
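+ Topic properties are passed as a single semicolon-separated string of key:value pairs. For instance, to give `tbmq.msg.all` more partitions and a shorter retention (illustrative values):
+
+ ```shell
+ # Hypothetical override of the tbmq.msg.all topic properties:
+ # 24 partitions, 3-day retention, same segment size and replication factor.
+ export TB_KAFKA_MSG_ALL_TOPIC_PROPERTIES="retention.ms:259200000;segment.bytes:26214400;retention.bytes:2147483648;partitions:24;replication.factor:1"
+ ```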
+ | queue.kafka.msg-all.additional-consumer-config |
+ TB_KAFKA_MSG_ALL_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.all` topic |
+
+
+ | queue.kafka.msg-all.additional-producer-config |
+ TB_KAFKA_MSG_ALL_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.all` topic |
+
+
+ | queue.kafka.application-persisted-msg.topic-properties |
+ TB_KAFKA_APP_PERSISTED_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.app` topics |
+
+
+ | queue.kafka.application-persisted-msg.additional-consumer-config |
+ TB_KAFKA_APP_PERSISTED_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ max.poll.records:200 |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.app` topics |
+
+
+ | queue.kafka.application-persisted-msg.additional-producer-config |
+ TB_KAFKA_APP_PERSISTED_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.app` topics |
+
+
+ | queue.kafka.application-persisted-msg.shared-topic.topic-properties |
+ TB_KAFKA_APP_PERSISTED_MSG_SHARED_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;replication.factor:1 |
+ Kafka topic properties separated by semicolon for application shared subscription topics |
+
+
+ | queue.kafka.application-persisted-msg.shared-topic.additional-consumer-config |
+ TB_KAFKA_APP_PERSISTED_MSG_SHARED_ADDITIONAL_CONSUMER_CONFIG |
+ max.poll.records:500 |
+ Additional Kafka consumer configs separated by semicolon for application shared subscription topics |
+
+
+ | queue.kafka.application-persisted-msg.shared-topic.additional-producer-config |
+ TB_KAFKA_APP_PERSISTED_MSG_SHARED_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for application shared subscription topics |
+
+
+ | queue.kafka.device-persisted-msg.topic |
+ TB_KAFKA_DEVICE_PERSISTED_MSG_TOPIC |
+ tbmq.msg.persisted |
+ Topic for persisting messages related to Device clients before saving them in the database |
+
+
+ | queue.kafka.device-persisted-msg.topic-properties |
+ TB_KAFKA_DEVICE_PERSISTED_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:12;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.persisted` topic |
+
+
+ | queue.kafka.device-persisted-msg.additional-consumer-config |
+ TB_KAFKA_DEVICE_PERSISTED_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.persisted` topic |
+
+
+ | queue.kafka.device-persisted-msg.additional-producer-config |
+ TB_KAFKA_DEVICE_PERSISTED_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.persisted` topic |
+
+
+ | queue.kafka.retained-msg.topic |
+ TB_KAFKA_RETAINED_MSG_TOPIC |
+ tbmq.msg.retained |
+ Topic for retained messages |
+
+
+ | queue.kafka.retained-msg.topic-properties |
+ TB_KAFKA_RETAINED_MSG_TOPIC_PROPERTIES |
+ segment.bytes:26214400;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.retained` topic |
+
+
+ | queue.kafka.retained-msg.additional-consumer-config |
+ TB_KAFKA_RETAINED_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.retained` topic |
+
+
+ | queue.kafka.retained-msg.additional-producer-config |
+ TB_KAFKA_RETAINED_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ retries:3 |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.retained` topic |
+
+
+ | queue.kafka.client-session.topic |
+ TB_KAFKA_CLIENT_SESSION_TOPIC |
+ tbmq.client.session |
+ Topic for persisting client sessions |
+
+
+ | queue.kafka.client-session.topic-properties |
+ TB_KAFKA_CLIENT_SESSION_TOPIC_PROPERTIES |
+ segment.bytes:26214400;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.session` topic |
+
+
+ | queue.kafka.client-session.additional-consumer-config |
+ TB_KAFKA_CLIENT_SESSION_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.session` topic |
+
+
+ | queue.kafka.client-session.additional-producer-config |
+ TB_KAFKA_CLIENT_SESSION_ADDITIONAL_PRODUCER_CONFIG |
+ retries:3 |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.session` topic |
+
+
+ | queue.kafka.client-subscriptions.topic |
+ TB_KAFKA_CLIENT_SUBSCRIPTIONS_TOPIC |
+ tbmq.client.subscriptions |
+ Topic for persisting client subscriptions |
+
+
+ | queue.kafka.client-subscriptions.topic-properties |
+ TB_KAFKA_CLIENT_SUBSCRIPTIONS_TOPIC_PROPERTIES |
+ segment.bytes:26214400;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.subscriptions` topic |
+
+
+ | queue.kafka.client-subscriptions.additional-consumer-config |
+ TB_KAFKA_CLIENT_SUBSCRIPTIONS_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.subscriptions` topic |
+
+
+ | queue.kafka.client-subscriptions.additional-producer-config |
+ TB_KAFKA_CLIENT_SUBSCRIPTIONS_ADDITIONAL_PRODUCER_CONFIG |
+ retries:3 |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.subscriptions` topic |
+
+
+ | queue.kafka.client-session-event.topic |
+ TB_KAFKA_CLIENT_SESSION_EVENT_TOPIC |
+ tbmq.client.session.event.request |
+ Topic for sending client session event requests |
+
+
+ | queue.kafka.client-session-event.topic-properties |
+ TB_KAFKA_CLIENT_SESSION_EVENT_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:24;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.session.event.request` topic |
+
+
+ | queue.kafka.client-session-event.additional-consumer-config |
+ TB_KAFKA_CLIENT_SESSION_EVENT_ADDITIONAL_CONSUMER_CONFIG |
+ max.poll.records:1000 |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.session.event.request` topic |
+
+
+ | queue.kafka.client-session-event.additional-producer-config |
+ TB_KAFKA_CLIENT_SESSION_EVENT_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.session.event.request` topic |
+
+
+ | queue.kafka.client-session-event-response.topic-prefix |
+ TB_KAFKA_CLIENT_SESSION_EVENT_RESPONSE_TOPIC_PREFIX |
+ tbmq.client.session.event.response |
+ Prefix for topics for sending client session event responses to Broker nodes |
+
+
+ | queue.kafka.client-session-event-response.topic-properties |
+ TB_KAFKA_CLIENT_SESSION_EVENT_RESPONSE_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.session.event.response` topics |
+
+
+ | queue.kafka.client-session-event-response.additional-consumer-config |
+ TB_KAFKA_CLIENT_SESSION_EVENT_RESPONSE_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.session.event.response` topics |
+
+
+ | queue.kafka.client-session-event-response.additional-producer-config |
+ TB_KAFKA_CLIENT_SESSION_EVENT_RESPONSE_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.session.event.response` topics |
+
+
+ | queue.kafka.disconnect-client-command.topic-prefix |
+ TB_KAFKA_DISCONNECT_CLIENT_COMMAND_TOPIC_PREFIX |
+ tbmq.client.disconnect |
+ Prefix for topics for sending disconnect client commands to Broker nodes |
+
+
+ | queue.kafka.disconnect-client-command.topic-properties |
+ TB_KAFKA_DISCONNECT_CLIENT_COMMAND_RESPONSE_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.disconnect` topics |
+
+
+ | queue.kafka.disconnect-client-command.additional-consumer-config |
+ TB_KAFKA_DISCONNECT_CLIENT_COMMAND_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.disconnect` topics |
+
+
+ | queue.kafka.disconnect-client-command.additional-producer-config |
+ TB_KAFKA_DISCONNECT_CLIENT_COMMAND_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.disconnect` topics |
+
+
+ | queue.kafka.basic-downlink-msg.topic-prefix |
+ TB_KAFKA_BASIC_DOWNLINK_MSG_TOPIC_PREFIX |
+ tbmq.msg.downlink.basic |
+ Prefix for topics for non-persistent Device messages that should be transferred to other Broker nodes |
+
+
+ | queue.kafka.basic-downlink-msg.topic-properties |
+ TB_KAFKA_BASIC_DOWNLINK_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:12;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.downlink.basic` topics |
+
+
+ | queue.kafka.basic-downlink-msg.additional-consumer-config |
+ TB_KAFKA_BASIC_DOWNLINK_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.downlink.basic` topics |
+
+
+ | queue.kafka.basic-downlink-msg.additional-producer-config |
+ TB_KAFKA_BASIC_DOWNLINK_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ batch.size:32768 |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.downlink.basic` topics |
+
+
+ | queue.kafka.persisted-downlink-msg.topic-prefix |
+ TB_KAFKA_PERSISTED_DOWNLINK_MSG_TOPIC_PREFIX |
+ tbmq.msg.downlink.persisted |
+ Prefix for topics for persistent Device messages that should be transferred to other Broker nodes |
+
+
+ | queue.kafka.persisted-downlink-msg.topic-properties |
+ TB_KAFKA_PERSISTED_DOWNLINK_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:12;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.downlink.persisted` topics |
+
+
+ | queue.kafka.persisted-downlink-msg.additional-consumer-config |
+ TB_KAFKA_PERSISTED_DOWNLINK_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.downlink.persisted` topics |
+
+
+ | queue.kafka.persisted-downlink-msg.additional-producer-config |
+ TB_KAFKA_PERSISTED_DOWNLINK_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.downlink.persisted` topics |
+
+
+ | queue.kafka.application-removed-event.topic |
+ TB_KAFKA_APPLICATION_REMOVED_EVENT_TOPIC |
+ tbmq.sys.app.removed |
+ Topic for sending events to remove application topics when application clients are changed to device clients |
+
+
+ | queue.kafka.application-removed-event.topic-properties |
+ TB_KAFKA_APPLICATION_REMOVED_EVENT_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.sys.app.removed` topic |
+
+
+ | queue.kafka.application-removed-event.additional-consumer-config |
+ TB_KAFKA_APPLICATION_REMOVED_EVENT_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.sys.app.removed` topic |
+
+
+ | queue.kafka.application-removed-event.additional-producer-config |
+ TB_KAFKA_APPLICATION_REMOVED_EVENT_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.sys.app.removed` topic |
+
+
+ | queue.kafka.historical-data-total.topic |
+ TB_KAFKA_HISTORICAL_DATA_TOTAL_TOPIC |
+ tbmq.sys.historical.data |
+ Topic for sending historical data stats to be summed from each broker |
+
+
+ | queue.kafka.historical-data-total.topic-properties |
+ TB_KAFKA_HISTORICAL_DATA_TOTAL_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.sys.historical.data` topic |
+
+
+ | queue.kafka.historical-data-total.additional-consumer-config |
+ TB_KAFKA_HISTORICAL_DATA_TOTAL_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.sys.historical.data` topic |
+
+
+ | queue.kafka.historical-data-total.additional-producer-config |
+ TB_KAFKA_HISTORICAL_DATA_TOTAL_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.sys.historical.data` topic |
+
+
+ | queue.kafka.integration-downlink.topic-prefix |
+ TB_KAFKA_IE_DOWNLINK_TOPIC_PREFIX |
+ tbmq.ie.downlink |
+ Prefix for topics for sending integration configurations and validation requests from tbmq to integration executors |
+
+
+ | queue.kafka.integration-downlink.http.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_HTTP_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.http.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_HTTP_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.http.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_HTTP_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_MQTT_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_MQTT_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_MQTT_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-uplink.topic |
+ TB_KAFKA_IE_UPLINK_TOPIC |
+ tbmq.ie.uplink |
+ Topic for sending messages/events from integration executors to tbmq |
+
+
+ | queue.kafka.integration-uplink.topic-properties |
+ TB_KAFKA_IE_UPLINK_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink.additional-consumer-config |
+ TB_KAFKA_IE_UPLINK_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink.additional-producer-config |
+ TB_KAFKA_IE_UPLINK_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink-notifications.topic-prefix |
+ TB_KAFKA_IE_UPLINK_NOTIF_TOPIC_PREFIX |
+ tbmq.ie.uplink.notifications |
+ Prefix for topics for sending notifications or replies from integration executors to specific tbmq node |
+
+
+ | queue.kafka.integration-uplink-notifications.topic-properties |
+ TB_KAFKA_IE_UPLINK_NOTIF_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.uplink.notifications` topics |
+
+
+ | queue.kafka.integration-uplink-notifications.additional-consumer-config |
+ TB_KAFKA_IE_UPLINK_NOTIF_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.uplink.notifications` topic |
+
+
+ | queue.kafka.integration-uplink-notifications.additional-producer-config |
+ TB_KAFKA_IE_UPLINK_NOTIF_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.uplink.notifications` topic |
+
+
+ | queue.kafka.integration-msg.topic-properties |
+ TB_KAFKA_IE_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.integration-msg.additional-consumer-config |
+ TB_KAFKA_IE_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ max.poll.records:50 |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.integration-msg.additional-producer-config |
+ TB_KAFKA_IE_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.internode-notifications.topic-prefix |
+ TB_KAFKA_INTERNODE_NOTIFICATIONS_TOPIC_PREFIX |
+ tbmq.sys.internode.notifications |
+ Prefix for topics for sending system notifications to Broker nodes |
+
+
+ | queue.kafka.internode-notifications.topic-properties |
+ TB_KAFKA_INTERNODE_NOTIFICATIONS_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.sys.internode.notifications` topics |
+
+
+ | queue.kafka.internode-notifications.additional-consumer-config |
+ TB_KAFKA_INTERNODE_NOTIFICATIONS_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.sys.internode.notifications` topics |
+
+
+ | queue.kafka.internode-notifications.additional-producer-config |
+ TB_KAFKA_INTERNODE_NOTIFICATIONS_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.sys.internode.notifications` topics |
+
+
+ | queue.kafka.blocked-client.topic |
+ TB_KAFKA_BLOCKED_CLIENT_TOPIC |
+ tbmq.client.blocked |
+ Topic for blocked clients |
+
+
+ | queue.kafka.blocked-client.topic-properties |
+ TB_KAFKA_BLOCKED_CLIENT_TOPIC_PROPERTIES |
+ segment.bytes:26214400;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.client.blocked` topic |
+
+
+ | queue.kafka.blocked-client.additional-consumer-config |
+ TB_KAFKA_BLOCKED_CLIENT_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.client.blocked` topic |
+
+
+ | queue.kafka.blocked-client.additional-producer-config |
+ TB_KAFKA_BLOCKED_CLIENT_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.client.blocked` topic |
+
+
+ | queue.kafka.kafka-prefix |
+ TB_KAFKA_PREFIX |
+ |
+ The common prefix for all Kafka topics, producers, consumer groups, and consumers. Defaults to an empty string, meaning no prefix is added |
+
+
+
+
+
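+All of the `topic-properties` and `additional-*-config` values above use the same format: `key:value` pairs separated by semicolons. For example, a hypothetical deployment that needs more partitions and shorter retention for basic downlink messages could override the defaults via environment variables (the service name below is illustrative, not part of TBMQ):
+
+```yaml
+# Hypothetical docker-compose fragment; adjust names and values to your deployment
+services:
+  tbmq:
+    environment:
+      # Kafka topic properties: semicolon-separated key:value pairs
+      TB_KAFKA_BASIC_DOWNLINK_MSG_TOPIC_PROPERTIES: "retention.ms:86400000;segment.bytes:26214400;retention.bytes:1048576000;partitions:24;replication.factor:3"
+      # Additional producer configs follow the same format
+      TB_KAFKA_BASIC_DOWNLINK_MSG_ADDITIONAL_PRODUCER_CONFIG: "batch.size:65536;linger.ms:5"
+```
+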
+## General service parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | service.type |
+ TB_SERVICE_TYPE |
+ tbmq |
+ Microservice type. Allowed value: tbmq |
+
+
+ | service.id |
+ TB_SERVICE_ID |
+ |
+ Unique id for this service (autogenerated if empty) |
+
+
+
+
+
+## Actor system parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | actors.system.throughput |
+ ACTORS_SYSTEM_THROUGHPUT |
+ 5 |
+ Number of messages the actor system will process per actor before switching to processing messages for the next actor |
+
+
+ | actors.system.scheduler-pool-size |
+ ACTORS_SYSTEM_SCHEDULER_POOL_SIZE |
+ 1 |
+ Thread pool size for actor system scheduler |
+
+
+ | actors.system.max-actor-init-attempts |
+ ACTORS_SYSTEM_MAX_ACTOR_INIT_ATTEMPTS |
+ 10 |
+ Maximum number of attempts to initialize the actor before disabling it |
+
+
+ | actors.system.processing-metrics.enabled |
+ ACTORS_SYSTEM_PROCESSING_METRICS_ENABLED |
+ false |
+ Enable/disable actors processing metrics |
+
+
+ | actors.system.disconnect-wait-timeout-ms |
+ ACTORS_SYSTEM_DISCONNECT_WAIT_TIMEOUT_MS |
+ 2000 |
+ Actors disconnect timeout in milliseconds |
+
+
+ | actors.persisted-device.dispatcher-pool-size |
+ ACTORS_SYSTEM_PERSISTED_DEVICE_DISPATCHER_POOL_SIZE |
+ 8 |
+ Number of threads processing the Device actor's messages |
+
+
+ | actors.persisted-device.wait-before-actor-stop-minutes |
+ ACTORS_SYSTEM_PERSISTED_DEVICE_WAIT_BEFORE_ACTOR_STOP_MINUTES |
+ 5 |
+ Minutes to wait before deleting Device actor after disconnect |
+
+
+ | actors.client.dispatcher-pool-size |
+ ACTORS_SYSTEM_CLIENT_DISPATCHER_POOL_SIZE |
+ 8 |
+ Number of threads processing the MQTT client actors' messages |
+
+
+ | actors.client.wait-before-generated-actor-stop-seconds |
+ ACTORS_SYSTEM_CLIENT_WAIT_BEFORE_GENERATED_ACTOR_STOP_SECONDS |
+ 10 |
+ Time in seconds to wait until the actor is stopped for clients that did not specify client id |
+
+
+ | actors.client.wait-before-named-actor-stop-seconds |
+ ACTORS_SYSTEM_CLIENT_WAIT_BEFORE_NAMED_ACTOR_STOP_SECONDS |
+ 60 |
+ Time in seconds to wait until the actor is stopped for clients that specified client id |
+
+
+ | actors.rule.mail_thread_pool_size |
+ ACTORS_RULE_MAIL_THREAD_POOL_SIZE |
+ 4 |
+ Thread pool size for mail sender executor service |
+
+
+ | actors.rule.mail_password_reset_thread_pool_size |
+ ACTORS_RULE_MAIL_PASSWORD_RESET_THREAD_POOL_SIZE |
+ 4 |
+ Thread pool size for password reset emails executor service |
+
+
+
+
+
+## Platform integrations parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | integrations.init.connection-check-api-request-timeout-sec |
+ INTEGRATIONS_INIT_CONNECTION_CHECK_API_REQUEST_TIMEOUT_SEC |
+ 20 |
+ Connection check timeout for API request in seconds |
+
+
+ | integrations.cleanup.period |
+ INTEGRATIONS_CLEANUP_PERIOD_SEC |
+ 10800 |
+ The parameter to specify the execution period of the cleanup task for disconnected integrations. Value set in seconds. Default value corresponds to three hours |
+
+
+ | integrations.cleanup.ttl |
+ INTEGRATIONS_CLEANUP_TTL_SEC |
+ 604800 |
+ Administration TTL (in seconds) for cleaning up disconnected integrations.
+ The cleanup removes integration topics that persist messages.
+ Default value corresponds to one week. A value of 0 or negative disables this TTL |
+
+
+
+
+
+## Database time series parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | database.ts_max_intervals |
+ DATABASE_TS_MAX_INTERVALS |
+ 700 |
+ Max number of DB queries generated by single API call to fetch time series records |
+
+
+
+
+
+## SQL configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | sql.batch_sort |
+ SQL_BATCH_SORT |
+ true |
+ Specify whether to sort entities before batch update. Should be enabled for cluster mode to avoid deadlocks |
+
+
+ | sql.ts_key_value_partitioning |
+ SQL_TS_KV_PARTITIONING |
+ DAYS |
+ Specify the partitioning size for timestamp key-value storage. Allowed values: DAYS, MONTHS, YEARS, INDEFINITE |
+
+
+ | sql.remove_null_chars |
+ SQL_REMOVE_NULL_CHARS |
+ true |
+ Specify whether to remove null characters from strValue before insert |
+
+
+ | sql.ts.batch_size |
+ SQL_TS_BATCH_SIZE |
+ 1000 |
+ Batch size for persisting time series inserts |
+
+
+ | sql.ts.batch_max_delay |
+ SQL_TS_BATCH_MAX_DELAY_MS |
+ 100 |
+ Max timeout for time series entries queue polling. Value set in milliseconds |
+
+
+ | sql.ts.batch_threads |
+ SQL_TS_BATCH_THREADS |
+ 3 |
+ Number of threads that execute batch insert/update statements for time series data. Batch thread count has to be a prime number like 3 or 5 to gain perfect hash distribution |
+
+
+ | sql.ts_latest.batch_size |
+ SQL_TS_LATEST_BATCH_SIZE |
+ 1000 |
+ Batch size for persisting latest time series inserts |
+
+
+ | sql.ts_latest.batch_max_delay |
+ SQL_TS_LATEST_BATCH_MAX_DELAY_MS |
+ 50 |
+ Max timeout for latest time series entries queue polling. Value set in milliseconds |
+
+
+ | sql.ts_latest.batch_threads |
+ SQL_TS_LATEST_BATCH_THREADS |
+ 3 |
+ Number of threads that execute batch insert/update statements for latest time series data. Batch thread count has to be a prime number like 3 or 5 to gain perfect hash distribution |
+
+
+ | sql.unauthorized-client.insert.batch_size |
+ SQL_UNAUTHORIZED_CLIENT_INSERT_BATCH_SIZE |
+ 1000 |
+ Batch size for persisting unauthorized client inserts |
+
+
+ | sql.unauthorized-client.insert.batch_max_delay |
+ SQL_UNAUTHORIZED_CLIENT_INSERT_BATCH_MAX_DELAY_MS |
+ 50 |
+ Max timeout for unauthorized client insert entries queue polling. Value set in milliseconds |
+
+
+ | sql.unauthorized-client.insert.batch_threads |
+ SQL_UNAUTHORIZED_CLIENT_INSERT_BATCH_THREADS |
+ 3 |
+ Number of threads that execute batch insert/update statements for unauthorized client data. Batch thread count has to be a prime number like 3 or 5 to gain perfect hash distribution |
+
+
+ | sql.unauthorized-client.delete.batch_size |
+ SQL_UNAUTHORIZED_CLIENT_DELETE_BATCH_SIZE |
+ 1000 |
+ Batch size for processing unauthorized client deletes |
+
+
+ | sql.unauthorized-client.delete.batch_max_delay |
+ SQL_UNAUTHORIZED_CLIENT_DELETE_BATCH_MAX_DELAY_MS |
+ 50 |
+ Max timeout for unauthorized client delete entries queue polling. Value set in milliseconds |
+
+
+ | sql.unauthorized-client.delete.batch_threads |
+ SQL_UNAUTHORIZED_CLIENT_DELETE_BATCH_THREADS |
+ 3 |
+ Number of threads that execute batch delete statements for unauthorized client data. Batch thread count has to be a prime number like 3 or 5 to gain perfect hash distribution |
+
+
+ | sql.events.batch_size |
+ SQL_EVENTS_BATCH_SIZE |
+ 10000 |
+ Batch size for persisting events updates |
+
+
+ | sql.events.batch_max_delay |
+ SQL_EVENTS_BATCH_MAX_DELAY_MS |
+ 100 |
+ Max timeout for events entries queue polling. The value set in milliseconds |
+
+
+ | sql.events.batch_threads |
+ SQL_EVENTS_BATCH_THREADS |
+ 3 |
+ Number of threads that execute batch insert/update statements for events. Batch thread count has to be a prime number like 3 or 5 to gain perfect hash distribution |
+
+
+ | sql.events.partition_size |
+ SQL_EVENTS_REGULAR_PARTITION_SIZE_HOURS |
+ 168 |
+ Number of hours to partition the events. The current value corresponds to one week |
+
+
+ | sql.events.max-symbols |
+ SQL_EVENTS_MAX_SYMBOLS |
+ 4096 |
+ Maximum number of symbols per event. The event content will be truncated if needed |
+
+
+ | sql.ttl.ts.enabled |
+ SQL_TTL_TS_ENABLED |
+ true |
+ The parameter to specify whether to use TTL (Time To Live) for time series records |
+
+
+ | sql.ttl.ts.execution_interval_ms |
+ SQL_TTL_TS_EXECUTION_INTERVAL_MS |
+ 86400000 |
+ The parameter to specify the execution period of the TTL task for time series records. Value set in milliseconds. Default value corresponds to one day |
+
+
+ | sql.ttl.ts.ts_key_value_ttl |
+ SQL_TTL_TS_KEY_VALUE_TTL |
+ 604800 |
+ The parameter to specify the system TTL (Time To Live) value for time series records. Value set in seconds. 0 - records never expire. Default value corresponds to seven days |
+
+
+ | sql.ttl.unauthorized_client.enabled |
+ SQL_TTL_UNAUTHORIZED_CLIENT_ENABLED |
+ true |
+ The parameter to specify whether to use TTL (Time To Live) for unauthorized clients |
+
+
+ | sql.ttl.unauthorized_client.execution_interval_ms |
+ SQL_TTL_UNAUTHORIZED_CLIENT_EXECUTION_INTERVAL_MS |
+ 86400000 |
+ The parameter to specify the execution period of the TTL task for unauthorized clients. Value set in milliseconds. Default value corresponds to one day |
+
+
+ | sql.ttl.unauthorized_client.ttl |
+ SQL_TTL_UNAUTHORIZED_CLIENT_TTL |
+ 259200 |
+ The parameter to specify the system TTL (Time To Live) value for unauthorized clients. Value set in seconds. 0 - records never expire. Default value corresponds to three days |
+
+
+ | sql.ttl.events.enabled |
+ SQL_TTL_EVENTS_ENABLED |
+ true |
+ Enable/disable TTL (Time To Live) for event records |
+
+
+ | sql.ttl.events.execution_interval_ms |
+ SQL_TTL_EVENTS_EXECUTION_INTERVAL_MS |
+ 3600000 |
+ Number of milliseconds (max random initial delay and fixed period). Defaults to 1 hour |
+
+
+ | sql.ttl.events.events_ttl |
+ SQL_TTL_EVENTS_TTL_SEC |
+ 1209600 |
+ Number of seconds for TTL. TTL is set to 14 days by default. The accuracy of the cleanup depends on the sql.events.partition_size parameter |
+
+
+
+
+
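+As a sketch of how the TTL parameters combine, the following hypothetical override keeps time series records for 30 days instead of the 7-day default, while the cleanup task still runs once a day (values are examples, not recommendations):
+
+```bash
+# Hypothetical environment overrides for time series retention
+export SQL_TTL_TS_ENABLED=true
+export SQL_TTL_TS_EXECUTION_INTERVAL_MS=86400000   # run the TTL task daily
+export SQL_TTL_TS_KEY_VALUE_TTL=2592000            # 30 days, in seconds
+```
+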
+## Redis lettuce configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | lettuce.auto-flush |
+ REDIS_LETTUCE_CMDS_AUTO_FLUSH_ENABLED |
+ true |
+ Enable/disable auto-flush. If disabled, commands are buffered and flushed based on cmd count or time interval |
+
+
+ | lettuce.buffered-cmd-count |
+ REDIS_LETTUCE_BUFFERED_CMDS_COUNT |
+ 5 |
+ Number of buffered commands before flush is triggered. Used when auto-flush is disabled |
+
+
+ | lettuce.flush-interval-ms |
+ REDIS_LETTUCE_FLUSH_INTERVAL_MS |
+ 5 |
+ Maximum time in milliseconds to buffer commands before flushing, regardless of cmd count |
+
+
+ | lettuce.config.command-timeout |
+ REDIS_LETTUCE_COMMAND_TIMEOUT_SEC |
+ 30 |
+ Maximum time (in seconds) to wait for a lettuce command to complete.
+ This affects health checks and any command execution (e.g. GET, SET, PING).
+ Reduce this to fail fast if Redis is unresponsive |
+
+
+ | lettuce.config.shutdown-quiet-period |
+ REDIS_LETTUCE_SHUTDOWN_QUIET_PERIOD_SEC |
+ 1 |
+ The shutdown quiet period for lettuce client set in seconds |
+
+
+ | lettuce.config.shutdown-timeout |
+ REDIS_LETTUCE_SHUTDOWN_TIMEOUT_SEC |
+ 10 |
+ The shutdown timeout for lettuce client set in seconds |
+
+
+ | lettuce.config.cluster.topology-refresh.enabled |
+ REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED |
+ false |
+ Enables or disables periodic cluster topology updates.
+ Useful for Redis Cluster setup to handle topology changes,
+ such as node failover, restarts, or IP address changes |
+
+
+ | lettuce.config.cluster.topology-refresh.period |
+ REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_PERIOD_SEC |
+ 60 |
+ Specifies the interval (in seconds) for periodic cluster topology updates |
+
+
+
+
+
+## Redis jedis configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | jedis.cluster.topology-refresh.enabled |
+ REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED |
+ false |
+ Enables or disables periodic cluster topology updates.
+ Useful for Redis cluster setup to handle topology changes,
+ such as node failover, restarts, or IP address changes |
+
+
+ | jedis.cluster.topology-refresh.period |
+ REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_PERIOD_SEC |
+ 60 |
+ Specifies the interval (in seconds) for periodic cluster topology updates |
+
+
+
+
+
+## SQL DAO configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | spring.data.jpa.repositories.enabled |
+ SPRING_DATA_JPA_REPOSITORIES_ENABLED |
+ true |
+ Enable/Disable the Spring Data JPA repositories support |
+
+
+ | spring.jpa.open-in-view |
+ SPRING_JPA_OPEN_IN_VIEW |
+ false |
+ Enable/disable OSIV |
+
+
+ | spring.jpa.hibernate.ddl-auto |
+ SPRING_JPA_HIBERNATE_DDL_AUTO |
+ none |
+ You can set a Hibernate feature that controls the DDL behavior in a more fine-grained way.
+ The standard Hibernate property values are none, validate, update, create, create-drop.
+ Spring Boot chooses a default value for you based on whether it thinks your database is embedded (default create-drop) or not (default none) |
+
+
+ | spring.datasource.driverClassName |
+ SPRING_DRIVER_CLASS_NAME |
+ org.postgresql.Driver |
+ Database driver for Spring JPA |
+
+
+ | spring.datasource.url |
+ SPRING_DATASOURCE_URL |
+ jdbc:postgresql://localhost:5432/thingsboard_mqtt_broker |
+ Database connection URL |
+
+
+ | spring.datasource.username |
+ SPRING_DATASOURCE_USERNAME |
+ postgres |
+ Database username |
+
+
+ | spring.datasource.password |
+ SPRING_DATASOURCE_PASSWORD |
+ postgres |
+ Database user password |
+
+
+ | spring.datasource.hikari.maximumPoolSize |
+ SPRING_DATASOURCE_MAXIMUM_POOL_SIZE |
+ 16 |
+ This property allows the number of connections in the pool to increase as demand increases.
+ At the same time, the property ensures that the pool doesn't grow to the point of exhausting a system's resources, which ultimately affects an application's performance and availability |
+
+
+ | spring.datasource.hikari.maxLifetime |
+ SPRING_DATASOURCE_MAX_LIFETIME |
+ 600000 |
+ This property controls the maximum lifetime of a connection in milliseconds. A connection is removed only after it is closed. Default is 10 minutes |
+
+
+ | spring.datasource.hikari.connectionTimeout |
+ SPRING_DATASOURCE_CONNECTION_TIMEOUT_MS |
+ 30000 |
+ Maximum time (in milliseconds) HikariCP will wait to acquire a connection from the pool.
+ If exceeded, an exception is thrown. Default is 30 seconds |
+
+
+
+
+
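+For instance, pointing TBMQ at an external PostgreSQL instance is a matter of overriding the datasource variables. The host and credentials below are placeholders, not defaults:
+
+```bash
+# Hypothetical connection settings for an external database
+export SPRING_DATASOURCE_URL="jdbc:postgresql://my-db-host:5432/thingsboard_mqtt_broker"
+export SPRING_DATASOURCE_USERNAME="tbmq_user"
+export SPRING_DATASOURCE_PASSWORD="secret"
+export SPRING_DATASOURCE_MAXIMUM_POOL_SIZE=32
+```
+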
+## General Spring parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | spring.lifecycle.timeout-per-shutdown-phase |
+ SPRING_LIFECYCLE_TIMEOUT_PER_SHUTDOWN_PHASE |
+ 1m |
+ The server will wait for active requests to finish their work up to a specified amount of time before graceful shutdown |
+
+
+ | spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation |
+ SPRING_JPA_HIBERNATE_JDBC_LOB_NON_CONTEXTUAL_CREATION |
+ true |
+ Setting this property to true disables contextual LOB creation and forces the use of Hibernate's own LOB implementation. Fixes Postgres JPA Error |
+
+
+ | spring.jpa.properties.hibernate.order_by.default_null_ordering |
+ SPRING_JPA_HIBERNATE_ORDER_BY_DEFAULT_NULL_ORDERING |
+ last |
+ Default ordering for null values |
+
+
+ | spring.data.redis.repositories.enabled |
+ SPRING_DATA_REDIS_REPOSITORIES_ENABLED |
+ false |
+ Disables redis repositories scanning |
+
+
+ | spring.freemarker.checkTemplateLocation |
+ SPRING_FREEMARKER_CHECK_TEMPLATE_LOCATION |
+ false |
+ Spring freemarker configuration to check that the templates location exists |
+
+
+ | spring.mvc.async.request-timeout |
+ SPRING_MVC_ASYNC_REQUEST_TIMEOUT |
+ 30000 |
+ The default timeout for asynchronous requests in milliseconds |
+
+
+ | spring.mvc.pathmatch.matching-strategy |
+ SPRING_MVC_PATH_MATCH_MATCHING_STRATEGY |
+ ANT_PATH_MATCHER |
+ For endpoints matching in Swagger |
+
+
+ | spring.servlet.multipart.max-file-size |
+ SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE |
+ 50MB |
+ Total file size cannot exceed 50MB when configuring file uploads |
+
+
+ | spring.servlet.multipart.max-request-size |
+ SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE |
+ 50MB |
+ Total request size for a multipart/form-data cannot exceed 50MB |
+
+
+
+
+
+## Security parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | security.mqtt.auth_strategy |
+ SECURITY_MQTT_AUTH_STRATEGY |
+ BOTH |
+ DEPRECATED: BOTH or SINGLE. BOTH means the first attempt to authenticate the client is made by the 'basic' provider,
+ then by the 'ssl' provider if 'basic' is not successful;
+ SINGLE means only one attempt is made, according to the listener communication chosen (see listener.tcp/listener.ssl) |
+
+
+ | security.mqtt.basic.enabled |
+ SECURITY_MQTT_BASIC_ENABLED |
+ false |
+ DEPRECATED: If enabled, the server will try to authenticate the client with clientId and/or username and/or password |
+
+
+ | security.mqtt.ssl.enabled |
+ SECURITY_MQTT_SSL_ENABLED |
+ false |
+ DEPRECATED: If enabled, the server will try to authenticate the client with a client certificate chain |
+
+
+ | security.mqtt.ssl.skip_validity_check_for_client_cert |
+ SECURITY_MQTT_SSL_SKIP_VALIDITY_CHECK_FOR_CLIENT_CERT |
+ false |
+ DEPRECATED: Skip certificate validity check for client certificates |
+
+
+ | security.jwt.tokenExpirationTime |
+ JWT_TOKEN_EXPIRATION_TIME |
+ 9000 |
+ User JWT Token expiration time in seconds (2.5 hours) |
+
+
+ | security.jwt.refreshTokenExpTime |
+ JWT_REFRESH_TOKEN_EXPIRATION_TIME |
+ 604800 |
+ User JWT Refresh Token expiration time in seconds (1 week) |
+
+
+ | security.jwt.tokenIssuer |
+ JWT_TOKEN_ISSUER |
+ thingsboard.io |
+ User JWT Token issuer |
+
+
+ | security.jwt.tokenSigningKey |
+ JWT_TOKEN_SIGNING_KEY |
+ Qk1xUnloZ0VQTlF1VlNJQXZ4cWhiNWt1cVd1ZzQ5cWpENUhMSHlaYmZIM0JrZ2pPTVlhQ3N1Z0ZMUnd0SDBieg== |
+ User JWT Token signing key |
+
+
+ | security.basic.enabled |
+ SECURITY_BASIC_ENABLED |
+ false |
+ Enable/Disable basic security options |
+
+
+ | security.user_token_access_enabled |
+ SECURITY_USER_TOKEN_ACCESS_ENABLED |
+ true |
+ Enable/disable access to other Administrators' JWT tokens by the System Administrator |
+
+
+ | security.user_login_case_sensitive |
+ SECURITY_USER_LOGIN_CASE_SENSITIVE |
+ true |
+ Enable/disable case-sensitive username login |
+
+
+ | security.oauth2.loginProcessingUrl |
+ SECURITY_OAUTH2_LOGIN_PROCESSING_URL |
+ /login/oauth2/code/ |
+ Redirect URL where access code from external user management system will be processed |
+
+
+ | security.oauth2.githubMapper.emailUrl |
+ SECURITY_OAUTH2_GITHUB_MAPPER_EMAIL_URL_KEY |
+ https://api.github.com/user/emails |
+ URL used to retrieve the email addresses that will be mapped |
+
+
+
+
+
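+The default `JWT_TOKEN_SIGNING_KEY` should always be replaced in production. One common way to produce a suitable value is to base64-encode random bytes, for example with `openssl` (assuming it is available on the host):
+
+```bash
+# Generate a random base64-encoded signing key and strip line wrapping
+export JWT_TOKEN_SIGNING_KEY="$(openssl rand -base64 64 | tr -d '\n')"
+```
+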
+## MQTT parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | mqtt.connect.threads |
+ MQTT_CONNECT_THREADS |
+ 4 |
+ Number of threads for clients connection thread pool |
+
+
+ | mqtt.msg-subscriptions-parallel-processing |
+ MQTT_MSG_SUBSCRIPTIONS_PARALLEL_PROCESSING |
+ false |
+ Enable/disable processing of found subscriptions in parallel for published messages.
+ Helpful when the "PUBLISH" message should be delivered to lots of subscribers.
+ It is recommended to evaluate the impact of this parameter before enabling it in production |
+
+
+ | mqtt.pre-connect-queue.max-size |
+ MQTT_PRE_CONNECT_QUEUE_MAX_SIZE |
+ 10000 |
+ Max number of messages that can be stored in the queue before the client is connected and starts processing them |
+
+
+ | mqtt.max-in-flight-msgs |
+ MQTT_MAX_IN_FLIGHT_MSGS |
+ 1000 |
+ Max number of PUBLISH messages that have not yet been acknowledged |
+
+
+ | mqtt.flow-control.enabled |
+ MQTT_FLOW_CONTROL_ENABLED |
+ true |
+ Enable/disable flow control MQTT 5 feature for server. If disabled, the server will not control the number of messages sent to subscribers by "Receive Maximum".
+ This feature works for MQTT 3.x clients as well when enabled. "Receive Maximum" for MQTT 3.x clients can be set using `MQTT_FLOW_CONTROL_MQTT_3X_RECEIVE_MAX` parameter |
+
+
+ | mqtt.flow-control.timeout |
+ MQTT_FLOW_CONTROL_TIMEOUT |
+ 1000 |
+ Timeout to wait in case there is nothing to process regarding the flow control feature. A separate thread is responsible for sending delayed messages to subscribers.
+ If no clients are affected by flow control restrictions, there is no need to continuously try to find and send such messages |
+
+
+ | mqtt.flow-control.ttl |
+ MQTT_FLOW_CONTROL_TTL |
+ 600 |
+ Time in seconds to store delayed messages for subscribers. Delayed messages are those that cannot be sent immediately due to flow control restrictions.
+ Default is 10 minutes |
+
+
+ | mqtt.flow-control.delayed-queue-max-size |
+ MQTT_FLOW_CONTROL_DELAYED_QUEUE_MAX_SIZE |
+ 1000 |
+ Max allowed queue length for delayed messages - messages published from the broker to a client when the in-flight window is full |
+
+
+ | mqtt.flow-control.mqtt3x-receive-max |
+ MQTT_FLOW_CONTROL_MQTT_3X_RECEIVE_MAX |
+ 65535 |
+ Receive maximum value for MQTT 3.x clients |
+
+
+ | mqtt.retransmission.enabled |
+ MQTT_RETRANSMISSION_ENABLED |
+ false |
+ Enable/disable MQTT msg retransmission |
+
+
+ | mqtt.retransmission.scheduler-pool-size |
+ MQTT_RETRANSMISSION_SCHEDULER_POOL_SIZE |
+ 0 |
+ Retransmission scheduler pool size (0 means the number of processors available to the JVM multiplied by 2 will be used) |
+
+
+ | mqtt.retransmission.initial-delay |
+ MQTT_RETRANSMISSION_INITIAL_DELAY |
+ 10 |
+ Initial delay for the msg retransmission in seconds |
+
+
+ | mqtt.retransmission.period |
+ MQTT_RETRANSMISSION_PERIOD |
+ 5 |
+ Increment period for subsequent retransmissions of the msg in seconds (the retransmission interval is increased by this period on each run) |
+
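
The retransmission schedule implied by the initial delay and period parameters can be sketched as follows (an illustrative calculation only, not broker code; the function name is hypothetical):

```python
def retransmission_delays(initial_delay: int, period: int, runs: int) -> list[int]:
    """Delay in seconds before each retransmission attempt.

    The first retransmission happens after `initial_delay` seconds,
    and each subsequent interval is increased by `period` seconds.
    """
    return [initial_delay + period * i for i in range(runs)]

# With the defaults MQTT_RETRANSMISSION_INITIAL_DELAY=10 and
# MQTT_RETRANSMISSION_PERIOD=5, the first four attempts are delayed by:
print(retransmission_delays(10, 5, 4))  # [10, 15, 20, 25]
```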
+
+ | mqtt.keep-alive.monitoring-delay-ms |
+ MQTT_KEEP_ALIVE_MONITORING_DELAY_MS |
+ 1000 |
+ Time in milliseconds between subsequent checks for non-active clients |
+
+
+ | mqtt.keep-alive.max-keep-alive |
+ MQTT_KEEP_ALIVE_MAX_KEEP_ALIVE_SEC |
+ 600 |
+ Max keep-alive value in seconds that the server allows clients to use. Defaults to 10 minutes; applies to MQTT 5 clients |
+
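
For reference, the MQTT specification allows a server to treat a client as non-active when no control packet arrives within 1.5 times the negotiated keep-alive interval. A minimal sketch of such a check (illustrative only; the function name is not a TBMQ internal):

```python
KEEP_ALIVE_FACTOR = 1.5  # allowance defined by the MQTT specification

def is_non_active(last_packet_ts: float, keep_alive_sec: int, now: float) -> bool:
    """Return True if the client missed its keep-alive window."""
    return (now - last_packet_ts) > keep_alive_sec * KEEP_ALIVE_FACTOR

# A client with keep-alive 60s, last heard from 100s ago, is non-active
# (100s > 60s * 1.5 = 90s):
print(is_non_active(last_packet_ts=0.0, keep_alive_sec=60, now=100.0))  # True
```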
+
+ | mqtt.topic.max-segments-count |
+ MQTT_TOPIC_MAX_SEGMENTS_COUNT |
+ 0 |
+ Maximum number of segments in topics. If this is too large, processing topics with too many segments can lead to errors. 0 means the limitation is disabled |
+
+
+ | mqtt.topic.alias-max |
+ MQTT_TOPIC_ALIAS_MAX |
+ 10 |
+ Max count of topic aliases per connection. 0 means the broker does not accept any topic aliases on any connection, i.e. the 'Topic Alias' feature is disabled |
+
+
+ | mqtt.topic.min-length-for-alias-replacement |
+ MQTT_TOPIC_MIN_LENGTH_FOR_ALIAS_REPLACEMENT |
+ 50 |
+ Minimum topic name length at which a topic the broker publishes to a client can be replaced with a topic alias
+ (e.g. a topic longer than 50 characters can be replaced with an alias) |
+
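
The minimum-length rule above can be illustrated with a short check (a sketch using the default threshold of 50 characters; the function name and example topics are hypothetical):

```python
def can_use_alias(topic: str, min_length: int = 50) -> bool:
    """Return True if an outgoing topic is long enough to be replaced by an alias."""
    return len(topic) > min_length

# A 58-character topic qualifies for alias replacement; a short one does not:
print(can_use_alias("sensors/building-a/floor-3/room-17/temperature/celsius/raw"))  # True
print(can_use_alias("sensors/temp"))  # False
```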
+
+ | mqtt.shared-subscriptions.processing-type |
+ MQTT_SHARED_SUBSCRIPTIONS_PROCESSING_TYPE |
+ ROUND_ROBIN |
+ Processing strategy type - how messages are split between clients in shared subscription. Supported types: ROUND_ROBIN |
+
+
+ | mqtt.subscription-trie.wait-for-clear-lock-ms |
+ MQTT_SUB_TRIE_WAIT_FOR_CLEAR_LOCK_MS |
+ 100 |
+ Maximum pause in milliseconds for clearing empty nodes from the subscription storage.
+ If the wait is unsuccessful, subscribing clients are resumed, but the clearing fails |
+
+
+ | mqtt.subscription-trie.clear-nodes-cron |
+ MQTT_SUB_TRIE_CLEAR_NODES_CRON |
+ 0 0 0 * * * |
+ Cron job to schedule clearing of empty subscription nodes. Defaults to 'every day at midnight' |
+
+
+ | mqtt.subscription-trie.clear-nodes-zone |
+ MQTT_SUB_TRIE_CLEAR_NODES_ZONE |
+ UTC |
+ Timezone for the subscription clearing cron-job |
+
+
+ | mqtt.retain-msg-trie.wait-for-clear-lock-ms |
+ MQTT_RETAIN_MSG_TRIE_WAIT_FOR_CLEAR_LOCK_MS |
+ 100 |
+ Maximum pause in milliseconds for clearing empty nodes from the retained message storage.
+ If the wait is unsuccessful, retained message processing is resumed, but the clearing fails |
+
+
+ | mqtt.retain-msg-trie.clear-nodes-cron |
+ MQTT_RETAIN_MSG_TRIE_CLEAR_NODES_CRON |
+ 0 0 0 * * * |
+ Cron job to schedule clearing of empty retain msg nodes. Defaults to 'every day at midnight' |
+
+
+ | mqtt.retain-msg-trie.clear-nodes-zone |
+ MQTT_RETAIN_MSG_TRIE_CLEAR_NODES_ZONE |
+ UTC |
+ Timezone for retain msg clearing cron-job |
+
+
+ | mqtt.retain-msg.expiry-processing-period-ms |
+ MQTT_RETAIN_MSG_EXPIRY_PROCESSING_PERIOD_MS |
+ 60000 |
+ Period in milliseconds at which retained messages are cleared by the MQTT message expiry feature |
+
+
+ | mqtt.client-session-expiry.cron |
+ MQTT_CLIENT_SESSION_EXPIRY_CRON |
+ 0 0 * ? * * |
+ Cron job to schedule clearing of expired and not active client sessions. Defaults to 'every hour', e.g. at 20:00:00 UTC |
+
+
+ | mqtt.client-session-expiry.zone |
+ MQTT_CLIENT_SESSION_EXPIRY_ZONE |
+ UTC |
+ Timezone for the client sessions clearing cron-job |
+
+
+ | mqtt.client-session-expiry.max-expiry-interval |
+ MQTT_CLIENT_SESSION_EXPIRY_MAX_EXPIRY_INTERVAL |
+ 604800 |
+ Max expiry interval in seconds allowed for inactive sessions. The current value corresponds to one week |
+
+
+ | mqtt.client-session-expiry.ttl |
+ MQTT_CLIENT_SESSION_EXPIRY_TTL |
+ 604800 |
+ Administration TTL in seconds for clearing sessions that do not expire by session expiry interval
+ (e.g. MQTTv3 cleanSession=false or MQTTv5 cleanStart=false && sessionExpiryInterval == 0).
+ The current value corresponds to one week. 0 or negative value means this TTL is disabled |
+
+
+ | mqtt.version-3-1.max-client-id-length |
+ MQTT_3_1_MAX_CLIENT_ID_LENGTH |
+ 1024 |
+ Max ClientId length for version 3.1 of the protocol |
+
+
+ | mqtt.write-and-flush |
+ MQTT_MSG_WRITE_AND_FLUSH |
+ true |
+ If enabled, each message is published to non-persistent subscribers with flush. When disabled, the messages are buffered in the channel and flushed periodically |
+
+
+ | mqtt.buffered-msg-count |
+ MQTT_BUFFERED_MSG_COUNT |
+ 5 |
+ Number of messages buffered in the channel before the flush is made. Used when `MQTT_MSG_WRITE_AND_FLUSH` = false |
+
+
+ | mqtt.buffered-delivery.session-cache-max-size |
+ MQTT_BUFFERED_CACHE_MAX_SIZE |
+ 10000 |
+ When either `MQTT_MSG_WRITE_AND_FLUSH` or `MQTT_PERSISTENT_MSG_WRITE_AND_FLUSH` is set to false,
+ the broker buffers outgoing messages in the outbound channel to improve throughput.
+ The respective buffer sizes are controlled by `MQTT_BUFFERED_MSG_COUNT` (for non-persistent clients)
+ and `MQTT_PERSISTENT_BUFFERED_MSG_COUNT` (for persistent clients).
+ Defines the maximum number of session entries that can be stored in the flush state cache.
+ When the cache exceeds this size, the least recently used sessions are evicted
+ and their pending message buffers are flushed automatically |
+
+
+ | mqtt.buffered-delivery.session-cache-expiration-ms |
+ MQTT_BUFFERED_CACHE_EXPIRY_MS |
+ 300000 |
+ Time in milliseconds after which an inactive session entry in the flush cache expires.
+ A session is considered inactive if it receives no new messages during this period.
+ Upon expiration, the session is evicted from the cache and its buffer is flushed.
+ Default is 5 minutes |
+
+
+ | mqtt.buffered-delivery.scheduler-execution-interval-ms |
+ MQTT_BUFFERED_SCHEDULER_INTERVAL_MS |
+ 100 |
+ Interval in milliseconds at which the scheduler checks all sessions in the cache
+ for potential flushing. A smaller value results in more frequent flush checks |
+
+
+ | mqtt.buffered-delivery.idle-session-flush-timeout-ms |
+ MQTT_BUFFERED_IDLE_FLUSH_MS |
+ 200 |
+ Maximum duration in milliseconds that a session can remain idle (i.e., without being flushed)
+ before its message buffer is automatically flushed to the client.
+ In essence, a flush occurs either when the buffer limit is reached or when this timeout elapses |
+
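
In essence, the buffered-delivery parameters above describe a simple flush rule: flush when the buffer limit is reached or when the idle timeout elapses. A minimal sketch of that decision (illustrative only, not broker code; the function name is hypothetical):

```python
def should_flush(buffered: int, buffer_limit: int,
                 idle_ms: int, idle_timeout_ms: int) -> bool:
    """Flush when the buffer limit is reached or the idle timeout elapses."""
    return buffered >= buffer_limit or idle_ms >= idle_timeout_ms

# Defaults: MQTT_BUFFERED_MSG_COUNT=5, MQTT_BUFFERED_IDLE_FLUSH_MS=200.
print(should_flush(buffered=5, buffer_limit=5, idle_ms=50, idle_timeout_ms=200))   # True (buffer full)
print(should_flush(buffered=2, buffer_limit=5, idle_ms=250, idle_timeout_ms=200))  # True (idle timeout)
print(should_flush(buffered=2, buffer_limit=5, idle_ms=50, idle_timeout_ms=200))   # False
```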
+
+ | mqtt.persistent-session.device.persisted-messages.limit |
+ MQTT_PERSISTENT_SESSION_DEVICE_PERSISTED_MESSAGES_LIMIT |
+ 10000 |
+ Maximum number of PUBLISH messages stored for each persisted DEVICE client |
+
+
+ | mqtt.persistent-session.device.persisted-messages.ttl |
+ MQTT_PERSISTENT_SESSION_DEVICE_PERSISTED_MESSAGES_TTL |
+ 604800 |
+ TTL of persisted DEVICE messages in seconds. The current value corresponds to one week |
+
+
+ | mqtt.persistent-session.device.persisted-messages.write-and-flush |
+ MQTT_PERSISTENT_MSG_WRITE_AND_FLUSH |
+ true |
+ If enabled, each message is published to persistent DEVICE client subscribers with flush. When disabled, the messages are buffered in the channel and flushed periodically |
+
+
+ | mqtt.persistent-session.device.persisted-messages.buffered-msg-count |
+ MQTT_PERSISTENT_BUFFERED_MSG_COUNT |
+ 5 |
+ Number of messages buffered in the channel before the flush is made. Used when `MQTT_PERSISTENT_MSG_WRITE_AND_FLUSH` = false |
+
+
+ | mqtt.persistent-session.app.persisted-messages.write-and-flush |
+ MQTT_APP_MSG_WRITE_AND_FLUSH |
+ false |
+ If enabled, each message is published to persistent APPLICATION client subscribers with flush. When disabled, the messages are buffered in the channel and flushed periodically |
+
+
+ | mqtt.persistent-session.app.persisted-messages.buffered-msg-count |
+ MQTT_APP_BUFFERED_MSG_COUNT |
+ 10 |
+ Number of messages buffered in the channel before the flush is made. Used when `MQTT_APP_MSG_WRITE_AND_FLUSH` = false |
+
+
+ | mqtt.rate-limits.threads-count |
+ MQTT_RATE_LIMITS_THREADS_COUNT |
+ 1 |
+ The number of parallel threads dedicated to processing total rate limit checks for incoming messages |
+
+
+ | mqtt.rate-limits.batch-size |
+ MQTT_RATE_LIMITS_BATCH_SIZE |
+ 50 |
+ The number of messages to process in each batch when checking total rate limits for incoming messages |
+
+
+ | mqtt.rate-limits.period-ms |
+ MQTT_RATE_LIMITS_PERIOD_MS |
+ 50 |
+ The period, in milliseconds, to wait before processing a batch of messages for total rate limits for incoming messages |
+
+
+ | mqtt.rate-limits.total.enabled |
+ MQTT_TOTAL_RATE_LIMITS_ENABLED |
+ false |
+ Enable/disable total incoming and outgoing messages rate limits for the broker (per whole cluster) |
+
+
+ | mqtt.rate-limits.total.config |
+ MQTT_TOTAL_RATE_LIMITS_CONFIG |
+ 1000:1,50000:60 |
+ Limit the maximum count of total incoming and outgoing messages for specified time intervals in seconds. Comma-separated list of limit:seconds pairs.
+ Example: 1000 messages per second or 50000 messages per minute |
+
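
The limit:seconds config format used by the rate-limit parameters can be parsed as follows (a minimal sketch; the function name is hypothetical):

```python
def parse_rate_limits(config: str) -> list[tuple[int, int]]:
    """Parse a 'limit:seconds' rate-limit config string into (limit, seconds) pairs."""
    pairs = []
    for part in config.split(","):
        limit, seconds = part.split(":")
        pairs.append((int(limit), int(seconds)))
    return pairs

# The default MQTT_TOTAL_RATE_LIMITS_CONFIG value means 1000 messages
# per 1 second and 50000 messages per 60 seconds:
print(parse_rate_limits("1000:1,50000:60"))  # [(1000, 1), (50000, 60)]
```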
+
+ | mqtt.rate-limits.incoming-publish.enabled |
+ MQTT_INCOMING_RATE_LIMITS_ENABLED |
+ false |
+ Enable/disable publish rate limits per client for incoming messages to the broker from publishers |
+
+
+ | mqtt.rate-limits.incoming-publish.client-config |
+ MQTT_INCOMING_RATE_LIMITS_CLIENT_CONFIG |
+ 10:1,300:60 |
+ Limit the maximum count of publish messages per publisher for specified time intervals in seconds. Comma-separated list of limit:seconds pairs.
+ Example: 10 messages per second or 300 messages per minute |
+
+
+ | mqtt.rate-limits.outgoing-publish.enabled |
+ MQTT_OUTGOING_RATE_LIMITS_ENABLED |
+ false |
+ Enable/disable publish rate limits per client for outgoing messages from the broker to subscribers. Used only for non-persistent subscribers with QoS = 0 ("AT_MOST_ONCE") |
+
+
+ | mqtt.rate-limits.outgoing-publish.client-config |
+ MQTT_OUTGOING_RATE_LIMITS_CLIENT_CONFIG |
+ 10:1,300:60 |
+ Limit the maximum count of publish messages per subscriber for specified time intervals in seconds. Comma-separated list of limit:seconds pairs.
+ Example: 10 messages per second or 300 messages per minute |
+
+
+ | mqtt.rate-limits.device-persisted-messages.enabled |
+ MQTT_DEVICE_PERSISTED_MSGS_RATE_LIMITS_ENABLED |
+ false |
+ Enable/disable Device clients persisted messages rate limits for the broker (per whole cluster) |
+
+
+ | mqtt.rate-limits.device-persisted-messages.config |
+ MQTT_DEVICE_PERSISTED_MSGS_RATE_LIMITS_CONFIG |
+ 100:1,1000:60 |
+ Limit the maximum count of Device clients persisted messages for specified time intervals in seconds. Comma-separated list of limit:seconds pairs.
+ Example: 100 messages per second or 1000 messages per minute |
+
+
+ | mqtt.sessions-limit |
+ MQTT_SESSIONS_LIMIT |
+ 0 |
+ Limit the total number of sessions (connected + disconnected) stored by the broker. In a cluster, the limit applies collectively across all nodes, not per node.
+ For example, when set to 1000, a single broker node or a cluster of any size can store 1000 sessions in total. It is a soft limit, so slightly more than 1000 sessions may be stored.
+ A setting of 0 means the limitation is disabled |
+
+
+ | mqtt.sessions-limit-correction |
+ MQTT_SESSIONS_LIMIT_CORRECTION |
+ false |
+ Enable/disable sessions limit value correction in the cache |
+
+
+ | mqtt.sessions-limit-correction-period-ms |
+ MQTT_SESSIONS_LIMIT_CORRECTION_PERIOD_MS |
+ 10800000 |
+ Period in milliseconds to execute the job to correct the value of sessions limit in the cache. Defaults to 3 hours |
+
+
+ | mqtt.application-clients-limit |
+ MQTT_APPLICATION_CLIENTS_LIMIT |
+ 0 |
+ Limit the total number of Application persistent clients and external system integrations. A setting of 0 means the limitation is disabled |
+
+
+ | mqtt.handler.all_msg_callback_threads |
+ MQTT_HANDLER_ALL_MSG_CALLBACK_THREADS |
+ 2 |
+ Number of threads in the thread pool for processing all publish message callbacks after sending the messages to Kafka |
+
+
+ | mqtt.handler.device_msg_callback_threads |
+ MQTT_HANDLER_DEVICE_MSG_CALLBACK_THREADS |
+ 2 |
+ Number of threads in the thread pool for processing device persisted publish message callbacks after sending the messages to Kafka |
+
+
+ | mqtt.handler.app_msg_callback_threads |
+ MQTT_HANDLER_APP_MSG_CALLBACK_THREADS |
+ 2 |
+ Number of threads in the thread pool for processing application persisted publish message callbacks after sending the messages to Kafka |
+
+
+ | mqtt.handler.downlink_msg_callback_threads |
+ MQTT_HANDLER_DOWNLINK_MSG_CALLBACK_THREADS |
+ 2 |
+ Number of threads in the thread pool for processing downlink message callbacks after sending the messages to Kafka |
+
+
+ | mqtt.response-info |
+ MQTT_RESPONSE_INFO |
+ |
+ Response information value for the MQTT 5 request-response feature, returned to clients that request it.
+ If not set, the broker will not reply with response information to MQTT 5 clients that connect with "Request Response Information" = 1.
+ Set it to the topic to be used for the request-response feature, e.g. "example/" |
+
+
+ | mqtt.blocked-client.cleanup.period |
+ BLOCKED_CLIENT_CLEANUP_PERIOD_MINUTES |
+ 5 |
+ Period in minutes at which the cleanup task for expired blocked clients is executed. The default corresponds to five minutes |
+
+
+ | mqtt.blocked-client.cleanup.ttl |
+ BLOCKED_CLIENT_CLEANUP_TTL_MINUTES |
+ 10080 |
+ Time to Live in minutes for expired blocked clients. After this time, an expired blocked client is removed completely. The default corresponds to one week |
+
+
+
+
+
+## Cache parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | cache.stats.enabled |
+ CACHE_STATS_ENABLED |
+ true |
+ Enable/disable cache stats logging |
+
+
+ | cache.stats.intervalSec |
+ CACHE_STATS_INTERVAL_SEC |
+ 60 |
+ Cache stats logging interval in seconds |
+
+
+ | cache.cache-prefix |
+ CACHE_PREFIX |
+ |
+ The common prefix for all cache keys. Defaults to empty string meaning no prefix is added |
+
+
+ | cache.specs.mqttClientCredentials.timeToLiveInMinutes |
+ CACHE_SPECS_MQTT_CLIENT_CREDENTIALS_TTL |
+ 1440 |
+ Cache TTL in minutes. Defaults to 1 day |
+
+
+ | cache.specs.basicCredentialsPassword.timeToLiveInMinutes |
+ CACHE_SPECS_BASIC_CREDENTIALS_PASSWORD_TTL |
+ 1 |
+ Cache TTL in minutes. It is recommended to set this TTL as a small value to not store them for a long time (e.g., 1-5 minutes) |
+
+
+ | cache.specs.sslRegexBasedCredentials.timeToLiveInMinutes |
+ CACHE_SPECS_SSL_REGEX_BASED_CREDENTIALS_TTL |
+ 1440 |
+ Cache TTL in minutes. Defaults to 1 day |
+
+
+ | cache.specs.clientSessionCredentials.timeToLiveInMinutes |
+ CACHE_SPECS_CLIENT_SESSION_CREDENTIALS_TTL |
+ 0 |
+ Cache TTL in minutes. Defaults to 0 meaning the cache is eternal |
+
+
+ | cache.specs.clientMqttVersion.timeToLiveInMinutes |
+ CACHE_SPECS_CLIENT_MQTT_VERSION_TTL |
+ 0 |
+ Cache TTL in minutes. Defaults to 0 meaning the cache is eternal |
+
+
+ | cache.image.etag.timeToLiveInMinutes |
+ CACHE_IMAGE_ETAG_TTL |
+ 10080 |
+ Image ETags cache TTL in minutes. Defaults to 7 days |
+
+
+ | cache.image.etag.maxSize |
+ CACHE_IMAGE_ETAG_MAX_SIZE |
+ 10000 |
+ Max size of entries in the cache. 0 means the cache is disabled |
+
+
+ | cache.image.systemImagesBrowserTtlInMinutes |
+ CACHE_IMAGE_SYSTEM_BROWSER_TTL |
+ 0 |
+ Browser cache TTL for system images in minutes. 0 means the cache is disabled |
+
+
+
+
+
+## Redis configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | redis.connection.type |
+ REDIS_CONNECTION_TYPE |
+ standalone |
+ Connection type: standalone, cluster, or sentinel |
+
+
+ | redis.standalone.host |
+ REDIS_HOST |
+ localhost |
+ Redis connection host |
+
+
+ | redis.standalone.port |
+ REDIS_PORT |
+ 6379 |
+ Redis connection port |
+
+
+ | redis.standalone.useDefaultClientConfig |
+ REDIS_USE_DEFAULT_CLIENT_CONFIG |
+ true |
+ Use the default Redis configuration file |
+
+
+ | redis.standalone.clientName |
+ REDIS_CLIENT_NAME |
+ standalone |
+ Used only when the default ClientConfig is not used; specifies the client name |
+
+
+ | redis.standalone.connectTimeout |
+ REDIS_CLIENT_CONNECT_TIMEOUT |
+ 30000 |
+ Used only when the default ClientConfig is not used; specifies the connection timeout |
+
+
+ | redis.standalone.readTimeout |
+ REDIS_CLIENT_READ_TIMEOUT |
+ 60000 |
+ Used only when the default ClientConfig is not used; specifies the read timeout |
+
+
+ | redis.standalone.usePoolConfig |
+ REDIS_CLIENT_USE_POOL_CONFIG |
+ false |
+ Used only when the default ClientConfig is not used; enables the pool config section |
+
+
+ | redis.cluster.nodes |
+ REDIS_NODES |
+ |
+ Comma-separated list of "host:port" pairs to bootstrap from |
+
+
+ | redis.cluster.maxRedirects |
+ REDIS_MAX_REDIRECTS |
+ 12 |
+ Maximum number of redirects to follow when executing commands across the cluster |
+
+
+ | redis.cluster.useDefaultPoolConfig |
+ REDIS_CLUSTER_USE_DEFAULT_POOL_CONFIG |
+ true |
+ If set to false, a pool config built from the values of the pool config section will be used |
+
+
+ | redis.sentinel.master |
+ REDIS_MASTER |
+ |
+ Name of master node |
+
+
+ | redis.sentinel.sentinels |
+ REDIS_SENTINELS |
+ |
+ Comma-separated list of "host:port" pairs of sentinels |
+
+
+ | redis.sentinel.password |
+ REDIS_SENTINEL_PASSWORD |
+ |
+ Password to authenticate with sentinel |
+
+
+ | redis.sentinel.useDefaultPoolConfig |
+ REDIS_SENTINEL_USE_DEFAULT_POOL_CONFIG |
+ true |
+ If set to false, a pool config built from the values of the pool config section will be used |
+
+
+ | redis.db |
+ REDIS_DB |
+ 0 |
+ DB index |
+
+
+ | redis.password |
+ REDIS_PASSWORD |
+ |
+ DB password |
+
+
+ | redis.pool_config.maxTotal |
+ REDIS_POOL_CONFIG_MAX_TOTAL |
+ 128 |
+ Maximum number of connections that can be allocated by the connection pool |
+
+
+ | redis.pool_config.maxIdle |
+ REDIS_POOL_CONFIG_MAX_IDLE |
+ 128 |
+ Maximum number of idle connections that can be maintained in the pool without being closed |
+
+
+ | redis.pool_config.minIdle |
+ REDIS_POOL_CONFIG_MIN_IDLE |
+ 16 |
+ Minimum number of idle connections that can be maintained in the pool without being closed |
+
+
+ | redis.pool_config.testOnBorrow |
+ REDIS_POOL_CONFIG_TEST_ON_BORROW |
+ true |
+ Enable/Disable PING command sent when a connection is borrowed |
+
+
+ | redis.pool_config.testOnReturn |
+ REDIS_POOL_CONFIG_TEST_ON_RETURN |
+ true |
+ The property is used to specify whether to test the connection before returning it to the connection pool |
+
+
+ | redis.pool_config.testWhileIdle |
+ REDIS_POOL_CONFIG_TEST_WHILE_IDLE |
+ true |
+ Indicates whether to use the ping command to monitor the connection validity during idle resource monitoring. Invalid connections will be destroyed |
+
+
+ | redis.pool_config.minEvictableMs |
+ REDIS_POOL_CONFIG_MIN_EVICTABLE_MS |
+ 60000 |
+ Minimum time the connection should be idle before it can be evicted from the connection pool. The value is set in milliseconds |
+
+
+ | redis.pool_config.evictionRunsMs |
+ REDIS_POOL_CONFIG_EVICTION_RUNS_MS |
+ 30000 |
+ Specifies the time interval in milliseconds between two consecutive eviction runs |
+
+
+ | redis.pool_config.maxWaitMills |
+ REDIS_POOL_CONFIG_MAX_WAIT_MS |
+ 60000 |
+ Maximum time in milliseconds that a client is willing to wait for a connection from the pool when all connections are exhausted |
+
+
+ | redis.pool_config.numberTestsPerEvictionRun |
+ REDIS_POOL_CONFIG_NUMBER_TESTS_PER_EVICTION_RUN |
+ 3 |
+ Specifies the number of connections to test for eviction during each eviction run |
+
+
+ | redis.pool_config.blockWhenExhausted |
+ REDIS_POOL_CONFIG_BLOCK_WHEN_EXHAUSTED |
+ true |
+ Determines the behavior when a thread requests a connection from the pool, but there are no available connections, and the pool cannot create more due to the maxTotal configuration |
+
+
+
+
+
+## Statistics parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | stats.enabled |
+ STATS_ENABLED |
+ true |
+ Enable/disable stats printing to the logs |
+
+
+ | stats.print-interval-ms |
+ STATS_PRINT_INTERVAL_MS |
+ 60000 |
+ Period in milliseconds to print stats. Default value corresponds to 1 minute |
+
+
+ | stats.timer.percentiles |
+ STATS_TIMER_PERCENTILES |
+ 0.5 |
+ Metrics percentiles returned by actuator for timer metrics. List of comma-separated (,) double values |
+
+
+ | stats.application-processor.enabled |
+ APPLICATION_PROCESSOR_STATS_ENABLED |
+ true |
+ Enable/disable specific Application clients stats |
+
+
+ | stats.system-info.persist-frequency |
+ STATS_SYSTEM_INFO_PERSIST_FREQUENCY_SEC |
+ 60 |
+ Persist frequency of system info (CPU, memory usage, etc.) in seconds |
+
+
+
+
+
+## Historical data statistics parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | historical-data-report.enabled |
+ HISTORICAL_DATA_REPORT_ENABLED |
+ true |
+ Enable/disable historical data stats reporting and persistence to the time series |
+
+
+ | historical-data-report.interval |
+ HISTORICAL_DATA_REPORT_INTERVAL |
+ 1 |
+ Period in minutes (1-60) to collect stats for each broker. Used in cron expression |
+
+
+ | historical-data-report.zone |
+ HISTORICAL_DATA_REPORT_ZONE |
+ UTC |
+ Timezone for the historical data stats processing |
+
+
+
+
+
+## Metrics management parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | management.health.diskspace.enabled |
+ HEALTH_DISKSPACE_ENABLED |
+ false |
+ Enable/disable disk space health check |
+
+
+ | management.endpoint.health.show-details |
+ HEALTH_SHOW_DETAILS |
+ never |
+ Controls whether health endpoint shows full component details (e.g., Redis, DB, TBMQ).
+ Options:
+ - 'never': always hide details (default if security is enabled).
+ - 'when-authorized': show details only to authenticated users.
+ - 'always': always include full health details in the response |
+
+
+ | management.endpoints.web.exposure.include |
+ METRICS_ENDPOINTS_EXPOSE |
+ health,info,prometheus |
+ Specify which Actuator endpoints should be exposed via HTTP.
+ Use 'health,info' to expose only basic health and information endpoints.
+ For exposing Prometheus metrics, update this to include 'prometheus' in the list (e.g., 'health,info,prometheus') |
+
+
+
+
+
+## Spring CORS configuration
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | spring.mvc.cors.mappings."[/api/**]".allowed-origin-patterns |
+ MVC_CORS_API_ALLOWED_ORIGIN_PATTERNS |
+ * |
+ Comma-separated list of origins to allow. '*' allows all origins. When not set, CORS support is disabled |
+
+
+ | spring.mvc.cors.mappings."[/api/**]".allowed-methods |
+ MVC_CORS_API_ALLOWED_METHODS |
+ * |
+ Comma-separated list of methods to allow. '*' allows all methods |
+
+
+ | spring.mvc.cors.mappings."[/api/**]".allowed-headers |
+ MVC_CORS_API_ALLOWED_HEADERS |
+ * |
+ Comma-separated list of headers to allow in a request. '*' allows all headers |
+
+
+ | spring.mvc.cors.mappings."[/api/**]".max-age |
+ MVC_CORS_API_MAX_AGE |
+ 1800 |
+ How long, in seconds, the response from a pre-flight request can be cached by clients |
+
+
+ | spring.mvc.cors.mappings."[/api/**]".allow-credentials |
+ MVC_CORS_API_ALLOW_CREDENTIALS |
+ true |
+ Set whether credentials are supported. When not set, credentials are not supported |
+
+
+
+
+
+## Spring doc common parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | springdoc.api-docs.enabled |
+ SWAGGER_ENABLED |
+ true |
+ If false, swagger API docs will be unavailable |
+
+
+ | springdoc.default-produces-media-type |
+ SWAGGER_DEFAULT_PRODUCES_MEDIA_TYPE |
+ application/json |
+ Swagger default produces media-type |
+
+
+
+
+
+## Swagger common parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | swagger.api_path |
+ SWAGGER_API_PATH |
+ /api/** |
+ General swagger match pattern of swagger UI links |
+
+
+ | swagger.security_path_regex |
+ SWAGGER_SECURITY_PATH_REGEX |
+ /api/.* |
+ General swagger match pattern path of swagger UI links |
+
+
+ | swagger.non_security_path_regex |
+ SWAGGER_NON_SECURITY_PATH_REGEX |
+ /api/noauth.* |
+ Non-security API path match pattern of swagger UI links |
+
+
+ | swagger.title |
+ SWAGGER_TITLE |
+ TBMQ REST API |
+ The title on the API doc UI page |
+
+
+ | swagger.description |
+ SWAGGER_DESCRIPTION |
+ TBMQ Professional Edition REST API documentation |
+ The description on the API doc UI page |
+
+
+ | swagger.contact.name |
+ SWAGGER_CONTACT_NAME |
+ TBMQ team |
+ The contact name on the API doc UI page |
+
+
+ | swagger.contact.url |
+ SWAGGER_CONTACT_URL |
+ https://thingsboard.io/products/mqtt-broker/ |
+ The contact URL on the API doc UI page |
+
+
+ | swagger.contact.email |
+ SWAGGER_CONTACT_EMAIL |
+ info@thingsboard.io |
+ The contact email on the API doc UI page |
+
+
+ | swagger.license.title |
+ SWAGGER_LICENSE_TITLE |
+ Apache License Version 2.0 |
+ The license title on the API doc UI page |
+
+
+ | swagger.license.url |
+ SWAGGER_LICENSE_URL |
+ https://github.com/thingsboard/tbmq/blob/main/LICENSE |
+ Link to the license body on the API doc UI page |
+
+
+ | swagger.version |
+ SWAGGER_VERSION |
+ |
+ The version of the API doc to display. Defaults to the package version |
+
+
+ | swagger.group_name |
+ SWAGGER_GROUP_NAME |
+ TBMQ |
+ The group name (definition) on the API doc UI page |
+
+
+
+
+
+## Application info parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | app.version |
+ |
+ "@project.version@" |
+ Application version |
+
+
+
+
+
+## Analysis parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | analysis.log.analyzed-client-ids |
+ ANALYSIS_LOG_CLIENT_IDS |
+ |
+ Comma-separated list of client IDs. Additional events for those clients will be logged |
+
+
+
diff --git a/_includes/docs/pe/mqtt-broker/install/ie-config.md b/_includes/docs/pe/mqtt-broker/install/ie-config.md
new file mode 100644
index 0000000000..1dd2240947
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/install/ie-config.md
@@ -0,0 +1,589 @@
+
+
+## HTTP server parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | server.address |
+ HTTP_BIND_ADDRESS |
+ 0.0.0.0 |
+ HTTP Server bind address |
+
+
+ | server.port |
+ HTTP_BIND_PORT |
+ 8082 |
+ HTTP Server bind port |
+
+
+
+
+
+## Kafka parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | queue.integration-downlink.poll-interval |
+ TB_IE_DOWNLINK_POLL_INTERVAL |
+ 1000 |
+ Interval in milliseconds to poll messages from 'tbmq.ie.downlink' topics |
+
+
+ | queue.integration-msg.poll-interval |
+ TB_IE_MSG_POLL_INTERVAL |
+ 1000 |
+ Interval in milliseconds to poll messages from 'tbmq.msg.ie' topics |
+
+
+ | queue.integration-msg.pack-processing-timeout |
+ TB_IE_MSG_PACK_PROCESSING_TIMEOUT |
+ 30000 |
+ Timeout in milliseconds for processing the pack of messages |
+
+
+ | queue.integration-msg.ack-strategy.type |
+ TB_IE_MSG_ACK_STRATEGY_TYPE |
+ SKIP_ALL |
+ Processing strategy for 'tbmq.msg.ie' topics. Can be: SKIP_ALL, RETRY_ALL |
+
+
+ | queue.integration-msg.ack-strategy.retries |
+ TB_IE_MSG_ACK_STRATEGY_RETRIES |
+ 5 |
+ Number of retries, 0 is unlimited. Use for RETRY_ALL processing strategy |
+
+
+ | queue.integration-msg.ack-strategy.pause-between-retries |
+ TB_IE_MSG_ACK_STRATEGY_PAUSE_BETWEEN_RETRIES |
+ 1 |
+ Time in seconds to wait in consumer thread before retries |
+
+
+ | queue.kafka.bootstrap.servers |
+ TB_KAFKA_SERVERS |
+ localhost:9092 |
+ List of kafka bootstrap servers used to establish connection |
+
+
+ | queue.kafka.enable-topic-deletion |
+ TB_KAFKA_ENABLE_TOPIC_DELETION |
+ true |
+ Controls whether TBMQ is allowed to delete Kafka topics that were created for
+ Integrations.
+ When set to 'true', TBMQ may automatically remove topics during cleanup
+ (for example, when the Integration is deleted).
+ When set to 'false', TBMQ will skip topic deletions and simply stop using them.
+ This helps prevent accidental data loss in production environments |
+
+
+ | queue.kafka.default.consumer.partition-assignment-strategy |
+ TB_KAFKA_DEFAULT_CONSUMER_PARTITION_ASSIGNMENT_STRATEGY |
+ org.apache.kafka.clients.consumer.StickyAssignor |
+ A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used |
+
+
+ | queue.kafka.default.consumer.session-timeout-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_SESSION_TIMEOUT_MS |
+ 10000 |
+ The timeout in milliseconds used to detect client failures when using Kafka's group management facility |
+
+
+ | queue.kafka.default.consumer.max-poll-interval-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_POLL_INTERVAL_MS |
+ 300000 |
+ The maximum delay in milliseconds between invocations of poll() when using consumer group management |
+
+
+ | queue.kafka.default.consumer.max-poll-records |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_POLL_RECORDS |
+ 2000 |
+ The maximum number of records returned in a single call to poll() |
+
+
+ | queue.kafka.default.consumer.max-partition-fetch-bytes |
+ TB_KAFKA_DEFAULT_CONSUMER_MAX_PARTITION_FETCH_BYTES |
+ 16777216 |
+ The maximum amount of data in bytes per-partition the server will return |
+
+
+ | queue.kafka.default.consumer.fetch-max-bytes |
+ TB_KAFKA_DEFAULT_CONSUMER_FETCH_MAX_BYTES |
+ 134217728 |
+ The maximum amount of data in bytes the server should return for a fetch request |
+
+
+ | queue.kafka.default.consumer.heartbeat-interval-ms |
+ TB_KAFKA_DEFAULT_CONSUMER_HEARTBEAT_INTERVAL_MS |
+ 3000 |
+ The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities.
+ Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group.
+ The value must be set lower than TB_KAFKA_DEFAULT_CONSUMER_SESSION_TIMEOUT_MS, but typically should be set no higher than 1/3 of that value.
+ It can be adjusted even lower to control the expected time for normal rebalances. Value in milliseconds. Default is 3 sec |
+
+
+ | queue.kafka.default.producer.acks |
+ TB_KAFKA_DEFAULT_PRODUCER_ACKS |
+ 1 |
+ The number of acknowledgments the producer requires the leader to have received before considering a request complete |
+
+
+ | queue.kafka.default.producer.retries |
+ TB_KAFKA_DEFAULT_PRODUCER_RETRIES |
+ 1 |
+ Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error |
+
+
+ | queue.kafka.default.producer.batch-size |
+ TB_KAFKA_DEFAULT_PRODUCER_BATCH_SIZE |
+ 16384 |
+ The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. Size in bytes |
+
+
+ | queue.kafka.default.producer.linger-ms |
+ TB_KAFKA_DEFAULT_PRODUCER_LINGER_MS |
+ 5 |
+ The producer groups together any records that arrive in between request transmissions into a single batched request, set in milliseconds |
+
+
+ | queue.kafka.default.producer.buffer-memory |
+ TB_KAFKA_DEFAULT_PRODUCER_BUFFER_MEMORY |
+ 33554432 |
+ The total bytes of memory the producer can use to buffer records waiting to be sent to the server |
+
+
+ | queue.kafka.default.producer.compression-type |
+ TB_KAFKA_DEFAULT_COMPRESSION_TYPE |
+ none |
+ The compression type for all data generated by the producer. Valid values are `none`, `gzip`, `snappy`, `lz4`, or `zstd` |
+
+
+ | queue.kafka.admin.config |
+ TB_KAFKA_ADMIN_CONFIG |
+ retries:1 |
+ List of configs separated by semicolon used for admin kafka client creation |
+
+
+ | queue.kafka.admin.command-timeout |
+ TB_KAFKA_ADMIN_COMMAND_TIMEOUT_SEC |
+ 30 |
+ Kafka Admin client command timeout (in seconds). Applies to operations like describeCluster, listTopics, etc |
+
+
+ | queue.kafka.consumer-stats.enabled |
+ TB_KAFKA_CONSUMER_STATS_ENABLED |
+ true |
+ If enabled, prints the lag between the consumer group offset and the last message offset in Kafka topics |
+
+
+ | queue.kafka.consumer-stats.print-interval-ms |
+ TB_KAFKA_CONSUMER_STATS_PRINT_INTERVAL_MS |
+ 60000 |
+ Statistics printing interval in milliseconds for Kafka's consumer-groups stats |
+
+
+ | queue.kafka.consumer-stats.kafka-response-timeout-ms |
+ TB_KAFKA_CONSUMER_STATS_RESPONSE_TIMEOUT_MS |
+ 1000 |
+ Time to wait in milliseconds for the stats-loading requests to Kafka to finish |
+
+
+ | queue.kafka.consumer-stats.consumer-config |
+ TB_KAFKA_CONSUMER_STATS_CONSUMER_CONFIG |
+ |
+ Semicolon-separated list of configs used for the Kafka stats consumer |
+
+
+ | queue.kafka.integration-downlink.topic-prefix |
+ TB_KAFKA_IE_DOWNLINK_TOPIC_PREFIX |
+ tbmq.ie.downlink |
+ Prefix for topics for sending integration configurations and validation requests from tbmq to integration executors |
+
+
+ | queue.kafka.integration-downlink.http.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_HTTP_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.http.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_HTTP_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.http.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_HTTP_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.http` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.kafka.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_KAFKA_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.kafka` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.topic-properties |
+ TB_KAFKA_IE_DOWNLINK_MQTT_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.additional-consumer-config |
+ TB_KAFKA_IE_DOWNLINK_MQTT_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-downlink.mqtt.additional-producer-config |
+ TB_KAFKA_IE_DOWNLINK_MQTT_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.downlink.mqtt` topic |
+
+
+ | queue.kafka.integration-uplink.topic |
+ TB_KAFKA_IE_UPLINK_TOPIC |
+ tbmq.ie.uplink |
+ Topic for sending messages/events from integration executors to tbmq |
+
+
+ | queue.kafka.integration-uplink.topic-properties |
+ TB_KAFKA_IE_UPLINK_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:6;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink.additional-consumer-config |
+ TB_KAFKA_IE_UPLINK_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink.additional-producer-config |
+ TB_KAFKA_IE_UPLINK_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.uplink` topic |
+
+
+ | queue.kafka.integration-uplink-notifications.topic-prefix |
+ TB_KAFKA_IE_UPLINK_NOTIF_TOPIC_PREFIX |
+ tbmq.ie.uplink.notifications |
+ Prefix for topics for sending notifications or replies from integration executors to specific tbmq node |
+
+
+ | queue.kafka.integration-uplink-notifications.topic-properties |
+ TB_KAFKA_IE_UPLINK_NOTIF_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;partitions:1;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.ie.uplink.notifications` topics |
+
+
+ | queue.kafka.integration-uplink-notifications.additional-consumer-config |
+ TB_KAFKA_IE_UPLINK_NOTIF_ADDITIONAL_CONSUMER_CONFIG |
+ |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.ie.uplink.notifications` topic |
+
+
+ | queue.kafka.integration-uplink-notifications.additional-producer-config |
+ TB_KAFKA_IE_UPLINK_NOTIF_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.ie.uplink.notifications` topic |
+
+
+ | queue.kafka.integration-msg.topic-properties |
+ TB_KAFKA_IE_MSG_TOPIC_PROPERTIES |
+ retention.ms:604800000;segment.bytes:26214400;retention.bytes:1048576000;replication.factor:1 |
+ Kafka topic properties separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.integration-msg.additional-consumer-config |
+ TB_KAFKA_IE_MSG_ADDITIONAL_CONSUMER_CONFIG |
+ max.poll.records:50 |
+ Additional Kafka consumer configs separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.integration-msg.additional-producer-config |
+ TB_KAFKA_IE_MSG_ADDITIONAL_PRODUCER_CONFIG |
+ |
+ Additional Kafka producer configs separated by semicolon for `tbmq.msg.ie` topics |
+
+
+ | queue.kafka.kafka-prefix |
+ TB_KAFKA_PREFIX |
+ |
+ The common prefix for all Kafka topics, producers, consumer groups, and consumers. Defaults to empty string meaning no prefix is added |
+
+
+
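Several of the parameters above (`queue.kafka.admin.config`, the `topic-properties` options, and the `additional-*-config` options) accept a semicolon-separated list of `key:value` pairs. As an illustrative sketch of how such a list decomposes (the values below are hypothetical, not defaults):

```shell
# Hypothetical semicolon-separated config list, in the format accepted by
# TB_KAFKA_ADMIN_CONFIG and the additional-*-config variables above.
CONFIG="retries:1;max.poll.records:50"
# Split on ';' into key:value entries and print each pair
IFS=';' read -ra ENTRIES <<< "$CONFIG"
for entry in "${ENTRIES[@]}"; do
  echo "${entry%%:*} = ${entry#*:}"
done
```

Each entry before the first `:` is treated as the config key; everything after it is the value.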
+
+
+## Service parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | service.type |
+ TB_SERVICE_TYPE |
+ tbmq-integration-executor |
+ Microservice type. Allowed value: tbmq-integration-executor |
+
+
+ | service.id |
+ TB_SERVICE_ID |
+ |
+ Unique id for this service (autogenerated if empty) |
+
+
+ | service.integrations.supported |
+ TB_SERVICE_INTEGRATIONS_SUPPORTED |
+ ALL |
+ Specifies which integration types are enabled on the integration executor microservice.
+ Allowed values: HTTP, KAFKA, MQTT. By default, ALL |
+
+
+ | service.integrations.excluded |
+ TB_SERVICE_INTEGRATIONS_EXCLUDED |
+ NONE |
+ List of integrations excluded from processing on the integration executor microservice.
+ Allowed values: HTTP, KAFKA, MQTT. By default, NONE |
+
+
+
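As described in the table, the two lists combine so that an integration type is processed when it is in the supported list (or that list is `ALL`) and is absent from the excluded list. A minimal shell sketch of this reading of the rules (the values chosen below are illustrative, not the defaults):

```shell
# Illustrative combination of the supported and excluded lists; the values
# below are examples, not the defaults (ALL / NONE).
SUPPORTED="ALL"     # TB_SERVICE_INTEGRATIONS_SUPPORTED
EXCLUDED="KAFKA"    # TB_SERVICE_INTEGRATIONS_EXCLUDED
ENABLED=""
for type in HTTP KAFKA MQTT; do
  # Enabled when supported (or ALL) and not explicitly excluded
  if [ "$SUPPORTED" = "ALL" ] || [[ ",$SUPPORTED," == *",$type,"* ]]; then
    if [[ ",$EXCLUDED," != *",$type,"* ]]; then
      ENABLED="$ENABLED $type"
    fi
  fi
done
echo "enabled:$ENABLED"
```

With these example values, HTTP and MQTT integrations are processed while KAFKA integrations are skipped on this executor.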
+
+
+## Integration common parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | integrations.statistics.enabled |
+ INTEGRATIONS_STATISTICS_ENABLED |
+ true |
+ Enable/disable integrations statistics |
+
+
+ | integrations.statistics.persist-frequency |
+ INTEGRATIONS_STATISTICS_PERSIST_FREQUENCY |
+ 3600000 |
+ Integration statistic persistence frequency in milliseconds |
+
+
+ | integrations.init.connection-timeout-sec |
+ INTEGRATIONS_INIT_CONNECTION_TIMEOUT_SEC |
+ 15 |
+ Maximum connection timeout allowed for integrations in seconds. Any greater user-defined timeout will be reduced to this limit |
+
+
+ | integrations.init.connection-check-api-request-timeout-sec |
+ INTEGRATIONS_INIT_CONNECTION_CHECK_API_REQUEST_TIMEOUT_SEC |
+ 20 |
+ Connection check timeout for API request in seconds |
+
+
+ | integrations.reinit.enabled |
+ INTEGRATIONS_REINIT_ENABLED |
+ true |
+ Enable/disable integrations hot reinitialization. This process is done for integrations with state 'FAILED' |
+
+
+ | integrations.reinit.frequency |
+ INTEGRATIONS_REINIT_FREQUENCY_MS |
+ 300000 |
+ Interval in milliseconds between checks for integrations to reinitialize. Defaults to 5 minutes |
+
+
+ | integrations.destroy.graceful-timeout-ms |
+ INTEGRATIONS_DESTROY_TIMEOUT_MS |
+ 1000 |
+ The duration (in milliseconds) to wait during each iteration of the graceful shutdown process for integrations to terminate properly.
+ Default value is set to 1 second |
+
+
+ | integrations.destroy.count |
+ INTEGRATIONS_DESTROY_COUNT |
+ 10 |
+ The number of iterations to attempt a graceful shutdown before forcefully stopping the process |
+
+
+ | integrations.destroy.forced-shutdown-timeout-ms |
+ INTEGRATIONS_DESTROY_FORCED_SHUTDOWN_TIMEOUT_MS |
+ 15000 |
+ The maximum duration (in milliseconds) to wait before forcefully stopping the application if the graceful shutdown process has not started or exceeds the allowed time |
+
+
+ | integrations.allow-local-network-hosts |
+ INTEGRATIONS_ALLOW_LOCAL_NETWORK_HOSTS |
+ true |
+ Enable/disable integrations local network hosts |
+
+
+ | integrations.uplink.callback-threads-count |
+ INTEGRATIONS_UPLINK_THREADS |
+ 4 |
+ Number of threads in the pool to process callbacks of uplink events to tbmq nodes |
+
+
+ | integrations.manage.lifecycle-threads-count |
+ INTEGRATIONS_MANAGE_LIFECYCLE_THREADS |
+ 4 |
+ Number of threads in the pool to process lifecycle events (CREATE/UPDATE/DELETE) of integrations |
+
+
+ | integrations.manage.command-threads-count |
+ INTEGRATIONS_MANAGE_COMMAND_THREADS |
+ 4 |
+ Number of threads in the pool to process integration validation requests |
+
+
+ | integrations.external.threads-count |
+ INTEGRATIONS_EXTERNAL_THREADS |
+ 10 |
+ Number of threads in the pool dedicated to handling external operations, such as producing messages to Kafka topics |
+
+
+ | integrations.netty.threads-count |
+ INTEGRATIONS_NETTY_SHARED_GROUP_THREADS |
+ 0 |
+ Netty shared worker group threads count. Defaults to 0, meaning the thread count is the number of available processors * 2.
+ Used to send messages to external MQTT brokers using MQTT bridge (integration) |
+
+
+
+
+
+## Management parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | management.health.diskspace.enabled |
+ HEALTH_DISKSPACE_ENABLED |
+ false |
+ Enable/disable disk space health check |
+
+
+ | management.endpoint.health.show-details |
+ HEALTH_SHOW_DETAILS |
+ never |
+ Controls whether health endpoint shows full component details (e.g., Redis, DB, TBMQ).
+ Options:
+ - 'never': always hide details (default if security is enabled).
+ - 'when-authorized': show details only to authenticated users.
+ - 'always': always include full health details in the response |
+
+
+ | management.endpoints.web.exposure.include |
+ METRICS_ENDPOINTS_EXPOSE |
+ health,info,prometheus |
+ Specify which Actuator endpoints should be exposed via HTTP.
+ Use 'health,info' to expose only basic health and information endpoints.
+ For exposing Prometheus metrics, update this to include 'prometheus' in the list (e.g., 'health,info,prometheus') |
+
+
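For example, to expose Prometheus metrics and show health details only to authenticated users, the corresponding environment variables from the table above can be set before starting the executor. This is a sketch; how you actually pass environment variables depends on your deployment method (shell, docker-compose `environment` entries, Kubernetes manifests, etc.):

```shell
# Illustrative overrides for the management parameters above; adapt the
# mechanism to your deployment (shell, docker, kubernetes).
export METRICS_ENDPOINTS_EXPOSE="health,info,prometheus"
export HEALTH_SHOW_DETAILS="when-authorized"
echo "$METRICS_ENDPOINTS_EXPOSE"
```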
+
+
+
+## Statistics parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | stats.ie.enabled |
+ STATS_IE_ENABLED |
+ true |
+ Enable/disable stats printing to the logs |
+
+
+ | stats.ie.print-interval-ms |
+ STATS_IE_PRINT_INTERVAL_MS |
+ 60000 |
+ Period in milliseconds to print stats. Default value corresponds to 1 minute |
+
+
+ | stats.timer.percentiles |
+ STATS_TIMER_PERCENTILES |
+ 0.5 |
+ Metrics percentiles returned by actuator for timer metrics. Comma-separated list of double values |
+
+
+ | stats.system-info.persist-frequency |
+ STATS_SYSTEM_INFO_PERSIST_FREQUENCY_SEC |
+ 60 |
+ Persist frequency of system info (CPU, memory usage, etc.) in seconds |
+
+
+
+
+
+## Event configuration parameters
+
+
+
+
+ | Parameter | Environment Variable | Default Value | Description |
+
+
+
+
+ | event.error.rate-limits.enabled |
+ EVENT_ERROR_RATE_LIMITS_ENABLED |
+ true |
+ If true, rate limits will be active |
+
+
+ | event.error.rate-limits.integration |
+ EVENT_ERROR_RATE_LIMITS_INTEGRATION |
+ 5000:3600,100:60 |
+ No more than 5000 messages per hour or 100 messages per minute for all integrations |
+
+
+ | event.error.rate-limits.ttl-minutes |
+ EVENT_ERROR_RATE_LIMITS_TTL |
+ 60 |
+ Time (in minutes) to prevent duplicate persistence of rate limit events once the error event rate limit is reached |
+
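The rate-limit value is a comma-separated list of `<count>:<durationSeconds>` pairs; the default `5000:3600,100:60` expresses the two limits described above (5000 per hour, 100 per minute). A small sketch of how such a value decomposes:

```shell
# Decompose the rate-limit format "<count>:<durationSeconds>,..." used by
# EVENT_ERROR_RATE_LIMITS_INTEGRATION (the default value is shown).
LIMITS="5000:3600,100:60"
IFS=',' read -ra PAIRS <<< "$LIMITS"
for pair in "${PAIRS[@]}"; do
  echo "at most ${pair%%:*} events per ${pair#*:} seconds"
done
```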
+
+
diff --git a/_includes/docs/pe/mqtt-broker/oauth-2-support.md b/_includes/docs/pe/mqtt-broker/oauth-2-support.md
new file mode 100644
index 0000000000..8af5cb2e86
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/oauth-2-support.md
@@ -0,0 +1,506 @@
+* TOC
+{:toc}
+
+## Overview
+
+TBMQ Professional Edition allows you to provide Single Sign-On (SSO) functionality for your users and automatically create Administrators or Viewers using external user management platforms that support the OAuth 2.0 protocol.
+Examples of platforms that support OAuth 2.0 include: [Google](#login-with-google), [Auth0](#login-with-auth0), [Keycloak](#login-with-keycloak), [Okta](https://www.okta.com/){:target="_blank"}, [Azure](https://portal.azure.com/){:target="_blank"}, etc.
+
+## OAuth 2.0 authentication flow
+
+TBMQ supports the Authorization Code grant type to exchange an authorization code for an access token.
+Once the user is redirected back to the TBMQ client, the platform retrieves the authorization code from the URL and uses it to request an access token from the external user management platform.
+Using the [basic mapper](#basic-mapper) or [custom mapper](#custom-mapper), the external user info object is converted from the external platform into a TBMQ internal OAuth 2.0 user.
+After this, the regular TBMQ authorization flow takes place.
+
+## Setting up authentication via an external provider
+
+OAuth 2.0 clients are configured separately from domains, allowing reuse of the configured client and making the settings clearer.
+To use authentication through an external provider, first configure an OAuth 2.0 client with the necessary credentials.
+Then, either add a new domain or use an existing one, and update the OAuth 2.0 client list with the new client.
+
+### Operations with domain
+
+**Adding a domain**
+
+Follow these steps to add a new domain:
+
+* On the "Domains" tab of the "OAuth 2.0" page, click the "plus" icon to add a new domain;
+* Provide your domain name and OAuth 2.0 client;
+* Click "Add" to finalize.
+
+{% include images-gallery.html imageCollection="adding-domain-1" %}
+
+**Editing a domain**
+
+To update the settings for an existing domain, follow these steps:
+
+* Click on the domain to view its details;
+* Switch to editing mode by clicking the large orange button;
+* Make the required modifications;
+* Save your changes by clicking the "Apply changes" button.
+
+{% include images-gallery.html imageCollection="editing-domain-1" %}
+
+**Deleting a domain**
+
+To remove a domain, follow these steps:
+
+* Click the "trash" icon in the row of the domain you wish to remove;
+* Confirm the deletion by clicking "Yes".
+
+{% include images-gallery.html imageCollection="deleting-domain-1" %}
+
+### Operations with OAuth 2.0 client
+
+**Adding an OAuth 2.0 client**
+
+Follow these steps to add a new OAuth 2.0 client to TBMQ:
+
+* Navigate to the "OAuth 2.0 clients" tab on the "OAuth 2.0" page, and click the "plus" icon;
+* Enter a descriptive title for the client;
+* Select the authentication provider from the dropdown menu;
+* Provide the Client ID and Client Secret obtained from your authentication provider;
+* Configure advanced settings as necessary;
+* Click "Add" to finalize.
+
+{% include images-gallery.html imageCollection="adding-oauth2-client-1" %}
+
+**Editing an OAuth 2.0 client**
+
+To update an existing OAuth 2.0 client:
+
+* Click on the client to view its details;
+* Switch to editing mode by clicking the large orange button;
+* Make the required modifications;
+* Save your changes by clicking the "Apply changes" button.
+
+{% include images-gallery.html imageCollection="editing-oauth2-client-1" %}
+
+**Deleting an OAuth 2.0 client**
+
+To remove obsolete or unused clients:
+
+* Click the "trash" icon in the row of the client you wish to remove;
+* Confirm the deletion by clicking "Yes".
+
+{% include images-gallery.html imageCollection="deleting-oauth2-client-1" %}
+
+## Login with Google
+
+In this example, we will use [Google](https://developers.google.com/identity/protocols/oauth2/openid-connect){:target="_blank"} for authentication.
+
+To map external user information from Google to the OAuth platform, use the built-in [basic mapper](#basic-mapper).
+If the [basic mapper](#basic-mapper) does not fit your business needs, you can configure a [custom mapper](#custom-mapper) for more flexibility.
+
+### Preparations
+
+To use Google OAuth 2.0 authentication, you must set up a project in the [Google API Console](https://console.developers.google.com/){:target="_blank"} to obtain OAuth 2.0 credentials.
+
+Follow the instructions on the [OpenID Connect](https://developers.google.com/identity/protocols/oauth2/openid-connect){:target="_blank"} page or the steps below to configure the OAuth 2.0 client.
+After completing this setup, you will have a Client ID and a Client Secret.
+
+* Go to the "Credentials" page in the left menu and select "OAuth client ID" from the "Create credentials" dropdown;
+* Enter an OAuth client name and add the TBMQ redirect URI in the "Authorized Redirect URIs" section using the format:
+
+```
+http(s)://domain:port/login/oauth2/code/
+```
+{: .copy-code}
+
+* Replace `domain` with your TBMQ domain and specify the port used for HTTP access.
+ For example, if your domain is *my.tbmq.org*:
+
+```
+https://my.tbmq.org/login/oauth2/code/
+```
+
+* Click "Create".
+
+The OAuth client is now created. You have credentials consisting of a *Client ID* and a *Client Secret*.
+
+{% include images-gallery.html imageCollection="google-credentials-for-oauth-1" %}
+
+### Configuring Google as an OAuth 2.0 authentication provider in TBMQ
+
+To configure OAuth 2.0 authentication in TBMQ via Google, follow these steps:
+
+* Log in to your TBMQ instance;
+* Go to the "OAuth 2.0" page in the "Security" section;
+* On the "Domains" tab, click the "plus" icon;
+* Enter your domain name or IP address of your TBMQ instance;
+* Click "Create new" in the "OAuth 2.0 clients" section to add one.
+
+{% include images-gallery.html imageCollection="google-configuration-of-thingsboard-google-1" %}
+
+Adding a new OAuth 2.0 client:
+
+* Enter "Google" as the title;
+* Set the provider to "Google";
+* Enter the Client ID and Client Secret from the [Google API Console](https://console.developers.google.com/){:target="_blank"}.
+
+Then expand the "Advanced settings" section and configure the "General" block:
+
+* Use this [link](https://developers.google.com/identity/protocols/oauth2/openid-connect#discovery){:target="_blank"} to see the latest URLs such as "Access Token URI" and "Authorization URI";
+* Select "POST" as the client authentication method;
+* Enable the "Allow user creation" option;
+* Add the following to the scope field: `email`, `openid`, `profile`.
+
+{% include images-gallery.html imageCollection="google-configuration-of-thingsboard-google-2" %}
+
+Go to the "Mapper" block:
+
+* Keep the mapper type as "BASIC";
+* Specify the role to be assigned;
+* Click "Add".
+
+{% include images-gallery.html imageCollection="google-configuration-of-thingsboard-google-3" %}
+
+* The OAuth client has been added successfully. Click "Add" again to confirm the addition of the domain.
+
+A new domain has now been added.
+
+{% include images-gallery.html imageCollection="google-configuration-of-thingsboard-google-4" %}
+
+### Sign in
+
+Now, go to the TBMQ login screen. You will see a new "Login with Google" option.
+Select one of your Google accounts, and you will be logged into TBMQ using your Google email.
+
+{% include images-gallery.html imageCollection="login-with-google-1" %}
+
+Go to the "Users" page to find the newly created user.
+
+{% include images-gallery.html imageCollection="login-with-google-2" %}
+
+## Login with Auth0
+
+In this example, we will configure **OAuth** authentication using an external provider – [Auth0](https://auth0.com/){:target="_blank"}.
+
+To map external user information from Auth0 to the OAuth platform, we use the built-in [basic mapper](#basic-mapper).
+
+If the [basic mapper](#basic-mapper) does not fit your business needs, you can configure the [custom mapper](#custom-mapper) to implement mapping that suits your requirements.
+
+### Preparations
+
+Now let's add another provider to our list – [Auth0](https://auth0.com/){:target="_blank"}.
+
+To apply the configuration properly, we first need to obtain OAuth 2.0 credentials:
+
+* Go to the [Auth0 management console](https://manage.auth0.com/){:target="_blank"}. Open the "Applications" page, and click the "+ Create Application" button;
+* Name your application "TBMQ" and choose the application type **Regular Web Applications**;
+* Next, choose the technology being used. Please select **Java Spring Boot**;
+* Once your application is created, you are redirected to the application details page. Navigate to the **Settings** tab to find the *Client ID* and *Client Secret*;
+* In the **Allowed Callback URLs** field, update the redirect URI using the format:
+
+```
+http(s)://domain:port/login/oauth2/code/
+```
+{: .copy-code}
+
+* Replace `domain` with your TBMQ domain and specify the port used for HTTP access.
+ For example, if your domain is *my.tbmq.org*:
+
+```
+https://my.tbmq.org/login/oauth2/code/
+```
+
+{% capture difference %}
+Please note that it is not necessary to update the Application Login URI.
+{% endcapture %}
+{% include templates/info-banner.md content=difference %}
+
+* In the **Advanced Settings** section, you will find all the necessary URLs (endpoints) required for configuring OAuth 2.0;
+* Click the **Save Changes** button.
+
+{% include images-gallery.html imageCollection="auth0-credentials-1" %}
+
+### Configuring Auth0 as an OAuth 2.0 authentication provider in TBMQ
+
+To configure OAuth 2.0 authentication in TBMQ via Auth0, follow these steps:
+
+* Log in to your TBMQ instance;
+* Go to the "OAuth 2.0" page in the "Security" section;
+* On the "Domains" tab, click the "plus" icon;
+* Enter your domain name or IP address of your TBMQ instance;
+* Click "Create new" in the "OAuth 2.0 clients" section to add a new one.
+
+{% include images-gallery.html imageCollection="oauth0-configuration-of-thingsboard-1" %}
+
+Adding a new OAuth 2.0 client:
+
+* In the opened window, enter **Auth0** as the title for the client;
+* Select **Custom** as the provider from the dropdown;
+* Enter the *Client ID* and *Client Secret* obtained from the [Auth0 management console](https://manage.auth0.com/){:target="_blank"}.
+
+In the **General** block of the "Advanced settings" section:
+
+* Fill in all the required URLs using the values obtained from the [Auth0 management console](https://manage.auth0.com/){:target="_blank"};
+* Select **POST** as the client authentication method;
+* Enter **Auth0** as the provider label;
+* Add the following scopes to the scope field: `openid`, `email`, `profile`.
+
+{% include images-gallery.html imageCollection="oauth0-configuration-of-thingsboard-2" %}
+
+
+Proceed to the "Mapper" block:
+* Leave the mapper type as **BASIC**;
+* Specify the role to be used;
+* Click **Add** to complete the addition of the new OAuth 2.0 client.
+
+{% include images-gallery.html imageCollection="oauth0-configuration-of-thingsboard-3" %}
+
+* The Auth0 client has been successfully added. Click **Add** again to confirm the addition of the domain.
+
+{% include images-gallery.html imageCollection="oauth0-configuration-of-thingsboard-4" %}
+
+### Sign in
+
+Navigate to the login screen. You will now find the Auth0 login method.
+Click the "Login with Auth0" button.
+
+{% include images-gallery.html imageCollection="login-with-oauth0-1" %}
+
+Go to the "Users" page. There you will find that a new user has been created.
+
+{% include images-gallery.html imageCollection="login-with-oauth0-2" %}
+
+## Login with Keycloak
+
+In this example, we will use [Keycloak](https://www.keycloak.org/){:target="_blank"} for authentication.
+
+To map external user information from Keycloak to the OAuth platform, we use the built-in [basic mapper](#basic-mapper).
+
+If the [basic mapper](#basic-mapper) does not fit your business needs, you can configure the [custom mapper](#custom-mapper) to implement a mapping that fits your requirements.
+
+### Preparations
+
+To use Keycloak for authentication, you need to set up a project in [Keycloak](https://www.keycloak.org/){:target="_blank"} to obtain OAuth 2.0 credentials.
+Follow the [official instructions](https://www.keycloak.org/guides){:target="_blank"} or the steps below.
+By the end, you should have a new Keycloak client with credentials consisting of a Client ID and a Client Secret.
+
+**Start Keycloak**
+
+Get started with Keycloak using your [preferred method](https://www.keycloak.org/guides){:target="_blank"}.
+In this example, we will run a test Keycloak server on Docker.
+
+* Make sure you have [Docker](https://docs.docker.com/compose/install/){:target="_blank"} installed;
+* Run the command below to start Keycloak locally on port 8081 and create an initial admin user with the username **admin** and password **admin**:
+
+```bash
+docker run -p 8081:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:26.0.5 start-dev
+```
+{: .copy-code}
+
+{% include images-gallery.html imageCollection="terminal-start-keycloak" %}
+
+**Log in to the admin console**
+
+* Log in to the [Keycloak Admin Console](http://localhost:8081/admin){:target="_blank"} using **admin** as username and password.
+
+{% include images-gallery.html imageCollection="log-in-to-admin-console" %}
+
+**Create a realm**
+
+* Click "Keycloak" next to the master realm, then click the "Create realm" button;
+* Enter **ThingsBoard** in the realm name field, and click "Create".
+
+The new realm has been created.
+
+{% include images-gallery.html imageCollection="create-new-realm" %}
+
+**Create a new client**
+
+A client represents an application or service that requests user authentication.
+
+* Go to the "Clients" page in the left-hand menu, and click the "Create client" button;
+* Enter **thingsboard** as the client ID. Leave the client type as **OpenID Connect**. Click "Next";
+* Enable the **Client authentication** option. Confirm that **Standard flow** is enabled. Click "Next";
+* In the "Login settings" section, add the TBMQ redirect URI in the **Authorized Redirect URIs** section using the format:
+
+```
+http(s)://domain:port/login/oauth2/code/
+```
+{: .copy-code}
+
+* Replace `domain` with your TBMQ domain and specify the port used for HTTP access.
+ For example, if your domain is *my.thingsboard.instance*:
+
+```
+https://my.thingsboard.instance/login/oauth2/code/
+```
+
+* Click "Save".
+
+Client created successfully.
+
+{% include images-gallery.html imageCollection="create-client" %}
+
+
+You now have credentials consisting of a Client ID and a Client Secret.
+The Client ID is available on the "Settings" tab. The Client Secret is located on the "Credentials" tab.
+
+{% include images-gallery.html imageCollection="client-id-and-secret" %}
+
+#### Endpoints
+
+As a fully compliant OpenID Connect Provider, Keycloak exposes a set of endpoints that applications and services can use for authentication and authorization.
+
+* Go to the "Realm settings" page;
+* Scroll down and locate the "OpenID Endpoint Configuration" link, then click it;
+* A new window will open with the configuration. Check the "Pretty-print" option for easier reading.
+
+Here you can find values such as **Access Token URI**, **Authorization URI**, **JSON Web Key URI**, and **User Info URI**, which are required for configuring the OAuth 2.0 client in TBMQ.
+A description of the available endpoints is provided [here](https://www.keycloak.org/securing-apps/oidc-layers){:target="_blank"}.
+
+{% include images-gallery.html imageCollection="endpoint-configuration" %}
+
+### Create a user
+
+Now add a user. Only added users will be able to authenticate via Keycloak.
+
+* Go to the "Users" page in the left-hand menu;
+* Click "Create new user";
+* Enter the username and email address. First name and last name are optional;
+* Click "Create".
+
+The user has been created.
+
+{% include images-gallery.html imageCollection="create-user" %}
+
+Set a password for this user:
+
+* Navigate to the "Credentials" tab and click **Set password**;
+* Fill in the password form. Toggle **Temporary** to **Off** so that the user is not forced to change the password on first login;
+* Click **Save password**.
+
+The password has been set successfully.
+
+{% include images-gallery.html imageCollection="create-password" %}
+
+### Configuring Keycloak as an OAuth 2.0 authentication provider in TBMQ
+
+To configure OAuth 2.0 authentication in TBMQ via Keycloak, follow the steps below:
+
+* Log in to your TBMQ instance;
+* Go to the "OAuth 2.0" page of the "Security" section;
+* Navigate to the "OAuth 2.0 clients" tab, and click the "plus" icon;
+* Enter **Keycloak** as the title;
+* Select **Custom** as the provider from the dropdown menu;
+* Enter the *Client ID* and *Client Secret* retrieved from the [Keycloak console](http://localhost:8081/admin){:target="_blank"}.
+
+Then expand the "Advanced settings" menu and configure the "General" block:
+
+* Use the [endpoint configuration file](#endpoints) to find the current values for **Access Token URI**, **Authorization URI**, **JSON Web Key URI**, and **User Info URI**. Fill in the corresponding fields;
+* Set the client authentication method to **POST**;
+* Enter **Keycloak** as the provider label;
+* Add the following scopes: `email`, `openid`, `profile`.
+
+{% include images-gallery.html imageCollection="keycloak-add-thingsboard-oauth-client-1" %}
+
+Go to the "Mapper" block:
+
+* Leave the mapper type as **BASIC**;
+* Specify the role to be used;
+* Click **Add** to confirm.
+
+A new OAuth 2.0 client has been added.
+
+{% include images-gallery.html imageCollection="keycloak-add-thingsboard-oauth-client-2" %}
+
+
+Now, add a new domain:
+
+* Go to the "Domains" tab of the "OAuth 2.0" page, and click the "plus" icon;
+* Enter your domain name or IP address of your TBMQ instance;
+* Specify **Keycloak** as the OAuth 2.0 client;
+* Click **Add** again to confirm.
+
+A new domain has been added.
+
+{% include images-gallery.html imageCollection="keycloak-add-domain" %}
+
+### Sign in
+
+Go to the TBMQ login screen. You will now see the option **Login with Keycloak**.
+Click this button. A window will open prompting you to sign in to your Keycloak account.
+Enter your Keycloak credentials and click **Sign In**. You are now logged into TBMQ using Keycloak.
+
+{% include images-gallery.html imageCollection="login-with-keycloak-1" %}
+
+Go to the "Users" page. There you will find that a new user has been created.
+
+{% include images-gallery.html imageCollection="login-with-keycloak-2" %}
+
+## Mapping of the external user into the TBMQ internal user structure
+
+Mapping an external user info object into a TBMQ user can be achieved using the [Basic](#basic-mapper), [Custom](#custom-mapper), GitHub, and Apple mappers.
+
+### Basic mapper
+
+The basic mapper merges an external OAuth 2.0 user info object into the TBMQ OAuth 2.0 user with a predefined set of rules.
+
+To use the basic mapper, set the mapper type to **Basic**.
+
+{% include images-gallery.html imageCollection="mapper-basic-1" %}
+
+Details of the available properties:
+
+* **Allow user creation** – If enabled, and the user account does not yet exist in TBMQ, it will be created automatically.
+ If disabled, the user will receive an *access denied* error when trying to log in with an external OAuth 2.0 provider, if no corresponding TBMQ user exists.
+
+* **Email attribute key** – The attribute key from the external OAuth 2.0 user info that will be used for the TBMQ user email property.
+
+* **First name attribute key** – The attribute key from the external OAuth 2.0 user info that will be used for the TBMQ user first name property.
+
+* **Last name attribute key** – The attribute key from the external OAuth 2.0 user info that will be used for the TBMQ user last name property.
+
+* **Role** – Choose from the predefined roles to be assigned to the user.
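+
+For illustration, a typical OpenID Connect user info response might look like the following. The exact attribute names depend on your identity provider, so treat these keys as an example rather than a guarantee:
+
+```json
+{
+  "sub": "f1e2d3c4",
+  "email": "jane.doe@example.com",
+  "given_name": "Jane",
+  "family_name": "Doe"
+}
+```
+
+With such a payload, you would set the email, first name, and last name attribute keys to `email`, `given_name`, and `family_name`, respectively.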
+
+> {% include templates/mqtt-broker/security/user-password.md %}
+
+### Custom mapper
+
+If the basic mapper functionality does not meet your business needs, you can configure a custom mapper to implement logic that fits your specific goals.
+
+A custom mapper is designed as a separate microservice running alongside the TBMQ microservice.
+TBMQ forwards all mapping requests to this microservice and expects a TBMQ OAuth 2.0 user object in response.
+
+Refer to this [base implementation](https://github.com/thingsboard/tbmq-custom-oauth2-mapper){:target="_blank"} as a starting point for your custom mapper.
+
+To use the custom mapper, set the mapper type to **Custom**.
+
+{% include images-gallery.html imageCollection="mapper-custom-1" %}
+
+Details of the available properties:
+
+* **URL** – The URL of the custom mapper endpoint;
+* **username** – If the custom mapper endpoint is configured with basic authentication, specify the *username* here;
+* **password** – If the custom mapper endpoint is configured with basic authentication, specify the *password* here.
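+
+For illustration only, the user object a custom mapper returns to TBMQ could resemble the sketch below. The field names here are assumptions made for the sake of the example; the authoritative request and response contract is defined by the base implementation linked above.
+
+```json
+{
+  "email": "jane.doe@example.com",
+  "firstName": "Jane",
+  "lastName": "Doe"
+}
+```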
+
+## HAProxy configuration
+
+If TBMQ is running behind a load balancer such as HAProxy, configure the balancing algorithm so that requests from the same client are always routed to the same TBMQ instance, keeping the session intact:
+
+```bash
+backend tbmq-api-backend
+ ...
+ balance source # balance must be set to 'source'
+ ...
+```
+
+Also, configure ACL mapping for HTTP and HTTPS requests:
+
+```bash
+frontend http-in
+ ...
+ acl tbmq_api_acl path_beg /api/ /swagger /webjars /v2/ /oauth2/ /login/oauth2/ # '/oauth2/ /login/oauth2/' added
+ ...
+```
+
+```bash
+frontend https_in
+ ...
+ acl tbmq_api_acl path_beg /api/ /swagger /webjars /v2/ /oauth2/ /login/oauth2/ # '/oauth2/ /login/oauth2/' added
+ ...
+```
diff --git a/_includes/docs/pe/mqtt-broker/rbac.md b/_includes/docs/pe/mqtt-broker/rbac.md
new file mode 100644
index 0000000000..52a13d81f1
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/rbac.md
@@ -0,0 +1,24 @@
+* TOC
+{:toc}
+
+**Role-Based Access Control (RBAC)** in TBMQ PE provides a structured and secure way to manage access to broker features
+and operations through predefined [user roles](/docs/pe/mqtt-broker/user-guide/ui/users/).
+This system enables administrators to grant appropriate permissions to users based on their role within the organization.
+
+### Available Roles
+
+TBMQ PE introduces two predefined user roles:
+
+* **Admin**: Full access to all broker features.
+* **Viewer**: Read-only access to all broker data — without the ability to perform changes or administrative actions.
+
+### Benefits
+
+* **Enhanced security**: Limit access to sensitive broker features based on user responsibilities.
+* **Simplified administration**: Easily assign predefined roles to users without managing granular permissions.
+* **Compliance and auditing**: Support best practices in access control by separating duties between administrative and observability roles.
+
+### Use Cases
+
+* Assign the **Admin** role to team members responsible for configuring or maintaining the broker environment.
+* Grant the **Viewer** role to operations or monitoring personnel who need visibility into system health and client behavior without risking configuration changes.
diff --git a/_includes/docs/pe/mqtt-broker/white-labeling.md b/_includes/docs/pe/mqtt-broker/white-labeling.md
new file mode 100644
index 0000000000..36f6858979
--- /dev/null
+++ b/_includes/docs/pe/mqtt-broker/white-labeling.md
@@ -0,0 +1,100 @@
+{% assign feature = "White labeling" %}{% include templates/mqtt-broker/pe-tbmq-feature-banner.md %}
+
+* TOC
+{:toc}
+
+## Overview
+
+White labeling lets you tailor your TBMQ instance to match your brand and preferences - especially useful for companies delivering IoT solutions to their customers.
+
+Set your company or product name, upload your logo, and choose color palettes.
+
+## Customize TBMQ web interface
+
+To configure your company or product **logo** and **color scheme**, go to the "White labeling" page.
+
+{% include images-gallery.html imageCollection="white-labeling-default" %}
+
+In the "General" tab you can set or change the following options:
+
+ - Application title - you can specify a custom page title, which is displayed in the browser tab;
+ - Favicon (website icon) - you can change the default website icon to your own;
+ - Logo - you can change the standard logo in the upper left corner to your company logo;
+ - Logo height - you can resize the logo;
+
+
+
+ - White labeling allows you to customize the color theme by adjusting the primary and accent palettes to match your desired UI design.
+
+ - Primary palette - you can customize the background color and font color by choosing one of the suggested UI design options or customizing an existing one;
+ - Accent palette - you can customize the color for some elements, for example, for a toggle;
+
+ 
+
+ - Advanced CSS - you can style any element of the TBMQ user interface as you wish. We will cover this functionality in more detail [below](#advanced-css);
+ - Show/hide platform name and version - by checking this option, the name of the platform and its current version will be displayed in the lower left corner.
+
+ 
+
+The final look of the customized user interface:
+
+
+
+### Advanced CSS
+
+Using CSS, you can style any element of the TBMQ user interface as you wish, such as the background, icons, fonts, and more.
+
+To use CSS in your UI design, do the following:
+
+{% include images-gallery.html imageCollection="advanced-css" showListImageTitles="true" %}
+
+
+A CSS code example for customizing the icon color, scroll color, and active button:
+
+```css
+/*icon color*/
+
+a.mat-mdc-button.mat-mdc-button-base .mat-icon {
+ color: #a60062;
+}
+
+/*scroll color*/
+
+mat-toolbar::-webkit-scrollbar-thumb,
+div::-webkit-scrollbar-thumb,
+ng-component::-webkit-scrollbar-thumb {
+ background-color: #c526a5 !important;
+ background-image: linear-gradient(#e72c83, #a742c6);
+ border-radius: 200px/300px !important;
+  border: 0.1rem solid transparent;
+}
+
+/*active button color*/
+
+.mat-mdc-button.mat-mdc-button-base.tb-active{
+ color: #a60062;
+}
+```
+{: .copy-code}
+
+Using the functionality described in this documentation, you can customize the appearance of the TBMQ UI according to your preferences.
+
+## Customize the login page
+
+On the "Login" tab, you can configure the TBMQ **login page**.
+
+- Enter the registered domain name, or refer to [this documentation](/docs/pe/mqtt-broker/security/domains/#domain-registration){:target="_blank"} to learn how to register a new domain;
+- It is recommended to disallow using hostnames taken from request headers;
+- Enter a custom application title;
+- Replace the default website icon and logo with your own;
+- Define the primary and accent color palettes;
+- Set the page background color.
+
+Once done, save the changes.
+
+{% include images-gallery.html imageCollection="customize-login-page" %}
+
+
+Now, use your custom domain name to access the TBMQ web interface login page and verify the result of your configuration.
+
+{% include images-gallery.html imageCollection="verify-result-customize-login-page" %}
diff --git a/_includes/docwithnav.html b/_includes/docwithnav.html
index 5b7d10da15..6d4080f735 100755
--- a/_includes/docwithnav.html
+++ b/_includes/docwithnav.html
@@ -10,6 +10,8 @@
{% assign searchPath = "/docs/" | append: "pe/mobile" %}
{% elsif docsTag == "paas-eu" %}
{% assign searchPath = "/docs/" | append: "paas/eu" %}
+ {% elsif docsTag == "mqtt-broker-pe" %}
+ {% assign searchPath = "/docs/" | append: "pe/mqtt-broker" %}
{% else %}
{% assign searchPath = "/docs/" | append: docsTag %}
{% endif %}
diff --git a/_includes/head-header.html b/_includes/head-header.html
index c715785766..1407064a93 100644
--- a/_includes/head-header.html
+++ b/_includes/head-header.html
@@ -8,6 +8,9 @@
{% if docsTag == "mqtt-broker" %}
{% assign githubLink = "https://github.com/thingsboard/tbmq" %}
{% assign githubLabel = "Star thingsboard/tbmq on GitHub" %}
+ {% elsif docsTag == "mqtt-broker-pe" %}
+ {% assign githubLink = "https://github.com/thingsboard/tbmq" %}
+ {% assign githubLabel = "Star thingsboard/tbmq on GitHub" %}
{% elsif docsTag == "gw" %}
{% assign githubLink = "https://github.com/thingsboard/thingsboard-gateway" %}
{% assign githubLabel = "Star thingsboard/thingsboard-gateway on GitHub" %}
diff --git a/_includes/templates/mqtt-broker-guides-banner.md b/_includes/templates/mqtt-broker-guides-banner.md
index 2518b9d5d9..1f097f1be9 100644
--- a/_includes/templates/mqtt-broker-guides-banner.md
+++ b/_includes/templates/mqtt-broker-guides-banner.md
@@ -1,18 +1,18 @@
{% if currentGuide != "GettingStartedGuide" %}
-- [**Getting started guide**](/docs/mqtt-broker/getting-started/) - This guide provide quick overview of TBMQ.
+- [**Getting started guide**](/docs/{{docsPrefix}}mqtt-broker/getting-started/) - This guide provides a quick overview of TBMQ.
{% endif %}
{% if currentGuide != "InstallationGuides" %}
-- [**Installation guides**](/docs/mqtt-broker/install/installation-options/) - Learn how to set up TBMQ using Docker or deploy it in K8S environments on AWS, GCP, and Azure.
+- [**Installation guides**](/docs/{{docsPrefix}}mqtt-broker/install/installation-options/) - Learn how to set up TBMQ using Docker or deploy it in K8S environments on AWS, GCP, and Azure.
{% endif %}
{% if currentGuide != "SecurityGuide" %}
-- [**Security guide**](/docs/mqtt-broker/security/overview/) - Learn how to enable authentication and authorization for MQTT clients.
+- [**Security guide**](/docs/{{docsPrefix}}mqtt-broker/security/overview/) - Learn how to enable authentication and authorization for MQTT clients.
{% endif %}
{% if currentGuide != "ConfigurationGuide" %}
-- [**Configuration guide**](/docs/mqtt-broker/install/config/) - Learn about TBMQ configuration files and parameters.
+- [**Configuration guide**](/docs/{{docsPrefix}}mqtt-broker/install/config/) - Learn about TBMQ configuration files and parameters.
{% endif %}
{% if currentGuide != "MQTTClientTypeGuide" %}
-- [**MQTT client type guide**](/docs/mqtt-broker/user-guide/mqtt-client-type/) - Learn about TBMQ client types.
+- [**MQTT client type guide**](/docs/{{docsPrefix}}mqtt-broker/user-guide/mqtt-client-type/) - Learn about TBMQ client types.
{% endif %}
{% if currentGuide != "TBIntegrationGuide" %}
-- [**Integration with ThingsBoard**](/docs/mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq/) - Learn about how to integrate TBMQ with ThingsBoard.
+- [**Integration with ThingsBoard**](/docs/{{docsPrefix}}mqtt-broker/user-guide/integrations/how-to-connect-thingsboard-to-tbmq/) - Learn about how to integrate TBMQ with ThingsBoard.
{% endif %}
diff --git a/_includes/templates/mqtt-broker/application-shared-subscriptions.md b/_includes/templates/mqtt-broker/application-shared-subscriptions.md
index 1eb97b4e63..baa27aa84e 100644
--- a/_includes/templates/mqtt-broker/application-shared-subscriptions.md
+++ b/_includes/templates/mqtt-broker/application-shared-subscriptions.md
@@ -27,4 +27,4 @@ By default, this variable is enabled, meaning that the validation process is act
However, if you choose to disable this validation by setting the variable to _false_,
the system will no longer create Kafka topics for shared subscriptions having topic filters with special characters,
resulting in a failure to create the corresponding topics.
-It's important to consider this when configuring your environment and handling client IDs with special characters.
+It's important to consider this when configuring your environment and handling topic filters with special characters.
diff --git a/_includes/templates/mqtt-broker/install/aws/gp3-sc.md b/_includes/templates/mqtt-broker/install/aws/gp3-sc.md
new file mode 100644
index 0000000000..dcac75d52b
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/aws/gp3-sc.md
@@ -0,0 +1,43 @@
+The `gp3` EBS volume type is the recommended default for Amazon EKS, offering better performance, cost efficiency, and flexibility compared to `gp2`.
+
+Please download the storage class configuration file:
+
+```bash
+curl -o gp3-def-sc.yml https://raw.githubusercontent.com/thingsboard/tbmq/{{ site.release.broker_branch }}/k8s/helm/aws/gp3-def-sc.yml
+```
+{: .copy-code}
+
+Apply the configuration:
+
+```bash
+kubectl apply -f gp3-def-sc.yml
+```
+{: .copy-code}
+
+If a `gp2` StorageClass exists, it may conflict with `gp3`. You can either make the `gp2` StorageClass non-default:
+
+```bash
+kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
+```
+{: .copy-code}
+
+Or delete the `gp2` StorageClass (if unused):
+
+```bash
+kubectl delete storageclass gp2
+```
+{: .copy-code}
+
+Check that the `gp3` storage class is available and marked as default:
+
+```bash
+kubectl get sc
+```
+{: .copy-code}
+
+You should see output similar to the following:
+
+```text
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 30s
+```
diff --git a/_includes/templates/mqtt-broker/install/azure/aks-configure-and-create-cluster.md b/_includes/templates/mqtt-broker/install/azure/aks-configure-and-create-cluster.md
index abb51c0a32..fca5b1f2f7 100644
--- a/_includes/templates/mqtt-broker/install/azure/aks-configure-and-create-cluster.md
+++ b/_includes/templates/mqtt-broker/install/azure/aks-configure-and-create-cluster.md
@@ -7,7 +7,7 @@ az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION
To see more info about `az group` please follow the next [link](https://learn.microsoft.com/en-us/cli/azure/group?view=azure-cli-latest).
-After the Resource group is created, we can create AKS cluster by using the next command:
+After the Resource group is created, we can create the AKS cluster using the following command:
```bash
az aks create --resource-group $AKS_RESOURCE_GROUP \
@@ -35,4 +35,4 @@ We will use this gateway as Path-Based Load Balancer for the TBMQ.
Full list af `az aks create` options can be found [here](https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az_aks_create).
-Alternatively, you may use this [guide](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) for custom cluster setup.
\ No newline at end of file
+Alternatively, you may use this [guide](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) for custom cluster setup.
diff --git a/_includes/templates/mqtt-broker/install/azure/aks-define-env-variables.md b/_includes/templates/mqtt-broker/install/azure/aks-define-env-variables.md
index 318e27dd5d..91bb78accd 100644
--- a/_includes/templates/mqtt-broker/install/azure/aks-define-env-variables.md
+++ b/_includes/templates/mqtt-broker/install/azure/aks-define-env-variables.md
@@ -8,7 +8,6 @@ export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
-export TB_REDIS_NAME=tbmq-redis
echo "You variables ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION
and cluster in it $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"
```
@@ -16,8 +15,8 @@ and cluster in it $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"
where:
-* TBMQResources - a logical group in which Azure resources are deployed and managed. We will refer to it later in this guide using **AKS_RESOURCE_GROUP**;
-* eastus - is the location where you want to create resource group. We will refer to it later in this guide using **AKS_LOCATION**. You can see all locations list by executing `az account list-locations`;
-* tbmq-gateway - the name of Azure application gateway;
-* tbmq-cluster - cluster name. We will refer to it later in this guide using **TB_CLUSTER_NAME**;
-* tbmq-db is the name of your database server. You may input a different name. We will refer to it later in this guide using **TB_DATABASE_NAME**.
\ No newline at end of file
+* TBMQResources — a logical group in which Azure resources are deployed and managed. We will refer to it later in this guide using **AKS_RESOURCE_GROUP**;
+* eastus — the location where you want to create the resource group. We will refer to it later in this guide using **AKS_LOCATION**. You can list all available locations by executing `az account list-locations`;
+* tbmq-gateway — the name of Azure application gateway;
+* tbmq-cluster — cluster name. We will refer to it later in this guide using **TB_CLUSTER_NAME**;
+* tbmq-db — the name of your database server. You may input a different name. We will refer to it later in this guide using **TB_DATABASE_NAME**.
diff --git a/_includes/templates/mqtt-broker/install/azure/aks-prerequisites.md b/_includes/templates/mqtt-broker/install/azure/aks-prerequisites.md
index 3faf789bc3..30d46ffb7d 100644
--- a/_includes/templates/mqtt-broker/install/azure/aks-prerequisites.md
+++ b/_includes/templates/mqtt-broker/install/azure/aks-prerequisites.md
@@ -1,11 +1,11 @@
### Install and configure tools
-To deploy TBMQ on the AKS cluster you will need to install [kubectl](https://kubernetes.io/docs/tasks/tools/),
+To deploy TBMQ on the AKS cluster, you will need to install [kubectl](https://kubernetes.io/docs/tasks/tools/),
[helm](https://helm.sh/docs/intro/install/), and [az](https://learn.microsoft.com/en-us/cli/azure/) tools.
-After installation is done you need to log in to the cli using the next command:
+After installation is done, you need to log in to the CLI using the following command:
```bash
az login
```
-{: .copy-code}
\ No newline at end of file
+{: .copy-code}
diff --git a/_includes/templates/mqtt-broker/install/azure/aks-update-kubectl-ctx.md b/_includes/templates/mqtt-broker/install/azure/aks-update-kubectl-ctx.md
index 840cc33920..2a2c9b462b 100644
--- a/_includes/templates/mqtt-broker/install/azure/aks-update-kubectl-ctx.md
+++ b/_includes/templates/mqtt-broker/install/azure/aks-update-kubectl-ctx.md
@@ -12,4 +12,4 @@ kubectl get nodes
```
{: .copy-code}
-You should see cluster`s nodes list.
\ No newline at end of file
+You should see the list of the cluster's nodes.
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/configure-http-load-balancer.md b/_includes/templates/mqtt-broker/install/cluster-common/configure-http-load-balancer.md
index 4b907c799a..95d3faebcc 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/configure-http-load-balancer.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/configure-http-load-balancer.md
@@ -12,9 +12,9 @@ kubectl get ingress
```
{: .copy-code}
-Once provisioned, you should see similar output:
+Once provisioned, you should see output similar to the following:
```text
-NAME CLASS HOSTS ADDRESS PORTS AGE
-tb-broker-http-loadbalancer          *       34.111.24.134   80      7m25s
-```
\ No newline at end of file
+NAME CLASS HOSTS ADDRESS PORTS AGE
+tbmq-http-loadbalancer * 34.111.24.134 80 7m25s
+```
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/configure-license-key.md b/_includes/templates/mqtt-broker/install/cluster-common/configure-license-key.md
new file mode 100644
index 0000000000..58cbdf0b0d
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/cluster-common/configure-license-key.md
@@ -0,0 +1,25 @@
+{% if docsPrefix == "pe/" %}
+## Get the license key
+
+Before proceeding, make sure you’ve selected your subscription plan or chosen to purchase a perpetual license.
+If you haven’t done this yet, please visit the [Pricing page](/pricing/?section=tbmq-options){: target="_blank"} to compare available options
+and obtain your license key.
+
+> **Note:** Throughout this guide, we’ll refer to your license key as **YOUR_LICENSE_KEY_HERE**.
+
+## Configure the license key
+
+Create a k8s secret with your license key:
+
+```bash
+export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE
+kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY
+```
+{: .copy-code}
+
+{% capture replace_license_key %}
+Don’t forget to replace **YOUR_LICENSE_KEY_HERE** with the value of your license key.
+{% endcapture %}
+{% include templates/info-banner.md content=replace_license_key %}
+
+{% endif %}
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/configure-mqtt-load-balancer.md b/_includes/templates/mqtt-broker/install/cluster-common/configure-mqtt-load-balancer.md
index 7745fd8af0..27e3ce344b 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/configure-mqtt-load-balancer.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/configure-mqtt-load-balancer.md
@@ -1,6 +1,6 @@
Configure MQTT load balancer to be able to use MQTT protocol to connect devices.
-Create TCP load balancer using following command:
+Create a TCP load balancer using the following command:
```bash
kubectl apply -f receipts/mqtt-load-balancer.yml
@@ -26,11 +26,11 @@ kubectl create configmap tbmq-mqtts-config \
* where **YOUR_PEM_FILENAME** is the name of your **server certificate file**.
* where **YOUR_PEM_KEY_FILENAME** is the name of your **server certificate private key file**.
-Then, uncomment all sections in the ‘tb-broker.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.
+Then, uncomment all sections in the ‘tbmq.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.
Execute command to apply changes:
```bash
-kubectl apply -f tb-broker.yml
+kubectl apply -f tbmq.yml
```
-{: .copy-code}
\ No newline at end of file
+{: .copy-code}
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/k8s-type-upgrade-ce-to-pe.md b/_includes/templates/mqtt-broker/install/cluster-common/k8s-type-upgrade-ce-to-pe.md
new file mode 100644
index 0000000000..e6b69f2da4
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/cluster-common/k8s-type-upgrade-ce-to-pe.md
@@ -0,0 +1,14 @@
+### Upgrade from TBMQ CE to TBMQ PE (v2.2.0)
+
+To upgrade your existing **TBMQ Community Edition (CE)** to **TBMQ Professional Edition (PE)**, ensure you are running the latest **TBMQ CE {{site.release.broker_full_ver}}** version before starting the process.
+Merge your current configuration with the latest [TBMQ PE K8S scripts](https://github.com/thingsboard/tbmq-pe-k8s/tree/{{ site.release.broker_branch }}).
+Do not forget to [configure the license key](#configure-the-license-key).
+
+Run the following commands, including the upgrade script to migrate PostgreSQL database data from CE to PE:
+
+```bash
+./k8s-delete-tbmq.sh
+./k8s-upgrade-tbmq.sh --fromVersion=ce
+./k8s-deploy-tbmq.sh
+```
+{: .copy-code}
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/provision-kafka-new.md b/_includes/templates/mqtt-broker/install/cluster-common/provision-kafka-new.md
new file mode 100644
index 0000000000..f6be55ca2d
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/cluster-common/provision-kafka-new.md
@@ -0,0 +1,61 @@
+TBMQ requires a running Kafka cluster. You can set up Kafka in two ways:
+
+* **Deploy a self-managed Apache Kafka cluster**
+* **Deploy a managed Kafka cluster with the Strimzi Operator**
+
+Choose the option that best fits your environment and operational needs.
+
+### Option 1. Deploy an Apache Kafka Cluster
+
+* Runs as a **StatefulSet** with 3 pods in **KRaft dual-role mode** (each node acts as both controller and broker).
+* Suitable if you want a lightweight, self-managed Kafka setup.
+{% if docsPrefix == null %}
+* [See the full deployment guide here](https://github.com/thingsboard/tbmq/blob/{{site.release.broker_branch}}/k8s/{{deployment}}/kafka/README.md).
+{% else %}
+* [See the full deployment guide here](https://github.com/thingsboard/tbmq-pe-k8s/blob/{{site.release.broker_branch}}/{{deployment}}/kafka/README.md).
+{% endif %}
+
+**Quick steps:**
+
+```bash
+kubectl apply -f kafka/tbmq-kafka.yml
+```
+{: .copy-code}
+
+Update TBMQ configuration files (`tbmq.yml` and `tbmq-ie.yml`) and uncomment the section marked:
+
+```yaml
+# Uncomment the following lines to connect to Apache Kafka
+```
+
+### Option 2. Deploy a Kafka Cluster with the Strimzi Operator
+
+* Uses the **Strimzi Cluster Operator** for Kubernetes to manage Kafka.
+* Provides easier upgrades, scaling, and operational management.
+{% if docsPrefix == null %}
+* [See the full deployment guide here](https://github.com/thingsboard/tbmq/blob/{{site.release.broker_branch}}/k8s/{{deployment}}/kafka/operator/README.md).
+{% else %}
+* [See the full deployment guide here](https://github.com/thingsboard/tbmq-pe-k8s/blob/{{site.release.broker_branch}}/{{deployment}}/kafka/operator/README.md).
+{% endif %}
+
+**Quick steps:**
+
+Install the Strimzi operator:
+
+```bash
+helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0
+```
+{: .copy-code}
+
+Deploy the Kafka cluster:
+
+```bash
+kubectl apply -f kafka/operator/kafka-cluster.yaml
+```
+{: .copy-code}
+
+Update TBMQ configuration files (`tbmq.yml` and `tbmq-ie.yml`) and uncomment the section marked:
+
+```yaml
+# Uncomment the following lines to connect to Strimzi
+```
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/provision-redis-cluster.md b/_includes/templates/mqtt-broker/install/cluster-common/provision-redis-cluster.md
index d2cd6633dc..48fb3c8fb5 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/provision-redis-cluster.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/provision-redis-cluster.md
@@ -1,70 +1,64 @@
-We recommend deploying Bitnami Redis Cluster from Helm. For that, review the `redis` folder.
-
-```bash
-ls redis/
-```
-{: .copy-code}
-
-You can find there _default-values-redis.yml_ file -
-default values downloaded from [Bitnami artifactHub](https://artifacthub.io/packages/helm/bitnami/redis-cluster).
-And _values-redis.yml_ file with modified values.
-We recommend keeping the first file untouched and making changes to the second one only. This way the upgrade process to the next version will go more smoothly as it will be possible to see diff.
-
-To add the Bitnami helm repo:
-
-```bash
-helm repo add bitnami https://charts.bitnami.com/bitnami
-helm repo update
-```
-{: .copy-code}
-
-To install Bitnami Redis cluster, execute the following command:
-
-```bash
-helm install redis -f redis/values-redis.yml bitnami/redis-cluster --version 10.2.5
-```
-{: .copy-code}
-
-Once deployed, you should see the information about deployment state, followed by the command to get your REDIS_PASSWORD:
-
-```text
-NAME: redis
-LAST DEPLOYED: Tue Apr 8 11:22:44 2025
-NAMESPACE: thingsboard-mqtt-broker
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-CHART NAME: redis-cluster
-CHART VERSION: 10.2.5
-APP VERSION: 7.2.5** Please be patient while the chart is being deployed **
-
-
-To get your password run:
- export REDIS_PASSWORD=$(kubectl get secret --namespace "thingsboard-mqtt-broker" redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d)
-```
-
-Let's modify this command to print the password to the terminal:
-
-```bash
-echo $(kubectl get secret --namespace "thingsboard-mqtt-broker" redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d)
-```
-{: .copy-code}
-
-You need to copy the output and paste it into the _tb-broker-cache-configmap.yml_ file, replacing `YOUR_REDIS_PASSWORD`.
-
-```bash
-nano tb-broker-cache-configmap.yml
-```
-{: .copy-code}
-
-{% capture redis-nodes %}
-
-The value of `REDIS_NODES` in _tb-broker-cache-configmap.yml_ is set to `"redis-redis-cluster-headless:6379"` by default.
-The host name is based on the release name (redis) and the default naming conventions of the Bitnami chart.
-If you modify the `nameOverride` or `fullnameOverride` fields in your Redis values file, or change the release name during installation,
-you must update this value accordingly to match the actual headless service name created by the chart.
-
-{% endcapture %}
-{% include templates/info-banner.md content=redis-nodes %}
-
+TBMQ relies on **Valkey** to store messages for [DEVICE persistent clients](/docs/{{docsPrefix}}mqtt-broker/architecture/#persistent-device-client).
+The cache also improves performance by reducing the number of direct database reads, especially when authentication is enabled and multiple clients connect at once.
+Without caching, every new connection triggers a database query to validate MQTT client credentials, which can cause unnecessary load under high connection rates.
+
+To set up Valkey in Google Cloud, refer to the Google Memorystore for Valkey documentation:
+
+* **Create Memorystore for Valkey instances**:
+ Instructions to provision both **Cluster Mode Enabled** and **Cluster Mode Disabled** instances, including prerequisites like service connection policies and networking setup.
+ ([Google Cloud][1])
+
+* **General overview**:
+ Details on the managed Valkey service, architecture, and key concepts such as shards, endpoints, and supported Valkey versions (including 8.0).
+ ([Google Cloud][2])
+
+* **Networking requirements**:
+ Guidance on Private Service Connect and service connection policy setup necessary for secure connectivity.
+ ([Google Cloud][3])
+
+* **Instance & node sizing**:
+ Recommendations for choosing node types according to workload (e.g., `standard-small`, `highmem-medium`), memory capacity, and performance characteristics.
+ ([Google Cloud][4])
+
+* **Cluster vs Standalone (Cluster Mode Enabled vs Disabled)**:
+ Comparison of horizontal scaling, throughput, and feature support—helpful in choosing the appropriate mode for your use case.
+ ([Google Cloud][5])
+
+* **High Availability & Replicas**:
+ Best practices for multi-zone deployment, replica usage for read scaling, and resilience in production scenarios.
+ ([Google Cloud][6])
+
+* **Best practices & scaling guidance**:
+ Advice on memory management, eviction policies, when to scale, and how to handle growing workloads effectively.
+ ([Google Cloud][7])
+
+Once your Valkey cluster is ready, update the cache configuration in `tbmq-cache-configmap.yml` with the correct endpoint values:
+
+* **For standalone Valkey**:
+ Uncomment and set the following values. Make sure the `REDIS_HOST` value does **not** include the port (`:6379`).
+
+ ```yaml
+ REDIS_CONNECTION_TYPE: "standalone"
+ REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
+ #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
+ ```
+
+* **For Valkey cluster**:
+ Provide a comma-separated list of "host:port" node endpoints to bootstrap from.
+
+ ```yaml
+ REDIS_CONNECTION_TYPE: "cluster"
+ REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
+ #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
+ # Recommended in Kubernetes for handling dynamic IPs and failover:
+ #REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
+ #REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
+ ```
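+
+  For example, with three hypothetical node addresses, the list might look like:
+
+  ```yaml
+  REDIS_NODES: "10.0.0.11:6379,10.0.0.12:6379,10.0.0.13:6379"
+  ```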
+
+[1]: https://cloud.google.com/memorystore/docs/valkey/create-instances "Create instances | Memorystore for Valkey | Google Cloud"
+[2]: https://cloud.google.com/memorystore/docs/valkey/product-overview "Memorystore for Valkey overview - Google Cloud"
+[3]: https://cloud.google.com/memorystore/docs/valkey/networking "Networking | Memorystore for Valkey | Google Cloud"
+[4]: https://cloud.google.com/memorystore/docs/valkey/instance-node-specification "Instance and node specification | Memorystore for Valkey | Google Cloud"
+[5]: https://cloud.google.com/memorystore/docs/valkey/cluster-mode-enabled-and-disabled "Enable and disable cluster mode | Memorystore for Valkey - Google Cloud"
+[6]: https://cloud.google.com/memorystore/docs/valkey/ha-and-replicas "High availability and replicas | Memorystore for Valkey | Google Cloud"
+[7]: https://cloud.google.com/memorystore/docs/valkey/general-best-practices "Best practices for Memorystore for Valkey - Google Cloud"
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/starting.md b/_includes/templates/mqtt-broker/install/cluster-common/starting.md
index d6fdf326f1..dd677bbf10 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/starting.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/starting.md
@@ -12,4 +12,4 @@ kubectl get pods
```
{: .copy-code}
-If everything went fine, you should be able to see `tb-broker-0` and `tb-broker-1` pods. Every pod should be in the `READY` state.
\ No newline at end of file
+If everything went well, you should see the `tbmq-0` and `tbmq-1` pods. Every pod should be in the `READY` state.
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/troubleshooting.md b/_includes/templates/mqtt-broker/install/cluster-common/troubleshooting.md
index cc184b6c04..cc0830d586 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/troubleshooting.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/troubleshooting.md
@@ -1,14 +1,15 @@
-In case of any issues you can examine service logs for errors. For example to see TBMQ logs execute the following command:
+In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:
```bash
-kubectl logs -f tb-broker-0
+kubectl logs -f tbmq-0
```
{: .copy-code}
Use the next command to see the state of all statefulsets.
+
```bash
kubectl get statefulsets
```
{: .copy-code}
-See [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) command reference for more details.
\ No newline at end of file
+See [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) command reference for more details.
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/validate-mqtt-access.md b/_includes/templates/mqtt-broker/install/cluster-common/validate-mqtt-access.md
index 3f3f481240..06b4988775 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/validate-mqtt-access.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/validate-mqtt-access.md
@@ -1,4 +1,4 @@
-To connect to the cluster via MQTT you will need to get corresponding service IP. You can do this with the command:
+To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the command:
```bash
kubectl get services
@@ -8,8 +8,8 @@ kubectl get services
You should see the similar picture:
```text
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-tb-broker-mqtt-loadbalancer LoadBalancer 10.100.119.170 ******* 1883:30308/TCP,8883:31609/TCP 6m58s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+tbmq-mqtt-loadbalancer LoadBalancer 10.100.119.170 ******* 1883:30308/TCP,8883:31609/TCP 6m58s
```
-Use `EXTERNAL-IP` field of the load-balancer to connect to the cluster via MQTT protocol.
\ No newline at end of file
+Use the `EXTERNAL-IP` field of the load balancer to connect to the cluster via the MQTT protocol.
diff --git a/_includes/templates/mqtt-broker/install/cluster-common/validate-the-setup.md b/_includes/templates/mqtt-broker/install/cluster-common/validate-the-setup.md
index 61596d8aa8..6823d65f76 100644
--- a/_includes/templates/mqtt-broker/install/cluster-common/validate-the-setup.md
+++ b/_includes/templates/mqtt-broker/install/cluster-common/validate-the-setup.md
@@ -1,6 +1,6 @@
-Now you can open TBMQ web interface in your browser using DNS name of the load balancer.
+Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.
-You can get DNS name of the load-balancers using the next command:
+You can get the DNS name of the load-balancers using the next command:
```bash
kubectl get ingress
@@ -10,10 +10,10 @@ kubectl get ingress
You should see the similar picture:
```text
-NAME CLASS HOSTS ADDRESS PORTS AGE
-tb-broker-http-loadbalancer * 34.111.24.134 80 3d1h
+NAME CLASS HOSTS ADDRESS PORTS AGE
+tbmq-http-loadbalancer * 34.111.24.134 80 3d1h
```
-Use `ADDRESS` field of the tb-broker-http-loadbalancer to connect to the cluster.
+Use the `ADDRESS` field of the tbmq-http-loadbalancer to connect to the cluster.
{% include templates/mqtt-broker/login.md %}
diff --git a/_includes/templates/mqtt-broker/install/gcp/configure-https-load-balancer.md b/_includes/templates/mqtt-broker/install/gcp/configure-https-load-balancer.md
index 6d8af77aaa..3c60fd3e63 100644
--- a/_includes/templates/mqtt-broker/install/gcp/configure-https-load-balancer.md
+++ b/_includes/templates/mqtt-broker/install/gcp/configure-https-load-balancer.md
@@ -6,7 +6,7 @@ gcloud compute addresses create {{staticIP}} --global
```
{: .copy-code}
-Replace the *PUT_YOUR_DOMAIN_HERE* with valid domain name in the *https-load-balancer.yml* file:
+Replace the *PUT_YOUR_DOMAIN_HERE* with a valid domain name in the *https-load-balancer.yml* file:
```bash
nano receipts/https-load-balancer.yml
@@ -27,11 +27,11 @@ kubectl get ingress
```
{: .copy-code}
-Once provisioned, you should see similar output:
+Once provisioned, you should see a similar output:
```text
-NAME CLASS HOSTS ADDRESS PORTS AGE
-tb-broker-https-loadbalancer gce * 34.111.24.134 80 7m25s
+NAME CLASS HOSTS ADDRESS PORTS AGE
+tbmq-https-loadbalancer gce * 34.111.24.134 80 7m25s
```
Now, **assign the domain name** you have used to the load balancer IP address (the one you see instead of 34.111.24.134 in the command output).
@@ -75,4 +75,4 @@ kubectl describe managedcertificate managed-cert
```
{: .copy-code}
-Certificate will be eventually provisioned if you have configured domain records properly.
\ No newline at end of file
+The certificate will eventually be provisioned if you have configured the domain records properly.
diff --git a/_includes/templates/mqtt-broker/install/gcp/env-variables.md b/_includes/templates/mqtt-broker/install/gcp/env-variables.md
index e52dc6c976..4701c681ce 100644
--- a/_includes/templates/mqtt-broker/install/gcp/env-variables.md
+++ b/_includes/templates/mqtt-broker/install/gcp/env-variables.md
@@ -18,8 +18,8 @@ echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $
where:
-* first line uses gcloud command to fetch your current GCP project id. We will refer to it later in this guide using **$GCP_PROJECT**;
+* the first line uses the gcloud command to fetch your current GCP project ID. We will refer to it later in this guide using **$GCP_PROJECT**;
* *us-central1* is one of the available compute [regions](https://cloud.google.com/compute/docs/regions-zones#available). We will refer to it later in this guide using **$GCP_REGION**;
* *default* is a default GCP network name; We will refer to it later in this guide using **$GCP_NETWORK**;
* *{{tbClusterName}}* is the name of your cluster. You may input a different name. We will refer to it later in this guide using **$TB_CLUSTER_NAME**;
-* *{{tbDbClusterName}}* is the name of your database server. You may input a different name. We will refer to it later in this guide using **$TB_DATABASE_NAME**;
+* *{{tbDbClusterName}}* is the name of your database server. You may input a different name. We will refer to it later in this guide using **$TB_DATABASE_NAME**.
diff --git a/_includes/templates/mqtt-broker/install/gcp/gke-prerequisites.md b/_includes/templates/mqtt-broker/install/gcp/gke-prerequisites.md
new file mode 100644
index 0000000000..061dc35d4d
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/gcp/gke-prerequisites.md
@@ -0,0 +1,23 @@
+### Install and configure tools
+
+To deploy TBMQ {{tbmqSuffix}} on a GKE cluster, you'll need to install the
+[`kubectl`](https://kubernetes.io/docs/tasks/tools/) and [`gcloud`](https://cloud.google.com/sdk/downloads) tools.
+See the [before you begin](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) guide for more info.
+
+Create a new Google Cloud Platform project (recommended) or choose an existing one.
+
+Make sure you have selected the correct project by executing the following command:
+
+```bash
+gcloud init
+```
+{: .copy-code}
+
+### Enable GCP services
+
+Enable the GKE and SQL services for your project by executing the following command:
+
+```bash
+gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
+```
+{: .copy-code}
diff --git a/_includes/templates/mqtt-broker/install/gcp/install.md b/_includes/templates/mqtt-broker/install/gcp/install.md
index 14811c2cc9..2164d0dae1 100644
--- a/_includes/templates/mqtt-broker/install/gcp/install.md
+++ b/_includes/templates/mqtt-broker/install/gcp/install.md
@@ -1,11 +1,11 @@
-Execute the following command to run installation:
+Execute the following command to run the installation:
```bash
./k8s-install-tbmq.sh
```
{: .copy-code}
-After this command finishes, you should see the next line in the console:
+After this command finishes, you should see the following line in the console:
```
Installation finished successfully!
diff --git a/_includes/templates/mqtt-broker/install/gcp/provision-postgresql.md b/_includes/templates/mqtt-broker/install/gcp/provision-postgresql.md
new file mode 100644
index 0000000000..6d3d0de51a
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/gcp/provision-postgresql.md
@@ -0,0 +1,80 @@
+#### Prerequisites
+
+Enable service networking to allow your K8S cluster to connect to the DB instance:
+
+```bash
+gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
+
+gcloud compute addresses create google-managed-services-$GCP_NETWORK \
+--global \
+--purpose=VPC_PEERING \
+--prefix-length=16 \
+--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
+
+gcloud services vpc-peerings connect \
+--service=servicenetworking.googleapis.com \
+--ranges=google-managed-services-$GCP_NETWORK \
+--network=$GCP_NETWORK \
+--project=$GCP_PROJECT
+```
+{: .copy-code}
+
+#### Create database server instance
+
+Create the PostgreSQL instance with database version "**PostgreSQL 17**" and the following recommendations:
+
+* use the same region (**GCP_REGION**) where your K8S cluster is located;
+* use the same VPC network (**GCP_NETWORK**) that your K8S cluster is attached to;
+* use a private IP address to connect to your instance and disable the public IP address;
+* use a highly available DB instance for production and a single-zone instance for development clusters;
+* use at least 2 vCPUs and 7.5 GB RAM, which is sufficient for most workloads. You may scale it later if needed.
+
+Execute the following command:
+
+```bash
+gcloud beta sql instances create $TB_DATABASE_NAME \
+--database-version=POSTGRES_17 \
+--region=$GCP_REGION --availability-type=regional \
+--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
+--cpu=2 --memory=7680MB
+```
+{: .copy-code}
+
+Alternatively, you may follow [this](https://cloud.google.com/sql/docs/postgres/create-instance) guide to configure your database.
+
+Note the instance IP address (**YOUR_DB_IP_ADDRESS**) from the command output. A successful command output should look similar to this:
+
+```text
+Created [https://sqladmin.googleapis.com/sql/v1beta4/projects/YOUR_PROJECT_ID/instances/$TB_DATABASE_NAME].
+NAME DATABASE_VERSION LOCATION TIER PRIMARY_ADDRESS PRIVATE_ADDRESS STATUS
+$TB_DATABASE_NAME POSTGRES_17 us-central1-f db-custom-2-7680 35.192.189.68 - RUNNABLE
+```
+
+#### Set database password
+
+Set the password for your new database server instance:
+
+```bash
+gcloud sql users set-password postgres \
+--instance=$TB_DATABASE_NAME \
+--password=secret
+```
+{: .copy-code}
+
+where:
+
+* *instance* is the name of your database server instance;
+* *secret* is the password. You **should** input a different password. We will refer to it later in this guide using **YOUR_DB_PASSWORD**.
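Rather than typing a password, you can generate a random one on the fly; the snippet below is only a sketch, and any secret-management approach you already use is equally valid:

```shell
# Generate a 24-character alphanumeric password to use as YOUR_DB_PASSWORD.
YOUR_DB_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 24)
echo "${#YOUR_DB_PASSWORD}"
# → 24
```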
+
+#### Create the database
+
+Create "{{tbDbName}}" database inside your postgres database server instance:
+
+```bash
+gcloud sql databases create {{tbDbName}} --instance=$TB_DATABASE_NAME
+```
+{: .copy-code}
+
+where *{{tbDbName}}* is the name of your database. You may input a different name. We will refer to it later in this guide using **YOUR_DB_NAME**.
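The values collected in this section (**YOUR_DB_IP_ADDRESS** and **YOUR_DB_NAME**) typically end up combined into a JDBC datasource URL in the TBMQ configuration. A sketch of how they fit together, using illustrative values (PostgreSQL listens on port 5432 by default):

```shell
# Illustrative values collected in the previous steps.
YOUR_DB_IP_ADDRESS="10.0.0.5"
YOUR_DB_NAME="thingsboard_mqtt_broker"
# Compose the JDBC URL; 5432 is the default PostgreSQL port.
echo "jdbc:postgresql://${YOUR_DB_IP_ADDRESS}:5432/${YOUR_DB_NAME}"
# → jdbc:postgresql://10.0.0.5:5432/thingsboard_mqtt_broker
```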
diff --git a/_includes/templates/mqtt-broker/install/gcp/provision-redis-cluster.md b/_includes/templates/mqtt-broker/install/gcp/provision-redis-cluster.md
deleted file mode 100644
index 149e962c66..0000000000
--- a/_includes/templates/mqtt-broker/install/gcp/provision-redis-cluster.md
+++ /dev/null
@@ -1,128 +0,0 @@
-**WARNING:** If this template is included, we should update GKE prerequisites by adding `networkconnectivity.googleapis.com`
-
-You need to set up Google Cloud Memorystore for Redis Cluster. TBMQ uses cache to store messages for [DEVICE persistent clients](/docs/mqtt-broker/architecture/#persistent-device-client),
-to improve performance and avoid frequent DB reads (see below for more details).
-
-It is useful when clients connect to TBMQ with the authentication enabled.
-For every connection, the request is made to find MQTT client credentials that can authenticate the client.
-Thus, there could be an excessive amount of requests to be processed for a large number of connecting clients at once.
-
-To ensure reliability and durability across Redis restarts or failovers,
-we recommend using Redis Cluster with persistence enabled.
-
-Before creating a Redis Cluster, you must configure a Service Connection Policy (SCP) for your project, network, and region.
-This is required because Redis Cluster uses Private Service Connect (PSC) to enable VPC-level access to the managed Redis instances.
-Without this step, the cluster creation will fail with an error indicating that no service connection policy is associated with the project/network/region.
-
-To configure the SCP, follow this [guide](https://cloud.google.com/vpc/docs/configure-service-connection-policies).
-
-An alternative way to do this is by using `gcloud` tool:
-
-```bash
-gcloud network-connectivity service-connection-policies create redis-cluster-scp \
- --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
- --region=$GCP_REGION \
- --service-class=gcp-memorystore-redis \
- --subnets=projects/$GCP_PROJECT/regions/$GCP_REGION/subnetworks/$GCP_NETWORK
-```
-
-In order to set up the Redis cluster, follow this [guide](https://cloud.google.com/memorystore/docs/cluster/create-instances#create_an_instance).
-
-Another way to do this is by using `gcloud` tool:
-
-```bash
-gcloud redis clusters create $TB_REDIS_NAME \
- --region=$GCP_REGION \
- --shard-count=3 \
- --replica-count=1 \
- --persistence-mode=RDB \
- --rdb-snapshot-period=12h \
- --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
-```
-{: .copy-code}
-
-`gcloud redis instances create` has many options. A few important parameters are:
-
-* **region** - location of your Redis instance (e.g., us-central1);
-* **shard-count** - number of Redis shards (nodes). Minimum is 1, recommended 3+ for large workloads.
-* **replica-count** - number of replicas per shard (HA).
-* **redis-version** - recommended to use `redis_7_2`;
-* **persistence-mode** - enables disk-based snapshotting for durability.
-* **rdb-snapshot-period** - configures automatic snapshots every 12 hours.
-* **network** - VPC network name (must match the GCP project network).
-
-To see the full list of parameters, check the CLI [reference](https://cloud.google.com/sdk/gcloud/reference/redis/clusters/create).
-
-Example of response:
-
-```text
-Create request issued for: [tbmq-redis]
-Waiting for operation [projects/$GCP_PROJECT/locations/europe-west6/operations/operation-1744037679309-632316a59a3f8-36d8b3df-07092f17] to complete...done.
-Created cluster [tbmq-redis].
-authorizationMode: AUTH_MODE_DISABLED
-automatedBackupConfig:
- automatedBackupMode: DISABLED
-clusterEndpoints:
-- connections:
- - pscAutoConnection:
- address: 10.172.0.6
- connectionType: CONNECTION_TYPE_DISCOVERY
- forwardingRule: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT/regions/europe-west6/forwardingRules/sca-auto-fr-5f22b780-399f-4572-840e-52999ae09e2b
- network: projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
- projectId: $GCP_PROJECT
- pscConnectionId: '19404658127405062'
- pscConnectionStatus: PSC_CONNECTION_STATUS_ACTIVE
- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa
- - pscAutoConnection:
- address: 10.172.0.7
- forwardingRule: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT/regions/europe-west6/forwardingRules/sca-auto-fr-60653003-26b2-44bc-b270-da4246bef8c0
- network: projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
- projectId: $GCP_PROJECT
- pscConnectionId: '19404658127405063'
- pscConnectionStatus: PSC_CONNECTION_STATUS_ACTIVE
- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa-2
-createTime: '2025-04-07T14:54:39.351678778Z'
-deletionProtectionEnabled: false
-discoveryEndpoints:
-- address: 10.172.0.6
- port: 6379
- pscConfig:
- network: projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
-encryptionInfo:
- encryptionType: GOOGLE_DEFAULT_ENCRYPTION
-name: projects/$GCP_PROJECT/locations/europe-west6/clusters/tbmq-redis
-nodeType: REDIS_HIGHMEM_MEDIUM
-persistenceConfig:
- mode: RDB
- rdbConfig:
- rdbSnapshotPeriod: TWELVE_HOURS
- rdbSnapshotStartTime: '2025-04-07T14:54:39.320409249Z'
-preciseSizeGb: 39.0
-pscConnections:
-- address: 10.172.0.6
- forwardingRule: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT/regions/europe-west6/forwardingRules/sca-auto-fr-5f22b780-399f-4572-840e-52999ae09e2b
- network: projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
- projectId: $GCP_PROJECT
- pscConnectionId: '19404658127405062'
- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa
-- address: 10.172.0.7
- forwardingRule: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT/regions/europe-west6/forwardingRules/sca-auto-fr-60653003-26b2-44bc-b270-da4246bef8c0
- network: projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
- projectId: $GCP_PROJECT
- pscConnectionId: '19404658127405063'
- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa-2
-pscServiceAttachments:
-- connectionType: CONNECTION_TYPE_DISCOVERY
- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa
-- serviceAttachment: projects/430913155293/regions/europe-west6/serviceAttachments/gcp-memorystore-auto-cb3ac4ed-c840-4e-psc-sa-2
-replicaCount: 1
-shardCount: 3
-sizeGb: 39
-state: ACTIVE
-transitEncryptionMode: TRANSIT_ENCRYPTION_MODE_DISABLED
-uid: cb3ac4ed-c840-4edd-8eb4-1d2a31f758ba
-zoneDistributionConfig:
- mode: MULTI_ZONE
-```
-
-We need to take `discoveryEndpoints.address` parameter value and replace `YOUR_REDIS_ENDPOINT_URL_WITHOUT_PORT` in the file _tb-broker-cache-configmap.yml_.
\ No newline at end of file
diff --git a/_includes/templates/mqtt-broker/install/gcp/regional-gke-cluster.md b/_includes/templates/mqtt-broker/install/gcp/regional-gke-cluster.md
new file mode 100644
index 0000000000..e307894abb
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/gcp/regional-gke-cluster.md
@@ -0,0 +1,20 @@
+Create a regional cluster distributed across 3 zones with nodes of your preferred machine type.
+The example below provisions one **e2-standard-4** node per zone **(three nodes total)**, but you can modify the `--machine-type` and `--num-nodes` values to suit your workload requirements.
+For a full list of available machine types and their specifications, refer to the [GCP machine types documentation](https://cloud.google.com/compute/docs/machine-resource).
+
+Execute the following command (recommended):
+
+```bash
+gcloud container clusters create $TB_CLUSTER_NAME \
+--release-channel stable \
+--region $GCP_REGION \
+--network=$GCP_NETWORK \
+--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
+--enable-ip-alias \
+--num-nodes=1 \
+--node-labels=role=main \
+--machine-type=e2-standard-4
+```
+{: .copy-code}
+
+Alternatively, you may use [this](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster) guide for custom cluster setup.
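Keep in mind that `--num-nodes` is applied per zone, so the total node count of the regional cluster is `--num-nodes` multiplied by the number of zones. A quick sanity check of the example above:

```shell
# --num-nodes is per zone: 1 node x 3 zones = 3 nodes in total.
NUM_NODES_PER_ZONE=1
ZONE_COUNT=3
echo $((NUM_NODES_PER_ZONE * ZONE_COUNT))
# → 3
```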
diff --git a/_includes/templates/mqtt-broker/install/gcp/update-kubectl-region.md b/_includes/templates/mqtt-broker/install/gcp/update-kubectl-region.md
new file mode 100644
index 0000000000..09faea57a0
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/gcp/update-kubectl-region.md
@@ -0,0 +1,7 @@
+
+Update the kubectl context using the following command:
+
+```bash
+gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
+```
+{: .copy-code}
diff --git a/_includes/templates/mqtt-broker/install/helm/aws/configure-deployment.md b/_includes/templates/mqtt-broker/install/helm/aws/configure-deployment.md
index 7d304cbaa3..9efea8e73e 100644
--- a/_includes/templates/mqtt-broker/install/helm/aws/configure-deployment.md
+++ b/_includes/templates/mqtt-broker/install/helm/aws/configure-deployment.md
@@ -95,49 +95,7 @@ eksctl create cluster -f cluster.yml
### Create GP3 storage class and make it default
-When provisioning persistent storage in Amazon EKS, the `gp3` volume type is the modern, recommended default. It offers superior performance, cost-efficiency, and flexibility compared to `gp2`.
-
-Please download the storage class configuration file:
-
-```bash
-curl -o gp3-def-sc.yml https://raw.githubusercontent.com/thingsboard/tbmq/{{ site.release.broker_branch }}/k8s/helm/aws/gp3-def-sc.yml
-```
-{: .copy-code}
-
-Apply the configuration:
-
-```bash
-kubectl apply -f gp3-def-sc.yml
-```
-{: .copy-code}
-
-If a `gp2` StorageClass exists, it may conflict with `gp3`. You can either make `gp2` storage class non-default:
-
-```bash
-kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
-```
-{: .copy-code}
-
-Or delete the `gp2` StorageClass (if unused):
-
-```bash
-kubectl delete storageclass gp2
-```
-{: .copy-code}
-
-Check the `gp3` storage class available and marked as default:
-
-```bash
-kubectl get sc
-```
-{: .copy-code}
-
-You should see similar output:
-
-```text
-NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
-gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 30s
-```
+{% include templates/mqtt-broker/install/aws/gp3-sc.md %}
### Attach Policy
diff --git a/_includes/templates/mqtt-broker/install/linux-macos/linux-macos-install.md b/_includes/templates/mqtt-broker/install/linux-macos/linux-macos-install.md
new file mode 100644
index 0000000000..dec053564b
--- /dev/null
+++ b/_includes/templates/mqtt-broker/install/linux-macos/linux-macos-install.md
@@ -0,0 +1,17 @@
+{% if docsPrefix == null %}
+
+```shell
+wget https://raw.githubusercontent.com/thingsboard/tbmq/{{ site.release.broker_branch }}/msa/tbmq/configs/tbmq-install-and-run.sh &&
+sudo chmod +x tbmq-install-and-run.sh && ./tbmq-install-and-run.sh
+```
+{: .copy-code}
+
+{% else %}
+
+```shell
+wget https://raw.githubusercontent.com/thingsboard/tbmq-pe-docker-compose/{{ site.release.broker_branch }}/basic/tbmq-install-and-run.sh &&
+sudo chmod +x tbmq-install-and-run.sh && ./tbmq-install-and-run.sh
+```
+{: .copy-code}
+
+{% endif %}
diff --git a/_includes/templates/mqtt-broker/install/linux-macos/linux-macos.md b/_includes/templates/mqtt-broker/install/linux-macos/linux-macos.md
index b23ba3370c..22ea97bcb9 100644
--- a/_includes/templates/mqtt-broker/install/linux-macos/linux-macos.md
+++ b/_includes/templates/mqtt-broker/install/linux-macos/linux-macos.md
@@ -1,7 +1,3 @@
For Linux or macOS users who have Docker installed, the execution of the following commands is recommended:
-```shell
-wget https://raw.githubusercontent.com/thingsboard/tbmq/{{ site.release.broker_branch }}/msa/tbmq/configs/tbmq-install-and-run.sh &&
-sudo chmod +x tbmq-install-and-run.sh && ./tbmq-install-and-run.sh
-```
-{: .copy-code}
+{% include templates/mqtt-broker/install/linux-macos/linux-macos-install.md %}
diff --git a/_includes/templates/mqtt-broker/install/ssl/mqtts.md b/_includes/templates/mqtt-broker/install/ssl/mqtts.md
index 88fd6ff2c5..95bf0d0c4a 100644
--- a/_includes/templates/mqtt-broker/install/ssl/mqtts.md
+++ b/_includes/templates/mqtt-broker/install/ssl/mqtts.md
@@ -2,7 +2,7 @@
To enable **MQTT over SSL/TLS (MQTTS)** in TBMQ, you need to provide valid SSL certificates and configure TBMQ to use them.
-For details on supported formats and configuration options, see the [MQTT over SSL](/docs/mqtt-broker/security/mqtts/) guide.
+For details on supported formats and configuration options, see the [MQTT over SSL](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/) guide.
**Prepare SSL Certificates**
diff --git a/_includes/templates/mqtt-broker/install/windows/windows-install.md b/_includes/templates/mqtt-broker/install/windows/windows-install.md
index b96ac1bf02..f4de09b32a 100644
--- a/_includes/templates/mqtt-broker/install/windows/windows-install.md
+++ b/_includes/templates/mqtt-broker/install/windows/windows-install.md
@@ -20,8 +20,20 @@ Set-ExecutionPolicy Unrestricted
* **Install TBMQ**
+{% if docsPrefix == null %}
+
```bash
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/thingsboard/tbmq/{{ site.release.broker_branch }}/msa/tbmq/configs/windows/tbmq-install-and-run.ps1" `
-OutFile ".\tbmq-install-and-run.ps1"; .\tbmq-install-and-run.ps1
```
{: .copy-code}
+
+{% else %}
+
+```bash
+Invoke-WebRequest -Uri "https://raw.githubusercontent.com/thingsboard/tbmq-pe-docker-compose/{{ site.release.broker_branch }}/basic/windows/tbmq-install-and-run.ps1" `
+-OutFile ".\tbmq-install-and-run.ps1"; .\tbmq-install-and-run.ps1
+```
+{: .copy-code}
+
+{% endif %}
diff --git a/_includes/templates/mqtt-broker/pe-tbmq-explore-banner.md b/_includes/templates/mqtt-broker/pe-tbmq-explore-banner.md
new file mode 100644
index 0000000000..827f271500
--- /dev/null
+++ b/_includes/templates/mqtt-broker/pe-tbmq-explore-banner.md
@@ -0,0 +1,6 @@
+{% if docsPrefix != 'pe/' %}
+{% capture difference %}
+Interested in the **TBMQ Professional Edition**? Explore the TBMQ PE documentation [here](/docs/pe/mqtt-broker/){:target="_blank"}.
+{% endcapture %}
+{% include templates/info-banner.md content=difference %}
+{% endif %}
diff --git a/_includes/templates/mqtt-broker/pe-tbmq-feature-banner.md b/_includes/templates/mqtt-broker/pe-tbmq-feature-banner.md
new file mode 100644
index 0000000000..efffcdfd27
--- /dev/null
+++ b/_includes/templates/mqtt-broker/pe-tbmq-feature-banner.md
@@ -0,0 +1,4 @@
+{% capture peFeatureContent %}
+Only the **TBMQ Professional Edition** supports the **{{ feature }}** feature.
+{% endcapture %}
+{% include templates/info-banner.md title="TBMQ PE feature" content=peFeatureContent %}
\ No newline at end of file
diff --git a/_includes/templates/mqtt-broker/security/user-password.md b/_includes/templates/mqtt-broker/security/user-password.md
new file mode 100644
index 0000000000..d91d370fe0
--- /dev/null
+++ b/_includes/templates/mqtt-broker/security/user-password.md
@@ -0,0 +1,4 @@
+If a user signs in using [OAuth 2.0](/docs/pe/mqtt-broker/security/oauth-2-support/#basic-mapper) while the **Allow user creation** option is enabled, the user account is automatically created in the system.
+These users are created without a password set and can continue using Single Sign-On (SSO) to log in.
+If the user wants to log in using a regular username/password flow, they should go to **Account → Security** and [set a new password](/docs/pe/mqtt-broker/user-guide/ui/settings/#change-password).
+When changing the password, leave the **Current password** field empty.
diff --git a/_includes/templates/mqtt-broker/ssl/tbmq-certificates-chain.md b/_includes/templates/mqtt-broker/ssl/tbmq-certificates-chain.md
index 133ea31dbf..c06b924147 100644
--- a/_includes/templates/mqtt-broker/ssl/tbmq-certificates-chain.md
+++ b/_includes/templates/mqtt-broker/ssl/tbmq-certificates-chain.md
@@ -1,7 +1,7 @@
#### Step 1. Prepare your server and client certificate chain
-Follow the [MQTT over SSL](/docs/mqtt-broker/security/mqtts/) guide to provision server certificate if you are hosting your own TBMQ instance.
+Follow the [MQTT over SSL](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/) guide to provision a server certificate if you are hosting your own TBMQ instance.
Once provisioned, you should prepare a CA root certificate in pem format. This certificate will be used by MQTT clients to validate the server certificate.
Save the CA root certificate to your working directory as "**ca.pem**".
@@ -193,7 +193,7 @@ If the certificate is issued by a well-known public CA, it is already trusted by
If both TBMQ and the clients use certificates issued by the same CA, no additional configuration is required.
If it is another private or internal CA, you must add the CA certificate (`rootCert.pem`) to the Java truststore used by TBMQ.
-Run the [following command](/docs/mqtt-broker/security/mqtts/#adding-certificate-into-java-truststore) to import the CA certificate into the truststore.
+Run the [following command](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/#adding-certificate-into-java-truststore) to import the CA certificate into the truststore.
#### Step 5. Test the connection
@@ -205,7 +205,7 @@ mosquitto_pub --cafile ca.pem -d -q 1 -h "YOUR_TBMQ_HOST" -p "8883" \
```
{: .copy-code}
-Similar command for the [self-signed](/docs/mqtt-broker/security/mqtts/#self-signed-certificates-generation) server certificate:
+Similar command for the [self-signed](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/#self-signed-certificates-generation) server certificate:
```bash
mosquitto_pub --insecure --cafile server.pem -d -q 1 -h "YOUR_TBMQ_HOST" -p "8883" \
diff --git a/_includes/templates/mqtt-broker/ssl/tbmq-certificates-leaf.md b/_includes/templates/mqtt-broker/ssl/tbmq-certificates-leaf.md
index 09479846db..c6e8562bcf 100644
--- a/_includes/templates/mqtt-broker/ssl/tbmq-certificates-leaf.md
+++ b/_includes/templates/mqtt-broker/ssl/tbmq-certificates-leaf.md
@@ -1,7 +1,7 @@
#### Step 1. Prepare your server and client certificate
-Follow the [MQTT over SSL](/docs/mqtt-broker/security/mqtts/) guide to provision server certificate if you are hosting your own TBMQ instance.
+Follow the [MQTT over SSL](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/) guide to provision a server certificate if you are hosting your own TBMQ instance.
Once provisioned, you should prepare a CA root certificate in pem format. This certificate will be used by MQTT clients to validate the server certificate.
Save the CA root certificate to your working directory as "**ca.pem**".
@@ -40,7 +40,7 @@ For the MQTT client to establish a secure TLS connection, its certificate must b
If the certificate is signed by a well-known public CA, it is already trusted by default.
If it is a self-signed, import the client certificate (`cert.pem`) into the Java truststore used by TBMQ.
-Run the [following command](/docs/mqtt-broker/security/mqtts/#adding-certificate-into-java-truststore) to import the certificate into the truststore.
+Run the [following command](/docs/{{docsPrefix}}mqtt-broker/security/mqtts/#adding-certificate-into-java-truststore) to import the certificate into the truststore.
#### Step 5. Test the connection
diff --git a/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/docker-compose-view-logs.md b/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/docker-compose-view-logs.md
index 878a6370f8..0c6211f318 100644
--- a/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/docker-compose-view-logs.md
+++ b/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/docker-compose-view-logs.md
@@ -1,14 +1,14 @@
View last logs in runtime:
```bash
-docker compose logs -f tb-mqtt-broker-1 tb-mqtt-broker-2
+docker compose logs -f tbmq-1 tbmq-2
```
{: .copy-code}
{% capture dockerComposeStandalone %}
If you still rely on Docker Compose as docker-compose (with a hyphen) execute next command:
-**docker-compose logs -f tb-mqtt-broker-1 tb-mqtt-broker-2**
+**docker-compose logs -f tbmq-1 tbmq-2**
{% endcapture %}
{% include templates/info-banner.md content=dockerComposeStandalone %}
@@ -16,32 +16,32 @@ You can use **grep** command to show only the output with desired string in it.
For example, you can use the following command in order to check if there are any errors on the backend side:
```bash
-docker compose logs tb-mqtt-broker-1 tb-mqtt-broker-2 | grep ERROR
+docker compose logs tbmq-1 tbmq-2 | grep ERROR
```
{: .copy-code}
{% capture dockerComposeStandalone %}
If you still rely on Docker Compose as docker-compose (with a hyphen) execute next command:
-**docker-compose logs tb-mqtt-broker-1 tb-mqtt-broker-2 \| grep ERROR**
+**docker-compose logs tbmq-1 tbmq-2 \| grep ERROR**
{% endcapture %}
{% include templates/info-banner.md content=dockerComposeStandalone %}
**Tip:** you can redirect logs to file and then analyze with any text editor:
```bash
-docker compose logs -f tb-mqtt-broker-1 tb-mqtt-broker-2 > tb-mqtt-broker.log
+docker compose logs -f tbmq-1 tbmq-2 > tbmq.log
```
{: .copy-code}
{% capture dockerComposeStandalone %}
If you still rely on Docker Compose as docker-compose (with a hyphen) execute next command:
-**docker-compose logs -f tb-mqtt-broker-1 tb-mqtt-broker-2 > tb-mqtt-broker.log**
+**docker-compose logs -f tbmq-1 tbmq-2 > tbmq.log**
{% endcapture %}
{% include templates/info-banner.md content=dockerComposeStandalone %}
-**Note:** you can always log into TBMQ container and view logs there:
+**Note:** you can always log into the TBMQ container and view logs there:
```bash
docker ps
diff --git a/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/kubernetes-view-logs.md b/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/kubernetes-view-logs.md
index aff806dfc7..2b1ecd3beb 100644
--- a/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/kubernetes-view-logs.md
+++ b/_includes/templates/mqtt-broker/troubleshooting/logs/view-logs/kubernetes-view-logs.md
@@ -35,7 +35,7 @@ kubectl logs -f tb-broker-1 > tb-broker-1.log
```
{: .copy-code}
-**Note:** you can always log into TBMQ container and view logs there:
+**Note:** you can always log into the TBMQ container and view logs there:
```bash
kubectl exec -it tb-broker-0 -- bash
diff --git a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-aws-cluster.md b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-aws-cluster.md
index fbdf807c13..8d49d4223c 100644
--- a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-aws-cluster.md
+++ b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-aws-cluster.md
@@ -2,7 +2,7 @@ TBMQ v2.1.0 introduces enhancements, including a new Integration Executor micros
#### Add Integration Executor microservice
-This release adds support for external integrations via the new [Integration Executor](/docs/mqtt-broker/integrations/) microservice.
+This release adds support for external integrations via the new [Integration Executor](/docs/{{docsPrefix}}mqtt-broker/integrations/) microservice.
To retrieve the latest configuration files, including those for Integration Executors, pull the updates from the release branch.
Follow the steps outlined in the [run upgrade instructions](#run-upgrade) up to the execution of the upgrade script (do not execute **.sh** commands yet).
diff --git a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-cluster.md b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-cluster.md
index 7b93d549fa..9da6618bf2 100644
--- a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-cluster.md
+++ b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-cluster.md
@@ -2,7 +2,7 @@ TBMQ v2.1.0 introduces enhancements, including a new Integration Executor micros
#### Add Integration Executor microservice
-This release adds support for external integrations via the new [Integration Executor](/docs/mqtt-broker/integrations/) microservice.
+This release adds support for external integrations via the new [Integration Executor](/docs/{{docsPrefix}}mqtt-broker/integrations/) microservice.
To retrieve the latest configuration files, including those for Integration Executors, pull the updates from the release branch.
Follow the steps outlined in the [run upgrade instructions](#run-upgrade) up to the execution of the upgrade script (do not execute **.sh** commands yet).
diff --git a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-docker-cluster.md b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-docker-cluster.md
index 8f878779ed..de6e391bb2 100644
--- a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-docker-cluster.md
+++ b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release-docker-cluster.md
@@ -2,7 +2,7 @@ TBMQ v2.1.0 introduces enhancements, including a new Integration Executor micros
#### Add Integration Executor microservice
-This release adds support for external integrations via the new [Integration Executor](/docs/mqtt-broker/integrations/) microservice.
+This release adds support for external integrations via the new [Integration Executor](/docs/{{docsPrefix}}mqtt-broker/integrations/) microservice.
For the complete updated `docker-compose.yml`, see the [official example here](https://github.com/thingsboard/tbmq/blob/release-2.1.0/docker/docker-compose.yml).
diff --git a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release.md b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release.md
index c5ea7eb831..8ce1c1ad4f 100644
--- a/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release.md
+++ b/_includes/templates/mqtt-broker/upgrade/update-to-2.1.0-release.md
@@ -2,7 +2,7 @@ TBMQ v2.1.0 introduces enhancements, including a new Integration Executor micros
#### Add Integration Executor microservice
-This release adds support for external integrations via the new [Integration Executor](/docs/mqtt-broker/integrations/) microservice.
+This release adds support for external integrations via the new [Integration Executor](/docs/{{docsPrefix}}mqtt-broker/integrations/) microservice.
diff --git a/_includes/templates/mqtt-broker/upgrade/upgrading.md b/_includes/templates/mqtt-broker/upgrade/upgrading.md
index f72f39c2b8..5a17324ebb 100644
--- a/_includes/templates/mqtt-broker/upgrade/upgrading.md
+++ b/_includes/templates/mqtt-broker/upgrade/upgrading.md
@@ -1,7 +1,9 @@
-Review the [release notes](/docs/mqtt-broker/releases/) and [upgrade instruction](/docs/mqtt-broker/install/upgrade-instructions/)
+Review the [release notes](/docs/{{docsPrefix}}mqtt-broker/releases/) and [upgrade instructions](/docs/{{docsPrefix}}mqtt-broker/install/upgrade-instructions/)
for detailed information on the latest changes.
+{% if docsPrefix == null %}
If there are no **Upgrade to x.x.x** notes for your version, you can proceed directly with the general [upgrade instructions](#run-upgrade).
+{% endif %}
If the documentation does not cover the specific upgrade instructions for your case,
-please [contact us](/docs/mqtt-broker/help/) so we can provide further guidance.
+please [contact us](/docs/{{docsPrefix}}mqtt-broker/help/) so we can provide further guidance.
diff --git a/_layouts/docwithnav-pe-mqtt-broker.html b/_layouts/docwithnav-pe-mqtt-broker.html
new file mode 100644
index 0000000000..2b1fbe11b9
--- /dev/null
+++ b/_layouts/docwithnav-pe-mqtt-broker.html
@@ -0,0 +1 @@
+{% include docwithnav.html docsTag="mqtt-broker-pe" cssTag="docs" %}
\ No newline at end of file
diff --git a/_layouts/mqtt-broker.html b/_layouts/mqtt-broker.html
index 6c1e2136ac..a62f902f1f 100644
--- a/_layouts/mqtt-broker.html
+++ b/_layouts/mqtt-broker.html
@@ -6,8 +6,7 @@