1. In **Connectivity Method**, select **Public**, and fill in your Kafka broker endpoints. You can use commas `,` to separate multiple endpoints (see the reachability sketch after these steps).
2. Select an **Authentication** option according to your Kafka authentication configuration.
    - If your Kafka does not require authentication, keep the default option **Disable**.
    - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.

3. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
4. Select a **Compression** type for the data in this changefeed.
5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
6. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
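If the connection test in the last step fails, a common first check is whether each broker endpoint is reachable over the public network. The following is a minimal reachability sketch, assuming Python is available on a machine with public network access; the endpoint values are placeholders for your own brokers.

```python
import socket

# Placeholder endpoints -- replace them with your own comma-separated broker list.
endpoints = "b1.kafka.example.com:9092,b2.kafka.example.com:9092"

for endpoint in endpoints.split(","):
    host, port = endpoint.rsplit(":", 1)
    try:
        # Open a plain TCP connection to check that the broker is reachable.
        with socket.create_connection((host, int(port)), timeout=5):
            print(f"{endpoint}: reachable")
    except OSError as err:
        print(f"{endpoint}: unreachable ({err})")
```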
1. In **Connectivity Method**, select **Private Link**.
2. In **Private Link Connection**, select the private link connection that you created in the [Network](#network) section. Make sure the AZs of the private link connection match the AZs of the Kafka deployment.
3. Fill in the **Bootstrap Port** that you obtained from the [Network](#network) section.
4. Select an **Authentication** option according to your Kafka authentication configuration.
    - If your Kafka does not require authentication, keep the default option **Disable**.
    - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.

5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
8. Enter the **TLS Server Name** if your Kafka requires TLS SNI verification (for example, Confluent Cloud Dedicated clusters). A quick way to check the expected name is shown in the sketch after these steps.
9. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
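If you are unsure whether your Kafka requires SNI verification, or which server name it expects, you can probe the TLS handshake from a machine that can reach the broker (for a private link connection, a host inside the connected VPC). This is a minimal sketch assuming Python; the endpoint and server name are placeholders.

```python
import socket
import ssl

# Placeholder values -- replace them with your bootstrap endpoint and TLS Server Name.
host, port = "your-private-endpoint.example.com", 9092
server_name = "your-kafka.example.com"  # the SNI value presented during the handshake

context = ssl.create_default_context()
with socket.create_connection((host, port), timeout=5) as sock:
    # The handshake fails if the broker rejects the presented SNI value.
    with context.wrap_socket(sock, server_hostname=server_name) as tls_sock:
        print("TLS handshake succeeded")
        print("Peer certificate subject:", tls_sock.getpeercert()["subject"])
```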
## Step 3. Set the changefeed
1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).

    - **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
    - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
    - **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
    - **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
    - **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
2. Customize **Event Filter** to filter the events that you want to replicate.
    - **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
    - **Event Filter**: you can choose the events that you want to ignore.
3. Customize **Column Selector** to select columns from events and send only the data changes related to those columns to the downstream.
4. In the **Data Format** area, select your desired format of Kafka messages.

    - Avro is a compact, fast, and binary data format with rich data structures, which is widely used in various flow systems. For more information, see [Avro data format](https://docs.pingcap.com/tidb/stable/ticdc-avro-protocol).
    - Canal-JSON is a plain JSON text format, which is easy to parse. For more information, see [Canal-JSON data format](https://docs.pingcap.com/tidb/stable/ticdc-canal-json).
    - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
    - Debezium is a tool for capturing database changes. It converts each captured database change into a message called an "event" and sends these events to Kafka. For more information, see [Debezium data format](https://docs.pingcap.com/tidb/stable/ticdc-debezium).

5. Enable the **TiDB Extension** option if you want to add TiDB-extension fields to the Kafka message body.
6. If you select **Avro** as your data format, you will see some Avro-specific configurations on the page. You can fill in these configurations as follows:
    - In the **Decimal** and **Unsigned BigInt** configurations, specify how TiDB Cloud handles the decimal and unsigned bigint data types in Kafka messages.
    - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the user name and password fields are displayed for you to fill in.
7. In the **Topic Distribution** area, select a distribution mode, and then fill in the topic name configurations according to the mode.
8. In the **Partition Distribution** area, select one of the following distribution modes, which determines the Kafka partition that each changelog is sent to.
    - **Distribute changelogs by primary key or index value to Kafka partition**

        If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The primary key or index value of a row changelog will determine which partition the changelog is sent to. Keep the **Index Name** field empty if you want to use the primary key. This distribution method provides a better partition balance and ensures row-level orderliness, as illustrated in the sketch after this list.

    - **Distribute changelogs by table to Kafka partition**

        If you want the changefeed to send Kafka messages of a table to one Kafka partition, choose this distribution method. The table name of a row changelog will determine which partition the changelog is sent to. This distribution method ensures table orderliness but might cause unbalanced partitions.

    - **Distribute changelogs by column value to Kafka partition**

        If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is sent to the same partition.

9. In the **Topic Configuration** area, configure the following numbers. The changefeed will automatically create the Kafka topics according to the numbers.
11. Click **Next**.
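To make the key-based distribution mode concrete, the following sketch shows how hash-based partitioning behaves conceptually. It is written in Python under the assumption of a fixed partition count and is not TiCDC's actual dispatcher implementation; the point is only that changelogs with the same primary key or index value always map to the same partition, which preserves row-level order while spreading different keys across partitions.

```python
import hashlib

NUM_PARTITIONS = 6  # assumed partition count for the topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Hash the primary key (or index value) and map it onto a partition.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# Changelogs for the same row (same key) always land in the same partition,
# so consumers see that row's changes in order.
print(partition_for("order_id=1001") == partition_for("order_id=1001"))  # True
print(partition_for("order_id=1002"))  # a different key may map to another partition
```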
## Step 4. Review and create your changefeed
1. In the **Changefeed Name** area, specify a name for the changefeed.
2. Review all the changefeed configurations that you set. If you want to modify some configurations, click **Previous** to go back to the previous configuration pages. If all configurations are correct, click **Submit** to create the changefeed. After the changefeed is created, you can verify the data flow as shown in the sketch below.
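After the changefeed reaches the **Running** state, you can verify end to end that changelogs are flowing by consuming a few messages from one of the generated topics. The following is a minimal sketch that assumes the third-party `kafka-python` package and public access to the brokers; the topic name and broker endpoint are placeholders.

```python
# Requires: pip install kafka-python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "your-changefeed-topic",                        # placeholder topic name
    bootstrap_servers="b1.kafka.example.com:9092",  # placeholder broker endpoint
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,  # stop iterating if no message arrives for 10 seconds
)

for message in consumer:
    # Each message value is a changelog encoded in the data format you selected.
    print(message.topic, message.partition, message.offset, message.value[:200])
```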
From 90c8a4f904002ee0dc4ead1264f999b6c78517d4 Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Fri, 26 Dec 2025 15:44:13 +0800
Subject: [PATCH 06/28] Apply suggestions from code review
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---
TOC-tidb-cloud-essential.md | 4 ++--
tidb-cloud/essential-changefeed-overview.md | 4 ++--
.../essential-changefeed-sink-to-kafka.md | 18 +++++++++---------
.../essential-changefeed-sink-to-mysql.md | 16 ++++++++--------
4 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index 7cc0d54fc2051..93a9d7893325e 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -234,8 +234,8 @@
- [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
- Stream Data
- [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
- - [To MySQL Sink](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
- - [To Kafka Sink](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
+ - [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
+ - [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- Vector Search 
- [Overview](/vector-search/vector-search-overview.md)
- Get Started
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 912186477d584..9ed3b05fc242c 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -58,7 +58,7 @@ ticloud serverless changefeed get -c
--changefeed-id
-1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB ckuster.
+1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column.
@@ -85,7 +85,7 @@ ticloud serverless changefeed resume -c --changefeed-id **Note:**
>
-> TiDB Cloud currently only allows editing changefeeds in the paused status.
+> TiDB Cloud currently only allows editing changefeeds that are in the `Paused` state.
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 5f548e4b3d595..f578de8ce79e9 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -33,7 +33,7 @@ Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You
Private Link Connection leverages **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
-TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud dedicated cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
+TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
@@ -45,9 +45,9 @@ See the following instructions to set up a Private Link connection according to
-If you want to provide Public access to your Apache Kafka service, assign Public IP addresses or domain names to all your Kafka brokers.
+If you want to provide public access to your Apache Kafka service, assign public IP addresses or domain names to all your Kafka brokers.
-It is **NOT** recommended to use Public access in a production environment.
+It is not recommended to use public access in a production environment.
@@ -59,7 +59,7 @@ To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
-For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in Confluent documentation for more information.
+For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in the Confluent documentation for more information.
## Step 1. Open the Changefeed page for Apache Kafka
@@ -74,7 +74,7 @@ The steps vary depending on the connectivity method you select.
-1. In **Connectivity Method**, select **Public**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
+1. In **Connectivity Method**, select **Public**, and fill in your Kafka broker endpoints. You can use commas `,` to separate multiple endpoints.
2. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
@@ -109,7 +109,7 @@ The steps vary depending on the connectivity method you select.
1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
@@ -117,7 +117,7 @@ The steps vary depending on the connectivity method you select.
2. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
3. Customize **Column Selector** to select columns from events and send only the data changes related to those columns to the downstream.
@@ -130,7 +130,7 @@ The steps vary depending on the connectivity method you select.
- Avro is a compact, fast, and binary data format with rich data structures, which is widely used in various flow systems. For more information, see [Avro data format](https://docs.pingcap.com/tidb/stable/ticdc-avro-protocol).
- Canal-JSON is a plain JSON text format, which is easy to parse. For more information, see [Canal-JSON data format](https://docs.pingcap.com/tidb/stable/ticdc-canal-json).
- - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
+ - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
- Debezium is a tool for capturing database changes. It converts each captured database change into a message called an "event" and sends these events to Kafka. For more information, see [Debezium data format](https://docs.pingcap.com/tidb/stable/ticdc-debezium).
5. Enable the **TiDB Extension** option if you want to add TiDB-extension fields to the Kafka message body.
@@ -180,7 +180,7 @@ The steps vary depending on the connectivity method you select.
- **Distribute changelogs by column value to Kafka partition**
- If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is send to the same partition.
+ If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is sent to the same partition.
9. In the **Topic Configuration** area, configure the following numbers. The changefeed will automatically create the Kafka topics according to the numbers.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 9d1e32791686f..286bef181be75 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -34,7 +34,7 @@ If your MySQL service can be accessed over the public network, you can choose to
-Private link connection leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
You can connect your TiDB Cloud cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
@@ -48,7 +48,7 @@ The **Sink to MySQL** connector can only sink incremental data from your TiDB Cl
To load the existing data:
-1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during the time is not garbage collected by TiDB.
+1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during this period is not garbage collected by TiDB.
- The time to export and import the existing data
- The time to create **Sink to MySQL**
@@ -82,7 +82,7 @@ After completing the prerequisites, you can sink your data to MySQL.
- If you choose **Public**, fill in your MySQL endpoint.
- If you choose **Private Link**, select the private link connection that you created in the [Network](#network) section, and then fill in the MySQL port for your MySQL service.
-4. In **Authentication**, fill in the MySQL user name, password and TLS Encryption of your MySQL service. TiDB Cloud does not support self-signed certificates for MySQL TLS connections currently.
+4. In **Authentication**, fill in the MySQL user name and password, and configure TLS encryption for your MySQL service. Currently, TiDB Cloud does not support self-signed certificates for MySQL TLS connections.
5. Click **Next** to test whether TiDB can connect to MySQL successfully:
@@ -92,7 +92,7 @@ After completing the prerequisites, you can sink your data to MySQL.
6. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
@@ -100,20 +100,20 @@ After completing the prerequisites, you can sink your data to MySQL.
7. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
8. In **Start Replication Position**, configure the starting position for your MySQL sink.
- - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention the time zone.
+ - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention to the time zone.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
9. Click **Next** to configure your changefeed specification.
- In the **Changefeed Name** area, specify a name for the changefeed.
-10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
+10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
-11. The sink starts soon, and you can see the status of the sink changes from **Creating** to **Running**.
+11. The sink starts soon, and you can see the sink status change from **Creating** to **Running**.
Click the changefeed name, and you can see more details about the changefeed, such as the checkpoint, replication latency, and other metrics.
From c7f2950145a4fa4eb753525195d7fc2ef01c200d Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:00:10 +0800
Subject: [PATCH 07/28] fix changefeed
---
tidb-cloud/essential-changefeed-overview.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 9ed3b05fc242c..136df8c168697 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -30,7 +30,7 @@ On the **Changefeed** page, you can create a changefeed, view a list of existing
To create a changefeed, refer to the tutorials:
-- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-apache-kafka.md)
+- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
## View a changefeed
@@ -80,7 +80,6 @@ ticloud serverless changefeed resume -c
--changefeed-id
-
## Edit a changefeed
> **Note:**
From 3623e4a17fcdaf4f899909374275dd80f757a43a Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:46:29 +0800
Subject: [PATCH 08/28] fix link
---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index f578de8ce79e9..5a0b2efeec95e 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -59,7 +59,7 @@ To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
-For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in the Confluent documentation for more information.
+For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/security/authorization/acls/manage-acls.html#add-acls) in the Confluent documentation for more information.
## Step 1. Open the Changefeed page for Apache Kafka
From 18e8c41ca19c3891f868d427606193beee655544 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 14:54:55 +0800
Subject: [PATCH 09/28] Update essential-changefeed-overview.md
---
tidb-cloud/essential-changefeed-overview.md | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 136df8c168697..500e7e93926bb 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -9,6 +9,7 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic
> **Note:**
>
+> - The changefeed feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
> - Currently, TiDB Cloud only allows up to 10 changefeeds per {{{ .essential }}} cluster.
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
@@ -35,6 +36,8 @@ To create a changefeed, refer to the tutorials:
## View a changefeed
+You can view a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
+
@@ -55,11 +58,13 @@ ticloud serverless changefeed get -c
--changefeed-id
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column.
+2. Locate the corresponding changefeed you want to pause or resume. In the **Action** column, click **...** > **Pause/Resume**.
@@ -86,11 +91,13 @@ ticloud serverless changefeed resume -c --changefeed-id
> TiDB Cloud currently only allows editing changefeeds that are in the `Paused` state.
+You can edit a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
+
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the changefeed you want to pause, and click **...** > **Pause** in the **Action** column.
+2. Locate the changefeed you want to pause. In the **Action** column, click **...** > **Pause**.
3. When the changefeed status changes to `Paused`, click **...** > **Edit** to edit the corresponding changefeed.
TiDB Cloud populates the changefeed configuration by default. You can modify the following configurations:
@@ -128,6 +135,8 @@ ticloud serverless changefeed edit -c
--changefeed-id
@@ -147,7 +156,7 @@ ticloud serverless changefeed delete -c
--changefeed-id
Date: Sun, 4 Jan 2026 15:00:52 +0800
Subject: [PATCH 10/28] Refactor docs to use .essential variable for product
name
Replaced hardcoded 'TiDB Cloud' references with the templated '{{{ .essential }}}' variable in changefeed sink documentation for Kafka and MySQL. Added beta feature notes for both sinks and updated instructions and restrictions to use the variable for consistency and easier product branding.
---
.../essential-changefeed-sink-to-kafka.md | 20 ++++++++++-------
.../essential-changefeed-sink-to-mysql.md | 22 +++++++++++--------
2 files changed, 25 insertions(+), 17 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 5a0b2efeec95e..14f044195a0c5 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -1,17 +1,21 @@
---
title: Sink to Apache Kafka
-summary: This document explains how to create a changefeed to stream data from TiDB Cloud to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
+summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
---
# Sink to Apache Kafka
-This document describes how to create a changefeed to stream data from TiDB Cloud to Apache Kafka.
+This document describes how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka.
+
+> **Note:**
+>
+> - The sink to Apache Kafka feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
## Restrictions
-- For each TiDB Cloud cluster, you can create up to 10 changefeeds.
-- Currently, TiDB Cloud does not support uploading self-signed TLS certificates to connect to Kafka brokers.
-- Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
+- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- Currently, {{{ .essential }}} does not support uploading self-signed TLS certificates to connect to Kafka brokers.
+- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
## Prerequisites
@@ -23,7 +27,7 @@ Before creating a changefeed to stream data to Apache Kafka, you need to complet
### Network
-Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
+Ensure that your {{{ .essential }}} cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
- Public Access: suitable for a quick setup.
- Private Link Connection: meeting security compliance and ensuring network quality.
@@ -33,7 +37,7 @@ Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You
Private Link Connection leverages **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
-TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
+{{{ .essential }}} currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
@@ -54,7 +58,7 @@ It is not recommended to use public access in a production environment.
### Kafka ACL authorization
-To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka topics automatically, ensure that the following permissions are added in Kafka:
+To allow {{{ .essential }}} changefeeds to stream data to Apache Kafka and create Kafka topics automatically, ensure that the following permissions are added in Kafka:
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 286bef181be75..f70791acd1bd3 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -1,16 +1,20 @@
---
title: Sink to MySQL
-summary: This document explains how to stream data from TiDB Cloud to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
+summary: This document explains how to stream data from {{{ .essential }}} to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
---
# Sink to MySQL
-This document describes how to stream data from TiDB Cloud to MySQL using the **Sink to MySQL** changefeed.
+This document describes how to stream data from {{{ .essential }}} to MySQL using the **Sink to MySQL** changefeed.
+
+> **Note:**
+>
+> The sink to MySQL feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
## Restrictions
-- For each TiDB Cloud cluster, you can create up to 10 changefeeds.
-- Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
+- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
## Prerequisites
@@ -23,7 +27,7 @@ Before creating a changefeed, you need to complete the following prerequisites:
### Network
-Make sure that your TiDB Cloud cluster can connect to the MySQL service.
+Make sure that your {{{ .essential }}} cluster can connect to the MySQL service.
@@ -36,7 +40,7 @@ If your MySQL service can be accessed over the public network, you can choose to
Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
-You can connect your TiDB Cloud cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
+You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
@@ -44,7 +48,7 @@ You can connect your TiDB Cloud cluster to your MySQL service securely through a
### Load existing data (optional)
-The **Sink to MySQL** connector can only sink incremental data from your TiDB Cloud cluster to MySQL after a certain timestamp. If you already have data in your TiDB Cloud cluster, you can export and load the existing data of your TiDB Cloud cluster into MySQL before enabling **Sink to MySQL**.
+The **Sink to MySQL** connector can only sink incremental data from your {{{ .essential }}} cluster to MySQL after a certain timestamp. If you already have data in your {{{ .essential }}} cluster, you can export and load the existing data of your {{{ .essential }}} cluster into MySQL before enabling **Sink to MySQL**.
To load the existing data:
@@ -61,7 +65,7 @@ To load the existing data:
SET GLOBAL tidb_gc_life_time = '72h';
```
-2. Use [Export](/tidb-cloud/serverless-export.md) to export data from your TiDB Cloud cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service.
+2. Use [Export](/tidb-cloud/serverless-export.md) to export data from your {{{ .essential }}} cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service.
3. Use the snapshot time of [Export](/tidb-cloud/serverless-export.md) as the start position of MySQL sink.
@@ -73,7 +77,7 @@ If you do not load the existing data, you need to create corresponding target ta
After completing the prerequisites, you can sink your data to MySQL.
-1. Navigate to the overview page of the target TiDB Cloud cluster, and then click **Data** > **Changefeed** in the left navigation pane.
+1. Navigate to the overview page of the target {{{ .essential }}} cluster, and then click **Data** > **Changefeed** in the left navigation pane.
2. Click **Create Changefeed**, and select **MySQL** as **Destination**.
From e854ebb9f44b89dbccfdf9251c705786cf2881f7 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 15:01:49 +0800
Subject: [PATCH 11/28] Update TOC-tidb-cloud-essential.md
---
TOC-tidb-cloud-essential.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index 93a9d7893325e..a498cc45e7b33 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -232,7 +232,7 @@
- [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md)
- [Troubleshoot Access Denied Errors during Data Import from Amazon S3](/tidb-cloud/troubleshoot-import-access-denied-error.md)
- [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
-- Stream Data
+- Stream Data 
- [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
- [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
From df83c8d061dd341f55d2f2cb5d6b2cf44b990c0d Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 15:34:26 +0800
Subject: [PATCH 12/28] Update changefeed docs to indicate beta status
Added '(Beta)' to titles and headings in changefeed overview and sink documents for Kafka and MySQL. Removed redundant beta notes from the body text to streamline documentation and clarify feature status.
---
tidb-cloud/essential-changefeed-overview.md | 5 ++---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 8 ++------
tidb-cloud/essential-changefeed-sink-to-mysql.md | 8 ++------
3 files changed, 6 insertions(+), 15 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 500e7e93926bb..969efd31148d7 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -1,15 +1,14 @@
---
-title: Changefeed
+title: Changefeed (Beta)
summary: TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data services.
---
-# Changefeed
+# Changefeed (Beta)
TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data services. Currently, TiDB Cloud supports streaming data to Apache Kafka and MySQL.
> **Note:**
>
-> - The changefeed feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
> - Currently, TiDB Cloud only allows up to 10 changefeeds per {{{ .essential }}} cluster.
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 14f044195a0c5..8d0da652edc83 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -1,16 +1,12 @@
---
-title: Sink to Apache Kafka
+title: Sink to Apache Kafka (Beta)
summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
---
-# Sink to Apache Kafka
+# Sink to Apache Kafka (Beta)
This document describes how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka.
-> **Note:**
->
-> - The sink to Apache Kafka feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
-
## Restrictions
- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index f70791acd1bd3..48ba99b68b456 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -1,16 +1,12 @@
---
-title: Sink to MySQL
+title: Sink to MySQL (Beta)
summary: This document explains how to stream data from {{{ .essential }}} to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
---
-# Sink to MySQL
+# Sink to MySQL (Beta)
This document describes how to stream data from {{{ .essential }}} to MySQL using the **Sink to MySQL** changefeed.
-> **Note:**
->
-> The sink to MySQL feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
-
## Restrictions
- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
From b531428705a01e02891d22eb6c7c5162f38e9216 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 18:29:13 +0800
Subject: [PATCH 13/28] refine wording
---
.../essential-changefeed-sink-to-kafka.md | 13 +++++----
.../essential-changefeed-sink-to-mysql.md | 29 +++++++++----------
2 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 8d0da652edc83..d029e302be43b 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -25,8 +25,8 @@ Before creating a changefeed to stream data to Apache Kafka, you need to complet
Ensure that your {{{ .essential }}} cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
-- Public Access: suitable for a quick setup.
- Private Link Connection: meeting security compliance and ensuring network quality.
+- Public Network: suitable for a quick setup.
@@ -37,9 +37,9 @@ Private Link Connection leverages **Private Link** technologies from cloud provi
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
-- [Connect to Confluent Cloud via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-confluent.md)
-- [Connect to AWS Self-Hosted Kafka via Private Link Connection](/tidbcloud/serverless-private-link-connection-to-self-hosted-kafka-in-aws.md)
-- [Connect to Alibaba Cloud Self-Hosted Kafka via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-self-hosted-kafka-in-alicloud.md)
+- [Connect to Confluent Cloud on AWS via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-aws-confluent.md)
+- [Connect to AWS Self-Hosted Kafka via Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-self-hosted-kafka-in-aws.md)
+- [Connect to Alibaba Cloud Self-Hosted Kafka via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-self-hosted-kafka-in-alicloud.md)
@@ -65,7 +65,7 @@ For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](h
1. Log in to the [TiDB Cloud console](https://tidbcloud.com).
2. Navigate to the overview page of the target TiDB Cloud cluster, and then click **Data** > **Changefeed** in the left navigation pane.
-3. Click **Create Changefeed**, and select **Kafka** as **Destination**.
+3. Click **Create Changefeed**, and then select **Kafka** as **Destination**.
## Step 2. Configure the changefeed target
@@ -95,6 +95,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
+
5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
@@ -194,4 +195,4 @@ The steps vary depending on the connectivity method you select.
## Step 4. Review and create your changefeed specification
1. In the **Changefeed Name** area, specify a name for the changefeed.
-2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
\ No newline at end of file
+2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 48ba99b68b456..4d86f8568a618 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -9,7 +9,7 @@ This document describes how to stream data from {{{ .essential }}} to MySQL usin
## Restrictions
-- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
@@ -23,20 +23,23 @@ Before creating a changefeed, you need to complete the following prerequisites:
### Network
-Make sure that your {{{ .essential }}} cluster can connect to the MySQL service.
+Make sure that your {{{ .essential }}} cluster can connect to the MySQL service. You can choose one of the following connection methods:
+
+- Private Link Connection: meeting security compliance and ensuring network quality.
+- Public Network: suitable for a quick setup.
-
+
-If your MySQL service can be accessed over the public network, you can choose to connect to MySQL through a public IP or domain name.
+Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
-
+You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
-
+
-Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+
-You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
+If your MySQL service can be accessed over the public network, you can choose to connect to MySQL through a public IP or domain name.
@@ -55,8 +58,6 @@ To load the existing data:
For example:
- {{< copyable "sql" >}}
-
```sql
SET GLOBAL tidb_gc_life_time = '72h';
```
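As a quick sanity check (not part of the patch above), you can confirm the new value took effect; `tidb_gc_life_time` is a standard TiDB system variable, so the usual `SHOW VARIABLES` syntax applies:

```sql
-- Expect '72h' after running the SET GLOBAL statement above
SHOW VARIABLES LIKE 'tidb_gc_life_time';
```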
@@ -119,8 +120,6 @@ After completing the prerequisites, you can sink your data to MySQL.
12. If you have [loaded the existing data](#load-existing-data-optional) using Export, you need to restore the GC time to its original value (the default value is `10m`) after the sink is created:
-{{< copyable "sql" >}}
-
-```sql
-SET GLOBAL tidb_gc_life_time = '10m';
-```
+ ```sql
+ SET GLOBAL tidb_gc_life_time = '10m';
+ ```
From 46f45a6ee2e335f03f871454f2b79a784b6611a4 Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 12 Jan 2026 15:56:44 +0800
Subject: [PATCH 14/28] add supported region
---
tidb-cloud/essential-changefeed-overview.md | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 969efd31148d7..5c8a3f6804dee 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -12,6 +12,17 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic
> - Currently, TiDB Cloud only allows up to 10 changefeeds per {{{ .essential }}} cluster.
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
+## Supported regions
+
+The changefeed feature is currently supported in the following regions:
+
+| Cloud Provider | Supported Regions |
+| --- | --- |
+| Alibaba Cloud | ap-southeast-1<br/>cn-hongkong<br/>ap-southeast-5 |
+| AWS | us-east-1 |
+
+Other regions will be supported in the near future. If you require immediate support for a specific region, please contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for help.
+
## View the Changefeed page
To access the changefeed feature, take the following steps:
From 1d90785f1216c6f4fde6e64f4f09f1abd09c738b Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Tue, 13 Jan 2026 10:13:52 +0800
Subject: [PATCH 15/28] remove essential in kafka and mysql
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 2 +-
tidb-cloud/changefeed-sink-to-mysql.md | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index a5fdb5a283f14..1b6cde6667e99 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -12,7 +12,7 @@ This document describes how to create a changefeed to stream data from TiDB Clou
> **Note:**
>
> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
-> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable.
+> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
diff --git a/tidb-cloud/changefeed-sink-to-mysql.md b/tidb-cloud/changefeed-sink-to-mysql.md
index 676bf3d62dafa..cdb09f913d48e 100644
--- a/tidb-cloud/changefeed-sink-to-mysql.md
+++ b/tidb-cloud/changefeed-sink-to-mysql.md
@@ -12,14 +12,14 @@ This document describes how to stream data from TiDB Cloud to MySQL using the **
> **Note:**
>
> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
-> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable.
+> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
> **Note:**
>
-> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable.
+> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
From fd734a23a1269f272441aeea70110bd76a1b390e Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Tue, 13 Jan 2026 10:33:43 +0800
Subject: [PATCH 16/28] Apply suggestions from code review
Co-authored-by: xixirangrang
---
tidb-cloud/essential-changefeed-overview.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 5c8a3f6804dee..c197f62af1d66 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -18,10 +18,10 @@ The changefeed feature is currently supported in the following regions:
| Cloud Provider | Supported Regions |
| --- | --- |
-| Alibaba Cloud | ap-southeast-1<br/>cn-hongkong<br/>ap-southeast-5 |
-| AWS | us-east-1 |
+| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
+| AWS | `us-east-1` |
-Other regions will be supported in the near future. If you require immediate support for a specific region, please contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for help.
+Other regions will be supported in the near future. If you require immediate support for a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for help.
## View the Changefeed page
From 5aa452726c45b95f68dbfcccb79f8cd0004639cf Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Tue, 13 Jan 2026 11:30:58 +0800
Subject: [PATCH 17/28] sync with UI
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 2 +-
.../essential-changefeed-sink-to-kafka.md | 21 ++++++++++---------
.../essential-changefeed-sink-to-mysql.md | 2 +-
3 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 1b6cde6667e99..763fb10c39bd9 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -273,7 +273,7 @@ The steps vary depending on the connectivity method you select.
- **Tables matching**: specify which tables the column selector applies to. For tables that do not match any rule, all columns are sent.
- **Column Selector**: specify which columns of the matched tables will be sent to the downstream.
- For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
+ For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
4. In the **Data Format** area, select your desired format of Kafka messages.
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index d029e302be43b..7bff4fc3302b1 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -1,6 +1,6 @@
---
title: Sink to Apache Kafka (Beta)
-summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
+summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed.
---
# Sink to Apache Kafka (Beta)
@@ -31,9 +31,9 @@ Ensure that your {{{ .essential }}} cluster can connect to the Apache Kafka serv
-Private Link Connection leverages **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
+Private link connections leverage **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
-{{{ .essential }}} currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
+{{{ .essential }}} currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Clusters. It does not support direct integration with MSK or other Kafka SaaS services.
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
@@ -80,7 +80,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-3. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
+3. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v3**.
4. Select a **Compression** type for the data in this changefeed.
5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
6. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
@@ -89,17 +89,17 @@ The steps vary depending on the connectivity method you select.
1. In **Connectivity Method**, select **Private Link**.
-2. In **Private Link Connection**, select the private link connection that you created in the [Network](#network) section. Make sure the AZs of the private link connection match the AZs of the Kafka deployment.
+2. In **Private Link Connection**, select the private link connection that you created in the [Network](#network) section. Make sure the Availability Zones of the private link connection match those of the Kafka deployment.
3. Fill in the **Bootstrap Port** that you obtained from the [Network](#network) section.
4. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
+5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v3**.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Input the **TLS Server Name** if your Kafka requires TLS SNI verification. For example, Confluent Cloud Dedicated clusters.
+8. Input the **TLS Server Name** if your Kafka requires TLS SNI verification. For example, `Confluent Cloud Dedicated clusters`.
9. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
@@ -125,7 +125,7 @@ The steps vary depending on the connectivity method you select.
- **Tables matching**: specify which tables the column selector applies to. For tables that do not match any rule, all columns are sent.
- **Column Selector**: specify which columns of the matched tables will be sent to the downstream.
- For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
+ For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
4. In the **Data Format** area, select your desired format of Kafka messages.
@@ -192,7 +192,8 @@ The steps vary depending on the connectivity method you select.
11. Click **Next**.
-## Step 4. Review and create your changefeed specification
+## Step 4. Review and create your changefeed
1. In the **Changefeed Name** area, specify a name for the changefeed.
-2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
+2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations.
+3. If all configurations are correct, click **Submit** to create the changefeed.
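After the changefeed is created and running, a consumer-side smoke test is a reasonable way to confirm messages are flowing. The sketch below assumes the `kcat` CLI, which these docs do not mention; broker, topic, and any auth flags are placeholders you would adapt to your setup:

```bash
# Hypothetical smoke test: consume 5 new messages from the changefeed topic
kcat -b <broker-endpoint:port> -t <topic-name> -C -o end -c 5
```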
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 4d86f8568a618..30669813b09cd 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -108,7 +108,7 @@ After completing the prerequisites, you can sink your data to MySQL.
- If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention to the time zone.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
-9. Click **Next** to configure your changefeed specification.
+9. Click **Next** to configure your changefeed.
- In the **Changefeed Name** area, specify a name for the changefeed.
From 5a6fabd03225856140f352a3ee1cb688a9a115aa Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Tue, 13 Jan 2026 15:06:44 +0800
Subject: [PATCH 18/28] add ap-southeast-1 support
---
tidb-cloud/essential-changefeed-overview.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index c197f62af1d66..1238468fb9ad0 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -19,7 +19,7 @@ The changefeed feature is currently supported in the following regions:
| Cloud Provider | Supported Regions |
| --- | --- |
| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
-| AWS | `us-east-1` |
+| AWS | `us-east-1`<br/>`ap-southeast-1` |
Other regions will be supported in the near future. If you require immediate support for a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for help.
From f4847dc2cd993ff28746d85d2765327cb136a0dd Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Tue, 13 Jan 2026 18:38:11 +0800
Subject: [PATCH 19/28] Update tidb-cloud/changefeed-sink-to-apache-kafka.md
Co-authored-by: Aolin
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 763fb10c39bd9..1b6cde6667e99 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -273,7 +273,7 @@ The steps vary depending on the connectivity method you select.
- **Tables matching**: specify which tables the column selector applies to. For tables that do not match any rule, all columns are sent.
- **Column Selector**: specify which columns of the matched tables will be sent to the downstream.
- For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
+ For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
4. In the **Data Format** area, select your desired format of Kafka messages.
From 84686bbc671e17fb0387a32066f6aeea932d596d Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Tue, 13 Jan 2026 18:39:54 +0800
Subject: [PATCH 20/28] move toc
---
TOC-tidb-cloud-essential.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index affcf138e0971..337a2e305dfd2 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -232,10 +232,6 @@
- [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md)
- [Troubleshoot Access Denied Errors during Data Import from Amazon S3](/tidb-cloud/troubleshoot-import-access-denied-error.md)
- [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
-- Stream Data 
- - [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
- - [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
- - [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- Vector Search 
- [Overview](/vector-search/vector-search-overview.md)
- Get Started
@@ -264,6 +260,10 @@
- [Vector Index](/vector-search/vector-search-index.md)
- [Improve Performance](/vector-search/vector-search-improve-performance.md)
- [Limitations](/vector-search/vector-search-limitations.md)
+- Stream Data 
+ - [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
+ - [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
+ - [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- Security
- [Security Overview](/tidb-cloud/security-overview.md)
- Identity Access Control
From a66e56c4e1dccc845c4ebc6cd5d9684ae3acfbd4 Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Wed, 14 Jan 2026 14:19:47 +0800
Subject: [PATCH 21/28] Apply suggestions from code review
Co-authored-by: Aolin
---
tidb-cloud/essential-changefeed-overview.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 1238468fb9ad0..ebcf5ca68f57f 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -14,14 +14,14 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic
## Supported regions
-The changefeed feature is currently supported in the following regions:
+The changefeed feature is available in the following regions:
| Cloud Provider | Supported Regions |
| --- | --- |
| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
| AWS | `us-east-1`<br/>`ap-southeast-1` |
-Other regions will be supported in the near future. If you require immediate support for a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for help.
+Additional regions will be supported in the future. For immediate support in a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
## View the Changefeed page
From 1d0c6b579818c996d90a78df124c4089035c9317 Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Wed, 14 Jan 2026 14:32:21 +0800
Subject: [PATCH 22/28] Apply suggestions from code review
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 1 -
tidb-cloud/essential-changefeed-overview.md | 2 +-
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 1b6cde6667e99..2743fa6d55d73 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -12,7 +12,6 @@ This document describes how to create a changefeed to stream data from TiDB Clou
> **Note:**
>
> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
-> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index ebcf5ca68f57f..9550702daefb7 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -35,7 +35,7 @@ To access the changefeed feature, take the following steps:
2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Changefeed** in the left navigation pane. The changefeed page is displayed.
-On the **Changefeed** page, you can create a changefeed, view a list of existing changefeeds, and operate the existing changefeeds (such as scaling, pausing, resuming, editing, and deleting a changefeed).
+On the **Changefeed** page, you can create a changefeed, view a list of existing changefeeds, and operate the existing changefeeds (such as pausing, resuming, editing, and deleting a changefeed).
## Create a changefeed
From da2af74d630a2f05325bcc2a4f9a4f5daf3f1c1f Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Wed, 14 Jan 2026 14:39:02 +0800
Subject: [PATCH 23/28] remove default
---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 7bff4fc3302b1..2e49971dd8d3e 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -80,7 +80,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-3. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v3**.
+3. Select your **Kafka Version**. Choose **Kafka v2** or **Kafka v3** according to the version of your Kafka.
4. Select a **Compression** type for the data in this changefeed.
5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
6. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
@@ -96,7 +96,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v3**.
+5. Select your **Kafka Version**. Choose **Kafka v2** or **Kafka v3** according to the version of your Kafka.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
8. Input the **TLS Server Name** if your Kafka requires TLS SNI verification. For example, `Confluent Cloud Dedicated clusters`.
From 9ed187b7248f153b778f300e80332a97797be60f Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Thu, 15 Jan 2026 09:18:42 +0800
Subject: [PATCH 24/28] remove notes
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 7 -------
tidb-cloud/changefeed-sink-to-mysql.md | 10 +---------
2 files changed, 1 insertion(+), 16 deletions(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 2743fa6d55d73..7c07d5b10fe15 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -13,13 +13,6 @@ This document describes how to create a changefeed to stream data from TiDB Clou
>
> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
-
-
-
-> **Note:**
->
-> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable.
-
## Restrictions
diff --git a/tidb-cloud/changefeed-sink-to-mysql.md b/tidb-cloud/changefeed-sink-to-mysql.md
index cdb09f913d48e..297333547c74a 100644
--- a/tidb-cloud/changefeed-sink-to-mysql.md
+++ b/tidb-cloud/changefeed-sink-to-mysql.md
@@ -11,15 +11,7 @@ This document describes how to stream data from TiDB Cloud to MySQL using the **
> **Note:**
>
-> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
-> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
-
-
-
-
-> **Note:**
->
-> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
+> To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
From 67dd80c439401d18cc8b60b61f2748d408cf9702 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Thu, 15 Jan 2026 21:22:10 +0800
Subject: [PATCH 25/28] Apply suggestions from code review
Co-authored-by: Aolin
---
tidb-cloud/changefeed-sink-to-apache-kafka.md | 2 +-
tidb-cloud/essential-changefeed-overview.md | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 7c07d5b10fe15..ff506f5bb6c86 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -11,7 +11,7 @@ This document describes how to create a changefeed to stream data from TiDB Clou
> **Note:**
>
-> - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
+> To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later.
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 9550702daefb7..ced71261b9989 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -16,10 +16,10 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic
The changefeed feature is available in the following regions:
-| Cloud Provider | Supported Regions |
+| Cloud provider | Supported regions |
| --- | --- |
-| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
| AWS | `us-east-1`<br/>`ap-southeast-1` |
+| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
Additional regions will be supported in the future. For immediate support in a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
From 6539106d69c041de332cd1394f0aeff8e7638bb9 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Fri, 16 Jan 2026 13:49:54 +0800
Subject: [PATCH 26/28] Apply suggestions from code review
Co-authored-by: Aolin
---
tidb-cloud/essential-changefeed-overview.md | 42 ++++++++++---------
.../essential-changefeed-sink-to-kafka.md | 16 +++----
.../essential-changefeed-sink-to-mysql.md | 16 +++----
3 files changed, 40 insertions(+), 34 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index ced71261b9989..60e74d8add075 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -59,8 +59,10 @@ You can view a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
-```
-ticloud serverless changefeed get -c <cluster-id> --changefeed-id <changefeed-id>
+Run the following command:
+
+```bash
+ticloud serverless changefeed get --cluster-id <cluster-id> --changefeed-id <changefeed-id>
```
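For illustration only, a filled-in invocation might look like the following; both IDs are made-up values, not anything from this PR:

```bash
# Hypothetical example: inspect one changefeed of a cluster (IDs are placeholders)
ticloud serverless changefeed get --cluster-id 1234567890123456789 --changefeed-id changefeed-abc123
```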
@@ -74,16 +76,16 @@ You can pause or resume a changefeed using the TiDB Cloud console or the TiDB Cl
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the corresponding changefeed you want to pause or resume. In the **Action** column, click **...** > **Pause/Resume**.
+2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column.
-To pause a changefeed:
+To pause a changefeed, run the following command:
-```
-ticloud serverless changefeed pause -c <cluster-id> --changefeed-id <changefeed-id>
+```bash
+ticloud serverless changefeed pause --cluster-id <cluster-id> --changefeed-id <changefeed-id>
```
To resume a changefeed:
@@ -99,7 +101,7 @@ ticloud serverless changefeed resume -c <cluster-id> --changefeed-id <changefeed-id>
> **Note:**
>
-> TiDB Cloud currently only allows editing changefeeds that are in the `Paused` state.
+> TiDB Cloud currently only allows editing changefeeds in the paused status.
You can edit a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
@@ -107,12 +109,12 @@ You can edit a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the changefeed you want to pause. In the **Action** column, click **...** > **Pause**.
+2. Locate the changefeed you want to pause, and click **...** > **Pause** in the **Action** column.
3. When the changefeed status changes to `Paused`, click **...** > **Edit** to edit the corresponding changefeed.
TiDB Cloud populates the changefeed configuration by default. You can modify the following configurations:
- - Apache Kafka sink: all configurations except **Destination**, **Connection** and **Start Position**
+ - Apache Kafka sink: all configurations except **Destination**, **Connection**, and **Start Position**
- MySQL sink: all configurations except **Destination**, **Connection**, and **Start Position**
4. After editing the configuration, click **...** > **Resume** to resume the corresponding changefeed.
@@ -121,16 +123,16 @@ You can edit a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
-Edit a changefeed with Apache Kafka sink:
+Edit a changefeed with an Apache Kafka sink:
-```
-ticloud serverless changefeed edit -c <cluster-id> --changefeed-id <changefeed-id> --name <name> --kafka <kafka> --filter <filter>
+```bash
+ticloud serverless changefeed edit --cluster-id <cluster-id> --changefeed-id <changefeed-id> --name <name> --kafka <kafka> --filter <filter>
```
-Edit a changefeed with MySQL sink:
+Edit a changefeed with a MySQL sink:
-```
-ticloud serverless changefeed edit -c <cluster-id> --changefeed-id <changefeed-id> --name <name> --mysql <mysql> --filter <filter>
+```bash
+ticloud serverless changefeed edit --cluster-id <cluster-id> --changefeed-id <changefeed-id> --name <name> --mysql <mysql> --filter <filter>
```
@@ -151,14 +153,16 @@ You can delete a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the corresponding changefeed you want to delete, and click **...** > **Delete** in the **Action** column.
+2. Locate the changefeed you want to delete, and click **...** > **Delete** in the **Action** column.
-```
-ticloud serverless changefeed delete -c <cluster-id> --changefeed-id <changefeed-id>
+Run the following command:
+
+```bash
+ticloud serverless changefeed delete --cluster-id <cluster-id> --changefeed-id <changefeed-id>
```
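Taken together with the note that only paused changefeeds can be edited, these commands imply a pause-edit-resume flow. A sketch under that assumption (IDs, the new name, and the exact `resume` flags are placeholders inferred from the commands above):

```bash
# Sketch: edit a changefeed safely by pausing it first
ticloud serverless changefeed pause --cluster-id <cluster-id> --changefeed-id <changefeed-id>
# Wait until the changefeed status shows Paused, then apply the edit
ticloud serverless changefeed edit --cluster-id <cluster-id> --changefeed-id <changefeed-id> --name <new-name>
# Resume replication with the updated configuration
ticloud serverless changefeed resume --cluster-id <cluster-id> --changefeed-id <changefeed-id>
```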
@@ -166,7 +170,7 @@ ticloud serverless changefeed delete -c <cluster-id> --changefeed-id <changefeed-id>
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
-2. Navigate to the overview page of the target {{{ .essential }}} cluster, and then click **Changefeed** in the left navigation pane.
+2. Navigate to the overview page of the target {{{ .essential }}} cluster, and then click **Data** > **Changefeed** in the left navigation pane.
3. Click **Create Changefeed**, and then select **Kafka** as **Destination**.
## Step 2. Configure the changefeed target
@@ -80,7 +80,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-3. Select your **Kafka Version**. Choose **Kafka v2** or **Kafka v3** according to the version of your Kafka.
+3. For **Kafka Version**, select **Kafka v2** or **Kafka v3** based on your Kafka version.
4. Select a **Compression** type for the data in this changefeed.
5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
6. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
@@ -96,10 +96,10 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. Choose **Kafka v2** or **Kafka v3** according to the version of your Kafka.
+5. For **Kafka Version**, select **Kafka v2** or **Kafka v3** based on your Kafka version.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Input the **TLS Server Name** if your Kafka requires TLS SNI verification. For example, `Confluent Cloud Dedicated clusters`.
+8. If your Kafka requires TLS SNI verification (for example, Confluent Cloud Dedicated clusters), enter the **TLS Server Name**.
9. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
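If step 8 applies to you, it can help to verify outside the console that the broker actually serves a certificate for the expected SNI name. `openssl s_client` is a generic tool for this and is not something these docs prescribe; host, port, and server name are placeholders:

```bash
# Optional check: print the certificate subject the broker returns for a given SNI name
openssl s_client -connect <bootstrap-host>:<port> -servername <tls-server-name> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```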
@@ -141,7 +141,7 @@ The steps vary depending on the connectivity method you select.
6. If you select **Avro** as your data format, you will see some Avro-specific configurations on the page. You can fill in these configurations as follows:
- In the **Decimal** and **Unsigned BigInt** configurations, specify how TiDB Cloud handles the decimal and unsigned bigint data types in Kafka messages.
- - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the fields for user name and password are displayed to fill in.
+ - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, enter the user name and password.
7. In the **Topic Distribution** area, select a distribution mode, and then fill in the topic name configurations according to the mode.
@@ -195,5 +195,5 @@ The steps vary depending on the connectivity method you select.
## Step 4. Review and create your changefeed
1. In the **Changefeed Name** area, specify a name for the changefeed.
-2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations.
+2. Review all the changefeed configurations that you set. Click **Previous** to make changes if necessary.
3. If all configurations are correct, click **Submit** to create the changefeed.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 30669813b09cd..5cba762687d5b 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -62,9 +62,9 @@ To load the existing data:
SET GLOBAL tidb_gc_life_time = '72h';
```
-2. Use [Export](/tidb-cloud/serverless-export.md) to export data from your {{{ .essential }}} cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service.
+2. Use the [Export](/tidb-cloud/serverless-export.md) feature to export data from your {{{ .essential }}} cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load the data into the MySQL service.
-3. Use the snapshot time of [Export](/tidb-cloud/serverless-export.md) as the start position of MySQL sink.
+3. Record the snapshot time returned by [Export](/tidb-cloud/serverless-export.md). Use this timestamp as the starting position when you configure the MySQL sink.
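As an illustration of step 2, a myloader import of the exported dump might look like the following; host, credentials, and directory are placeholders, and flags can vary across mydumper/myloader versions:

```bash
# Hypothetical example: load the exported dump into the downstream MySQL service
myloader --host <mysql-host> --user <user> --password <password> --directory <export-dir> --threads 4
```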
### Create target tables in MySQL
@@ -105,20 +105,22 @@ After completing the prerequisites, you can sink your data to MySQL.
8. In **Start Replication Position**, configure the starting position for your MySQL sink.
- - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention to the time zone.
+ - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time returned by Export. Ensure that the time zone is correct.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
9. Click **Next** to configure your changefeed.
- - In the **Changefeed Name** area, specify a name for the changefeed.
+ In the **Changefeed Name** area, specify a name for the changefeed.
-10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
+10. Review the configuration. If all settings are correct, click **Submit**.
-11. The sink starts soon, and you can see the sink status change from **Creating** to **Running**.
+ If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
+
+11. After creation, the sink status changes from **Creating** to **Running**.
Click the changefeed name, and you can see more details about the changefeed, such as the checkpoint, replication latency, and other metrics.
-12. If you have [loaded the existing data](#load-existing-data-optional) using Export, you need to restore the GC time to its original value (the default value is `10m`) after the sink is created:
+12. If you have [loaded the existing data](#load-existing-data-optional) and increased the GC time, restore it to its original value (the default value is `10m`) after the sink is created:
```sql
SET GLOBAL tidb_gc_life_time = '10m';
From 5acb92c055dee29138be3c095d4c089166e8e38b Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Fri, 16 Jan 2026 13:57:11 +0800
Subject: [PATCH 27/28] Update essential-changefeed-sink-to-kafka.md
---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index cdef355807fed..4326ec396012e 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -125,7 +125,7 @@ The steps vary depending on the connectivity method you select.
- **Tables matching**: specify which tables the column selector applies to. For tables that do not match any rule, all columns are sent.
- **Column Selector**: specify which columns of the matched tables will be sent to the downstream.
- For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
+ For more information about the matching rules, see [Column selectors](https://docs.pingcap.com/tidb/stable/ticdc-sink-to-kafka/#column-selectors).
4. In the **Data Format** area, select your desired format of Kafka messages.
From 3699bcabd870bce99ab6dcfe14cb4ca93a083553 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Fri, 16 Jan 2026 14:08:27 +0800
Subject: [PATCH 28/28] Update essential-changefeed-overview.md
---
tidb-cloud/essential-changefeed-overview.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 60e74d8add075..cb1edda715bdb 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -18,8 +18,8 @@ The changefeed feature is available in the following regions:
| Cloud provider | Supported regions |
| --- | --- |
-| AWS | `us-east-1`<br/>`ap-southeast-1` |
-| Alibaba Cloud | `ap-southeast-1`<br/>`cn-hongkong`<br/>`ap-southeast-5` |
+| AWS | - `us-east-1`<br/>- `ap-southeast-1` |
+| Alibaba Cloud | - `ap-southeast-1`<br/>- `cn-hongkong`<br/>- `ap-southeast-5` |
Additional regions will be supported in the future. For immediate support in a specific region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).