diff --git a/TOC-tidb-cloud-lake.md b/TOC-tidb-cloud-lake.md
index 7c10d48d6ee12..2a16022b0a852 100644
--- a/TOC-tidb-cloud-lake.md
+++ b/TOC-tidb-cloud-lake.md
@@ -30,7 +30,7 @@
- Connect
- [Overview](/tidb-cloud-lake/guides/connection-overview.md)
- SQL Clients
- - [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md)
+ - [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md)
- [DBeaver](/tidb-cloud-lake/guides/connect-using-dbeaver.md)
- Drivers
- [Overview](/tidb-cloud-lake/guides/driver-overview.md)
diff --git a/tidb-cloud-lake/guides/authenticate-with-aws-iam-role.md b/tidb-cloud-lake/guides/authenticate-with-aws-iam-role.md
index 98378a1a1309f..b46540e081d2f 100644
--- a/tidb-cloud-lake/guides/authenticate-with-aws-iam-role.md
+++ b/tidb-cloud-lake/guides/authenticate-with-aws-iam-role.md
@@ -86,7 +86,7 @@ After {{{ .lake }}} support shares the trusted principal information for your or
Click `View Role`, and record the role ARN: `arn:aws:iam::987654321987:role/databend-test`
-4. Run the following SQL statement in {{{ .lake }}} cloud worksheet or `BendSQL`:
+4. Run the following SQL statement in {{{ .lake }}} cloud worksheet or `LakeSQL`:
```sql
CREATE CONNECTION databend_test STORAGE_TYPE = 's3' ROLE_ARN = 'arn:aws:iam::987654321987:role/databend-test' EXTERNAL_ID = 'my-external-id-123';
diff --git a/tidb-cloud-lake/guides/connect-using-golang.md b/tidb-cloud-lake/guides/connect-using-golang.md
index d130bcbef3fc3..fcbc865759259 100644
--- a/tidb-cloud-lake/guides/connect-using-golang.md
+++ b/tidb-cloud-lake/guides/connect-using-golang.md
@@ -10,7 +10,7 @@ The official Go driver provides a standard `database/sql` interface for seamless
## Installation
```bash
-go get github.com/databendlabs/databend-go
+go get github.com/tidbcloud/lake-go
```
**Connection String**: See [drivers overview](/tidb-cloud-lake/guides/driver-overview.md) for DSN format and examples.
@@ -60,7 +60,7 @@ import (
"fmt"
"log"
- _ "github.com/databendlabs/databend-go"
+ _ "github.com/tidbcloud/lake-go"
)
// Connect to {{{ .lake }}}
@@ -95,6 +95,6 @@ fmt.Printf("User: %d, %s\n", id, name)
## Resources
-- **GitHub Repository**: [databend-go](https://github.com/databendlabs/databend-go)
+- **GitHub Repository**: [lake-go](https://github.com/tidbcloud/lake-go)
-- **Go Package**: [pkg.go.dev](https://pkg.go.dev/github.com/datafuselabs/databend-go)
+- **Go Package**: [pkg.go.dev](https://pkg.go.dev/github.com/tidbcloud/lake-go)
-- **Examples**: [GitHub Examples](https://github.com/databendlabs/databend-go/tree/main/examples)
+- **Examples**: [GitHub Examples](https://github.com/tidbcloud/lake-go/tree/main/examples)
diff --git a/tidb-cloud-lake/guides/connect-using-java.md b/tidb-cloud-lake/guides/connect-using-java.md
index 54c1243d9b8dd..c4a37e2d97ede 100644
--- a/tidb-cloud-lake/guides/connect-using-java.md
+++ b/tidb-cloud-lake/guides/connect-using-java.md
@@ -14,7 +14,7 @@ The official JDBC driver provides standard JDBC 4.0 compatibility for seamless i
```xml
<dependency>
    <groupId>com.databend</groupId>
-    <artifactId>databend-jdbc</artifactId>
+    <artifactId>lake-jdbc</artifactId>
    <version>0.4.1</version>
</dependency>
```
@@ -22,7 +22,7 @@ The official JDBC driver provides standard JDBC 4.0 compatibility for seamless i
### Gradle
```gradle
-implementation 'com.databend:databend-jdbc:0.4.1'
+implementation 'com.databend:lake-jdbc:0.4.1'
```
**Connection String**: See [drivers overview](/tidb-cloud-lake/guides/driver-overview.md) for DSN format and examples.
@@ -114,17 +114,17 @@ conn.close();
## Configuration Reference
-For complete databend-jdbc driver configuration options including:
+For complete lake-jdbc driver configuration options including:
- Connection string parameters
- SSL/TLS configuration
- Authentication methods
- Performance tuning parameters
-Please refer to the [official databend-jdbc Connection Guide](https://github.com/databendlabs/databend-jdbc/blob/main/docs/Connection.md).
+Please refer to the [official lake-jdbc Connection Guide](https://github.com/tidbcloud/lake-jdbc/blob/main/docs/Connection.md).
## Resources
-- **Maven Central**: [databend-jdbc](https://repo1.maven.org/maven2/com/databend/databend-jdbc/)
-- **GitHub Repository**: [databend-jdbc](https://github.com/databendlabs/databend-jdbc)
+- **Maven Central**: [lake-jdbc](https://repo1.maven.org/maven2/com/databend/lake-jdbc/)
+- **GitHub Repository**: [lake-jdbc](https://github.com/tidbcloud/lake-jdbc)
- **JDBC Documentation**: [Oracle JDBC Guide](https://docs.oracle.com/javase/tutorial/jdbc/)
diff --git a/tidb-cloud-lake/guides/connect-using-bendsql.md b/tidb-cloud-lake/guides/connect-using-lakesql.md
similarity index 88%
rename from tidb-cloud-lake/guides/connect-using-bendsql.md
rename to tidb-cloud-lake/guides/connect-using-lakesql.md
index 468027156e5bd..e0edcc5d81aa1 100644
--- a/tidb-cloud-lake/guides/connect-using-bendsql.md
+++ b/tidb-cloud-lake/guides/connect-using-lakesql.md
@@ -1,44 +1,44 @@
---
-title: BendSQL
-summary: BendSQL is a command line tool that has been designed specifically for {{{ .lake }}}. It allows users to establish a connection with {{{ .lake }}} and execute queries directly from a CLI window.
+title: LakeSQL
+summary: LakeSQL is a command line tool designed specifically for {{{ .lake }}}. It allows users to establish a connection with {{{ .lake }}} and execute queries directly from a CLI window.
---
-# BendSQL
+# LakeSQL
-[BendSQL](https://github.com/databendlabs/bendsql) is a command line tool that has been designed specifically for {{{ .lake }}}. It allows users to establish a connection with {{{ .lake }}} and execute queries directly from a CLI window.
+[LakeSQL](https://github.com/tidbcloud/lakesql) is a command line tool designed specifically for {{{ .lake }}}. It allows users to establish a connection with {{{ .lake }}} and execute queries directly from a CLI window.
-BendSQL is particularly useful for those who prefer a command line interface and need to work with {{{ .lake }}} on a regular basis. With BendSQL, users can easily and efficiently manage their databases, tables, and data, and perform a wide range of queries and operations with ease.
+LakeSQL is particularly useful for those who prefer a command line interface and need to work with {{{ .lake }}} on a regular basis. With LakeSQL, you can efficiently manage your databases, tables, and data, and run a wide range of queries and operations.
-## Installing BendSQL
+## Installing LakeSQL
-BendSQL offers multiple installation options to suit different platforms and preferences. Choose your preferred method from the sections below or download the installation package from the [BendSQL release page](https://github.com/databendlabs/bendsql/releases) to install it manually.
+LakeSQL offers multiple installation options to suit different platforms and preferences. Choose your preferred method from the sections below or download the installation package from the [LakeSQL release page](https://github.com/tidbcloud/lakesql/releases) to install it manually.
### Shell Script
-BendSQL provides a convenient Shell script for installation. You can choose between two options:
+LakeSQL provides a convenient Shell script for installation. You can choose between two options:
#### Default Installation
-Install BendSQL to the user's home directory (~/.bendsql):
+Install LakeSQL to the user's home directory (~/.lakesql):
```bash
-curl -fsSL https://repo.databend.com/install/bendsql.sh | bash
+curl -fsSL https://repo.tidbcloud.com/install/lakesql.sh | bash
```
```bash title='Example:'
# highlight-next-line
-curl -fsSL https://repo.databend.com/install/bendsql.sh | bash
+curl -fsSL https://repo.tidbcloud.com/install/lakesql.sh | bash
- B E N D S Q L
+ L A K E S Q L
Installer
--------------------------------------------------------------------------------
Website: https://tidbcloud.com
Docs: https://docs.tidb.io/tidbcloudlake/
-Github: https://github.com/databendlabs/bendsql
+Github: https://github.com/tidbcloud/lakesql
--------------------------------------------------------------------------------
->>> We'll be installing BendSQL via a pre-built archive at https://repo.databend.com/bendsql/v0.22.2/
+>>> We'll be installing LakeSQL via a pre-built archive at https://repo.tidbcloud.com/lakesql/v0.22.2/
>>> Ready to proceed? (y/n)
>>> Please enter y or n.
@@ -46,83 +46,83 @@ Github: https://github.com/databendlabs/bendsql
--------------------------------------------------------------------------------
->>> Downloading BendSQL via https://repo.databend.com/bendsql/v0.22.2/bendsql-aarch64-apple-darwin.tar.gz ✓
->>> Unpacking archive to /Users/eric/.bendsql ... ✓
->>> Adding BendSQL path to /Users/eric/.zprofile ✓
->>> Adding BendSQL path to /Users/eric/.profile ✓
+>>> Downloading LakeSQL via https://repo.tidbcloud.com/lakesql/v0.22.2/lakesql-aarch64-apple-darwin.tar.gz ✓
+>>> Unpacking archive to /Users/eric/.lakesql ... ✓
+>>> Adding LakeSQL path to /Users/eric/.zprofile ✓
+>>> Adding LakeSQL path to /Users/eric/.profile ✓
>>> Install succeeded! 🚀
->>> To start BendSQL:
+>>> To start LakeSQL:
- bendsql --help
+ lakesql --help
->>> More information at https://github.com/databendlabs/bendsql
+>>> More information at https://github.com/tidbcloud/lakesql
```
#### Custom Installation with `--prefix`
-Install BendSQL to a specified directory (e.g., /usr/local):
+Install LakeSQL to a specified directory (e.g., /usr/local):
```bash
-curl -fsSL https://repo.databend.com/install/bendsql.sh | bash -s -- -y --prefix /usr/local
+curl -fsSL https://repo.tidbcloud.com/install/lakesql.sh | bash -s -- -y --prefix /usr/local
```
```bash title='Example:'
# highlight-next-line
-curl -fsSL https://repo.databend.com/install/bendsql.sh | bash -s -- -y --prefix /usr/local
- B E N D S Q L
+curl -fsSL https://repo.tidbcloud.com/install/lakesql.sh | bash -s -- -y --prefix /usr/local
+ L A K E S Q L
Installer
--------------------------------------------------------------------------------
Website: https://tidbcloud.com
-Docs: https://docs.databend.com
-Github: https://github.com/databendlabs/bendsql
+Docs: https://docs.tidb.io/tidbcloudlake/
+Github: https://github.com/tidbcloud/lakesql
--------------------------------------------------------------------------------
->>> Downloading BendSQL via https://repo.databend.com/bendsql/v0.22.2/bendsql-aarch64-apple-darwin.tar.gz ✓
+>>> Downloading LakeSQL via https://repo.tidbcloud.com/lakesql/v0.22.2/lakesql-aarch64-apple-darwin.tar.gz ✓
>>> Unpacking archive to /usr/local ... ✓
>>> Install succeeded! 🚀
->>> To start BendSQL:
+>>> To start LakeSQL:
- bendsql --help
+ lakesql --help
->>> More information at https://github.com/databendlabs/bendsql
+>>> More information at https://github.com/tidbcloud/lakesql
```
### Homebrew (for macOS)
-BendSQL can be easily installed on macOS using Homebrew with a simple command:
+LakeSQL can be easily installed on macOS using Homebrew with a simple command:
```bash
-brew install databendcloud/homebrew-tap/bendsql
+brew install tidbcloud/homebrew-tap/lakesql
```
### Apt (for Ubuntu/Debian)
-On Ubuntu and Debian systems, BendSQL can be installed via the Apt package manager. Choose the appropriate instructions based on the distribution version.
+On Ubuntu and Debian systems, LakeSQL can be installed via the Apt package manager. Choose the appropriate instructions based on the distribution version.
#### DEB822-STYLE format (Ubuntu-22.04/Debian-12 and later)
```bash
-sudo curl -L -o /etc/apt/sources.list.d/databend.sources https://repo.databend.com/deb/databend.sources
+sudo curl -L -o /etc/apt/sources.list.d/tidbcloudlake.sources https://repo.tidbcloud.com/deb/tidbcloudlake.sources
```
#### Old format (Ubuntu-20.04/Debian-11 and earlier)
```bash
-sudo curl -L -o /usr/share/keyrings/databend-keyring.gpg https://repo.databend.com/deb/databend.gpg
-sudo curl -L -o /etc/apt/sources.list.d/databend.list https://repo.databend.com/deb/databend.list
+sudo curl -L -o /usr/share/keyrings/tidbcloudlake-keyring.gpg https://repo.tidbcloud.com/deb/tidbcloudlake.gpg
+sudo curl -L -o /etc/apt/sources.list.d/tidbcloudlake.list https://repo.tidbcloud.com/deb/tidbcloudlake.list
```
-Finally, update the package list and install BendSQL:
+Finally, update the package list and install LakeSQL:
```bash
sudo apt update
-sudo apt install bendsql
+sudo apt install lakesql
```
### Cargo (Rust Package Manager)
-To install BendSQL using Cargo, utilize the `cargo-binstall` tool or build from source using the provided command.
+To install LakeSQL using Cargo, use the `cargo-binstall` tool or build from source with the commands below.
> **Note:**
>
@@ -133,7 +133,7 @@ To install BendSQL using Cargo, utilize the `cargo-binstall` tool or build from
Please refer to [Cargo B(inary)Install - Installation](https://github.com/cargo-bins/cargo-binstall#installation) to install `cargo-binstall` and enable the `cargo binstall` subcommand.
```bash
-cargo binstall bendsql
+cargo binstall lakesql
```
**Building from Source**
@@ -141,20 +141,20 @@ cargo binstall bendsql
When building from source, some dependencies may involve compiling C/C++ code. Ensure that you have the GCC/G++ or Clang toolchain installed on your computer.
```bash
-cargo install bendsql
+cargo install lakesql
```
## User Authentication
For connections to {{{ .lake }}}, you can use the default `cloudapp` user or an SQL user created with the [CREATE USER](/tidb-cloud-lake/sql/create-user.md) command. Note that the user account you use to log in to the [{{{ .lake }}} console](https://app.lake.tidbcloud.com) cannot be used for connecting to {{{ .lake }}}.
-## Connecting with BendSQL
+## Connecting with LakeSQL
-BendSQL allows you to connect to both {{{ .lake }}} instances.
+LakeSQL allows you to connect to {{{ .lake }}} instances.
### Customize Connections with a DSN
-A DSN (Data Source Name) is a simple yet powerful way to configure and manage your {{{ .lake }}} connection in BendSQL using a single URI-style string. This method allows you to embed your credentials and connection settings directly into your environment, streamlining the connection process.
+A DSN (Data Source Name) is a simple yet powerful way to configure and manage your {{{ .lake }}} connection in LakeSQL using a single URI-style string. This method allows you to embed your credentials and connection settings directly into your environment, streamlining the connection process.
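
To make the anatomy of such a DSN concrete, here is a minimal Python sketch (the host and warehouse values are illustrative placeholders, not real endpoints) that splits a `lake://` URI into its parts with the standard library:

```python
from urllib.parse import urlsplit, parse_qs

# Illustrative DSN in the lake:// URI style (all values are placeholders).
dsn = (
    "lake://cloudapp:secret@gw.aws-us-east-2.example.tidbcloud.com:443"
    "/information_schema?warehouse=small-xy2t"
)

parts = urlsplit(dsn)           # scheme, credentials, host, port, path
params = parse_qs(parts.query)  # query arguments such as warehouse

print(parts.username)           # user to authenticate as
print(parts.hostname)           # gateway host
print(parts.port)               # 443
print(parts.path.lstrip("/"))   # default database
print(params["warehouse"][0])   # warehouse query argument
```

The DSN generated in the console (shown below) has the same shape, so the same decomposition applies to a real connection string.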
#### DSN Format and Parameters
@@ -208,16 +208,16 @@ The best practice for connecting to {{{ .lake }}} is to obtain your DSN from {{{
2. Select the database and warehouse you want to connect to.
-3. Your DSN will be automatically generated in the **Examples** section. Below the DSN, you'll find a BendSQL snippet that exports the DSN as an environment variable named `BENDSQL_DSN` and launches BendSQL with the correct configuration. You can copy and paste it directly into your terminal.
+3. Your DSN will be automatically generated in the **Examples** section. Below the DSN, you'll find a LakeSQL snippet that exports the DSN as an environment variable named `LAKESQL_DSN` and launches LakeSQL with the correct configuration. You can copy and paste it directly into your terminal.
```bash title='Example'
- export BENDSQL_DSN="lake://cloudapp:******@tn3ftqihs.gw.aws-us-east-2.default.tidbcloud.com:443/information_schema?warehouse=small-xy2t"
- bendsql
+ export LAKESQL_DSN="lake://cloudapp:******@tn3ftqihs.gw.aws-us-east-2.default.tidbcloud.com:443/information_schema?warehouse=small-xy2t"
+ lakesql
```
-## BendSQL Settings
+## LakeSQL Settings
-BendSQL provides a range of settings that allow you to define how query results are presented:
+LakeSQL provides a range of settings that allow you to define how query results are presented:
| Setting | Description |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -541,13 +541,13 @@ root@localhost:8000/default> SELECT 'Hello\nWorld' AS message;
1 row read in 0.067 sec. Processed 1 row, 1 B (14.87 rows/s, 14 B/s)
```
-### Configuring BendSQL Settings
+### Configuring LakeSQL Settings
-You have the following options to configure a BendSQL setting:
+You have the following options to configure a LakeSQL setting:
- Use the `!set <setting> <value>` command. For more information, see [Utility Commands](#utility-commands).
-- Add and configure a setting in the configuration file `~/.config/bendsql/config.toml`. To do so, open the file and add your setting under the `[settings]` section. The following example sets the `max_display_rows` to 10 and `max_width` to 100:
+- Add and configure a setting in the configuration file `~/.config/lakesql/config.toml`. To do so, open the file and add your setting under the `[settings]` section. The following example sets the `max_display_rows` to 10 and `max_width` to 100:
```toml title='Example:'
...
@@ -557,7 +557,7 @@ max_width = 100
...
```
-- Configure a setting at runtime by launching BendSQL and then specifying the setting in the format `.<setting> <value>`. Please note that settings configured in this way only take effect in the current session.
+- Configure a setting at runtime by launching LakeSQL and then specifying the setting in the format `.<setting> <value>`. Please note that settings configured in this way only take effect in the current session.
```shell title='Example:'
root@localhost:8000/default> .max_display_rows 10
@@ -566,25 +566,25 @@ root@localhost:8000/default> .max_width 100
## Utility Commands
-BendSQL provides users with a variety of commands to streamline their workflow and customize their experience. Here's an overview of the commands available in BendSQL:
+LakeSQL provides users with a variety of commands to streamline their workflow and customize their experience. Here's an overview of the commands available in LakeSQL:
| Command | Description |
| ------------------------ | ---------------------------------- |
-| `!exit` | Exits BendSQL. |
-| `!quit` | Exits BendSQL. |
-| `!configs` | Displays current BendSQL settings. |
-| `!set <setting> <value>` | Modifies a BendSQL setting. |
+| `!exit` | Exits LakeSQL. |
+| `!quit` | Exits LakeSQL. |
+| `!configs` | Displays current LakeSQL settings. |
+| `!set <setting> <value>` | Modifies a LakeSQL setting. |
| `!source <file>` | Executes a SQL file. |
For examples of each command, please refer to the reference information below:
### `!exit`
-Disconnects from {{{ .lake }}} and exits BendSQL.
+Disconnects from {{{ .lake }}} and exits LakeSQL.
```shell title='Example:'
-➜ ~ bendsql
-Welcome to BendSQL 0.17.0-homebrew.
+➜ ~ lakesql
+Welcome to LakeSQL 0.17.0-homebrew.
Connecting to localhost:8000 as user root.
Connected to {{{ .lake }}} Query v1.2.427-nightly-b1b622d406(rust-1.77.0-nightly-2024-04-20T22:12:35.318382488Z)
@@ -595,11 +595,11 @@ Bye~
### `!quit`
-Disconnects from {{{ .lake }}} and exits BendSQL.
+Disconnects from {{{ .lake }}} and exits LakeSQL.
```shell title='Example:'
-➜ ~ bendsql
-Welcome to BendSQL 0.17.0-homebrew.
+➜ ~ lakesql
+Welcome to LakeSQL 0.17.0-homebrew.
Connecting to localhost:8000 as user root.
Connected to {{{ .lake }}} Query v1.2.427-nightly-b1b622d406(rust-1.77.0-nightly-2024-04-20T22:12:35.318382488Z)
@@ -611,7 +611,7 @@ Bye~
### `!configs`
-Displays the current BendSQL settings.
+Displays the current LakeSQL settings.
```shell title='Example:'
// highlight-next-line
@@ -636,7 +636,7 @@ Settings {
### `!set <setting> <value>`
-Modifies a BendSQL setting.
+Modifies a LakeSQL setting.
```shell title='Example:'
root@localhost:8000/default> !set display_pretty_sql false
@@ -656,8 +656,8 @@ CREATE TABLE test_table (
INSERT INTO test_table (id, name) VALUES (1, 'Alice');
INSERT INTO test_table (id, name) VALUES (2, 'Bob');
INSERT INTO test_table (id, name) VALUES (3, 'Charlie');
-➜ ~ bendsql
-Welcome to BendSQL 0.17.0-homebrew.
+➜ ~ lakesql
+Welcome to LakeSQL 0.17.0-homebrew.
Connecting to localhost:8000 as user root.
Connected to {{{ .lake }}} Query v1.2.427-nightly-b1b622d406(rust-1.77.0-nightly-2024-04-20T22:12:35.318382488Z)
diff --git a/tidb-cloud-lake/guides/connect-using-node-js.md b/tidb-cloud-lake/guides/connect-using-node-js.md
index 3a5075f24a886..52126d20fb9db 100644
--- a/tidb-cloud-lake/guides/connect-using-node-js.md
+++ b/tidb-cloud-lake/guides/connect-using-node-js.md
@@ -10,7 +10,7 @@ The official Node.js driver provides TypeScript support and Promise-based API fo
## Installation
```bash
-npm install databend-driver
+npm install lake-driver
```
**Connection String**: See [driver overview](/tidb-cloud-lake/guides/driver-overview.md) for DSN format and examples.
@@ -54,7 +54,7 @@ npm install databend-driver
## Basic Usage
```javascript
-const { Client } = require('databend-driver');
+const { Client } = require('lake-driver');
// Connect to {{{ .lake }}}
const client = new Client('');
@@ -81,6 +81,6 @@ conn.close();
## Resources
-- **NPM Package**: [databend-driver](https://www.npmjs.com/package/databend-driver)
-- **GitHub Repository**: [databend-driver](https://github.com/databendlabs/bendsql/tree/main/bindings/nodejs)
+- **NPM Package**: [lake-driver](https://www.npmjs.com/package/lake-driver)
+- **GitHub Repository**: [lake-driver](https://github.com/tidbcloud/lakesql/tree/main/bindings/nodejs)
- **TypeScript Definitions**: Included in package
diff --git a/tidb-cloud-lake/guides/connect-using-python.md b/tidb-cloud-lake/guides/connect-using-python.md
index 2a350f6ec4687..18edb0ccd889e 100644
--- a/tidb-cloud-lake/guides/connect-using-python.md
+++ b/tidb-cloud-lake/guides/connect-using-python.md
@@ -13,14 +13,14 @@ Choose your preferred approach:
| Package | Best For | Installation |
|---------|----------|-------------|
-| **databend-driver** | Direct database operations, async/await | `pip install databend-driver` |
+| **lake-driver** | Direct database operations, async/await | `pip install lake-driver` |
| **databend-sqlalchemy** | ORM integration, existing SQLAlchemy apps | `pip install databend-sqlalchemy` |
**Connection String**: See [driver overview](/tidb-cloud-lake/guides/driver-overview.md) for DSN format and examples.
---
-## databend-driver (Recommended)
+## lake-driver (Recommended)
### Features
@@ -32,7 +32,7 @@ Choose your preferred approach:
### Synchronous Usage
```python
-from databend_driver import BlockingDatabendClient
+from tidbcloudlake_driver import BlockingDatabendClient
# Connect and execute
client = BlockingDatabendClient('')
@@ -53,7 +53,7 @@ cursor.execute("SELECT * FROM users")
print(f"Columns: {[desc[0] for desc in cursor.description]}")
for row in cursor.fetchall():
- # row is a databend_driver.Row object
+ # row is a tidbcloudlake_driver.Row object
# Access by column name
print(f"id: {row['id']}, name: {row['name']}")
@@ -84,7 +84,7 @@ for row in cursor.fetchall():
```python
import asyncio
-from databend_driver import AsyncDatabendClient
+from tidbcloudlake_driver import AsyncDatabendClient
async def main():
client = AsyncDatabendClient('')
@@ -153,5 +153,5 @@ with engine.connect() as conn:
## Resources
-- **PyPI**: [databend-driver](https://pypi.org/project/databend-driver/) • [databend-sqlalchemy](https://pypi.org/project/databend-sqlalchemy/)
-- **GitHub**: [databend-driver](https://github.com/databendlabs/bendsql/tree/main/bindings/python) • [databend-sqlalchemy](https://github.com/databendlabs/databend-sqlalchemy)
+- **PyPI**: [lake-driver](https://pypi.org/project/lake-driver/) • [databend-sqlalchemy](https://pypi.org/project/databend-sqlalchemy/)
+- **GitHub**: [lake-driver](https://github.com/tidbcloud/lakesql/tree/main/bindings/python) • [lake-sqlalchemy](https://github.com/tidbcloud/lake-sqlalchemy)
diff --git a/tidb-cloud-lake/guides/connect-using-rust.md b/tidb-cloud-lake/guides/connect-using-rust.md
index 91c08a6f86d61..751076ba9a651 100644
--- a/tidb-cloud-lake/guides/connect-using-rust.md
+++ b/tidb-cloud-lake/guides/connect-using-rust.md
@@ -13,7 +13,7 @@ Add the driver to your `Cargo.toml`:
```toml
[dependencies]
-databend-driver = "0.30"
+lake-driver = "0.30"
tokio = { version = "1", features = ["full"] }
```
@@ -71,7 +71,7 @@ tokio = { version = "1", features = ["full"] }
Here's a simple example demonstrating DDL, write, and query operations:
```rust
-use databend_driver::Client;
+use tidbcloudlake_driver::Client;
use tokio_stream::StreamExt;
#[tokio::main]
@@ -106,6 +106,6 @@ async fn main() -> Result<(), Box> {
## Resources
-- **Crates.io**: [databend-driver](https://crates.io/crates/databend-driver)
-- **GitHub Repository**: [BendSQL/driver](https://github.com/databendlabs/BendSQL/tree/main/driver)
-- **Rust Documentation**: [docs.rs/databend-driver](https://docs.rs/databend-driver)
+- **Crates.io**: [lake-driver](https://crates.io/crates/lake-driver)
+- **GitHub Repository**: [LakeSQL/driver](https://github.com/tidbcloud/lakesql/tree/main/driver)
+- **Rust Documentation**: [docs.rs/lake-driver](https://docs.rs/lake-driver)
diff --git a/tidb-cloud-lake/guides/connection-overview.md b/tidb-cloud-lake/guides/connection-overview.md
index fc0a80b804aba..922a39f96c4e6 100644
--- a/tidb-cloud-lake/guides/connection-overview.md
+++ b/tidb-cloud-lake/guides/connection-overview.md
@@ -11,7 +11,7 @@ summary: TiDB Cloud Lake supports multiple connection methods to suit different
| I want to... | Recommended |
|-------------|-------------|
-| Run SQL queries interactively | **BendSQL** (CLI) or **DBeaver** (GUI) |
+| Run SQL queries interactively | **LakeSQL** (CLI) or **DBeaver** (GUI) |
| Build an application | Language-specific **Driver** |
| Create dashboards & reports | **BI/Visualization Tools** |
@@ -29,7 +29,7 @@ summary: TiDB Cloud Lake supports multiple connection methods to suit different
| Tool | Type | Best For |
|------|------|----------|
-| [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md) | CLI | Developers, Scripting, Automation |
+| [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md) | CLI | Developers, Scripting, Automation |
| [DBeaver](/tidb-cloud-lake/guides/connect-using-dbeaver.md) | GUI | Data Analysis, Visual Query Building |
## Drivers
diff --git a/tidb-cloud-lake/guides/driver-overview.md b/tidb-cloud-lake/guides/driver-overview.md
index 5d78f425ae260..70754d0fe6f79 100644
--- a/tidb-cloud-lake/guides/driver-overview.md
+++ b/tidb-cloud-lake/guides/driver-overview.md
@@ -44,8 +44,8 @@ lake://user:pwd@host[:port]/[database][?sslmode=disable][&arg1=value1]
| Language | Package | Key Features |
| ----------------------- | ------------------------------------------- | ----------------------------------------------------------------------------- |
-| **[Python](/tidb-cloud-lake/guides/connect-using-python.md)** | `databend-driver`<br/>`databend-sqlalchemy` | • Sync/async support<br/>• SQLAlchemy dialect<br/>• PEP 249 compatible |
+| **[Python](/tidb-cloud-lake/guides/connect-using-python.md)** | `lake-driver`<br/>`databend-sqlalchemy` | • Sync/async support<br/>• SQLAlchemy dialect<br/>• PEP 249 compatible |
-| **[Go](/tidb-cloud-lake/guides/connect-using-golang.md)** | `databend-go` | • database/sql interface<br/>• Connection pooling<br/>• Bulk operations |
-| **[Node.js](/tidb-cloud-lake/guides/connect-using-node-js.md)** | `databend-driver` | • TypeScript support<br/>• Promise-based API<br/>• Streaming results |
-| **[Java](/tidb-cloud-lake/guides/connect-using-java.md)** | `databend-jdbc` | • JDBC 4.0 compatible<br/>• Connection pooling<br/>• Prepared statements |
-| **[Rust](/tidb-cloud-lake/guides/connect-using-rust.md)** | `databend-driver` | • Async/await support<br/>• Type-safe queries<br/>• Zero-copy deserialization |
+| **[Go](/tidb-cloud-lake/guides/connect-using-golang.md)** | `lake-go` | • database/sql interface<br/>• Connection pooling<br/>• Bulk operations |
+| **[Node.js](/tidb-cloud-lake/guides/connect-using-node-js.md)** | `lake-driver` | • TypeScript support<br/>• Promise-based API<br/>• Streaming results |
+| **[Java](/tidb-cloud-lake/guides/connect-using-java.md)** | `lake-jdbc` | • JDBC 4.0 compatible<br/>• Connection pooling<br/>• Prepared statements |
+| **[Rust](/tidb-cloud-lake/guides/connect-using-rust.md)** | `lake-driver` | • Async/await support<br/>• Type-safe queries<br/>• Zero-copy deserialization |
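
As a quick illustration of the `lake://user:pwd@host[:port]/[database][?arg1=value1]` grammar above, the following Python sketch (a purely hypothetical helper, not part of any driver API) assembles a DSN from its components:

```python
from urllib.parse import quote, urlencode

def build_dsn(user, pwd, host, database="", port=None, **args):
    """Assemble a lake:// DSN from its parts (illustrative helper only)."""
    # Percent-encode credentials so special characters survive the URI.
    netloc = f"{quote(user, safe='')}:{quote(pwd, safe='')}@{host}"
    if port is not None:
        netloc += f":{port}"
    query = f"?{urlencode(args)}" if args else ""
    return f"lake://{netloc}/{database}{query}"

print(build_dsn("user", "pwd", "host.example.com", "mydb",
                port=443, sslmode="disable"))
# lake://user:pwd@host.example.com:443/mydb?sslmode=disable
```

In practice you would copy the generated DSN from the console rather than build it by hand; the helper only shows how the pieces relate.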
diff --git a/tidb-cloud-lake/guides/editions.md b/tidb-cloud-lake/guides/editions.md
index 947d3e9dbd48a..75bd7d1a5813b 100644
--- a/tidb-cloud-lake/guides/editions.md
+++ b/tidb-cloud-lake/guides/editions.md
@@ -65,15 +65,15 @@ The following are feature lists of {{{ .lake }}} among editions:
| Iceberg tables for referencing data in a cloud storage data lake. | ✓ | ✓ | ✓ |
| Schema detection for automatically detecting the schema in a set of staged semi-structured data files and retrieving the column definitions. | ✓ | ✓ | ✓ |
| Schema evolution for automatically evolving tables to support the structure of new data received from the data sources. | ✓ | ✓ | ✓ |
-| Support for creating table with external location. | ✓ | ✓ | ✓ |
-| Supports for ATTACH TABLE. | ✓ | ✓ | ✓ |
+| Support for [creating table with external location](/tidb-cloud-lake/sql/create-external-table.md). | ✓ | ✓ | ✓ |
+| Support for [ATTACH TABLE](/tidb-cloud-lake/sql/attach-table.md). | ✓ | ✓ | ✓ |
### Interfaces & Tools
| Features | Personal | Business | Dedicated |
|----------|----------|----------|-----------|
| The next-generation SQL worksheet for advanced query development, data analysis, and visualization. | ✓ | ✓ | ✓ |
-| BendSQL, a command line client for building/testing queries, loading/unloading bulk data, and automating DDL operations. | ✓ | ✓ | ✓ |
+| LakeSQL, a command line client for building/testing queries, loading/unloading bulk data, and automating DDL operations. | ✓ | ✓ | ✓ |
| Programmatic interfaces for Rust, Python, Java, Node.js, PHP, and Go. | ✓ | ✓ | ✓ |
| Native support for JDBC. | ✓ | ✓ | ✓ |
| Extensive ecosystem for connecting to ETL, BI, and other third-party vendors and technologies. | ✓ | ✓ | ✓ |
diff --git a/tidb-cloud-lake/guides/load-from-files.md b/tidb-cloud-lake/guides/load-from-files.md
index a04845daf6e81..6a73c2da5b59e 100644
--- a/tidb-cloud-lake/guides/load-from-files.md
+++ b/tidb-cloud-lake/guides/load-from-files.md
@@ -27,5 +27,5 @@ Select the location of your files to find the recommended loading method:
|-------------|-----------------|-------------|---------------|
| **Staged Data Files** | **COPY INTO** | Fast, efficient loading from internal/external stages or user stage | [Loading from Stage](/tidb-cloud-lake/guides/load-from-stage.md) |
| **Cloud Storage** | **COPY INTO** | Load from Amazon S3, Google Cloud Storage, Microsoft Azure | [Loading from Bucket](/tidb-cloud-lake/guides/load-from-bucket.md) |
-| **Local Files** | [**BendSQL**](https://github.com/databendlabs/BendSQL) | Databend's native CLI tool for local file loading | [Loading from Local File](/tidb-cloud-lake/guides/load-from-local-file.md) |
+| **Local Files** | [**LakeSQL**](https://github.com/tidbcloud/lakesql) | {{{ .lake }}} native CLI tool for local file loading | [Loading from Local File](/tidb-cloud-lake/guides/load-from-local-file.md) |
| **Remote Files** | **COPY INTO** | Load data from remote HTTP/HTTPS locations | [Loading from Remote File](/tidb-cloud-lake/guides/load-from-remote-file.md) |
diff --git a/tidb-cloud-lake/guides/load-from-local-file.md b/tidb-cloud-lake/guides/load-from-local-file.md
index 0920ebf0ae439..974e6a6f5d7d2 100644
--- a/tidb-cloud-lake/guides/load-from-local-file.md
+++ b/tidb-cloud-lake/guides/load-from-local-file.md
@@ -1,11 +1,11 @@
---
title: Loading from Local File
-summary: Uploading your local data files to a stage or bucket before loading them into {{{ .lake }}} can be unnecessary. Instead, you can use BendSQL, the {{{ .lake }}} native CLI tool, to directly import the data. This simplifies the workflow and can save you storage fees.
+summary: Uploading your local data files to a stage or bucket before loading them into {{{ .lake }}} can be unnecessary. Instead, you can use LakeSQL, the {{{ .lake }}} native CLI tool, to directly import the data. This simplifies the workflow and can save you storage fees.
---
# Loading from Local File
-Uploading your local data files to a stage or bucket before loading them into {{{ .lake }}} can be unnecessary. Instead, you can use [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md), the {{{ .lake }}} native CLI tool, to directly import the data. This simplifies the workflow and can save you storage fees.
+You don't need to upload your local data files to a stage or bucket before loading them into {{{ .lake }}}. Instead, you can use [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md), the {{{ .lake }}} native CLI tool, to import the data directly. This simplifies the workflow and can save you storage fees.
Please note that the files must be in a format supported by {{{ .lake }}}; otherwise, the data cannot be imported. For more information on the file formats supported by {{{ .lake }}}, see [Input & Output File Formats](/tidb-cloud-lake/sql/input-output-file-formats.md).
@@ -20,7 +20,7 @@ There are two methods to load data from local files:
## Tutorial 1 - Load from a Local File
-This tutorial uses a CSV file as an example to demonstrate how to import data into {{{ .lake }}} using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md) from a local source.
+This tutorial uses a CSV file as an example to demonstrate how to import data into {{{ .lake }}} using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md) from a local source.
### Before You Begin
@@ -34,7 +34,7 @@ Readings in Database Systems,Michael Stonebraker,2004
### Step 1. Create Database and Table
```shell
-❯ bendsql
+❯ lakesql
root@localhost:8000/default> CREATE DATABASE book_db;
root@localhost:8000/default> USE book_db;
@@ -58,7 +58,7 @@ CREATE TABLE books (
Send loading data request with the following command:
```shell
-❯ bendsql --query='INSERT INTO book_db.books from @_databend_load file_format=(type=csv)' --data=@books.csv
+❯ lakesql --query='INSERT INTO book_db.books from @_databend_load file_format=(type=csv)' --data=@books.csv
```
- The `@_databend_load` is a placeholder representing local file data.
@@ -67,9 +67,9 @@ Send loading data request with the following command:
Alternatively, use a Python script:
```python
- import databend_driver
+ import tidbcloudlake_driver
dsn = "lake://root:@localhost:8000/?sslmode=disable"
- client = databend_driver.BlockingDatabendClient(dsn)
+ client = tidbcloudlake_driver.BlockingDatabendClient(dsn)
conn = client.get_conn()
query = "INSERT INTO book_db.books from @_databend_load file_format=(type=csv)"
progress = conn.load_file(query, "book.csv")
@@ -96,7 +96,7 @@ try (FileInputStream fileInputStream = new FileInputStream(new File("book.csv"))
> **Note:**
>
-> Be sure that you are able to connect to the backend object storage for {{{ .lake }}} from local BendSQL directly.
+> Be sure that you are able to connect to the backend object storage for {{{ .lake }}} from local LakeSQL directly.
> If not, you need to specify the `--set presigned_url_disabled=1` option to disable the presigned url feature.
### Step 3. Verify Loaded Data
@@ -147,7 +147,7 @@ CREATE TABLE bookcomments (
Send loading data request with the following command:
```shell
-❯ bendsql --query='INSERT INTO book_db.bookcomments(title,author,date) file_format=(type=csv)' --data=@books.csv
+❯ lakesql --query='INSERT INTO book_db.bookcomments(title,author,date) from @_databend_load file_format=(type=csv)' --data=@books.csv
```
Notice that the `query` part above specifies the columns (title, author, and date) to match the loaded data.
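
As a quick sanity check, you can query the table back and confirm that the three mapped columns were populated (a sketch using the tutorial's table):

```sql
SELECT title, author, date FROM book_db.bookcomments;
```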
diff --git a/tidb-cloud-lake/guides/load-from-stage.md b/tidb-cloud-lake/guides/load-from-stage.md
index ec3154fe87a95..93bf68f58a44d 100644
--- a/tidb-cloud-lake/guides/load-from-stage.md
+++ b/tidb-cloud-lake/guides/load-from-stage.md
@@ -1,11 +1,11 @@
---
title: Loading from Stage
-summary: "{{{ .lake }}} enables you to easily import data from files uploaded to either the user stage or an internal/external stage. To do so, you can first upload the files to a stage using BendSQL, and then employ the COPY INTO command to load the data from the staged file. Please note that the files must be in a format supported by {{{ .lake }}}, otherwise the data cannot be imported. For more information on the file formats supported by {{{ .lake }}}, see Input & Output File Formats."
+summary: "{{{ .lake }}} enables you to easily import data from files uploaded to either the user stage or an internal/external stage. To do so, first upload the files to a stage using LakeSQL, and then use the COPY INTO command to load the data from the staged file. Please note that the files must be in a format supported by {{{ .lake }}}; otherwise, the data cannot be imported. For more information on the file formats supported by {{{ .lake }}}, see Input & Output File Formats."
---
# Loading from Stage
-{{{ .lake }}} enables you to easily import data from files uploaded to either the user stage or an internal/external stage. To do so, you can first upload the files to a stage using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md), and then employ the [COPY INTO](/tidb-cloud-lake/sql/copy-into-table.md) command to load the data from the staged file. Please note that the files must be in a format supported by {{{ .lake }}}, otherwise the data cannot be imported. For more information on the file formats supported by {{{ .lake }}}, see [Input & Output File Formats](/tidb-cloud-lake/sql/input-output-file-formats.md).
+{{{ .lake }}} enables you to easily import data from files uploaded to either the user stage or an internal/external stage. To do so, first upload the files to a stage using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md), and then use the [COPY INTO](/tidb-cloud-lake/sql/copy-into-table.md) command to load the data from the staged file. Please note that the files must be in a format supported by {{{ .lake }}}; otherwise, the data cannot be imported. For more information on the file formats supported by {{{ .lake }}}, see [Input & Output File Formats](/tidb-cloud-lake/sql/input-output-file-formats.md).
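
The two-step workflow can be sketched end to end. The local path is illustrative, and a target `books` table is assumed to already exist:

```sql
-- 1. Upload the local file to the user stage (@~)
PUT fs:///Users/eric/Documents/books.parquet @~

-- 2. Load the staged file into the table
COPY INTO books FROM @~ FILES = ('books.parquet') FILE_FORMAT = (TYPE = PARQUET);
```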

@@ -40,7 +40,7 @@ Follow this tutorial to upload the sample file to the user stage and load data f
### Step 1: Upload Sample File
-1. Upload the sample file using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md):
+1. Upload the sample file using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md):
```sql
root@localhost:8000/default> PUT fs:///Users/eric/Documents/books.parquet @~
@@ -107,7 +107,7 @@ my_internal_stage|Internal | 0|'root'@'%'| |
### Step 2: Upload Sample File
-1. Upload the sample file using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md):
+1. Upload the sample file using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md):
```sql
root@localhost:8000/default> CREATE STAGE my_internal_stage;
@@ -187,7 +187,7 @@ my_external_stage|External | |'root'@'%'| |
### Step 2: Upload Sample File
-1. Upload the sample file using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md):
+1. Upload the sample file using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md):
```sql
root@localhost:8000/default> PUT fs:///Users/eric/Documents/books.parquet @my_external_stage
diff --git a/tidb-cloud-lake/guides/mindsdb.md b/tidb-cloud-lake/guides/mindsdb.md
index d884c09f4fd0d..a323f31dd0739 100644
--- a/tidb-cloud-lake/guides/mindsdb.md
+++ b/tidb-cloud-lake/guides/mindsdb.md
@@ -121,7 +121,7 @@ CREATE TABLE pollution_measurement(
PM25 double
);
-COPY INTO pollution_measurement FROM 'https://repo.databend.com/AirPolutionSeoul/Measurement_summary.csv' file_format=(type='CSV' skip_header=1);
+COPY INTO pollution_measurement FROM 'https://repo.tidbcloud.com/AirPolutionSeoul/Measurement_summary.csv' file_format=(type='CSV' skip_header=1);
```
### Step 2. Connect MindsDB to {{{ .lake }}}
diff --git a/tidb-cloud-lake/guides/stage-overview.md b/tidb-cloud-lake/guides/stage-overview.md
index 13fc4faf14d0f..11a65b8672c7b 100644
--- a/tidb-cloud-lake/guides/stage-overview.md
+++ b/tidb-cloud-lake/guides/stage-overview.md
@@ -7,7 +7,7 @@ summary: A stage is a virtual location where data files reside. Files in a stage
In {{{ .lake }}}, a stage is a virtual location where data files reside. Files in a stage can be queried directly or loaded into a table. Alternatively, you can unload data from a table into a stage as a file. The beauty of using a stage is that you can access it for data loading and unloading as conveniently as you would with folders on your computer. Just as when you put a file in a folder, you don't necessarily need to know its exact location on your hard disk. When accessing a file in a stage, you only need to specify the stage name and the file name, such as `@mystage/mydatafile.csv`, rather than specifying its location in the bucket of your object storage. Similar to folders on your computer, you can create as many stages as you need in {{{ .lake }}}. However, it's important to note that a stage cannot contain another stage. Each stage operates independently and does not encompass other stages.
-Utilizing a stage for loading data also improves the efficiency of uploading, managing, and filtering your data files. With [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md), you can easily upload or download files to or from a stage using a single command. When loading data into {{{ .lake }}}, you can directly specify a stage in the COPY INTO command, allowing the command to read and even filter data files from that stage. Similarly, when exporting data from {{{ .lake }}}, you can dump your data files into a stage.
+Utilizing a stage for loading data also improves the efficiency of uploading, managing, and filtering your data files. With [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md), you can easily upload or download files to or from a stage using a single command. When loading data into {{{ .lake }}}, you can directly specify a stage in the COPY INTO command, allowing the command to read and even filter data files from that stage. Similarly, when exporting data from {{{ .lake }}}, you can dump your data files into a stage.
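
The single-command upload and download mentioned above can be sketched as follows. The stage and path names are illustrative, and note that GET retrieves all files in the stage:

```sql
-- Upload a local file into a named stage
PUT fs:///path/to/mydatafile.csv @mystage

-- Download the stage's files into a local folder
GET @mystage/ fs:///path/to/downloads/
```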
## Stage Types
diff --git a/tidb-cloud-lake/guides/tableau.md b/tidb-cloud-lake/guides/tableau.md
index 7ca164c902f1b..a690b42b3b0f7 100644
--- a/tidb-cloud-lake/guides/tableau.md
+++ b/tidb-cloud-lake/guides/tableau.md
@@ -1,11 +1,11 @@
---
title: Tableau
-summary: Tableau is a visual analytics platform transforming the way we use data to solve problems—empowering people and organizations to make the most of their data. By leveraging the databend-jdbc driver (version 0.3.4 or higher), both {{{ .lake }}} and {{{ .lake }}} can integrate with Tableau, enabling seamless data access and efficient analysis. It is important to note that for optimal compatibility, it is advisable to use Tableau version 2022.3 or higher to avoid potential compatibility issues.
+summary: Tableau is a visual analytics platform transforming the way we use data to solve problems, empowering people and organizations to make the most of their data. By leveraging the lake-jdbc driver (version 0.3.4 or higher), {{{ .lake }}} can integrate with Tableau, enabling seamless data access and efficient analysis. For optimal compatibility, use Tableau version 2022.3 or higher.
---
# Tableau
-[Tableau](https://www.tableau.com/) is a visual analytics platform transforming the way we use data to solve problems—empowering people and organizations to make the most of their data. By leveraging the [databend-jdbc driver](https://github.com/databendcloud/databend-jdbc) (version 0.3.4 or higher), both {{{ .lake }}} and {{{ .lake }}} can integrate with Tableau, enabling seamless data access and efficient analysis. It is important to note that for optimal compatibility, it is advisable to use Tableau version 2022.3 or higher to avoid potential compatibility issues.
+[Tableau](https://www.tableau.com/) is a visual analytics platform transforming the way we use data to solve problems, empowering people and organizations to make the most of their data. By leveraging the [lake-jdbc driver](https://github.com/tidbcloud/lake-jdbc) (version 0.3.4 or higher), {{{ .lake }}} can integrate with Tableau, enabling seamless data access and efficient analysis. For optimal compatibility, use Tableau version 2022.3 or higher.
{{{ .lake }}} currently provides two integration methods with Tableau. The first approach utilizes the Other Databases (JDBC) interface within Tableau and is applicable to both {{{ .lake }}} and {{{ .lake }}}. The second method recommends using the [databend-tableau-connector-jdbc](https://github.com/databendcloud/databend-tableau-connector-jdbc) connector specifically developed by {{{ .lake }}} for optimal connectivity with {{{ .lake }}}.
@@ -28,11 +28,11 @@ CREATE USER tableau IDENTIFIED BY 'tableau' WITH DEFAULT_ROLE = 'tableau_role';
GRANT ROLE tableau_role TO tableau;
```
-### Step 2. Install databend-jdbc
+### Step 2. Install lake-jdbc
-1. Download the databend-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
+1. Download the lake-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
-2. To install the databend-jdbc driver, move the jar file (for example, databend-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
+2. To install the lake-jdbc driver, move the jar file (for example, lake-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
| Operating System | Tableau's Driver Folder |
| ---------------- | -------------------------------- |
@@ -75,11 +75,11 @@ CREATE USER tableau IDENTIFIED BY 'tableau' WITH DEFAULT_ROLE = 'tableau_role';
GRANT ROLE tableau_role TO tableau;
```
-### Step 2. Install databend-jdbc
+### Step 2. Install lake-jdbc
-1. Download the databend-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
+1. Download the lake-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
-2. To install the databend-jdbc driver, move the jar file (for example, databend-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
+2. To install the lake-jdbc driver, move the jar file (for example, lake-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
| Operating System | Tableau's Driver Folder |
| ---------------- | -------------------------------- |
@@ -119,11 +119,11 @@ In this tutorial, you'll integrate {{{ .lake }}} with [Tableau Desktop](https://
Obtain the connection information from {{{ .lake }}}. For how to do that, refer to [Connecting to a Warehouse](/tidb-cloud-lake/guides/warehouse.md#connecting-to-a-warehouse).
-### Step 2. Install databend-jdbc
+### Step 2. Install lake-jdbc
-1. Download the databend-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
+1. Download the lake-jdbc driver (version 0.3.4 or higher) from the Maven Central Repository at
-2. To install the databend-jdbc driver, move the jar file (for example, databend-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
+2. To install the lake-jdbc driver, move the jar file (for example, lake-jdbc-0.3.4.jar) to Tableau's driver folder. Tableau's driver folder varies depending on the operating system:
| Operating System | Tableau's Driver Folder |
| ---------------- | -------------------------------- |
diff --git a/tidb-cloud-lake/guides/upload-to-stage.md b/tidb-cloud-lake/guides/upload-to-stage.md
index d3a406221fb12..21bd3e1b2071a 100644
--- a/tidb-cloud-lake/guides/upload-to-stage.md
+++ b/tidb-cloud-lake/guides/upload-to-stage.md
@@ -11,10 +11,10 @@ summary: "{{{ .lake }}} recommends two file upload methods for stages PRESIGN an
The PRESIGN method generates a time-limited URL with a signature, which clients can use to securely initiate file uploads. This URL grants temporary access to the designated stage, allowing clients to directly transfer data without relying on {{{ .lake }}} servers for the entire process, enhancing both security and efficiency.
-If you're using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md) to manage files in a stage, you can use the PUT command for uploading files and the GET command for downloading files.
+If you're using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md) to manage files in a stage, you can use the PUT command for uploading files and the GET command for downloading files.
- The GET command currently can only download all files in a stage, not individual ones.
-- These commands are exclusive to BendSQL and the GET command will not function when {{{ .lake }}} uses the file system as the storage backend.
+- These commands are exclusive to LakeSQL, and the GET command will not function when {{{ .lake }}} uses the file system as the storage backend.
## Uploading with Presigned URL
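
A sketch of the presigned-URL flow: generate the URL on the server, then upload with any HTTP client. The stage and file names are illustrative, and `PRESIGN` follows the Databend-style command syntax:

```sql
-- Generate a time-limited URL for uploading to a staged file location
PRESIGN UPLOAD @my_internal_stage/books.parquet;
```

The returned URL can then be used directly, for example with `curl -X PUT -T books.parquet '<presigned-url>'`, so the file bytes never pass through the {{{ .lake }}} servers.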
@@ -159,7 +159,7 @@ Result:
### Uploading with PUT Command
-The following examples demonstrate how to use BendSQL to upload a sample file ([books.parquet](https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/data/books.parquet)) to the user stage, an internal stage, and an external stage with the PUT command.
+The following examples demonstrate how to use LakeSQL to upload a sample file ([books.parquet](https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/data/books.parquet)) to the user stage, an internal stage, and an external stage with the PUT command.
@@ -302,7 +302,7 @@ Result:
### Downloading with GET Command
-The following examples demonstrate how to use BendSQL to download a sample file ([books.parquet](https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/data/books.parquet)) from the user stage, an internal stage, and an external stage with the GET command.
+The following examples demonstrate how to use LakeSQL to download a sample file ([books.parquet](https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/data/books.parquet)) from the user stage, an internal stage, and an external stage with the GET command.
diff --git a/tidb-cloud-lake/guides/warehouse.md b/tidb-cloud-lake/guides/warehouse.md
index 00b0b772ae7fd..cf4dc425c468b 100644
--- a/tidb-cloud-lake/guides/warehouse.md
+++ b/tidb-cloud-lake/guides/warehouse.md
@@ -184,7 +184,7 @@ Connecting to a warehouse provides the compute resources required to run queries
| Client | Type | Best For | Key Features |
| ------------------------------------------ | --------------- | ----------------------------- | ----------------------------------------------------- |
-| **[BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md)** | Command Line | Developers, Scripts | Native CLI, Rich formatting, Multiple install options |
+| **[LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md)** | Command Line | Developers, Scripts | Native CLI, Rich formatting, Multiple install options |
| **[DBeaver](/tidb-cloud-lake/guides/connect-using-dbeaver.md)** | GUI Application | Data Analysis, Visual Queries | Built-in driver, Cross-platform, Query builder |
#### Developer Drivers
diff --git a/tidb-cloud-lake/sql/alter-user.md b/tidb-cloud-lake/sql/alter-user.md
index 41dc1affb8aa5..fe471853b2b60 100644
--- a/tidb-cloud-lake/sql/alter-user.md
+++ b/tidb-cloud-lake/sql/alter-user.md
@@ -128,7 +128,7 @@ ALTER USER user1 WITH UNSET NETWORK POLICY;
2. Verify the default role of user "user1" using the [SHOW ROLES](/tidb-cloud-lake/sql/show-roles.md) command:
```sql title='Connect as user "user1":'
-eric@Erics-iMac ~ % bendsql --user user1 --password abc123
+eric@Erics-iMac ~ % lakesql --user user1 --password abc123
show roles;
┌───────────────────────────────────────────────────────┐
│ name │ inherited_roles │ is_current │ is_default │
diff --git a/tidb-cloud-lake/sql/apache-icebergtm-tables.md b/tidb-cloud-lake/sql/apache-icebergtm-tables.md
index 04ede9eb4b3ce..bb868b6d9d17a 100644
--- a/tidb-cloud-lake/sql/apache-icebergtm-tables.md
+++ b/tidb-cloud-lake/sql/apache-icebergtm-tables.md
@@ -124,14 +124,14 @@ Once the TPC-H tables are loaded, you can query the data in {{{ .lake }}}:
datafuselabs/databend
```
-2. Connect to {{{ .lake }}} using BendSQL first, and then create an Iceberg catalog:
+2. Connect to {{{ .lake }}} using LakeSQL first, and then create an Iceberg catalog:
```bash
- bendsql
+ lakesql
```
```bash
- Welcome to BendSQL 0.24.1-f1f7de0(2024-12-04T12:31:18.526234000Z).
+ Welcome to LakeSQL 0.24.1-f1f7de0(2024-12-04T12:31:18.526234000Z).
Connecting to localhost:8000 as user root.
Connected to Databend Query v1.2.725-8d073f6b7a(rust-1.88.0-nightly-2025-04-21T11:49:03.577976082Z)
Loaded 1436 auto complete keywords from server.
diff --git a/tidb-cloud-lake/sql/create-temp-table.md b/tidb-cloud-lake/sql/create-temp-table.md
index cd855ffb33564..67c9f9cda41ea 100644
--- a/tidb-cloud-lake/sql/create-temp-table.md
+++ b/tidb-cloud-lake/sql/create-temp-table.md
@@ -17,7 +17,7 @@ Creates a temporary table that is automatically dropped at the end of the sessio
- A temporary table with the same name as a normal table takes precedence, hiding the normal table until dropped. See [Example-2](#example-2).
- No privileges are required to create or operate on a temporary table.
- {{{ .lake }}} supports creating temporary tables with the [Fuse Engine](/tidb-cloud-lake/sql/table-engines.md).
-- To create temporary tables using BendSQL, ensure you are using the latest version of BendSQL.
+- To create temporary tables using LakeSQL, ensure you are using the latest version of LakeSQL.
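
For instance, a minimal sketch (the table uses the default Fuse engine and is dropped automatically when the session ends):

```sql
CREATE TEMP TABLE session_scratch (id INT, note VARCHAR);
INSERT INTO session_scratch VALUES (1, 'visible only in this session');
```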
## Syntax
diff --git a/tidb-cloud-lake/sql/explain-analyze-graphical.md b/tidb-cloud-lake/sql/explain-analyze-graphical.md
index 250db8b3f8f58..c8ae308fbc352 100644
--- a/tidb-cloud-lake/sql/explain-analyze-graphical.md
+++ b/tidb-cloud-lake/sql/explain-analyze-graphical.md
@@ -1,6 +1,6 @@
---
title: EXPLAIN ANALYZE GRAPHICAL
-summary: Analyzes query performance with an interactive visual representation in your browser. Available exclusively in BendSQL v0.22.2+.
+summary: Analyzes query performance with an interactive visual representation in your browser. Available exclusively in LakeSQL v0.22.2+.
---
# EXPLAIN ANALYZE GRAPHICAL
@@ -9,7 +9,7 @@ summary: Analyzes query performance with an interactive visual representation in
>
> Introduced or updated in v1.2.647.
-Analyzes query performance with an interactive visual representation in your browser. Available exclusively in BendSQL v0.22.2+.
+Analyzes query performance with an interactive visual representation in your browser. Available exclusively in LakeSQL v0.22.2+.
## Syntax
@@ -19,7 +19,7 @@ EXPLAIN ANALYZE GRAPHICAL
## Configuration
-Add to your BendSQL config file `~/.config/bendsql/config.toml`:
+Add to your LakeSQL config file `~/.config/lakesql/config.toml`:
```toml
[server]
diff --git a/tidb-cloud-lake/sql/explain-commands.md b/tidb-cloud-lake/sql/explain-commands.md
index 9f131d31b022f..4f42562e671ed 100644
--- a/tidb-cloud-lake/sql/explain-commands.md
+++ b/tidb-cloud-lake/sql/explain-commands.md
@@ -13,8 +13,8 @@ This page provides reference information for the explain-related commands in {{{
|---------|----------|
| [`EXPLAIN`](/tidb-cloud-lake/sql/explain.md) | Understanding query structure and optimization |
| [`EXPLAIN ANALYZE`](/tidb-cloud-lake/sql/explain-analyze.md) | Performance analysis with runtime statistics |
-| [`EXPLAIN ANALYZE GRAPHICAL`](/tidb-cloud-lake/sql/explain-analyze-graphical.md) | Visual performance analysis (BendSQL only) |
+| [`EXPLAIN ANALYZE GRAPHICAL`](/tidb-cloud-lake/sql/explain-analyze-graphical.md) | Visual performance analysis (LakeSQL only) |
| [`EXPLAIN AST`](/tidb-cloud-lake/sql/explain-ast.md) | SQL parsing and syntax analysis |
-| [`EXPLAIN PERF`](/tidb-cloud-lake/sql/explain-perf.md) | Query performance profiling (BendSQL only) |
+| [`EXPLAIN PERF`](/tidb-cloud-lake/sql/explain-perf.md) | Query performance profiling (LakeSQL only) |
| [`EXPLAIN RAW`](/tidb-cloud-lake/sql/explain-raw.md) | Internal query processing analysis |
| [`EXPLAIN SYNTAX`](/tidb-cloud-lake/sql/explain-syntax.md) | SQL code formatting and standardization |
diff --git a/tidb-cloud-lake/sql/explain-perf.md b/tidb-cloud-lake/sql/explain-perf.md
index 0ae55e64ae394..ab6e91d37ec53 100644
--- a/tidb-cloud-lake/sql/explain-perf.md
+++ b/tidb-cloud-lake/sql/explain-perf.md
@@ -22,7 +22,7 @@ EXPLAIN PERF
## Examples
```shell
-bendsql --quote-style never --query="EXPLAIN PERF SELECT avg(number) FROM numbers(10000000)" > demo.html
+lakesql --quote-style never --query="EXPLAIN PERF SELECT avg(number) FROM numbers(10000000)" > demo.html
```
Then, you can open the `demo.html` file in your browser to view the flame graphs.
diff --git a/tidb-cloud-lake/sql/sql-variables.md b/tidb-cloud-lake/sql/sql-variables.md
index 3912b784da83a..06961ed24a1b7 100644
--- a/tidb-cloud-lake/sql/sql-variables.md
+++ b/tidb-cloud-lake/sql/sql-variables.md
@@ -38,7 +38,7 @@ SELECT * FROM sales WHERE amount > getvariable('threshold');
### Accessing Objects with `IDENTIFIER`
-The `IDENTIFIER` keyword lets you reference database objects whose names are stored in variables, enabling flexible query construction. (Note: BendSQL does not yet support `IDENTIFIER`.)
+The `IDENTIFIER` keyword lets you reference database objects whose names are stored in variables, enabling flexible query construction. (Note: LakeSQL does not yet support `IDENTIFIER`.)
```sql title='Example:'
-- Create a table with sales data
diff --git a/tidb-cloud-lake/sql/system-history-login-history.md b/tidb-cloud-lake/sql/system-history-login-history.md
index 31d92aaa7de9b..8a8662d744823 100644
--- a/tidb-cloud-lake/sql/system-history-login-history.md
+++ b/tidb-cloud-lake/sql/system-history-login-history.md
@@ -47,7 +47,7 @@ connection_uri: /session/login?disable_session_token=true
auth_type: Password
user_name: root
client_ip: 127.0.0.1
- user_agent: bendsql/0.26.2-unknown
+ user_agent: lakesql/0.26.2-unknown
session_id: 9a3ba9d8-44d9-49ca-9446-501deaca15c9
node_id: 765ChL6Ra949Ioeb5LrTs
error_message:
diff --git a/tidb-cloud-lake/sql/system-history-query-history.md b/tidb-cloud-lake/sql/system-history-query-history.md
index 2c763c43beb5f..5f2cef928a33b 100644
--- a/tidb-cloud-lake/sql/system-history-query-history.md
+++ b/tidb-cloud-lake/sql/system-history-query-history.md
@@ -121,7 +121,7 @@ written_io_bytes_cost_ms: 0
bytes_from_local_disk: 0
bytes_from_memory: 0
client_address: 127.0.0.1
- user_agent: bendsql/0.26.2-unknown
+ user_agent: lakesql/0.26.2-unknown
exception_code: 0
exception_text:
server_version: v1.2.753-nightly-c3d5fabb79(rust-1.88.0-nightly-2025-06-12T01:48:36.733925000Z)
diff --git a/tidb-cloud-lake/sql/system-query-log.md b/tidb-cloud-lake/sql/system-query-log.md
index 3082b711d3a40..7dbede7f18be6 100644
--- a/tidb-cloud-lake/sql/system-query-log.md
+++ b/tidb-cloud-lake/sql/system-query-log.md
@@ -95,7 +95,7 @@ written_io_bytes_cost_ms: 0
bytes_from_memory: 0
client_info:
client_address: 192.168.65.1
- user_agent: bendsql/0.24.1-f1f7de0
+ user_agent: lakesql/0.24.1-f1f7de0
exception_code: 0
exception_text:
stack_trace:
diff --git a/tidb-cloud-lake/tutorials/access-mysql-redis-via-dictionaries.md b/tidb-cloud-lake/tutorials/access-mysql-redis-via-dictionaries.md
index b4f57e6bdc2d5..fd7a89dad4a27 100644
--- a/tidb-cloud-lake/tutorials/access-mysql-redis-via-dictionaries.md
+++ b/tidb-cloud-lake/tutorials/access-mysql-redis-via-dictionaries.md
@@ -9,7 +9,7 @@ In this tutorial, we’ll guide you through accessing MySQL and Redis data using
## Before You Start
-Before you start, ensure that [Docker](https://www.docker.com/) is installed on your local machine. We need Docker to set up the necessary containers for {{{ .lake }}}, MySQL, and Redis. You will also need a SQL client to connect to MySQL; we recommend using [BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md) to connect to {{{ .lake }}}.
+Before you start, ensure that [Docker](https://www.docker.com/) is installed on your local machine. We need Docker to set up the necessary containers for {{{ .lake }}}, MySQL, and Redis. You will also need a SQL client to connect to MySQL; we recommend using [LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md) to connect to {{{ .lake }}}.
## Step 1: Setting up Environment
diff --git a/tidb-cloud-lake/tutorials/backup-restore-with-bendsave.md b/tidb-cloud-lake/tutorials/backup-restore-with-bendsave.md
index 441da07aa51cb..67637c9147250 100644
--- a/tidb-cloud-lake/tutorials/backup-restore-with-bendsave.md
+++ b/tidb-cloud-lake/tutorials/backup-restore-with-bendsave.md
@@ -14,16 +14,16 @@ Before you start, ensure you have the following prerequisites in place:
- A Linux machine (x86_64 or aarch64 architecture): In this tutorial, we'll deploy {{{ .lake }}} on a Linux machine. You can use a local machine, a virtual machine, or a cloud instance such as AWS EC2.
- [Docker](https://www.docker.com/): Used to deploy a local MinIO instance.
- [AWS CLI](https://aws.amazon.com/cli/): Used to manage buckets in MinIO.
- - If you are on AWS EC2, make sure your security group allows inbound traffic on port `8000`, as this is required for BendSQL to connect to {{{ .lake }}}.
+ - If you are on AWS EC2, make sure your security group allows inbound traffic on port `8000`, as this is required for LakeSQL to connect to {{{ .lake }}}.
-- BendSQL is installed on your local machine. See [Installing BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md#installing-bendsql) for instructions on how to install BendSQL using various package managers.
+- LakeSQL is installed on your local machine. See [Installing LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md#installing-lakesql) for instructions on how to install LakeSQL using various package managers.
- The {{{ .lake }}} release package: Download the release from the [{{{ .lake }}} GitHub Releases page](https://github.com/databendlabs/databend/releases). The package contains the `databend-bendsave` binary in the `bin` directory, which is the tool we'll use for backup and restore operations in this tutorial.
```bash
databend-v1.2.725-nightly-x86_64-unknown-linux-gnu/
├── bin
-│ ├── bendsql
+│ ├── lakesql
│ ├── databend-bendsave # The BendSave binary used in this tutorial
│ ├── databend-meta
│ ├── databend-metactl
@@ -115,10 +115,10 @@ aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://databend
curl -I http://127.0.0.1:8080/v1/health
```
-4. Connect to your {{{ .lake }}} instance from your local machine with BendSQL, then apply your {{{ .lake }}} Enterprise license, create a table, and insert some sample data.
+4. Connect to your {{{ .lake }}} instance from your local machine with LakeSQL, then apply your {{{ .lake }}} Enterprise license, create a table, and insert some sample data.
```bash
- bendsql -h
+ lakesql -h
```
```sql
@@ -185,7 +185,7 @@ aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://backupbucket/ --recursive
aws --endpoint-url http://127.0.0.1:9000 s3 rm s3://databend/ --recursive
```
-2. After the removal, you can verify using BendSQL that querying the table in {{{ .lake }}} fails:
+2. After the removal, you can verify using LakeSQL that querying the table in {{{ .lake }}} fails:
```sql
SELECT * FROM books;
@@ -221,7 +221,7 @@ aws --endpoint-url http://127.0.0.1:9000 s3 ls s3://backupbucket/ --recursive
2025-04-07 23:21:39 344781 databend_meta.db
```
-5. Query the table again using BendSQL, and you will see that the query now succeeds:
+5. Query the table again using LakeSQL, and you will see that the query now succeeds:
```sql
SELECT * FROM books;
diff --git a/tidb-cloud-lake/tutorials/data-sharing-via-attach-table.md b/tidb-cloud-lake/tutorials/data-sharing-via-attach-table.md
index f8078c029ab2f..33aa69475dd40 100644
--- a/tidb-cloud-lake/tutorials/data-sharing-via-attach-table.md
+++ b/tidb-cloud-lake/tutorials/data-sharing-via-attach-table.md
@@ -14,7 +14,7 @@ Before you start, ensure you have the following prerequisites in place:
- [Docker](https://www.docker.com/) is installed on your local machine, as it will be used to launch a self-hosted {{{ .lake }}}.
- An AWS S3 bucket used as storage for your self-hosted {{{ .lake }}}. [Learn how to create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).
- AWS Access Key ID and Secret Access Key with sufficient permissions for accessing your S3 bucket. [Manage your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
-- BendSQL is installed on your local machine. See [Installing BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md#installing-bendsql) for instructions on how to install BendSQL using various package managers.
+- LakeSQL is installed on your local machine. See [Installing LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md#installing-lakesql) for instructions on installing LakeSQL with various package managers.
## Step 1: Launch {{{ .lake }}} in Docker
@@ -60,7 +60,7 @@ SELECT snapshot_location FROM FUSE_SNAPSHOT('default', 'population');
## Step 2: Set Up Attached Tables in {{{ .lake }}}
-1. [Connect to {{{ .lake }}} using BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md).
+1. [Connect to {{{ .lake }}} using LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md).
2. Execute the following statements to create two attached tables:
- The first table, `population_all_columns`, includes all columns from the source data.
- The second table, `population_only`, includes only the selected columns (`city` & `population`).
diff --git a/tidb-cloud-lake/tutorials/ingest-kafka-with-bend-ingest.md b/tidb-cloud-lake/tutorials/ingest-kafka-with-bend-ingest.md
index ccb71c6aa0b24..75af16f185b82 100644
--- a/tidb-cloud-lake/tutorials/ingest-kafka-with-bend-ingest.md
+++ b/tidb-cloud-lake/tutorials/ingest-kafka-with-bend-ingest.md
@@ -99,10 +99,10 @@ CREATE TABLE databend_topic (
2024/08/20 15:10:15 ingest 2 rows (1.225576 rows/s), 75 bytes (45.959100 bytes/s)
```
-3. Connect to {{{ .lake }}} using BendSQL and verify that the data has been successfully loaded:
+3. Connect to {{{ .lake }}} using LakeSQL and verify that the data has been successfully loaded:
```bash
- Welcome to BendSQL 0.19.2-1e338e1(2024-07-17T09:02:28.323121000Z).
+ Welcome to LakeSQL 0.19.2-1e338e1(2024-07-17T09:02:28.323121000Z).
Connecting to tn3ftqihs--eric.gw.aws-us-east-2.default.tidbcloud.com:443 with warehouse eric as user cloudapp
Connected to Databend Query v1.2.626-nightly-a055124b65(rust-1.81.0-nightly-2024-08-27T15:49:08.376336236Z)
diff --git a/tidb-cloud-lake/tutorials/inspect-metadata.md b/tidb-cloud-lake/tutorials/inspect-metadata.md
index d59476f4fe7de..4076d30fe80b5 100644
--- a/tidb-cloud-lake/tutorials/inspect-metadata.md
+++ b/tidb-cloud-lake/tutorials/inspect-metadata.md
@@ -12,7 +12,7 @@ In this tutorial, we'll walk you through uploading a sample Parquet file to an i
Before you start, ensure you have the following prerequisites in place:
- [Download the sample dataset](https://datasets.databend.com/iris.parquet) and save it to your local folder.
-- BendSQL is installed on your local machine. See [Installing BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md#installing-bendsql) for instructions on how to install BendSQL using various package managers.
+- LakeSQL is installed on your local machine. See [Installing LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md#installing-lakesql) for instructions on installing LakeSQL with various package managers.
### Step 1: Create an internal stage
@@ -20,9 +20,9 @@ Before you start, ensure you have the following prerequisites in place:
CREATE STAGE my_internal_stage;
```
-### Step 2: Upload the sample file using BendSQL
+### Step 2: Upload the sample file using LakeSQL
-Assuming your sample dataset is located at `/Users/eric/Documents/iris.parquet`, run the following command in BendSQL to upload it to the stage:
+Assuming your sample dataset is located at `/Users/eric/Documents/iris.parquet`, run the following command in LakeSQL to upload it to the stage:
```sql
PUT fs:///Users/eric/Documents/iris.parquet @my_internal_stage;
diff --git a/tidb-cloud-lake/tutorials/migrate-from-mysql-with-bend-archiver.md b/tidb-cloud-lake/tutorials/migrate-from-mysql-with-bend-archiver.md
index 849e3d7b4d5f8..09dc5531c5a85 100644
--- a/tidb-cloud-lake/tutorials/migrate-from-mysql-with-bend-archiver.md
+++ b/tidb-cloud-lake/tutorials/migrate-from-mysql-with-bend-archiver.md
@@ -16,7 +16,7 @@ Before you start, ensure you have the following prerequisites in place:
- [Docker](https://www.docker.com/) is installed on your local machine, as it will be used to launch MySQL.
- [Go](https://go.dev/dl/) is installed on your local machine, as it is required to install bend-archiver.
-- BendSQL is installed on your local machine. See [Installing BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md#installing-bendsql) for instructions on how to install BendSQL using various package managers.
+- LakeSQL is installed on your local machine. See [Installing LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md#installing-lakesql) for instructions on installing LakeSQL with various package managers.
## Step 1: Launch MySQL in Docker
@@ -111,7 +111,7 @@ Bye
## Step 3: Set Up Target in {{{ .lake }}}
-1. Connect to {{{ .lake }}} using BendSQL. If you're unfamiliar with BendSQL, refer to this tutorial: [Connecting to {{{ .lake }}} using BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md).
+1. Connect to {{{ .lake }}} using LakeSQL. If you're unfamiliar with LakeSQL, see [Connecting to {{{ .lake }}} using LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md).
2. Copy and paste the following SQL to create a target table named **my_table**:
```sql
@@ -202,7 +202,7 @@ Download bend-archiver from the [release page](https://github.com/databendlabs/b
total time: 1.269478875s
```
-3. Return to your BendSQL session and verify the migration:
+3. Return to your LakeSQL session and verify the migration:
```sql
SELECT * FROM my_table;
diff --git a/tidb-cloud-lake/tutorials/migrate-from-mysql-with-flink-cdc.md b/tidb-cloud-lake/tutorials/migrate-from-mysql-with-flink-cdc.md
index 5a0644085866d..db7663757e6d1 100644
--- a/tidb-cloud-lake/tutorials/migrate-from-mysql-with-flink-cdc.md
+++ b/tidb-cloud-lake/tutorials/migrate-from-mysql-with-flink-cdc.md
@@ -15,7 +15,7 @@ Before you start, ensure you have the following prerequisites in place:
- [Docker](https://www.docker.com/) is installed on your local machine, as it will be used to launch MySQL.
- Java 8 or 11 is installed on your local machine, as it is required by the [Flink {{{ .lake }}} Connector](https://github.com/databendcloud/flink-connector-databend).
-- BendSQL is installed on your local machine. See [Installing BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md#installing-bendsql) for instructions on how to install BendSQL using various package managers.
+- LakeSQL is installed on your local machine. See [Installing LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md#installing-lakesql) for instructions on installing LakeSQL with various package managers.
## Step 1: Launch MySQL in Docker
@@ -137,7 +137,7 @@ mysql> select * from products;
## Step 3: Set Up Target in {{{ .lake }}}
-1. Connect to {{{ .lake }}} using BendSQL. If you're unfamiliar with BendSQL, refer to this tutorial: [Connecting to {{{ .lake }}} using BendSQL](/tidb-cloud-lake/guides/connect-using-bendsql.md).
+1. Connect to {{{ .lake }}} using LakeSQL. If you're unfamiliar with LakeSQL, see [Connecting to {{{ .lake }}} using LakeSQL](/tidb-cloud-lake/guides/connect-using-lakesql.md).
2. Copy and paste the following SQL to create a target table named **products**:
@@ -281,7 +281,7 @@ You can now open the Apache Flink Dashboard if you go to [http://localhost:8081]
You can now see a running job in the Apache Flink Dashboard.
- You're all set! If you go back to the BendSQL terminal and query the **products** table in {{{ .lake }}}, you will see that the data from MySQL has been successfully synchronized:
+ You're all set! If you go back to the LakeSQL terminal and query the **products** table in {{{ .lake }}}, you will see that the data from MySQL has been successfully synchronized:
```sql
SELECT * FROM products;
@@ -309,7 +309,7 @@ You can now open the Apache Flink Dashboard if you go to [http://localhost:8081]
INSERT INTO products VALUES (default, "bicycle", "Lightweight road bicycle");
```
-Next, in the BendSQL terminal, query the **products** table again to verify the new product has been synced:
+Next, in the LakeSQL terminal, query the **products** table again to verify the new product has been synced:
```sql
SELECT * FROM products;