
Commit 32da21c

mzitnik, ShimonSte, BentsiLeviav, and Shimon Steinitz authored
Update java client version (#428)
* Upgrade Java Client to V2 syncQuery & syncInsert
* Refactor to use the new client v2 API
* Add timeout to query operation
* Clean NodeClient
* Change binary reader
* Update client version
* Fix project to use snapshots
* Merge with main
* Run spotlessScalaApply and implement readAllBytes, since Java 8 does not support it
* Remove unneeded remarks
* Change to client version 0.9.3
* Update socket timeout in new client
* Change max connections to 20
* Set ConnectTimeout to 1200000
* Add 3 sec to sleep
* Set a new setConnectionRequestTimeout for experiment
* spotlessScalaApply fix
* Fix/json reader fixedstring v2 (#448)

  * Wake up ClickHouse Cloud instance before tests (#429)
  * fix: Handle FixedString as plain text in JSON reader for all Spark versions

    Problem: ClickHouse returns FixedString as plain text in JSON format, but the connector was trying to decode it as Base64, causing InvalidFormatException.

    Solution: Use pattern matching with a guard to check whether the JSON node is textual (see the sketch after this message):
    - If textual (FixedString): decode as UTF-8 bytes
    - If not textual (true binary): decode as Base64

    Applied to Spark 3.3, 3.4, and 3.5.

  Co-authored-by: Bentsi Leviav <bentsi.leviav@clickhouse.com>
  Co-authored-by: Shimon Steinitz <shimonsteinitz@Shimons-MacBook-Pro.local>

* Added reader and writer tests (#449)

  * Wake up ClickHouse Cloud instance before tests (#429)
  * feat: Add comprehensive read test coverage for Spark 3.3, 3.4, and 3.5

    Add shared test trait ClickHouseReaderTestBase with 48 test scenarios covering:
    - All primitive types (Boolean, Byte, Short, Int, Long, Float, Double)
    - Large integers (UInt64, Int128, UInt128, Int256, UInt256)
    - Decimals (Decimal32, Decimal64, Decimal128)
    - Date/Time types (Date, Date32, DateTime, DateTime32, DateTime64)
    - String types (String, UUID, FixedString)
    - Enums (Enum8, Enum16)
    - IP addresses (IPv4, IPv6)
    - JSON data
    - Collections (Arrays, Maps)
    - Edge cases (empty strings, long strings, empty arrays, nullable variants)

    Test suites for Binary and JSON read formats.

    Test results: 96 tests per Spark version (288 total)
    - Binary format: 47/48 passing
    - JSON format: 47/48 passing
    - Overall: 94/96 passing per version (98% pass rate)

    Remaining failures are known bugs with fixes on separate branches.

  * feat: Add comprehensive write test coverage for Spark 3.3, 3.4, and 3.5

    Add shared test trait ClickHouseWriterTestBase with 17 test scenarios covering:
    - Primitive types (Boolean, Byte, Short, Int, Long, Float, Double)
    - Decimal types
    - String types (regular and empty strings)
    - Date and Timestamp types
    - Collections (Arrays and Maps, including empty variants)
    - Nullable variants

    Test suites for JSON and Arrow write formats. Note: the Binary write format is not supported (only JSON and Arrow).

    Test results: 34 tests per Spark version (102 total)
    - JSON format: 17/17 passing (100%)
    - Arrow format: 17/17 passing (100%)
    - Overall: 34/34 passing per version (100% pass rate)

    Known behavior: Boolean values write as BooleanType but read back as ShortType (0/1) because ClickHouse stores Boolean as UInt8.

  * style: Apply spotless formatting
  * style: Apply spotless formatting for Spark 3.3 and 3.4

    Remove trailing whitespace from test files to pass CI spotless checks.

  * fix: Change write format from binary to arrow in BinaryReaderSuite

    The 'binary' write format option doesn't exist. Changed to 'arrow', which is a valid write format option. Applied to Spark 3.3, 3.4, and 3.5.

  * test: Add nullable tests for ShortType, IntegerType, and LongType

    Added missing nullable variant tests to ensure comprehensive coverage:
    - decode ShortType - nullable with null values (Nullable(Int16))
    - decode IntegerType - nullable with null values (Nullable(Int32))
    - decode LongType - nullable with null values (Nullable(Int64))

    These tests verify that nullable primitive types correctly handle NULL values in both Binary and JSON read formats. Applied to Spark 3.3, 3.4, and 3.5.

    Total tests per Spark version: 51 (was 48)
    Total across all versions: 153 (was 144)

  * Refactor ClickHouseReaderTestBase: Add nullable tests and organize alphabetically
    - Add missing nullable test cases for: Date32, Decimal32, Decimal128, UInt16, UUID, DateTime64
    - Organize all 69 tests alphabetically by data type for better maintainability
    - Ensure comprehensive coverage with both nullable and non-nullable variants for all data types
    - Apply changes consistently across Spark 3.3, 3.4, and 3.5

  * ci: Skip cloud tests on forks where secrets are unavailable

    Add a repository check to the cloud workflow to prevent failures on forks that don't have access to ClickHouse Cloud secrets. Tests will still run on the main repository where secrets are properly configured.

  * Refactor and enhance Reader/Writer tests for all Spark versions
    - Add BooleanType tests to Reader (2 tests) with format-aware assertions
    - Add 6 new tests to Writer: nested arrays, arrays with nullable elements, multiple Decimal precisions (18,4 and 38,10), Map with nullable values, and StructType
    - Reorder all tests lexicographically for better organization
    - Writer tests increased from 17 to 33 tests
    - Reader tests increased from 69 to 71 tests
    - Remove section header comments for cleaner code
    - Apply changes to all Spark versions: 3.3, 3.4, and 3.5
    - All tests now properly sorted alphabetically by data type and variant

  * style: Apply spotless formatting to Reader/Writer tests

  Co-authored-by: Bentsi Leviav <bentsi.leviav@clickhouse.com>
  Co-authored-by: Shimon Steinitz <shimon.steinitz@clickhouse.com>

* Fix BinaryReader to handle new Java client types
  - Fix DecimalType: Handle both BigInteger (Int256/UInt256) and BigDecimal (Decimal types); see the sketch below
  - Fix ArrayType: Direct call to BinaryStreamReader.ArrayValue.getArrayOfObjects()
  - Fix StringType: Handle UUID, InetAddress, and EnumValue types
  - Fix DateType: Handle both LocalDate and ZonedDateTime
  - Fix MapType: Handle all util.Map implementations

  Removed reflection and defensive pattern matching for better performance. All 34 Binary Reader test failures are now fixed (71/71 tests passing). Fixes compatibility with the new Java client API in the update-java-client-version branch.

* Add high-precision decimal tests with tolerance
  - Add Decimal(18,4) test with 0.001 tolerance for JSON/Arrow formats
  - Documents precision limitation for decimals with >15-17 significant digits
  - Uses tolerance-based assertions to account for observed precision loss
  - Binary format preserves full precision (already tested in the Binary Reader suite)
  - All 278 tests passing

* Simplify build-and-test workflow trigger to run on all pushes
* Fix Scala 2.13 compatibility for nested arrays
  - Convert mutable.ArraySeq to Array in ClickHouseJsonReader to ensure immutable collections
  - Add test workaround for Spark's Row.getSeq behavior in Scala 2.13
  - Fix Spotless formatting: remove trailing whitespace in ClickHouseBinaryReader
  - Applied to all Spark versions: 3.3, 3.4, 3.5

* Update java version to 0.9.4
* Enable compression
* Add logging to TPCDSClusterSuite & change client buffers
* Change InputStream read code
* Remove hard-coded settings for experiments
* Clean log from insert method

Co-authored-by: Shimon Steinitz <shimonste@gmail.com>
Co-authored-by: Bentsi Leviav <bentsi.leviav@clickhouse.com>
Co-authored-by: Shimon Steinitz <shimonsteinitz@Shimons-MacBook-Pro.local>
Co-authored-by: Shimon Steinitz <shimon.steinitz@clickhouse.com>
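
The FixedString fix described in the message above reduces to a textual-vs-binary guard on the Jackson node. A minimal sketch of that decode step follows; decodeBinaryNode is an illustrative name, not the connector's actual method, and the fallback branch simply mirrors the Base64 behaviour the message describes.

import com.fasterxml.jackson.databind.JsonNode

import java.nio.charset.StandardCharsets
import java.util.Base64

// Sketch of the guard from the commit message: a textual node is a
// FixedString rendered as plain text, so take its UTF-8 bytes directly;
// anything else is assumed to carry Base64-encoded binary content.
def decodeBinaryNode(node: JsonNode): Array[Byte] = node match {
  case n if n.isTextual => n.asText.getBytes(StandardCharsets.UTF_8)
  case n                => Base64.getDecoder.decode(n.asText)
}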
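
Similarly, the BinaryReader DecimalType fix amounts to accepting either numeric representation the new client can hand back. A hedged sketch (toSparkDecimal is hypothetical; the actual reader does this inside its row decoder):

import java.math.{BigDecimal => JBigDecimal, BigInteger}

import org.apache.spark.sql.types.Decimal

// Sketch only: Int256/UInt256 columns arrive as BigInteger, while
// Decimal32/64/128 arrive as BigDecimal; both map onto Spark's Decimal.
def toSparkDecimal(value: Any): Decimal = value match {
  case i: BigInteger  => Decimal(new JBigDecimal(i))
  case d: JBigDecimal => Decimal(d)
  case other          => throw new IllegalArgumentException(s"unexpected decimal value: $other")
}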
1 parent c05885e commit 32da21c

File tree: 39 files changed, +7043 −332 lines changed

.github/workflows/build-and-test.yml

Lines changed: 1 addition & 10 deletions

@@ -13,16 +13,7 @@
 #
 
 name: "Build and Test"
-on:
-  push:
-    branches:
-      - "branch-*"
-      - "main"
-  pull_request:
-    branches:
-      - "branch-*"
-      - "main"
-  workflow_dispatch:
+on: [push]
 
 jobs:
   run-tests:

.github/workflows/cloud.yml

Lines changed: 2 additions & 0 deletions

@@ -28,6 +28,8 @@ on:
 jobs:
   run-tests-with-clickhouse-cloud:
     runs-on: ubuntu-22.04
+    # Only run on main repository where secrets are available
+    if: github.repository == 'ClickHouse/spark-clickhouse-connector'
     strategy:
       max-parallel: 1
       fail-fast: false

build.gradle

Lines changed: 12 additions & 2 deletions

@@ -90,7 +90,16 @@ allprojects {
   version = getProjectVersion()
 
   repositories {
-    maven { url = "$mavenCentralMirror" }
+    maven {
+      url = "$mavenCentralMirror"
+    }
+
+    maven {
+      url = "$mavenSnapshotsRepo"
+      mavenContent {
+        snapshotsOnly()
+      }
+    }
   }
 }
 
@@ -218,7 +227,7 @@ project(':clickhouse-core') {
   api "com.fasterxml.jackson.datatype:jackson-datatype-jsr310:$jackson_version"
   api "com.fasterxml.jackson.module:jackson-module-scala_$scala_binary_version:$jackson_version"
 
-  api("com.clickhouse:clickhouse-jdbc:$clickhouse_jdbc_version:all") { transitive = false }
+  api("com.clickhouse:client-v2:${clickhouse_client_v2_version}:all") { transitive = false }
 
   compileOnly "jakarta.annotation:jakarta.annotation-api:$jakarta_annotation_api_version"
 
@@ -239,6 +248,7 @@ project(":clickhouse-core-it") {
   testImplementation(testFixtures(project(":clickhouse-core")))
 
   testImplementation("com.clickhouse:clickhouse-jdbc:$clickhouse_jdbc_version:all") { transitive = false }
+
   testImplementation "org.slf4j:slf4j-log4j12:$slf4j_version"
 }

clickhouse-core/src/main/scala/com/clickhouse/spark/client/NodeClient.scala

Lines changed: 69 additions & 57 deletions

@@ -14,10 +14,16 @@
 
 package com.clickhouse.spark.client
 
-import com.clickhouse.spark.Logging
 import com.clickhouse.client._
-import com.clickhouse.client.config.ClickHouseClientOption
-import com.clickhouse.data.{ClickHouseCompression, ClickHouseFormat}
+import com.clickhouse.client.api.{Client, ServerException}
+import com.clickhouse.client.api.enums.Protocol
+import com.clickhouse.client.api.insert.{InsertResponse, InsertSettings}
+import com.clickhouse.client.api.query.{QueryResponse, QuerySettings}
+import com.clickhouse.data.ClickHouseFormat
+import com.clickhouse.shaded.org.apache.commons.io.IOUtils
+import com.clickhouse.spark.Logging
+
+import java.util.concurrent.TimeUnit
 import com.clickhouse.spark.exception.{CHClientException, CHException, CHServerException}
 import com.clickhouse.spark.format.{
   JSONCompactEachRowWithNamesAndTypesSimpleOutput,
@@ -30,7 +36,8 @@ import com.clickhouse.spark.spec.NodeSpec
 import com.fasterxml.jackson.databind.JsonNode
 import com.fasterxml.jackson.databind.node.ObjectNode
 
-import java.io.InputStream
+import java.io.{ByteArrayInputStream, InputStream}
+import java.time.temporal.ChronoUnit
 import java.util.UUID
 import scala.util.{Failure, Success, Try}
 
@@ -40,7 +47,7 @@ object NodeClient {
 
 class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
   // TODO: add configurable timeout
-  private val timeout: Int = 30000
+  private val timeout: Int = 60000
 
   private lazy val userAgent: String = {
     val title = getClass.getPackage.getImplementationTitle
@@ -78,24 +85,26 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
   private def shouldInferRuntime(): Boolean =
     nodeSpec.infer_runtime_env.equalsIgnoreCase("true") || nodeSpec.infer_runtime_env == "1"
 
-  private val node: ClickHouseNode = ClickHouseNode.builder()
-    .options(nodeSpec.options)
-    .host(nodeSpec.host)
-    .port(nodeSpec.protocol, nodeSpec.port)
-    .database(nodeSpec.database)
-    .credentials(ClickHouseCredentials.fromUserAndPassword(nodeSpec.username, nodeSpec.password))
-    .build()
+  private def createClickHouseURL(nodeSpec: NodeSpec): String = {
+    val ssl: Boolean = nodeSpec.options.getOrDefault("ssl", "false").toBoolean
+    if (ssl) {
+      s"https://${nodeSpec.host}:${nodeSpec.port}"
+    } else {
+      s"http://${nodeSpec.host}:${nodeSpec.port}"
+    }
+  }
 
-  private val client: ClickHouseClient = ClickHouseClient.builder()
-    .option(ClickHouseClientOption.FORMAT, ClickHouseFormat.RowBinary)
-    .option(
-      ClickHouseClientOption.PRODUCT_NAME,
-      userAgent
-    )
-    .nodeSelector(ClickHouseNodeSelector.of(node.getProtocol))
+  private val client = new Client.Builder()
+    .setUsername(nodeSpec.username)
+    .setPassword(nodeSpec.password)
+    .setDefaultDatabase(nodeSpec.database)
+    .setOptions(nodeSpec.options)
+    .setClientName(userAgent)
+    .addEndpoint(createClickHouseURL(nodeSpec))
    .build()
 
-  override def close(): Unit = client.close()
+  override def close(): Unit =
+    client.close()
 
   private def nextQueryId(): String = UUID.randomUUID.toString
 
@@ -119,15 +128,13 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
     database: String,
     table: String,
     inputFormat: String,
-    inputCompressionType: ClickHouseCompression = ClickHouseCompression.NONE,
     data: InputStream,
     settings: Map[String, String] = Map.empty
   ): Either[CHException, SimpleOutput[ObjectNode]] =
     syncInsert(
      database,
      table,
      inputFormat,
-      inputCompressionType,
      data,
      "JSONEachRow",
      JSONEachRowSimpleOutput.deserialize,
@@ -149,24 +156,32 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
    database: String,
    table: String,
    inputFormat: String,
-    inputCompressionType: ClickHouseCompression,
    data: InputStream,
    outputFormat: String,
    deserializer: InputStream => SimpleOutput[OUT],
    settings: Map[String, String]
  ): Either[CHException, SimpleOutput[OUT]] = {
+    def readAllBytes(inputStream: InputStream): Array[Byte] =
+      IOUtils.toByteArray(inputStream)
    val queryId = nextQueryId()
    val sql = s"INSERT INTO `$database`.`$table` FORMAT $inputFormat"
    onExecuteQuery(queryId, sql)
-    val req = client.write(node)
-      .query(sql, queryId)
-      .decompressClientRequest(inputCompressionType)
-      .format(ClickHouseFormat.valueOf(outputFormat))
-    settings.foreach { case (k, v) => req.set(k, v) }
-    Try(req.data(data).executeAndWait()) match {
-      case Success(resp) => Right(deserializer(resp.getInputStream))
-      case Failure(ex: ClickHouseException) =>
-        Left(CHServerException(ex.getErrorCode, ex.getMessage, Some(nodeSpec), Some(ex)))
+    val insertSettings: InsertSettings = new InsertSettings();
+    settings.foreach { case (k, v) => insertSettings.setOption(k, v) }
+    insertSettings.setDatabase(database)
+    // TODO: check what type of compression is supported by the client v2
+    insertSettings.compressClientRequest(true)
+    val payload: Array[Byte] = readAllBytes(data)
+    val is: InputStream = new ByteArrayInputStream("".getBytes())
+    Try(client.insert(
+      table,
+      new ByteArrayInputStream(payload),
+      ClickHouseFormat.valueOf(inputFormat),
+      insertSettings
+    ).get()) match {
+      case Success(resp: InsertResponse) => Right(deserializer(is))
+      case Failure(se: ServerException) =>
+        Left(CHServerException(se.getCode, se.getMessage, Some(nodeSpec), Some(se)))
      case Failure(ex) => Left(CHClientException(ex.getMessage, Some(nodeSpec), Some(ex)))
    }
  }
@@ -179,16 +194,15 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
  ): Either[CHException, SimpleOutput[OUT]] = {
    val queryId = nextQueryId()
    onExecuteQuery(queryId, sql)
-    val req = client.read(node)
-      .query(sql, queryId).asInstanceOf[ClickHouseRequest[_]]
-      .format(ClickHouseFormat.valueOf(outputFormat)).asInstanceOf[ClickHouseRequest[_]]
-      .option(ClickHouseClientOption.CONNECTION_TIMEOUT, timeout).asInstanceOf[ClickHouseRequest[_]]
-    settings.foreach { case (k, v) => req.set(k, v).asInstanceOf[ClickHouseRequest[_]] }
-    Try(req.executeAndWait()) match {
-      case Success(resp) => Right(deserializer(resp.getInputStream))
-      case Failure(ex: ClickHouseException) =>
-        Left(CHServerException(ex.getErrorCode, ex.getMessage, Some(nodeSpec), Some(ex)))
-      case Failure(ex) => Left(CHClientException(ex.getMessage, Some(nodeSpec), Some(ex)))
+    val querySettings: QuerySettings = new QuerySettings()
+    val clickHouseFormat = ClickHouseFormat.valueOf(outputFormat)
+    querySettings.setFormat(clickHouseFormat)
+    querySettings.setQueryId(queryId)
+    settings.foreach { case (k, v) => querySettings.setOption(k, v) }
+    Try(client.query(sql, querySettings).get(timeout, TimeUnit.MILLISECONDS)) match {
+      case Success(response: QueryResponse) => Right(deserializer(response.getInputStream))
+      case Failure(se: ServerException) => Left(CHServerException(se.getCode, se.getMessage, Some(nodeSpec), Some(se)))
+      case Failure(ex: Exception) => Left(CHClientException(ex.getMessage, Some(nodeSpec), Some(ex)))
    }
  }
 
@@ -203,28 +217,26 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
  }
 
  // //////////////////////////////////////////////////////////////////////////////
-  // ///////////////////////// ret ClickHouseResponse /////////////////////////////
+  // ///////////////////////// ret QueryResponse /////////////////////////////
  // //////////////////////////////////////////////////////////////////////////////
 
  def queryAndCheck(
    sql: String,
    outputFormat: String,
-    outputCompressionType: ClickHouseCompression,
    settings: Map[String, String] = Map.empty
-  ): ClickHouseResponse = {
+  ): QueryResponse = {
    val queryId = nextQueryId()
    onExecuteQuery(queryId, sql)
-    val req = client.read(node)
-      .query(sql, queryId).asInstanceOf[ClickHouseRequest[_]]
-      .compressServerResponse(outputCompressionType).asInstanceOf[ClickHouseRequest[_]]
-      .format(ClickHouseFormat.valueOf(outputFormat)).asInstanceOf[ClickHouseRequest[_]]
-      .option(ClickHouseClientOption.CONNECTION_TIMEOUT, timeout).asInstanceOf[ClickHouseRequest[_]]
-    settings.foreach { case (k, v) => req.set(k, v).asInstanceOf[ClickHouseRequest[_]] }
-    Try(req.executeAndWait()) match {
-      case Success(resp) => resp
-      case Failure(ex: ClickHouseException) =>
-        throw CHServerException(ex.getErrorCode, ex.getMessage, Some(nodeSpec), Some(ex))
-      case Failure(ex) => throw CHClientException(ex.getMessage, Some(nodeSpec), Some(ex))
+    val querySettings: QuerySettings = new QuerySettings()
+    val clickHouseFormat = ClickHouseFormat.valueOf(outputFormat)
+    querySettings.setFormat(clickHouseFormat)
+    querySettings.setQueryId(queryId)
+    settings.foreach { case (k, v) => querySettings.setOption(k, v) }
+
+    Try(client.query(sql, querySettings).get(timeout, TimeUnit.MILLISECONDS)) match {
+      case Success(response: QueryResponse) => response
+      case Failure(se: ServerException) => throw CHServerException(se.getCode, se.getMessage, Some(nodeSpec), Some(se))
+      case Failure(ex: Exception) => throw CHClientException(ex.getMessage, Some(nodeSpec), Some(ex))
    }
  }
 
@@ -238,5 +250,5 @@ class NodeClient(val nodeSpec: NodeSpec) extends AutoCloseable with Logging {
      |""".stripMargin
  )
  def ping(timeout: Int = timeout) =
-    client.ping(node, timeout)
+    client.ping(timeout)
 }
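
For context on the readAllBytes helper introduced in the diff above: InputStream.readAllBytes only exists since Java 9, which is why the change delegates to the shaded commons-io IOUtils. A dependency-free Java 8 equivalent would look roughly like this (a sketch, not the code the commit uses; readAllBytesJava8 is an illustrative name):

import java.io.{ByteArrayOutputStream, InputStream}

// Drain an InputStream into a byte array without Java 9+ APIs
// (illustrative alternative to IOUtils.toByteArray).
def readAllBytesJava8(in: InputStream): Array[Byte] = {
  val out = new ByteArrayOutputStream()
  val buf = new Array[Byte](8192)
  var n = in.read(buf)
  while (n != -1) {
    out.write(buf, 0, n)
    n = in.read(buf)
  }
  out.toByteArray
}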

gradle.properties

Lines changed: 3 additions & 2 deletions

@@ -13,7 +13,7 @@
 #
 
 mavenCentralMirror=https://repo1.maven.org/maven2/
-mavenSnapshotsRepo=https://s01.oss.sonatype.org/content/repositories/snapshots/
+mavenSnapshotsRepo=https://central.sonatype.com/repository/maven-snapshots/
 mavenReleasesRepo=https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/
 
 systemProp.scala_binary_version=2.12
@@ -23,7 +23,8 @@ systemProp.known_spark_binary_versions=3.3,3.4,3.5
 
 group=com.clickhouse.spark
 
-clickhouse_jdbc_version=0.6.3
+clickhouse_jdbc_version=0.9.4
+clickhouse_client_v2_version=0.9.4
 
 spark_33_version=3.3.4
 spark_34_version=3.4.2

spark-3.3/clickhouse-spark-it/src/test/scala/org/apache/spark/sql/clickhouse/cluster/TPCDSClusterSuite.scala

Lines changed: 3 additions & 0 deletions

@@ -38,6 +38,8 @@ class TPCDSClusterSuite extends SparkClickHouseClusterTest {
     spark.sql("CREATE DATABASE tpcds_sf1_cluster WITH DBPROPERTIES (cluster = 'single_replica')")
 
     TPCDSTestUtils.tablePrimaryKeys.foreach { case (table, primaryKeys) =>
+      println(s"before table ${table} ${primaryKeys}")
+      val start: Long = System.currentTimeMillis()
       spark.sql(
         s"""
            |CREATE TABLE tpcds_sf1_cluster.$table
@@ -51,6 +53,7 @@ class TPCDSClusterSuite extends SparkClickHouseClusterTest {
            |SELECT * FROM tpcds.sf1.$table;
            |""".stripMargin
       )
+      println(s"time took table ${table} ${System.currentTimeMillis() - start}")
     }
 
     TPCDSTestUtils.tablePrimaryKeys.keys.foreach { table =>

Lines changed: 28 additions & 0 deletions

@@ -0,0 +1,28 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.clickhouse.single
+
+import com.clickhouse.spark.base.ClickHouseSingleMixIn
+import org.apache.spark.SparkConf
+
+class ClickHouseSingleArrowWriterSuite extends ClickHouseArrowWriterSuite with ClickHouseSingleMixIn
+
+abstract class ClickHouseArrowWriterSuite extends ClickHouseWriterTestBase {
+
+  override protected def sparkConf: SparkConf = super.sparkConf
+    .set("spark.clickhouse.write.format", "arrow")
+    .set("spark.clickhouse.read.format", "json")
+
+}

Lines changed: 40 additions & 0 deletions

@@ -0,0 +1,40 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.clickhouse.single
+
+import com.clickhouse.spark.base.{ClickHouseCloudMixIn, ClickHouseSingleMixIn}
+import org.apache.spark.SparkConf
+import org.scalatest.tags.Cloud
+
+@Cloud
+class ClickHouseCloudBinaryReaderSuite extends ClickHouseBinaryReaderSuite with ClickHouseCloudMixIn
+
+class ClickHouseSingleBinaryReaderSuite extends ClickHouseBinaryReaderSuite with ClickHouseSingleMixIn
+
+/**
+ * Test suite for ClickHouse Binary Reader.
+ * Uses binary format for reading data from ClickHouse.
+ * All test cases are inherited from ClickHouseReaderTestBase.
+ */
+abstract class ClickHouseBinaryReaderSuite extends ClickHouseReaderTestBase {
+
+  // Override to use binary format for reading
+  override protected def sparkConf: SparkConf = super.sparkConf
+    .set("spark.clickhouse.read.format", "binary")
+    .set("spark.clickhouse.write.format", "arrow")
+
+  // All tests are inherited from ClickHouseReaderTestBase
+  // Additional binary-specific tests can be added here if needed
+}
