[SPARK-56633][SQL][TESTS] Add comprehensive Parquet vectorized-reader benchmark coverage #55558
Draft
LuciferYang wants to merge 2 commits into apache:master from
Conversation
LuciferYang (Contributor, Author):
will update benchmark results later
### What changes were proposed in this pull request?
Add comprehensive benchmark coverage for the Parquet vectorized-read decode
paths via three new benchmark files plus an extension to the existing
`VectorizedRleValuesReaderBenchmark`:
* `ParquetVectorUpdaterBenchmark` (new) - every `ParquetVectorUpdater`
family obtained from `ParquetVectorUpdaterFactory`: identity (Boolean,
Byte, Short, Integer, Long, Float, Double, Binary), type-converting
(IntegerToLong, IntegerToDouble, FloatToDouble, DateToTimestampNTZ,
DowncastLong), rebase (IntegerWithRebase, LongWithRebase, LongAsMicros),
unsigned (UnsignedInteger, UnsignedLong), decimal (IntegerToDecimal,
LongToDecimal, BinaryToDecimal, FixedLenByteArrayToDecimal), and
FixedLenByteArray (FixedLenByteArrayUpdater, FixedLenByteArrayAsInt,
FixedLenByteArrayAsLong).
* `VectorizedDeltaReaderBenchmark` (new) - all three delta decoders.
Group A/B: DELTA_BINARY_PACKED INT32/INT64 read+skip across constant /
monotonic / small-delta-random / wide-random distributions.
Group C: DELTA_BYTE_ARRAY read+skip across prefix-overlap shapes.
Group D: DELTA_LENGTH_BYTE_ARRAY read+skip across payload sizes.
Group E: variant reads on DeltaBinaryPackedReader (readBytes,
readShorts, readUnsignedIntegers, readUnsignedLongs, skipBytes,
skipShorts, single-value readByte/Short/Integer/Long) plus
DeltaByteArrayReader.readBinary(int len).
* `VectorizedPlainValuesReaderBenchmark` (new) - every public read/skip
method on `VectorizedPlainValuesReader` across five groups:
fixed-size bulk, conversion bulk (unsigned, with-rebase),
variable-length, single-value, skip. A hedged sketch of one such
bulk read/skip case follows this list.
* `VectorizedRleValuesReaderBenchmark` (extension) - new groups added:
Group E: row-index-filtered reads (exercises the with-filter path of
`readBatchInternal` / `readBatchInternalWithDefLevels`); two filter
shapes x three null ratios x with/without def-level materialization.
Group F: per-call overhead of readBoolean / readInteger /
readValueDictionaryId looped NUM_ROWS times.
Group G: skipBooleans / skipIntegers across the same parameter sweeps
as Groups A and B.
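To make the shared structure of these benchmark cases concrete, the sketch below shows one hypothetical PLAIN INT32 bulk read/skip group in the style of `VectorizedPlainValuesReaderBenchmark` (referenced from the bullet above). The object name, `NUM_ROWS`, and the page-preparation helpers are illustrative assumptions rather than this PR's actual code; only the Spark and Parquet classes it calls are real.

```scala
import java.nio.{ByteBuffer, ByteOrder}

import org.apache.parquet.bytes.ByteBufferInputStream

import org.apache.spark.benchmark.{Benchmark, BenchmarkBase}
import org.apache.spark.sql.execution.datasources.parquet.VectorizedPlainValuesReader
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
import org.apache.spark.sql.types.IntegerType

// Hypothetical sketch, not this PR's code.
object PlainInt32ReadSketch extends BenchmarkBase {
  private val NUM_ROWS = 1024 * 1024

  // PLAIN-encoded INT32 page: 4-byte little-endian values, prepared once.
  private val page: ByteBuffer = {
    val buf = ByteBuffer.allocate(NUM_ROWS * 4).order(ByteOrder.LITTLE_ENDIAN)
    (0 until NUM_ROWS).foreach(i => buf.putInt(i))
    buf.flip()
    buf
  }

  // Fresh reader per measured call: page state is consumed by reading.
  private def freshReader(): VectorizedPlainValuesReader = {
    val reader = new VectorizedPlainValuesReader()
    reader.initFromPage(NUM_ROWS, ByteBufferInputStream.wrap(page.duplicate()))
    reader
  }

  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {
    runBenchmark("PLAIN INT32 decode") {
      val benchmark = new Benchmark("PLAIN INT32 decode", NUM_ROWS, output = output)
      val vector = new OnHeapColumnVector(NUM_ROWS, IntegerType)

      benchmark.addCase("readIntegers") { _ =>
        vector.reset()
        freshReader().readIntegers(NUM_ROWS, vector, 0)
      }
      benchmark.addCase("skipIntegers") { _ =>
        freshReader().skipIntegers(NUM_ROWS)
      }
      benchmark.run()
      vector.close()
    }
  }
}
```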
### Why are the changes needed?
`ParquetVectorUpdater` and the delta / plain decoders sit on the hot path
of every Parquet column read but have no in-repo benchmark coverage today.
Coverage is intentionally broad - every public read/skip method is included
even when no obvious optimization opportunity exists - so the result files
track the long-term performance baseline of the Parquet decode surface and
future iterative optimization does not have to add benchmark coverage as a
precursor.
### Implementation notes
* Updater instances are obtained via the production
`ParquetVectorUpdaterFactory.getUpdater` entry point so the benchmark
exercises the full configuration matrix (logical-type annotation,
rebase mode, timezone) the production decoder uses. Tricky cases
(`DowncastLongUpdater`, `BinaryToDecimalUpdater`,
`FixedLenByteArrayToDecimalUpdater`) include a brief comment noting
the routing predicate that selects them, since slight changes to the
descriptor or target Spark type re-route to a different Updater.
* Each case pre-warms the decode path before `benchmark.addCase` to
stabilize first-case JIT state (a follow-up to the SPARK-56522
review feedback); see the sketch after this list.
* Variable-length cases call `vector.reset()` at the start of each
iteration so the binary vector's child arrayData does not accumulate
payload bytes across iterations.
* For row-index-filtered cases in `VectorizedRleValuesReaderBenchmark`,
a fresh `ParquetReadState` is constructed per measurement iteration
because `rowRanges` is iterated forward and not reset by the existing
resetForNewBatch / resetForNewPage entry points.
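Continuing the hypothetical sketch shown after the benchmark list above (same `freshReader`, `vector`, `benchmark`, and `NUM_ROWS` bindings), the pre-warm and reset notes translate roughly to the fragment below; `decodeOnce` is a placeholder name, not a helper in this PR.

```scala
// Decode closure shared by warm-up and the measured case. reset() keeps a
// variable-length vector from accumulating payload bytes across iterations,
// and a fresh reader is built because the page state is consumed by reading.
def decodeOnce(): Unit = {
  vector.reset()
  freshReader().readIntegers(NUM_ROWS, vector, 0)
}

decodeOnce() // pre-warm before benchmark.addCase to stabilize first-case JIT state
benchmark.addCase("readIntegers (pre-warmed)") { _ => decodeOnce() }
```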
### Does this PR introduce _any_ user-facing change?
No. Benchmark-only addition.
### How was this patch tested?
* `build/sbt sql/Test/compile` passes cleanly (including scalastyle).
* Result files are to be generated on GHA on JDK 17/21/25 to establish
the baseline; the local regeneration command is sketched below.
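For reference, a single class's result file can also be regenerated locally with the standard Spark benchmark invocation; the class name below is a placeholder, since the fully qualified names depend on where the new benchmark files are placed:

```
SPARK_GENERATE_BENCHMARK_FILES=1 build/sbt \
  "sql/Test/runMain <fully.qualified.name.of.ParquetVectorUpdaterBenchmark>"
```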
### Was this patch authored or co-authored using generative AI tooling?
Generated-by: Claude Opus 4.7