
Cap dask graph size in read_geotiff_dask and batch adler32 transfers #1211

Merged

brendancol merged 1 commit into master from perf/geotiff-graph-cap-and-adler32-batch on Apr 16, 2026

Conversation

@brendancol (Contributor) commented Apr 16, 2026

Summary

  • read_geotiff_dask: cap total dask chunks at 1,000,000. When the requested chunks= would exceed the cap, auto-scale chunk size upward and emit a UserWarning. Prevents driver OOM during graph construction for very large files at small chunk sizes.
  • _nvcomp_batch_compress (deflate path): batch the uncompressed buffers of all tiles into a single contiguous device buffer, transfer to the host once, and compute zlib.adler32 from memoryview slices. Removes the n_tiles per-tile .get() sync points and the associated .tobytes() copies.

Motivation

Static analysis on 30TB-scale dask workloads flagged read_geotiff_dask as WILL-OOM for an unrealistic-but-legal combination: at chunks=256 a 2.87M × 2.87M image would build ~125M spatial chunks and ~500M dask tasks. Each delayed task retains ~1KB of Python graph metadata, so the driver allocates tens to hundreds of GB for the graph alone — before any file read executes. The cap keeps the graph bounded without changing behavior for normal chunk sizes.
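For illustration, a minimal sketch of the capping logic described above; the helper name `_cap_chunks`, the doubling strategy, and the warning text are assumptions, not the PR's actual implementation:

```python
import math
import warnings

MAX_TOTAL_CHUNKS = 1_000_000  # cap introduced by this PR

def _cap_chunks(shape, chunks):
    """Hypothetical helper: grow a (cy, cx) chunk request until the
    total chunk count fits under MAX_TOTAL_CHUNKS."""
    cy, cx = chunks
    requested = (cy, cx)
    while math.ceil(shape[0] / cy) * math.ceil(shape[1] / cx) > MAX_TOTAL_CHUNKS:
        # Double the smaller dimension first so chunks stay roughly square.
        if cy <= cx:
            cy = min(cy * 2, shape[0])
        else:
            cx = min(cx * 2, shape[1])
    if (cy, cx) != requested:
        warnings.warn(
            f"chunks={requested} would exceed {MAX_TOTAL_CHUNKS:,} dask "
            f"chunks; auto-scaled to chunks=({cy}, {cx})",
            UserWarning,
        )
    return cy, cx
```

Under this sketch, the pathological case above, `_cap_chunks((2_870_000, 2_870_000), (256, 256))`, would stop at chunks=(4096, 2048): about 983K chunks, just under the cap.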

The adler32 finding is on the GPU write path: when nvCOMP returns raw deflate we have to wrap it in a zlib container, which needs an adler32 trailer computed from the uncompressed bytes. The previous code pulled each tile off the device individually, each .get() being a stream sync, each .tobytes() an extra copy. Batching gives us one DMA instead of N.
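A sketch of the batched transfer, assuming CuPy device arrays; the function name, tile packing, and buffer layout are illustrative rather than the PR's exact code:

```python
import zlib
import cupy as cp

def batched_adler32(tiles):
    """Hypothetical sketch: one device-to-host transfer for all tiles,
    then per-tile adler32 over host memoryview slices (no extra copies)."""
    sizes = [t.nbytes for t in tiles]
    batch = cp.empty(sum(sizes), dtype=cp.uint8)
    off = 0
    for t in tiles:
        # Pack each tile's raw bytes into one contiguous device buffer
        # (device-to-device copy, no host sync).
        batch[off:off + t.nbytes] = t.reshape(-1).view(cp.uint8)
        off += t.nbytes
    host = batch.get()       # the single DMA + sync, instead of one per tile
    view = memoryview(host)  # zero-copy view over the host buffer
    checksums, off = [], 0
    for n in sizes:
        checksums.append(zlib.adler32(view[off:off + n]))
        off += n
    return checksums
```

Each checksum then fills the 4-byte big-endian trailer of that tile's zlib container.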

Benchmark

Graph-construction measurement for a synthetic GeoTIFF (4096 × 4096 float32):

chunks (px)   tasks   wall (ms)   retained (MB)
       2048      16          65            0.20
        512     256          81            0.48
        256    1024         127            1.11
        128    4096         335            3.76
         64   16384        1138           14.31

Wall time and retained memory grow linearly with task count, confirming the O(N) graph scaling. The cap triggers only when the computed chunk count exceeds 1,000,000; at normal chunk sizes the behavior and task count are unchanged.
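A harness along these lines could reproduce the numbers; the delayed no-op stands in for the real per-chunk read, and the four-tasks-per-chunk factor matches the table's task column (all names here are illustrative):

```python
import time
import tracemalloc
import dask

def _noop(i):
    return None

# The table's task counts imply ~4 dask tasks per spatial chunk, so build
# that many delayed no-ops per chunk of a 4096 x 4096 image.
for chunk in (2048, 512, 256, 128, 64):
    n_tasks = 4 * (4096 // chunk) ** 2
    tracemalloc.start()
    t0 = time.perf_counter()
    tasks = [dask.delayed(_noop)(i) for i in range(n_tasks)]
    wall_ms = (time.perf_counter() - t0) * 1e3
    retained, _ = tracemalloc.get_traced_memory()  # bytes still held
    tracemalloc.stop()
    print(f"chunks={chunk:5d} tasks={n_tasks:6d} "
          f"wall={wall_ms:6.0f} ms retained={retained / 2**20:5.2f} MB")
    del tasks
```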

Test plan

  • pytest xrspatial/geotiff/tests/ -k "dask or read_geotiff" — 12 existing tests pass
  • Manual smoke: verify read_geotiff_dask with default chunks produces no warning; verify the small-chunks case emits the expected UserWarning with the auto-scaled tuple (a sketch of such a check follows this list)
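The manual smoke checks could be automated roughly as follows; the import path, fixture names, and warning-message match string are assumptions:

```python
import warnings
import pytest
from xrspatial.geotiff import read_geotiff_dask  # assumed import path

def test_small_chunks_emits_userwarning(huge_tif_path):
    # Hypothetical fixture: a file large enough that chunks=256 exceeds
    # the 1,000,000-chunk cap and triggers auto-scaling.
    with pytest.warns(UserWarning, match="auto-scaled"):
        read_geotiff_dask(huge_tif_path, chunks=256)

def test_default_chunks_no_warning(small_tif_path):
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # turn any warning into a failure
        read_geotiff_dask(small_tif_path)
```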

Out of scope

  • CPU-side adler32 performance is unchanged (same zlib.adler32 call per tile on host data).
  • The graph-size cap is a defensive upper bound; tuning the default chunk size for very large files is left for a follow-up.

Commit message

read_geotiff_dask built one delayed task per chunk with no upper bound.
For very large files at small chunk sizes the Python graph itself OOMs
the driver before any pixel read runs (30TB at chunks=256 would produce
~125M chunks, ~500M tasks, ~500GB graph on the host). Cap total chunks
at 1,000,000 and auto-scale the requested chunk size upward, emitting
a UserWarning so callers know their request was adjusted.

_nvcomp_batch_compress on the deflate path copied every uncompressed
tile GPU->CPU one at a time with .get().tobytes() purely to compute the
zlib adler32 trailer. Each per-tile .get() is a sync point on the default
stream. Batch all tiles into a single contiguous device buffer, transfer
once, then compute adler32 from a host memoryview slice per tile.
@github-actions bot added the performance label (PR touches performance-sensitive code) on Apr 16, 2026
@brendancol merged commit 6316ef4 into master on Apr 16, 2026
11 checks passed