
Add GridFlip weight quantizer backend#1

Open
dalistarh wants to merge 2 commits into main from codex/gridflip-quantizer-cloverlm

Conversation

@dalistarh
Contributor

Summary

  • Add a CUDA GridFlip FourSix weight quantizer to the CloverLM quartet2 extension
  • Wire GridFlip weight quantization through Quartet_II_linear with the QuTLASS and dequantized-matmul backends (a usage sketch follows the note below)
  • Add focused QuTLASS/GridFlip correctness tests

Note: this branch preserves the WUSH/train changes that were already present in the working tree before the GridFlip quantizer work.
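
For context, a hypothetical usage sketch of the wiring described above. The import path and the `weight_quantizer`/`backend` keyword arguments are illustrative assumptions, not the extension's verified API:

```python
# Hypothetical sketch -- import path and argument names are assumptions,
# not the verified quartet2 API.
import torch
from quartet2 import Quartet_II_linear  # assumed import path

linear = Quartet_II_linear(
    4096, 4096,
    weight_quantizer="gridflip",  # assumed selector for the new quantizer
    backend="qutlass",            # assumed: QuTLASS vs. dequantized matmul
).cuda()

x = torch.randn(16, 4096, device="cuda", dtype=torch.bfloat16)
y = linear(x)  # forward pass through GridFlip-quantized FP4 weights
```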

Validation

  • uv pip install --python .venv/bin/python -e ./quartet2
  • CUDA_VISIBLE_DEVICES=0 .venv/bin/python -m pytest quartet2/test/test_qutlass_backend.py -q -> 10 passed
  • CUDA_VISIBLE_DEVICES=0 .venv/bin/python -m pytest quartet2/test/test_linear.py -k "compile_fwd or result_fwd or autocast" -q -> 3 passed, 7 deselected
  • git diff --check

Benchmark Smoke

4096x4096 weight quantization wall time:

  • FourSix CUDA: 0.065 ms
  • GridFlip CUDA: 0.076 ms
  • GridFlip Python reference: 7.951 ms

2048x2048 matmul wall time:

  • standard QuTLASS FP4 matmul: 0.092 ms
  • GridFlip QuTLASS FP4 matmul: 2.164 ms
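
For reproducibility, wall times like those above can be measured with CUDA events along these lines; `quantize_gridflip` below is a hypothetical stand-in for the actual kernel entry point:

```python
import torch

def cuda_wall_time_ms(fn, *args, warmup=10, iters=100):
    """Average CUDA wall time of fn(*args) in milliseconds, via CUDA events."""
    for _ in range(warmup):
        fn(*args)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

w = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
# print(cuda_wall_time_ms(quantize_gridflip, w))  # hypothetical entry point
```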

Copy link
Copy Markdown

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

```python
if weight_amax is None:
    weight_amax = abs_max(weight)
```

P1: Recompute `weight_amax` after applying the WUSH transform

When WUSH is enabled, `weight` is first transformed (`apply_block_transform`) and then quantized, but this branch reuses the cached `weight_amax` whenever it is set. In training, `register_optimizer_hook` populates `weight_abs_max` from the untransformed parameter, so after the first optimizer step the forward path quantizes transformed weights with a stale, unrelated max value. This mis-scales FP4 quantization and can materially distort training results whenever `--wush` is on.
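
A minimal sketch of the suggested fix, assuming `wush_enabled` stands in for however the branch detects that WUSH is active (other names follow the snippet and review above):

```python
if wush_enabled:
    weight = apply_block_transform(weight)
    # The cached amax was computed on the untransformed parameter by
    # register_optimizer_hook, so recompute it on the transformed weight.
    weight_amax = abs_max(weight)
elif weight_amax is None:
    weight_amax = abs_max(weight)
```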


Comment thread: src/train.py

```diff
  current_batch = checkpoint_dict["checkpoint"].train_batch
- if current_batch > 1 and ((current_batch % args.checkpoint_freq == 0) or (current_batch == last_batch)):
+ if args.checkpoint_freq != utils.INF and current_batch > 1 and ((current_batch % args.checkpoint_freq == 0) or (current_batch == last_batch)):
```
P2: Preserve the final checkpoint save when `checkpoint_freq` is `INF`

This condition now gates all checkpoint writes behind `args.checkpoint_freq != utils.INF`. Because the parser default is `utils.INF`, runs with default settings no longer save even the final-step checkpoint (`current_batch == last_batch`), which breaks expected resumability and artifact persistence for normal training invocations.
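
A sketch of a condition that keeps periodic saves gated behind a finite `checkpoint_freq` while still letting the final-step save through; variable names follow the diff above, and the save call itself is elided:

```python
is_last = current_batch == last_batch
is_periodic = (
    args.checkpoint_freq != utils.INF
    and current_batch % args.checkpoint_freq == 0
)
if current_batch > 1 and (is_periodic or is_last):
    ...  # existing checkpoint-saving body
```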


@dalistarh
Contributor Author

ClimbMix5B training status (2026-04-26 UTC):

  • Dataset: 5B-token slice from daslab-testing/climbmix-pretokenized, stored at /scratch/dalistarh/cloverlm_data/climbmix5b.
  • Active run: GridFlip 4/6 via qutlass, /scratch/dalistarh/cloverlm_runs/climbmix5b_retry5_adam_gridflip_05b_2gpu_b1280_mb320_manual.
  • First fixed-val checkpoint: batch 500, 82,083,840 tokens, train_loss=4.8814, val_loss_fixed=4.9015, throughput=146,959 toks/sec total / 73,479 toks/sec/GPU, ETA=9h18m.
  • Queue: standard Quartet-II 4/6 and BF16 baselines are queued behind GridFlip in /scratch/dalistarh/jobs/run_climbmix5b_retry5_manual_reserve.sh.
  • Commit 79b4f09 adds the DDP accumulation fix used by these runs and an opt-in GPU reservation keepalive for canhazgpu-launched jobs.
