Sub 2bit iq1s down #72

Open
adis-b wants to merge 4 commits into antirez:main from adis-b:sub-2bit-iq1s-down

Conversation

@adis-b

@adis-b adis-b commented May 11, 2026

64 GB Target (experimental) version added

adisbuturovic and others added 4 commits May 11, 2026 13:17
Adds DS4_TENSOR_IQ1_S (1.5625 bpw) as an accepted routed-expert quant for
the down projection so the routed-MoE down weights can be shipped at
roughly half the size of Q2_K. The gate/up tensors stay at IQ2_XXS for
this first cut.

ds4.c
  - block_iq1_s struct, IQ1S_DELTA, size assertion
  - DS4_TENSOR_IQ1_S enum + iq1_s gguf_types entry fixed (50 not 110 bytes)
  - iq1s_grid.inc included as the 2048-entry codebook
  - ds4_dequant_iq1_s + ds4_vec_dot_iq1_s_q8_K reference implementations
  - dot_routed_down_q8_K dispatcher; matvec_q2_k_* family now branches on
    the routed down tensor type instead of asserting Q2_K

metal/moe.metal
  - block_iq1_s + dequantize_iq1_s template
  - kernel_mul_mv_iq1_s_f32_impl decode kernel (mul_mv_id host export)
  - kernel_mul_mm_id_iq1_s_f32 / _f16 prefill exports via the existing
    mul_mm_id machinery and dequantize_iq1_s
  - IQ1_S codebook placeholder filled at Metal-source-load time

ds4_metal.m
  - DS4_METAL_TENSOR_IQ1_S, mv/mm pipeline globals and dispatch helpers
  - Loads kernel_mul_mv_id_iq1_s_f32 in ds4_gpu_init
  - ds4_gpu_full_source injects iq1s_grid.inc into the moe.metal source
    so ds4.c and the Metal kernels share one codebook

tools/metal_smoke.c
  - Tiny binary (make metal-smoke) that runs ds4_gpu_init only, so MSL
    errors surface without needing to load a 50 GB GGUF
Extends the routed gate/up CPU paths to dispatch on tensor type so the
gate and up projections can also be IQ1_S, not only IQ2_XXS. The
existing fused IQ2_XXS pair dot is still preferred when both sides are
IQ2_XXS; IQ1_S (and any future quant) falls through to per-side dot.

ds4.c
  - dot_routed_pair_q8_K dispatcher (and the per-side helper)
  - matvec_iq2_xxs_pair_ctx, matvec_iq2_xxs_mid_ctx and
    matvec_iq2_xxs_batch_mid_ctx carry per-side gate_type/up_type and
    use the dispatcher
  - matvec_iq2_xxs_expert_pair_prequant and
    matvec_iq2_xxs_experts_mid_prequant accept any pair of supported
    routed-expert quants

Metal already covers gate/up automatically: with IQ1_S the existing
pair_swiglu / pair fused fast-paths report nil and the dispatcher falls
into the non-fused arm that uses ds4_gpu_routed_mv_pipeline(IQ1_S).
Adds a "64 GB Target (experimental)" section to README.md describing the
new IQ1_S routed-expert kernel support, the projected file-size budget
(~63 GB for the q1 variant), the realistic memory math for 64 GB
machines, the Metal smoke-test entry point, and a llama.cpp
--allow-requantize one-liner that turns the existing q2 GGUF into the
new q1 GGUF.
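For reference, the requantize step would look roughly like the following llama.cpp `llama-quantize` invocation (filenames are illustrative; `--allow-requantize` permits converting from an already-quantized q2 source, at some quality cost versus quantizing from fp16):

```shell
./llama-quantize --allow-requantize ds4-q2.gguf ds4-q1.gguf IQ1_S
```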

download_model.sh: adds a 'q1' target pointing at adis-b/ds4-64gb-gguf
as a placeholder for a future hosted q1 file. Other targets keep using
antirez/deepseek-v4-gguf via a per-target SRC_REPO.
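The per-target SRC_REPO selection can be sketched as a plain case statement (the function name is illustrative, not necessarily how download_model.sh factors it):

```shell
#!/bin/sh
# Map a download target to its source repo: 'q1' points at the
# placeholder repo named above; everything else keeps the default.
src_repo_for() {
    case "$1" in
        q1) echo "adis-b/ds4-64gb-gguf" ;;
        *)  echo "antirez/deepseek-v4-gguf" ;;
    esac
}

src_repo_for q1
src_repo_for q2
```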
@adis-b adis-b closed this May 11, 2026
@adis-b adis-b reopened this May 11, 2026
@antirez
Owner

antirez commented May 11, 2026

In theory this is interesting, but in practice it is not a matter of code: it is a matter of how it performs at 1-bit quantization. It only makes sense if it is more useful than a smaller model. It needs to beat Qwen 3.6 35B at least in some way, otherwise it is not useful, and I am not sure this can be achieved with 1-bit quants... Do you have any test prompt output?

@antirez antirez added the help wanted Extra attention is needed label May 11, 2026
@adis-b
Author

adis-b commented May 11, 2026

I will have the details tomorrow, I've added another feature as well (https://github.com/adis-b/ds4-64gb/tree/sparse-residency-64gb).

