Sub-2-bit IQ1_S down #72
Open
adis-b wants to merge 4 commits into
Conversation
Adds DS4_TENSOR_IQ1_S (1.5625 bpw) as an accepted routed-expert quant for
the down projection so the routed-MoE down weights can be shipped at
roughly half the size of Q2_K. The gate/up tensors stay at IQ2_XXS for
this first cut.
ds4.c
- block_iq1_s struct, IQ1S_DELTA, size assertion
- DS4_TENSOR_IQ1_S enum + iq1_s gguf_types entry fixed (50 not 110 bytes)
- iq1s_grid.inc included as the 2048-entry codebook
- ds4_dequant_iq1_s + ds4_vec_dot_iq1_s_q8_K reference implementations
- dot_routed_down_q8_K dispatcher; matvec_q2_k_* family now branches on
the routed down tensor type instead of asserting Q2_K
metal/moe.metal
- block_iq1_s + dequantize_iq1_s template
- kernel_mul_mv_iq1_s_f32_impl decode kernel (mul_mv_id host export)
- kernel_mul_mm_id_iq1_s_f32 / _f16 prefill exports via the existing
mul_mm_id machinery and dequantize_iq1_s
- IQ1_S codebook placeholder filled at Metal-source-load time
ds4_metal.m
- DS4_METAL_TENSOR_IQ1_S, mv/mm pipeline globals and dispatch helpers
- Loads kernel_mul_mv_id_iq1_s_f32 in ds4_gpu_init
- ds4_gpu_full_source injects iq1s_grid.inc into the moe.metal source
so ds4.c and the Metal kernels share one codebook
tools/metal_smoke.c
- Tiny binary (make metal-smoke) that runs ds4_gpu_init only, so MSL
errors surface without needing to load a 50 GB GGUF
Extends the routed gate/up CPU paths to dispatch on tensor type so the
gate and up projections can also be IQ1_S, not only IQ2_XXS. The
existing fused IQ2_XXS pair dot is still preferred when both sides are
IQ2_XXS; IQ1_S (and any future quant) falls through to per-side dot.
ds4.c
- dot_routed_pair_q8_K dispatcher (and the per-side helper)
- matvec_iq2_xxs_pair_ctx, matvec_iq2_xxs_mid_ctx and
matvec_iq2_xxs_batch_mid_ctx carry per-side gate_type/up_type and
use the dispatcher
- matvec_iq2_xxs_expert_pair_prequant and
matvec_iq2_xxs_experts_mid_prequant accept any pair of supported
routed-expert quants
Metal already covers gate/up automatically: with IQ1_S the existing
pair_swiglu / pair fused fast-paths report nil and the dispatcher falls
into the non-fused arm that uses ds4_gpu_routed_mv_pipeline(IQ1_S).
README.md
- Adds a "64 GB Target (experimental)" section describing the new IQ1_S routed-expert kernel support, the projected file-size budget (~63 GB for the q1 variant), the realistic memory math for 64 GB machines, the Metal smoke-test entry point, and a llama.cpp --allow-requantize one-liner that turns the existing q2 GGUF into the new q1 GGUF.
download_model.sh
- Adds a 'q1' target pointing at adis-b/ds4-64gb-gguf as a placeholder for a future hosted q1 file. Other targets keep using antirez/deepseek-v4-gguf via a per-target SRC_REPO.
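The requantize one-liner presumably follows the standard llama.cpp quantize-tool invocation. Paths and thread count below are placeholders, and the binary has been named `quantize` in older llama.cpp trees and `llama-quantize` in newer ones, so check your build:

```
# Re-quantize an existing q2 GGUF down to IQ1_S in place of a fresh
# quantization. --allow-requantize permits starting from an
# already-quantized file (expect extra quality loss versus
# quantizing from the original f16/bf16 weights).
./llama-quantize --allow-requantize ds4-q2.gguf ds4-q1.gguf IQ1_S 8
```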
Owner

In theory interesting; in practice it is not a matter of code but of how it performs at 1-bit quantization. It only makes sense if it is more useful than a smaller model: it needs to beat Qwen 3.6 35B in at least some way, otherwise it is not useful, and I bet this can be achieved with 1-bit quants... Do you have any test prompt output?
Author

I will have the details tomorrow. I've also added another feature (https://github.com/adis-b/ds4-64gb/tree/sparse-residency-64gb).