
Feature/unswizzle #2732

Merged
ptrendx merged 23 commits into NVIDIA:main from int-smart:feature/unswizzle
Apr 3, 2026

Conversation

@int-smart
Contributor

@int-smart int-smart commented Mar 4, 2026

Description

This PR adds unswizzle support for scaling factors and extends the swizzle module so scaling tensors can be converted from GEMM-swizzled layout back to compact layout, including multi-tensor paths. It also adds round-trip and standalone tests to validate unswizzle correctness.

Fixes # (issue)

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Added unswizzle APIs and implementation in transformer_engine/common/swizzle/swizzle.cu and declarations in transformer_engine/common/include/transformer_engine/swizzle.h
  • Added multi-tensor unswizzle support with swizzle-like validation assumptions (homogeneous scaling mode/layout, swizzled input and compact output expectations)
  • Refactored multi-tensor unswizzle launch/kernels to mirror swizzle structure (split row-wise and column-wise kernels) for easier readability
  • Added/extended tests in tests/cpp/operator/test_swizzle.cu, including standalone unswizzle and swizzle→unswizzle round-trip coverage

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

int-smart and others added 6 commits March 3, 2026 20:40
- Introduced `nvte_unswizzle_scaling_factors` to convert swizzled scaling factors back to row-major format.
- Implemented `regs_unshuffle_with_bit_shifts` and `regs_unshuffle` for unshuffling operations in CUDA kernels.
- Added `unswizzle_row_scaling_kernel_impl` and `unswizzle_col_scaling_kernel_impl` for handling unswizzling in row and column scaling respectively.

These changes enhance the functionality of the swizzle module, enabling better handling of scaling factors in tensor operations.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
These enhancements test the changes introduced for unswizzling.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
- Introduced `compute_ref_unswizzle` to handle the conversion of swizzled scaling factors back to their original format.
- Added `performTestUnswizzle1D` to validate the unswizzling process with various scaling modes.
- Created `UnswizzleTestSuite` for comprehensive testing of unswizzling operations.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
- Moved the definition of `swizzle_row_scaling_kernel` to a new location for better organization.
- Ensured the kernel implementation is now properly defined and accessible for scaling operations in the swizzle module.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
- Introduced `multi_tensor_unswizzle_scaling_factors` to convert swizzled scaling factors back to their original row-major format.
- Implemented CUDA kernels for unswizzling in both row and column scaling, enhancing the swizzle module's functionality.
- Updated the launch function to handle multiple tensor unswizzling operations efficiently.

These changes improve the handling of scaling factors in tensor operations, ensuring better performance and organization within the swizzle module.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
@greptile-apps
Contributor

greptile-apps bot commented Mar 4, 2026

Greptile Summary

This PR adds nvte_unswizzle_scaling_factors and nvte_multi_tensor_unswizzle_scaling_factors APIs that convert scaling factors from GEMM-swizzled layout back to compact row-major format, mirroring the existing swizzle infrastructure. It introduces new regs_unshuffle / regs_unshuffle_with_bit_shifts device helpers, dedicated unswizzle_row_scaling_kernel_impl and unswizzle_col_scaling_kernel_impl kernels, a single-tensor dispatch function, and a multi-tensor variant — all structured symmetrically with the swizzle side. Tests cover both a standalone unswizzle path and a swizzle→unswizzle round-trip with padded (non-aligned) shapes.

Previous review threads have surfaced a number of open issues that require attention before merging:

  • Critical — output-size validation uses unpadded dims (unswizzle_scaling_factors rowwise path line ~1231, columnwise path line ~1248): m and k are derived from output->...shape while the check is supposed to guard the padded compact buffer. Any matrix whose M or K/32 is not already aligned will fail a valid call. The corresponding check in multi_tensor_unswizzle_scaling_factors already uses the correct padded dimensions.
  • Critical — m/k read before initialization (multi-tensor rowwise path, NVTE_CHECK around line ~1364): m and k are checked before they are assigned in the same block, causing UB and a check that can never fire. The error message also prints output shape instead of input shape.
  • OOB/correctness — SLM load stride in unswizzle_col_scaling_kernel_impl (line ~390): the inner load loop reads k_tiles_in_tb * SF_TILE_SIZE_I32 contiguous int32s from input_i32[i] under the assumption that K-tiles for an M-tile are contiguous in the swizzled layout; reviewers flagged this may read across M-tile boundaries.
  • Missing input validation (multi-tensor columnwise path, ~line ~1469): input[i]->columnwise_scale_inv.has_data() is never checked before accessing shape and dptr, while the rowwise path has the analogous guard.
  • Trivially-true size check / missing input validation (multi-tensor columnwise check, ~line ~1432): m * k is derived from output[i]->columnwise_scale_inv.shape, making the std::accumulate comparison always true; the input buffer size is never verified.
  • Dead code (block_scale_size assigned but never used in rowwise path, ~line ~1397): will generate a -Wunused-variable warning.
  • Leftover reviewer snippet (// Example for NVFP4 rowwise path: at ~line ~1225): a verbatim code-review suggestion was pasted into source and should be removed.
  • Test UB (skip messages in performTestUnswizzle1D and performTestSwizzleUnswizzleRoundtrip): SF_MODE_X/SF_MODE_Y referenced in std::to_string() before both variables are guaranteed to be written.

Several threads were marked resolved by maintainers: the dual-scale asymmetry between swizzle and unswizzle will be addressed in a follow-up PR, and the num_tensors derivation from output.size() is acceptable given the wrapper always builds equal-length vectors.

Confidence Score: 2/5

Not safe to merge — multiple open correctness and UB issues in swizzle.cu need to be resolved first.

Several unresolved issues from previous review threads remain: output-size validation uses unpadded dimensions causing valid calls to be rejected for non-aligned shapes; m/k are read before initialization in the multi-tensor rowwise path (undefined behaviour); the multi-tensor columnwise path lacks an input->has_data() guard and has a trivially-true size check that never validates the swizzled input buffer; dead code (block_scale_size) will generate compiler warnings; and a leftover review snippet sits in production source. The test file also has minor UB in skip messages. The kernel logic itself is structurally sound and mirrors the existing swizzle code, but the surrounding validation layer needs significant clean-up.

transformer_engine/common/swizzle/swizzle.cu — specifically unswizzle_scaling_factors (validation logic ~lines 1220–1260) and multi_tensor_unswizzle_scaling_factors (rowwise check, columnwise has_data guard, and block_scale_size ~lines 1360–1490).

Important Files Changed

Filename Overview
transformer_engine/common/swizzle/swizzle.cu Adds unswizzle kernels, single-tensor unswizzle_scaling_factors, and multi-tensor multi_tensor_unswizzle_scaling_factors; numerous open validation, UB, and correctness issues flagged in threads remain unresolved.
tests/cpp/operator/test_swizzle.cu Adds performTestUnswizzle1D and performTestSwizzleUnswizzleRoundtrip with padded-shape coverage; minor UB in skip-message string formatting remains open from earlier threads.
transformer_engine/common/include/transformer_engine/swizzle.h Adds nvte_unswizzle_scaling_factors and nvte_multi_tensor_unswizzle_scaling_factors API declarations and fixes a pre-existing "quantitized" typo; docs are clean apart from the typo remaining on pre-existing lines 29/43.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["nvte_unswizzle_scaling_factors(input, output, stream)"] --> B["unswizzle_scaling_factors()"]
    B --> C{scaling_mode}
    C -->|MXFP8_1D| D{has rowwise?}
    C -->|NVFP4_1D| E{has rowwise?}
    D -->|yes| F["m = output.scale_inv.shape[0]\nk = output.scale_inv.shape[1]\nrowwise = true"]
    D -->|no, columnwise| G["m = output.columnwise.shape[1]\nk = output.columnwise.shape[0]\nrowwise = false"]
    E -->|yes or no| H["rowwise = true\n(NVFP4 always uses row kernel)"]
    F --> I["launch unswizzle_scaling_kernel\n(rowwise=true)"]
    G --> J["launch unswizzle_scaling_kernel\n(rowwise=false)"]
    H --> I
    I --> K["unswizzle_row_scaling_kernel_impl\nLoad tiles to SLM\nregs_unshuffle\nWrite compact row-major"]
    J --> L["unswizzle_col_scaling_kernel_impl\nLoad tiles to SLM\nregs_unshuffle_with_bit_shifts\nWrite compact col-major"]
    M["nvte_multi_tensor_unswizzle_scaling_factors(inputs, outputs, n, stream)"]
    M --> N["multi_tensor_unswizzle_scaling_factors()"]
    N --> O{rowwise_unswizzle?}
    N --> P{columnwise_unswizzle?}
    O -->|yes| Q["launch_multi_tensor_unswizzle\nis_rowwise=true\nmulti_tensor_unswizzle_row_scaling_kernel"]
    P -->|yes| R["launch_multi_tensor_unswizzle\nis_rowwise=false\nmulti_tensor_unswizzle_col_scaling_kernel"]

Reviews (13): Last reviewed commit: "Merge branch 'main' into feature/unswizz..."

@vthumbe1503 vthumbe1503 added the community-contribution PRs from external contributor outside the core maintainers, representing community-driven work. label Mar 4, 2026
Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
@int-smart int-smart force-pushed the feature/unswizzle branch from 85ea04b to 17dbb33 on March 5, 2026 02:13
int-smart and others added 2 commits March 4, 2026 18:49
@ptrendx
Member

ptrendx commented Mar 11, 2026

@int-smart Please address the comments from Greptile and ideally also add the test case with the input not already padded to 128,128.

@int-smart
Contributor Author

@ptrendx Will look into these

@int-smart
Contributor Author

@ptrendx From what I understand, then, padding has no relevance to the unswizzle kernel. Since the padding is already done during the swizzling operation, I can just mirror it back to the compact layout with the zero pads placed correctly, and that should do. Is that assumption correct? Initially I was thinking of removing the padding from the scale_inv itself, since this would be used for checkpointing.

int-smart and others added 2 commits March 12, 2026 19:53
- Updated unswizzling kernel implementations to remove original_M and original_K parameters, simplifying the function signatures.
- Enhanced test suite to utilize new unswizzling data shapes, ensuring comprehensive coverage of aligned and padded cases.

These changes improve the clarity and efficiency of the unswizzling process in the swizzle module.
Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
@ptrendx
Member

ptrendx commented Mar 16, 2026

@int-smart I'm not sure I follow, I think that what you are saying is probably correct, but let me try to clarify just in case:

  • the scaling factors, irrespective of the compact or gemm-ready layout, are zero-padded to the multiple of [128,4] (or the transpose in case of compact and columnwise).
  • So for the unswizzle, you should just use the same size of the output unswizzled tensor as the original swizzled one. You don't even need to zero it before unswizzling, since the swizzled tensor already has 0s in the right places so unswizzling it will put 0s in the pad positions.

@int-smart
Contributor Author

@ptrendx Makes sense. I added that in the last commit.

Comment on lines +1212 to +1219
if (has_rowwise_scale_inv) {
NVTE_CHECK(output->scale_inv.has_data(),
"Output tensor does not have row-wise scaling factors.");
}
if (has_columnwise_scale_inv) {
NVTE_CHECK(output->columnwise_scale_inv.has_data(),
"Output tensor does not have column-wise scaling factors.");
}
Member

I would say that the logic here is a little backwards, even though I understand how here it is not obvious. Ultimately it is the output that tells you what to do in the function - think about the quantize function where the input does not know anything about the format to which it is quantized and it is the output that controls scaling mode and whether we need rowwise or columnwise quantization. Therefore here I would also treat the output as a "source of truth" on what we need to do and then check that the input tensor provides the right data (as opposed to this code which looks to input to know what to do and then checks the output).

Contributor Author

Changed this for the single-tensor version. Let me know if that makes sense. Can you tell me how this would be called, so that I can check how the input and output are allocated? From your comment above, I am currently assuming that the output has all the necessary information to decide between rowwise, columnwise, scaling_mode, and data pointers, along with dimensions such as m and k. If that is fine, I can make these changes to the multi-tensor version as well.

Member

Yes, the changes look good. Please update the multitensor version accordingly.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Comment on lines +249 to 263
std::vector<std::pair<size_t, size_t>> unswizzle_data_shapes = {
// Aligned: scale dims are already multiples of 128 and 4
{128, 128},
{128, 16896}, // K = 132 * 128, large K
{16896, 128}, // M = 132 * 128, large M
// M-padding only: M not a multiple of 128 (scale-M needs padding to 256)
{160, 128},
// scale-K padding only: K/32 = 3, padded to 4
{128, 96},
// Both M and scale-K need padding
{160, 96},
};

std::vector<std::pair<bool, bool>> scaling_mode = {
{true, false},
Contributor

P2 Roundtrip test only covers aligned matrix dimensions

performTestSwizzleUnswizzleRoundtrip is instantiated exclusively with the existing num_tiles vector, which always produces M = num_tiles_M * MAT_TILE_DIM_M — values that are exact multiples of 128 (the scale-M alignment). The standalone performTestUnswizzle1D intentionally adds padded shapes (e.g., M=160, K=96) via unswizzle_data_shapes, but no equivalent padded cases exist for the roundtrip.

If the output-size validation or padding-mask logic ever diverges between the swizzle and unswizzle paths for non-aligned M/K, the roundtrip test would pass while standalone tests fail (or vice-versa). Consider adding a few padded shapes (e.g., {4, 3} tile-count pairs or raw {160, 96} shapes) to num_tiles or creating a separate data-shape vector for the roundtrip suite.

Member

@int-smart Is there a reason for that difference between the tests?

Contributor Author

Just wanted to keep one test similar to the swizzle tests, which use aligned test cases. Moved the roundtrip to using unswizzle_data_shapes as well, with the aligned cases included in unswizzle_data_shapes.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
…streamline processing. Need to check if rowwise and columnwise both can be true. If yes the if else needs to account for that

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Comment on lines +461 to +476
const void* input = kernel_args.input_list[tensor_id];
void* output = kernel_args.output_list[tensor_id];
const int M = kernel_args.m_list[tensor_id];
const int K = kernel_args.k_list[tensor_id];

constexpr int N_TILE_PER_TD = sizeof(LType) / sizeof(int);
constexpr int N_TILES_IN_TB = TB_DIM * N_TILE_PER_TD;

const int num_tiles_k = K / SF_TILE_DIM_K;
const int num_tiles_m = M / SF_TILE_DIM_M;
const int flat_offset = bid - kernel_args.block_range[tensor_id];
const int grid_dim_x = DIVUP(num_tiles_k, N_TILES_IN_TB);
const int grid_dim_y = num_tiles_m;
const int bid_x = flat_offset / grid_dim_y;
const int bid_y = flat_offset % grid_dim_y;

Contributor

P2 unswizzle_col_scaling_kernel_impl: SLM load stride mismatch for multi-block K dimension

The SLM load for each M-tile reads SF_TILE_SIZE_I32 * k_tiles_in_tb contiguous int32s from input_i32[i]:

const int4* input_v4i = reinterpret_cast<const int4*>(input_i32[i]);
for (int j = linear_id; j < SF_TILE_SIZE_I32 * k_tiles_in_tb / 4; j += ...)
    slm_v4i[j] = input_v4i[j];

input_i32[i] is set to base + bid_x * TB_DIM * SF_TILE_SIZE_I32 + mt * SF_TILE_DIM_M_I32 * K_i32, where the stride between adjacent M-tiles in the swizzled layout is SF_TILE_DIM_M_I32 * K_i32.

For a full K-tile block (k_tiles_in_tb == TB_DIM == 32), the write size is 32 * SF_TILE_SIZE_I32 = 32 * 128 = 4096 int32s. The M-tile stride is 32 * K_i32. When K_i32 > 128 (e.g., K = 132 K-tiles), the M-tile stride 32 * 132 = 4224 > 4096, so there is a gap that the read does not cross—it is safe.

However, for the last K-tile block (when bid_x == grid_dim_x - 1) and k_tiles_in_tb < TB_DIM, the following K-tile block for the same M-tile starts at offset (bid_x+1) * TB_DIM * 128 + mt * 32 * K_i32, which is beyond the current read range. This appears correct in isolation, but the stride chosen by the swizzle for that region may leave uninitialised bytes between consecutive partial K-tile writes for the same M-tile.

Consider tracing through with K = 8 K-tiles (K_i32 = 8), M = 128:

  • num_tiles_k = 8 / SF_TILE_DIM_K_I32 = 8 / 4 = 2; grid_dim_x = DIVUP(2, TB_DIM) = 1
  • So bid_x=0 is the last block; k_tiles_in_tb = (2-1) % 32 + 1 = 2
  • Read: 2 * 128 = 256 int32s from 0 + 0 * 32 * 8 = 0
  • Swizzle stored: 2 * 128 = 256 int32s at offset 0

Layouts agree here. But consider K_i32 = 132, grid_dim_x = 2, bid_x = 1 (last), mt = 1:

  • input_i32[1] = 1 * 32 * 128 + 1 * 32 * 132 = 4096 + 4224 = 8320
  • k_tiles_in_tb = (33-1) % 32 + 1 = 1
  • Read: 1 * 128 = 128 int32s from 8320

Swizzle for (bid_x=1, mt=1) wrote 128 int32s starting at 1 * 32 * 128 + 1 * 32 * 132 = 8320. This matches.

After more careful analysis, the reads do appear correct for the cases tested by the test suite (all aligned shapes). However, the correctness relies on k_tiles_in_tb being computed identically in the swizzle and unswizzle kernels. Please add an assertion or comment clarifying the layout invariant assumed by this contiguous read, and add a test covering non-power-of-two K-tile counts (e.g., K = 132 * 4 = 528 with M = 256) to catch any latent mismatch.

…put tensors for scaling mode and data validation. Updated checks for input and output tensor shapes to ensure proper handling of row-wise and column-wise scaling factors.

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Comment on lines +208 to +210
#pragma unroll
for (int i = 0; i < kVectorSize; i++)
tmp[i % N_SF_PER_TD_PER_TILE * N_TILE_PER_TD + i / N_SF_PER_TD_PER_TILE] = ptr[i];
Contributor

P2 Missing parentheses around modulo expression

In C++, % and * share the same precedence and are left-to-right associative, so i % N_SF_PER_TD_PER_TILE * N_TILE_PER_TD is evaluated as (i % N_SF_PER_TD_PER_TILE) * N_TILE_PER_TD. This happens to be the correct inverse formula, but the lack of explicit parentheses makes the intent unclear compared to regs_shuffle's i / N_TILE_PER_TD + i % N_TILE_PER_TD * N_SF_PER_TD_PER_TILE (which has the same ambiguity). Adding explicit parentheses improves readability:

Suggested change
#pragma unroll
for (int i = 0; i < kVectorSize; i++)
tmp[i % N_SF_PER_TD_PER_TILE * N_TILE_PER_TD + i / N_SF_PER_TD_PER_TILE] = ptr[i];
tmp[(i % N_SF_PER_TD_PER_TILE) * N_TILE_PER_TD + i / N_SF_PER_TD_PER_TILE] = ptr[i];

Comment on lines +1432 to +1439
NVTE_CHECK(m % SF_TILE_DIM_M == 0, "Input should be padded in M/N dimension!");
NVTE_CHECK(k % SF_TILE_DIM_K == 0, "Input should be padded in K dimension!");
NVTE_CHECK(k > 0, "Input scale inverse should be 2D!");
NVTE_CHECK(m * k == std::accumulate(output[i]->columnwise_scale_inv.shape.begin(),
output[i]->columnwise_scale_inv.shape.end(), 1,
std::multiplies<int>()),
"Input.columnwise_scale_inv size is not equal to "
"Output.columnwise_scale_inv size!");
Contributor

P1 Trivially-true check never validates the swizzled input buffer size

m and k are derived from output[i]->columnwise_scale_inv.shape (shape[1] and shape[0] respectively), so m * k equals output[i]->columnwise_scale_inv.numel() by construction. The std::accumulate on the output shape will always equal m * k, making this check a no-op that never catches a real mismatch. Furthermore, the error message says "Input.columnwise_scale_inv size is not equal to Output.columnwise_scale_inv size!" but the check never reads input[i]->columnwise_scale_inv.numel().

The missing check is against the input (swizzled) size — mirror the same validation added in the rowwise path (line 1380):

NVTE_CHECK(input[i]->columnwise_scale_inv.has_data(), "Input tensor ", i,
           " does not have column-wise scaling factors.");
NVTE_CHECK(static_cast<size_t>(m) * k == input[i]->columnwise_scale_inv.numel(),
           "Expected input tensor ", i, " to have ", static_cast<size_t>(m) * k,
           " column-wise scaling factors, but got shape=",
           input[i]->columnwise_scale_inv.shape, ".");

… use input numel

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Comment on lines +1365 to +1366
m * k, " row-wise scaling factors, but got shape=", output[i]->scale_inv.shape,
".");
Contributor

P2 Misleading error message prints output shape instead of input shape

The NVTE_CHECK on line 1364 tests whether input[i]->scale_inv.numel() equals m * k (where m and k come from output[i]->scale_inv.shape). When the check fails the error message should tell the caller what the input actually contained, but it prints output[i]->scale_inv.shape instead.

Because m * k was already derived from the output shape, the message effectively says "got shape [Y, Z]" where Y*Z == m*k — the very value that was expected — giving the developer no information about what the mismatching input looked like.

Suggested change
m * k, " row-wise scaling factors, but got shape=", output[i]->scale_inv.shape,
".");
NVTE_CHECK(m * k == input[i]->scale_inv.numel(), "Expected input tensor ", i, " to have ",
m * k, " row-wise scaling factors, but got shape=", input[i]->scale_inv.shape,
".");

Signed-off-by: Abhishek <abhi.dtu11@gmail.com>
Comment on lines +1435 to +1446
NVTE_CHECK(m * k == std::accumulate(input[i]->columnwise_scale_inv.shape.begin(),
input[i]->columnwise_scale_inv.shape.end(), 1,
std::multiplies<int>()),
"Input.columnwise_scale_inv size is not equal to "
"Output.columnwise_scale_inv size!");

int num_tiles_k = k / SF_TILE_DIM_K;
int vec_load_size_i = (num_tiles_k - 1) % 4 + 1;
vec_load_size = std::min(vec_load_size, vec_load_size_i);

const int pos = kernel_args.num_tensors;
kernel_args.input_list[pos] = const_cast<void*>(input[i]->columnwise_scale_inv.dptr);
Contributor

P1 Missing has_data() validation for input columnwise scaling factors

The columnwise unswizzle path accesses input[i]->columnwise_scale_inv.shape (line 1435) and .dptr (line 1446) without first checking that the data actually exists. The validation loop at lines 1330–1332 only checks output[i]->columnwise_scale_inv.has_data() — not the input.

If input[i]->columnwise_scale_inv.has_data() is false, this block will silently compute with a null/zero numel() on line 1435 (causing the size check to fail spuriously) and then pass a null pointer to the kernel on line 1446, producing undefined behaviour.

By contrast, the analogous rowwise path at lines 1358–1359 explicitly guards against this with:

NVTE_CHECK(input[i]->scale_inv.has_data(), "Input tensor ", i,
           " does not have row-wise scaling factors.");

The same guard should be added before line 1435:

NVTE_CHECK(input[i]->columnwise_scale_inv.has_data(), "Input tensor ", i,
           " does not have column-wise scaling factors.");

Comment on lines +1225 to +1228
// Example for NVFP4 rowwise path:
NVTE_CHECK(static_cast<size_t>(m) * k == input->scale_inv.numel(),
"Expected input tensor to have ", static_cast<size_t>(m) * k,
" row-wise scaling factors, but got shape=", input->scale_inv.shape, ".");
Contributor

P2 Leftover code-review comment should be removed

The comment // Example for NVFP4 rowwise path: (with trailing whitespace on lines 1225–1226) appears to be an artefact of a reviewer's suggested code snippet that was copy-pasted directly into the source. It is not a meaningful code comment and will confuse future readers about why this block is labelled "example".

Suggested change
// Example for NVFP4 rowwise path:
NVTE_CHECK(static_cast<size_t>(m) * k == input->scale_inv.numel(),
"Expected input tensor to have ", static_cast<size_t>(m) * k,
" row-wise scaling factors, but got shape=", input->scale_inv.shape, ".");
NVTE_CHECK(static_cast<size_t>(m) * k == input->scale_inv.numel(),
"Expected input tensor to have ", static_cast<size_t>(m) * k,
" row-wise scaling factors, but got shape=", input->scale_inv.shape, ".");

Comment on lines +1155 to +1156
NVTE_CHECK(!has_rowwise_scale_inv || !has_columnwise_scale_inv,
"Output tensor has both row-wise and column-wise scaling factors");
Contributor

P1 Dual-scale tensors rejected in single-tensor unswizzle, breaking round-trip symmetry

unswizzle_scaling_factors hard-rejects any tensor that has both rowwise and columnwise scaling factors (line 1155–1156). The corresponding swizzle_scaling_factors (line 599) has the same check, so calling the public round-trip pair:

nvte_swizzle_scaling_factors(input, swizzled, stream);    // succeeds — handles both scales
nvte_unswizzle_scaling_factors(swizzled, output, stream); // FAILS — "Output tensor has both…"

will throw at runtime for any dual-path MXFP8 tensor (common in training). The implementation already has independent rowwise and columnwise kernel paths (if (has_rowwise_scale_inv) / else if (has_columnwise_scale_inv) at lines 1183 and 1197), so the restriction is artificial. Lifting it requires changing else if (has_columnwise_scale_inv) to if (has_columnwise_scale_inv) and executing both paths sequentially when both flags are set — exactly mirroring swizzle_scaling_factors's design.

@int-smart
Contributor Author

@ptrendx Let me know if this makes sense. All the major issues have been resolved.

Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
@ptrendx
Member

ptrendx commented Apr 1, 2026

/te-ci

ptrendx previously approved these changes Apr 1, 2026
@ptrendx
Member

ptrendx commented Apr 1, 2026

@int-smart Yes, it looks good. I resolved the merge conflict and launched the CI.

Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
@ptrendx
Member

ptrendx commented Apr 3, 2026

/te-ci

@ptrendx ptrendx merged commit 509614d into NVIDIA:main Apr 3, 2026
38 of 42 checks passed
@ptrendx
Member

ptrendx commented Apr 3, 2026

@int-smart Thank you for your contribution, merged :-)!

@int-smart int-smart deleted the feature/unswizzle branch April 4, 2026 05:25
