perf: skip redundant convert_to_tensor for MetaTensor strip#8846
aymuos15 wants to merge 3 commits into Project-MONAI:dev
Conversation
📝 Walkthrough
The PR updates tensor extraction for MetaTensor inputs: in intensity transforms (ScaleIntensity, ScaleIntensityFixedMean, ClipIntensityPercentiles, ScaleIntensityRangePercentiles, SavitzkyGolaySmooth, RandHistogramShift, GibbsNoise, IntensityRemap) the code now strips the MetaTensor wrapper with `as_tensor()` instead of a second `convert_to_tensor` call.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/transforms/intensity/array.py (1)
487-498: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add tests for modified transforms. The optimization modifies 8 transforms' `__call__` methods without accompanying test additions. Per coding guidelines, modified definitions must have test coverage. IntensityRemap notably lacks any test file.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@monai/transforms/intensity/array.py` around lines 487-498: The IntensityRemap `__call__` implementation was changed but lacks unit tests; add a new test module that exercises `IntensityRemap.__call__` (class IntensityRemap) for the key branches: (1) channel_wise=True with per-channel minv/maxv producing stacked outputs, (2) channel_wise=False using tensor-wide minv/maxv, (3) the fallback branch when minv/maxv are None and factor is applied, (4) dtype conversion via convert_to_dst_type, and (5) MetaTensor inputs vs raw torch.Tensor. Use parameterized inputs and assertions on numeric correctness and output dtype/shape to cover rescale_array and convert_to_dst_type interactions. Ensure tests fail on regressions and are placed alongside other intensity-transform tests following project test naming conventions.
🧹 Nitpick comments (1)
monai/transforms/intensity/array.py (1)
2608: ⚡ Quick win

Inconsistent variable name: `img_` should be `img_t`. Every other modified transform in this file (ScaleIntensity, ScaleIntensityFixedMean, ClipIntensityPercentiles, ScaleIntensityRangePercentiles, SavitzkyGolaySmooth, RandHistogramShift, GibbsNoise) names the extracted tensor `img_t`. IntensityRemap uses `img_`, diverging from the established convention.

✏️ Rename for consistency

```diff
-        img_ = img.as_tensor() if isinstance(img, MetaTensor) else cast(torch.Tensor, img)
+        img_t = img.as_tensor() if isinstance(img, MetaTensor) else cast(torch.Tensor, img)
         # sample noise
-        vals_to_sample = torch.unique(img_).tolist()
+        vals_to_sample = torch.unique(img_t).tolist()
         noise = torch.from_numpy(self.R.choice(vals_to_sample, len(vals_to_sample) - 1 + self.kernel_size))
         # smooth
         noise = torch.nn.AvgPool1d(self.kernel_size, stride=1)(noise.unsqueeze(0)).squeeze()
         # add linear component
         grid = torch.arange(len(noise)) / len(noise)
         noise += self.slope * grid
         # rescale
-        noise = (noise - noise.min()) / (noise.max() - noise.min()) * img_.max() + img_.min()
+        noise = (noise - noise.min()) / (noise.max() - noise.min()) * img_t.max() + img_t.min()
         # intensity remapping function
-        index_img = torch.bucketize(img_, torch.tensor(vals_to_sample))
+        index_img = torch.bucketize(img_t, torch.tensor(vals_to_sample))
```

As per coding guidelines, variable names should be sensible and consistent.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@monai/transforms/intensity/array.py` at line 2608: the variable name `img_` in the IntensityRemap transform should be renamed to `img_t` to match the rest of the file; locate the construction `img_ = img.as_tensor() if isinstance(img, MetaTensor) else cast(torch.Tensor, img)` inside the IntensityRemap implementation, rename it to `img_t`, then update every subsequent reference in that function to use `img_t` instead of `img_` while preserving the same type handling and behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: df4d23d3-84d5-438f-8fb0-da97439ead8f
📒 Files selected for processing (2)
monai/transforms/intensity/array.py
monai/transforms/post/array.py
Several transforms invoke ``convert_to_tensor`` twice on the same input:
once with ``track_meta=get_track_meta()`` to keep the MetaTensor wrapper,
then again with ``track_meta=False`` to obtain a plain-tensor view for the
actual computation. The second call routes through
``_convert_tensor(data).to(dtype=..., device=..., memory_format=contiguous_format)``
(``monai/utils/type_conversion.py``) and is functionally equivalent to
calling ``MetaTensor.as_tensor()`` for the purpose intended here, since
the first call already enforced ``contiguous_format``.
Replace the second call with an explicit ``as_tensor()`` strip:
img_t = img.as_tensor() if isinstance(img, MetaTensor) else img
This avoids a redundant ``convert_to_tensor`` dispatch and a no-op
``Tensor.to(memory_format=contiguous_format)`` per ``__call__``. Sites
touched (all in transform ``__call__``, so per-sample per-epoch):
* ``KeepLargestConnectedComponent`` (post/array.py)
* ``RemoveSmallObjects`` (post/array.py)
* ``LabelToContour`` (post/array.py)
* ``ScaleIntensity`` (intensity/array.py)
* ``ScaleIntensityFixedMean`` (intensity/array.py)
* ``ClipIntensityPercentiles`` (intensity/array.py)
* ``NormalizeIntensity`` (intensity/array.py)
* ``SavitzkyGolaySmooth`` (intensity/array.py)
* ``RandHistogramShift`` (intensity/array.py)
* ``KSpaceSpikeNoise`` (intensity/array.py)
* ``IntensityRemap`` (intensity/array.py)
No behavioral change: the resulting tensor is the same underlying data
(``as_tensor()`` returns the wrapped ``torch.Tensor`` without copying).
Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
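The no-copy strip described above can be sketched with toy stand-ins (plain Python classes standing in for `torch.Tensor` and `monai.data.MetaTensor`; the names mirror the real API, but this is an illustration of the pattern, not MONAI code):

```python
# Toy stand-ins for torch.Tensor / monai.data.MetaTensor (illustration only).
class Tensor:
    def __init__(self, data):
        self.data = data


class MetaTensor(Tensor):
    """Wrapper carrying metadata alongside the raw data."""

    def __init__(self, data, meta=None):
        super().__init__(data)
        self.meta = meta or {}

    def as_tensor(self):
        # Return a plain-Tensor view of the same underlying data, without
        # copying -- mirroring what the PR says MetaTensor.as_tensor() does.
        t = Tensor.__new__(Tensor)
        t.data = self.data
        return t


def strip_meta(img):
    # The replacement for the second convert_to_tensor call:
    return img.as_tensor() if isinstance(img, MetaTensor) else img


meta_img = MetaTensor([1.0, 2.0], meta={"affine": "identity"})
plain = strip_meta(meta_img)
assert plain.data is meta_img.data        # same underlying data, no copy
assert not isinstance(plain, MetaTensor)  # wrapper stripped
assert strip_meta(Tensor([3.0])).data == [3.0]  # plain tensors pass through
```

The point of the pattern is that the branch is a cheap `isinstance` check plus a view, rather than a full conversion dispatch per `__call__`.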
Force-pushed: 21f2129 to c575f22
The new `img.as_tensor() if isinstance(img, MetaTensor) else cast(...)` expression narrowed mypy's inference of `img_t` to `Tensor`, so downstream reassignments from `_clip` / `_normalize` / `interp` (which return `NdarrayOrTensor`) failed type-checking.

Annotate `img_t` explicitly as `NdarrayOrTensor` and drop the redundant `cast(torch.Tensor, img)` so the variable matches the type used by the rest of each transform's body. Keep the `cast` only on `SavitzkyGolaySmooth.__call__`, where `self.img_t` is declared as `torch.Tensor` in `__init__` and used unconditionally as a Tensor afterwards.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
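A minimal sketch of the typing issue this commit fixes. `Ndarray`, `Tensor`, and `_clip` here are placeholder stand-ins for `np.ndarray`, `torch.Tensor`, and the transform helpers; in MONAI, `NdarrayOrTensor` is the union alias the transforms actually use:

```python
from typing import Union


class Ndarray: ...            # stand-in for np.ndarray
class Tensor: ...             # stand-in for torch.Tensor


class MetaTensor(Tensor):     # stand-in for monai.data.MetaTensor
    def as_tensor(self) -> Tensor:
        return Tensor()


NdarrayOrTensor = Union[Ndarray, Tensor]


def _clip(x: NdarrayOrTensor) -> NdarrayOrTensor:
    # Stand-in for helpers like _clip/_normalize/interp, which may
    # return either an ndarray or a tensor.
    return Ndarray()


def call(img: Tensor) -> NdarrayOrTensor:
    # Without the explicit annotation, mypy infers img_t as Tensor from
    # the ternary, and the later reassignment from _clip (returning
    # NdarrayOrTensor) is rejected. The widened annotation fixes that.
    img_t: NdarrayOrTensor = img.as_tensor() if isinstance(img, MetaTensor) else img
    img_t = _clip(img_t)  # OK under the NdarrayOrTensor annotation
    return img_t
```

The runtime behavior is identical either way; only mypy's view of `img_t` changes.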
🧹 Nitpick comments (1)
monai/transforms/intensity/array.py (1)
487-488: 🏗️ Heavy lift

Add parity tests for updated MetaTensor/tensor paths. This touches many `__call__` implementations. Please add/extend tests that compare MetaTensor vs plain Tensor inputs for output values, dtype/device, and metadata preservation. As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."
Also applies to: 546-547, 1172-1173, 1437-1438, 1534-1535, 1911-1912, 1956-1957, 2608-2609
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@monai/transforms/intensity/array.py` around lines 487 - 488, Add parity unit tests that feed both MetaTensor and plain Tensor into every affected transform __call__ implementation (the code paths that use img_t = img.as_tensor() if isinstance(img, MetaTensor) else img and the similar branches at the other flagged locations) and assert that output values, dtype, device and MetaTensor metadata preservation match. Specifically, create tests that: construct identical data as a Tensor and as a MetaTensor with metadata, call the transform (the __call__ methods in monai/transforms/intensity/array.py touched by the diff and the corresponding implementations at the other flagged locations), compare numeric outputs for equality, verify output dtype and device are identical, and verify that metadata from MetaTensor is preserved or propagated as expected; extend or add tests for each of the other spots listed (the spans at 546-547, 1172-1173, 1437-1438, 1534-1535, 1911-1912, 1956-1957, 2608-2609) so every modified path is covered.
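The requested parity check can be sketched as a small helper. The `Tensor`/`MetaTensor` classes and the `scale` transform below are toy stand-ins; in the real suite the inputs would be `torch.Tensor` and `monai.data.MetaTensor` and the transform an actual MONAI transform:

```python
class Tensor:
    def __init__(self, data):
        self.data = list(data)


class MetaTensor(Tensor):
    def __init__(self, data, meta=None):
        super().__init__(data)
        self.meta = dict(meta or {})

    def as_tensor(self):
        return Tensor(self.data)


def scale(img):
    # Toy transform: strip the wrapper, compute on the plain view, and
    # re-wrap preserving metadata when the input was a MetaTensor.
    img_t = img.as_tensor() if isinstance(img, MetaTensor) else img
    out = [v * 2.0 for v in img_t.data]
    return MetaTensor(out, meta=img.meta) if isinstance(img, MetaTensor) else Tensor(out)


def assert_parity(transform, data, meta):
    plain_out = transform(Tensor(data))
    meta_out = transform(MetaTensor(data, meta=meta))
    assert plain_out.data == meta_out.data   # same numeric result either way
    assert isinstance(meta_out, MetaTensor)  # wrapper preserved
    assert meta_out.meta == meta             # metadata propagated


assert_parity(scale, [1.0, 2.0, 3.0], {"affine": "identity"})
```

A real test would additionally compare dtype and device, which the stand-ins here do not model.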
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e4b380e0-e5fa-4560-8fcb-04867d44b626
📒 Files selected for processing (1)
monai/transforms/intensity/array.py