
Commit 1e2d36a

bobrenjc93 authored and pobin6 committed
don't specialize when grad tracking tensors are activated (pytorch#140828)
Fixes `python test/dynamo/test_inline_inbuilt_nn_modules.py InlineInbuiltNNModulesFuncTorchHigherOrderOpTests.test_grad_non_tensor_input_inline_inbuilt_nn_modules` when `specialize_float=False`

Pull Request resolved: pytorch#140828
Approved by: https://github.com/ezyang
ghstack dependencies: pytorch#140830, pytorch#140832
1 parent b6e070d commit 1e2d36a
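
For context, a rough sketch of the kind of case the commit message refers to (the function body and values below are illustrative, not the actual test code): a `torch.func.grad` call compiled with Dynamo that receives a plain Python float alongside a tensor, with float specialization disabled.

import torch
import torch._dynamo

def fn(x, scale):
    # `scale` is a non-tensor (plain Python float) input
    return (x.sin() * scale).sum()

# `specialize_float` is the config knob named in the commit message; with it
# disabled, Dynamo tries to tensorify the float (the wrap_symfloat path this
# commit touches) instead of burning it in as a constant.
torch._dynamo.config.specialize_float = False

x = torch.randn(3)
compiled = torch.compile(torch.func.grad(fn), backend="eager")
print(compiled(x, 3.0))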

File tree: 1 file changed, +8 −0

torch/_dynamo/variables/builder.py

Lines changed: 8 additions & 0 deletions
@@ -1914,6 +1914,14 @@ def wrap_symfloat(self, value):
         # time.
 
         wrapped_value = torch.tensor(value, dtype=torch.float64)
+
+        # We don't support specializing floats for grad checking tensors
+        # See https://github.com/pytorch/pytorch/pull/140828 for more
+        # context.
+        if torch._C._functorch.is_gradtrackingtensor(wrapped_value):
+            self.install_guards(GuardBuilder.CONSTANT_MATCH)
+            return ConstantVariable.create(value=value, source=self.source)
+
         # TODO: Switch RandomValueSource over to use this, this is more
         # accurate
         assert not isinstance(self.get_source(), RandomValueSource)
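
As a rough illustration of what the new check looks for (illustrative snippet, not part of the commit): inside a `torch.func.grad` transform, functorch wraps tensors at the active grad level, and `torch._C._functorch.is_gradtrackingtensor` reports that wrapping. When `wrap_symfloat` finds that its freshly created `torch.tensor(value)` is such a wrapped tensor, it skips float specialization, installs a `CONSTANT_MATCH` guard, and returns the value as a `ConstantVariable`, i.e. the float is treated as a compile-time constant and a changed value triggers recompilation.

import torch
from torch._C._functorch import is_gradtrackingtensor

def f(x):
    # Inside the grad transform, the wrapped input reports True.
    print("grad-tracking?", is_gradtrackingtensor(x))
    return x.sin().sum()

torch.func.grad(f)(torch.randn(3))            # grad-tracking? True
print(is_gradtrackingtensor(torch.randn(3)))  # plain tensor: False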
