Empty tensor handling #3891
base: main
Conversation
Force-pushed from 87ebaf5 to 547022d
Force-pushed from 547022d to 5d9d5fc
narendasan left a comment:
@apbose this may be a case where we would want TRT to properly handle this rather than us doing something hacky. Let's raise it with Yuan Yuo. Dummy inputs do not feel like the right solution.
core/runtime/execute_engine.cpp (Outdated)
auto shape = core::util::toVec(dims);
LOG_DEBUG("Input Name: " << name << " Shape: " << dims);

void* tensor_addr = nullptr;
I strongly want to avoid having nullptr basically anywhere; we should be looking for some sane default.
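For illustration, a minimal sketch of what such a "sane default" could look like, under stated assumptions: the runtime already holds the contiguous input as an `at::Tensor` (here called `contig_input`, a hypothetical name), and the caller owns a `placeholder` tensor that outlives the enqueue call consuming the returned address. This is not the PR's actual change, just one candidate shape for it.

```cpp
#include <ATen/ATen.h>

// Sketch only: return a usable, non-null device address for a possibly
// empty input instead of binding nullptr. `placeholder` is owned by the
// caller and must stay alive until after the engine has been enqueued.
void* resolve_input_address(const at::Tensor& contig_input, at::Tensor& placeholder) {
  if (contig_input.numel() == 0) {
    // Zero-element input: its storage may have no valid address, so bind a
    // one-element tensor of the same dtype/device; no data is ever read.
    placeholder = at::empty({1}, contig_input.options());
    return placeholder.data_ptr();
  }
  return contig_input.data_ptr();
}
```

This keeps ownership of the fallback allocation on the Torch side and avoids handing the engine a raw nullptr; whether that is preferable to TRT handling empty bindings itself is the question raised earlier in the thread.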
Force-pushed from 47be81b to f411fd1
narendasan left a comment:
It's looking good. Please add a test case, then it should be good to merge.
This PR addresses the case of empty tensors in Torch-TensorRT, based on https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/advanced.html#empty-tensors, and also covers a concat edge case involving an empty tensor.
TODO: The concat case may need to be split out of this PR. It covers concatenating torch.Tensor([]) with a higher-rank tensor, which is valid in PyTorch but not in TRT (see the sketch below).
This addresses #3865. A corresponding HF transformers issue has been raised (huggingface/transformers#42027), since the empty tensor should not arise there in the first place.
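For context, here is a minimal LibTorch rendering of the concat edge case from the TODO above. It is illustrative only: the Python form is torch.cat([torch.Tensor([]), x]), and the exact PyTorch behavior around empty 1-D inputs can vary by version; the PR description is the source for "valid in PyTorch but not in TRT".

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // torch.Tensor([]) in Python: a 1-D tensor with zero elements (shape [0]).
  auto empty = torch::empty({0});
  auto x = torch::randn({2, 3});

  // PyTorch historically accepts this mix of ranks and effectively skips the
  // zero-element 1-D input, so this succeeds and `out` has shape [2, 3].
  auto out = torch::cat({empty, x}, /*dim=*/0);
  std::cout << out.sizes() << std::endl;

  // TensorRT's concatenation requires all inputs to have the same rank, so
  // the lowering has to drop (or reshape) the empty operand before building
  // the TRT layer; that mismatch is the edge case this PR is concerned with.
  return 0;
}
```

Built against LibTorch this runs eagerly without issue; the same pattern traced through Torch-TensorRT is where the TRT-side rank restriction surfaces.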