Got error when running infer_onnx_tensorrt example #1333

@zhz17

Description

Environment

FastDeploy version:
fastdeploy-gpu-python 1.0.3

OS Platform:
Linux cm.bigdata 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.9.2009 (Core)

Hardware:
Nvidia GPU RTX A4000 CUDA 11.2 CUDNN 8.2

Program Language:
Python 3.10

Problem description

When running the infer_onnx_tensorrt example
FastDeploy/examples/runtime/python/infer_onnx_tensorrt.py
the following error message occurs:
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(256)::InitFromOnnx [ERROR] Error occurs while calling cudaStreamCreate().
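
A failure inside cudaStreamCreate() usually points at the CUDA driver/runtime setup rather than at FastDeploy itself. One way to narrow this down is to call cudaStreamCreate() directly from Python via ctypes, outside of FastDeploy. This is a hypothetical diagnostic sketch (not part of FastDeploy); it assumes libcudart.so is the CUDA runtime library name on Linux:

```python
import ctypes


def check_cuda_stream():
    """Try to create a CUDA stream directly via the CUDA runtime,
    mirroring the cudaStreamCreate() call that fails inside FastDeploy.
    Returns a short status string describing the outcome."""
    try:
        # Load the CUDA runtime shared library (path/name is an assumption;
        # it may need LD_LIBRARY_PATH to point at the CUDA install).
        cudart = ctypes.CDLL("libcudart.so")
    except OSError:
        return "libcudart.so not found (CUDA runtime missing or not on LD_LIBRARY_PATH)"
    stream = ctypes.c_void_p()
    # cudaStreamCreate returns 0 (cudaSuccess) on success.
    err = cudart.cudaStreamCreate(ctypes.byref(stream))
    if err != 0:
        return f"cudaStreamCreate failed with CUDA error code {err}"
    cudart.cudaStreamDestroy(stream)
    return "ok"


print(check_cuda_stream())
```

If this standalone call also fails, the problem is in the environment (driver/CUDA version mismatch, no visible GPU, or exhausted GPU memory) rather than in the example script; checking `nvidia-smi` output and that the driver supports CUDA 11.2 would be the next step.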
