C++ inference of BiRefNet based on TensorRT.
Demo images: Source | DichotomousImage-Gray | DichotomousImage-Pseudo
The inference time includes the pre-processing and post-processing stages:
| Device (System) | Model | Model Input (WxH) | Image Resolution (WxH) | Inference Time (ms) |
|---|---|---|---|---|
| RTX-3080 (Windows 11) | BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth | 1920x1080 | (1920x2)x1080 | 130 |
| RTX-A5500 (Ubuntu) | BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth | 3577x2163 | (3577x2)x2163 | 120 |

Install TensorRT following the official TensorRT guidance.
For Windows (see the official TensorRT Windows installation guide):

1. Unzip the `TensorRT-10.x.x.x.Windows10.x86_64.cuda-x.x.zip` file to the location that you chose, where `10.x.x.x` is your TensorRT version and `cuda-x.x` is CUDA version `12.4`, `11.8`, or `12.0`. Everything is unpacked into a new subdirectory named `TensorRT-10.x.x.x`, referred to as `<installpath>` in the steps below.
2. Add the TensorRT libraries to your system `PATH`. To do so, copy the DLL files from `<installpath>/lib` to your CUDA installation directory, for example, `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin`, where `vX.Y` is your CUDA version. The CUDA installer should have already added the CUDA path to your system `PATH`.
3. Download and install any recent OpenCV for Windows.
Modify the TensorRT and OpenCV paths in `CMakeLists.txt`:
```cmake
# Find and include OpenCV
set(OpenCV_DIR "your path to OpenCV")
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

# Set TensorRT path if not set in environment variables
set(TENSORRT_DIR "your path to TensorRT")
```
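The snippet above only sets `TENSORRT_DIR`; a typical way such a variable is then consumed looks like the sketch below. The target name `BiRefNet` and the library names are assumptions — library names in particular vary between TensorRT versions and platforms, so follow the repo's actual `CMakeLists.txt`.

```cmake
# Use TENSORRT_DIR to locate TensorRT headers and libraries
include_directories(${TENSORRT_DIR}/include)
link_directories(${TENSORRT_DIR}/lib)

# Link the inference executable against TensorRT and OpenCV
# (nvinfer/nvonnxparser names are illustrative; they differ by TensorRT version)
target_link_libraries(BiRefNet nvinfer nvonnxparser ${OpenCV_LIBS})
```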
Build the project using the following commands, or with cmake-gui (Windows).
Windows:

```shell
mkdir build
cd build
cmake ..
cmake --build . --config Release
```

Linux:

```shell
mkdir build
cd build && mkdir out_dir
cmake ..
make
```
Finally, copy the OpenCV DLL files, such as `opencv_world490.dll` and `opencv_videoio_ffmpeg490_64.dll`, into the `<BiRefNet_install_path>/build/Release` folder.
Perform the following steps to create an ONNX model:
Download the pretrained model and install BiRefNet:
```shell
git clone https://github.com/ZhengPeng7/BiRefNet.git
cd BiRefNet
# create a new conda environment
conda create -n BiRefNet python=3.8
conda activate BiRefNet
pip install torch torchvision
pip install opencv-python
pip install onnx
pip install -r requirements.txt
# copy the model and conversion scripts to the root path of BiRefNet
cp path_to_BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth .
cp cpp/py/pth2onnx.py .
cp cpp/py/deform_conv2d_onnx_exporter.py .
```
Export the model to ONNX format using `pth2onnx.py`.

**Tip:** You can modify the size of the input and output images, such as 512x512.
Build the TensorRT engine from the ONNX model with `trtexec`:

```shell
trtexec --onnx=BiRefNet-general-bb_swin_v1_tiny-epoch_232.onnx --saveEngine=BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine
```
**Note:** To accelerate inference, add the `--fp16` flag when building the engine so that the model runs in half precision.
```shell
BiRefNet.exe <engine> <input image or video>
```
Example:
```shell
# infer an image
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine test.jpg

# infer a folder (images)
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine data

# infer a video
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine test.mp4
```
This project is based on the following projects:

- BiRefNet: https://github.com/ZhengPeng7/BiRefNet