
lbq779660843/BiRefNet-Tensorrt: The C++ inference of BiRefNet based on TensorRT.

[Example images: Source, DichotomousImage-Gray, DichotomousImage-Pseudo]

The inference time includes the pre-processing and post-processing stages:

| Device (System) | Model | Model Input (WxH) | Image Resolution (WxH) | Inference Time (ms) |
|---|---|---|---|---|
| RTX-3080 (Windows 11) | BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth | 1920x1080 | (1920x2)x1080 | 130 |
| RTX-A5500 (Ubuntu) | BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth | 3577x2163 | (3577x2)x2163 | 120 |
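Since the timings above cover the whole pipeline, they are measured around all three stages, not just the TensorRT execution. A minimal sketch of that kind of end-to-end measurement (the stage functions here are placeholders, not the project's actual API):

```python
import time

def preprocess(frame):
    # Placeholder: resizing/normalization would happen here.
    return frame

def infer(tensor):
    # Placeholder: TensorRT engine execution would happen here.
    return tensor

def postprocess(output):
    # Placeholder: thresholding/blending would happen here.
    return output

def timed_pipeline(frame):
    """Run all three stages and report the total elapsed time in ms."""
    start = time.perf_counter()
    out = postprocess(infer(preprocess(frame)))
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return out, elapsed_ms

out, ms = timed_pipeline("frame")
print(f"inference time: {ms:.2f} ms")
```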
  1. Install TensorRT following the official TensorRT installation guide.

    Windows:
    1. Download the TensorRT zip file that matches the Windows version you are using.
    2. Choose where you want to install TensorRT. The zip file will install everything into a subdirectory called TensorRT-10.x.x.x; this subdirectory is referred to as <installpath> in the steps below.
    3. Unzip the TensorRT-10.x.x.x.Windows10.x86_64.cuda-x.x.zip file to the location that you chose.
    4. Add the TensorRT library files to your system PATH. To do so, copy the DLL files from <installpath>/lib to your CUDA installation directory, for example, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin, where vX.Y is your CUDA version. The CUDA installer should have already added the CUDA path to your system PATH.

    Linux: follow the official TensorRT installation guide for Linux.

  2. Download and install any recent OpenCV for Windows.

  3. Modify the TensorRT and OpenCV paths in CMakeLists.txt:

    # Find and include OpenCV
    set(OpenCV_DIR "your path to OpenCV")
    find_package(OpenCV REQUIRED)
    include_directories(${OpenCV_INCLUDE_DIRS})
    
    # Set TensorRT path if not set in environment variables
    set(TENSORRT_DIR "your path to TensorRT")
    
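With those two paths set, the remaining CMake wiring might look like the following sketch. The target name `BiRefNet` and the source file name are assumptions, not copied from the project; `nvinfer` and `nvonnxparser` are the standard TensorRT library names:

```cmake
# Sketch: include and link TensorRT once TENSORRT_DIR is set
include_directories(${TENSORRT_DIR}/include)
link_directories(${TENSORRT_DIR}/lib)

add_executable(BiRefNet main.cpp)   # source file name assumed
target_link_libraries(BiRefNet
    ${OpenCV_LIBS}
    nvinfer          # core TensorRT runtime
    nvonnxparser     # ONNX parser, if engines are built in code
    cudart)          # CUDA runtime
```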
  4. Build the project using the following commands, or use cmake-gui (Windows).

    Windows:
    mkdir build
    cd build
    cmake ..
    cmake --build . --config Release

    Linux (not tested):
    mkdir build
    cd build && mkdir out_dir
    cmake ..
    make
  5. Finally, copy the OpenCV DLL files, such as opencv_world490.dll and opencv_videoio_ffmpeg490_64.dll, into the <BiRefNet_install_path>/build/Release folder.

Perform the following steps to create an ONNX model:

  1. Download the pretrained model and install BiRefNet:

    git clone https://github.com/ZhengPeng7/BiRefNet.git
    cd BiRefNet
    
    # create a new conda environment
    conda create -n BiRefNet python=3.8
    conda activate BiRefNet
    pip install torch torchvision
    pip install opencv-python
    pip install onnx
    
    pip install -r requirements.txt
    
    # copy the model and the conversion scripts to the BiRefNet root path
    cp path_to_BiRefNet-general-bb_swin_v1_tiny-epoch_232.pth .
    cp cpp/py/pth2onnx.py .
    cp cpp/py/deform_conv2d_onnx_exporter.py .
  2. Export the model to ONNX format using pth2onnx.py.

Tip

You can modify the size of the input and output images, for example to 512x512.

Convert the ONNX model to a TensorRT engine with trtexec:

trtexec --onnx=BiRefNet-general-bb_swin_v1_tiny-epoch_232.onnx --saveEngine=BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine

Note

To accelerate inference, add the --fp16 flag to the trtexec command above to build a half-precision engine.

BiRefNet.exe <engine> <input image or video>

Example:

# infer a single image
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine test.jpg
# infer a folder of images
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine data
# infer a video
BiRefNet.exe BiRefNet-general-bb_swin_v1_tiny-epoch_232.engine test.mp4

This project is based on the following projects:

