💗 .NET Wrapper for PaddleInference
C API, supports Windows (x64) 💻, NVIDIA CUDA 11.8+ based GPU 🎮 and Linux (Ubuntu-22.04 x64) 🐧. It currently contains the following main components:

- Rotation detection: uses the text_image_orientation_infer model to detect a text image's rotation angle (0, 90, 180, 270).
- PaddleNLP Lac Chinese segmenter model, supports tagging/customized words.
- Export ONNX models using C#. Please check out this page 📄.
Infrastructure packages 🏗️

| NuGet Package 💼 | Version 📌 | Description 📚 |
| ---------------- | ---------- | -------------- |
| Sdcb.PaddleInference | | Paddle Inference C API .NET binding ⚙️ |

| Package | Version 📌 | Description |
| ------- | ---------- | ----------- |
| Sdcb.PaddleInference.runtime.win64.mkl | | Recommended for most users (CPU, MKL) |
| Sdcb.PaddleInference.runtime.win64.openblas | | CPU, OpenBLAS |
| Sdcb.PaddleInference.runtime.win64.openblas-noavx | | CPU, no AVX, for old CPUs |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm61 | | CUDA 11.8, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm75 | | CUDA 11.8, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm86 | | CUDA 11.8, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm89 | | CUDA 11.8, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm61 | | CUDA 12.6, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm75 | | CUDA 12.6, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm86 | | CUDA 12.6, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm89 | | CUDA 12.6, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm61 | | CUDA 12.9, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm75 | | CUDA 12.9, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm86 | | CUDA 12.9, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm89 | | CUDA 12.9, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm120 | | CUDA 12.9, RTX 50 Series |
| Sdcb.PaddleInference.runtime.linux-x64.openblas | | Linux x64, OpenBLAS |
| Sdcb.PaddleInference.runtime.linux-x64.mkl | | Linux x64, MKL |
| Sdcb.PaddleInference.runtime.linux-x64 | | Linux x64, MKL+OpenVINO |
| Sdcb.PaddleInference.runtime.linux-arm64 | | Linux ARM64 |
| Sdcb.PaddleInference.runtime.osx-x64 | | macOS x64, includes ONNXRuntime |
| Sdcb.PaddleInference.runtime.osx-arm64 | | macOS ARM64 |

Package Selection Guide:
- Choose `Sdcb.PaddleInference.runtime.win64.mkl` for most users. It offers the best balance between performance and package size. Please note that this package does not support GPU acceleration, making it suitable for most general scenarios.
- `openblas-noavx` is tailored for older CPUs that do not support the AVX2 instruction set (a quick AVX2 check is sketched a few lines below).

Important: Not all GPU packages are suitable for every card. Please refer to the following GPU-to-`sm` suffix mapping:
| `sm` Suffix | Supported GPU Series |
| ----------- | -------------------- |
| sm61 | GTX 10 Series |
| sm75 | RTX 20 Series (and GTX 16xx series such as GTX 1660) |
| sm86 | RTX 30 Series |
| sm89 | RTX 40 Series |
| sm120 | RTX 50 Series (supported by CUDA 12.9 only) |
Any other package that starts with `Sdcb.PaddleInference.runtime` might be deprecated.
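If you are unsure whether a target machine supports AVX2 (which decides between the mkl/openblas packages and openblas-noavx), a quick check can be done from .NET itself. This is a minimal sketch using only the standard System.Runtime.Intrinsics.X86 API; it is not part of PaddleSharp:

```csharp
using System;
using System.Runtime.Intrinsics.X86;

// Prints whether the current CPU supports AVX2.
// If it prints False, prefer the Sdcb.PaddleInference.runtime.win64.openblas-noavx
// runtime package and configure PaddleDevice.Openblas() instead of PaddleDevice.Mkldnn().
Console.WriteLine($"AVX2 supported: {Avx2.IsSupported}");
```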
All packages were compiled manually by me, with some code patches from here: https://github.com/sdcb/PaddleSharp/blob/master/build/capi.patch
- Mkldnn - `PaddleDevice.Mkldnn()`: based on MKL-DNN, generally fast.
- Openblas - `PaddleDevice.Openblas()`: based on OpenBLAS; slower, but the dependency files are smaller and it consumes less memory.
- Onnx - `PaddleDevice.Onnx()`: based on ONNX Runtime; also quite fast and consumes less memory.
- Gpu - `PaddleDevice.Gpu()`: much faster, but relies on an NVIDIA GPU and CUDA.

If you want to use the GPU, refer to the FAQ section "How to enable GPU?"; CUDA/cuDNN/TensorRT need to be installed manually. A minimal device-selection sketch follows below.
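For orientation, here is a minimal sketch of how one of these device options is typically passed to a consumer. It assumes the companion Sdcb.PaddleOCR package: `PaddleOcrAll`, `LocalFullModels`, and the OpenCvSharp-based `Mat` input are taken from that package's samples rather than from this section, so treat the exact names as assumptions and adapt them to the package versions you actually use:

```csharp
using System;
using OpenCvSharp;                    // Mat/Cv2 (OpenCvSharp4, used by the Sdcb.PaddleOCR samples)
using Sdcb.PaddleInference;           // PaddleDevice.Mkldnn()/Openblas()/Onnx()/Gpu()
using Sdcb.PaddleOCR;                 // PaddleOcrAll, PaddleOcrResult (assumed companion package)
using Sdcb.PaddleOCR.Models;          // FullOcrModel
using Sdcb.PaddleOCR.Models.Local;    // LocalFullModels (assumed local-models package)

// Pick exactly one device per predictor; swap Mkldnn() for Openblas(), Onnx() or Gpu() as needed.
FullOcrModel model = LocalFullModels.ChineseV3;
using PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true,       // allow rotated text angle detection
    Enable180Classification = false,   // set to true if inputs may be upside down
};

using Mat src = Cv2.ImRead("sample.jpg");
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);
```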
Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if you have Visual Studio installed) 🛠️. Otherwise, it will fail with the following error (Windows only):
DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
If the error is Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies, then most likely Media Foundation is not installed on the Windows Server 2012 R2 machine.
Many old CPUs do not support AVX instructions. Please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn by using PaddleDevice.Openblas() 🚀
If you're using Win7-x64 and your CPU does support AVX2, then you might also need to extract the following 3 DLLs into the C:\Windows\System32
folder to make it run: 💾
You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip ⬇️
Enabling GPU support can significantly improve throughput and lower CPU usage. 🚀
Steps to use the GPU on Windows:
1. Install one of the `Sdcb.PaddleInference.runtime.win64.cu*` GPU runtime packages from the table above instead of `Sdcb.PaddleInference.runtime.win64.mkl`; do not install both. 📦
2. Install CUDA from NVIDIA and configure the environment variables: `PATH` (Windows) or `LD_LIBRARY_PATH` (Linux). 🔧
3. Install cuDNN from NVIDIA and configure the environment variables: `PATH` (Windows) or `LD_LIBRARY_PATH` (Linux). 🛠️
4. Install TensorRT from NVIDIA and configure the environment variables: `PATH` (Windows) or `LD_LIBRARY_PATH` (Linux). ⚙️

You can refer to this blog page for GPU setup on Windows: 关于PaddleSharp GPU使用 常见问题记录 (notes on common questions about PaddleSharp GPU usage) 📝
If you're using Linux, you need to compile your own OpenCvSharp4 environment following the docker build scripts and the CUDA/cuDNN/TensorRT configuration tasks. 🐧
After these steps are completed, you can try specifying PaddleDevice.Gpu()
in the paddle device configuration parameter, then enjoy the performance boost! 🎉
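As a rough illustration of that last step, here is a sketch that reuses the same assumed `PaddleOcrAll`-style constructor as the earlier CPU example; the `GpuOcrFactory` helper is made up for this example only:

```csharp
using Sdcb.PaddleInference;        // PaddleDevice.Gpu()
using Sdcb.PaddleOCR;              // PaddleOcrAll (assumed from the companion Sdcb.PaddleOCR package)
using Sdcb.PaddleOCR.Models;       // FullOcrModel

static class GpuOcrFactory
{
    // Hypothetical helper: builds an OCR pipeline on the GPU device.
    // Requires one of the cu* GPU runtime packages plus CUDA/cuDNN(/TensorRT)
    // installed and visible on PATH / LD_LIBRARY_PATH, as described above.
    public static PaddleOcrAll Create(FullOcrModel model) => new(model, PaddleDevice.Gpu())
    {
        AllowRotateDetection = true,
        Enable180Classification = false,
    };
}
```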
QQ group of C#/.NET computer vision technical communication (C#/.NET计算机视觉技术交流群): 579060605