sdcb/PaddleSharp: .NET/C# binding for Baidu paddle inference library and PaddleOCR

💗 .NET wrapper for the PaddleInference C API, supporting Windows (x64) 💻, NVIDIA CUDA 11.8+ based GPUs 🎮, and Linux (Ubuntu 22.04 x64) 🐧. It currently contains the following main components:

NuGet Packages/Docker Images 📦

Please check out this page 📄.

Infrastructure packages 🏗️

| NuGet Package 💼 | Description 📚 |
| --- | --- |
| Sdcb.PaddleInference | Paddle Inference C API .NET binding ⚙️ |

| Package | Description |
| --- | --- |
| Sdcb.PaddleInference.runtime.win64.mkl | Recommended for most users (CPU, MKL) |
| Sdcb.PaddleInference.runtime.win64.openblas | CPU, OpenBLAS |
| Sdcb.PaddleInference.runtime.win64.openblas-noavx | CPU, no AVX, for old CPUs |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm61 | CUDA 11.8, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm75 | CUDA 11.8, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm86 | CUDA 11.8, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu118_cudnn89_sm89 | CUDA 11.8, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm61 | CUDA 12.6, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm75 | CUDA 12.6, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm86 | CUDA 12.6, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu126_cudnn95_sm89 | CUDA 12.6, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm61 | CUDA 12.9, GTX 10 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm75 | CUDA 12.9, RTX 20/GTX 16xx Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm86 | CUDA 12.9, RTX 30 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm89 | CUDA 12.9, RTX 40 Series |
| Sdcb.PaddleInference.runtime.win64.cu129_cudnn910_sm120 | CUDA 12.9, RTX 50 Series |
| Sdcb.PaddleInference.runtime.linux-x64.openblas | Linux x64, OpenBLAS |
| Sdcb.PaddleInference.runtime.linux-x64.mkl | Linux x64, MKL |
| Sdcb.PaddleInference.runtime.linux-x64 | Linux x64, MKL+OpenVINO |
| Sdcb.PaddleInference.runtime.linux-arm64 | Linux ARM64 |
| Sdcb.PaddleInference.runtime.osx-x64 | macOS x64, includes ONNXRuntime |
| Sdcb.PaddleInference.runtime.osx-arm64 | macOS ARM64 |

Package Selection Guide:

Important:
Not all GPU packages are suitable for every card. Please refer to the following GPU-to-sm suffix mapping:

| sm Suffix | Supported GPU Series |
| --- | --- |
| sm61 | GTX 10 Series |
| sm75 | RTX 20 Series (and GTX 16xx series such as GTX 1660) |
| sm86 | RTX 30 Series |
| sm89 | RTX 40 Series |
| sm120 | RTX 50 Series (supported by CUDA 12.9 only) |

Any other package that starts with Sdcb.PaddleInference.runtime might be deprecated.
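
To illustrate how the package choice maps to code, here is a minimal sketch. Only PaddleDevice.Openblas() and PaddleDevice.Gpu() are named elsewhere in this README; PaddleDevice.Mkldnn() and the exact API shapes are assumptions about the Sdcb.PaddleInference package.

```csharp
// Minimal sketch (assumed API shapes): the runtime package you install provides the native
// binaries, and the matching PaddleDevice factory selects that backend at run time.
// PaddleDevice.Openblas()/Gpu() are named in this README; PaddleDevice.Mkldnn() is an assumption.
using System;
using Sdcb.PaddleInference;

// Sdcb.PaddleInference.runtime.win64.mkl              -> CPU with MKL (Mkldnn enabled)
var cpuMkl = PaddleDevice.Mkldnn();

// Sdcb.PaddleInference.runtime.win64.openblas(-noavx) -> CPU, OpenBLAS, Mkldnn disabled
var cpuOpenblas = PaddleDevice.Openblas();

// Sdcb.PaddleInference.runtime.win64.cu1xx_cudnnxx_smxx -> NVIDIA GPU (match the sm suffix above)
var gpu = PaddleDevice.Gpu();

Console.WriteLine("Install exactly one runtime package and pass its matching device factory.");
```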

All packages were compiled manually by me, with some code patches from here: https://github.com/sdcb/PaddleSharp/blob/master/build/capi.patch

Why does my code run fine on my Windows machine, but throw a DllNotFoundException on another machine? 💻
  1. Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if you have Visual Studio installed) 🛠️. Otherwise, it will fail with the following error (Windows only):

    DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
    

    If the error is Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies, then most likely Media Foundation is not installed on the Windows Server 2012 R2 machine.

  2. Many old CPUs do not support AVX instructions. Please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn by using PaddleDevice.Openblas() 🚀 (see the sketch after this list).

  3. If you're using Win7 x64, and your CPU does support AVX2, then you might also need to extract the following 3 DLLs into the C:\Windows\System32 folder to make it run: 💾

    You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip ⬇️
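
For point 2 above, here is a minimal sketch of running OCR on the OpenBLAS backend (no AVX, Mkldnn disabled). Only PaddleDevice.Openblas() appears in this README; PaddleOcrAll, LocalFullModels, PaddleOcrResult, and the OpenCvSharp calls are assumptions based on the Sdcb.PaddleOCR package this repository also provides.

```csharp
// Sketch only (assumed OCR API): run recognition on a CPU without AVX by
// choosing the OpenBLAS backend instead of Mkldnn.
using System;
using OpenCvSharp;                      // needs OpenCvSharpExtern.dll at run time
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;                   // assumed package providing PaddleOcrAll
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;      // assumed package providing LocalFullModels

FullOcrModel model = LocalFullModels.ChineseV3;                // assumed bundled model
using PaddleOcrAll ocr = new(model, PaddleDevice.Openblas());  // OpenBLAS, Mkldnn off
using Mat src = Cv2.ImRead("sample.jpg");
PaddleOcrResult result = ocr.Run(src);
Console.WriteLine(result.Text);
```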

Enabling GPU support can significantly improve throughput and lower CPU usage. 🚀

Steps to use GPU in Windows:

  1. (for Windows) Install one of the GPU runtime packages listed above (Sdcb.PaddleInference.runtime.win64.cu118*/cu126*/cu129*) instead of Sdcb.PaddleInference.runtime.win64.mkl; do not install both. 📦
  2. Install CUDA from NVIDIA, and add its directories to the PATH environment variable (or LD_LIBRARY_PATH on Linux) 🔧
  3. Install cuDNN from NVIDIA, and add its directories to the PATH environment variable (or LD_LIBRARY_PATH on Linux) 🛠️
  4. Install TensorRT from NVIDIA, and add its directories to the PATH environment variable (or LD_LIBRARY_PATH on Linux) ⚙️

You can refer to this blog post for GPU usage on Windows: 关于PaddleSharp GPU使用 常见问题记录 (notes on common questions about PaddleSharp GPU usage) 📝

If you're using Linux, you need to compile your own OpenCvSharp4 environment following the Docker build scripts, and complete the CUDA/cuDNN/TensorRT configuration tasks. 🐧

After these steps are completed, you can try specifying PaddleDevice.Gpu() as the paddle device configuration parameter and enjoy the performance boost! 🎉
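
A minimal sketch of that last step, assuming the same OCR types as the CPU example above; only PaddleDevice.Gpu() is taken from this README:

```csharp
// Sketch only: once CUDA/cuDNN/TensorRT are installed and on PATH, switch the
// device factory from a CPU backend to PaddleDevice.Gpu(). The OCR types are assumptions.
using System;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;

FullOcrModel model = LocalFullModels.ChineseV3;
using PaddleOcrAll ocr = new(model, PaddleDevice.Gpu());   // was a CPU factory such as PaddleDevice.Openblas()
using Mat src = Cv2.ImRead("sample.jpg");
Console.WriteLine(ocr.Run(src).Text);
```

If this throws at startup, temporarily switching back to a CPU device factory is a quick way to check whether the problem is in the CUDA/cuDNN/TensorRT setup rather than in your code.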

QQ group for C#/.NET computer vision technical discussion (C#/.NET计算机视觉技术交流群): 579060605

