PyCUDA: Pythonic Access to CUDA, with Arrays and Algorithms
PyCUDA lets you access Nvidia's CUDA parallel computation API from Python. Several wrappers of the CUDA API already exist, so what's so special about PyCUDA?
- Object cleanup tied to lifetime of objects. This idiom, often called RAII in C++, makes it much easier to write correct, leak- and crash-free code. PyCUDA knows about dependencies, too, so (for example) it won't detach from a context before all memory allocated in it is also freed.
- Convenience. Abstractions like pycuda.compiler.SourceModule and pycuda.gpuarray.GPUArray make CUDA programming even more convenient than with Nvidia's C-based runtime (see the short sketches after this list).
- Completeness. PyCUDA puts the full power of CUDA's driver API at your disposal, if you wish. It also includes code for interoperability with OpenGL.
- Automatic Error Checking. All CUDA errors are automatically translated into Python exceptions.
- Speed. PyCUDA's base layer is written in C++, so all the niceties above are virtually free.
- Helpful Documentation.
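A minimal kernel-launch sketch in the spirit of PyCUDA's bundled demo, assuming a CUDA-capable GPU and a working nvcc. The kernel name and array size are illustrative. Importing pycuda.autoinit creates a context whose cleanup is handled automatically (the RAII behavior described above), and compilation or launch failures surface as Python exceptions:

```python
import numpy as np

import pycuda.autoinit  # noqa: F401 -- initializes CUDA and creates a context; cleanup is automatic
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Compile CUDA C source at runtime; errors are raised as Python exceptions.
mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
    const int i = threadIdx.x;
    dest[i] = a[i] * b[i];
}
""")
multiply_them = mod.get_function("multiply_them")

a = np.random.randn(400).astype(np.float32)
b = np.random.randn(400).astype(np.float32)
dest = np.zeros_like(a)

# drv.In/drv.Out handle the host<->device copies around the launch.
multiply_them(
    drv.Out(dest), drv.In(a), drv.In(b),
    block=(400, 1, 1), grid=(1, 1))

assert np.allclose(dest, a * b)
```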
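And a GPUArray sketch: NumPy-like arrays that live in GPU memory, with elementwise arithmetic executed on the device. The shape and dtype here are arbitrary:

```python
import numpy as np

import pycuda.autoinit  # noqa: F401
import pycuda.gpuarray as gpuarray

# Copy a NumPy array to the GPU, compute on it there, and copy the result back.
a_gpu = gpuarray.to_gpu(np.random.randn(4, 4).astype(np.float32))
a_doubled = (2 * a_gpu).get()  # the multiply runs on the GPU; .get() copies back to host
print(a_doubled)
```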
Relatedly, like-minded computing goodness for OpenCL is provided by PyCUDA's sister project PyOpenCL.
About
CUDA integration for Python, plus shiny features