Welcome to our tutorial on GPU-Accelerated Signal Processing with cuSignal! cuSignal is a free and open-source software library designed to support and extend SciPy Signal functionality on NVIDIA GPUs. Housed under NVIDIA's RAPIDS open data science project, cuSignal delivers 100-300x speedups over CPU with a fully Pythonic API.
Our goal is to provide an interactive and collaborative tutorial, full of GPU goodies and best practices, showing that you really can achieve eye-popping speedups with Python. We want to demonstrate the ease and flexibility of creating and implementing GPU-based, high-performance signal processing workloads from Python, and you can expect to learn as much about using cuSignal as about extending cuSignal via your own Python-CUDA code.
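To give a flavor of what this looks like, here is a minimal sketch (assuming cuSignal and CuPy are already installed; installation is covered below) that polyphase-resamples a signal with SciPy Signal on the CPU and with cuSignal on the GPU. Note how the cuSignal call mirrors the SciPy Signal API.

import numpy as np
from scipy import signal

import cupy as cp
import cusignal

# Create a noisy test signal on the CPU
num_samps = 2**20
cpu_sig = np.random.randn(num_samps)

# CPU: polyphase resampling with SciPy Signal
cpu_resampled = signal.resample_poly(cpu_sig, 2, 3)

# GPU: move the data to the device and call the matching cuSignal function
gpu_sig = cp.asarray(cpu_sig)
gpu_resampled = cusignal.resample_poly(gpu_sig, 2, 3)

# Copy the GPU result back to host memory when needed
print(cp.asnumpy(gpu_resampled).shape)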
We know that 2020 has been quite a year, to say the least, and it goes without saying that we wish we could all be in Toronto together. We hope that everyone continues to remain safe and healthy.
Let's get started.
Before we jump into code, we'll be walking through a presentation covering the use cases, features, performance, and technical backend of cuSignal. You can find a copy of these slides here. Note: We only have PDF slides for parts 1 and 2 of our talk.
cuSignal has been tested on and supports all modern GPUs - from Maxwell to Ampere. While Anaconda is the preferred installation mechanism for cuSignal, developers and Jetson users should follow the source build instructions below. As of cuSignal 0.16, there isn't a cuSignal conda package for aarch64. In general, it's assumed that the developer has already installed the NVIDIA CUDA Toolkit and associated GPU drivers.
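If you'd like to sanity-check your setup before installing anything, the following commands (standard NVIDIA tools, not part of cuSignal) report the detected GPU, driver, and CUDA Toolkit versions.

# Confirm the GPU and driver are visible
nvidia-smi
# Confirm the CUDA Toolkit compiler is available (needed for source builds)
nvcc --version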
Complete build instructions can be found on cuSignal's installation README, but we will highlight Anaconda Linux builds and No GPU instructions below.
cuSignal can be installed with conda (Miniconda, or the full Anaconda distribution) from the rapidsai channel. Once conda is installed, create a new cusignal environment and install the package. Instructions for doing this are shown below.
# Create Conda Environment
conda create --name cusignal python=3.8
# Activate Conda Environment
conda activate cusignal
# Install cuSignal into cusignal Environment
conda install -c rapidsai cusignal
# Confirm cuSignal and its Dependencies Successfully Installed
python
>>> import cusignal
>>> import cupy as cp
>>> from numba import cuda
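As an optional extra check (not part of the original install steps), you can run a quick GPU smoke test in the same interpreter. This minimal sketch designs a small low-pass FIR filter and filters white noise entirely on the device.

>>> # Optional smoke test: design a low-pass FIR filter and apply it on the GPU
>>> gpu_sig = cp.random.randn(2**14)
>>> taps = cusignal.firwin(64, 0.5)
>>> filtered = cusignal.fftconvolve(gpu_sig, taps, mode='same')
>>> filtered.shape
(16384,)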
No GPU? No problem. We can use Google Colab for access to a no-cost GPU instance.
First, select File -> New Notebook to create a new Colab notebook. Then enable GPU acceleration via Runtime -> Change Runtime Type -> GPU. Finally, paste and run the following cell to install cuSignal:
# Install cuSignal 0.18 on Colab
!git clone https://github.com/awthomp/cusignal-icassp-tutorial.git
!bash cusignal-icassp-tutorial/colab/cusignal-colab.sh 0.18

# Place the freshly installed site-packages ahead of Colab's dist-packages on the path
import sys, os
dist_package_index = sys.path.index('/usr/local/lib/python3.7/dist-packages')
sys.path = sys.path[:dist_package_index] + ['/usr/local/lib/python3.7/site-packages'] + sys.path[dist_package_index:]
import cusignal
import cupy as cp
from numba import cuda

# Check versions
print(cusignal.__version__)
print(cp.__version__)

Notebooks Used in Today's Tutorial