Release Notes :: CUDA Toolkit Documentation
1. CUDA Toolkit Major Components
This section provides an overview of the major components of the NVIDIA® CUDA® Toolkit and points to their locations after installation.
Compiler
The CUDA-C and CUDA-C++ compiler, nvcc, is found in the bin/ directory. It is built on top of the NVVM optimizer, which is itself built on top of the LLVM compiler infrastructure. Developers who want to target NVVM directly can do so using the Compiler SDK, which is available in the nvvm/ directory.
Please note that the following files are compiler-internal and subject to change without any prior notice.
- any file in include/crt and bin/crt
- include/common_functions.h, include/device_double_functions.h, include/device_functions.h, include/host_config.h, include/host_defines.h, and include/math_functions.h
- nvvm/bin/cicc
- bin/cudafe++, bin/bin2c, and bin/fatbinary
Tools
The following development tools are available in the bin/ directory, except for Nsight Visual Studio Edition (VSE), which is installed as a plug-in to Microsoft Visual Studio, and Nsight Compute and Nsight Systems, which are installed in separate directories.
- IDEs: nsight (Linux, Mac), Nsight VSE (Windows)
- Debuggers: cuda-memcheck, cuda-gdb (Linux), Nsight VSE (Windows)
- Profilers: Nsight Systems, Nsight Compute, nvprof, nvvp, ncu, Nsight VSE (Windows)
- Utilities: cuobjdump, nvdisasm
Libraries
The scientific and utility libraries listed below are available in the lib64/ directory (DLLs on Windows are in bin/), and their interfaces are available in the include/ directory.
- cub (High performance primitives for CUDA)
- cublas (BLAS)
- cublas_device (BLAS Kernel Interface)
- cuda_occupancy (Kernel Occupancy Calculation [header file implementation])
- cudadevrt (CUDA Device Runtime)
- cudart (CUDA Runtime)
- cufft (Fast Fourier Transform [FFT])
- cupti (CUDA Profiling Tools Interface)
- curand (Random Number Generation)
- cusolver (Dense and Sparse Direct Linear Solvers and Eigen Solvers)
- cusparse (Sparse Matrix)
- libcu++ (CUDA Standard C++ Library)
- nvJPEG (JPEG encoding/decoding)
- npp (NVIDIA Performance Primitives [image and signal processing])
- nvblas ("Drop-in" BLAS)
- nvcuvid (CUDA Video Decoder [Windows, Linux])
- nvml (NVIDIA Management Library)
- nvrtc (CUDA Runtime Compilation)
- nvtx (NVIDIA Tools Extension)
- thrust (Parallel Algorithm Library [header file implementation])
CUDA Samples
Code samples that illustrate how to use various CUDA and library APIs are available in the samples/ directory on Linux and Mac, and are installed to C:\ProgramData\NVIDIA Corporation\CUDA Samples on Windows. On Linux and Mac, the samples/ directory is read-only and the samples must be copied to another location if they are to be modified. Further instructions can be found in the Getting Started Guides for Linux and Mac.
Documentation
The most current version of these release notes can be found online at http://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html. The version.txt file in the root directory of the toolkit contains the version and build number of the installed toolkit.
Documentation can be found in PDF form in the doc/pdf/ directory, or in HTML form at doc/html/index.html and online at http://docs.nvidia.com/cuda/index.html.
CUDA-GDB Sources
CUDA-GDB sources are available as follows:
2. CUDA 11.1 Release Notes
The release notes for the CUDA Toolkit can be found online at http://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html.
2.1. CUDA Toolkit Major Component Versions
CUDA Components
Starting with CUDA 11, the various components in the toolkit are versioned independently.
For CUDA 11.1, the table below indicates the versions:
CUDA Driver
Running a CUDA application requires a system with at least one CUDA-capable GPU and a driver that is compatible with the CUDA Toolkit. See Table 2. For more information on GPU products that are CUDA-capable, visit https://developer.nvidia.com/cuda-gpus.
Each release of the CUDA Toolkit requires a minimum version of the CUDA driver. The CUDA driver is backward compatible, meaning that applications compiled against a particular version of the CUDA Toolkit will continue to work on subsequent (later) driver releases.
More information on compatibility can be found at https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-runtime-and-driver-api-version.
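The driver/runtime pairing described above can be checked at run time. A minimal sketch using two CUDA Runtime API calls (error handling omitted):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    // Version of the installed CUDA driver (reported as 0 if none is present).
    cudaDriverGetVersion(&driverVersion);
    // Version of the CUDA Runtime this application was built against.
    cudaRuntimeGetVersion(&runtimeVersion);
    // Versions are encoded as 1000*major + 10*minor, e.g. 11010 for 11.1.
    printf("Driver: %d.%d, Runtime: %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    // Absent the forward-compatibility packages, the driver version must be
    // at least the runtime version for the application to run.
    return 0;
}
```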
Note: Starting with CUDA 11.0, the toolkit components are individually versioned, and the toolkit itself is versioned as shown in the table below.
For convenience, the NVIDIA driver is installed as part of the CUDA Toolkit installation. Note that this driver is for development purposes and is not recommended for use in production with Tesla GPUs.
For running CUDA applications in production with Tesla GPUs, it is recommended to download the latest driver for Tesla GPUs from the NVIDIA driver downloads site at http://www.nvidia.com/drivers.
During the installation of the CUDA Toolkit, the installation of the NVIDIA driver may be skipped on Windows (when using the interactive or silent installation) or on Linux (by using meta packages).
For more information on customizing the install process on Windows, see http://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#install-cuda-software.
For meta packages on Linux, see https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#package-manager-metas
2.2. General CUDA
- Added support for NVIDIA Ampere GPU architecture based GA10x GPUs (compute capability 8.6), including the GeForce RTX-30 series.
- Enhanced CUDA compatibility across minor releases of CUDA enables CUDA applications to be compatible with all versions of a particular CUDA major release.
- CUDA 11.1 adds a new PTX Compiler static library that allows compilation of PTX programs using a set of APIs provided by the library. See https://docs.nvidia.com/cuda/ptx-compiler-api/index.html for details.
- Added the 7.1 version of the Parallel Thread Execution instruction set architecture (ISA). For more details on new (sm_86 target, mma.sp) and deprecated instructions, see https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#ptx-isa-version-7-1 in the PTX documentation.
- Added support for Fedora 32 and Debian 10.3 Buster on x86_64 platforms.
- Unified programming model for:
- async-copy
- async-pipeline
- async-barrier (cuda::barrier)
- Added hardware accelerated sparse texture support.
- Added support for read-only mapping for cudaHostRegister.
- CUDA Graphs enhancements:
- improved graphExec update
- external dependencies
- extended memcopy APIs
- presubmit
- Introduced a new system-level interface using /dev-based capabilities for cgroups-style isolation with MIG.
- Improved MPS error handling when using multiple GPUs.
- A fatal GPU exception generated by a Volta+ MPS client will be contained within the devices affected by it and other clients using those devices. Clients running on the other devices managed by the same MPS server can continue running as normal.
Users can now configure and query the per-context time slice duration for a GPU via nvidia-smi. Configuring the time slice will require administrator privileges and the allowed settings are default, short, medium and long. The time slice will only be applicable to CUDA applications that are executed after the configuration is applied.
- Improved detection and reporting of unsupported configurations.
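The unified async-barrier model listed above (cuda::barrier) can be illustrated in a short kernel. A minimal sketch using libcu++ as shipped with the toolkit; the kernel name and phase structure are ours for illustration:

```cuda
#include <cuda/barrier>  // libcu++ barrier, part of the CUDA Toolkit

// Each block synchronizes through a cuda::barrier object placed in
// shared memory, instead of relying solely on __syncthreads().
__global__ void two_phase(float *data) {
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;
    if (threadIdx.x == 0)
        init(&bar, blockDim.x);   // one thread initializes the barrier
    __syncthreads();              // make the initialized barrier visible

    data[threadIdx.x] *= 2.0f;    // phase 1 work
    bar.arrive_and_wait();        // all threads of the block rendezvous
    // phase 2 can now safely read the results written in phase 1
}
```

cuda::barrier also underpins the async-copy and async-pipeline models, where arrival can be decoupled from waiting.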
2.4. CUDA Libraries
2.4.1. cuFFT Library
- cuFFT is now L2-cache aware and uses L2 cache for GPUs with more than 4.5MB of L2 cache. Performance may improve in certain single-GPU 3D C2C FFT cases.
- After successfully creating a plan, cuFFT now enforces a lock on the cufftHandle. Subsequent calls to any planning function with the same cufftHandle will fail.
- Added support for very large sizes (3k cube) to multi-GPU cuFFT on DGX-2.
- Improved performance of multi-GPU cuFFT for certain sizes (1k cube).
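The new handle-locking behavior can be observed directly. A minimal sketch; these notes do not specify the exact error code returned by the second planning call, so it is checked only against CUFFT_SUCCESS:

```cuda
#include <cufft.h>

void plan_lock_demo() {
    cufftHandle plan;
    cufftCreate(&plan);

    size_t workSize = 0;
    // First planning call on the handle succeeds and locks it.
    cufftMakePlan1d(plan, 1024, CUFFT_C2C, 1, &workSize);

    // A second planning call on the same handle now fails instead of
    // silently replacing the existing plan.
    cufftResult r = cufftMakePlan1d(plan, 2048, CUFFT_C2C, 1, &workSize);
    if (r != CUFFT_SUCCESS) {
        // expected on CUDA 11.1: the handle is locked after plan creation
    }

    cufftDestroy(plan);
}
```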
2.4.2. cuSOLVER Library
- Added new 64-bit APIs:
- cusolverDnXpotrf_bufferSize
- cusolverDnXpotrf
- cusolverDnXpotrs
- cusolverDnXgeqrf_bufferSize
- cusolverDnXgeqrf
- cusolverDnXgetrf_bufferSize
- cusolverDnXgetrf
- cusolverDnXgetrs
- cusolverDnXsyevd_bufferSize
- cusolverDnXsyevd
- cusolverDnXsyevdx_bufferSize
- cusolverDnXsyevdx
- cusolverDnXgesvd_bufferSize
- cusolverDnXgesvd
- Added a new SVD algorithm based on polar decomposition, called GESVDP, which uses the new 64-bit API, including cusolverDnXgesvdp_bufferSize and cusolverDnXgesvdp.
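A minimal sketch of the 64-bit API pattern, using cusolverDnXpotrf for a double-precision Cholesky factorization of a device-resident matrix. Error checking is omitted for brevity, and the helper name cholesky is ours:

```cuda
#include <cstdlib>
#include <cuda_runtime.h>
#include <cusolverDn.h>

// Factor the lower triangle of an n x n matrix dA (leading dimension lda)
// in place; dInfo receives 0 on success.
void cholesky(cusolverDnHandle_t handle, double *dA, int64_t n, int64_t lda,
              int *dInfo) {
    cusolverDnParams_t params;
    cusolverDnCreateParams(&params);

    // Query device- and host-side workspace sizes in bytes.
    size_t devBytes = 0, hostBytes = 0;
    cusolverDnXpotrf_bufferSize(handle, params, CUBLAS_FILL_MODE_LOWER, n,
                                CUDA_R_64F, dA, lda, CUDA_R_64F,
                                &devBytes, &hostBytes);

    void *devWork = nullptr, *hostWork = nullptr;
    cudaMalloc(&devWork, devBytes);
    if (hostBytes > 0) hostWork = malloc(hostBytes);

    cusolverDnXpotrf(handle, params, CUBLAS_FILL_MODE_LOWER, n,
                     CUDA_R_64F, dA, lda, CUDA_R_64F,
                     devWork, devBytes, hostWork, hostBytes, dInfo);

    cudaFree(devWork);
    free(hostWork);
    cusolverDnDestroyParams(params);
}
```

The other X-suffixed routines listed above follow the same shape: a params object, separate data and compute types, and explicit device/host workspaces.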
2.4.3. CUDA Math Library
- Added host support for half and nv_bfloat16 conversions to/from integer types.
- Added __hcmadd() device only API for fast half2 and nv_bfloat162 based complex multiply-accumulate.
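A short sketch of both additions. The kernel name cmadd is ours, and the __half2 operands are treated as (real, imaginary) pairs, which is how the complex multiply-accumulate interprets them:

```cuda
#include <cuda_fp16.h>

// Host code: half <-> integer conversion intrinsics are now callable
// from the host, not just from device code.
void host_convert_demo() {
    __half h = __int2half_rn(42);
    int i = __half2int_rn(h);
    (void)i;
}

// Device code: __hcmadd computes a*b + c where each __half2 is a
// complex number stored as (real, imag). An nv_bfloat162 overload
// exists as well (cuda_bf16.h).
__global__ void cmadd(__half2 *out, const __half2 *a, const __half2 *b,
                      const __half2 *c) {
    int t = threadIdx.x;
    out[t] = __hcmadd(a[t], b[t], c[t]);
}
```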
2.5. Deprecated Features
The following features are deprecated in the current release of the CUDA software. The features still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software.
General CUDA
- Support for Ubuntu on IBM's ppc64le platforms is deprecated in this release and will be dropped in a future CUDA release.
CUDA Tools
- Support for VS2015 is deprecated. Older Visual Studio versions including VS2012 and VS2013 are also deprecated and support may be dropped in a future release of CUDA.
CUDA Libraries
- The following cuSOLVER 64-bit APIs are deprecated:
- cusolverDnPotrf_bufferSize
- cusolverDnPotrf
- cusolverDnPotrs
- cusolverDnGeqrf_bufferSize
- cusolverDnGeqrf
- cusolverDnGetrf_bufferSize
- cusolverDnGetrf
- cusolverDnGetrs
- cusolverDnSyevd_bufferSize
- cusolverDnSyevd
- cusolverDnSyevdx_bufferSize
- cusolverDnSyevdx
- cusolverDnGesvd_bufferSize
- cusolverDnGesvd
2.6. Resolved Issues
2.6.1. General CUDA
- Fixed an issue that caused cuD3D11GetDevices() to return a misleading error code.
- Fixed an issue that caused cuda_ipc_open to fail with CUDA_ERROR_INVALID_HANDLE.
- Fixed an issue that caused the nvidia-ml library to be installed in a different location from the one specified in pkg-config.
- Fixed an issue that caused some streaming apps to trigger CUDA safe detection.
- Fixed an issue that caused unexpectedly large host memory usage when loading cubin.
- Fixed an issue with the paths for .pc files in the CUDA SLES15 repo.
- Fixed an issue that caused warnings to be considered fatal when installing nvidia-drivers modules with kickstart.
- Resolved a memory issue when using cudaGraphInstantiate.
- Read-only OS_DESCRIPTOR allocations are now supported.
- Loading an application against the libcuda.so stub library now returns a helpful error message.
- The cudaOccupancy* API is now available even when __CUDACC__ is not defined.
2.6.3. cuBLAS Library
- A performance regression in the cublasCgetrfBatched and cublasCgetriBatched routines has been fixed.
- The IMMA kernels do not support padding in matrix C and could corrupt the data when a matrix C with padding was supplied to cublasLtMatmul. A suggested workaround is to supply matrix C with a leading dimension equal to 32 times the number of rows when targeting the IMMA kernels: computeType = CUDA_R_32I and CUBLASLT_ORDER_COL32 for matrices A, C, and D, and CUBLASLT_ORDER_COL4_4R2_8C (on the NVIDIA Ampere or Turing architecture) or CUBLASLT_ORDER_COL32_2R_4R4 (on the NVIDIA Ampere architecture) for matrix B. The matmul descriptor must specify CUBLAS_OP_T on matrix B and CUBLAS_OP_N (the default) on matrices A and C. The data corruption behavior was fixed so that CUBLAS_STATUS_NOT_SUPPORTED is now returned instead.
- Fixed an issue that caused an Address out of bounds error when calling cublasSgemm().
2.6.4. cuFFT Library
- Resolved an issue that caused cuFFT to crash when reusing a handle after clearing a callback.
- Fixed an error which produced incorrect results / NaN values when running a real-to-complex FFT in half precision.
2.7. Known Issues
2.7.1. cuFFT Library
- cuFFT always overwrites the input buffer for out-of-place C2R transforms.
- Single-dimensional multi-GPU FFT plans ignore user input on the whichGPUs parameter of cufftXtSetGPUs() and assume that GPU IDs are always numbered from 0 to N-1.