The CUDA runtime makes it possible to compile and link your CUDA kernels into executables. This means that you don't have to distribute cubin files with your application, or deal with loading them through the driver API. As you have noted, it is generally easier to use.
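For illustration, here is a minimal runtime-API sketch (the `scale` kernel is a made-up example). The kernel is compiled straight into the executable by nvcc, and initialization and context creation happen implicitly on the first CUDA call:

```
// Runtime API: kernel compiled into the executable, implicit initialization.
// Error checks omitted for brevity.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));   // first call initializes CUDA implicitly
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);  // execution configuration syntax

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[0] = %f\n", host[0]);
    return 0;
}
```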
In contrast, the driver API is harder to program but provides more control over how CUDA is used. The programmer has to deal directly with initialization, module loading, and so on, as sketched below.
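Here is a sketch of the same launch through the driver API, assuming a prebuilt `scale.cubin` containing a kernel named `scale` (both names hypothetical). Everything the runtime does implicitly must be spelled out. Note that `cuLaunchKernel` is the CUDA 4.0+ entry point; older toolkits used `cuParamSet*`/`cuLaunchGrid` instead:

```
// Driver API: explicit initialization, context and module management.
// Error checks omitted for brevity.
#include <cuda.h>

int main() {
    cuInit(0);                                  // explicit initialization

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);                  // explicit context management

    CUmodule mod;
    cuModuleLoad(&mod, "scale.cubin");          // explicit module loading

    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "scale");

    const int n = 1024;
    CUdeviceptr data;
    cuMemAlloc(&data, n * sizeof(float));

    float factor = 2.0f;
    int count = n;
    void *args[] = { &data, &factor, &count };  // kernel parameters as explicit pointers

    // Grid/block dimensions are plain arguments, not <<<...>>> syntax.
    cuLaunchKernel(fn, (n + 255) / 256, 1, 1, 256, 1, 1, 0, 0, args, 0);

    cuMemFree(data);
    cuCtxDestroy(ctx);
    return 0;
}
```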
It also appears that more detailed device information can be queried through the driver API than through the runtime API. For instance, the free memory available on the device can be queried only through the driver API.
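For example, a free-memory query via the driver API's `cuMemGetInfo` might look like this (a minimal sketch; a context must exist before the query, and error checks are omitted):

```
#include <cuda.h>
#include <cstdio>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Query free and total device memory in bytes.
    size_t freeBytes = 0, totalBytes = 0;
    cuMemGetInfo(&freeBytes, &totalBytes);
    printf("free: %zu bytes, total: %zu bytes\n", freeBytes, totalBytes);

    cuCtxDestroy(ctx);
    return 0;
}
```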
From the CUDA Programming Guide:
It is composed of two APIs:

- A low-level API called the CUDA driver API,
- A higher-level API called the CUDA runtime API that is implemented on top of the CUDA driver API.

These APIs are mutually exclusive: An application should use either one or the other.

The CUDA runtime eases device code management by providing implicit initialization, context management, and module management. The C host code generated by nvcc is based on the CUDA runtime (see Section 4.2.5), so applications that link to this code must use the CUDA runtime API.

In contrast, the CUDA driver API requires more code, is harder to program and debug, but offers a better level of control and is language-independent since it only deals with cubin objects (see Section 4.2.5). In particular, it is more difficult to configure and launch kernels using the CUDA driver API, since the execution configuration and kernel parameters must be specified with explicit function calls instead of the execution configuration syntax described in Section 4.2.3. Also, device emulation (see Section 4.5.2.9) does not work with the CUDA driver API.
There is no noticeable performance difference between the APIs. How your kernels use memory and how they are laid out on the GPU (in warps and blocks) will have a much more pronounced effect.
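As a rough illustration of that point, consider two functionally identical copy kernels (a sketch; the names are made up). The coalesced version will typically run much faster than the strided one, regardless of which API launches it:

```
// Both kernels copy the same data, but the coalesced one lets each warp
// read consecutive addresses, while the strided one scatters each warp's
// accesses across memory.
__global__ void copyCoalesced(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];        // neighboring threads -> neighboring addresses
}

__global__ void copyStrided(const float *in, float *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = (i * stride) % n;         // neighboring threads -> far-apart addresses
    if (i < n) out[j] = in[j];
}
```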