The CUDA Array Interface is created for interoperability between different implementations of GPU array-like objects in various projects. The idea is borrowed from the numpy array interface.
Note: Currently, we only define the Python-side interface. In the future, we may add a C-side interface for efficient exchange of the information in compiled code.

Note: Experimental feature. Specification may change.
The __cuda_array_interface__ attribute is a dictionary-like object that must contain the following entries:
shape: (integer, ...)
A tuple of int (or long) representing the size of each dimension.
typestr: str
The type string. This has the same definition as typestr in the numpy array interface.
data: (integer, boolean)
The data is a 2-tuple. The first element is the data pointer as a Python int (or long). The data must be device-accessible. The second element is the read-only flag as a Python bool.
Because the user of the interface may or may not be in the same context, the most common case is to use cuPointerGetAttribute with CU_POINTER_ATTRIBUTE_DEVICE_POINTER in the CUDA driver API (or the equivalent CUDA Runtime API) to retrieve a device pointer that is usable in the currently active context (a sketch of this lookup follows the optional entries below).
version: integer
An integer for the version of the interface being exported. The current version is 0 since it is still experimental.
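For illustration, a minimal producer of the interface might look like the following sketch. Only the dictionary layout comes from the specification above; the class name, its constructor arguments, and the origin of the device pointer are assumptions made for the example:

    import numpy as np

    class DeviceArrayView:
        """Hypothetical wrapper around an existing device allocation.

        `dev_ptr` is assumed to be a device-accessible pointer obtained
        elsewhere (for example from cudaMalloc); it is not allocated here.
        """

        def __init__(self, dev_ptr, shape, dtype):
            self._ptr = int(dev_ptr)
            self._shape = tuple(shape)
            self._dtype = np.dtype(dtype)

        @property
        def __cuda_array_interface__(self):
            return {
                # Required entries only; 'strides' is omitted, so the
                # data is taken to be in C-contiguous layout.
                'shape': self._shape,
                'typestr': self._dtype.str,  # e.g. '<f4' for little-endian float32
                'data': (self._ptr, False),  # (device pointer, read-only flag)
                'version': 0,
            }

Any consumer that understands the interface can then wrap such an object without copying the underlying device memory.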
The following are optional entries:
strides: None or (integer, ...)
A tuple of int (or long) representing the number of bytes to skip to access the next element at each dimension. If it is None, the array is assumed to be in C-contiguous layout.
descr
This is for describing more complicated types. This follows the same specification as in the numpy array interface.
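For illustration, the context translation described under the data entry could be done with a ctypes call into the driver API, as in the sketch below. The library name, the enum value, and the error handling are assumptions made for the example; an actual consumer would normally rely on its existing CUDA bindings:

    import ctypes

    # Value of CU_POINTER_ATTRIBUTE_DEVICE_POINTER in the CUpointer_attribute
    # enum of cuda.h; verify against your headers.
    CU_POINTER_ATTRIBUTE_DEVICE_POINTER = 3

    def device_pointer_in_current_context(ptr):
        # Translate `ptr` into a device pointer usable in the currently
        # active context via cuPointerGetAttribute.
        libcuda = ctypes.CDLL('libcuda.so')  # may be 'libcuda.so.1' on some systems
        devptr = ctypes.c_uint64()
        status = libcuda.cuPointerGetAttribute(
            ctypes.byref(devptr),
            ctypes.c_int(CU_POINTER_ATTRIBUTE_DEVICE_POINTER),
            ctypes.c_uint64(ptr),  # CUdeviceptr is an unsigned 64-bit value
        )
        if status != 0:  # 0 is CUDA_SUCCESS
            raise RuntimeError('cuPointerGetAttribute failed with error %d' % status)
        return devptr.value

The returned integer can then be used as the data pointer when wrapping the memory in the consumer's own array type.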
Additional information about the data pointer can be retrieved using cuPointerGetAttribute or cudaPointerGetAttributes. Such information includes: