3.3. Memory management¶
3.3.1. Data transfer¶
Even though Numba can automatically transfer NumPy arrays to the device, it can only do so conservatively by always transferring device memory back to the host when a kernel finishes. To avoid the unnecessary transfer for read-only arrays, you can use the following APIs to manually control the transfer:
- numba.cuda.device_array(shape, dtype=np.float, strides=None, order='C', stream=0)
Allocate an empty device ndarray. Similar to numpy.empty().
- numba.cuda.device_array_like(ary, stream=0)
Call cuda.device_array() with information from the array.
- numba.cuda.to_device(obj, stream=0, copy=True, to=None)
Allocate and transfer a numpy ndarray or structured scalar to the device.
To copy host->device a numpy array:
    ary = np.arange(10)
    d_ary = cuda.to_device(ary)
To enqueue the transfer to a stream:
    stream = cuda.stream()
    d_ary = cuda.to_device(ary, stream=stream)
The resulting d_ary is a DeviceNDArray.
To copy device->host:
    hary = d_ary.copy_to_host()
To copy device->host to an existing array:
    ary = np.empty(shape=d_ary.shape, dtype=d_ary.dtype)
    d_ary.copy_to_host(ary)
To enqueue the transfer to a stream:
    hary = d_ary.copy_to_host(stream=stream)
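Putting these pieces together, a kernel invocation can avoid the implicit round trip by copying the read-only input explicitly and allocating the output on the device; the kernel and sizes below are illustrative only:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def double(inp, out):                      # illustrative kernel
        i = cuda.grid(1)
        if i < inp.size:
            out[i] = inp[i] * 2

    ary = np.arange(1024, dtype=np.float32)
    d_inp = cuda.to_device(ary)                # explicit host->device copy of the read-only input
    d_out = cuda.device_array_like(ary)        # output allocated on the device, never copied in
    double[4, 256](d_inp, d_out)
    result = d_out.copy_to_host()              # only the output is copied back to the host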
3.3.1.1. Device arrays¶
Device array references have the following methods. These methods are to be called in host code, not within CUDA-jitted functions.
- class numba.cuda.cudadrv.devicearray.DeviceNDArray(shape, strides, dtype, stream=0, writeback=None, gpu_data=None)
An on-GPU array type
- copy_to_host(ary=None, stream=0)
Copy self to ary or create a new Numpy ndarray if ary is None.
If a CUDA stream is given, then the transfer will be made asynchronously as part of the given stream. Otherwise, the transfer is synchronous: the function returns after the copy is finished.
Always returns the host array.
Example:
    import numpy as np
    from numba import cuda

    arr = np.arange(1000)
    d_arr = cuda.to_device(arr)

    my_kernel[100, 100](d_arr)

    result_array = d_arr.copy_to_host()
- is_c_contiguous()
Return true if the array is C-contiguous.
- is_f_contiguous()
Return true if the array is Fortran-contiguous.
- ravel(order='C', stream=0)
Flatten the array without changing its contents, similar to numpy.ndarray.ravel().
- reshape(*newshape, **kws)
Reshape the array without changing its contents, similarly to numpy.ndarray.reshape(). Example:
    d_arr = d_arr.reshape(20, 50, order='F')
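As a brief sketch (shapes and dtype are illustrative), the contiguity and flattening helpers can be used from host code like this:

    import numpy as np
    from numba import cuda

    d_arr = cuda.to_device(np.zeros((20, 50), dtype=np.float32))
    print(d_arr.is_c_contiguous())   # True for a freshly copied C-ordered array
    flat = d_arr.ravel()             # 1-D device array, contents unchanged
    wide = d_arr.reshape(10, 100)    # same data viewed with a new shape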
3.3.2. Pinned memory¶
- numba.cuda.pinned(*args, **kws)
A context manager for temporarily pinning a sequence of host ndarrays (see the example below).
- numba.cuda.pinned_array(shape, dtype=np.float, strides=None, order='C')
Allocate a np.ndarray with a buffer that is pinned (pagelocked). Similar to np.empty().
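A minimal sketch of pinning a host buffer for the duration of asynchronous transfers (the array size and dtype are illustrative):

    import numpy as np
    from numba import cuda

    ary = np.arange(1 << 20, dtype=np.float32)
    stream = cuda.stream()
    with cuda.pinned(ary):                           # page-lock the host buffer temporarily
        d_ary = cuda.to_device(ary, stream=stream)   # asynchronous host->device copy
        d_ary.copy_to_host(ary, stream=stream)       # asynchronous device->host copy
        stream.synchronize()                         # wait before the buffer is unpinned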
3.3.3. Streams¶
- numba.cuda.stream()
Create a CUDA stream that represents a command queue for the device.
CUDA streams have the following methods:
- class numba.cuda.cudadrv.driver.Stream(context, handle, finalizer)
- auto_synchronize(*args, **kwds)
A context manager that waits for all commands in this stream to execute and commits any pending memory transfers upon exiting the context.
- synchronize()
Wait for all commands in this stream to execute. This will commit any pending memory transfers.
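For example, a stream can be used to queue transfers and a kernel launch, then wait for them with auto_synchronize(); the kernel below is purely illustrative:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def double_in_place(a):                          # illustrative kernel
        i = cuda.grid(1)
        if i < a.size:
            a[i] *= 2

    stream = cuda.stream()
    ary = np.arange(1000, dtype=np.float32)
    with stream.auto_synchronize():                  # block until the stream drains on exit
        d_ary = cuda.to_device(ary, stream=stream)   # asynchronous copy onto the stream
        double_in_place[4, 256, stream](d_ary)       # launch on the same stream
        d_ary.copy_to_host(ary, stream=stream)       # asynchronous copy back
    # here all queued work has completed and ary holds the results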
3.3.5. Local memory¶
Local memory is an area of memory private to each thread. Using local memory helps allocate some scratchpad area when scalar local variables are not enough. The memory is allocated once for the duration of the kernel, unlike traditional dynamic memory management.
- numba.cuda.local.array(shape, type)
Allocate a local array of the given shape and type on the device. The array is private to the current thread. An array-like object is returned which can be read and written to like any standard array (e.g. through indexing).
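For example, a kernel might use a small per-thread scratch buffer; the window size and the smoothing operation below are purely illustrative:

    from numba import cuda, float32

    @cuda.jit
    def smooth(inp, out):
        # per-thread scratch buffer in local memory; the shape must be a compile-time constant
        window = cuda.local.array(3, float32)
        i = cuda.grid(1)
        if 0 < i < inp.size - 1:
            window[0] = inp[i - 1]
            window[1] = inp[i]
            window[2] = inp[i + 1]
            out[i] = (window[0] + window[1] + window[2]) / 3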
3.3.6. SmartArrays (experimental)¶
Numba provides an Array-like data type that manages data movement to and from the device automatically. It can be used as a drop-in replacement for numpy.ndarray in most cases, and is supported by Numba's JIT-compiler for both the 'host' and 'cuda' targets.
- class numba.SmartArray(obj=None, copy=True, shape=None, dtype=None, order=None, where='host')¶
An array type that supports host and GPU storage.
- __init__(obj=None, copy=True, shape=None, dtype=None, order=None, where='host')¶
Construct a SmartArray in the memory space defined by ‘where’. Valid invocations:
SmartArray(obj=<array-like object>, copy=<optional-true-or-false>):
to create a SmartArray from an existing array-like object. The ‘copy’ argument specifies whether to adopt or to copy it.
SmartArray(shape=<shape>, dtype=<dtype>, order=<order>)
to create a new SmartArray from scratch, given the typical NumPy array attributes.
The optional 'where' argument specifies where to allocate the array initially (default: 'host').
- get(where='host')¶
Return the representation of ‘self’ in the given memory space.
- mark_changed(where='host')¶
Mark the given location as changed, broadcast updates if needed.
Thus, SmartArray objects may be passed as function arguments to jit-compiled functions. Whenever a cuda.jit-compiled function is executed, it triggers a data transfer to the GPU (unless the data are already there). But instead of transferring the data back to the host after the function completes, it leaves the data on the device and merely updates the host side if there are any external references to it. Thus, if the next operation is another invocation of a cuda.jit-compiled function, the data does not need to be transferred again, making the compound operation more efficient (and making the use of the GPU advantageous even for smaller data sizes).
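A rough sketch of the intended flow, assuming a simple cuda.jit kernel and the constructor forms listed above (SmartArray is experimental, so treat this as illustrative rather than definitive):

    import numpy as np
    from numba import SmartArray, cuda

    @cuda.jit
    def scale(a):                      # illustrative kernel
        i = cuda.grid(1)
        if i < a.size:
            a[i] *= 2

    arr = SmartArray(np.arange(1000, dtype=np.float32))  # wrap an existing host array
    scale[4, 256](arr)                 # first call transfers the data to the GPU
    scale[4, 256](arr)                 # data is already resident on the device, no transfer
    host_data = arr.get(where='host')  # explicit host-side representation when needed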
3.3.7. Deallocation Behavior¶
Deallocation of all CUDA resources is tracked on a per-context basis. When the last reference to a piece of device memory is dropped, the underlying memory is scheduled to be deallocated. The deallocation does not occur immediately; it is added to a queue of pending deallocations. This design has two benefits:
- The resource deallocation API may cause the device to synchronize, breaking any asynchronous execution. Deferring the deallocation avoids this latency in performance-critical code sections.
- Some deallocation errors may cause all remaining deallocations to fail. Continued deallocation errors can cause critical errors at the CUDA driver level. In some cases, this could mean a segmentation fault in the CUDA driver. In the worst case, it could cause the system GUI to freeze and be recoverable only by a system reset. When an error occurs during a deallocation, the remaining pending deallocations are cancelled, and any deallocation error is reported. When the process is terminated, the CUDA driver is able to release all resources allocated by the terminated process.
The deallocation queue is flushed automatically as soon as the following events occur:
- An allocation failed due to an out-of-memory error. The allocation is retried after flushing all pending deallocations.
- The deallocation queue has reached its maximum size, which defaults to 10. Users can override this by setting the environment variable NUMBA_CUDA_MAX_PENDING_DEALLOCS_COUNT. For example, NUMBA_CUDA_MAX_PENDING_DEALLOCS_COUNT=20 increases the limit to 20.
- The maximum accumulated byte size of resources pending deallocation is reached. This defaults to 20% of the device memory capacity. Users can override this by setting the environment variable NUMBA_CUDA_MAX_PENDING_DEALLOCS_RATIO. For example, NUMBA_CUDA_MAX_PENDING_DEALLOCS_RATIO=0.5 sets the limit to 50% of the capacity.
Sometimes it is desirable to defer resource deallocation until a code section ends. Most often, users want to avoid any implicit synchronization due to deallocation. This can be done by using the following context manager:
- numba.cuda.defer_cleanup(*args, **kwds)¶
Temporarily disable memory deallocation. Use this to prevent resource deallocation from breaking asynchronous execution.
For example:
    with defer_cleanup():
        # all cleanup is deferred in here
        do_speed_critical_code()
    # cleanup can occur here
Note: this context manager can be nested.