As of Numba 0.39, you can pass a function as an argument to a jitted function, so long as that function argument has itself been JIT-compiled:
@jit(nopython=True)
def f(g, x):
    return g(x) + g(-x)

result = f(jitted_g_function, 1)
However, dispatching with arguments that are functions has extra overhead. If this matters for your application, you can also use a factory function to capture the function argument in a closure:
def make_f(g):
    # Note: a new f() is created each time make_f() is called!
    @jit(nopython=True)
    def f(x):
        return g(x) + g(-x)
    return f

f = make_f(jitted_g_function)
result = f(1)
Improving the dispatch performance of functions in Numba is an ongoing task.
Numba considers global variables as compile-time constants. If you want your jitted function to update itself when you have modified a global variable's value, one solution is to recompile it using the recompile() method. This is a relatively slow operation, though, so you may instead decide to rearchitect your code and turn the global variable into a function argument.
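For example, a minimal sketch of the recompile() approach (the global name and function below are illustrative):

from numba import jit

FACTOR = 2  # treated as a compile-time constant when scale() is first compiled

@jit(nopython=True)
def scale(x):
    return FACTOR * x

scale(3)            # returns 6
FACTOR = 10
scale(3)            # still returns 6: the old constant is baked in
scale.recompile()   # recompile against the new value of FACTOR
scale(3)            # returns 30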
Calling into pdb or other such high-level facilities is currently not supported from Numba-compiled code. However, you can temporarily disable compilation by setting the NUMBA_DISABLE_JIT environment variable.
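For example, a sketch of running the pure-Python version of a function so that pdb breakpoints work; this assumes the environment variable is read when Numba is imported, so it must be set beforehand:

import os
os.environ['NUMBA_DISABLE_JIT'] = '1'   # must be set before importing numba

from numba import jit

@jit(nopython=True)
def f(x):
    # with NUMBA_DISABLE_JIT set, this runs as ordinary Python code,
    # so pdb.set_trace() and breakpoints behave as usual
    return x + 1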
Numba currently doesn't support the order argument to most Numpy functions such as numpy.empty() (because of limitations in the type inference algorithm). You can work around this issue by creating a C-ordered array and then transposing it. For example:
a = np.empty((3, 5), order='F')
b = np.zeros(some_shape, order='F')
can be rewritten as:
a = np.empty((5, 3)).T
b = np.zeros(some_shape[::-1]).T
By default, Numba will generally use machine integer width for integer variables. On a 32-bit machine, you may sometimes need the magnitude of 64-bit integers instead. You can simply initialize relevant variables as np.int64 (for example np.int64(0) instead of 0). The 64-bit type will then propagate to all computations involving those variables.
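For example, a small sketch of forcing 64-bit accumulation in a loop (the function itself is illustrative):

import numpy as np
from numba import jit

@jit(nopython=True)
def cube_sum(n):
    total = np.int64(0)            # 64-bit even on a 32-bit machine
    for i in range(n):
        total += np.int64(i) ** 3  # the 64-bit type propagates through the computation
    return total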
How do I know if parallel=True worked?

If the parallel=True transformations failed for a function decorated as such, a warning will be displayed. See also Diagnostics for information about parallel diagnostics.
Numba gives LLVM enough information that sufficiently short functions can be inlined. This only works in nopython mode.
Numba doesn't implement optimizations such as SIMD vectorization by itself, but it lets LLVM apply them.
Numba enables the loop-vectorize optimization in LLVM by default. While it is a powerful optimization, not all loops can be vectorized. Sometimes, loop-vectorization may fail due to subtle details such as the memory access pattern. To see additional diagnostic information from LLVM, add the following lines:
import llvmlite.binding as llvm
llvm.set_option('', '--debug-only=loop-vectorize')
This tells LLVM to print debug information from the loop-vectorize pass to stderr. Each function entry looks like:
LV: Checking a loop in "<low-level symbol name>" from <function name>
LV: Loop hints: force=? width=0 unroll=0
...
LV: Vectorization is possible but not beneficial.
LV: Interleaving is not beneficial.
Each function entry is separated by an empty line. The reason for rejecting the vectorization is usually given at the end of the entry. In the example above, LLVM rejected the vectorization because it would not speed up the loop. This can be due to the memory access pattern; for instance, the array being looped over may not be in a contiguous layout.
When the memory access pattern is non-trivial enough that LLVM cannot determine the accessed memory region, it may reject vectorization with the following message:
LV: Can't vectorize due to memory conflicts
Another common reason is:
LV: Not vectorizing: loop did not meet vectorization requirements.
In this case, vectorization is rejected because the vectorized code may behave differently. This is a case where you can try turning on fastmath=True to allow the use of fastmath instructions.
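For instance, a minimal sketch of enabling fastmath on a floating-point reduction (the function is illustrative):

from numba import jit

@jit(nopython=True, fastmath=True)   # allows reassociating the floating-point sum
def total(a):
    s = 0.0
    for x in a:
        s += x
    return s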
Numba can parallelize code automatically in some cases:

- Ufuncs and gufuncs with the target="parallel" option will run on multiple threads.
- The parallel=True option to @jit will attempt to optimize array operations and run them in parallel. It also adds support for prange() to explicitly parallelize a loop (a sketch is shown below).

You can also manually run computations on multiple threads yourself and use the nogil=True option (see releasing the GIL). Numba can also target parallel execution on GPU architectures using its CUDA and HSA backends.
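As an illustration, here is a minimal sketch of parallel=True combined with prange() (the function is illustrative):

import numpy as np
from numba import jit, prange

@jit(nopython=True, parallel=True)
def row_sums(a):
    out = np.empty(a.shape[0])
    for i in prange(a.shape[0]):   # iterations are distributed across threads
        out[i] = a[i].sum()
    return out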
Numba does not significantly speed up very short functions. New users sometimes expect to JIT-compile functions such as:
def f(x, y):
    return x + y
and get a significant speedup over the Python interpreter. But there isn’t much Numba can improve here: most of the time is probably spent in CPython’s function call mechanism, rather than the function itself. As a rule of thumb, if a function takes less than 10 µs to execute: leave it.
The exception is that you should JIT-compile that function if it is called from another jitted function.
Try to pass cache=True to the @jit decorator. It will keep the compiled version on disk for later use.
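For example (a minimal sketch):

from numba import jit

@jit(nopython=True, cache=True)   # compiled machine code is written to disk and reused
def f(x, y):
    return x + y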
A more radical alternative is ahead-of-time compilation.
Why do I get a CUDA initialized before forking error?

On Linux, the multiprocessing module in the Python standard library defaults to using the fork method for creating new processes. Because of the way process forking duplicates state between the parent and child processes, CUDA will not work correctly in the child process if the CUDA runtime was initialized prior to the fork. Numba detects this and raises a CudaDriverError with the message CUDA initialized before forking.
One approach to avoid this error is to make all calls to numba.cuda
functions inside the child processes or after the process pool is created.
However, this is not always possible, as you might want to query the number of
available GPUs before starting the process pool. In Python 3, you can change
the process start method, as described in the multiprocessing documentation.
Switching from fork to spawn or forkserver will avoid the CUDA initialization issue, although the child processes will not inherit any global variables from their parent.
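For example, a sketch of selecting the spawn start method before creating the pool (this must run in the main module, before any pool or process is created):

import multiprocessing as mp

if __name__ == '__main__':
    mp.set_start_method('spawn')   # children do not inherit the parent's CUDA state
    with mp.Pool(4) as pool:
        ...                        # submit CUDA work to the child processes here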
If you're using PyInstaller or a similar utility to freeze an application, you may encounter issues with llvmlite. llvmlite needs a non-Python DLL in order to work, but it won't be automatically detected by freezing utilities. You have to inform the freezing utility of the DLL's location: it will usually be named llvmlite/binding/libllvmlite.so or llvmlite/binding/llvmlite.dll, depending on your system.
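For instance, with PyInstaller one possible approach is to list the DLL in the binaries argument of the .spec file; the source path below is hypothetical and depends on where llvmlite is installed on your machine:

# excerpt from a PyInstaller .spec file
a = Analysis(
    ['myscript.py'],
    binaries=[
        # (source path on your machine, destination directory inside the bundle)
        ('/path/to/site-packages/llvmlite/binding/libllvmlite.so', 'llvmlite/binding'),
    ],
)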
When you run a script in a console under Spyder, Spyder first tries to reload existing modules. This doesn't work well with Numba, and can produce errors like TypeError: No matching definition for argument type(s).
There is a fix in the Spyder preferences. Open the “Preferences” window,
select “Console”, then “Advanced Settings”, click the “Set UMR excluded
modules” button, and add numba
inside the text box that pops up.
To see the setting take effect, be sure to restart the IPython console or kernel.
If you get an error message such as the following:
RuntimeError: Failed at nopython (nopython mode backend)
LLVM will produce incorrect floating-point code in the current locale
it means you have hit an LLVM bug which causes incorrect handling of floating-point constants. This is known to happen with certain third-party libraries such as the Qt backend to matplotlib.
To work around the bug, you need to force back the locale to its default value, for example:
import locale
locale.setlocale(locale.LC_NUMERIC, 'C')
“Numba” is a combination of “NumPy” and “Mamba”. Mambas are some of the fastest snakes in the world, and Numba makes your Python code fast.
For academic use, the best option is to cite our ACM Proceedings: Numba: a LLVM-based Python JIT compiler. You can also find the sources on GitHub, including a pre-print PDF, in case you don't have access to the ACM site but would like to read the paper.