numpy.dot:
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b:
numpy.inner:
Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes.
(Emphasis mine.)
As an example, consider the following 2-D arrays:
>>> import numpy as np
>>> a = np.array([[1, 2], [3, 4]])
>>> b = np.array([[11, 12], [13, 14]])
>>> np.dot(a, b)
array([[37, 40],
       [85, 92]])
>>> np.inner(a, b)
array([[35, 41],
       [81, 95]])
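The results differ because np.dot contracts the last axis of a with the second-to-last axis of b (ordinary matrix multiplication in the 2-D case), whereas np.inner contracts the last axis of both arrays, i.e. it takes inner products of the rows of a with the rows of b. A small sketch, with shapes chosen arbitrarily for illustration, shows the higher-dimensional rule:
>>> a = np.ones((2, 3, 4))
>>> b = np.ones((5, 4, 6))
>>> np.dot(a, b).shape    # last axis of a (length 4) against second-to-last axis of b
(2, 3, 5, 6)
>>> c = np.ones((5, 6, 4))
>>> np.inner(a, c).shape  # last axis of a against last axis of c (both length 4)
(2, 3, 5, 6)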
Thus, the one you should use is the one that gives the correct behaviour for your application.
Performance testing
(Note that I am testing only the 1-D case, since that is the only situation where .dot and .inner give the same result.)
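As a quick sanity check (values chosen arbitrarily), the two really do coincide for 1-D inputs:
>>> a = np.array([1, 2, 3])
>>> b = np.array([4, 5, 6])
>>> np.dot(a, b)
32
>>> np.inner(a, b)
32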
>>> import timeit
>>> setup = 'import numpy as np; a=np.random.random(1000); b = np.random.random(1000)'
>>> [timeit.timeit('np.dot(a,b)',setup,number=1000000) for _ in range(3)]
[2.6920320987701416, 2.676928997039795, 2.633111000061035]
>>> [timeit.timeit('np.inner(a,b)',setup,number=1000000) for _ in range(3)]
[2.588860034942627, 2.5845699310302734, 2.6556360721588135]
So maybe .inner is faster, but my machine is fairly loaded at the moment, so the timings are neither consistent nor necessarily very accurate.
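If you want somewhat more stable numbers, one option is timeit.repeat, taking the minimum over several runs so that runs slowed down by background load are discarded. This is just a sketch of that approach; the actual figures will of course vary from machine to machine:
import timeit

setup = 'import numpy as np; a = np.random.random(1000); b = np.random.random(1000)'
# the minimum over several repeats is the measurement least affected by other processes
print(min(timeit.repeat('np.dot(a,b)', setup, repeat=5, number=1000000)))
print(min(timeit.repeat('np.inner(a,b)', setup, repeat=5, number=1000000)))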