It seems you are trying to sum-reduce the last axis of XYZ_to_sRGB_mat_D50 (axis=1) against the last axis of XYZ_2 (axis=2). So, you can use np.tensordot like so -

np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2),(1)))
This related post may help in understanding tensordot.
For completeness, we can surely use np.matmul too after swapping the last two axes of XYZ_2, like so -

np.matmul(XYZ_to_sRGB_mat_D50, XYZ_2.swapaxes(1,2)).swapaxes(1,2)
This won't be as efficient as the tensordot one.
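As a quick sanity check, here is a small sketch (with random data standing in for the actual arrays) showing that the tensordot and matmul approaches produce identical results and keep the expected output shape:

```python
import numpy as np

# Random stand-ins with the same kind of shapes as in the question
XYZ_to_sRGB_mat_D50 = np.random.rand(3, 3)
XYZ_2 = np.random.rand(4, 5, 3)

# tensordot: reduce axis 2 of XYZ_2 against axis 1 of the matrix
a = np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2,), (1,)))

# matmul: swap the last two axes, multiply, swap back
b = np.matmul(XYZ_to_sRGB_mat_D50, XYZ_2.swapaxes(1, 2)).swapaxes(1, 2)

assert a.shape == (4, 5, 3)
assert np.allclose(a, b)
```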
Runtime test -
In [158]: XYZ_to_sRGB_mat_D50 = np.asarray([
...: [3.1338561, -1.6168667, -0.4906146],
...: [-0.9787684, 1.9161415, 0.0334540],
...: [0.0719453, -0.2289914, 1.4052427],
...: ])
...:
...: XYZ_1 = np.asarray([0.25, 0.4, 0.1])
...: XYZ_2 = np.random.rand(100,100,3)
# @Julien's soln
In [159]: %timeit XYZ_2.dot(XYZ_to_sRGB_mat_D50.T)
1000 loops, best of 3: 450 μs per loop
In [160]: %timeit np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2),(1)))
10000 loops, best of 3: 73.1 μs per loop
Generally speaking, when it comes to sum-reductions on tensors, tensordot is much more efficient. Since the axis of sum-reduction is just one, we can make the tensor a 2D array by reshaping, use np.dot, get the result and reshape back to 3D.
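The reshape + np.dot idea described above can be sketched like this (a minimal sketch, reusing the arrays from the timing setup):

```python
import numpy as np

XYZ_to_sRGB_mat_D50 = np.asarray([
    [3.1338561, -1.6168667, -0.4906146],
    [-0.9787684, 1.9161415, 0.0334540],
    [0.0719453, -0.2289914, 1.4052427],
])
XYZ_2 = np.random.rand(100, 100, 3)

# Collapse the leading axes into one so the sum-reduction axis is last,
# do a plain 2D matrix product, then restore the original 3D shape
out = XYZ_2.reshape(-1, 3).dot(XYZ_to_sRGB_mat_D50.T).reshape(XYZ_2.shape)

# Same result as the tensordot version
assert np.allclose(out, np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2,), (1,))))
```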