We could leverage broadcasting for a NumPy-based solution -
ss = np.exp(1j*(ph[:,None] + fre[:,None]*tau))
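For reference, a minimal self-contained sketch (with small illustrative shapes, not the question's sizes) that checks the broadcasted result against the original per-row loop -

import numpy as np

num_row, num_col = 4, 5
ph  = np.random.standard_normal(num_row)   # shape (num_row,)
fre = np.random.standard_normal(num_row)   # shape (num_row,)
tau = np.random.standard_normal(num_col)   # shape (num_col,)

# ph[:,None] and fre[:,None] have shape (num_row, 1); adding/multiplying
# against tau of shape (num_col,) broadcasts to a (num_row, num_col) result
ss_bcast = np.exp(1j*(ph[:,None] + fre[:,None]*tau))

# Same values as the original per-row loop
ss_loop = np.ones((num_row, num_col), dtype=np.complex128)
for idx in range(num_row):
    ss_loop[idx, :] *= np.exp(1j*(ph[idx] + fre[idx]*tau))
assert np.allclose(ss_bcast, ss_loop)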
Porting this over to numexpr to leverage its fast transcendental operations along with multi-core capability -
import numexpr as ne
def numexpr_soln(ph, fre):
    ph2D = ph[:,None]    # (num_row, 1) column view of the phases
    fre2D = fre[:,None]  # (num_row, 1) column view of the frequencies
    # tau is resolved from the calling frame by ne.evaluate
    return ne.evaluate('exp(1j*(ph2D + fre2D*tau))')
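Note that ne.evaluate picks up tau from the calling frame. If you'd rather be explicit (e.g. inside a library function), the operands could be passed through local_dict; a small sketch with tau handed in as an argument -

import numexpr as ne

def numexpr_soln_explicit(ph, fre, tau):
    # Pass the operands explicitly instead of relying on frame lookup
    return ne.evaluate('exp(1j*(ph2D + fre2D*tau))',
                       local_dict={'ph2D': ph[:,None],
                                   'fre2D': fre[:,None],
                                   'tau': tau})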
Timings -
In [23]: num_row, num_col = 6000, 13572
...: ss = np.ones((num_row, num_col), dtype=np.complex128)
...: ph = np.random.standard_normal(num_row)
...: fre = np.random.standard_normal(num_row)
...: tau = np.random.standard_normal(num_col)
# Original soln
In [25]: %%timeit
...: for idx in range(num_row):
...: ss[idx, :] *= np.exp(1j*(ph[idx] + fre[idx]*tau))
1 loop, best of 3: 4.46 s per loop
# Native NumPy broadcasting soln
In [26]: %timeit np.exp(1j*(ph[:,None] + fre[:,None]*tau))
1 loop, best of 3: 4.58 s per loop
Timings for the numexpr solution with a varying number of threads -
# Numexpr solution with # of threads = 2
In [51]: ne.set_num_threads(nthreads=2)
Out[51]: 2
In [52]: %timeit numexpr_soln(ph, fre)
1 loop, best of 3: 2.18 s per loop
# Numexpr solution with # of threads = 4
In [45]: ne.set_num_threads(nthreads=4)
Out[45]: 4
In [46]: %timeit numexpr_soln(ph, fre)
1 loop, best of 3: 1.62 s per loop
# Numexpr solution with # of threads = 8
In [48]: ne.set_num_threads(nthreads=8)
Out[48]: 8
In [49]: %timeit numexpr_soln(ph, fre)
1 loop, best of 3: 898 ms per loop
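The same sweep could be scripted outside IPython; a minimal sketch, assuming ph, fre, tau and numexpr_soln from above are already in scope -

import timeit
import numexpr as ne

for nthreads in (2, 4, 8):
    ne.set_num_threads(nthreads)
    # best of 3 runs, one call each, mimicking %timeit's reporting
    best = min(timeit.repeat(lambda: numexpr_soln(ph, fre), number=1, repeat=3))
    print('%d threads: %.2f s per loop' % (nthreads, best))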