
python - Fast(er) numpy fancy indexing and reduction?

I'm trying to use (and accelerate) fancy indexing to "join" two arrays and sum over one of the result's axes.

Something like this:

$ ipython
In [1]: import numpy as np
In [2]: ne, ds = 12, 6
In [3]: i = np.random.randn(ne, ds).astype('float32')
In [4]: t = np.random.randint(0, ds, size=(100000, ne)).astype('uint8')

In [5]: %timeit i[np.arange(ne), t].sum(-1)
10 loops, best of 3: 44 ms per loop

Is there a simple way to accelerate the statement in In [5]? Should I go with OpenMP and something like scipy.weave or Cython's prange?
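For reference, here is a minimal, untested sketch of what the Cython prange approach mentioned above might look like (the function name gather_sum and the build setup with OpenMP flags are assumptions, not part of the original question):

# cython: boundscheck=False, wraparound=False
# Hypothetical sketch of the prange idea from the question; the extension must
# be compiled with OpenMP flags (e.g. -fopenmp) for the outer loop to run in parallel.
import numpy as np
from cython.parallel import prange

def gather_sum(float[:, ::1] i, unsigned char[:, ::1] t):
    cdef Py_ssize_t n = t.shape[0], ne = t.shape[1]
    cdef Py_ssize_t r, c
    cdef float acc
    out = np.zeros(n, dtype=np.float32)
    cdef float[::1] out_v = out
    for r in prange(n, nogil=True):
        acc = 0.0
        for c in range(ne):
            acc = acc + i[c, t[r, c]]   # same gather as i[np.arange(ne), t]
        out_v[r] = acc                  # per-row sum, i.e. the .sum(-1)
    return out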


1 Reply


numpy.take is much faster than fancy indexing for some reason. The only trick is that it treats the array as flat.

In [1]: a = np.random.randn(12,6).astype(np.float32)

In [2]: c = np.random.randint(0,6,size=(100000,12)).astype(np.uint8)

In [3]: r = np.arange(12)

In [4]: %timeit a[r,c].sum(-1)
10 loops, best of 3: 46.7 ms per loop

In [5]: rr, cc = np.broadcast_arrays(r,c)

In [6]: flat_index = rr*a.shape[1] + cc

In [7]: %timeit a.take(flat_index).sum(-1)
100 loops, best of 3: 5.5 ms per loop

In [8]: (a.take(flat_index).sum(-1) == a[r,c].sum(-1)).all()
Out[8]: True
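
As an aside (not part of the original answer): np.ravel_multi_index builds the same flat indices as the manual rr*a.shape[1] + cc expression for a C-ordered array, which some may find more readable:

flat_index = np.ravel_multi_index((rr, cc), a.shape)  # equivalent to rr*a.shape[1] + cc
row_sums = a.take(flat_index).sum(-1)                 # same result as a[r, c].sum(-1)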

I think the only other way you're going to see much of a speed improvement beyond this would be to write a custom kernel for a GPU using something like PyCUDA.
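
For reference, a rough, untested sketch of what such a PyCUDA kernel could look like for the a and c arrays above (the kernel name, block size, and launch configuration are illustrative assumptions, not the answerer's code):

import numpy as np
import pycuda.autoinit                      # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# One thread per row of the index array c: each thread gathers ne values from
# the row-major ne x ds table a and accumulates them into out[row].
mod = SourceModule("""
__global__ void gather_sum(const float *a, const unsigned char *c,
                           float *out, int n, int ne, int ds)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;
    float acc = 0.0f;
    for (int j = 0; j < ne; ++j)
        acc += a[j * ds + c[row * ne + j]];
    out[row] = acc;
}
""")
gather_sum = mod.get_function("gather_sum")

n, ne = c.shape
ds = a.shape[1]
out = np.empty(n, dtype=np.float32)
threads = 256
gather_sum(drv.In(a), drv.In(c), drv.Out(out),
           np.int32(n), np.int32(ne), np.int32(ds),
           block=(threads, 1, 1), grid=((n + threads - 1) // threads, 1))
# out should now match a[r, c].sum(-1)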

