I am getting really weird timings for the following code:
import numpy as np
s = 0
for i in range(10000000):
    s += np.float64(1)  # replace with np.float32 and built-in float
- built-in float: 4.9 s
- float64: 10.5 s
- float32: 45.0 s
Why is float64 twice as slow as float? And why is float32 five times slower than float64?
Is there any way to avoid the penalty of using np.float64, and have numpy functions return built-in float instead of float64?
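For reference, this is the kind of per-result conversion I mean (just a sketch; float() and .item() are the only ways I know of to get a plain Python float back from a numpy scalar):
import numpy as np
x = np.sqrt(2.0)         # a ufunc applied to a Python scalar hands back a numpy.float64
print(type(x))           # <class 'numpy.float64'>
y = float(x)             # explicit conversion to the built-in float
z = x.item()             # numpy scalars also provide .item(), which returns a Python float
print(type(y), type(z))  # <class 'float'> <class 'float'>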
I found that using numpy.float64 is much slower than Python's float, and numpy.float32 is even slower (even though I'm on a 32-bit machine). Therefore, every time I use various numpy functions such as numpy.random.uniform, I convert the result to float32 (so that further operations would be performed at 32-bit precision).
Is there any way to set a single variable somewhere in the program or on the command line, and make all numpy functions return float32 instead of float64?
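The closest I've come is passing an explicit dtype to the functions that accept one, or converting each result by hand, which is exactly the per-call boilerplate I'd like to avoid. A sketch of both, assuming nothing beyond plain numpy:
import numpy as np
# what I do now: convert each result by hand
x = np.float32(np.random.uniform(0.0, 1.0))
# some constructors accept a dtype directly, but that doesn't cover functions
# like numpy.random.uniform, which has no dtype argument
a = np.zeros(10, dtype=np.float32)
b = np.arange(10, dtype=np.float32)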
EDIT #1:
numpy.float64 is 10 times slower than float in arithmetic calculations. It's so bad that even converting to float and back before the calculations makes the program run 3 times faster. Why? Is there anything I can do to fix it?
I want to emphasize that my timings are not due to any of the following:
- the function calls
- the conversion between numpy and python float
- the creation of objects
I updated my code to make it clearer where the problem lies. With the new code, I see a ten-fold performance hit from using numpy data types:
from datetime import datetime
import numpy as np
START_TIME = datetime.now()
# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0
for i in range(10000000):
    s = (s + 8) * s % 2399232
print(s)
print('Runtime:', datetime.now() - START_TIME)
The timings are:
- float64: 34.56s
- float32: 35.11s
- float: 3.53s
Just for the hell of it, I also tried:
from datetime import datetime
import numpy as np
START_TIME = datetime.now()
s = np.float64(1)
for i in range(10000000):
    s = float(s)
    s = (s + 8) * s % 2399232
    s = np.float64(s)
print(s)
print('Runtime:', datetime.now() - START_TIME)
The execution time is 13.28 s; it's actually 3 times faster to convert the float64 to float and back than to use it as is. Still, the conversion takes its toll, so overall it's more than 3 times slower compared to the pure-Python float.
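To separate the cost of the arithmetic itself from the cost of the round-trip conversion, the same comparison can be reproduced with a minimal timeit harness along these lines (I'm not quoting its numbers here, since they obviously vary from machine to machine):
import timeit
setup = "import numpy as np; s = np.float64(1); t = 1.0"
# the same expression on a numpy scalar vs. a built-in float
print(timeit.timeit("(s + 8) * s % 2399232", setup=setup, number=1000000))
print(timeit.timeit("(t + 8) * t % 2399232", setup=setup, number=1000000))
# the float()/np.float64() round-trip on its own
print(timeit.timeit("np.float64(float(s))", setup=setup, number=1000000))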
My machine is:
- Intel Core 2 Duo T9300 (2.5GHz)
- WinXP Professional (32-bit)
- ActiveState Python 3.1.3.5
- Numpy 1.5.1
EDIT #2:
Thank you for the answers, they help me understand how to deal with this problem.
But I still would like to know the precise reason (based on the source code, perhaps) why the code below runs 10 times slower with float64 than with float.
EDIT #3:
I reran the code under Windows 7 x64 (Intel Core i7 930 @ 3.8 GHz).
Again, the code is:
from datetime import datetime
import numpy as np
START_TIME = datetime.now()
# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0
for i in range(10000000):
    s = (s + 8) * s % 2399232
print(s)
print('Runtime:', datetime.now() - START_TIME)
The timings are:
- float64: 16.1s
- float32: 16.1s
- float: 3.2s
Now both np floats (either 64-bit or 32-bit) are 5 times slower than the built-in float. Still a significant difference; I'm trying to figure out where it comes from.
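One thing I did verify: once s starts out as a numpy scalar, every intermediate result stays a numpy scalar (the Python int 8 gets promoted), so the whole loop runs through numpy's scalar arithmetic rather than ever falling back to the built-in float:
import numpy as np
s = np.float64(1)
print(type(s + 8))                  # numpy.float64 -- the Python int is promoted
print(type((s + 8) * s % 2399232))  # the full expression also yields a numpy.float64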
END OF EDITS