numpy isn't looking out for you on this one. Unlike standard Python, its integer operations don't work on arbitrary-precision objects; they use fixed-width machine integers. I'd guess you were running a 32-bit Python, because the same operations don't overflow for me:
>>> import sys
>>> sys.maxsize
9223372036854775807
>>> size = 3000
>>> c = numpysum(size)
>>>
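If your build's default integer is 32 bits wide, the usual fix is to pin the dtype yourself instead of relying on the platform default. A minimal sketch (I'm assuming your numpysum builds its arrays with numpy.arange; adapt it to whatever the function actually does):

>>> import numpy
>>> print(numpy.arange(3).dtype)    # platform default integer -- int32 on a 32-bit build, int64 here
int64
>>> a = numpy.arange(3000, dtype=numpy.int64)**2    # pin a 64-bit dtype explicitly
>>> b = numpy.arange(3000, dtype=numpy.int64)**3
>>> print((a + b)[-1])              # 2999**2 + 2999**3, which would not fit in an int32
26982003000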
but they will overflow eventually. It's even easier to see if you control the size of the type manually:
>>> numpy.arange(10, dtype=numpy.int8)**10
__main__:1: RuntimeWarning: invalid value encountered in power
array([ 0, 1, 0, -87, 0, -7, 0, -15, 0, 0], dtype=int8)
>>> numpy.arange(10, dtype=numpy.int16)**10
array([ 0, 1, 1024, -6487, 0, 761, -23552, 15089,
0, 0], dtype=int16)
>>> numpy.arange(10, dtype=numpy.int32)**10
array([ 0, 1, 1024, 59049, 1048576,
9765625, 60466176, 282475249, 1073741824, -2147483648], dtype=int32)
>>> numpy.arange(10, dtype=numpy.int64)**10
array([ 0, 1, 1024, 59049, 1048576,
9765625, 60466176, 282475249, 1073741824, 3486784401])
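The wrap-around points line up with each dtype's range, which you can check with numpy.iinfo:

>>> [(numpy.iinfo(t).min, numpy.iinfo(t).max)
...  for t in (numpy.int8, numpy.int16, numpy.int32, numpy.int64)]
[(-128, 127), (-32768, 32767), (-2147483648, 2147483647), (-9223372036854775808, 9223372036854775807)]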
As you can see, things improve as the number of bits increases. If you really want numpy array operations on Python's arbitrary-precision integers, you can set the dtype to object:
>>> numpy.arange(10, dtype=object)**20
array([0, 1, 1048576, 3486784401, 1099511627776, 95367431640625,
3656158440062976, 79792266297612001, 1152921504606846976,
12157665459056928801], dtype=object)
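That gets you exact answers, at the cost of speed: each element is a boxed Python integer, so you lose most of numpy's vectorization. A quick sanity check against pure Python arithmetic, just as a sketch:

>>> all(int(x) == i**20 for i, x in enumerate(numpy.arange(10, dtype=object)**20))
True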