Some of the performance difference can be explained by accounting for the time the . operator takes to look up startswith on the string:
>>> x = 'foobar'
>>> y = 'foo'
>>> sw = x.startswith
>>> %timeit x.startswith(y)
1000000 loops, best of 3: 316 ns per loop
>>> %timeit sw(y)
1000000 loops, best of 3: 267 ns per loop
>>> %timeit x[:3] == y
10000000 loops, best of 3: 151 ns per loop
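Outside of IPython the same comparison can be reproduced with the standard-library timeit module; this is just a sketch of the measurement above (the absolute numbers will differ by machine and Python version):

```python
import timeit

# Bind the method once in setup, mirroring `sw = x.startswith` above.
setup = "x = 'foobar'; y = 'foo'; sw = x.startswith"

for stmt in ["x.startswith(y)", "sw(y)", "x[:3] == y"]:
    # timeit.timeit returns total seconds for `number` executions.
    t = timeit.timeit(stmt, setup=setup, number=1_000_000)
    print(f"{stmt:20s} {t / 1_000_000 * 1e9:.0f} ns per call")
```

The prebound sw(y) typically lands between the other two, consistent with attribute lookup accounting for part of the gap.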
Another portion of the difference can be explained by the fact that startswith
is a function, and even no-op function calls take a bit of time:
>>> def f():
... pass
...
>>> %timeit f()
10000000 loops, best of 3: 105 ns per loop
This does not totally explain the difference, since the version using slicing and len also makes a function call yet is still faster than sw(y) above (267 ns):
>>> %timeit x[:len(y)] == y
1000000 loops, best of 3: 213 ns per loop
My only guess here is that Python optimizes lookup time for built-in functions, or that len
calls are heavily optimized (which is probably true). It might be possible to test that with a custom len
function. Or possibly this is where the differences identified by LastCoder kick in. Note also larsmans' results, which indicate that startswith
is actually faster for longer strings. The whole line of reasoning above applies only to those cases where the overhead I'm talking about actually matters.
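The custom-len test suggested above could be sketched like this. Here my_len is a made-up pure-Python stand-in for len, so comparing the two timings only shows the cost of a Python-level call versus the built-in; it does not reveal CPython's internals:

```python
import timeit

def my_len(s):
    # Hypothetical pure-Python stand-in for the built-in len,
    # used only to compare call overhead.
    n = 0
    for _ in s:
        n += 1
    return n

x = 'foobar'
y = 'foo'

# Sanity check: the stand-in gives the same slice as built-in len.
assert x[:my_len(y)] == y

n_runs = 100_000
t_builtin = timeit.timeit("x[:len(y)] == y",
                          globals={"x": x, "y": y}, number=n_runs)
t_custom = timeit.timeit("x[:my_len(y)] == y",
                         globals={"x": x, "y": y, "my_len": my_len},
                         number=n_runs)
print(f"built-in len: {t_builtin:.4f}s  custom len: {t_custom:.4f}s")
```

On CPython the built-in version should come out well ahead, which at least confirms that len calls are cheap relative to a Python-level function call.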