Well, this implementation is based on virtually the same trick (determine whether a word contains a zero byte) as the glibc implementation you linked. They do pretty much the same thing, except that in the glibc version some loops are unrolled and the bit masks are spelled out explicitly. The `ONES` and `HIGHS` from the code you posted are exactly `lomagic = 0x01010101L` and `himagic = 0x80808080L` from the glibc version.
The only difference I see is that the glibc version uses a slightly different criterion for detecting a zero byte: `if ((longword - lomagic) & himagic)`, without the extra `... & ~longword` (compare this to the `HASZERO(x)` macro in your example, which does the same thing with `x` but also includes the `~(x)` term). Apparently the glibc authors believed this shorter formula is more efficient. Yet it can produce false positives: it also fires on bytes whose high bit is set (e.g. `0x81`), even when no byte is zero. So inside that `if` they re-check the word byte by byte to weed those false positives out.
It is indeed an interesting question which approach is more efficient: a single-stage precise test (your code), or a two-stage test that begins with a rough imprecise check and, only when it fires, falls back to a precise per-byte check (the glibc code).
If you want to see how they compare in terms of actual performance, time them on your platform with your data. There's no other way.