TL;DR: In almost all cases, use pcmpeq/shift to generate a mask, and andps to use it. It has the shortest critical path by far (tied with constant-from-memory), and can't cache-miss.
How to do that with intrinsics
Getting the compiler to emit pcmpeqd on an uninitialized register can be tricky (godbolt). The best way for gcc / icc looks to be:
#include <immintrin.h>

__m128 abs_mask(void) {
    // with clang, this turns into a 16B load,
    // with every calling function getting its own copy of the mask
    __m128i minus1 = _mm_set1_epi32(-1);
    return _mm_castsi128_ps(_mm_srli_epi32(minus1, 1));
}

// MSVC is BAD when inlining this into loops
__m128 vecabs_and(__m128 v) {
    return _mm_and_ps(abs_mask(), v);
}

__m128 sumabs(const __m128 *a) { // quick and dirty, no alignment checks
    __m128 sum = vecabs_and(*a);
    for (int i = 1; i < 10000; i++) {
        // gcc, clang, and icc hoist the mask setup out of the loop after inlining.
        // MSVC doesn't!
        sum = _mm_add_ps(sum, vecabs_and(a[i])); // one accumulator makes addps latency the bottleneck, not throughput
    }
    return sum;
}
clang 3.5 and later "optimizes" the set1 / shift into loading a constant from memory. It will use pcmpeqd to implement set1_epi32(-1), though. TODO: find a sequence of intrinsics that produces the desired machine code with clang. Loading a constant from memory isn't a performance disaster, but having every function use a different copy of the mask is pretty terrible.
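One possible workaround for clang (an untested sketch using GNU extended asm, so gcc/clang only; abs_mask_asm is a hypothetical name, and it reuses the immintrin.h intrinsics from above): hide the all-ones idiom inside an inline-asm statement so the compiler can't constant-fold it into a 16B load.

static inline __m128 abs_mask_asm(void) {
    __m128i allones;
    // pcmpeqd with both operands the same register produces all-ones
    // regardless of the old value, so an output-only operand is fine here.
    __asm__ ("pcmpeqd %0, %0" : "=x"(allones));
    return _mm_castsi128_ps(_mm_srli_epi32(allones, 1));
}

The psrld still gets emitted (the compiler can't see through the asm), but at least there's no per-function copy of the mask in memory.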
MSVC (VS2013): _mm_uninitialized_si128() is not defined. _mm_cmpeq_epi32(self,self) on an uninitialized variable will emit a movdqa xmm, [ebp-10h] in this test case (i.e. load some uninitialized data from the stack). This has less risk of a cache miss than just loading the final constant from memory. However, Kumputer says MSVC didn't manage to hoist the pcmpeqd / psrld out of the loop (I assume when inlining vecabs), so this is unusable unless you manually inline and hoist the constant out of a loop yourself.
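A minimal sketch of that manual hoist (sumabs_hoisted and its n parameter are names I'm making up here): generate the mask once before the loop, so there's nothing for MSVC to re-materialize per iteration.

__m128 sumabs_hoisted(const __m128 *a, int n) {  // still no alignment checks
    // all-ones without reading an uninitialized variable: 0 == 0 is true in every lane
    __m128i minus1 = _mm_cmpeq_epi32(_mm_setzero_si128(), _mm_setzero_si128());
    const __m128 absmask = _mm_castsi128_ps(_mm_srli_epi32(minus1, 1));
    __m128 sum = _mm_and_ps(absmask, a[0]);
    for (int i = 1; i < n; i++)
        sum = _mm_add_ps(sum, _mm_and_ps(absmask, a[i]));
    return sum;
}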
Using _mm_srli_epi32(_mm_set1_epi32(-1), 1) results in a movdqa to load a vector of all -1 (hoisted outside the loop), and a psrld inside the loop. So that's completely horrible. If you're going to load a 16B constant, it should be the final vector. Having integer instructions generating the mask every loop iteration is also horrible.
Suggestions for MSVC: Give up on generating the mask on the fly, and just write
const __m128 absmask = _mm_castsi128_ps(_mm_set1_epi32(~(1<<31)));
Probably you'll just get the mask stored in memory as a 16B constant. Hopefully not duplicated for every function that uses it. Having the mask in a memory constant is more likely to be helpful in 32bit code, where you only have 8 XMM registers, so vecabs can just ANDPS with a memory source operand if it doesn't have a register free to keep a constant lying around.
TODO: find out how to avoid duplicating the constant everywhere it's inlined. Probably using a global constant, rather than an anonymous set1, would be good. But then you need to initialize it, and I'm not sure intrinsics work as initializers for global __m128 variables. You want it to go in the read-only data section, not to have a constructor that runs at program startup.
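If intrinsics don't work as static initializers, one sketch that should still land the constant in .rodata (the absmask_bits / vecabs_global names are mine, and the alignment-attribute spelling is compiler-specific) is an aligned integer array plus an aligned load that the compiler can hopefully fold into an ANDPS memory operand:

#ifdef _MSC_VER
__declspec(align(16)) static const unsigned int absmask_bits[4] =
    { 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu };
#else
static const unsigned int absmask_bits[4] __attribute__((aligned(16))) =
    { 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu };
#endif

static inline __m128 vecabs_global(__m128 v) {
    // aligned 16B load; ideally folded into ANDPS xmm, [absmask_bits]
    __m128 mask = _mm_castsi128_ps(_mm_load_si128((const __m128i*)absmask_bits));
    return _mm_and_ps(mask, v);
}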
Alternatively, use
__m128i minus1; // undefined
#if _MSC_VER && !__INTEL_COMPILER
minus1 = _mm_setzero_si128(); // PXOR is cheaper than MSVC's silly load from the stack
#endif
minus1 = _mm_cmpeq_epi32(minus1, minus1); // or use some other variable here, which will probably cost a mov insn without AVX, unless the variable is dead.
const __m128 absmask = _mm_castsi128_ps(_mm_srli_epi32(minus1, 1));
The extra PXOR is quite cheap, but it's still a uop and another 4 bytes of code size. If anyone has a better solution to overcome MSVC's reluctance to emit the code we want, leave a comment or edit. This is no good if inlined into a loop, though, because the pxor/pcmp/psrl will all end up inside the loop.
Loading a 32bit constant with movd and broadcasting with shufps might be ok (again, you probably have to manually hoist this out of a loop, though). That's 3 instructions (mov-immediate to a GP reg, movd, shufps), and movd is slow on AMD where the vector unit is shared between two integer cores. (Their version of hyperthreading.)
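In intrinsics that idea looks roughly like this (a sketch; _mm_shuffle_epi32 maps to pshufd rather than shufps, but it's the same broadcast-the-low-dword trick, and absmask_movd is a made-up name):

static inline __m128 absmask_movd(void) {
    __m128i m = _mm_cvtsi32_si128(0x7FFFFFFF);  // mov-immediate to a GP reg, then movd
    m = _mm_shuffle_epi32(m, 0);                // broadcast element 0 to all four lanes
    return _mm_castsi128_ps(m);
}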
Choosing the best asm sequence
OK, let's look at this for Intel Sandybridge through Skylake, with a bit of mention of Nehalem. See Agner Fog's microarch guides and instruction timings for how I worked this out. I also used Skylake numbers someone linked in a post on the http://realworldtech.com/ forums.
Let's say the vector we want to abs() is in xmm0, and is part of a long dependency chain, as is typical for FP code.
So let's assume any operations that don't depend on xmm0 can begin several cycles before xmm0 is ready. I've tested, and instructions with memory operands don't add extra latency to a dependency chain, assuming the address of the memory operand isn't part of the dep chain (i.e. isn't part of the critical path).
I'm not totally clear on how early a memory operation can start when it's part of a micro-fused uop. As I understand it, the Re-Order Buffer (ROB) works with fused uops, and tracks uops from issue to retirement (168(SnB) to 224(SKL) entries). There's also a scheduler that works in the unfused domain, holding only uops that have their input operands ready but haven't yet executed. uops can issue into the ROB (fused) and scheduler (unfused) at the same time when they're decoded (or loaded from the uop cache). If I'm understanding this correctly, it's 54 to 64 entries in Sandybridge to Broadwell, and 97 in Skylake. There's some unfounded speculation about it not being a unified (ALU/load-store) scheduler anymore.
There's also talk of Skylake handling 6 uops per clock. As I understand it, Skylake will read whole uop-cache lines (up to 6 uops) per clock into a buffer between the uop cache and the ROB. Issue into the ROB/scheduler is still 4-wide. (Even nop is still 4 per clock.) This buffer helps where code alignment / uop cache line boundaries cause bottlenecks for previous Sandybridge-microarch designs. I previously thought this "issue queue" was this buffer, but apparently it isn't.
However it works, the scheduler is large enough to get the data from cache ready in time, if the address isn't on the critical path.
1a: mask with a memory operand
ANDPS xmm0, [mask] # in the loop
- bytes: 7 insn, 16 data. (AVX: 8 insn)
- fused-domain uops: 1 * n
- latency added to critical path: 1c (assuming L1 cache hit)
- throughput: 1/c. (Skylake: 2/c) (limited by 2 loads / c)
- "latency" if
xmm0
was ready when this insn issued: ~4c on an L1 cache hit.
1b: mask from a register
movaps xmm5, [mask] # outside the loop
ANDPS xmm0, xmm5 # in a loop
# or PAND xmm0, xmm5 # higher latency, but more throughput on Nehalem to Broadwell
# or with an inverted mask, if set1_epi32(0x80000000) is useful for something else in your loop:
VANDNPS xmm0, xmm5, xmm0 # It's the dest that's NOTted, so non-AVX would need an extra movaps
- bytes: 10 insn + 16 data. (AVX: 12 insn bytes)
- fused-domain uops: 1 + 1*n
- latency added to a dep chain: 1c (with the same cache-miss caveat for early in the loop)
- throughput: 1/c. (Skylake: 3/c)
PAND is throughput 3/c on Nehalem to Broadwell, but latency=3c (if used between two FP-domain operations, and even worse on Nehalem). I guess only port5 has the wiring to forward bitwise ops directly to the other FP execution units (pre Skylake). Pre-Nehalem, and on AMD, bitwise FP ops are treated identically to integer vector ops, so they can run on all ports, but have a forwarding delay.
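For reference, the inverted-mask (ANDNPS) variant from the 1b asm looks like this in intrinsics (a sketch; vecabs_andn is a hypothetical name):

static inline __m128 vecabs_andn(__m128 v, __m128 signmask) {
    return _mm_andnot_ps(signmask, v);   // (~signmask) & v clears the sign bits
}

// usage: signmask has only the sign bits set, and may be reusable for negation / copysign elsewhere
// __m128 signmask = _mm_set1_ps(-0.0f);   // 0x80000000 in every element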
1c: generate the mask on the fly:
# outside a loop
PCMPEQD xmm5, xmm5 # set to 0xff... Recognized as independent of the old value of xmm5, but still takes an execution port (p1/p5).
PSRLD xmm5, 1 # 0x7fff... # port0
# or PSLLD xmm5, 31 # 0x8000... to set up for ANDNPS
ANDPS xmm0, xmm5 # in the loop. # port5
- bytes: 12 (AVX: 13)
- fused-domain uops: 2 + 1*n (no memory ops)
- latency added to a dep chain: 1c
- throughput: 1/c. (Skylake: 3/c)
- throughput for all 3 uops: 1/c saturating all 3 vector ALU ports
- "latency" if
xmm0
was ready when this sequence issued (no loop): 3c (+1c possible bypass delay on SnB/IvB if ANDPS has to wait for integer data to be ready. Agner Fog says in some cases there's no extra delay for integer->FP-boolean on SnB/IvB.)
This version still takes less memory than versions with a 16B constant in memory. It's also ideal for an infrequently-called function, because there's no load to suffer a cache miss.
The "bypass delay" shouldn't be an issue. If xmm0 is part of a long dependency chain, the mask-generating instructions will execute well ahead of time, so the integer result in xmm5 will have time to reach ANDPS before xmm0 is ready, even if it takes the slow lane.
Haswell has no bypass delay for integer results -> FP boolean, according to Agner Fog's testing. His description for SnB/IvB says this is the case with the outputs of some integer instructions. So even in the "standing start" beginning-of-a-dep-chain case where xmm0 is ready when this instruction sequence issues, it's only 3c on Haswell/Broadwell, 4c on SnB/IvB. Latency probably doesn't matter if the execution units are clearing the backlog of uops as fast as they're being issued.
Either way, ANDPS's output will be in the FP domain, and have no bypass delay if used in MULPS or something.
On Nehalem, bypass delays are 2c. So at the start of a dep chain (e.g. after a branch mispredict or I$ miss) on Nehalem, "latency" if xmm0 was ready when this sequence issued is 5c. If you care a lot about Nehalem, and expect this code to be the first thing that runs after frequent branch mispredicts or similar pipeline stalls that leave the OoOE machinery unable to get started on calculating the mask before xmm0 is ready, then this might not be the best choice for non-loop situations.
2a: AVX max(x, 0-x)
VXORPS xmm5, xmm5, xmm5 # outside the loop: a zeroed register for 0 - x