There's a massive difference between MSVC inline asm and GNU C inline asm. GNU C syntax is designed for optimal output without wasted instructions, e.g. for wrapping a single instruction or a short sequence. MSVC syntax is designed to be fairly simple, but AFAICT it's impossible to use without the latency and extra instructions of a round trip through memory for your inputs and outputs.
If you're using inline asm for performance reasons, this makes MSVC inline asm only viable if you write a whole loop entirely in asm, not for wrapping short sequences in an inline function. The example below (wrapping `idiv` with a function) is the kind of thing MSVC is bad at: ~8 extra store/load instructions.
MSVC inline asm (used by MSVC and probably icc, maybe also available in some commercial compilers):
- looks at your asm to figure out which registers your code steps on.
- can only transfer data via memory. Data that was live in registers is stored by the compiler to prepare for your `mov ecx, shift_count`, for example. So using a single asm instruction that the compiler won't generate for you involves a round-trip through memory on the way in and on the way out (see the sketch after this list).
- more beginner-friendly, but it's often impossible to avoid the overhead of getting data in and out. Even besides the syntax limitations, the optimizer in current versions of MSVC isn't good at optimizing around inline asm blocks, either.
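To illustrate the round-trip problem, here's a minimal sketch (my own code, not from the original answer): even a one-instruction wrapper has to move its input and output through named C variables, i.e. through memory. (In real code you'd just use the _byteswap_ulong intrinsic for this.)

// Hypothetical MSVC wrapper, 32-bit only, purely to show the data flow.
unsigned bswap32(unsigned x) {
    unsigned result;
    __asm {
        mov   eax, x        // the compiler has to have x in memory for this load
        bswap eax
        mov   result, eax   // and the output goes back out through memory
    }
    return result;
}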
GNU C inline asm is not a good way to learn asm. You have to understand asm very well so you can tell the compiler about your code. And you have to understand what compilers need to know. That answer also has links to other inline-asm guides and Q&As. The x86 tag wiki has lots of good stuff for asm in general, but just links to that for GNU inline asm. (The stuff in that answer is applicable to GNU inline asm on non-x86 platforms, too.)
GNU C inline asm syntax is used by gcc, clang, icc, and maybe some commercial compilers which implement GNU C:
- You have to tell the compiler what you clobber. Failure to do this will lead to breakage of surrounding code in non-obvious hard-to-debug ways.
- Powerful but hard to read, learn, and use: constraint syntax tells the compiler how to supply inputs and where to find outputs. e.g. `"c" (shift_count)` will get the compiler to put the `shift_count` variable into `ecx` before your inline asm runs (see the sketch after this list).
- Extra clunky for large blocks of code, because the asm has to be inside a string constant. So you typically need something like:
"insn %[inputvar], %%reg\n\t"    // comment
"insn2 %%reg, %[outputvar]\n\t"
- Very unforgiving / harder, but allows lower overhead, especially for wrapping single instructions. (Wrapping single instructions was the original design intent, which is why you have to specially tell the compiler about early clobbers to stop it from using the same register for an input and an output if that's a problem.)
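Putting those pieces together, here's a minimal sketch of a single-instruction wrapper (my own code, not from the original answer): the `"c"` constraint pins `shift_count` to ecx/cl, the `"0"` matching constraint ties the input to the same register as output %0, and `"cc"` declares the FLAGS clobber.

static inline unsigned rotl32(unsigned x, unsigned shift_count) {
    unsigned result;
    asm ("roll %%cl, %0"
         : "=r" (result)                // output: any general-purpose register
         : "0" (x), "c" (shift_count)   // inputs: same reg as %0, and ecx
         : "cc");                       // clobber: condition codes (FLAGS)
    return result;
}

With optimization on, this usually compiles to just the rol (plus whatever register moves the allocator needs), with no trip through memory.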
Example: full-width integer division (`div`)
On a 32bit CPU, dividing a 64bit integer by a 32bit integer, or doing a full-multiply (32x32 -> 64), can benefit from inline asm. gcc and clang don't take advantage of `idiv` for `(int64_t)a / (int32_t)b`, probably because the instruction faults if the result doesn't fit in a 32bit register. So unlike this Q&A about getting quotient and remainder from one `div`, this is a use-case for inline asm. (Unless there's a way to inform the compiler that the result will fit, so idiv won't fault.)
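For contrast, here's the plain-C version of the same operation (my own sketch, not part of the original answer). On 32bit x86, gcc and clang compile the 64/32 division below into a call to a libgcc helper (e.g. __divdi3) rather than a single idiv, for exactly the faulting reason above.

#include <stdint.h>

int div64_c(int lo, int hi, int *premainder, int divisor) {
    int64_t dividend = ((int64_t)hi << 32) | (uint32_t)lo;   // reassemble the 64bit value
    *premainder = (int)(dividend % divisor);
    return (int)(dividend / divisor);
}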
We'll use calling conventions that put some args in registers (with `hi` even in the right register), to show a situation that's closer to what you'd see when inlining a tiny function like this.
MSVC
Be careful with register-arg calling conventions when using inline-asm. Apparently the inline-asm support is so badly designed/implemented that the compiler might not save/restore arg registers around the inline asm, if those args aren't used in the inline asm. Thanks @RossRidge for pointing this out.
// MSVC. Be careful with __vectorcall & inline-asm: see the note above
// we could return a struct, but that would complicate things
int __vectorcall div64(int hi, int lo, int divisor, int *premainder) {
    int quotient, tmp;
    __asm {
        mov  edx, hi
        mov  eax, lo
        idiv divisor
        mov  quotient, eax
        mov  tmp, edx
        // mov ecx, premainder   // alternatively, store the remainder through
        // mov [ecx], edx        // the pointer right here (see the variant below)
    }
    *premainder = tmp;
    return quotient;   // or omit the return with a value in eax
}
Update: apparently leaving a value in `eax` or `edx:eax` and then falling off the end of a non-void function (without a `return`) is supported, even when inlining. I assume this works only if there's no code after the asm statement. See Does __asm{}; return the value of eax? This avoids the store/reloads for the output (at least for `quotient`), but we can't do anything about the inputs. In a non-inline function with stack args, they will be in memory already, but in this use-case we're writing a tiny function that could usefully inline.
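A hedged sketch of that idiom (my own code, relying only on the behavior just described; the name div64_eax is made up):

int div64_eax(int hi, int lo, int divisor, int *premainder) {
    __asm {
        mov  edx, hi
        mov  eax, lo
        idiv divisor
        mov  ecx, premainder
        mov  [ecx], edx      // store the remainder through the pointer
        // the quotient stays in eax and becomes the return value; no C return statement
    }
}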
Compiled with MSVC 19.00.23026 `/O2` on rextester (with a `main()` that finds the directory of the exe and dumps the compiler's asm output to stdout).
## My added comments use ## markers.
; ... define some symbolic constants for stack offsets of parameters
; 48 : int ABI div64(int hi, int lo, int divisor, int *premainder) {
sub esp, 16 ; 00000010H
mov DWORD PTR _lo$[esp+16], edx ## these symbolic constants match up with the names of the stack args and locals
mov DWORD PTR _hi$[esp+16], ecx
## start of __asm {
mov edx, DWORD PTR _hi$[esp+16]
mov eax, DWORD PTR _lo$[esp+16]
idiv DWORD PTR _divisor$[esp+12]
mov DWORD PTR _quotient$[esp+16], eax ## store to a local temporary, not *premainder
mov DWORD PTR _tmp$[esp+16], edx
## end of __asm block
mov ecx, DWORD PTR _premainder$[esp+12]
mov eax, DWORD PTR _tmp$[esp+16]
mov DWORD PTR [ecx], eax ## I guess we should have done this inside the inline asm so this would suck slightly less
mov eax, DWORD PTR _quotient$[esp+16] ## but this one is unavoidable
add esp, 16 ; 00000010H
ret 8
There's a ton of extra mov instructions, and the compiler doesn't even come close to optimizing any of it away. I thought maybe it would see and understand the `mov tmp, edx` inside the inline asm, and make that a store to `premainder`. But that would require loading `premainder` from the stack into a register before the inline asm block, I guess.
This function is actually worse with `__vectorcall` than with the normal everything-on-the-stack ABI. With two inputs in registers, it stores them to memory so the inline asm can load them from named variables. If this were inlined, even more of the parameters could potentially be in registers, and it would have to store them all, so the asm's operands would still come from memory! So unlike gcc, we don't gain much from inlining this.
Doing `*premainder = tmp` inside the asm block means more code written in asm, but does avoid the totally braindead store/load/store path for the remainder. This reduces the instruction count by 2 total, down to 11 (not including the `ret`).
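That variant would look something like this (my reconstruction, not the answer's exact code):

int __vectorcall div64_v2(int hi, int lo, int divisor, int *premainder) {
    int quotient;
    __asm {
        mov  edx, hi
        mov  eax, lo
        idiv divisor
        mov  quotient, eax
        mov  ecx, premainder   // load the pointer ...
        mov  [ecx], edx        // ... and store the remainder through it
    }
    return quotient;
}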
I'm trying to get the best possible code out of MSVC, not "use it wrong" and create a straw-man argument. But AFAICT it's horrible for wrapping very short sequences. Presumably there's an intrinsic function for 64/32 -> 32 division that lets the compiler generate good code for this particular case, which would make the whole premise of using inline asm for this on MSVC moot. But it does show that intrinsics are much better than inline asm for MSVC.
GNU C (gcc/clang/icc)
Gcc does even better than the output shown here when inlining div64, because it can typically arrange for the preceding code to generate the 64bit integer in edx:eax in the first place.
I can't get gcc to compile for the 32bit vectorcall ABI. Clang can, but it sucks at inline asm with `"rm"` constraints (try it on the godbolt link: it bounces the function arg through memory instead of using the register option in the constraint). The 64bit MS calling convention is close to the 32bit vectorcall, with the first two params in ecx and edx. The difference is that 2 more params go in regs before using the stack (and that the callee doesn't pop the args off the stack, which is what the `ret 8` was about in the MSVC output).
// GNU C
// change everything to int64_t to do 128b/64b -> 64b division
// MSVC doesn't do x86-64 inline asm, so we'll use 32bit to be comparable
int div64(int lo, int hi, int *premainder, int divisor) {
int quotient, rem;
asm ("idivl %[divsrc]"
: "=a" (quotient), "=d" (rem) // a means eax, d means edx
: "d" (hi), "a" (lo),
[divsrc] "rm" (divisor) // Could have just used %0 instead of naming divsrc
// note the "rm" to allow the src to be in a register or not, whatever gcc chooses.
// "rmi" would also allow an immediate, but unlike adc, idiv doesn't have an immediate form
: // no clobbers
);
*premainder = rem;
return quotient;
}
Godbolt compiler explorer link (gcc 5.3, options: -xc -Wall -std=gnu11 -mabi=ms -m64 -O3 -march=native -fverbose-asm -mno-avx): https://gcc.godbolt.org/
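As a quick usage sketch (my addition; the values are arbitrary), a caller might look like this. With -O2, gcc can usually inline div64 so that little more than the idiv and the remainder store survive:

#include <stdio.h>

int main(void) {
    int rem;
    int q = div64(123456789, 0, &rem, 10);           // divide 0:123456789 by 10
    printf("quotient=%d remainder=%d\n", q, rem);    // quotient=12345678 remainder=9
    return 0;
}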