I compiled the following C program with gcc -O3 -S a.c:
#include <stdio.h>

int main() {
    int sum = 0;
    for (int i = 0; i <= 100; i++) {
        sum += i;
    }
    printf("%d", sum);
}
The generated assembly is as follows:
    .section __TEXT,__text,regular,pure_instructions
    .build_version macos, 10, 15 sdk_version 10, 15, 4
    .globl _main                        ## -- Begin function main
    .p2align 4, 0x90
_main:                                  ## @main
    .cfi_startproc
## %bb.0:
    pushq %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset %rbp, -16
    movq %rsp, %rbp
    .cfi_def_cfa_register %rbp
    leaq L_.str(%rip), %rdi
    movl $5050, %esi                    ## imm = 0x13BA
    xorl %eax, %eax
    callq _printf
    xorl %eax, %eax
    popq %rbp
    retq
    .cfi_endproc
                                        ## -- End function
    .section __TEXT,__cstring,cstring_literals
L_.str:                                 ## @.str
    .asciz "%d"
.subsections_via_symbols
It is as if GCC ran the code, noticed that the loop's trip count is fixed at compile time, and replaced the whole calculation with the result 5050:

    movl $5050, %esi
How does gcc perform this kind of optimization?
What is the academic name for this kind of optimization?
question from:
https://stackoverflow.com/questions/65896918/how-gcc-optimize-sum-from-1-to-100-into-5050