I am investigating the effect of vectorization on program performance. To that end, I have written the following code:
#include <stdio.h>
#include <sys/time.h>
#include <stdlib.h>

#define LEN 10000000

int main(){
    struct timeval stTime, endTime;

    double* a = (double*)malloc(LEN*sizeof(*a));
    double* b = (double*)malloc(LEN*sizeof(*b));
    double* c = (double*)malloc(LEN*sizeof(*c));

    int k;
    for(k = 0; k < LEN; k++){
        a[k] = rand();
        b[k] = rand();
    }

    gettimeofday(&stTime, NULL);

    for(k = 0; k < LEN; k++)
        c[k] = a[k] * b[k];

    gettimeofday(&endTime, NULL);

    /* Dump c so the compiler cannot optimize the loop away. */
    FILE* fh = fopen("dump", "w");
    for(k = 0; k < LEN; k++)
        fprintf(fh, "c[%d] = %f\n", k, c[k]);
    fclose(fh);

    /* Elapsed time in microseconds. */
    double timeE = (double)(endTime.tv_usec + endTime.tv_sec*1000000
                          - stTime.tv_usec  - stTime.tv_sec*1000000);

    printf("Time elapsed: %f\n", timeE);

    return 0;
}
In this code, I am simply initializing and multiplying two vectors. The results are saved in vector c. What I am mainly interested in is the effect of vectorizing the following loop:
for(k = 0; k < LEN; k++)
    c[k] = a[k] * b[k];
I compile the code using the following two commands:
1) icc -O2 TestSMID.c -o TestSMID -no-vec -no-simd
2) icc -O2 TestSMID.c -o TestSMID -vec-report2
I expected to see a performance improvement, since the second command successfully vectorizes the loop. However, my measurements show no performance improvement when the loop is vectorized.
I may have missed something here since I am not super familiar with the topic. So, please let me know if there is something wrong with my code.
Thanks in advance for your help.
PS: I am using Mac OS X, so there is no need to align the data explicitly, as all allocated memory is 16-byte aligned.
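(To double-check that assumption, here is a minimal alignment check of my own; it only reports what malloc returns on this machine, it does not enforce anything.)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Print how a pointer returned by malloc is aligned. */
int main(void){
    double* p = (double*)malloc(10000000 * sizeof(*p));
    printf("p %% 16 = %lu\n",
           (unsigned long)((uintptr_t)p % 16));  /* 0 means 16-byte aligned */
    free(p);
    return 0;
}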
Edit:
I would like to first thank you all for your comments and answers.
I thought about the answer proposed by @Mysticial, and there are some further points that should be mentioned here.
Firstly, as @Vinska mentioned, c[k] = a[k] * b[k] does not take only one cycle. In addition to incrementing the loop index and comparing k against LEN, there is other work to be done to perform the operation. Looking at the assembly code generated by the compiler, it can be seen that a simple multiplication needs much more than one cycle. The vectorized version looks like:
L_B1.9: # Preds L_B1.8
movq %r13, %rax #25.5
andq $15, %rax #25.5
testl %eax, %eax #25.5
je L_B1.12 # Prob 50% #25.5
# LOE rbx r12 r13 r14 r15 eax
L_B1.10: # Preds L_B1.9
testb $7, %al #25.5
jne L_B1.32 # Prob 10% #25.5
# LOE rbx r12 r13 r14 r15
L_B1.11: # Preds L_B1.10
movsd (%r14), %xmm0 #26.16
movl $1, %eax #25.5
mulsd (%r15), %xmm0 #26.23
movsd %xmm0, (%r13) #26.9
# LOE rbx r12 r13 r14 r15 eax
L_B1.12: # Preds L_B1.11 L_B1.9
movl %eax, %edx #25.5
movl %eax, %eax #26.23
negl %edx #25.5
andl $1, %edx #25.5
negl %edx #25.5
addl $10000000, %edx #25.5
lea (%r15,%rax,8), %rcx #26.23
testq $15, %rcx #25.5
je L_B1.16 # Prob 60% #25.5
# LOE rdx rbx r12 r13 r14 r15 eax
L_B1.13: # Preds L_B1.12
movl %eax, %eax #25.5
# LOE rax rdx rbx r12 r13 r14 r15
L_B1.14: # Preds L_B1.14 L_B1.13
movups (%r15,%rax,8), %xmm0 #26.23
movsd (%r14,%rax,8), %xmm1 #26.16
movhpd 8(%r14,%rax,8), %xmm1 #26.16
mulpd %xmm0, %xmm1 #26.23
movntpd %xmm1, (%r13,%rax,8) #26.9
addq $2, %rax #25.5
cmpq %rdx, %rax #25.5
jb L_B1.14 # Prob 99% #25.5
jmp L_B1.20 # Prob 100% #25.5
# LOE rax rdx rbx r12 r13 r14 r15
L_B1.16: # Preds L_B1.12
movl %eax, %eax #25.5
# LOE rax rdx rbx r12 r13 r14 r15
L_B1.17: # Preds L_B1.17 L_B1.16
movsd (%r14,%rax,8), %xmm0 #26.16
movhpd 8(%r14,%rax,8), %xmm0 #26.16
mulpd (%r15,%rax,8), %xmm0 #26.23
movntpd %xmm0, (%r13,%rax,8) #26.9
addq $2, %rax #25.5
cmpq %rdx, %rax #25.5
jb L_B1.17 # Prob 99% #25.5
# LOE rax rdx rbx r12 r13 r14 r15
L_B1.18: # Preds L_B1.17
mfence #25.5
# LOE rdx rbx r12 r13 r14 r15
L_B1.19: # Preds L_B1.18
mfence #25.5
# LOE rdx rbx r12 r13 r14 r15
L_B1.20: # Preds L_B1.14 L_B1.19 L_B1.32
cmpq $10000000, %rdx #25.5
jae L_B1.24 # Prob 0% #25.5
# LOE rdx rbx r12 r13 r14 r15
L_B1.22: # Preds L_B1.20 L_B1.22
movsd (%r14,%rdx,8), %xmm0 #26.16
mulsd (%r15,%rdx,8), %xmm0 #26.23
movsd %xmm0, (%r13,%rdx,8) #26.9
incq %rdx #25.5
cmpq $10000000, %rdx #25.5
jb L_B1.22 # Prob 99% #25.5
# LOE rdx rbx r12 r13 r14 r15
L_B1.24: # Preds L_B1.22 L_B1.20
And the non-vectorized version is:
L_B1.9: # Preds L_B1.8
xorl %eax, %eax #25.5
# LOE rbx r12 r13 r14 r15 eax
L_B1.10: # Preds L_B1.10 L_B1.9
lea (%rax,%rax), %edx #26.9
incl %eax #25.5
cmpl $5000000, %eax #25.5
movsd (%r15,%rdx,8), %xmm0 #26.16
movsd 8(%r15,%rdx,8), %xmm1 #26.16
mulsd (%r13,%rdx,8), %xmm0 #26.23
mulsd 8(%r13,%rdx,8), %xmm1 #26.23
movsd %xmm0, (%rbx,%rdx,8) #26.9
movsd %xmm1, 8(%rbx,%rdx,8) #26.9
jb L_B1.10 # Prob 99% #25.5
# LOE rbx r12 r13 r14 r15 eax
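For reference, the core of the vectorized loop (label L_B1.17) corresponds roughly to the following SSE2 intrinsics sketch. The function name is mine and this is only an approximation of what the compiler emits, not its exact code:

#include <emmintrin.h>  /* SSE2: __m128d, mulpd, movntpd */

/* Multiply two doubles per iteration and write the result with a
 * non-temporal (streaming) store, roughly like the movsd/movhpd +
 * mulpd + movntpd sequence in the listing above. Assumes n is even
 * and c is 16-byte aligned (required by the streaming store). */
static void mul_stream(const double* a, const double* b, double* c, long n)
{
    long k;
    for (k = 0; k < n; k += 2) {
        __m128d va = _mm_loadu_pd(a + k);
        __m128d vb = _mm_loadu_pd(b + k);
        _mm_stream_pd(c + k, _mm_mul_pd(va, vb));
    }
    _mm_sfence();  /* make the streaming stores globally visible,
                      similar to the trailing mfence in the listing */
}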
Besides this, the processor does not load only 24 bytes. On each access to memory, a full cache line (64 bytes) is loaded. More importantly, since the memory required for a, b, and c is contiguous, the prefetcher would certainly help a lot and load the next blocks in advance.
Having said that, I think the memory bandwidth calculated by @Mysticial is too pessimistic.
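(For scale, a back-of-the-envelope figure of my own: at about 24 bytes of fresh data per iteration, the whole loop streams roughly 24 B x 10^7 = 240 MB through memory.)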
Moreover, using SIMD to improve the performance of a program for a very simple addition is mentioned in the Intel Vectorization Guide. Therefore, it seems we should be able to gain some performance improvement for this very simple loop.
Edit2:
Thanks again for your comments. Also, thanks to @Mysticial's sample code, I finally saw the effect of SIMD on performance. The problem, as Mysticial mentioned, was memory bandwidth. By choosing sizes for a, b, and c small enough to fit into the L1 cache, it can be seen that SIMD can improve performance significantly. Here are the results that I got:
icc -O2 -o TestSMIDNoVec -no-vec TestSMID2.c: 17.34 sec
icc -O2 -o TestSMIDVecNoUnroll -vec-report2 TestSMID2.c: 9.33 sec
And unrolling the loop improves the performance even further:
icc -O2 -o TestSMIDVecUnroll -vec-report2 TestSMID2.c -unroll=8: 8.6 sec
Also, I should mention that it takes only one cycle for my processor to complete an iteration when compiled with -O2.
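For anyone who wants to reproduce this, below is a minimal cache-resident variant along the lines of TestSMID2.c. The array size, repeat count, and the checksum trick are my own choices, not Mysticial's exact code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define LEN  1024          /* 3 * 1024 * 8 B = 24 KB, fits in a 32 KB L1d */
#define REPS 1000000       /* repeat the loop so the runtime is measurable */

int main(void){
    struct timeval st, en;
    double* a = (double*)malloc(LEN * sizeof(*a));
    double* b = (double*)malloc(LEN * sizeof(*b));
    double* c = (double*)malloc(LEN * sizeof(*c));
    double sum = 0.0;
    int k, r;

    for (k = 0; k < LEN; k++) { a[k] = rand(); b[k] = rand(); }

    gettimeofday(&st, NULL);
    for (r = 0; r < REPS; r++) {
        b[0] = r;                  /* defeat hoisting of the inner loop */
        for (k = 0; k < LEN; k++)
            c[k] = a[k] * b[k];
    }
    gettimeofday(&en, NULL);

    /* Consume c so the compiler cannot drop the work entirely. */
    for (k = 0; k < LEN; k++) sum += c[k];

    printf("sum = %f, time = %f s\n", sum,
           (en.tv_sec - st.tv_sec) + (en.tv_usec - st.tv_usec) / 1e6);

    free(a); free(b); free(c);
    return 0;
}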
PS: My computer is a MacBook Pro with a Core i5 @ 2.5 GHz (dual core).