
assembly - How to calculate time for an asm delay loop on x86 linux?

I was going through this link, delay in assembly, about adding a delay in assembly. I want to perform some experiments by adding different delay values.

The code used to generate the delay:

; start delay

mov bp, 43690        ; inner-loop count
mov si, 43690        ; outer-loop count
delay2:
dec bp               ; inner loop: count bp down to 0
nop
jnz delay2
dec si               ; outer loop: count si down to 0
cmp si, 0            ; redundant: dec already set ZF
jnz delay2
; end delay
; note: after the first outer iteration, bp re-enters the inner loop at 0
; and wraps, so the inner loop then runs 65536 times, not 43690

What I understood from the code: the delay is proportional to the time spent executing the loop iterations (43690 × 43690 ≈ 1.9 billion of them). So on a different system, or a different version of the OS, the delay will be different. Am I right?

Can anyone explain how to calculate the amount of delay, in nanoseconds, that the following assembly code generates, so that I can relate my results to the delay I added in my experimental setup?

This is the code I am using to generate the delay, without understanding the logic behind the value 43690 (I used only one loop, versus the two loops in the original source code). To generate different delays (without knowing their actual values), I just varied 43690 to 403690 or some other value.

Code on a 32-bit OS:

movl  $43690, %esi   # ---> if I vary this to 4003690, then what is the delay?
.delay2:
    dec %esi
    nop
    jnz .delay2

How much delay does this assembly code generate?

If I want to generate a delay of 100 ns, 1000 ns, or some other value in microseconds, what initial value do I need to load into the register?

I am using Ubuntu 16.04 (both 32-bit and 64-bit), on an Intel Core i5-7200U CPU @ 2.50GHz and a Core i3-3470 CPU @ 3.20GHz.

Thank you in advance.



1 Reply


There is no very good way to get accurate and predictable timing from fixed counts for delay loops on a modern x86 PC, especially in user-space under a non-realtime OS like Linux. (But you could spin on rdtsc for very short delays; see below). You can use a simple delay-loop if you need to sleep at least long enough and it's ok to sleep longer when things go wrong.

Normally you want to sleep and let the OS wake your process, but this doesn't work for delays of only a couple microseconds on Linux. nanosleep can express it, but the kernel doesn't schedule with such precise timing. See How to make a thread sleep/block for nanoseconds (or at least milliseconds)?. On a kernel with Meltdown + Spectre mitigation enabled, a round-trip to the kernel takes longer than a microsecond anyway.
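
As a hedged illustration of how far the actual wakeup misses a nanosecond-scale request (my example, with an arbitrary 100 ns figure):

    // Request a 100 ns sleep and measure how long we actually slept.
    // Under a stock (non-realtime) kernel, expect tens of microseconds.
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec req = { .tv_sec = 0, .tv_nsec = 100 };   // 100 ns
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long slept = (t1.tv_sec - t0.tv_sec) * 1000000000L
                   + (t1.tv_nsec - t0.tv_nsec);
        printf("requested 100 ns, slept ~%ld ns\n", slept);
        return 0;
    }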

(Or are you doing this inside the kernel? I think Linux already has a calibrated delay loop. In any case, it has a standard API for delays: https://www.kernel.org/doc/Documentation/timers/timers-howto.txt, including ndelay(unsigned long nsecs) which uses the "jiffies" clock-speed estimate to sleep for at least long enough. IDK how accurate that is, or if it sometimes sleeps much longer than needed when clock speed is low, or if it updates the calibration as the CPU freq changes.)
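
For completeness, a hedged sketch of what using that in-kernel API looks like (functions from <linux/delay.h>; the surrounding function is hypothetical):

    /* Kernel-side delays per Documentation/timers/timers-howto.txt:
     * ndelay()/udelay() busy-wait using the calibrated loop count;
     * usleep_range() actually sleeps and is preferred for longer delays,
     * but is only legal in non-atomic context. */
    #include <linux/delay.h>

    static void example_delays(void)
    {
        ndelay(800);            /* busy-wait at least ~800 ns */
        udelay(5);              /* busy-wait at least ~5 us */
        usleep_range(50, 100);  /* sleep 50-100 us, non-atomic context only */
    }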


Your (inner) loop is totally predictable at 1 iteration per core clock cycle on recent Intel/AMD CPUs, whether or not there's a nop in it. It's under 4 fused-domain uops, so you bottleneck on the 1-per-clock loop throughput of your CPUs. (See Agner Fog's x86 microarch guide, or time it yourself for large iteration counts with perf stat ./a.out; a measurement sketch follows below.) So 43690 iterations take about 43690 core cycles: roughly 17.5 µs at 2.5GHz, or 13.7 µs at 3.2GHz. Unless there's competition from another hyperthread on the same physical core...

Or unless the inner loop spans a 32-byte boundary, on Skylake or Kaby Lake (loop buffer disabled by microcode updates to work around a design bug). Then your dec / jnz loop could run at 1 per 2 cycles because it would require fetching from 2 different uop-cache lines.

I'd recommend leaving out the nop to have a better chance of it being 1 per clock on more CPUs, too. You need to calibrate it anyway, so a larger code footprint isn't helpful (so leave out extra alignment, too). (Make sure calibration happens while CPU is at max turbo, if you need to ensure a minimum delay time.)

If your inner loop wasn't quite so small (e.g. more nops), see Is performance reduced when executing loops whose uop count is not a multiple of processor width? for details on front-end throughput when the uop count isn't a multiple of 8. SKL / KBL with disabled loop buffers run from the uop cache even for tiny loops.
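
To do the perf stat measurement suggested above, here's a hedged C harness wrapping the same dec/jnz loop in GNU inline asm (file name and iteration count are arbitrary):

    // Build:   gcc -O2 loop.c -o loop
    // Measure: perf stat ./loop   -> cycles / 1e9 should be close to 1.0
    #include <stdio.h>

    int main(void) {
        unsigned n = 1000000000;    // 1e9 iterations swamps startup overhead
        asm volatile(
            "1:  dec  %0    \n\t"   // same loop body as the question, nop omitted
            "    jnz  1b"
            : "+r"(n));             // n counts down to 0 in a register
        printf("done (n=%u)\n", n); // keeps the asm's result observable
        return 0;
    }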


But x86 doesn't have a fixed clock frequency, and transitions between frequency states stop the clock for ~20k clock cycles (~8.5 µs) on a Skylake CPU.

If running this with interrupts enabled, then interrupts are another unpredictable source of delays. (Even in kernel mode, Linux usually has interrupts enabled. An interrupts-disabled delay loop for tens of thousands of clock cycles seems like a bad idea.)

If running in user-space, then I hope you're using a kernel compiled with realtime support. But even then, Linux isn't fully designed for hard-realtime operation, so I'm not sure how good you can get.

System management mode interrupts are another source of delay that even the kernel doesn't know about. PERFORMANCE IMPLICATIONS OF SYSTEM MANAGEMENT MODE from 2013 says that 150 microseconds is considered an "acceptable" latency for an SMI, according to Intel's test suite for PC BIOSes. Modern PCs are full of voodoo. I think/hope that the firmware on most motherboards doesn't have much SMM overhead, and that SMIs are very rare in normal operation, but I'm not sure. See also Evaluating SMI (System Management Interrupt) latency on Linux-CentOS/Intel machine.

Extremely low-power Skylake CPUs stop their clock with some duty-cycle, instead of clocking lower and running continuously. See this, and also Intel's IDF2015 presentation about Skylake power management.


Spin on RDTSC until the right wall-clock time

If you really need to busy-wait, spin on rdtsc waiting for the current time to reach a deadline. You need to know the reference frequency, which is not tied to the core clock, so it's fixed and nonstop on modern CPUs. There are CPUID feature bits for invariant and nonstop TSC; Linux checks these, so you could look in /proc/cpuinfo for constant_tsc and nonstop_tsc, but really you should just check CPUID yourself on program startup and work out the RDTSC frequency (somehow...).
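
A hedged sketch of that startup check (CPUID leaf 0x80000007, EDX bit 8 is the documented invariant-TSC flag; the surrounding program is just my illustration):

    // Check for invariant TSC: CPUID.80000007H:EDX[8].
    // Linux reports this as nonstop_tsc in /proc/cpuinfo.
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx) && (edx & (1u << 8)))
            puts("invariant TSC: safe to spin on rdtsc as a wall-clock reference");
        else
            puts("TSC may stop or change rate: don't spin on rdtsc");
        return 0;
    }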

I wrote such a loop as part of a silly-computer-tricks exercise: a stopwatch in the fewest bytes of x86 machine code. Most of the code size is for the string manipulation to increment a 00:00:00 display and print it. I hard-coded the 4GHz RDTSC frequency for my CPU.

For sleeps of less than 2^32 reference clocks, you only need to look at the low 32 bits of the counter. If you do your compare correctly, wrap-around takes care of itself. For the 1-second stopwatch, a 4.3GHz CPU would have a problem, but for nsec / usec sleeps there's no issue.
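
A quick hedged C illustration of why the truncated 32-bit compare survives wrap-around (the constants are made up for the example):

    // Unsigned subtraction is modulo 2^32, so (now - start) is the true
    // elapsed count as long as the real delta is below 2^32.
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        uint32_t start = 0xFFFFFF00u;   // sampled just before the low half wraps
        uint32_t now   = 0x00000100u;   // 0x200 reference counts later
        assert(now - start == 0x200u);  // the wrap-around cancels out
        return 0;
    }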

;;; Untested, NASM syntax

default rel
section .data
    ;; RDTSC frequency in counts per 2^16 nanoseconds,
    ;; i.e. 16.16 fixed-point counts per nanosecond.
    ;; 3200000000 would be for a 3.2GHz CPU like your i3-3470.
    ref_freq_fixedpoint: dd  3200000000 * (1<<16) / 1000000000

    ;; The actual integer value is 0x033333,
    ;; which represents a fixed-point value of 3.1999969482421875 GHz.
    ;; Use a different shift count if you want more fractional bits.
    ;; I don't think you need 64-bit operand-size.

section .text
;; nanodelay(unsigned nanos /*edi*/)
;; x86-64 System V calling convention
;; clobbers EAX, ECX, EDX, and EDI
global nanodelay
nanodelay:
    ; Take the initial clock sample as early as possible;
    ; ideally even inline rdtsc into the caller so we don't wait for an I-cache miss.
    rdtsc                    ; edx:eax = current timestamp
    mov     ecx, eax         ; ecx = start
    ; lea ecx, [rax-30]      ; optionally bias the start time to account for
                             ; overhead; maybe make it a variable stored with the frequency

    ; calculate edi = ref counts = nsec * ref_freq
    imul    edi, [ref_freq_fixedpoint]   ; counts * 2^16
    shr     edi, 16          ; actual counts, rounding down

.spinwait:                   ; do {
    pause                    ;   optional but recommended
    rdtsc                    ;   edx:eax = reference cycles since boot
    sub     eax, ecx         ;   delta = now - start; may wrap, but unsigned
                             ;   subtraction still gives the correct 0..n
    cmp     eax, edi         ; } while(delta < sleep_counts)
    jb      .spinwait

    ret

To avoid floating-point for the frequency calculation, I used fixed-point like uint32_t ref_freq_fixedpoint = 3.2 * (1<<16);. This means we just use an integer multiply and shift inside the delay loop. Use C code to set ref_freq_fixedpoint during startup with the right value for the CPU.
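
The answer doesn't prescribe how that startup code should work; here's one hedged sketch, calibrating against CLOCK_MONOTONIC over a short interval (the calibrate_tsc name, the 50 ms window, and exporting ref_freq_fixedpoint from the asm with a global directive are all my assumptions):

    // Measure the RDTSC reference frequency against CLOCK_MONOTONIC and store
    // it as the 16.16 fixed-point counts-per-nanosecond value the asm reads.
    #define _POSIX_C_SOURCE 199309L
    #include <stdint.h>
    #include <time.h>
    #include <x86intrin.h>          // __rdtsc()

    extern uint32_t ref_freq_fixedpoint;   // the dd in the asm .data section

    static uint64_t ns_now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000u + ts.tv_nsec;
    }

    void calibrate_tsc(void) {
        uint64_t t0 = ns_now(), c0 = __rdtsc();
        struct timespec fifty_ms = { 0, 50 * 1000 * 1000 };
        nanosleep(&fifty_ms, NULL);         // let some wall-clock time pass
        uint64_t t1 = ns_now(), c1 = __rdtsc();
        // counts per nanosecond, scaled by 2^16 (~0x33333 for 3.2GHz):
        ref_freq_fixedpoint = (uint32_t)(((c1 - c0) << 16) / (t1 - t0));
    }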

If you recompile this for each target CPU, the multiply constant can be an immediate operand for imul instead of a load from memory (e.g. imul edi, edi, 209715 for a 3.2GHz reference clock).

pause sleeps for ~100 clocks on Skylake, but only ~5 clocks on previous Intel uarches. So it hurts timing precision a bit, maybe sleeping up to ~100 ns past a deadline when the CPU is clocked down to ~1GHz. At a normal ~3GHz speed, that's more like up to +33 ns.

Running continuously, this loop heated up one core of my Skylake i7-6700k at ~3.9GHz by ~15 degrees C without pause, but only by ~9 C with pause. (From a baseline of ~30 C with a big CoolerMaster Gemini II heatpipe cooler, but low airflow in the case to keep fan noise low.)

Adjusting the start-time measurement to be earlier than it really is will let you compensate for some of the extra overhead, like branch-misprediction when leaving the loop, as well as the fact that the first rdtsc doesn't sample the clock until probably near the end of its execution. Out-of-order execution can let rdtsc run early; you might use lfence, or consider rdtscp, to stop the first clock sample from happening out-of-order ahead of instructions before the delay function is called.
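
For instance, a hedged sketch of that fencing with the standard GCC/Clang intrinsics (_mm_lfence and __rdtsc; the helper name is mine):

    // Take the starting TSC sample without letting it execute early:
    // lfence doesn't let later instructions start until earlier ones
    // complete, so rdtsc can't sample the clock ahead of preceding work.
    #include <stdint.h>
    #include <x86intrin.h>

    static inline uint32_t tsc_start_ordered(void) {
        _mm_lfence();                   // order rdtsc after earlier instructions
        return (uint32_t)__rdtsc();     // low 32 bits suffice for short delays
    }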

Keeping the offset in a variable will let you calibrate the constant offset, too. If you can do this automatically at startup, that could be good to handle variations between CPUs. But you need some high-accuracy timer for that to work, and this is already based on rdtsc.

Inlining the first RDTSC into the caller and passing the low 32 bits as another function arg would make sure the "timer" starts right away even if there's an instruction-cache miss or other pipeline stall when calling the delay function. So the I$ miss time would be part of the delay interval, not extra overhead.


The advantage of spinning on rdtsc:

If anything happens that delays execution, the loop still exits at the deadline, unless execution is currently blocked when the deadline passes (in which case you're screwed with any method).

So instead of using exactly n cycles of CPU time, you use CPU time until the reference clock says the requested interval (nsec × reference frequency counts) has passed since the first sample.

With a simple counter delay loop, a delay that's long enough at 4GHz would make you sleep more than 4x too long at 0.8GHz (typical minimum frequency on recent Intel CPUs).

This does run rdtsc twice, so it's not appropriate for delays of only a couple nanoseconds. (rdtsc itself is ~20 uops, and has a throughput of one per 25 clocks on Skylake/Kaby Lake.) I think this is probably the least bad solution for a busy-wait of hundreds or thousands of nanoseconds, though.

Downside: a migration to another core with an unsynced TSC could result in sleeping for the wrong time. But unless your delays are very long, the migration time will be longer than the intended delay.

