
assembly - Interrupts, Instruction Pointer, and Instruction Queue in 8086

Suppose an external interrupt request is made to the 8086. The processor will handle the interrupt after completing the current instruction being executed (if any). Before the interrupt is handled, the state of the program is also saved (PSW/flags, registers, etc.) by pushing data onto the stack segment.

Now, most tutorials/documents state that the instruction pointer is also pushed onto the stack segment, which makes sense because it was pointing to the next instruction byte in the code segment (just before the interrupt request was made).

But what happens to the instruction queue? Is it also pushed onto the stack segment while an interrupt request is being handled? Or are its contents simply cleared to zero? In that case, shouldn't the instruction pointer be decremented so that it points back to the previous instructions in the code segment (after the interrupt has been served)?

[Diagram: IP pushed onto the stack]

Here, "After interrupt request" actually means "After the interrupt request has been served". What this diagram shows is that before the interrupt request arrived, the instructions were prefetched into the queue, with IP pointing to the address of the next instruction byte in the CS memory segment. To serve the interrupt request, the contents of the registers (including IP and the flags) are pushed onto the stack segment. After the request is served, the previous contents are loaded back, with IP still pointing to the location of the 7th byte (of instruction) and the queue (cache) empty.

This is my doubt: is IP decremented to point back to i1? Secondly, do we need to handle IP manually (e.g., push it onto the stack upon interrupt), or does the interrupt-service routine handle this for us? Any help is appreciated, thanks!


Note: Instruction Queue - The 8086 architecture has a six-byte prefetch instruction pipeline. While the Execution Unit is executing the current instruction, the Bus Interface Unit reads up to six bytes of opcode ahead from memory.



1 Reply


You aren't very clear about what you mean by "instruction queue".

One meaning might be "prefetched instructions". In practice, the processor has speculatively read ahead in the instruction stream from the point of the last completed instruction, following branches or not based on various types of branch-prediction algorithms. Since these are reads, if the processor decides to abandon the current instruction "stream" for another (e.g., the interrupt routine), it simply ignores its read-ahead.

Another meaning might be "instructions partly executed (in flight / in the 'pipeline')", which frequently happens with superscalar CPUs. In the case of an asynchronous interrupt, the processor must complete those instructions that have affected the visible state of the system (e.g., have committed a write to a register or memory), and may or may not complete other instructions depending on the whims of the specific processor's designers. In the case of a synchronous trap, the processor has to complete instructions that affected state, but simply abandons the rest (OP's phrase was "zeros the queue", which has the right concept but the wrong phrasing).

[Adding at OP's request a comment I made]: You say the 8086 has a 6-byte prefetch "instruction pipeline" (a bad term, IMHO). There may have been an implementation with that property, but that's a detail of the implementation, and there's no good reason to believe it is a property of all x86 processors. For modern CPUs, the way instruction prefetch is implemented simply depends on the cleverness of the designers. About all you can reasonably predict is that there will be some prefetch scheme, and that it will be difficult for you to detect its presence in your application program, except for its impact on performance and funny rules about self-modifying code.
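As a loose illustration of those funny rules, here is a real-mode-only sketch in MASM-style 8086 assembly (the label patch and the marker values 1 and 2 are made up for illustration): the store rewrites the immediate byte of the very next instruction, and whether the old or the new value lands in AL depends on whether that byte had already been pulled into the prefetch queue (or, on later CPUs, on how they detect self-modifying code).

        mov  byte ptr cs:[patch+1], 2   ; overwrite the immediate byte of the next MOV
patch:  mov  al, 1                      ; original immediate byte is 1
        ; AL = 1 -> the old byte was already in the prefetch queue when the store ran
        ; AL = 2 -> the CPU fetched (or re-fetched) the modified byte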

[Answering OP's second question]: Secondly, do we need to manually handle the IP (e.g., push it onto the stack upon interrupt), or does the interrupt-service routine handle this for us?

For any type of trap or interrupt, it is sufficient to store the architecturally defined state (the "registers", PC, etc.). For many processors, it is sufficient for the hardware to store a critical subset of the architectural state, and let the interrupt routine store (and eventually restore) the rest. So the responsibility for storing the entire state is split between hardware and software (to save implementation effort in the hardware).

For the x86 family, typically the instruction pointer (IP), the code segment (CS), and the flags register are pushed by the hardware onto the current stack, control transfers to the interrupt routine, and the interrupt routine has instructions that store the rest of the registers, typically in an operating-system-defined data structure often called a "context block". The interrupt routine does its job and then either returns control to the application by reloading the registers and then reloading IP, CS, and the flags with the special IRET instruction, or it transfers control to an OS scheduler, which chooses to run some other activity and eventually uses the saved context-block contents to restart the application.
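As a concrete illustration, here is a minimal sketch in MASM-style 8086 assembly (the label my_isr and the 8259 PIC end-of-interrupt sequence are assumptions for illustration, not taken from the question): by the time the first instruction of the routine runs, the hardware has already pushed FLAGS, CS, and IP; the routine saves and restores whatever other registers it touches, and IRET pops IP, CS, and FLAGS back.

my_isr  proc far
        push ax                 ; hardware has already pushed FLAGS, CS, and IP
        push dx                 ; software saves the rest of the state it will use
        push ds
        ; ... device-specific work goes here ...
        mov  al, 20h            ; non-specific end-of-interrupt command for the 8259 PIC
        out  20h, al            ; (assumes the PIC at its conventional port 20h)
        pop  ds                 ; restore in reverse order
        pop  dx
        pop  ax
        iret                    ; pops IP, CS, and FLAGS pushed by the hardware
my_isr  endp

This also addresses the earlier doubt about IP: IRET returns to the instruction IP was already pointing at, so nothing is decremented, and the prefetch queue simply refills from that address.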

A really fast interrupt routine might save just enough registers to do its critical work, and then restore those registers before returning to the interruptee.
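A hedged example in the same MASM-style sketch (hypothetical label fast_isr; it assumes the only critical work is acknowledging the 8259 PIC), touching nothing but AX:

fast_isr proc far
        push ax                 ; the only register this handler clobbers
        mov  al, 20h            ; acknowledge the interrupt (non-specific EOI to the PIC)
        out  20h, al
        pop  ax                 ; restore the single saved register
        iret                    ; the hardware-pushed IP, CS, and FLAGS come back here
fast_isr endp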

