You can't use `int 10h` (function `0Eh`) for character output while you collect bits into `bx`. That interrupt call requires `bl` set to the foreground colour of the text and `bh` set to the text page.
Also, in `bx` you will count the number of ones, not the input number. Try your original code in a debugger: put a breakpoint after the loop and enter (blindly, if nothing is echoed) for example "1100110011001100"; `bx` will be 8 (I may be wrong if some `int` call destroys `bx`, I didn't run it, just in my head).
So to fix your input part I would use `int 21h` function `2` to display the characters instead, like this (this also fixes the accumulation of the result in `bx`):
```asm
; read 16 bits from keyboard ('0'/'1' characters accepted only)
    mov  cx, 16      ; loop runs 16 times because I need 16 bit binary input
    xor  bx, bx      ; result number (initialized to zero)
read:
    mov  ah, 10h
    int  16h         ; read character from keyboard
    cmp  al, '0'
    jb   read        ; ASCII character below '0' -> re-read it
    cmp  al, '1'
    ja   read        ; ASCII character above '1' -> re-read it
    mov  dl, al      ; keep ASCII for output in DL
    shr  al, 1       ; turn ASCII '0'(0x30)/'1'(0x31) into CF=0/1 (Carry Flag)
    rcl  bx, 1       ; roll that CF into the result from the right (shifting previous bits up)
    mov  ah, 2       ; output character in DL on screen
    int  21h
    loop read        ; read 16 bits
```
I didn't check the rest of the code, because if I did, I would have a strong itch to rewrite it completely, so let's stick with the input part only for the moment.
The debugger should allow you to step one instruction at a time (or to put breakpoints on any line and run up to them).
So you can examine values in registers and memory after each step.
If you, for example, put a breakpoint ahead of the `add bx,ax` in your original code, you should be able to see in the debugger (after hitting the "1" key and the debugger breaking on the `add`) that `ax` is 1 (according to the key pressed), and `bx` goes from 0 up to the count of "1" key presses over further iterations.
After about four "1" key presses it should be obvious to you that `bx` equal to 4 (`0100` in binary) is far off from `1111`, so something doesn't work as you wanted, and you have to readjust from "what I wanted to write there" to "what I really wrote", read your code again, and understand what needs to be changed to get the expected result.
In your case, for example, adding the instruction `shl bx,1` ahead of the `add` would fix the situation (moving the old bits one position "up" and leaving the least significant bit zero, i.e. ready for the `add bx,ax`).
Keep trying the debugger stuff hard; it's almost impossible to do anything in assembly without figuring out the debugger. Or keep asking here about what you see and what you don't understand. It's really absolutely essential for assembly programming.
The other option is to "emulate" the CPU in your head and run the instructions from the screen with help notes (I strongly suggest paper; a PC somehow doesn't work well for me). This is much more difficult and tedious than using a debugger, and it may take weeks or months before you start to "emulate" without too many mistakes, so that you usually spot bugs on the first try. On the bright side, it would give you a deep understanding of how the CPU works.
Now about the second part (number to hexadecimal string conversion).
I will try to help you understand what you have at hand, and pick out some mistakes from the original code to demonstrate how to work with it.
So you have a 16 bit number, like:

    1010 1011 1100 1101 (unsigned decimal 43981)

I put spaces between each group of 4 bits (such a group is called a "nibble"), because there's a funny fact: each nibble forms exactly one hexadecimal digit. So the number above is in hexadecimal:

    A B C D (10, 11, 12, 13)

Check how each hex digit corresponds with the 4 bits above.
So what you want is to break the original 16 bit value into four 4 bit numbers, from most significant to least significant (b12-b15, b8-b11, b4-b7, b0-b3, where the bits of the 16 bit number are "b15 b14 b13 ... b2 b1 b0").
Each such number will have a value of 0-15 (they are 4 bits, using all possible combinations), so you then want to turn it into an ASCII character: '0'-'9' for values 0-9, and 'A'-'F' for values 10-15.
Each converted value is stored into the memory buffer at the next byte position, so in the end they form the string "ABCD".
This may sound "obvious", but it's a complete description of the inner calculation of part 2, so make sure you really understand each step; then you can check your code against it any time and search for differences.
Now I will show you some of the bugs I see in the second part, trying to connect them to the "theory" above.
Data and structures first:

```asm
HEX_Out DB "00", 13, 10, '$'
```

This compiles to the bytes `'0', '0', 13, 10, '$'` (or `30 30 0D 0A 24` when viewed as hexadecimal bytes).
If you write `'A', 'B', 'C', 'D'` over it, can you spot the problem?
You reserved only two bytes (the "00") for the number, but you write four bytes, so the 13 and 10 will be overwritten as well.
Now about `IntegerToHexFromMap`: from the code it looks like you don't understand what `and` and `shr` do (search for an explanation of the bitwise operations).
You extract the same b4-b7 bits from `bx` (a copy of `ax`) for the first three characters, then for the fourth letter you extract bits b0-b3. So this is your attempt to extend 8 bit conversion code to 16 bit, but you don't extract the correct bits.
I will try to comment the first part of it extensively, to give you an idea of what you did:

```asm
; bx = 16 bit value, mark each bit as "a#" from a0 to a15
and bx, 00FFh
; the original: a15 a14 a13 ... a2 a1 a0 bits get
; AND-ed by:      0   0   0 ...  1  1  1
; resulting in bx = "a7 to a0 remain, the rest is cleared to 0"
shr bx, 1
; shifts bx to the right by one bit, inserting 0 into the top bit
; bx = 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 a2 a1 (a0 is in CF)
shr bx, 1
; shifts it further
; bx = 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 a2 (a1 is in CF)
shr bx, 1
; bx = 0 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 (a2 ...)
shr bx, 1
; bx = 0 0 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4
; so if bx was value 0x1234 at the beginning, now bx = 0x0003
; the conversion to ASCII and the write are OK
```
So you take bits b4-b7 for the first character, but you need bits b12-b15. I hope you fully get this one; I know it can be confusing at the start which bit is which, and why things are sometimes on the right and sometimes on the left.
Bits are usually named from least significant (value 2^0 = 1, so I call it "b0") to most significant (value 2^15 = 32768 in the case of a 16 bit number; I call it "b15").
But for numeric reasons bits are written from most significant to least significant (as in binary numbers), so the bits on the "left" start with b15, and the bits on the "right" end with b0.
Shifting to the right moves b_i to b_(i-1), which effectively halves the value, so `shr value,1` can also be viewed as unsigned division by two.
Shifting to the left moves b_i to b_(i+1), effectively multiplying the value by two (instructions `shl` and `sal`, both producing the same result, as b0 is set to zero by both).
`sar` is the "arithmetic" shift right, keeping the value of the most significant bit (the sign bit) intact, so for -1 (all bits set) it will again produce -1; for all other numbers it works as signed division by two.
BTW, since the 80186 CPU you can use `shr bx,4` (which can also be seen as division by 16 = 2*2*2*2). Are you really forced to code for the 8086? Then it may be worth loading `cl` with 4 and doing `shr bx,cl` instead of four `shr bx,1` instructions. Four identical lines annoy the hell out of me.
Also, if you already understand what `and` does, this must look ridiculous to you now:

```asm
and bx, 00FFh ; why not 0Fh already here???
and bl, 0Fh
```
Now contemplate for a while how to extract bits b12-b15 for the first character, and how to fix your `IntegerToHexFromMap`.
And finally I will show you how I would rewrite it to keep the code very short (the source, but also the binary size; for performance I would write different code, and not for the 8086, but this one should work on the 8086):
WARNING: try to fix your version on your own using the advice above. Only once you have a fixed version, look at my code as an inspiration for new ideas about how some things were written 30 years ago. Also, if you are doing a school assignment, make sure you can explain everything about the XLAT instruction off the top of your head, because as a lecturer I would be highly suspicious of any student using this one; it's ancient history, and since compilers don't use it, it's obvious the code was written by a human, and probably an experienced one.
```asm
IntegerToHexFromMap PROC
    ; ax = number to convert, di = string buffer to write to
    ; modifies: ax, bx, cx, dx, di
    ; copy of number to convert (AX will be used for calculation)
    mov  dx, ax
    ; initialize other helpful values before the loop
    mov  bx, OFFSET HEX_Map ; pointer to hex-character table
    mov  cx, 00404h         ; for rotation of bits and loop counter
    ; cl = 4, ch = 4 (!) The hexadecimal format allows me
    ; to position the two "4" easily in a single 16b value.
FourDigitLoop:              ; I will do every digit with the same code, in a loop
    ; move the next nibble (= hex digit) in DX into the b0-b3 position
    rol  dx, cl
    ; copy DX b0-b3 into AL, clear the other bits (AL = value 0-15)
    mov  al, dl
    and  al, 0Fh
    ; convert 0-15 in AL into an ASCII char by a special 8086 instruction
    ; designed to do exactly this task (ignored by C/C++ compilers :))
    xlat
    ; write it into the string, and move the string pointer to the next char
    mov  [di], al
    inc  di
    ; loop through all 4 digits (16 bits)
    dec  ch
    jnz  FourDigitLoop
    ret
IntegerToHexFromMap ENDP
```
If you just use this code without understanding how it works, god will kill a kitten... you don't want that, right?
Final disclaimer: I don't have any 16 bit x86 environment, so I wrote all the code without testing it (I do try to compile it sometimes, but my assembler's syntax is NASM-like, so I didn't do that for these MASM/TASM/emu8086 sources). Thus some syntax bugs may be there (maybe even a functional bug? :-O); if you are unable to make it work, comment.