There are various open source assemblers such as gas, nasm, and yasm. They have different pseudo-op and macro syntaxes. In many open source projects, the assembly source is run through the C preprocessor to substitute constants and handle platform conditionals. What limitations would gcc have in creating assembler output this way, assuming you can use all current attributes and #pragmas, and excluding translation performance (compile/assemble-to-binary time)? I am not talking about inline assembly.
/* Register numbers are hypothetical encodings for this sketch. */
#define R0 0
#define R1 1
#define R2 2
#define R3 3
#define MOV(RA,RB) (0xFEB10000UL | (RA) << 16 | (RB))
#define ADD(RA,RB) (0xFEB20000UL | (RA) << 16 | (RB))
#define RET        (0xFEB7ABCDUL)
unsigned long add4[] __attribute__((section(".text"))) =
{
    ADD(R0,R1),
    ADD(R2,R3),
    MOV(R1,R2),
    ADD(R0,R1),
    RET
};
I believe that pointer arithmetic can simulate '.' (the location counter) and other labels. Perhaps this is an XY problem; I am trying to understand why there are so many assemblers at all. It seems like everything can be done by the pre-processor and the choice of assembler is really a programmer preference, or there is a technical limitation I am missing.
I guess this might be related to 'Something you can do with an assembler that you can't do with shell code'.
Edit: I have re-tagged this from C to compiler. I am interested in the technical details of an assembler. Is it simply a 1-1 translation plus emitting relocations (as a compiler does), or is there more? I don't mean for people to code assembler as I have outlined above. I am trying to understand what assemblers are doing. I don't believe there is a Dragon book for assemblers. Of course, the pre-processor cannot create a binary by itself and needs additional machinery; it only translates text.