I came across a problem today: I discovered that the way the bit fields in my bytes are ordered depends on the endianness of my processor. Take the following example:
#include <cstdint>

struct S {
    uint8_t a : 3;
    uint8_t b : 5;
};
This struct fits in one byte, but the bit layout depends on the machine:
- Little endian: b4 b3 b2 b1 b0 a2 a1 a0
- Big endian: a2 a1 a0 b4 b3 b2 b1 b0
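For illustration, here is a minimal sketch (my own test, not anything mandated by the standard) that writes known values into the fields and inspects the raw byte with memcpy. On a typical little-endian x86 build with GCC or Clang it prints 0x9d (a in bits 0-2, b in bits 3-7), but the actual allocation order is implementation-defined:

#include <cstdint>
#include <cstdio>
#include <cstring>

struct S {
    uint8_t a : 3;
    uint8_t b : 5;
};

int main() {
    static_assert(sizeof(S) == 1, "expected a one-byte struct");
    S s{};
    s.a = 5;   // binary 101
    s.b = 19;  // binary 10011
    uint8_t raw = 0;
    std::memcpy(&raw, &s, 1);  // copy out the raw byte to see where the bits landed
    // Typical little-endian result: 0x9d (b in the high bits, a in the low bits).
    // A big-endian ABI usually allocates from the MSB instead, giving a different value.
    std::printf("raw byte: 0x%02x\n", raw);
    return 0;
}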
So on a little-endian machine the fields are allocated starting from the LSB, while on a big-endian machine they are allocated starting from the MSB. I once heard Stroustrup say that portability across platforms is a main goal of C++, yet leaving details like this implementation-defined is not portable at all. If I sent this struct over a connection, how would the receiver know which bits map to which fields? Wouldn't it have been easier if the order were fixed? What is the reasoning behind leaving this up to the processor and compiler? The only safe option is to use bit shifts and masks, which takes a lot more code (see the sketch below). It would have been so much easier if my co-workers and I could have counted on a fixed order, the little-endian way for example, but there must have been a reason why they chose not to do it.
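For completeness, here is a minimal sketch of the shift-and-mask approach, assuming we pick our own wire convention of a in the low 3 bits and b in the upper 5 bits of the transmitted byte (that convention is my choice for illustration, not anything the language prescribes):

#include <cstdint>

// Fixed wire layout (chosen by us, independent of compiler and CPU):
// bit 7 .............. bit 0
//  b4 b3 b2 b1 b0 a2 a1 a0
uint8_t pack(uint8_t a, uint8_t b) {
    return static_cast<uint8_t>((a & 0x07u) | ((b & 0x1Fu) << 3));
}

void unpack(uint8_t byte, uint8_t& a, uint8_t& b) {
    a = byte & 0x07u;         // low 3 bits
    b = (byte >> 3) & 0x1Fu;  // high 5 bits
}

Both ends then agree on the byte's meaning regardless of how either compiler happens to lay out the bit fields internally.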