I've been given some ancient documentation for my company and tasked with writing an interface for it. The docs are clear enough about how to send data, but they also state that, as part of the send command, there needs to be an "error checking" byte. That byte is supposedly derived from some formula involving all the other bytes in the stream, but for the life of me I can't figure out how the calculation could possibly produce the documented value. Here is the example stream:
AA 04 00 40 2C 01 2C 01 44
And then the documentation:
To calculate error code: (Byte count)^(Bias Address)^(Data)
Here : (04)^(00)^(40)^(2C)^(01)^(2C)^(01) = 44
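To make the example concrete, this is how I'm reading the frame layout: one fixed start byte, a body that the docs break into byte count / bias address / data, and the trailing error-checking byte. A minimal Python sketch (split_frame is just a name I made up for illustration):

    # Example frame from the documentation.
    EXAMPLE_FRAME = bytes.fromhex("AA 04 00 40 2C 01 2C 01 44")

    def split_frame(frame: bytes):
        """Split a frame into (start byte, body, error-checking byte).

        The body is everything the docs feed into the error-code formula,
        i.e. byte count, bias address and data.
        """
        return frame[0], frame[1:-1], frame[-1]

    start, body, check = split_frame(EXAMPLE_FRAME)
    print(f"start={start:02X}")                          # start=AA
    print("body =", " ".join(f"{b:02X}" for b in body))  # body = 04 00 40 2C 01 2C 01
    print(f"check={check:02X}")                          # check=44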
So we ignore the initiating "AA" byte, and the final byte is somehow the product/sum/granddaughter-twice-removed of the rest of the stream. I've tried multiplying the values together and raising each byte to the power of the next. I've also tried converting the hex values to decimal and doing the same calculations on those. I still can't see how any of this arrives at a value of 44. Does anyone have any ideas how this could happen?
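For reference, here is roughly what I've tried so far, as quick Python checks against the example; none of them lands on 0x44:

    payload = [0x04, 0x00, 0x40, 0x2C, 0x01, 0x2C, 0x01]  # bytes between AA and 44
    expected = 0x44                                        # documented error code

    # Attempt 1: multiply all the values together.
    product = 1
    for b in payload:
        product *= b
    print(hex(product), product == expected)  # 0x0 False (the 0x00 byte zeroes it out)

    # Attempt 2: raise each value to the power of the next one, left to right.
    chained = payload[0]
    for b in payload[1:]:
        chained = chained ** b
    print(chained, chained == expected)       # 1 False (0x04 ** 0x00 is 1, and 1**n stays 1)

    # Attempt 3: converting to decimal first changes nothing: the numeric values
    # are the same regardless of the base they're printed in, so the results
    # above are identical.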
question from:
https://stackoverflow.com/questions/65892563/decipher-poor-documentation-for-sending-serial-commands-in-hex