Take a look at the following C# code (a function extracted from BuildProtectedURLWithValidity at http://wmsauth.org/examples):
byte[] StringToBytesToBeHashed(string to_be_hashed) {
    byte[] to_be_hashed_byte_array = new byte[to_be_hashed.Length];
    int i = 0;
    foreach (char cur_char in to_be_hashed)
    {
        to_be_hashed_byte_array[i++] = (byte)cur_char;
    }
    return to_be_hashed_byte_array;
}
My question is: what does the cast from char to byte do in terms of encoding?
I guess it really does nothing in terms of encoding, but does that mean Encoding.Default is effectively used, so the bytes returned will depend on how the framework encodes the underlying string on the specific operating system?
And besides, isn't a char actually bigger than a byte (I'm guessing 2 bytes), so the cast will simply drop the high-order byte?
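To check my guess, here is a minimal sketch I put together (the test string and class name are my own example); the cast appears to keep only the low-order byte of each UTF-16 code unit:

using System;

class CastDemo
{
    static void Main()
    {
        // 'é' is U+00E9 (fits in one byte); '€' is U+20AC (does not).
        string s = "A\u00E9\u20AC";
        foreach (char c in s)
        {
            // The unchecked cast truncates the 16-bit char to its low-order byte:
            // 'A' (0x0041) -> 0x41, 'é' (0x00E9) -> 0xE9, '€' (0x20AC) -> 0xAC.
            byte b = (byte)c;
            Console.WriteLine($"U+{(int)c:X4} -> 0x{b:X2}");
        }
    }
}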
I was thinking of replacing all of this with:
Encoding.UTF8.GetBytes(stringToBeHashed)
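For comparison, a minimal sketch of what I have in mind (the variable name and test value are mine):

using System;
using System.Text;

class Utf8Demo
{
    static void Main()
    {
        string stringToBeHashed = "A\u00E9\u20AC";
        // UTF-8 gives the same bytes on every operating system:
        // 'A' -> 1 byte, 'é' -> 2 bytes, '€' -> 3 bytes (6 bytes total).
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(stringToBeHashed);
        Console.WriteLine(BitConverter.ToString(utf8Bytes)); // 41-C3-A9-E2-82-AC
    }
}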
What do you think?