So I've been working recently on an implementation of the Miller-Rabin primality test. I am limiting its scope to all 32-bit numbers, because this is a just-for-fun project that I am doing to familiarize myself with C++, and I don't want to have to work with anything 64-bit for a while. An added bonus is that the algorithm is deterministic for all 32-bit numbers, so I can significantly increase efficiency because I know exactly which witnesses to test.
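As an aside (this is a recollection of a published result, not something taken from my code): one deterministic witness set reported to cover every 32-bit value is

    const unsigned witnesses[] = { 2, 7, 61 };   // valid for all n < 4,759,123,141 > 2^32 - 1

so at most three rounds of the test are ever needed in this range.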
So for low numbers, the algorithm works exceptionally well. However, part of the process relies on modular exponentiation, that is, (num ^ pow) % mod. So, for example,
3 ^ 2 % 5 =
9 % 5 =
4
Here is the code I have been using for this modular exponentiation:
unsigned mod_pow(unsigned num, unsigned pow, unsigned mod)
{
    unsigned test;
    // square-and-multiply: consume pow one bit at a time, squaring num
    // each round and multiplying it into test whenever the bit is set
    for (test = 1; pow; pow >>= 1)
    {
        if (pow & 1)
            test = (test * num) % mod;
        num = (num * num) % mod;
    }
    return test;
}
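As a quick sanity check (these particular calls are just my own illustration, not part of the project), the function does behave as expected for small arguments:

    #include <iostream>

    int main()
    {
        std::cout << mod_pow(3, 2, 5) << '\n';     // 9 % 5       -> prints 4
        std::cout << mod_pow(2, 10, 1000) << '\n'; // 1024 % 1000 -> prints 24
    }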
As you might have already guessed, problems arise when the arguments are all exceptionally large numbers. For example, if I want to test the number 673109 for primality, I will at one point have to find:
(2 ^ 168277) % 673109
Now 2 ^ 168277 is an exceptionally large number, and somewhere in the process the intermediate products overflow, which results in an incorrect evaluation.
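To make the failure concrete (the numbers below are my own illustration): even though num and test are reduced modulo 673109 after every step, the squaring step still multiplies two values that can each be as large as 673108, and

673108 * 673108 = 453,074,379,664
UINT_MAX        =   4,294,967,295

so the 32-bit product wraps around long before the % mod is applied.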
On the reverse side, arguments such as
4000111222 ^ 3 % 1608
also evaluate incorrectly, for much the same reason.
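Here the failure is even more immediate (again, my own arithmetic): num is never reduced modulo 1608 before the first squaring, so the loop computes

4000111222 * 4000111222, which is roughly 1.6 * 10^19,

far beyond UINT_MAX, and the result wraps on the very first pass.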
Does anyone have suggestions for performing modular exponentiation in a way that prevents this overflow and/or manipulates it to produce the correct result? (The way I see it, overflow is just another form of modulo, that is, the product ends up reduced modulo UINT_MAX + 1.)
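One common workaround I've seen (a sketch only, under the assumption that a 64-bit intermediate is acceptable, which is exactly what I was hoping to avoid; mod_pow_wide is just a placeholder name) keeps the interface 32-bit but performs each multiplication in unsigned long long, so the product is exact before % mod brings it back under 2^32:

    unsigned mod_pow_wide(unsigned num, unsigned pow, unsigned mod)
    {
        // same square-and-multiply loop, but the intermediate products are
        // computed in 64 bits so they cannot wrap before the % mod
        unsigned long long test = 1;
        unsigned long long base = num % mod;
        for (; pow; pow >>= 1)
        {
            if (pow & 1)
                test = (test * base) % mod;
            base = (base * base) % mod;
        }
        return static_cast<unsigned>(test);
    }

If staying strictly within 32-bit arithmetic matters, the multiplication itself could instead be replaced by an addition-based double-and-add mulmod, at the cost of extra iterations.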