This really depends on the domain of the values you want to take logarithms of.
For IEEE doubles, many processors can compute a logarithm with a single assembly instruction; x86, for example, has the FYL2X and FYL2XP1 instructions. Although instructions like these typically compute the logarithm only in some fixed base, they can still be used to take logarithms in arbitrary bases via the change-of-base identity
log_a(b) = log_c(b) / log_c(a)
that is, by simply taking two logarithms in the supported base and finding their quotient.
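For instance, here's a minimal Python sketch of the change-of-base trick; `math.log2` stands in for whatever fixed-base logarithm your hardware or library provides:

```python
import math

def log_base(x: float, base: float) -> float:
    """Logarithm of x in an arbitrary base, via the change-of-base identity."""
    # log_base(x) = log2(x) / log2(base); any fixed-base logarithm works here.
    return math.log2(x) / math.log2(base)

print(log_base(81.0, 3.0))  # ~4.0
```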
For general integers (of arbitrary precision), you can compute logarithms with repeated squaring combined with a binary search, using only O(log log n) arithmetic operations: each squaring doubles the exponent, so you can square the base only about log log n times before exceeding n, and a binary search over the exponent then pins down the answer. Using some cute tricks with Fibonacci numbers, you can do this in only O(log n) space. If you're computing the binary logarithm, there are bit-shifting tricks that compute the value in less time, though the asymptotic complexity is the same.
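Here is one way the squaring-plus-binary-search idea can look in Python (a sketch of that approach only; it does not include the Fibonacci-number space optimization mentioned above):

```python
def ilog(n: int, base: int) -> int:
    """floor(log_base(n)) for arbitrary-precision integers, n >= 1, base >= 2."""
    if n < 1 or base < 2:
        raise ValueError("need n >= 1 and base >= 2")
    # Phase 1: repeated squaring. powers[k] == base**(2**k); stop once the next
    # square would exceed n, so only O(log log n) squarings are performed.
    powers = [base]
    while powers[-1] ** 2 <= n:
        powers.append(powers[-1] ** 2)
    # Phase 2: binary search on the exponent, consuming the precomputed powers
    # from largest to smallest -- effectively reading off the bits of the answer.
    exponent = 0
    remaining = n
    for k in reversed(range(len(powers))):
        if powers[k] <= remaining:
            remaining //= powers[k]
            exponent += 1 << k
    return exponent

print(ilog(10**1000, 7))  # floor(log_7(10^1000))
```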
For arbitrary real numbers, the logic is harder. You can use Newton's method or a Taylor series to compute the logarithm to within a given precision, though I confess I'm not familiar with the details of those methods. However, you rarely need to do this, because most real numbers you'll encounter are IEEE doubles, and in that case there are better algorithms (or even hardware instructions).
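As a rough illustration of the series approach (a sketch I'm adding, not the method any particular library uses): reduce the argument to [0.5, 1) with `math.frexp`, then sum the series ln(m) = 2·(t + t³/3 + t⁵/5 + …) with t = (m − 1)/(m + 1), which converges for any positive m:

```python
import math

LN2 = 0.6931471805599453  # ln(2)

def ln(x: float, terms: int = 30) -> float:
    """Natural log via argument reduction plus the series
    ln(m) = 2 * (t + t**3/3 + t**5/5 + ...),  t = (m - 1) / (m + 1)."""
    if x <= 0.0:
        raise ValueError("ln is only defined for positive x")
    # Reduce: x = m * 2**e with m in [0.5, 1), so ln(x) = e*ln(2) + ln(m).
    m, e = math.frexp(x)
    t = (m - 1.0) / (m + 1.0)   # |t| <= 1/3 after reduction, so it converges fast
    t_sq = t * t
    term, total = t, 0.0
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= t_sq
    return e * LN2 + 2.0 * total

print(ln(10.0), math.log(10.0))  # should agree to roughly double precision
```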
Hope this helps!