Implementations of the FPATAN instruction in x86 processors are usually proprietary. To compute arctan, or other (inverse) trigonometric functions, common algorithms follow a three-step process:
- argument reduction for mapping the full input domain to a narrow interval
- computation of the core approximation on the narrow interval (primary approximation interval)
- expansion of the intermediate result based on the argument reduction to produce the final result
The argument reduction is usually based on well-known trigonometric identities that can be looked up in various standard references such as MathWorld (http://mathworld.wolfram.com/InverseTangent.html). For the computation of arctan, commonly used identities are
- arctan (-x) = -arctan(x)
- arctan (1/x) = 0.5 * pi - arctan(x) [x > 0]
- arctan (x) = arctan(c) + arctan((x - c) / (1 + x*c))
Note that the last identity lends itself to the construction of a table of values arctan(i/2^n), i = 1 ... 2^n, which allows the use of an arbitrarily narrow primary approximation interval at the expense of additional table storage. This is a classical programming trade-off between space and time.
The approximation on the core interval is typically a minimax polynomial approximation of sufficient degree. Rational approximations are usually not competitive on modern hardware due to the high cost of floating-point division, and also suffer from additional numerical error, due to the computation of two polynomials plus the error contributed by the division.
The coefficients for minimax polynomial approximations are usually computed using the Remez algorithm (http://en.wikipedia.org/wiki/Remez_algorithm). Tools like Maple and Mathematica have built-in facilities to compute such approximations. The accuracy of polynomial approximations can be improved by making sure all coefficients are exactly representable machine numbers. The only tool I am aware of that has a built-in facility for this is Sollya (http://sollya.gforge.inria.fr/), which offers an fpminimax() function.
The evaluation of polynomials usually utilizes Horner's scheme (http://en.wikipedia.org/wiki/Horner%27s_method) which is efficient and accurate, or a mixture of Estrin's scheme (http://en.wikipedia.org/wiki/Estrin%27s_scheme) and Horner's. Estrin's scheme allows one to make excellent use of instruction level parallelism provided by superscalar processors, with a minor impact on overall instruction count and often (but not always) benign impact on accuracy.
The use of FMA (fused multiply-add) enhances the accuracy and performance of either evaluation scheme due to the reduced number of rounding steps and by offering some protection against subtractive cancellation. FMA is found on many processors, including GPUs and recent x86 CPUs. In standard C and standard C++, the FMA operation is exposed as the fma() standard library function; however, it needs to be emulated on platforms that do not offer hardware support, which makes it slow on those platforms.
From a programming viewpoint one would like to avoid the risk of conversion errors when translating the floating-point constants needed for the approximation and argument reduction from textual to machine representation. ASCII-to-floating-point conversion routines are notorious for containing tricky bugs (e.g. http://www.exploringbinary.com/php-hangs-on-numeric-value-2-2250738585072011e-308/). One mechanism offered by standard C (to the best of my knowledge not standard C++, where it is available only as a proprietary extension) is to specify floating-point constants as hexadecimal literals that directly express the underlying bit pattern, effectively avoiding complicated conversions.
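For example, the C99 hexadecimal literal below spells out the significand of the double nearest to pi digit by digit, so the constant is bit-exact by construction:

```c
/* hexadecimal floating-point literals (C99): significand in hex, binary
   exponent after 'p', so the stored bit pattern is fully explicit */
static const double PI      = 0x1.921fb54442d18p+1;  /* nearest double to pi   */
static const double HALF_PI = 0x1.921fb54442d18p+0;  /* nearest double to pi/2 */
```

Note that halving only decrements the binary exponent, so PI and HALF_PI share the same significand digits.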
Below is C code to compute double-precision arctan() that demonstrates many of the design principles and techniques mentioned above. This quickly-constructed code lacks the sophistication of the implementations pointed to in other answers but should provide results with less than 2 ulps of error, which may be sufficient in various contexts. I created a custom minimax approximation with a simple implementation of the Remez algorithm that used 1024-bit floating-point arithmetic for all intermediate steps. I would expect the use of Sollya or similar tools to result in numerically superior approximations.
#include <math.h>  /* fabs, fma, copysign */

double my_atan (double x)
{
    double a, z, p, r, s, q, o;
    /* argument reduction:
       arctan (-x) = -arctan (x);
       arctan (1/x) = 1/2 * pi - arctan (x), when x > 0
    */
    z = fabs (x);
    a = (z > 1.0) ? 1.0 / z : z;
    /* evaluate minimax polynomial approximation */
    s = a * a; // a**2
    q = s * s; // a**4
    o = q * q; // a**8
    /* use Estrin's scheme for low-order terms */
    p = fma (fma (fma (-0x1.53e1d2a25ff34p-16, s, 0x1.d3b63dbb65af4p-13), q,
                  fma (-0x1.312788dde0801p-10, s, 0x1.f9690c82492dbp-9)), o,
             fma (fma (-0x1.2cf5aabc7cef3p-7, s, 0x1.162b0b2a3bfcep-6), q,
                  fma (-0x1.a7256feb6fc5cp-6, s, 0x1.171560ce4a483p-5)));
    /* use Horner's scheme for high-order terms */
    p = fma (fma (fma (fma (fma (fma (fma (fma (fma (fma (fma (fma (p, s,
        -0x1.4f44d841450e1p-5), s,
         0x1.7ee3d3f36bb94p-5), s,
        -0x1.ad32ae04a9fd1p-5), s,
         0x1.e17813d66954fp-5), s,
        -0x1.11089ca9a5bcdp-4), s,
         0x1.3b12b2db51738p-4), s,
        -0x1.745d022f8dc5cp-4), s,
         0x1.c71c709dfe927p-4), s,
        -0x1.2492491fa1744p-3), s,
         0x1.99999999840d2p-3), s,
        -0x1.555555555544cp-2) * s, a, a);
    /* back substitution based on argument reduction */
    r = (z > 1.0) ? (0x1.921fb54442d18p+0 - p) : p;
    return copysign (r, x);
}
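To substantiate an accuracy claim like "less than 2 ulps", one needs a way to measure ulp distance. A simple sketch (the helper name is my own) exploits the fact that reinterpreting the bits of same-sign finite doubles as integers gives a monotone ordering, so the integer difference counts representable values between them:

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

/* distance in units in the last place between two finite doubles of the
   same sign: the IEEE-754 bit patterns of such doubles are ordered the
   same way as the values, so the integer difference counts ulps */
uint64_t ulp_dist (double a, double b)
{
    uint64_t ia, ib;
    memcpy (&ia, &a, sizeof ia);
    memcpy (&ib, &b, sizeof ib);
    return (ia > ib) ? (ia - ib) : (ib - ia);
}
```

With such a helper one can sweep many arguments and track the maximum of ulp_dist (my_atan (x), atan (x)); for a rigorous bound the comparison should be made against a higher-precision reference rather than the library atan().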