I'm getting 2 places less accuracy than the C# Average()
No, you are only losing 1 significant digit. The float type can only store 7 significant digits; the rest is just random noise. Inevitably, in a calculation like this you accumulate round-off error and thus lose precision. Getting the round-off errors to balance out requires luck.
The only way to avoid it is to use a floating-point type with more precision to accumulate the result. Not an issue here, since double is available. That is why the LINQ Average method looks like this:
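You can see the accumulation effect directly with a small test. This is just an illustrative sketch; the value 0.1f and the loop count are arbitrary, chosen only to make the drift visible:

using System;

class FloatDriftDemo {
    static void Main() {
        // Sum the same value into a float and a double accumulator.
        float floatSum = 0f;
        double doubleSum = 0.0;
        for (int i = 0; i < 1_000_000; i++) {
            floatSum += 0.1f;    // each addition rounds to ~7 significant digits
            doubleSum += 0.1f;   // same values, accumulated with double precision
        }
        Console.WriteLine(floatSum);          // drifts noticeably away from the double result
        Console.WriteLine((float)doubleSum);  // keeps the significant digits a float can hold
    }
}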
public static float Average(this IEnumerable<float> source) {
    if (source == null) throw Error.ArgumentNull("source");
    double sum = 0;        // <=== NOTE: double
    long count = 0;
    checked {
        foreach (float v in source) {
            sum += v;
            count++;
        }
    }
    if (count > 0) return (float)(sum / count);
    throw Error.NoElements();
}
Use a double accumulator in your own code to reproduce the LINQ result with a comparable number of significant digits.
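If you are summing the values yourself rather than calling Average, the same pattern applies. A minimal sketch, where AverageWithDoubleAccumulator and samples are placeholder names for your own code:

using System;
using System.Collections.Generic;

static class ManualAverage {
    // Same idea as the LINQ source: accumulate in double, divide, then cast back to float.
    public static float AverageWithDoubleAccumulator(IEnumerable<float> samples) {
        double sum = 0;
        long count = 0;
        foreach (float v in samples) {
            sum += v;
            count++;
        }
        if (count > 0) return (float)(sum / count);
        throw new InvalidOperationException("Sequence contains no elements");
    }
}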