I have encountered the following bug (or feature) in SQL Server. When I use SUM(*column*), where the *column* has a numeric(18, 8) type, and multiply the result by any other number (integer or decimal), the precision of the result is reduced to numeric(18, 6).
Here is an example script to demonstrate it.
CREATE TABLE #temp (Qnty numeric(18,8))
INSERT INTO #temp (Qnty) VALUES (0.00000001)
INSERT INTO #temp (Qnty) VALUES (0.00000002)
INSERT INTO #temp (Qnty) VALUES (0.00000003)
SELECT Qnty, 1*Qnty
FROM #temp
SELECT (-1)*SUM(Qnty), SUM(Qnty), -SUM(Qnty), SUM(Qnty) * CAST(2.234 as numeric(18,8))
FROM #temp
DROP TABLE #temp
The result of the second SELECT query is:
0.000000 0.00000006 -0.00000006 0.000000
As you can see, when I multiply the SUM, the result is 0.000000.
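For reference, the precision and scale that SQL Server actually assigns to each expression can be inspected with SQL_VARIANTPROPERTY. This is just a sketch that assumes the #temp table from the script above and has to be run before the DROP TABLE:
-- Inspect the type SQL Server assigns to the SUM and to the multiplied SUM
SELECT
    SQL_VARIANTPROPERTY(SUM(Qnty), 'BaseType')          AS SumType,
    SQL_VARIANTPROPERTY(SUM(Qnty), 'Precision')         AS SumPrecision,
    SQL_VARIANTPROPERTY(SUM(Qnty), 'Scale')             AS SumScale,
    SQL_VARIANTPROPERTY((-1) * SUM(Qnty), 'Precision')  AS ProductPrecision,
    SQL_VARIANTPROPERTY((-1) * SUM(Qnty), 'Scale')      AS ProductScale
FROM #temp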
Could anyone explain this strange behavior?
UPD. I executed this query in SQL Server Management Studio against SQL Server 2000, 2005 and 2008.
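A standalone repro, based on my (unverified) assumption that SUM over a numeric(18,8) column comes back with precision 38, shows the same loss of scale without any table at all:
-- Multiplying an explicit numeric(38,8) value loses scale,
-- while the same multiplication against numeric(18,8) keeps all eight digits
SELECT 1 * CAST(0.00000006 AS numeric(38,8)) AS Prec38,  -- 0.000000
       1 * CAST(0.00000006 AS numeric(18,8)) AS Prec18   -- 0.00000006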