Short summary
- The performance of the subqueries method depends on the data distribution.
- The performance of conditional aggregation does not depend on the data distribution.
The subqueries method can be faster or slower than conditional aggregation; it depends on the data distribution.
Naturally, if the table has a suitable index, then subqueries are likely to benefit from it, because an index allows the engine to scan only the relevant part of the table instead of performing a full scan. A suitable index is unlikely to significantly benefit the conditional aggregation method, because it will scan the full index anyway; the only benefit is that if the index is narrower than the table, the engine has to read fewer pages into memory.
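As an illustration (the table and column names match the test queries below, but this index is hypothetical and was not present in my tests), a narrow index that subqueries could range-scan might look like this:

```sql
-- Hypothetical narrow index on the filter column.
-- A subquery with WHERE datesent > ... can scan just the matching
-- range of this index, while conditional aggregation still reads
-- every row (though from a narrower structure than the base table).
CREATE NONCLUSTERED INDEX IX_LogTable_datesent
    ON dbo.LogTable (datesent);
```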
Knowing this, you can decide which method to choose.
First test
I made a larger test table, with 5M rows. There were no indexes on the table.
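The setup script is not part of the answer; here is a sketch of how such a table could be populated (the schema and the date distribution are my assumptions, not the exact table used in the measurements):

```sql
-- Build a 5M-row heap with dates spread over roughly the last 20 years.
-- IDENTITY(...) is valid here because this is a SELECT ... INTO.
SELECT TOP (5000000)
    IDENTITY(int, 1, 1) AS id,
    DATEADD(day, -(ABS(CHECKSUM(NEWID())) % 7300), GETDATE()) AS datesent
INTO dbo.LogTable
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
```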
I measured the IO and CPU stats using SQL Sentry Plan Explorer. I used SQL Server 2014 SP1-CU7 (12.0.4459.0) Express 64-bit for these tests.
Indeed, your original queries behaved as you described, that is, the subqueries were faster even though their reads were three times higher.
After a few tries on a table without an index, I rewrote your conditional aggregation and added variables to hold the values of the DATEADD expressions. The overall time dropped significantly.
Then I replaced SUM with COUNT, and it became a little bit faster again. With these changes, conditional aggregation ended up pretty much as fast as subqueries.
Warm the cache (CPU=375)
SELECT -- warm cache
COUNT(*) AS all_cnt
FROM LogTable
OPTION (RECOMPILE);
Subqueries (CPU=1031)
SELECT -- subqueries
(
SELECT count(*) FROM LogTable
) all_cnt,
(
SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,-1,GETDATE())
) last_year_cnt,
(
SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,-10,GETDATE())
) last_ten_year_cnt
OPTION (RECOMPILE);
Original conditional aggregation (CPU=1641)
SELECT -- conditional original
COUNT(*) AS all_cnt,
SUM(CASE WHEN datesent > DATEADD(year,-1,GETDATE())
THEN 1 ELSE 0 END) AS last_year_cnt,
SUM(CASE WHEN datesent > DATEADD(year,-10,GETDATE())
THEN 1 ELSE 0 END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Conditional aggregation with variables (CPU=1078)
DECLARE @VarYear1 datetime = DATEADD(year,-1,GETDATE());
DECLARE @VarYear10 datetime = DATEADD(year,-10,GETDATE());
SELECT -- conditional variables
COUNT(*) AS all_cnt,
SUM(CASE WHEN datesent > @VarYear1
THEN 1 ELSE 0 END) AS last_year_cnt,
SUM(CASE WHEN datesent > @VarYear10
THEN 1 ELSE 0 END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Conditional aggregation with variables and COUNT instead of SUM (CPU=1062)
SELECT -- conditional variable, count, not sum
COUNT(*) AS all_cnt,
COUNT(CASE WHEN datesent > @VarYear1
THEN 1 ELSE NULL END) AS last_year_cnt,
COUNT(CASE WHEN datesent > @VarYear10
THEN 1 ELSE NULL END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Based on these results, my guess is that CASE invoked DATEADD for each row, while WHERE was smart enough to calculate it once. On top of that, COUNT is a tiny bit more efficient than SUM.
In the end, conditional aggregation is only slightly slower than subqueries (CPU 1062 vs 1031), maybe because WHERE is a bit more efficient than CASE in itself, and besides, WHERE filters out quite a few rows, so COUNT has fewer rows to process.
In practice I would use conditional aggregation, because I think that the number of reads is more important. If your table is small enough to fit and stay in the buffer pool, then any query will be fast for the end user. But if the table is larger than the available memory, then I expect that reading from disk would slow the subqueries down significantly.
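If you don't have Plan Explorer, you can collect comparable reads and CPU numbers with SQL Server's built-in session statistics, for example:

```sql
-- Report logical/physical reads and CPU/elapsed time
-- for every statement that follows in this session.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) AS all_cnt
FROM LogTable
OPTION (RECOMPILE);

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The output appears in the Messages tab rather than the result set, which is why a dedicated tool such as Plan Explorer is more convenient for side-by-side comparisons.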
Second test
On the other hand, filtering the rows out as early as possible is also important.
Here is a slight variation of the test which demonstrates it. This time I set the threshold to GETDATE() + 100 years, to make sure that no rows satisfy the filter criteria.
Warm the cache (CPU=344)
SELECT -- warm cache
COUNT(*) AS all_cnt
FROM LogTable
OPTION (RECOMPILE);
Subqueries (CPU=500)
SELECT -- subqueries
(
SELECT count(*) FROM LogTable
) all_cnt,
(
SELECT count(*) FROM LogTable WHERE datesent > DATEADD(year,100,GETDATE())
) last_year_cnt
OPTION (RECOMPILE);
Original conditional aggregation (CPU=937)
SELECT -- conditional original
COUNT(*) AS all_cnt,
SUM(CASE WHEN datesent > DATEADD(year,100,GETDATE())
THEN 1 ELSE 0 END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Conditional aggregation with variables (CPU=750)
DECLARE @VarYear100 datetime = DATEADD(year,100,GETDATE());
SELECT -- conditional variables
COUNT(*) AS all_cnt,
SUM(CASE WHEN datesent > @VarYear100
THEN 1 ELSE 0 END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Conditional aggregation with variables and COUNT instead of SUM (CPU=750)
SELECT -- conditional variable, count, not sum
COUNT(*) AS all_cnt,
COUNT(CASE WHEN datesent > @VarYear100
THEN 1 ELSE NULL END) AS last_ten_year_cnt
FROM LogTable
OPTION (RECOMPILE);
Below is a plan with subqueries. You can see that 0 rows went into the Stream Aggregate in the second subquery; all of them were filtered out at the Table Scan step.
As a result, subqueries are again faster.
Third test
Here I changed the filtering criteria of the previous test: all > comparisons were replaced with <. As a result, the conditional COUNT counted all rows instead of none. Surprise, surprise! The conditional aggregation query took the same 750 ms, while the subqueries took 813 ms instead of 500 ms.
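For completeness, the third-test subquery variant (its text is not shown above) differs from test 2 only in the comparison operator; under the stated change it would read:

```sql
SELECT -- subqueries, test 3: '<' now matches every row
(
SELECT count(*) FROM LogTable
) all_cnt,
(
SELECT count(*) FROM LogTable WHERE datesent < DATEADD(year,100,GETDATE())
) last_year_cnt
OPTION (RECOMPILE);
```

The conditional aggregation variants change in the same way: each > in the CASE expressions becomes <.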
Here is the plan for subqueries:
Could you give me an example, where conditional aggregation notably outperforms the subquery solution?
Here it is. The performance of the subqueries method depends on the data distribution, while the performance of conditional aggregation does not. The subqueries method can be faster or slower than conditional aggregation, depending on the data distribution. Knowing this, you can decide which method to choose.
Bonus details
If you hover the mouse over the Table Scan operator, you can see the Actual Data Size in the different variants.
- Simple COUNT(*):
- Conditional aggregation:
- Subquery in test 2:
- Subquery in test 3:
Now it becomes clear that the difference in performance is likely caused by the difference in the amount of data that flows through the plan.
In the case of a simple COUNT(*), there is no Output List (no column values are needed) and the data size is the smallest (43 MB).
In the case of conditional aggregation, this amount doesn't change between tests 2 and 3; it is always 72 MB. The Output List has one column, datesent.
In the case of subqueries, this amount does change depending on the data distribution.