Custom Aggregates in DAX / BISM Tabular (part 1)
SQL Server Denali | PowerPivot
I’m referring to the example from my previous post about semi-additive measures. Let’s say we’re monitoring the stock of two products we’re selling. For the totals we want to see the average stock over time. At least once a month we take a snapshot of the stock. If we have more than one snapshot per month, the monthly total is computed as the average of those snapshots. For aggregation above the month level we want to take the average of the monthly averages. At first glance it looks like we only have to use average as the aggregation function. But the average of averaged values is not identical to the average of all values. Let’s take a look at this source data table:
For product Notate we have a single measurement of 50 pieces in February. For product Quickerstill we have 8 distinct measurements with an average of 50 pieces in February. However, when we look at the total average for Quickerstill, the 8 distinct measurements in February give the February average a higher weight and therefore result in a higher total average of 44 instead of 30:
The Average Stock measure in this example is the same semi-additive measure as in my previous post and is computed like this:
- AVG Stock:=Sum([Stocklevel])/DISTINCTCOUNT([Date])
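To see why this measure yields 44 for Quickerstill, we can emulate its arithmetic outside of DAX. The following Python sketch mimics Sum([Stocklevel])/DISTINCTCOUNT([Date]) on a made-up snapshot table; the individual stock levels and dates are assumptions chosen to match the averages described above (the original table isn’t reproduced here):

```python
from datetime import date

# Hypothetical snapshots for product "Quickerstill"; the individual values
# are assumptions chosen to match the post: Jan avg 20 (1 snapshot),
# Feb avg 50 (8 snapshots), Mar avg 20 (1 snapshot).
stock = [
    (date(2011, 1, 15), 20),
    (date(2011, 2, 1), 40), (date(2011, 2, 5), 45),
    (date(2011, 2, 9), 50), (date(2011, 2, 12), 55),
    (date(2011, 2, 16), 60), (date(2011, 2, 20), 50),
    (date(2011, 2, 24), 50), (date(2011, 2, 28), 50),
    (date(2011, 3, 15), 20),
]

def avg_stock(rows):
    """Emulates Sum([Stocklevel]) / DISTINCTCOUNT([Date])."""
    total = sum(level for _, level in rows)
    distinct_dates = len({d for d, _ in rows})
    return total / distinct_dates

# Monthly value for February: 400 / 8 snapshots
feb = [r for r in stock if r[0].month == 2]
print(avg_stock(feb))    # 50.0

# Grand total: 440 / 10 snapshots -- February's 8 snapshots dominate
print(avg_stock(stock))  # 44.0
```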
The requirement for the custom aggregate means that we also want to see a total average of 30 for product Quickerstill (20+50+20=90, 90/3=30). This requirement is somewhat unusual, as the computation above gives the “correct” average of all values. One interpretation is that the weight of each month in the average of two or more months should not be influenced by the number of measurements within that month.
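The difference between the two totals is purely a question of weighting, which a few lines of arithmetic make explicit. The snapshot counts per month below are assumptions consistent with the description above (one snapshot each in January and March, eight in February):

```python
monthly_averages = [20, 50, 20]   # Jan, Feb, Mar averages for Quickerstill
snapshots_per_month = [1, 8, 1]   # assumed snapshot counts per month

# Plain average of all values: February's 8 snapshots weight it 8x
weighted = sum(a * n for a, n in zip(monthly_averages, snapshots_per_month)) \
           / sum(snapshots_per_month)
print(weighted)    # (20 + 400 + 20) / 10 = 44.0

# Desired custom aggregate: every month weighted equally
unweighted = sum(monthly_averages) / len(monthly_averages)
print(unweighted)  # (20 + 50 + 20) / 3 = 30.0
```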
We can achieve this in a way that is very similar to the semi additive calculations from my last post. Here is the resulting formula:
- Special AVG Stock:=AVERAGEX(SUMMARIZE('Stock',[Year],[Month],"AvgStock",AVERAGE([Stocklevel])),[AvgStock])
This formula first summarizes the average stock at a grouping level of year and month. Then, in a second step, it takes these monthly values and computes their average. By doing so, we have broken the aggregation into two layers: first we average by month, then we take the average of those monthly values.
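The two-layer aggregation can be sketched outside of DAX as well. The Python below mimics what SUMMARIZE and AVERAGEX do here (group by year and month, average within each group, then average the group results with equal weight); as before, the snapshot values are assumptions chosen to match the averages described in the post:

```python
from datetime import date

# Hypothetical snapshots for "Quickerstill"; values are assumptions chosen
# to match the post (Jan avg 20, Feb avg 50 over 8 snapshots, Mar avg 20).
stock = [
    (date(2011, 1, 15), 20),
    (date(2011, 2, 1), 40), (date(2011, 2, 5), 45),
    (date(2011, 2, 9), 50), (date(2011, 2, 12), 55),
    (date(2011, 2, 16), 60), (date(2011, 2, 20), 50),
    (date(2011, 2, 24), 50), (date(2011, 2, 28), 50),
    (date(2011, 3, 15), 20),
]

def special_avg_stock(rows):
    """Two-layer aggregation: average within each (year, month) group,
    then average those monthly averages with equal weight per month."""
    groups = {}
    for d, level in rows:
        groups.setdefault((d.year, d.month), []).append(level)
    monthly_avgs = [sum(v) / len(v) for v in groups.values()]
    return sum(monthly_avgs) / len(monthly_avgs)

print(special_avg_stock(stock))  # (20 + 50 + 20) / 3 = 30.0
```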
Here is the resulting table using the new aggregate:
And after expanding the February values (2nd month) we can clearly see how our custom aggregate works:
Of course, this is just a simple custom aggregate, but it is remarkable that we didn’t need any kind of cube script with scope statements to achieve this, only a very simple DAX expression.