
Testing a DAX measure using Excel functionality – XIRR example

Power Pivot’s DAX is missing many of the Excel functions we’ve come to take for granted, but it’s often possible to simulate them using pure DAX; see here for my attempt at a DAX Percentile function.

Whether or not the logic behind such attempts is sound, the result is likely to be judged against Excel’s version. In many cases, the limitations of DAX mean that crafting an acceptable solution may not be possible at all. The attached workbook shows a method to either test, or replace, such a DAX measure using existing Excel functionality.

The example attempts to emulate the XIRR function in DAX and test the result against Excel’s version.

It uses a VBA User Defined Function which can “hook into” a pivot table that has been converted to static CUBE functions. The UDF extracts the “cross-tab” filters for each cell, generates a DAX query call via ADO to get the filtered sub-set from the Power Pivot model, and then applies the Excel XIRR function to that data.

The DAX version of XIRR uses a brute-force method: it passes a range of rates to an NPV calculation and picks the average of the rate(s) whose NPV is closest to zero. In the example there are 3,000-odd rates (-100% to 200%, to one decimal place, so … 10.1%, 10.2% …).

A smaller range would result in a faster measure, but would be more likely to fail when the yield is outside that range. Likewise, increasing the number of candidate rates, for example by adding another decimal place, would add to the time taken to calculate.
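The brute-force idea is easier to see outside DAX. Here’s a minimal Python sketch of the same logic, with cashflows expressed as (days since first outflow, amount) pairs as in the model’s [DaysSince] column; the function names and the choice of -99.9% as the lower bound (to avoid dividing by zero at exactly -100%) are my own, not from the workbook:

```python
def npv(rate, flows):
    """Discount each (days, amount) pair at the given annual rate."""
    return sum(
        amount if days == 0 else amount / (1 + rate) ** (days / 365)
        for days, amount in flows
    )

def brute_force_xirr(flows, lo=-0.999, hi=2.0, step=0.001):
    """Scan candidate rates and average those whose |NPV| is smallest,
    mirroring the AVERAGE-over-FILTER-MINX pattern in the DAX measure."""
    n = int(round((hi - lo) / step))
    rates = [lo + i * step for i in range(n + 1)]
    scored = [(abs(npv(r, flows)), r) for r in rates]
    best = min(a for a, _ in scored)
    winners = [r for a, r in scored if a == best]
    return sum(winners) / len(winners)

# invest 1000 today, receive 1100 one year later: yield is 10%
flows = [(0, -1000.0), (365, 1100.0)]
print(round(brute_force_xirr(flows), 3))  # → 0.1
```

The averaging step matters because, with a coarse rate grid, two adjacent candidates can tie on |NPV|; averaging them is the same tie-break the DAX measure uses.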

So, be warned: this is not a generic solution. It would need to be adjusted for the real-world yields likely to be encountered, and perhaps modified to allow pre-calculation at lower levels and/or to try a restricted “likely-rate” range as a first pass, only going to a wider, slower range for outliers.

You’ll notice a definite lag when filtering the example, made worse by the use of CUBE functions, which are not as efficient as a single pivot table’s callbacks. So, regardless of how correct the measure is, it’ll be a performance non-runner unless properly tuned. In fact, using the callback-to-Excel method as the solution, rather than as the test, may end up being the only practical way to do this.

Here’s the Excel 2013 version, and here’s an Excel 2010 one.

NOTE: CUBE formulas used with pre-2013 Power Pivot models reference “PowerPivot Data”; this will not be updated to “ThisWorkbookDataModel” when the workbook is subsequently auto-upgraded to the Excel 2013 Data Model.

Below is the [xirr] measure, with a supporting [npv] measure and a supporting calculated column, [DaysSince]. There’s also a calculated column, [StandardisedDate], for use by the VBA UDF.

xirr:=IF(
    ISBLANK(SUM(Cashflow[Cashflow])),
    BLANK(),
    CALCULATE(
        AVERAGE(RATES[Rate]),
        FILTER(
            RATES,
            ABS(Cashflow[npv]) =
                MINX(
                    RATES,
                    ABS(Cashflow[npv])
                )
        )
    )
)

npv:=SUMX(
    SUMMARIZE(
        Cashflow,
        Cashflow[DaysSince],
        "grouped CashFlow",
        SUM(Cashflow[Cashflow])
    ),
    [grouped CashFlow] /
        IF(
            Cashflow[DaysSince] = 0,
            1,
            POWER(
                1 + AVERAGE(RATES[Rate]),
                Cashflow[DaysSince] / 365
            )
        )
)

Cashflow[DaysSince] =
TRUNC([Event Date]) -
TRUNC(
    MINX(
        FILTER(Cashflow, [Asset] = EARLIER([Asset]) && [Cashflow] < 0),
        [Event Date]
    )
)

Cashflow[StandardisedDate] = DATE(1901, 1, 1) + [DaysSince]
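For clarity, here’s what those two calculated columns compute, sketched in Python: [DaysSince] is the number of days since the first negative cashflow (the initial investment) for the same asset, and [StandardisedDate] re-bases each flow onto a common 1901-01-01 start so the VBA UDF can feed comparable dates to Excel’s XIRR. The sample rows are illustrative data only, not from the workbook:

```python
from datetime import date, timedelta

# sample rows: (asset, event date, cashflow)
rows = [
    ("A", date(2014, 1, 1), -1000.0),
    ("A", date(2014, 7, 1),    50.0),
    ("A", date(2015, 1, 1),  1100.0),
]

def days_since(rows):
    """Mirror of Cashflow[DaysSince]: days elapsed since the earliest
    negative cashflow for the same asset (the EARLIER([Asset]) filter)."""
    first_outflow = {}
    for asset, d, cf in rows:
        if cf < 0 and (asset not in first_outflow or d < first_outflow[asset]):
            first_outflow[asset] = d
    return [(d - first_outflow[asset]).days for asset, d, cf in rows]

def standardised_date(days):
    """Mirror of Cashflow[StandardisedDate]: re-base onto 1901-01-01."""
    return date(1901, 1, 1) + timedelta(days=days)

ds = days_since(rows)
print(ds)                          # → [0, 181, 365]
print(standardised_date(ds[-1]))   # → 1902-01-01
```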

Have fun.
