Testing a DAX measure using Excel functionality – XIRR example

Power Pivot’s DAX lacks many of the Excel functions we’ve come to take for granted, but it’s often possible to simulate them in pure DAX; see here for my attempt at a DAX Percentile function.

Whether or not the logic behind such attempts is sound, the result is likely to be judged against Excel’s version. And in many cases, due to the limitations of DAX, crafting an acceptable solution may not be possible at all. The attached workbook shows a method to either test, or replace, such a DAX measure using existing Excel functionality.

The example attempts to emulate the XIRR function in DAX and test the result against Excel’s version.

It uses a VBA User Defined Function which can “hook into” a pivot table that has been converted to static CUBE functions. The UDF extracts the “cross-tab” filters for each cell and generates a DAX Query call via ADO to fetch the filtered sub-set from the Power Pivot model; it then applies the Excel XIRR function to that data.

The DAX version of XIRR uses a brute-force method: it passes a range of rates to an NPV calculation and picks the average of the rate(s) yielding the lowest absolute NPV. In the example there are 3,000-odd rates (-100% to 200% in steps of one decimal place, so … 10.1%, 10.2% …).

A smaller range would result in a faster measure, but would be more likely to fail when the yield falls outside that range. Likewise, widening the range of rates, or increasing the precision by adding decimal places, would add to the calculation time.

So, be warned, this is not a generic solution; it would need to be adjusted for the real-world yields likely to be encountered, and perhaps modified to allow pre-calculation at lower levels and/or to use restricted “likely-rate” ranges as a first pass, only going to a wider, slower range for outliers.
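The brute-force search is easy to sketch outside DAX too. Here’s a minimal Python illustration of the same idea (the function name, sample cashflow and grid bounds are mine, not from the workbook; note the grid starts at -99.9% to avoid a divide-by-zero at exactly -100%):

```python
def brute_force_xirr(cashflows, step=0.001, lo=-0.999, hi=2.0):
    """Approximate XIRR by scanning a grid of rates, computing the NPV
    at each, and averaging the rate(s) with the lowest absolute NPV.
    cashflows: list of (days_since_first_outflow, amount) pairs."""
    steps = int(round((hi - lo) / step))
    rates = [lo + i * step for i in range(steps + 1)]

    def npv(rate):
        # Actual/365 discounting, mirroring POWER(1 + rate, days / 365)
        return sum(cf / (1 + rate) ** (days / 365.0)
                   for days, cf in cashflows)

    best = min(abs(npv(r)) for r in rates)
    winners = [r for r in rates if abs(npv(r)) == best]
    return sum(winners) / len(winners)

# -1000 out today, 1100 back in a year: yield should land close to 10%
rate = brute_force_xirr([(0, -1000.0), (365, 1100.0)])
```

Note the cost: every candidate rate triggers a full NPV pass over the cashflows, which is exactly why the DAX version slows down as the rate grid widens.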

You’ll notice a definite lag when filtering the example, made worse by the use of CUBE functions, which are not as efficient as a single pivot table callback. So, regardless of how correct the measure is, it’ll be a performance non-runner unless properly tuned. In fact, using the callback-to-Excel method as the solution, rather than as the test, may end up as the only practical way to do this.

Here’s the Excel 2013 version, and here’s an Excel 2010 one.

NOTE: CUBE formulas used with pre-2013 Power Pivot models reference “PowerPivot Data”; this will not be updated to “ThisWorkbookDataModel” when the workbook is subsequently auto-upgraded to the Excel 2013 Data Model.

Below is the [xirr] measure, with a supporting [npv] measure and a supporting calculated column. There’s also a calculated column, [StandardisedDate], for use by the VBA UDF.

xirr:=IF(
    ISBLANK( SUM( Cashflow[Cashflow] ) ),
    BLANK(),
    CALCULATE(
        AVERAGE( RATES[Rate] ),
        FILTER(
            RATES,
            ABS( Cashflow[npv] )
                = MINX(
                    RATES,
                    ABS( Cashflow[npv] )
                )
        )
    )
)

npv:=SUMX(
    SUMMARIZE(
        Cashflow,
        Cashflow[DaysSince],
        "grouped CashFlow",
        SUM( Cashflow[Cashflow] )
    ),
    [grouped CashFlow] /
        IF(
            Cashflow[DaysSince] = 0,
            1,
            POWER(
                1 + AVERAGE( RATES[Rate] ),
                ( Cashflow[DaysSince] / 365 )
            )
        )
)

Cashflow[DaysSince] =
    TRUNC( [Event Date] )
    - TRUNC(
        MINX(
            FILTER(
                Cashflow,
                [Asset] = EARLIER( [Asset] ) && [Cashflow] < 0
            ),
            [Event Date]
        )
    )

Cashflow[StandardisedDate] = DATE( 1901, 1, 1 ) + [DaysSince]
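The [DaysSince] logic above (days elapsed since the asset’s first outflow, i.e. its earliest negative cashflow) can be mirrored in a few lines of Python. A sketch, with a made-up row layout of (asset, event_date, cashflow) tuples:

```python
from datetime import date

def days_since_first_outflow(rows):
    """For each (asset, event_date, cashflow) row, return the days elapsed
    since that asset's earliest negative cashflow, mirroring the
    Cashflow[DaysSince] calculated column."""
    # First pass: find each asset's earliest outflow date
    first_outflow = {}
    for asset, event_date, cashflow in rows:
        if cashflow < 0:
            prev = first_outflow.get(asset)
            if prev is None or event_date < prev:
                first_outflow[asset] = event_date
    # Second pass: subtract it from each row's event date
    return [(event_date - first_outflow[asset]).days
            for asset, event_date, cashflow in rows]

flows = [("A", date(2013, 1, 1), -1000.0),
         ("A", date(2013, 7, 1), 500.0),
         ("A", date(2014, 1, 1), 600.0)]
# yields [0, 181, 365]
```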

Have fun.

DISCOVER_COMMANDS DMV – Improve Drillthrough & Learn DAX Query

In the past I’ve demonstrated a few examples of using DAX Query in PowerPivot workbooks; this “create a Date Dim”, for example.

The ability to produce tables rather than just pivots can be very useful: not just for ETL-like activities such as the above-mentioned Date Dim, but also for testing and development, and for producing pivot-friendly flattened tables for consumption by other data visualisation toolsets.

I’ve been asked a few times: what’s the best way to learn DAX Query? Well, in most respects it’s the same as the standard DAX used to build measures and calculated columns, so just keep practising that measure building!

But another interesting way is to see the DAX auto-created by a tool that, I suspect, was the original use-case for tabular DAX, i.e. PowerView.

PowerView comes with Excel 2013 (at least those versions that also pack the PowerPivot add-in) so you no longer require a SharePoint farm to try it out.

(One thing to note: if you’re running a 64-bit version of Excel 2013, you’ll need to source a 64-bit version of Silverlight; the 32-bit download that you’ll be prompted to install via your (32-bit) browser will not work.)

Here’s a good introduction to report building in PowerView by MSBI Academy.

To see the DAX generated by PowerView (and indeed the MDX generated by the workbook’s pivot tables) use the DMV view DISCOVER_COMMANDS.

For Excel 2010 you would have used my trick to fire DMV views, but this no longer works in 2013. So, instead, you can use this trick to execute any DAX, MDX or DMV.

Note: you’ll need a true external connection (a workbook table will not enable the Edit DAX option). Also, the table’s connection points at the model, so be careful with the Refresh command (i.e. it’ll cause the table to refresh). It’s actually a little more complex than that, as it depends on the nature of the DAX; the engine appears to use a dependency graph (which is good; it shows this feature is fully integrated). If your DAX mentions a column on a table, then that table will be refreshed.

The Edit DAX action uses a…

Connections(“ModelConnection_<<whatever>>”).Refresh

…which is obviously only between workbook and model, so doesn’t cause this external refresh.

You may also notice that sometimes the sequence of columns returned differs from that expected; I usually find re-issuing the DAX command fixes this.

Because of the awkwardness of the above hack(s), when I don’t need this dependency behaviour I usually revert to my HAMMER micro-ETL utility to run DAX, MDX or DMV queries via the HAMMER(“PPCONNECTION”, <<my command>>, “ADO”) command set.

You could also write your own ADO code, latching on to the ActiveWorkbook.Model.DataModelConnection.ModelConnection.ADOConnection connection object.

Not only can you see examples of DAX Query construction using the above methods, but you can also run and modify the extracted commands using those same methods.

Another very useful application of DISCOVER_COMMANDS is to pick up the MDX DRILLTHROUGH command issued when Show Details is applied to a pivot table cell.

You can then, for example, change the MAXROWS returned (the UI option for this disappeared in 2013; the only other method is to open the workbook’s ZIP source and modify the connection’s XML).

But even better, you can pick a different set of columns to return. This is so useful, as the default DRILLTHROUGH usually doesn’t bring back useful columns such as the row’s unique ID!

See here for a workbook containing examples of the above.

To use the HAMMER commands you’ll need to manually open the xll (make sure to pick the correct bitness).

To enable HAMMER’s full functionality (namely any command, such as HAMMERtoFit, that uses a background thread) you’ll need to close and reopen Excel. This is something new in Excel 2013, probably related to ensuring that an internet-delivered workbook doesn’t use something like a background thread to do something nasty before it has been “trusted”. Well, that’s my theory anyway, and I’m sticking to it. Anybody else seen similar?

UPDATE: Another way to have fun with DAX Query expressions is DAX Studio, which now supports Excel 2013! http://geekswithblogs.net/darrengosbell/archive/2013/05/04/new-release—dax-studio-1.2-with-excel-2013-support.aspx

A frown, or two – Data Explorer to PowerPivot gotchas.

As I continue my discovery of all things Data Explorer and M, I’m not only becoming more impressed with the product but also, as I try more ambitious tasks, hitting some, hopefully, beta-edition teething problems. I would use the “frown” button, but unfortunately it’s determined to force me to use and/or set up an Outlook profile; too many steps, easier to blog about it 🙂

The example I picked this time is a spreadsheet response to an Irish Government FOI (Freedom of Information) request on expenses paid within a department. The result is typical of many reporting tools: repeating column values are blanked out to make the report “easier to read”, but that also makes it difficult to re-analyse. In essence, the report contains a flattened view of four tables: Payee, Claim, Claim Items & Claim Item Details. The task I set myself was to separate out the four tables and link them back together using a PowerPivot Data Model.

And here’s the M code (and here’s an Excel 2013 workbook version of the example):

[
 Source = Excel.CurrentWorkbook(){[Name="foitable"]}[Content],
 SourceTable = Table.AddIndexColumn(Source,"RowNo",1),
 AddPayeeID = Table.FillDown(Table.AddColumn(SourceTable,"PayeeID", each if [Full Name] = null then null else [RowNo]),"PayeeID"),
 AddClaimID = Table.FillDown(Table.AddColumn(AddPayeeID,"ClaimID", each if [Claim Date] = null then null else [RowNo]),"ClaimID"),
 modSource = Table.FillDown(Table.AddColumn(AddClaimID,"ClaimItemID", each if [Start Date] = null then null else [RowNo]),"ClaimItemID"),
 PayeeTable = Table.SelectRows(Table.SelectColumns(modSource,{"Full Name","Grade","PayeeID"}), each [Full Name] <> null),
 ClaimsTable1 = Table.SelectRows(Table.SelectColumns(modSource,{"Claim Date","Payment Date","ClaimID","PayeeID","Status","Total Claimed","Amount Paid"}), each [Claim Date] <> null),
 ClaimsTable = Table.TransformColumns(ClaimsTable1,{{"Total Claimed",each if _ = "NULL" then 0 else _},{"Amount Paid",each if _ = "NULL" then 0 else _}}),
 ClaimsItemsTable = Table.SelectRows(Table.SelectColumns(modSource,{"Start Date","ClaimID","PayeeID","ClaimItemID","Expense Type","Euro Line Amount"}), each [Start Date] <> null),
 ClaimsItemsDetailsTable = Table.SelectRows(Table.SelectColumns(modSource,{"ClaimID","PayeeID","ClaimItemID","Quantity","Unit","Net Total (EUR)"}), each [#"Net Total (EUR)"] <> "NULL")
]

…again, a fraction of the time of doing this using, say, Python, and much more approachable for those with limited programming skills. To create a single-table query from the above “foitables” query, simply create a new DE query with “source = foitables[tableNameRequired]”.
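The fill-down ID trick above (Table.AddColumn plus Table.FillDown keyed on a marker column) translates readily to other tools. A rough Python equivalent, with illustrative row dictionaries and column names:

```python
def add_filled_id(rows, marker_col, id_col):
    """Assign id_col on each row: a fresh ID (the 1-based row number)
    wherever marker_col is populated, otherwise filled down from the
    row above - the Table.AddColumn + Table.FillDown pattern."""
    current = None
    for row_no, row in enumerate(rows, start=1):
        if row.get(marker_col) is not None:
            current = row_no   # new entity starts on this row
        row[id_col] = current  # blank rows inherit the last seen ID
    return rows

report = [
    {"Full Name": "Smith", "Claim Date": "02/01/2013"},
    {"Full Name": None,    "Claim Date": None},
    {"Full Name": None,    "Claim Date": "05/02/2013"},
]
add_filled_id(report, "Full Name", "PayeeID")   # PayeeID: 1, 1, 1
add_filled_id(report, "Claim Date", "ClaimID")  # ClaimID: 1, 1, 3
```

Each entity table can then be carved out by keeping only the rows where its marker column is populated, exactly as the Table.SelectRows steps do.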

But you need to be careful how you do this; otherwise you can easily lose work and have to repeat import tasks again and again. So here’s a list of gotchas I encountered:

Gotcha No. 1

Don’t connect your PowerPivot internal table to a DE Workbook connection directly. At least, not until you’re sure you’ve no further changes to make to the DE “code”.

If you modify the code in any way, the connection is no longer valid (it looks like the code is attached as a Base64-encoded string to the connection) and all your hard(ish) work within PowerPivot will be lost. Even if you’ve added no DAX formulas, you will have to redo work on all imported DE tables: not just recreating relationships but, as all fields are imported as text, re-applying types to dates, currencies etc.

Only code changes, not normal refreshes, cause this destroy-the-connection action. Although I’d isolated the four for-import tables from the likely-to-change multi-table source “record”, it made no difference: change either the one-step-removed query or a for-import query, and the connection is toast.

How to get around this? Well, as stated above, you could just wait until the M “model” is finalised/tested before doing anything outside DE. If you’re using Excel 2010, creating a “linked table” rather than using the connection directly will get around this, but you’ll end up with the usual “linked table” refresh awkwardness.

If you have 2013, the solution is much better: again, don’t use the connection directly, but instead use the “load directly to Data Model” option at DE design stage. This seems to create a different type of internal PowerPivot connection, one tied to the DE output table, which continues to be valid even after code changes are applied.

Gotcha No. 2

The other no-no is messing with the DE connection’s “SQL”; in fact, even looking at it can cause problems (that might be an exaggeration, it was late). The default “Select * from [Query1]” is about the limit of it, and even that, if “touched” but left as the same code, causes the query to return no columns, or in some cases only one column; go figure. This strange behaviour can be seen in both the 2010 and 2013 PowerPivot. In Excel 2010, direct connections using a workbook table appear not to work at all. Again, Excel 2013 seems more stable: picking the “All DE Queries” connection as a table source will crash Excel 2010, while 2013 gives an error message.

Gotcha No. 3

The Refresh button on the Table Query ribbon (or an implicit refresh when leaving M design mode) breaks the Excel calculation graph. If you’ve referenced the DE table in a formula (such as using Table1[#All]), the formula will be “nuked” with the dreaded #REF!, meaning you’ll have to go back and amend each one. This doesn’t happen if you use the Data ribbon’s Refresh or Refresh All buttons. So again, make sure the M code is “working” before linking to it in a formula, and even then, do not use the now-working table’s Table Query Refresh button, ever, ever again!

M’s back – Create a PowerPivot Time Dimension using Excel’s PowerQuery

When I first heard of PowerQuery (aka Data Explorer in the past), the new ETL-like add-in for Excel 2010/2013, I was only so-so impressed; likely to be yet another UI-based ETL tool that does the easy bits, and one or two “packaged” hard bits, but chokes when a real hard case needs to be tackled!

That was before it became clear that behind the UI facade lay a powerful and elegant functional language called M (yep, the same M that first appeared in Project Oslo). If data shaping & cleansing is what you do, this you’ve got to see.

To try it out, I picked on that perennial of BI data models, the Date (aka Time) Dimension. If you do any serious work with PowerPivot you’ll at some stage need such a table, for why, see here. I used the same variation on an old SQL trick as in creating a Date Dimension using DAX, but this time in M.


let
    days={1..31},
    months={1..12},
    years={2009..2015},
    monthLU={"Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"},
    dayT=Table.FromList(days,each {_},{"Day"}),
    monthT=Table.FromList(months,each {_},{"Month"}),
    yearT=Table.FromList(years,each {_},{"Year"}),
    dayJ = Table.AddColumn(dayT, "joiner", each 1),
    monthJ = Table.AddColumn(monthT, "joiner", each 1),
    yearJ = Table.AddColumn(yearT, "joiner", each 1),
    yearmonth=Table.Join(yearJ,"joiner",monthJ,"joiner"),
    yearmonthday=Table.Join(yearmonth,"joiner",dayJ,"joiner"),
    addDate=Table.AddColumn(yearmonthday,"Date",each let x = try #date([Year],[Month],[Day])
                                                     in if x[HasError]
                                                     then #date(1,1,1)
                                                     else x[Value]),
    removeErrors=Table.SelectRows(addDate,each not([Date]=#date(1,1,1))),
    cleanCols=Table.SelectColumns(removeErrors,{"Date","Year","Month"}),
    renameCols = Table.RenameColumns(cleanCols,{"Month","MonthNo"}),
    addtextDate = Table.AddColumn(renameCols,"TextDate",each Date.ToText([Date],"YYYYMMDD")),
    addQtr = Table.AddColumn(addtextDate,"Qtr",each "Qtr" & Text.From(Date.QuarterOfYear([Date]))),
    DateDim = Table.AddColumn(addQtr,"Month",each monthLU{[MonthNo]-1})
in
    DateDim

PowerQuery’s Table function library appears not to have a cross join, which is why I needed to add a “joiner” column to the three tables, and it took me a little while to get my head around M’s error handling; but other than that, even with my current elementary knowledge of M and DE’s function library, it was both a quick and friction-free exercise (helped by the step-by-step try/feedback build loop, very similar to working a problem with Excel).
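For comparison, here’s the same cross-join-then-discard-invalid-dates idea sketched in Python (itertools.product stands in for the “joiner” column trick, and a ValueError plays the role of M’s try; the function name is mine):

```python
from datetime import date
from itertools import product

MONTH_LU = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def build_date_dim(years=range(2009, 2016)):
    """Cross-join years x months x days, dropping impossible
    combinations (e.g. 30 Feb) just as the M query filters out
    its #date(1,1,1) error placeholder."""
    rows = []
    for y, m, d in product(years, range(1, 13), range(1, 32)):
        try:
            dt = date(y, m, d)
        except ValueError:
            continue  # invalid day-of-month: skip, as removeErrors does
        rows.append({"Date": dt,
                     "Year": y,
                     "MonthNo": m,
                     "TextDate": dt.strftime("%Y%m%d"),
                     "Qtr": "Qtr" + str((m - 1) // 3 + 1),
                     "Month": MONTH_LU[m - 1]})
    return rows

dim = build_date_dim()  # 2,556 rows: every date from 2009 to 2015
```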

If Data Explorer (aka PowerQuery) is new to you have a look at Chris Webb’s excellent series of articles on the subject.

The Model of a very modern PowerPivot – without the add-in

My last post demonstrated a method by which explicit PowerPivot DAX measures could be created in any version of Excel 2013 without the PowerPivot add-in. Here’s another method, this time using Excel 2010’s PowerPivot add-in (available for all versions of 2010) to design the model, and the OpenXML SDK to copy the resulting model to an Excel 2013 workbook.

I had thought originally that this wasn’t possible; when I first tried it, the model copied OK but the workbook connection couldn’t access it. It turned out this was due to using a V1 model which, although upgraded to V2, still used the “Sandbox” cube name rather than the now-expected “Model” name. So, all models must be created in PP V2 for this to work.

Also, the “receiving” Excel 2013 workbook must be primed with a data model (the easiest way without the add-in is to create two tables and link them via the relationships option, or just use the Model_2013_template workbook provided).

You can download a POC workbook Example_CopyModels_no_Addin demonstrating this method from here. 

Note:

You’ll need to modify the paths of the example workbooks to suit your environment.

Also, if you copy the workbook out of the downloaded folder, make sure to copy the datasmith xll of the right “bitness”, and to copy the DocumentFormat.OpenXml.dll or ensure one is available on the GAC or PATH.

Before using the datasmith xll for the first time, right-click and unblock it. This can also be a good idea for workbooks with external connections, as otherwise Excel can sometimes keep certain functionality “locked” the first time you access such files.

Having copied the model into the Excel 2013 workbook, save & re-open before making any further changes.

If you look at the code you’ll see it’s also possible to copy models from 2010 to 2010 and from 2013 to 2013, or indeed from 2013 back to 2010. This general copying of models from workbook to workbook was the use-case I had in mind when I first discovered this (not as a way of getting around the lack of the add-in). It’s very useful during model development, allowing different views to be tried against a first-developed (or commonly used) data model. It could also be used in production to allow a single model refresh to serve many “view” workbooks.

Have fun.