I wrote my Star Schemas – to boldly go where no Excel spreadsheet has gone before post when I first came in contact with PowerPivot in Excel. I, and I think most others from a BI background, saw PowerPivot as yet another star-schema driven tool. But, as I began to fully appreciate the power of DAX, and as I saw more and more real-life models that only barely, if at all, resembled classic star-schemas, the reality dawned that the tabular model behind PowerPivot requires a relational modelling approach rather than a hyper-cube approach. (You might say the “tabular model” moniker was a clue to its relational underpinnings, but that name only appeared with V2.)
The tabular model doesn’t throw out the dimensional baby with the pure star-schema bath water. Many of the tricks and approaches of dimensional modelling still apply: conformed dimensions (i.e. key management) and date/time dimensions, for example. But, much as it pains me to say this, having been a Kimballist for most of my BI career, an organisation with an Inmon-inspired “logical data model” based data warehouse is probably better prepared for the self-service world that the tabular model envisages.
As I argued in Death of The Star Schema and in Slowly Changing Dimensions – time to stop worrying, data warehouse IT teams need to be the providers of “safe” data rather than the builders of analytical models. Providing cleansed, key-managed, logical-model-inspired data feeds is a better use of expensive IT resources than trying to second-guess the analytical requirements of those at the ever-changing business front-line. After all, an organisation’s “data potential” is a constant (OK, it too changes, but slowly, and usually under the control of IT); it’s the area that can and should be “systemised”, something which IT excels at.
Another hint, besides the name, to the tabular model’s relational tendency is the widespread use within models of what I call cling-on tables. These tables are not necessarily part of the “logical model” but provide either transformation lists or visual selectors to the model. They’re rarely linked to the core model via the in-built one-to-many relationship facility; more often they depend on loosely coupled “calculated relationships” instead.
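As a minimal sketch of such a “calculated relationship” (the table and column names here are hypothetical, purely for illustration): a calculated column can pull a value from a stand-alone cling-on table using LOOKUPVALUE, with no relationship defined in the model at all.

```dax
-- Hypothetical example: 'ProductNames' is a stand-alone cling-on table,
-- not related to 'Sales' in the model. The lookup is done in the formula,
-- not via the one-to-many relationship facility.
ProductName =
LOOKUPVALUE (
    ProductNames[Name],
    ProductNames[ProductKey], Sales[ProductKey]
)
```

The coupling lives entirely in the DAX expression, so the cling-on table can be replaced or re-keyed without touching the model’s relationship diagram.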
Currency handling is a classic case where both transformational and visual selector “cling-ons” might be used.
On the transformational side, one or more currency conversion lists would be provided, which could then be used in an “ETL” fashion via calculated columns, or via measures for just-in-time conversions. Both cling-on-based measures and calculated columns often require range-queries (e.g. the average rate within the last 90 days), thus precluding the use of fixed relationships.
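A range-query of that sort might look like the following calculated column, a sketch only, with hypothetical 'Sales' and 'Rates' tables; note that no relationship between the two is (or could usefully be) defined, since the match is on a date range, not a key.

```dax
-- Hypothetical tables: 'Sales' with [Date], [CurrencyKey], [Amount];
-- 'Rates' cling-on table with [CurrencyKey], [RateDate], [Rate].
-- Calculated column on Sales: average rate over the 90 days up to the sale date.
AvgRate90 =
AVERAGEX (
    FILTER (
        Rates,
        Rates[CurrencyKey] = Sales[CurrencyKey]
            && Rates[RateDate] > Sales[Date] - 90
            && Rates[RateDate] <= Sales[Date]
    ),
    Rates[Rate]
)
```

Because the filter condition is a range rather than an equality on a single key, a fixed one-to-many relationship simply cannot express it; the loose coupling is a necessity, not a style choice.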
On the visual side, a slicer, for example, might be linked to a two-row cling-on table, populated with the text values “Current Rate” and “Budget Rate”. Measures could then be built that respond to the value selected in this table, showing converted values using either current rates or those used during budget preparation. Again, the table is stand-alone; no fixed relationship is defined.
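A measure responding to that disconnected slicer might be sketched as below (again, the 'RateType' and 'Sales' table and column names are hypothetical, and I’ve stuck to functions available in the Excel 2013 era):

```dax
-- Hypothetical two-row cling-on table 'RateType' with column [Rate Type],
-- holding "Current Rate" and "Budget Rate". No relationship to 'Sales'.
Converted Sales :=
IF (
    HASONEVALUE ( RateType[Rate Type] ),
    IF (
        VALUES ( RateType[Rate Type] ) = "Budget Rate",
        SUMX ( Sales, Sales[Amount] * Sales[BudgetRate] ),
        SUMX ( Sales, Sales[Amount] * Sales[CurrentRate] )
    ),
    SUM ( Sales[Amount] )  -- fallback when no single rate type is selected
)
```

The slicer filters the cling-on table, the measure reads the surviving value via VALUES, and the “relationship” between selector and fact table exists only inside the measure.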
Having said all that, there’s still a place for a model being delivered as a pure star-schema. (Hopefully modelled by datasmiths, though IT will likely be required to provide the necessary ETL support.) The original reason for the star-schema approach was to simplify the SQL required to use the model (both by humans and by SQL-producing query software). The drawback was the reduction in “data potential”, as such models tend to answer very specific problem-domain questions; but simplicity is often more important than the ability to stray from the original analysis requirement.
Star-schemas still offer that simplicity in a DAX query world. A skilfully constructed star-schema will require of end-user datasmiths a much reduced knowledge of the complexities of DAX. In fact, no DAX at all is required if the model is manipulated (as would be the case in standard “data model” powered Excel 2013) in the same way as a standard pivot table. I use the term datasmith deliberately, assuming that the end-user is actively engaged in interrogating the model. If the end-user is a report-consumer, using, say, a pre-defined dashboard, the nature of the model behind such a dashboard matters little; logical models, pure dimensional models, or even raw OLTP extracts will work, as long as the designer of the dashboard has the necessary DAX skills to match.