Category Archives: microETL

When HAMMER met SWF

I use the term “micro ETL” a lot when writing about tools such as HAMMER, but what do I mean by the term?

The ETL bit is easy to explain:

ETL, as all you data-warehousing and business intelligence folks will know, is the Extraction, Transformation and Loading of data from source systems into a reporting/data-warehousing system. The techniques of ETL are not unique to the DW/BI worlds but are used anywhere data needs to be transferred from one computer system to another, for example, master data take-on for new systems or transactional interfaces between front and back-office systems – this is often referred to as DI (data integration) but is essentially the same problem domain.

So what’s the “micro” bit about?

You might assume that the micro adjective implies small or indeed tiny datasets, and in many cases you would be correct. Most final-mile data analysis, like politics, is local. Most business decisions along with their implementation and monitoring require ‘localised’ data. That data will be pre-filtered and summarised to some degree, but a fair degree of data shaping will still happen close to the decision makers. Excel is often the tool of choice when data gets to this stage.

HAMMER is optimised for this world: it sees the world as Excel sees it, but also adds the power of SQL and scripting languages to pick up where Excel stops. But enabling better Excel-based data shaping is not HAMMER’s only function. It can operate outside of Excel (HAMMER.exe) and it can be used to craft task-specific ETL tools (HAMMER Inside). In both cases I continue to use Excel as my IDE, teasing out a problem before fixing it in code or in an external HAMMER call; and I can also use Excel as the UI for the end products.

In such scenarios, micro applies not so much to the datasets (which can be anything from tiny to very large) but to the concept of deploying simple micro “fractional horsepower” data engines to solve complex ETL, DI or RSS (Really Simple Systems) requirements.

HAMMER is built to take advantage of the distributed grid of powerful data crunchers (be they PCs, laptops, in-house cheap servers or just-in-time pay-as-you-go cloud-based CPUs) that every business, big or small, can now call on.

This revolution in distributed power is similar to what happened with the deployment of fractional horsepower AC-powered electrical motors in the last century. No longer was manufacturing restricted to “dark satanic mills” which had to be built close to natural power sources (water and later coal seams); and had to conform to the multi-story classic mill design to harness that captured power through belts, pulleys and shafts. With the expansion of the AC power grids (and the parallel expansion of internal combustion engine carrying roadways) the factory began to take on its modern single-story (or single-story with mezzanine) distributed profile that can be seen everywhere from China to Cork. A similar landscape change is happening in IT.

HAMMER can take advantage of these “distributed engines” easily enough, but the workflow, the actual control and distribution of tasks, data and decisions, requires the ad hoc implementation of either steam-powered or classic centralised server processes. I badly needed a more pre-built modular approach: micro Workflow to complement micro ETL (and micro BI via PowerPivot?), if you like. Last week I had started to think seriously about how/what to do about this (JSDB-powered grids were featuring high on the list) when this appeared.

Perfect timing: Amazon’s SWF (Simple Workflow service) is exactly what I need!

SWF allows for the control and distributed deployment of stateless data processors. HAMMER was designed primarily as a stateless data processor (with state being persisted either in Excel or on disk as simple CSV/JSON flat files). Its default use of in-memory, rather than disk-based, SQLite assumes both abundant CPU and RAM (as is the case with your average 64bit laptop) and the existence of an external state-machine (which Excel, and now SWF, provide).

I’ve spent any spare time I had this week doing a deep dive into SWF and figuring out how HAMMER can take full advantage of this technology, not just for classic ETL, but for distributed decision control processes and RSS solutions. The result, in Dublin slang, is that I’m both “delira and excira” (delighted and excited). This is, to use that term again, yet another AWS game-changer.

Those with a datasmith’s hammer see every problem as a table.

I picked the name HAMMER for my new micro ETL tool to capture the spirit of how it’s intended to be used. It’s an everyday tool, sufficient for many of the tasks a datasmith faces and, when not, one that will help with (or at least not get in the way of) the marshalling of data that needs to be processed by more powerful tools.

The saying “If you only have a hammer, you tend to see every problem as a nail” could be rephrased as “If you only have a datasmith’s HAMMER, you tend to see every problem as a table”! But that’s OK, as nearly every commercial IT problem can be expressed as a set of tables, and in any case, HAMMER is not intended to be a datasmith’s only tool. Its close integration with Excel recognises the prime importance of spreadsheets to most data wranglers, and its use of CSV & the SQLite database format as its persistence & transport mechanisms (à la “SQLite as the MP3 of data”) recognises that datasets will often need to be shared with other tools.

Finding the perfect tool for the job is an IT obsession that most of our customers care little for; it’s only when our choice of tool affects their bottom line (i.e. excessive cost or wasted time) that end users take much notice of the backroom technology. The most important skill for a datasmith is the ability to understand the structures and forms that data can take and to be able to derive and build business value from that data. The technology is an aid, nothing more; technology (and indeed applications) will come and go, data lives on.

HAMMER encapsulates the minimum set of technical skills an aspiring datasmith should learn:

  • Spreadsheets, in particular Excel. No need to know every nook and cranny, nor expect it to handle every task. But do learn about pivot tables and array formulas; if you have Excel >= 2007, learn about Excel tables (aka Lists). If you have Excel 2010, make sure to download PowerPivot. Become comfortable with “formula programming”; don’t expect a ribbon command to solve every problem.
  • SQL – learn the basics of selecting, joining and group-by; again no need to become a master. SQLite is not only an excellent way to learn SQL, it’s also a powerful tool to have once you’ve picked up the basics (see the short sketch after this list).
  • Learn a scripting language – Python is one of the best and is relatively easy to learn. Again, mastery is nice but not essential: learn the basics of IF-THEN-ELSE logic, loops and iterations, array and list handling and string manipulation. Your efforts at coding do not have to be pretty or elegant, just functional. Python skills will also transfer across platforms: CPython (the original and the best), IronPython (.NET and HAMMER) and Jython (JVM, here’s a cool example of Python as a scripting language to automate GUIs).
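
To make the SQL bullet concrete, here’s a minimal sketch using CPython’s built-in sqlite3 module (the invoice tables and their columns are invented purely for illustration):

    import sqlite3

    # an in-memory database, much as HAMMER uses by default
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE invHead (invID INTEGER, customer TEXT)")
    db.execute("CREATE TABLE invLine (invID INTEGER, product TEXT, netAmt REAL)")
    db.executemany("INSERT INTO invHead VALUES (?, ?)", [(1, "ACME"), (2, "Globex")])
    db.executemany("INSERT INTO invLine VALUES (?, ?, ?)",
                   [(1, "Widget", 100), (1, "Sprocket", 50), (2, "Widget", 75)])

    # the basics worth learning: selecting, joining and grouping
    sql = """SELECT h.customer, sum(l.netAmt) AS total
             FROM invHead h JOIN invLine l ON h.invID = l.invID
             GROUP BY h.customer"""
    for customer, total in db.execute(sql):
        print(customer, total)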

All this talk of picking the right tools brings to mind the old golf pro-am story where the amateur was constantly pointing out to the pro (Jack Nicklaus I think) what club to play. At a particularly tricky shot to the green, the pro had had enough when his choice of club was again “criticised”. So he proceeded to take out every club, including woods & putters, placed a ball for each one and hit every ball onto the green.

We’re not all as talented as Jack Nicklaus, so having at least a good-enough tool for the job at hand is important. But it does show that focusing on the end-game is what matters, not becoming fixated on a particular way of doing things.

Enough of the moralising, and talking of being fixated on a particular tool 😉 here’s this week’s list of new features to be found in HAMMER:

New commands:

TXT – will load a text file such that each line becomes a new row in a single-column table (the column is named “text”).

LOADDB – like OPENDB, opens the named database but only if the database file exists (OPENDB will create a new empty database if the file is not found). Intended primarily as the end-point of a request-process-response loop; see DELEGATE/SUBMIT below.

SAVEDB – saves the current “main” database using the previous argument as its name (shorthand for … “main”,”filename.db”,”SAVENAMEDDB”).

SUBMIT – same as SAVEDB, but the value of the prior argument has a “.request” appended to make a file name in the format “databasename.request” to save to. Also, if the previous argument = “” or “{GUID}”, a globally unique name will be generated using a GUID. The main use-case for this command is to send the saved database for processing by an external program, maybe a Talend or Kettle ETL job, or a HAMMER.EXE job.

DELEGATE – same as SUBMIT, but doesn’t do any processing (i.e. all commands other than DELEGATE, which is expected to be the last command, are ignored); instead it packages the request in the saved database with the expectation that an external HAMMER.EXE, or another Excel-based HAMMER call, will do the processing.
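
For illustration, and assuming nothing beyond the argument order described above (the ranges and database names are made up):

  • =HAMMER(A1:C5000,"CSV","weekly_sales","SUBMIT") – loads the range into the main database and then saves it as weekly_sales.request, ready for an external processor to pick up.
  • =HAMMER(A1:C5000,"CSV","dept,sum(overtime)","REDUCE","{GUID}","DELEGATE") – packages the data and the REDUCE request into a GUID-named .request database without processing it locally.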

Changes and helper functions to support DELEGATE/SUBMIT processing:

The DELEGATE and SUBMIT commands are how HAMMER implements its version of steam-powered servers.

The intention is that databases named with extensions of “.request” are picked up and processed by “servers” waiting for such files. (The transport of such files between server and the served is a separate issue; it might simply be a matter of placing files on a shared drive, or DropBox!) These servers may then populate that database with new data (or not, e.g. they might generate a CSV instead). When finished, a text file of the same name but with the “.request” replaced by a “.response” is written to indicate that processing is complete.

Both the Excel UDF HAMMER and HAMMER.EXE (the non-Excel command-line version of the UDF) have been changed such that when a single argument is passed, that argument is expected to be a DELEGATE-generated “.request” database. The database’s “pickled” arguments and data will then be unpacked and the “delegated request” processed.

HAMMER.exe, if started with no arguments, implements a simple DELEGATE “server”, i.e. it will wait for “.request” files in its default folder and process each file as above.
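
The protocol itself is simple enough to sketch in a few lines of Python. This is not HAMMER.exe’s actual implementation, just an illustration of the request/response convention described above; the folder name and polling interval are assumptions:

    import glob, os, time

    WATCH_DIR = r"C:\shared\hammer"   # assumption: a shared drive or DropBox folder

    def process(request_db):
        # placeholder: open the .request SQLite database, do the work,
        # and write any results back into it (or out to a CSV)
        pass

    while True:
        for request in glob.glob(os.path.join(WATCH_DIR, "*.request")):
            marker = request[:-len(".request")] + ".response"
            if os.path.exists(marker):
                continue                      # already processed
            process(request)
            open(marker, "w").close()         # signal that processing is complete
        time.sleep(5)                         # poll every few seconds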

Three RTD helper functions have been added to the HAMMER UDF:

waitOnHammerResponse(database (without the .response), [folder name]) – will return “Waiting” until it finds a database.response file; it will then return the original .request filename (i.e. not the response file, as that is simply a marker; the database.request will contain the original data and any new data generated).

waitOnHammerRequest(database (without the .request), [folder name]) – will return “Waiting” until it finds a database.request file; it will then return the request filename.

waitOnFile(filename, [folder name]) – like the functions above but without the response/request expectations.
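
For example (the file and folder names are invented): a cell containing =waitOnHammerResponse("weekly_sales","C:\shared\hammer\") would display “Waiting” until weekly_sales.response appears in that folder, and would then return the weekly_sales.request filename, ready to be passed to a follow-on HAMMER call to unpack the results.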

Here’s a list of the HAMMER commands implemented so far …

Get the latest version of HAMMER here …

Using PowerPivot to Hammer home some facts

From my previous post’s example of a Hammer use-case, it’s obvious I primarily see Hammer (and indeed microETL) as a tool for shaping dimensional-type data; i.e. relatively low-volume, often very ‘dirty’, but of very high (business) value.

Fact data (aka transactional data) can of course be handled, particularly when already reduced or when by nature low-volume; such facts will in many cases fit easily in memory.

But when facts start to run into the millions of records, traditional in-memory manipulation becomes a problem. Obviously such large-volume datasets should in the first instance be handled IT-side, utilising tools that are designed to handle such volumes, i.e. enterprise-class databases. But sometimes data, even large transactional databases, are “thrown-over-the-wall” with limited support offered (or accepted). But again, there are plenty of cut-down versions of enterprise RDBMSs available (SQL Express, Oracle Express etc.) plus the FOSS offerings such as MySQL or PostgreSQL, that can be configured user-side to help tame these beasts.

If you’re using PowerPivot, you have another option: PowerPivot itself. With its ability to quickly load and compress large volumes of data and to perform many data-cleansing tasks by means of row-context DAX formulas, often that’s all that will be required.

The typical problems that a transactional dataset can throw up, such as data split over two tables (header and detail) or needing to replace a surrogate date key with an actual date (to enable certain DAX date functionality to work) can easily be fixed within PowerPivot.

One thing to note about fact data presented as a header-detail set is that traditional star-schema design requires that such data be flattened to the lower “grain” of the detail line, but PowerPivot doesn’t actually require you to do that. Some dimensions can link to the header (e.g. Customer on Invoice Header) and others to the line (e.g. Product on Invoice Line). The detail-line table is still the “hub” of the “star”, but one of its dimensions (the header table) is its route to multiple other dimensions. Not classic star-schema design, but it’ll work, and it’s good for quick-and-dirty analysis (there may be situations where things don’t pan out as expected, see this; perhaps best to stick with pure star schemas for complex work).

There’ll come a time, however, when you’ll be faced with the problem of manipulating large datasets outside of traditional RDBMS servers and outside of PowerPivot. Combining sets of data from multiple sources, as in my previous post, would be a prime example. Such projects often operate on a “need-to-know” basis, often with those supplying the data ‘outside the loop’. Today’s additions to HAMMER should help.

Three new commands, ATTACHDB, ATTACHDBINMEMORY and SAVENAMEDDB, will allow external disk-based databases to be attached to the default HAMMER database.

ATTACHDB requires a filename followed by an alias name for the attached db. Having an attached external database would allow, for example, a large fact table in CSV format to be loaded (and indexed) without touching memory. This could also be done using the previously introduced OPENDB command, but the benefit of ATTACHDB is that other non-memory-threatening processing can continue to take place in-memory.
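
As an illustration (the file, alias and table names are made up; the bare SELECT as a final argument follows the OPENDB examples further down this page):

  • =HAMMER("C:\data\facts.db","facts","ATTACHDB","SELECT count(*) FROM facts.inv_lines") – attaches the on-disk database under the alias facts and queries it without pulling the fact table into memory.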

The ATTACHDBINMEMORY command also attaches an external database, but this time loads it into memory, so any changes made will not be automatically persisted back to disk. To do that, use the SAVENAMEDDB command.

This requires an attached database alias, followed by the file name to save the database to. SAVENAMEDDB has other uses, such as making backups or making copies of data for use in an external debugging platform (it can be much easier to debug Python using a proper IDE).

Alongside the facility to load data via CSV/TSV, I’ve also added an ADO command. This requires a valid ADO connection string, followed by either a table/view name or a SQL SELECT statement. It uses ADODB 2.7 to enable handling of modern Access file formats; I’ll eventually make an ADO.NET version to remove this dependency.

Finally, I’ve managed to get a DBAPI-compliant SQLite provider working in the PYTHON step. The provider is called XLite (it’s a modified version of Jeff Hardy’s SQLite3 library) and exposes most of the same functionality as CPython’s sqlite3 provider.

The library can open external SQLite databases, so offering another means of accessing non-in-memory data, and it accesses the HAMMER default database via the pre-connected pyDB variable. Having the ability to lazy-load rows via a cursor loop is also very useful in reducing memory footprint when dealing with large tables (see the IrelandFOIExample_hammer_test2.xlsx for an example of using XLite).
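
As a rough sketch of what such a PYTHON step might look like, mirroring ordinary sqlite3 cursor usage (the table and column names are invented; pyDB is the pre-connected default database mentioned above):

    # inside a HAMMER PYTHON step
    cur = pyDB.cursor()
    cur.execute("SELECT invID, netAmt FROM inv_lines")
    total = 0.0
    for invID, netAmt in cur:     # rows are fetched lazily, one at a time
        total += netAmt
    cur.close()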

Update:

I’ve removed the dependency on a specific ADODB library (figured out how to do the equivalent of CreateObject in .NET for COM libraries). Also included is a first pass at a command-line version (hammer.exe). Examples:

hammer.exe mydata.csv CSV myotherdata.csv CSV JOIN > new.csv

hammer.exe inv.csv "CSV" "sum(qty)" REDUCE sum.csv TOCSV

Here’s a list of the HAMMER commands implemented so far …

Download  the latest version of HAMMER from here …

HAMMER a new Excel ETL tool for the PowerPivot age …

So, why am I developing this new datasmithing tool when I already have microETL to do the same task?

To be honest, when I started, my aim was simply to port a lot of my useful framework code to .NET as I’m doing more and more work in pure .NET these days.

It’s a lot easier to do so with a project driving the work rather than the drudgery of line-by-line conversion simply for the sake of conversion.

But over the last few months of investigating the process of moving some of my more useful code over to .NET; discovering C#-SQLite as a near drop-in replacement for SQLite; realising that IronPython 2.7 marks the coming-of-age of Python under .NET; discovering the ease of multi-threaded programming under .NET 4.0; I began to see the potential of building a tool more suited to this new emerging world of technological plenty; of 64bit, multi-core, high-RAM machines playing host to ground-breaking tools such as PowerPivot, than the 32bit, single-threaded world that microETL was born into.

I wanted to bring microETL back to its roots as an in-cell function. To make it sit more easily with the spreadsheet functional style of programming, a style of programming that has been more successful at attracting “civilians” to programming than any other method. To make the tool more in tune with the Excel way of doing things.

At the same time, I also wanted the tool to be “detachable” from Excel, so that it could perform ETL functions server-side without the need for Excel. Ideally capable of being run from the command-line from a single no-install-required executable.

And so, the Datasmith’s HAMMER was born.

So where does this fit in with PowerPivot? When I design something like HAMMER I normally craft it around a series of “use cases”. Below is one of my PowerPivot & HAMMER use cases:

The Data Quality Team

The sales & marketing department of a wholesaling company have discovered PowerPivot and are excited about using it to help manage an upcoming merger with a recently purchased competitor, also in the wholesaling business. There’s a large overlap between customers and products supplied, and the sales structures, pricing and KPIs used by both companies are very different. The transition phase from two separate companies to a single operating unit will need detailed planning and constant monitoring. Oh, and there’s also the problem of the brand new ERP system that’ll replace the existing systems currently in use.

The group can see how PowerPivot models will help with this task but are worried about sourcing and managing the data. They decide to appoint a small data quality team, with a mixture of IT and business experience, which will be responsible for mapping old to old, old to new, and managing the data deltas (business still goes on: new customers, new products etc.).

Most of this work revolves around “master data”. Transactional data may be much larger in volume, but get the master data right and the transactional data will be easy to handle, as long as the processing capacity to handle the volumes is available (which, thanks to PowerPivot and really fast high-RAM PCs, it is).

In golf, there’s a saying: “You drive for show, but you putt for dough”. Likewise in data: a lot of emphasis is put on the big-data transactional datastores, but the real analytical benefit comes from the less sexy master data, aka the dimensions. As a result the most important output from the data quality team will be a set of conformed dimensions.

Each week (and at month-end) the team will get extracts of the master and transactional data in various formats from both legacy systems and will also get the WIP master datasets from the team working on the new ERP system. From these they’ll construct point-in-time conformed dimensions combining old with old and old with new; and will rebuild transactional feeds with new conformed keys alongside existing keys. These datasets are then sent to the various analysis teams to enable them to build Excel and PowerPivot models that will hopefully all “sing from the same hymn sheet”.

And how will this be accomplished? With the datasmith’s HAMMER of course (well, it is my day-dream use-case!). Both in-Excel and command-line HAMMER “scripts” will be used to wrangle the datasets and the complete sets packaged as SQLite database files.

When the analysis groups receive these files, they too will use HAMMER to extract the datasets they require (either straight to CSV for loading into PowerPivot, or into Excel for some further transformation work prior to modelling). To handle slowly-changing dimension scenarios, many of the teams will squirrel away each week’s data in order to be able to model yet-unknown-but-very-likely changes of organisational structures.

Although this is a use-case, it’s based on reality, a reality that would have been so much easier to manage if I, and the teams I worked with, had had access to technology such as HAMMER and PowerPivot.

To download the latest version of the code, go to this page on my website.

Follow the HAMMER tag on this blog for information on commands and examples (best start with the oldest and work forward …)

SQL noSQL no Python no VBA.

I’ve uploaded another version of HAMMER; this adds some new features and also takes some away. The removed features are Python and multi-threading support in the 2003 version of the add-in. Calling it the 2003 version isn’t entirely accurate (it’s actually called datasmith-noPython.xll) as this version will also work for Excel 2007/2010 32bit and for older versions (maybe even ’97!). It should really be called the .NET 2.0 version, as the features removed from this version depend on the .NET 4.0 runtime (IronPython 2.7 and multi-threading). I’ll eventually build a .NET 4 version for Excel 97-2003 with Python included, but this will still be missing the multi-threading features.

So, the version that setup.xls will install if it detects a sub-2007 version of Excel will offer SQL and noSQL (JOIN, UNION etc.) but no Python or multi-threading.

So what about new features? Excel being the original noSQL database, I continue to add more noSQL commands for those who wish to avoid SQL or find its syntax somewhat long-winded. The JOIN & LOJOIN (outer join) commands are good examples: simply load two tables where the columns you wish to join on share the same names. Simple. Another example is the REDUCE (aka GROUPBY aka DISTINCT) command I’ve added in this version. It essentially performs a SELECT … FROM … GROUP BY; again, load or generate a table, then follow with a list of the columns you wish to ‘reduce’ the table by, plus any aggregates you wish to perform. Examples:

  • =HAMMER(myHugeList,"dept,sum(overtime)","REDUCE")
  • =HAMMER(AccessLogs!A1:C9999,"areaAccessed,byWhom","REDUCE")
  • =HAMMER(invHead,invLine,"JOIN","count(invID),sum(netAmt)","REDUCE")

If noSQL is not your cup of tea and you wish to utilise the full power of a SQL database, a new command, “OPENDB”, will allow you to open an existing (or create a new) SQLite database file. This allows SQLite data sources to be accessed and written to via standard in-cell formulas, no VBA required! The command expects the previous argument to be the database name. If no such argument exists it will create a temporary on-disk database. This command usually only makes sense as the 1st command, as it’ll close and wipe any previously opened databases. If no “OPENDB” command is issued (i.e. the default), an in-memory database (aka :memory:) is used. Examples:

  • =HAMMER("C:\data\myDB.db","OPENDB",A10:C9910) will copy the data from range A10:C9910 and save it in a table called table3 in the myDB.db SQLite database.
  • =HAMMER("C:\data\myDB.db","OPENDB","SELECT * from table3") will fetch the same data back into Excel.

Wow, steady on: what if there’s a need to store or fetch data from disk without using SQLite? No problem, use the “TOCSV” command; it outputs the last table loaded or generated in CSV format to the file name specified. (There’s also a “SQLTOCSV” command which expects a SQL statement to specify the data to extract, followed by the file name to extract to.)
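
For example (the range and file name are invented): =HAMMER(A1:C100,"CSV","C:\data\backup.csv","TOCSV") loads the range and writes it straight back out as a CSV file, no SQLite file involved.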

Two other commands, “CSV” and “TSV”, will load comma- and tab-separated data into HAMMER. Although the CSV functionality is useful within Excel, the main driver for these commands is to enable HAMMER to work outside Excel as a command-line data processor; you heard it here first folks!

I’ve also added the 1st set of my helper functions; these two functions are only available in the 2007/2010 versions as they use multi-threading. The two functions are:

  • hammerToFit – wraps HAMMER, but will auto-resize the array area (or create a brand new array-selection if none) to fit the returned table. Note: to achieve this, the HAMMER function will be called twice if the existing array area needs adjusting (see the example after this list).
  • hammerToSheet – again wraps HAMMER, but will paste the resulting table to a new sheet.
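
For example (reusing the REDUCE example from above, so the range name is made up): =hammerToFit(myHugeList,"dept,sum(overtime)","REDUCE") behaves like the equivalent HAMMER call but resizes its own array area to fit the returned table.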

Although both helper functions utilise threads to achieve these little tricks (hence they’re not available sub-2007), when HAMMER functionality is called via these wrappers the function operates as a single-threaded function – there’s a good reason for this which I’ll explain some other time. Internal HAMMER threading does however still work.

Here’s a list of the HAMMER commands implemented so far …

Download the latest version of HAMMER here …