When analyzing data to make a decision, the last thing you want to hit is a data dead end. You may be digging into some figures only to find that the data you have access to has been aggregated, combined, filtered, interpreted, or otherwise changed from its original source in ways no one authorized or documented. Worse still, as an analyst, you may discover that your own ETL processes are solely responsible.
In Business Intelligence and decision-support contexts, especially reports and dashboards, data alterations are common. Aggregates, summaries, snapshots, and the like are normal and necessary for a bird's-eye view of whatever business process is being examined. But to avoid information loss, be certain that the underlying data remains intact at the most atomic and granular level, and be sure analysts can get at that data (no black boxes allowed). You don't want this information tossed into a black hole, never to be seen or heard from again.
Atomic and Granular
I like to distinguish atomicity from granularity in the following way: Atomicity refers to non-additive and descriptive elements, usually stored as dimensions or non-additive facts, while granularity refers to measurement data usually stored as facts in a business process dimensional model. You could interchange these definitions under certain circumstances, but I like to draw the line so it is clear what I’m talking about.
Atomic data elements give you the ability to conduct deeper research. By atomic, I mean that the data element has an exact meaning and does not represent some concatenated value or total. The parts carry more meaning than their sum and, in the end, allow analysts to cut analysis across different dimensions at a very fine scale.
- A phone number is better split into country code, area code, and subscriber number
- A street address into street number, name, type, and direction
- A person’s name into surname and given name
- A parcel ID into plat, lot, and map
- An industry classification into groups and subgroups
- A date into year, quarter, month, week of year, day, and day of week
- Et cetera!
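As a small sketch of the date bullet above, a single calendar date can be broken into its atomic, non-additive attributes, the kind of columns you would carry on a date dimension row. The function and field names here are illustrative, not a prescribed schema:

```python
from datetime import date

def decompose_date(d: date) -> dict:
    """Break a calendar date into atomic attributes suitable
    for a date dimension row."""
    iso = d.isocalendar()  # (ISO year, ISO week number, ISO weekday)
    return {
        "year": d.year,
        "quarter": (d.month - 1) // 3 + 1,
        "month": d.month,
        "week_of_year": iso[1],
        "day": d.day,
        "day_of_week": d.strftime("%A"),
    }

print(decompose_date(date(2024, 11, 5)))
# {'year': 2024, 'quarter': 4, 'month': 11, 'week_of_year': 45,
#  'day': 5, 'day_of_week': 'Tuesday'}
```

With the pieces stored separately, an analyst can slice by quarter, by day of week, or by week of year without ever parsing the original date string.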
With granularity, you define the level of detail in a measurement. The more granular, the greater the detail. For a trip to the market, you can define the granularity of your shopping excursion at the item level (each item in the basket), by product (grouping similar items), or perhaps by the entire basket as a whole. The choice is yours. Of course, storing the price of each item is the most granular approach and gives you the greatest flexibility in your analysis. You can then build your aggregates (by product, entire basket, etc.) from the most granular metrics.
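The shopping-basket example can be sketched as follows: keep each line item at the finest grain, then roll up to product groups and the basket total on demand. The records and field names are made up for illustration:

```python
from collections import defaultdict

# Item-level facts: the finest grain for this trip (prices in cents,
# so the additive measure doesn't suffer floating-point drift).
basket = [
    {"item": "gala apple", "product": "apples", "price_cents": 89},
    {"item": "fuji apple", "product": "apples", "price_cents": 99},
    {"item": "whole milk", "product": "dairy",  "price_cents": 349},
]

# Coarser grains are derived from the atomic grain, not stored in its place.
by_product = defaultdict(int)
for row in basket:
    by_product[row["product"]] += row["price_cents"]

basket_total = sum(row["price_cents"] for row in basket)

print(dict(by_product))  # {'apples': 188, 'dairy': 349}
print(basket_total)      # 537
```

Going the other way, recovering item prices from a basket total, is impossible; that is exactly the information loss a coarser grain locks in.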
If you decide to load data at coarser grains, you are losing information and creating dead ends for your decision-makers. It pays to load data at the finest grain possible.
Integrating data into the data warehouse at an atomic and granular level gets you pretty far. You are likely already doing this (especially if you are familiar with transaction grain fact tables). But there are other ways you can lose data, and therefore information. In a follow-up to this post, I’ll discuss how evaluations and logic gates can also be a source of information loss.
I’d like to know your thoughts on this subject. Have I missed anything important, or have I flagged as important something that you feel is inconsequential?