Posts Tagged ETL

Scoping Data Warehouse Initiatives

Data warehousing is a complex operation. From start to finish (if there is a finish), project teams are faced with many challenges. In all phases of the lifecycle, there are opportunities for derailment. The best way to mitigate potential issues and stay on time and within budget is to carefully define and manage scope. Managing scope can be an ongoing struggle (especially if requirements are not clearly defined or justified). While this is really a PM101-type of topic, I feel there are some fine points in a DW/BI environment that are not mentioned enough.

Consider the following:

Programs versus projects

I won’t get into a deep PM discussion here, but it is important to point out that data warehousing (or business intelligence, master data management, etc.) initiatives should be thought of as programs and not projects. This mindset will help in scoping.

A program (which might also be called a “project portfolio” in some circles) is basically just a set of related projects. With a program, the emphasis is on organizing, prioritizing, and allocating resources to the right projects. Program scope is more strategic, and answers long-term questions about what type of value the organization hopes to achieve from the initiative.

A project, on the other hand, is much more specific — with a set number of deliverables and goals that have a high immediate impact. The scope at the project level is therefore more tactical in nature: high impact, fast delivery. Be aware that some projects may never be given the green light (for example, if there is a low business impact or if there is a low feasibility rating because of data source or data quality complications).

What I find odd is that organizations still choose to tackle immense data warehousing initiatives in one or two shots, trying to deliver everything at once over a period of 18 or more months. This is the wrong approach (here’s why). Break this large initiative into individual projects and try to deliver functionality every 6 to 8 weeks.

The business process

The best way to break down data warehousing programs into high-impact projects is along business process lines. A business process, as defined here, is:

The complete response that a business makes to an event. A business process entails the execution of a sequence of one or more process steps. It has a clearly defined deliverable or outcome. A Business Process is defined by the business event that triggers the process, the inputs and outputs, all the operational steps required to produce the output, the sequential relationship between the process steps, the business decisions that are part of the event response, and the flow of material and/or information between process steps.

Some examples of the above: inventory tracking, Internet sales, retail sales, marketing, tax assessment, tax collection, pitching, batting.

In any data warehousing environment, you can expect to have several business processes to model. Each business process you tackle will have elements touching upon different aspects of the data warehouse, including infrastructure, middleware, data modeling, ETL, business logic development, presentation elements, and so on. If you scope each project to the business process, you can deliver complete solutions in the shortest amount of time. (It should be obvious that the very first business process you implement will take the longest, as the team works out the core infrastructure. Most of this infrastructure will be reused by other business processes.)

Avoid scoping to a data source

Do not fall into the trap of scoping to a data source. Scoping to a data source is almost guaranteed to deliver mediocre outcomes. These projects typically include many unfinished or inadequate business processes, all delivered at once at some point in the distant future, long after the excitement over the initiative has subsided.

While it is true that only one or two data sources might exist in some organizations, it is not true that inventory, customers, sales, procurement, shipping, and other business processes need to be taken on at once. Create a single project for each business process, prioritize based on impact and feasibility, and then badabing badaboom, you deliver. Next.

Along the same lines, do not adjust your scope if the data source is unavailable, uncooperative, or lacking in quality. Instead, bring the fight to the data source (here is where a good, preferably C-level, business sponsor can come in handy) and set things right. This is obviously a project risk, and also an organizational risk. If you are having problems extracting inventory data, then maybe it’s time to put down your data warehousing gloves and get a new inventory system.

Last thoughts

Scoping the data warehouse is a difficult problem. Trouble starts early on with the initial idea, continues through requirements gathering, and follows you into the development phase of the lifecycle. There is not a lot of good advice in this area for data warehousing (if you happen to know of a good source, please send me a link or title). But I do find that if you work towards business processes, think in terms of programs and projects, and avoid the data source trap, scoping decisions will settle into the real needs of the business.


The Three Faces of a Good ETLer

Hiring a “data integration expert” or consultant for your next, greatest data warehousing project? Don’t take it lightly. ETL personnel are critical to the success or failure of your project.

The following are what I deem to be essential technology-related aspects, or faces, of a good ETL developer and/or architect (herein referred to as an ETLer for lack of creativity). While you need to consider business and industry knowledge, personality, and experience in your team-building process, you should start by checking off the following on your interview sheet:

First Face: the technologist

Programming must come naturally to an ETLer. Objects, logical constructs, expression construction, program flow, and the like must be well understood. The truth is that no matter how much your vendor proclaims that their tool does it all, chances are excellent that some hand coding will be required. On top of that, ETL tools work a lot like procedural programs. Technologists are very good at putting their best foot forward, and will generally think of ways to make the ETL flow perform better. They also think about logging, auditing, and exception handling; all important.

Second Face: the theorist

But a solid programming background is not enough. Knowledge of data integration theory and best practices is equally important. While I believe in and use Kimball’s methodologies for integrating data into a dimensional data warehouse, other methodologies exist that may be more suitable to your business and integration needs. Following a proven methodology, with slight modifications to suit your environment, will get you further, faster. Having little or no theory behind what you’re doing gets you somewhere, slower. Identify your methodology, and then find someone who understands it.

Third Face: the specialist

Knowing the ins and outs of your ETL tool (SSIS, OWB, DataStage, Talend Open Studio, etc.) is essential. I would venture to guess that a solid programmer who has a great understanding of ETL theory will be able to get by using most tools with little learning curve. What I worry about (and you should too) are the nuances in the tooling that can stump even the best. These nuances (SSIS, my tool of *ehem* choice, has many of them; sorry, I needed to clear my throat) can cost you many project hours and force rewrites if blocking issues are encountered. Tool knowledge also tells you when it is appropriate to forgo the tool: because of I/O issues, because hierarchical data is better handled elsewhere, or because business logic is best not bundled within a data flow.

About Face

While junior members of your data integration team can be one or two-faced (that came out funny), senior members and architects must have more meat on the bone.

I suppose this is why good ETLers are difficult to come by. The ETLer needs to have a healthy mix of programming talent, methodological discipline, and tool knowledge. Trained DBAs and software developers might have a lot to offer, as might a troop of certified tool jocks and method junkies, but to get your project in on time and within budget, don’t settle.


Avoid Data Dead Ends and Information Loss

When analyzing data to make a decision, the last thing you want to encounter is a data dead end. You may be digging into some figures only to find that the data you have access to has been aggregated, combined, filtered, interpreted, or otherwise changed (in an unauthorized way) from its original source. And as an analyst, the last thing that you want to discover is that your ETL processes are solely responsible.

In Business Intelligence and decision-support instances, especially reports and dashboards, data alterations are common. Aggregates, summaries, snapshots, and the like are normal and necessary for a bird’s eye view of whatever business process is being examined. But in order to avoid information loss, be certain that the underlying data is intact at the most atomic and granular level. And also be sure analysts can get at this data (no black boxes allowed). You don’t want this information to be tossed into a black hole never to be seen or heard from again.

Atomic and Granular

I like to distinguish atomicity from granularity in the following way: Atomicity refers to non-additive and descriptive elements, usually stored as dimensions or non-additive facts, while granularity refers to measurement data usually stored as facts in a business process dimensional model. You could interchange these definitions under certain circumstances, but I like to draw the line so it is clear what I’m talking about.

Atomicity

Atomic data elements will give you the ability to conduct deeper research. By atomic, I mean that the data element has an exact meaning and does not represent some concatenated value or total. The parts carry more meaning than their whole and, in the end, allow analysts to cut analysis across different dimensions at a very minute scale (a sketch follows the list below).

  • A phone number is better split into country code, area code, and subscriber number
  • A street address into street number, name, type, and direction
  • A person’s name into surname and given name
  • A parcel ID into plat, lot, and map
  • An industry classification into groups and subgroups
  • A date into year, quarter, month, week of year, day, and day of week
  • Et cetera!
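To make this concrete, here is a minimal T-SQL sketch of a dimension storing these pieces separately; the table and column names are mine and purely illustrative. With atomic columns, a question like “how many customers per area code?” becomes a simple GROUP BY instead of string surgery on a concatenated value.

    -- Hypothetical customer dimension with atomic attributes (names assumed).
    CREATE TABLE dim_customer (
        customer_sk        INT          NOT NULL PRIMARY KEY,
        surname            VARCHAR(50)  NOT NULL,
        given_name         VARCHAR(50)  NOT NULL,
        phone_country_code VARCHAR(5)   NULL,
        phone_area_code    VARCHAR(5)   NULL,
        phone_subscriber   VARCHAR(15)  NULL,
        street_number      VARCHAR(10)  NULL,
        street_name        VARCHAR(60)  NULL,
        street_type        VARCHAR(15)  NULL,   -- e.g. 'Ave', 'Blvd'
        street_direction   VARCHAR(5)   NULL    -- e.g. 'N', 'SW'
    );

    -- Because each attribute is atomic, slicing by any one of them is trivial.
    SELECT phone_area_code, COUNT(*) AS customer_count
    FROM dim_customer
    GROUP BY phone_area_code;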

Granularity

With granularity, you define the level of detail in a measurement. The more granular, the greater the detail. For a trip to the market, you can define the granularity of your shopping excursion at the item level (each item in the basket), by product (grouping similar items), or perhaps by the entire basket as a whole. The choice is yours. Of course, storing the price of each item is the most granular and will give you the greatest flexibility in your analysis. You can then build your aggregates (by product, entire basket, etc.) from the most granular metrics.
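Here is a hedged sketch of the market-basket example in T-SQL (the names are assumptions, not from the original post): keep the fact table at the line-item grain and derive the coarser views from it, rather than loading them in its place.

    -- Transaction-grain fact: one row per item scanned in a basket.
    CREATE TABLE fact_basket_line (
        basket_id   INT           NOT NULL,
        product_sk  INT           NOT NULL,
        date_sk     INT           NOT NULL,
        quantity    INT           NOT NULL,
        line_amount DECIMAL(12,2) NOT NULL
    );

    -- Aggregates are built from the atomic grain, never the other way around.
    -- Spend by product:
    SELECT product_sk, SUM(line_amount) AS product_amount
    FROM fact_basket_line
    GROUP BY product_sk;

    -- Spend by entire basket:
    SELECT basket_id, SUM(line_amount) AS basket_amount
    FROM fact_basket_line
    GROUP BY basket_id;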

If you decide to load data at larger grains, you are losing information and creating dead ends for your decision-makers. It pays to load data at the finest grain possible.

From here…

Integrating data into the data warehouse at an atomic and granular level gets you pretty far. You are likely already doing this (especially if you are familiar with transaction grain fact tables). But there are other ways you can lose data, and therefore information. In a follow-up to this post, I’ll discuss how evaluations and logic gates can also be a source of information loss.

I’d like to know your thoughts on this subject. Have I missed anything important, or have I marked something important that you feel is inconsequential?


ETL Subsystem 31: Paralleling and Pipelining

This article is part of a series discussing the Kimball Group’s “34 Subsystems of ETL”. The Subsystems are a group of “Best Practices” for delivering a BI/DW solution. In my articles, I discuss how each Subsystem can be implemented in SSIS or hand coded in Visual FoxPro.

Of all the subsystems that I’ve discussed so far, this one resulted in the most research. I had to (a) learn more about how paralleling works and (b) experiment with my environment to better understand it.

Honestly, I’ve taken this subsystem for granted over the years. And for VFP, I’ve done little exploration in this arena. For SSIS, I have tended to adjust the settings I can adjust (more on this below), monitor the results, and tweak my performance as needed. In some environments, this lackluster approach will get you by just fine. If you have very small load windows and performance is critical, then you’ll need to make a better effort.

So what is Paralleling and Pipelining?

Lumped together into a single subsystem, these two performance techniques are related but different. They’re cousins, I suppose. Running your ETL processes in parallel means that your ETL system is carrying out multiple operations simultaneously. Pipelining your ETL processes means that you can start new operations before the previous ones complete.

Paralleling and Pipelining are quite desirable. And depending on your tool of choice, taking advantage of them can be painless or painful.

How can you apply them?

You can achieve parallel processing by utilizing the CPUs on a single machine, or you can utilize multiple machines. The first option is the easiest to set up, and the results can be quite good. With a multiple-CPU (or multi-core) setup, you are actually running code (programs, algorithms) simultaneously, potentially doubling performance, all on the same box. You can also scale out (i.e. scale horizontally) your ETL processes by adding computers (see What is distributed computing? by Kirk Pearson), allowing you to take advantage of the CPUs, RAM, and I/O of each machine. The latter has some significant design implications, but it is well worth it if your environment needs it.

Pipelining increases throughput. Unlike parallelism, it will not make an individual operation run faster; rather, it permits downstream processes to start before the upstream process finishes. A great analogy is an assembly line, where parts are added to the whole as it travels down the line.

Getting parallelism and pipelining to work together is the Holy Grail of ETL performance. While certain performance techniques are available at all phases of data integration (from Extraction and CDC, to surrogate key handling and using partitions for fast loading), none can compare to the gains you can realize with this subsystem.

You should also keep in mind that CPU multitasking is different from parallel processing, and multithreading is different from pipelining. A multitasking process shares CPU resources, giving the illusion of parallelism (although one man’s illusion can be another man’s reality). Multithreaded applications share the same memory, but operate on different engine threads (i.e. a subtask). Multitasking and multithreading, like pipelining, increase throughput and also play an important role in performance tuning. I’ll talk a little more about this below in my section about FoxPro. Otherwise, if anyone is interested, I can try to elaborate in another post.

Where can this best be utilized?

Here are some ideas on where you can focus your efforts:

  • When loading historical data or retrieving data from multiple similar sources, execute the same package for different date ranges at the same time (in SSIS, for example, use multiple Execute Package Tasks or run the same package multiple times together, as Jamie Thomson explores here), or design your historical load packages to break the data apart into separate threads (see the sketch after this list).
  • Spread out UPDATE statements. This can be really handy if you have a few accumulating snapshot fact tables.
  • Spread out complex algorithms and routines that can operate on a subset of data.
  • Load staging tables while downstream processing loads your dimensional model
  • Do lookups (especially surrogate key lookups) in parallel
  • Distribute your conformed dimensions to other machines, data marts, etc. in parallel
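As a rough illustration of the first bullet, here is one way to parameterize a historical load by date range in T-SQL; the procedure, tables, and columns are assumptions on my part, not anything from the original post. Because each call touches an independent slice of history, the calls can be launched concurrently (separate Execute Package Tasks, SQL Agent jobs, or sessions).

    -- Each date range is an independent slice, so these loads can run side by side.
    -- stg_sales_history and src_sales are assumed to exist.
    CREATE PROCEDURE load_sales_history
        @range_start DATETIME,
        @range_end   DATETIME
    AS
    BEGIN
        INSERT INTO stg_sales_history (sale_id, sale_date, amount)
        SELECT sale_id, sale_date, amount
        FROM src_sales
        WHERE sale_date >= @range_start
          AND sale_date <  @range_end;
    END;
    GO

    -- Three non-overlapping slices that could be executed at the same time:
    EXEC load_sales_history '2006-01-01', '2007-01-01';
    EXEC load_sales_history '2007-01-01', '2008-01-01';
    EXEC load_sales_history '2008-01-01', '2009-01-01';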

 

SQL Server 2005 Integration Services (SSIS)

As you know, I use SSIS and VFP for ETL (not at the same time or on the same project though). With SSIS, I can quickly create complex routines that can automatically take advantage of multiple processors. The native support for buffers, execution trees, and parallelism makes my job pretty easy (which is why I suppose I’ve taken this subsystem for granted over the years). Simply understand how SSIS works, adjust the settings you need to adjust, monitor your performance, and tweak as needed.

To get a grip, the following resources are invaluable:

 

Hand Coding with Visual FoxPro (VFP9)

While SSIS and SQL Server have built-in mechanisms to manage most of the paralleling and pipelining responsibilities for you, FoxPro does not. You can achieve some very good results using VFP and multithreading, but you have to be extremely creative in how you handle paralleling and pipelining. If you don’t think this is the case, I’d love to hear how paralleling and pipelining can be achieved with VFP!!!

Of course, the VFP community is — and has always been — quite creative. As with most of this sort of thing, Calvin Hsia is near the front of the line. MTmyVFP (True VFP multi-threading) on CodePlex is a creative example using Hsia’s Multithreading class. For more information and a ton of details, check out:

As I’ve stated before, multithreading is not parallelism, nor is it pipelining. But if you utilize MTmyVFP (or a similar solution) in your VFP ETL system, you will realize many performance benefits. Lastly, there was a pretty interesting, albeit short, discussion on this issue here.

From here

This post might have come off a bit long-winded, but there were quite a few important points to make. I hope that I’ve been able to distill what I’ve learned and that in the end, it all makes some sense. In my next ETL post, I’ll talk about ETL Subsystem 32: Security.


ETL Subsystem 28: Sorting

This article is part of a series discussing the Kimball Group’s “34 Subsystems of ETL”. The Subsystems are a group of “Best Practices” for delivering a BI/DW solution. In my articles, I discuss how each Subsystem can be implemented in SSIS or hand coded in Visual FoxPro.

Before you can aggregate, summarize, look up, or merge data, you will need it ordered in some way. Sorting, which changes the physical order of the underlying data, is seldom necessary. Instead, indexing, which changes the logical order of the underlying data, is often all you need. The Kimball Group seems to use Sort interchangeably with Order, so I won’t deviate in this post. This may be because some database vendors do so as well.
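As a small illustration of that point (the object names below are mine, not from the post), a covering index supplies the logical ordering most ETL steps need without ever rewriting the table.

    -- Logical order via an index; the staging table itself is never reordered.
    -- INCLUDE makes the index cover the query below.
    CREATE INDEX ix_stg_sales_sale_date
        ON stg_sales (sale_date)
        INCLUDE (sale_id, amount);

    -- Downstream steps that need ordered input simply ask for it; with the
    -- covering index in place, the engine can read rows in order rather than
    -- performing a separate sort.
    SELECT sale_id, sale_date, amount
    FROM stg_sales
    ORDER BY sale_date;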

If you’re loading data from a mainframe, then the utility programs DFSORT or SORT will be of some use. Other utilities exist depending on the vendor, but these are the main ones. The idea is to sort the raw physical data before it hits the ETL pipeline. Doing so will save processing time. Otherwise, you can simply load raw data into your staging area and then use standard SQL to order your data.

Sometimes sorting is implicit, so be careful. In SQL Server, sorting occurs when you use ORDER BY, GROUP BY, SELECT DISTINCT, and UNION. You should, for example, avoid SELECT DISTINCT whenever possible, as the additional sorting can hinder performance. Many programmers and (ehem) trained DBAs automatically add the distinct clause to their queries; be certain it is actually necessary, otherwise performance will take a hit.
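A quick illustration with assumed names: if the select list already contains a unique key, DISTINCT cannot change the result set; it only risks adding a duplicate-removal (Sort or Hash Match) step to the plan.

    -- stg_orders is assumed to have order_id as its primary key.
    -- DISTINCT here removes duplicates that cannot exist, and may add a Sort or
    -- Hash Match (Aggregate) operator to the plan for no benefit.
    SELECT DISTINCT order_id, order_date, customer_id
    FROM stg_orders;

    -- Same rows, no duplicate-removal work requested:
    SELECT order_id, order_date, customer_id
    FROM stg_orders;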

Beware that sometimes too much sorting can be a problem. Here is a general rule: Only sort when you need to! Sorting is an expensive operation and I’ve seen it overused and abused.

Clusters and Heaps

A clustered index determines the physical order of data in a table. That is, data is inserted into the table in order. Pages are linked for faster sequential access. This is in contrast to a heap table, in which data is added to the end of the table and the data pages are not linked.

Clustered indexes can make a big impact in the staging area. Consider a surrogate key lookup dimension that contains 4 columns: surrogate key, business key, valid_from, and valid_to. A clustered index on business key, valid_from, and valid_to will return the surrogate key upon lookup faster than if your lookup dimension was a heap. This is mainly because “queries that return a range of values using operators such as BETWEEN, >, >=, <, and <=” are well suited for the clustered index structure.

This works well during surrogate key lookups where we typically want the surrogate key of some business key on a particular date. Of course INSERTs and UPDATEs take longer on clustered tables because the physical structure of the table needs to be maintained. So be sure to balance speed with need. In my experience: Maintaining a clustered lookup dimension is almost always well worth the effort, especially when the original dimension is big and wide.
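Here is a minimal T-SQL sketch of that lookup structure, with illustrative names: cluster on the columns the lookup filters on, so the range predicates read ordered, linked pages.

    -- Surrogate key lookup dimension (names assumed, not from the post).
    CREATE TABLE stg_customer_lookup (
        customer_sk INT          NOT NULL,  -- surrogate key returned by the lookup
        customer_bk VARCHAR(20)  NOT NULL,  -- business (natural) key
        valid_from  DATETIME     NOT NULL,
        valid_to    DATETIME     NOT NULL
    );

    -- Clustering on the lookup columns keeps the range-friendly structure the
    -- quoted guidance describes.
    CREATE CLUSTERED INDEX cix_customer_lookup
        ON stg_customer_lookup (customer_bk, valid_from, valid_to);

    -- Typical lookup: which surrogate key was in effect for this business key
    -- on the date of the incoming fact row?
    DECLARE @business_key VARCHAR(20);
    DECLARE @event_date   DATETIME;
    SET @business_key = 'CUST-000123';
    SET @event_date   = '2008-06-30';

    SELECT customer_sk
    FROM stg_customer_lookup
    WHERE customer_bk = @business_key
      AND @event_date >= valid_from
      AND @event_date <  valid_to;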

Best Practices

Here are some best practices that might help you keep your sorting operations under control:

  • Create one tempdb data file for each physical processor on the machine
  • Put the tempdb files on their own drive
  • Make sure your data types are appropriate for the column (storing numbers in a varchar field is not very efficient)
  • Keep the columns you sort on as narrow as possible (here is where good data profiling will help)
  • Integer sorting is much faster than character sorting, so consider using integers instead of character data whenever possible

 

SQL Server 2005 Integration Services (SSIS)

As I have stated, sorting is often overused. The Sort component that ships with SSIS is good for small data sets that fit entirely into memory. Performance will degrade as soon as there is overflow. One way around the problem:

  1. In the Source Adapter, set the data access mode to “The results of an SQL statement”
  2. Use an ORDER BY clause in the SQL statement
  3. Open the “Advanced Editor” of the Source component
  4. On the Input and Output Properties tab, click the Source Output node and set the property IsSorted to True.
  5. Under Output Columns, set the SortKeyPosition for each column involved in the sort (start at 1; use a negative number to denote a descending sort order)
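For step 2, the source statement is ordinary SQL; the table and columns below are placeholders of my own. The important part is that the ORDER BY columns line up with the SortKeyPosition values assigned in step 5.

    -- Illustrative source query for the source component in steps 1 and 2.
    -- customer_bk would get SortKeyPosition = 1 in step 5 to match this ORDER BY.
    SELECT customer_bk, customer_name, updated_at
    FROM stg_customers
    ORDER BY customer_bk;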

 

Hand Coding with Visual FoxPro (VFP9)

Physically sorting a table in FoxPro is not necessary. The physical order of the table is irrelevant because of how VFP uses its indexes. You can test this easily by creating a table with a few million rows containing a randomly generated integer value and another column with some other data (perhaps system time or something). Using the SORT command (or SELECT INTO with an ORDER BY) create a second table in the order of the integer column. Without creating an index, attempt to retrieve a row from the table based on some integer value. Now, create an index on the column in both tables. You should notice no performance differences.

So while you’ll need to understand indexes and Rushmore optimization in VFP, you won’t need to bother with physical sorts.

From Here

In the next post, I’ll discuss lineage and dependency analysis. If you work in the financial sector (like me) then this topic will be of critical interest to you. If you don’t, I suspect you still care about lineage and dependencies!
