Why I prefer Pig for Big Data Warehouse ETL


First, a brief note about this blog. Shortly after I announced this blog, an… event?… was announced, and it seemed prudent to avoid blogging while that event was underway. However, the quiet period is now over and I have several months of blog posts in the queue! So let the blogging commence! (again) 🙂


Last week, I had the pleasure of speaking at Hadoop Summit 2015 on Data Warehousing in Hadoop. There was a lot of interest in this topic… the session was packed, and I received a lot of great questions both during and after the session. One question that kept popping up was why I prefer Pig over Hive for performing Data Warehouse ETL in Hadoop. The question itself wasn’t as surprising as the context it was raised in, i.e. “But I thought Hive was for data warehousing?” These questions were largely from people who were investigating and/or beginning their own data warehouse migration or enrichment project. After a few of these conversations, I came to realize that this was a result of the excellent marketing that Hive has done in billing itself as “data warehouse software.”

Given the confusion, please allow me to clarify my position on this topic: I think Hive and Pig both have a role in a Hadoop data warehouse. The purpose of this post is to explain my opinion 🙂 of the role each technology plays.

I rely on Hive for two primary purposes: definition/exposure of DDL via HCatalog and ad hoc querying. I can create an awesome data warehouse, but if I don’t expose it in Hive via HCatalog, then data consumers won’t know what’s available to query. Commands such as show databases and show tables wouldn’t return information about the rich and valuable datasets my team produces. So I think it’s actually extremely important to define DDL in Hive as the first step to producing new datasets, i.e.:
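For example, a minimal sketch of that first step (the database, table, and column names here are hypothetical, not from a real project):

```sql
CREATE DATABASE IF NOT EXISTS dw;

CREATE TABLE IF NOT EXISTS dw.click_detail (
  user_id    STRING,
  event_date STRING,
  tag        STRING
)
STORED AS ORC;
```

Once the table is registered, show tables surfaces it immediately, and HCatalog exposes the same schema to Pig jobs downstream.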

Also, Hive has done a decent job of ensuring that the core query syntax & functionality from SQL has been ported into Hive. Thus, anyone who has a basic understanding of SQL can easily sit down and start to retrieve data from Hadoop. The importance of this cannot be overstated… quite simply, it has lowered the barrier to entry and has provided analysts with an easier transition from querying legacy DWs to querying Hadoop using HiveQL.
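As an illustration (reusing the hypothetical table above), this is essentially the same SQL an analyst would write against any relational DW:

```sql
SELECT event_date, COUNT(*) AS clicks
FROM dw.click_detail
GROUP BY event_date
ORDER BY event_date;
```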

Hive also makes it easy to materialize the results of queries into tables. You can do this either through CTAS (CREATE TABLE AS SELECT) statements, which are useful for storing the results of ad hoc queries, or using an INSERT statement. This makes it very easy and natural for someone with a data engineering background in pretty much any enterprise data warehouse product (SQL Server, APS PDW, Teradata, Netezza, Vertica, etc.) to gravitate toward Hive for this type of functionality.
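Both patterns look like this in HiveQL (again, hypothetical names):

```sql
-- CTAS: materialize an ad hoc result as a new table
CREATE TABLE dw.daily_clicks AS
SELECT event_date, COUNT(*) AS clicks
FROM dw.click_detail
GROUP BY event_date;

-- INSERT: reload an existing table from a query
INSERT OVERWRITE TABLE dw.daily_clicks
SELECT event_date, COUNT(*)
FROM dw.click_detail
GROUP BY event_date;
```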

However, I think that’s a short-sighted mistake.

Here’s why: when it comes to ETL, my focus is on a robust solution that ensures enterprise-level, production-quality processes that data consumers can rely on and have confidence in. Here are some of the top reasons why I believe Pig fits this role better than Hive:

  1. Hive works very well with structured data, but the whole point of moving our data warehouse to Hadoop is to take advantage of so-called “new data”, also known as unstructured and semi-structured data. Hive does provide support for complex data types, but working with them can quickly get… well, complex 🙂 given the limitations Hive imposes (lateral views, anyone?). In general, the more complex the data or transformation, the easier it is to perform in Pig than in Hive.
  2. Many of the processes I work with are pipeline-friendly: I can start with a single dataset, integrate/transform/cleanse it, write the granular details out to one table, then aggregate the same data and write it to a separate table. Pig makes this faster overall by letting you build a single data pipeline, and it minimizes the data quality issues that result from inconsistent logic between the granular and aggregate versions of a table (see the pipeline sketch after this list).
  3. Hadoop is not meant for serving data; instead, my team writes the final results of ETL to a serving layer, which includes SQL Server, MySQL, and Cassandra. Pig makes it easy to process the data once and write the exact same dataset to each destination server. This works well for both refresh and incremental patterns and, again, minimizes data inconsistencies resulting from the creation of separate ETL packages for each of these destination servers.
  4. Pig’s variable support is better than Hive’s. I can write parameterized logic, with things like run dates and paths supplied at submission time (see the parameter-substitution sketch after this list). Anyone who has written enterprise ETL understands why this is a very good thing.
  5. PigStats makes it easier to identify jobs that may have exceptions, such as jobs that write zero rows or jobs that write a different number of rows to each destination server. This makes it easier to monitor for and raise alerts on these types of conditions.
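To make points 1–3 concrete, here is a minimal Pig Latin sketch of the pattern. Everything in it (database, table, and field names) is hypothetical, and it assumes the script runs with HCatalog enabled (pig -useHCatalog):

```pig
-- Load the raw table registered in Hive/HCatalog
raw = LOAD 'dw.click_detail_raw' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- Flatten a nested bag of tags into one row per tag; in Hive this
-- kind of step typically pushes you into LATERAL VIEW territory
detail = FOREACH raw GENERATE user_id, event_date, FLATTEN(tags) AS tag;

-- Write the granular details...
STORE detail INTO 'dw.click_detail' USING org.apache.hive.hcatalog.pig.HCatStorer();

-- ...then aggregate the very same relation and write that too,
-- with no second pass over the source and no duplicated logic
grouped = GROUP detail BY (event_date, tag);
agg = FOREACH grouped GENERATE FLATTEN(group) AS (event_date, tag),
                              COUNT(detail) AS clicks;
STORE agg INTO 'dw.click_agg' USING org.apache.hive.hcatalog.pig.HCatStorer();
```

Note that HCatStorer writes into tables that already exist in Hive, which is exactly why defining the DDL up front (as above) is the first step. Fanning the same relations out to the serving layer (point 3) is just additional STORE statements using the appropriate storage function for each destination.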
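And for point 4, a sketch of Pig’s parameter substitution; the parameter names and paths are made up for illustration:

```pig
-- Invoked as: pig -p RUN_DATE=2015-06-08 daily_etl.pig
%default RUN_DATE '2015-06-08'
%declare INPUT_PATH '/data/events/$RUN_DATE'

events   = LOAD '$INPUT_PATH' USING PigStorage('\t')
           AS (user_id:chararray, amount:double);
filtered = FILTER events BY amount > 0.0;
STORE filtered INTO '/dw/events/$RUN_DATE';
```

The same script runs unchanged for a scheduled daily load or a backfill, which keeps every run on a single code path.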

With that said, I do recommend Hive as a great place to start for ad hoc and one-off analyses or for prototyping new processes. However, once you’re ready to move towards production-quality processes, I think you’d be better served standardizing on Pig for data warehouse ETL and Hive for data warehouse query access.

Your turn: what do you use for ETL in Hadoop? Do you like it or dislike it? 🙂
