My Mac Setup for Hadoop Development

I’ve recently decided to switch to a Mac. Having been such a proponent of all-things-Microsoft in the past, and having invested so much time in my dev skills on a PC, this was a pretty huge move for me. In fact, it took me a very long time to make the decision. But the more time I spent trying to figure out how to do Hadoop dev better, and faster, the clearer it became that switching to a Mac would help with these things. After only a few weeks, I’ve already found that many of the things that were very painful on a PC are exceedingly easy on a Mac, such as installing Hadoop locally.

Now, this post isn’t to convince you to switch to a Mac. A person’s OS preference is very personal, and as such, discussions can get almost as heated as religious and political discussions. 🙂 However, for those who are already considering switching to a Mac from a PC, I thought it’d be helpful to outline some of the applications I’ve installed that have improved my Hadoop dev experience.


  • Homebrew: “Installs the stuff you need that Apple didn’t.” It makes local app installs super easy; I used it to install Maven, MySQL, Python, Hadoop, Pig, Spark, & much more.
  • iTerm2: A replacement for Terminal and the successor to iTerm. I use it for connecting to Hadoop via SSH; it provides some nice features, such as tabs and status colors, which make it easier to keep track of numerous simultaneous activities in Hadoop.
  • IntelliJ IDEA: The Community Edition is an excellent free IDE. I use it for development of Pig, Hive, Spark, & Python scripts.
  • 0xDBE (EAP): A new intelligent IDE for DBAs and SQL developers. I use it for SQL Server & MySQL development. (And yes, I *do* miss SSMS, but I don’t want to have to run a VM to use it.)

My config

IntelliJ Plugins

  • Apache Pig Language Plugin
  • Python Community Edition
  • etc


Bash Profile

I also added the following code to my local & Hadoop .bashrc profiles. It changes the title of a Bash window. This isn’t specific to iTerm2, and I could have done this on my PC had I known about it at the time. So if you are using either Terminal or a PC SSH client (e.g. PuTTY, PowerShell), you may still be able to take advantage of this if your client displays window titles.
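A minimal sketch of such a snippet, assuming the standard xterm title escape sequence (the function name `set_title` is my own):

```shell
# Sketch of a Bash window-title helper. \033]0;...\007 is the xterm
# escape sequence that most terminals (iTerm2, Terminal, PuTTY) honor
# for setting the window/tab title.
set_title() {
  printf '\033]0;%s\007' "$1"
}
```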




This is an example of how you would call the code at the start of any new Bash session.
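Something along these lines, assuming a title-setting helper is defined in the .bashrc (the function name and tab label here are just examples):

```shell
# Helper defined in .bashrc (see the Bash Profile section above):
set_title() { printf '\033]0;%s\007' "$1"; }

# Label the window/tab by purpose at the start of the session:
set_title "prod Hadoop - Pig"
```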


My Dev Process
I have 3 monitors set up, which are typically configured as:

Monitor 1

  • email
  • calendar
  • web browser (Slack, JIRA, etc.)

Monitor 2

  • IntelliJ

Monitor 3

  • iTerm2, with tabs already open for
    • Pig
    • Hive
    • Hadoop Bash (HDFS, log files, etc.)
    • misc (Python, Spark, etc.)
    • local Bash (GitHub commits, etc.)

In general, I write the code in IntelliJ and copy/paste it into iTerm2. This provides nice syntax highlighting and makes it easy to check my code into GitHub when I’m done. Once I’m past the initial dev phase, I SCP the actual scripts over to the prod Hadoop shell box for scheduling. Overall, I’ve found that this approach makes iterative dev much faster.

Those are pretty much the highlights, though I’ll continue to add to this as I stumble across tweaks, hacks, and apps that make my life easier.

Hopefully for those just starting out on a Mac, you’ve found this post helpful for getting up and running with Hadoop dev. For those who have already made the switch — or who have always used a Mac — did I miss something? Is there a killer app that you love for Hadoop dev? If so, please let me know! 🙂


Data Warehousing in Hadoop

The slides from my highly-rated Hadoop Summit 2015 session are now available!

Session: Data Warehousing in Hadoop

Session Abstract

How can we take advantage of the veritable treasure trove of data stored in Hadoop to augment our traditional data warehouses? In this session, Michelle will share her experience with migrating GoDaddy’s data warehouse to Hadoop. She’ll explore how GoDaddy has adapted traditional data warehousing methodologies to work with Hadoop and will share example ETL patterns used by her team. Topics will also include how the integration of structured and unstructured data has exposed new insights, the resulting business impact, and tips for making your own Hadoop migration project more successful.

Session Slides: Slideshare – Data Warehousing in Hadoop

Hadoop Summit 2015

I’ll be honest: I didn’t know what to expect from Hadoop Summit. I’ve been surprised at the lack of overlap between the PASS community that I know so well and dearly love, and this new community of open-source aficionados, data hackers, and Ph.D. data scientists. Would this new community be interested in data warehousing, a topic traditionally — and fallaciously, in my opinion 🙂 — associated with all things BI and relational? Combine this with the fact that I’ve never even attended Hadoop Summit before, and well… this was easily the most nervous I’ve been before a presentation since my first major presentation in 2009. However, all my fears were for naught… the session was packed — clearly, folks are interested in this topic! And judging from the quantity of conversations I had with people afterwards — many of whom are from companies you’d readily recognize, too — this is a topic that is only going to grow.

For those who were unable to attend but are interested in this topic, I have good news! The session recording should also be available online within the next couple of weeks. I’ll post the link once it becomes available. 🙂

Lastly, I typically find the conversations I have with session attendees after presentations to be my favorite part of conferences, and this was no exception. Thank you to everyone who attended and reached out to me afterwards! I met some great people, and I regret not doing a better job of exchanging contact information amidst the chaos of the event. If we connected at Hadoop Summit, let’s connect on LinkedIn too. 🙂


Why I prefer Pig for Big Data Warehouse ETL

First, a brief note about this blog. Shortly after I announced this blog, an… event?… was announced, and it seemed prudent to avoid blogging while that event was underway. However, the quiet period is now over and I have several months of blog posts in the queue! So let the blogging commence! (again) 🙂

Last week, I had the pleasure of speaking at Hadoop Summit 2015 on Data Warehousing in Hadoop. There was a lot of interest in this topic… the session was packed, and I received a lot of great questions both during and after the session. One question that kept popping up was why I prefer Pig over Hive for performing Data Warehouse ETL in Hadoop. The question itself wasn’t as surprising as the context it was raised in, i.e. “But I thought Hive was for data warehousing?” These questions were largely from people who were investigating and/or beginning their own data warehouse migration or enrichment project. After a few of these conversations, I came to realize that this was a result of the excellent marketing that Hive has done in billing itself as “data warehouse software.”

Given the confusion, please allow me to clarify my position on this topic: I think Hive and Pig both have a role in a Hadoop data warehouse. The purpose of this post is to explain my opinion 🙂 of the role each technology plays.

I rely on Hive for two primary purposes: definition/exposure of DDL via HCatalog, and ad hoc querying. I can create an awesome data warehouse, but if I don’t expose it in Hive via HCatalog, then data consumers won’t know what’s available to query. Commands such as show databases and show tables wouldn’t return information about the rich and valuable datasets my team produces. So I think it’s actually extremely important to define DDL in Hive as the first step to producing new datasets.
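For instance, that first step might look something like this minimal HiveQL sketch (the database, table, and column names are all made up):

```sql
-- Hypothetical HiveQL DDL: define the table first so it's exposed
-- via HCatalog and shows up in show databases / show tables
CREATE DATABASE IF NOT EXISTS dw;

CREATE EXTERNAL TABLE IF NOT EXISTS dw.daily_sales (
  product_id INT,
  revenue    DOUBLE
)
PARTITIONED BY (sale_date STRING)
STORED AS ORC
LOCATION '/data/dw/daily_sales';
```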

Also, Hive has done a decent job of ensuring that the core query syntax & functionality from SQL has been ported into Hive. Thus, anyone who has a basic understanding of SQL can easily sit down and start to retrieve data from Hadoop. The importance of this cannot be overstated… quite simply, it has lowered the barrier to entry and has provided analysts with an easier transition from querying legacy DWs to querying Hadoop using HiveQL.

Hive also makes it easy to materialize the results of queries into tables. You can do this either through CTAS (CREATE TABLE AS SELECT) statements, which are useful for storing the results of ad hoc queries, or through an INSERT statement. This makes it very easy and natural for someone with a data engineering background in pretty much any enterprise data warehouse platform (SQL Server, APS PDW, Teradata, Netezza, Vertica, etc.) to gravitate toward Hive for this type of functionality.
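Both patterns in minimal HiveQL form (the tables here are hypothetical):

```sql
-- CTAS: materialize an ad hoc query's results as a new table
CREATE TABLE top_products AS
SELECT product_id, SUM(revenue) AS total_revenue
FROM daily_sales
GROUP BY product_id;

-- INSERT: write the same query's results into an existing table
INSERT OVERWRITE TABLE product_summary
SELECT product_id, SUM(revenue) AS total_revenue
FROM daily_sales
GROUP BY product_id;
```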

However, I think that’s a short-sighted mistake.

Here’s why: when it comes to ETL, my focus is on a robust solution that ensures enterprise-level, production-quality processes that data consumers can rely on and have confidence in. Here are some of the top reasons why I believe Pig fits this role better than Hive:

  1. Hive works very well with structured data, but the whole point of moving our data warehouse to Hadoop is to take advantage of so-called “new data”, also known as unstructured and semi-structured data. Hive does provide support for complex data types, but it can quickly get… well, complex 🙂 when trying to work with this data and the limitations it imposes (lateral views, anyone?). In general, the more complex the data or transformation, the easier it seems to be to perform it in Pig than Hive.
  2. Many of the processes I work with are pipeline-friendly, meaning I can start with a single dataset; integrate, transform, and cleanse it; write out the granular details to a table; then aggregate the same data and write it to a separate table. Pig makes this faster overall by allowing you to build a single data pipeline, and it minimizes data quality issues resulting from inconsistent logic between the granular and aggregate tables.
  3. Hadoop is not meant for serving data; instead, my team writes the final results of ETL to a serving layer, which includes SQL Server, MySQL, and Cassandra. Pig makes it easy to process the data once and write the exact same dataset to each destination server. This works well for both refresh and incremental patterns and, again, minimizes data inconsistencies resulting from the creation of separate ETL packages for each of these destination servers.
  4. Pig’s variable support is better than Hive’s. With parameter substitution (%default, %declare, and -param on the command line), I can parameterize dates, input paths, and destination servers in a single script.

    Anyone who has written enterprise ETL understands why this is a very good thing.
  5. PigStats makes it easier to identify jobs that may have exceptions, such as jobs that write zero rows or jobs that write a different number of rows to each destination server. This makes it easier to monitor for and raise alerts on these types of conditions.
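As a sketch of how points 2 and 4 play out in practice, a single parameterized Pig script can feed both the granular and aggregate tables. All the aliases, paths, and schema below are made up for illustration:

```pig
-- Hypothetical sketch; run with: pig -param RUN_DATE=2015-06-02 daily_sales.pig
%default RUN_DATE '2015-06-01';

raw     = LOAD '/data/raw/sales/$RUN_DATE' USING PigStorage('\t')
          AS (product_id:int, revenue:double);
cleaned = FILTER raw BY product_id IS NOT NULL;

-- Write the granular details...
STORE cleaned INTO '/data/dw/sales_detail/$RUN_DATE';

-- ...then aggregate the same relation and write it separately, so both
-- outputs share identical upstream logic.
grouped = GROUP cleaned BY product_id;
agg     = FOREACH grouped GENERATE group AS product_id,
              SUM(cleaned.revenue) AS total_revenue;
STORE agg INTO '/data/dw/sales_summary/$RUN_DATE';
```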

With that said, I do recommend Hive as a great place to start for ad hoc and one-off analyses or for prototyping new processes. However, once you’re ready to move towards production-quality processes, I think you’d be better served standardizing on Pig for data warehouse ETL and Hive for data warehouse query access.

Your turn: what do you use for ETL in Hadoop? Do you like it or dislike it? 🙂
