Example HiveRC File to Configure Hive Preferences

Anyone who regularly works with query languages invariably develops personal preferences. For T-SQL, it may be a preference for which metadata is included with the results of a query, how quoted identifiers are handled, or what the default behavior should be for transactions. These types of settings can typically be configured at the session level, and Hive is no exception. In fact, Hive provides users with an impressive number of configurable session properties. Honestly, you’ll probably never need to change the majority of these settings, and if/when you do, it’ll most likely apply to a specific Hive script (e.g., to improve performance). However, there are a handful of Hive settings that you may wish to always enable if they’re not already defaulted server-wide, such as displaying column headers. One option is to set these manually at the start of each session using the SET command. But this can quickly get tedious if you have more than one or two settings to change. A better option in those scenarios, and the topic of this blog post, is to use a HiveRC file to configure your personal preferences for Hive’s default behavior.
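For example, to enable column headers manually, you’d run the following at the start of every Hive session:

    set hive.cli.print.header=true;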

For those of you not familiar with the concept, Linux commonly uses RC files — which I believe stands for “runtime configuration,” but don’t quote me on that 🙂 — for defining preferences, and various applications support these, typically in the format of .<app>rc. These will usually live in a user’s home directory, and some examples include .bashrc, .pythonrc, and .hiverc.

Now that we have a little context, let’s walk through how to create your personal .hiverc file. Note that all of these steps take place on the same server you use for connecting to Hive.
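As a minimal sketch, assuming Vim as your editor (any text editor will do), you’d start by creating the file in your home directory:

    # from your bash prompt, create/open the file
    cd ~
    vim .hiverc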


Now, from inside Vim, do the following:
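Exactly which settings you include is a matter of personal preference. As a sketch, your .hiverc might contain session settings such as these (the property names are standard Hive properties; the selection is just an example):

    -- display column headers in query results
    set hive.cli.print.header=true;
    -- show the current database name in the hive prompt
    set hive.cli.print.current.db=true;

When you’re done, press Esc, then type :wq to save the file and exit Vim.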

You should be back at your bash prompt. Now run these commands to verify everything is working as expected.
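As a sketch, assuming you have a table you can query (some_table below is just a placeholder):

    # start a new Hive CLI session; ~/.hiverc is applied automatically
    hive

Then, from the hive prompt:

    -- the results should now include column headers
    select * from some_table limit 5;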

That’s all there is to it! Not too hard, huh? But to make things even easier, I’ve posted an example of my personal HiveRC file on my Hadoopsie GitHub repo.


That’s all for now, you awesome nerds. 🙂


Hadoop Summit 2015 Sessions

Recordings from Hadoop Summit 2015 sessions are now available! The conference organizers have made session content readily available in several ways.

To be clear, this content isn’t just available to conference attendees; this is freely available to anyone who’s interested in it. So take a few minutes to learn about what’s new in the Hadoop community and what the tech giants are doing with Hadoop.

Shameless Plug: if you’re wondering what the frilly heck to do with all your data, check out my session on Data Warehousing in Hadoop 🙂


My Mac Setup for Hadoop Development

I’ve recently decided to switch to a Mac. Having been such a proponent of all-things-Microsoft in the past, and having invested so much time in my dev skills on a PC, this was a pretty huge move for me. In fact, it took me a very long time to make the decision. But the more time I spent trying to figure out how to do Hadoop dev better, and faster, the clearer it became that switching to a Mac would help. After only a few weeks, I’ve already found that many of the things that were very painful on a PC are exceedingly easy on a Mac, such as installing Hadoop locally.

Now, this post isn’t to convince you to switch to a Mac. A person’s OS preference is very personal, and as such, discussions can get almost as heated as religious and political discussions. 🙂 However, for those who are already considering switching to a Mac from a PC, I thought it’d be helpful to outline some of the applications I’ve installed that have improved my Hadoop dev experience.

Applications

Homebrew
  • What is it? Homebrew installs the stuff you need that Apple didn’t.
  • Why do I use it? It makes local app installs super easy. I used it to install Maven, MySQL, Python, Hadoop, Pig, Spark, and much more.
  • Where to get it? http://brew.sh/

iTerm2
  • What is it? iTerm2 is a replacement for Terminal and the successor to iTerm.
  • Why do I use it? For connecting to Hadoop via SSH. It provides some nice features, such as tabs and status colors, which make it easier to keep track of numerous simultaneous activities in Hadoop.
  • Where to get it? https://www.iterm2.com/

IntelliJ IDEA
  • What is it? The Community Edition is an excellent free IDE.
  • Why do I use it? For development of Pig, Hive, Spark, and Python scripts.
  • Where to get it? https://www.jetbrains.com/idea/

0xDBE (EAP)
  • What is it? A new intelligent IDE for DBAs and SQL developers.
  • Why do I use it? For SQL Server and MySQL development. (And yes, I *do* miss SSMS, but I don’t want to have to run a VM to use it.)
  • Where to get it? https://www.jetbrains.com/dbe/
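As a quick illustration of the Homebrew workflow (formula names can change over time, so it’s worth searching first):

    # find available formulae
    brew search hadoop
    # install one
    brew install hadoop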

My Config

IntelliJ Plugins

  • Apache Pig Language Plugin
  • Python Community Edition
  • etc

iTerm2

Bash Profile

I also added the following code to my local and Hadoop .bashrc profiles. It changes the title of a Bash window. This isn’t specific to iTerm2, and I could have done it on my PC if I’d known about it at the time. So if you’re using Terminal or a PC SSH client (e.g., PuTTY, PowerShell), you may still be able to take advantage of this, provided your client displays window titles.
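A minimal sketch of such a function, using the standard xterm title escape sequence:

    # set the terminal window title to whatever is passed as the first argument
    function title {
        echo -ne "\033]0;$1\007"
    }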


This is an example of how you might call the code at the start of any new Bash session:
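    # e.g., added to ~/.bashrc or run manually (the title text is just an example)
    title "Hive - prod"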


My Dev Process
I have three monitors set up, typically configured as follows:

Monitor 1

  • email
  • calendar
  • web browser (Slack, JIRA, etc.)

Monitor 2

  • IntelliJ

Monitor 3

  • iTerm2, with tabs already open for
    • Pig
    • Hive
    • Hadoop Bash (HDFS, log files, etc.)
    • misc (Python, Spark, etc.)
    • local Bash (GitHub commits, etc.)

In general, I write the code in IntelliJ and copy/paste it into iTerm2. This provides nice syntax highlighting and makes it easy to check my code into GitHub when I’m done. Once I’m past the initial dev phase, I SCP the actual scripts over to the prod Hadoop shell box for scheduling. Overall, I’ve found that this approach makes iterative dev much faster.
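For reference, that copy step is just standard scp (the script name, user, host, and path below are all placeholders):

    # push the finished script to the prod shell box
    scp my_script.hql user@prod-shell-box:~/scripts/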

That’s pretty much it for the highlights, though I’ll continue to add to this as I stumble across tweaks, hacks, and apps that make my life easier.

For those just starting out on a Mac, I hope this post helps you get up and running with Hadoop dev. For those who have already made the switch — or who have always used a Mac — did I miss something? Is there a killer app that you love for Hadoop dev? If so, please let me know! 🙂


Data Warehousing in Hadoop

The slides from my highly-rated Hadoop Summit 2015 session are now available!

Session: Data Warehousing in Hadoop

Session Abstract

How can we take advantage of the veritable treasure trove of data stored in Hadoop to augment our traditional data warehouses? In this session, Michelle will share her experience with migrating GoDaddy’s data warehouse to Hadoop. She’ll explore how GoDaddy has adapted traditional data warehousing methodologies to work with Hadoop and will share example ETL patterns used by her team. Topics will also include how the integration of structured and unstructured data has exposed new insights, the resulting business impact, and tips for making your own Hadoop migration project more successful.

Session Slides: Slideshare – Data Warehousing in Hadoop

Hadoop Summit 2015

I’ll be honest: I didn’t know what to expect from Hadoop Summit. I’ve been surprised at the lack of overlap between the PASS community that I know so well and dearly love, and this new community of open-source aficionados, data hackers, and Ph.D. data scientists. Would this new community be interested in data warehousing, a topic traditionally — and fallaciously, in my opinion 🙂 — associated with all things BI and relational? Combine this with the fact that I’ve never even attended Hadoop Summit before, and well… this was easily the most nervous I’ve been before a presentation since my first major presentation in 2009. However, all my fears were for naught… the session was packed — clearly, folks are interested in this topic! And judging from the quantity of conversations I had with people afterwards — many of whom are from companies you’d readily recognize, too — this is a topic that is only going to grow.

For those who were unable to attend but are interested in this topic, I have good news! The session recording should also be available online within the next couple of weeks. I’ll post the link once it becomes available. 🙂

Lastly, I typically find the conversations I have with session attendees after presentations to be my favorite part of conferences, and this was no exception. Thank you to everyone who attended and reached out to me afterwards! I met some great people, and I regret not doing a better job of exchanging contact information amidst the chaos of the event. If we connected at Hadoop Summit, let’s connect on LinkedIn too. 🙂


Is Hadoop better than SQL Server?

Over the past year, I’ve switched my focus from SQL Server and Teradata to Hadoop. As someone who has spent the majority of my professional career focused on SQL Server, and who has been recognized as a Microsoft Most Valuable Professional (MVP) in SQL Server for 4 consecutive years, it comes as no surprise that I often get asked:

“Why are you switching to Hadoop? Is it better than SQL Server?”

I’ll save you the suspense of a long post and answer the second question first: No, it’s not. 

SQL Server is Still Relevant
Here’s why. SQL Server does what it does *extremely* well. I would not hesitate to suggest SQL Server in numerous scenarios, such as the database backend for an OLTP application, a data store for small-to-medium-sized data marts or data warehouses, or an OLAP solution for building and serving cubes. Honestly, with few exceptions, it remains my go-to solution over MySQL and Oracle.

Now that we’ve cleared that up, let’s go back to the first question. If SQL Server is still a valid and effective solution, why did I switch my focus to Hadoop?

Excellent question, dear reader! I’m glad you asked. 🙂

Before I get to the reason behind my personal decision, let’s discuss arguably the biggest challenge we face in the data industry.

Yes, Data Really Is Exploding
We’re in the midst of a so-called Data Explosion. You’ve probably heard about this… it’s one of the few technical topics that has actually made it into mainstream media. But I still think it’s important to understand just how quickly data is growing.

Every year, EMC sponsors a study called The Digital Universe, which “is the only study to quantify and forecast the amount of data produced annually.” I’ve reviewed each of their studies and taken the liberty of preparing the following graphic* based on past performance and future predictions. Also worth noting: EMC historically tends to be conservative in its data growth estimates.

Data Growth Rates – EMC’s The Digital Universe

* Feel free to borrow this graphic with credit to: Michelle Ufford & EMC’s The Digital Universe

Take a moment and just really absorb this graphic. They say a picture is worth a thousand words. My hope is that this picture explains why the concept of Big Data is so important to all data professionals. DBAs, ETL developers, data warehouse engineers, BI analysts, and more are affected by the fact that data is growing at an alarming rate, and the majority of that data growth is coming in the form of unstructured and semi-structured data.

Throughout my career, I have been focused on using data to do really cool things for the business. I have built systems to personalize marketing offers, predict customer behaviors, and improve the customer experience in our applications. There is no doubt in my mind that Hadoop is absolutely critical to the ability of an enterprise to perform these types of activities.

The Bottom Line
SQL Server isn’t going away. Arguably, the most valuable raw data in an enterprise, such as inventory, customer information, and order data, will still be managed in SQL Server databases.

So again: why did I make the decision to focus on Hadoop over the past year?

I once had the pleasure to work for a serial entrepreneur. One day over lunch, he gave me a piece of advice that resonated with me and would come to influence my whole career: “Michelle, to be successful in whatever you do, you need to find the point where your heart and the money intersect.”

My heart is in data, the money is in the ability to effectively consume data, and Hadoop is where they intersect.
