Moving data around in Hadoop is easy with the Hue interface, but learning to do it from the command line is harder.
In my Pluralsight course HDFS Getting Started you can get up to speed on moving data around from the Hadoop command line in under 3 hours. The first two modules walk through working with Hadoop from the command line using the hdfs dfs commands.
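Here is a quick taste of the kind of hdfs dfs commands those modules cover (the paths and file names below are just examples):

    hdfs dfs -mkdir /user/demo/sales          # create a directory in HDFS
    hdfs dfs -put sales.csv /user/demo/sales  # copy a local file into HDFS
    hdfs dfs -ls /user/demo/sales             # list the directory's contents
    hdfs dfs -cat /user/demo/sales/sales.csv  # print the file to the console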
Not Just Hadoop from the Command Line
Don’t just stop at learning Hadoop from the command line; the other Hadoop frameworks deserve attention as well. The last few modules of the course focus on using the following Hadoop frameworks from the command line.
Hive & Pig
Two of my favorite Hadoop framework tools. Both allow administrators to write MapReduce jobs without having to write Java. After learning Hadoop itself, Pig and Hive are the two tools EVERY Hadoop admin should know. Let’s break down each one.
Hive fills the space for structured data in Hadoop and acts much like a database. It uses a syntax called HiveQL that is very similar to SQL. That closeness to SQL was intentional, because most analysts know SQL, not Java.
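A rough sketch of what HiveQL looks like (the table and column names here are made up for illustration):

    -- define a table over structured data in Hadoop
    CREATE TABLE sales (id INT, product STRING, amount DOUBLE)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

    -- query it just like a SQL database
    SELECT product, SUM(amount) FROM sales GROUP BY product;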
Pig’s motto is that it eats anything, which means it processes unstructured, semi-structured, and structured data. Pig doesn’t care how the data is structured; it can process it. Pig uses a syntax called Pig Latin (insert Pig Latin joke here), which is less SQL-like than HiveQL. Pig Latin is also a procedural, or step-by-step, programming language.
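A small Pig Latin sketch shows that step-by-step style (the file path and field names are placeholders):

    -- load a delimited file; Pig doesn't need the data structured up front
    sales = LOAD '/user/demo/sales/sales.csv' USING PigStorage(',')
            AS (id:int, product:chararray, amount:double);
    -- each statement is one step, which is what makes Pig Latin procedural
    grouped = GROUP sales BY product;
    totals = FOREACH grouped GENERATE group, SUM(sales.amount);
    DUMP totals;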
HBase
Learning to interact with HBase from the command line is a hot skill for Hadoop admins. HBase is used when you need real-time reads and writes in Hadoop. Think very large data sets, like billions of rows and columns.
Configuring and setting up HBase is complicated, but in the course you will learn to set up a development environment to start using HBase. Configuration changes in HBase are all done from the command line.
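Once HBase is running, the HBase shell is where you will spend most of your time. A minimal sketch (the table, column family, and row names are made up):

    # start the interactive HBase shell
    hbase shell

    # inside the shell: create a table, write a row, then read it back
    create 'users', 'info'
    put 'users', 'row1', 'info:name', 'Thomas'
    get 'users', 'row1'
    scan 'users'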
Sqoop
Hadoop is about unstructured data, but what about data that lives in relational databases? Sqoop allows Hadoop administrators to import and export data from traditional database systems. Offloading structured data from data warehouses is one of Hadoop’s biggest use cases. Hadoop lets DBAs offload frozen data into Hadoop for a tenth of the cost. That frozen, or unused, data can then be analyzed in Hadoop to bring about new insights. In my HDFS Getting Started course I walk through using Sqoop to import and export data in HDFS.
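A sketch of what a Sqoop import and export might look like (the JDBC connection string, table names, and directories are placeholders):

    # import a table from a relational database into HDFS
    sqoop import \
      --connect jdbc:mysql://dbserver/sales \
      --username dbuser -P \
      --table orders \
      --target-dir /user/demo/orders

    # export processed results back out to the database
    sqoop export \
      --connect jdbc:mysql://dbserver/sales \
      --username dbuser -P \
      --table order_totals \
      --export-dir /user/demo/order_totals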