The biggest thing you need to know about Hadoop is that it isn’t Hadoop anymore.
Between Cloudera sometimes swapping out HDFS for Kudu while declaring Spark the center of its universe (thus replacing MapReduce everywhere it is found) and Hortonworks joining the Spark party, the only item you can be sure of in a “Hadoop” cluster is YARN. Oh, but Databricks, aka the Spark people, prefer Mesos over YARN -- and by the way, Spark doesn’t require HDFS.
Yet distributed filesystems are still useful. Business intelligence is a great use case for Cloudera’s Impala, and Kudu, a distributed columnar store, is optimized for it. Spark is great for many tasks, but sometimes you need an MPP (massively parallel processing) solution like Impala to do the trick -- and Hive remains a useful file-to-table management system. Even when you’re not using Hadoop because you’re focused on in-memory, real-time analytics with Spark, you may still end up using pieces of Hadoop here and there.
By no means is Hadoop dead, although I’m sure that's what the next Gartner piece will say. But by no means is it only Hadoop anymore.
What in this new big data Hadoopy/Sparky world do you need to know now? I covered this topic last year, but there's so much new ground, I'm pretty much starting from scratch.
1. Spark

Spark is as fast as you've heard it is -- and, more important, its API is much easier to use and requires less code than previous distributed computing paradigms. With IBM promising 1 million new Spark developers and a boatload of money for the project, Cloudera declaring Spark the center of everything we know to be good as part of its One Platform initiative, and Hortonworks giving its full support, we can safely say the industry has crowned its Tech Miss Universe (hopefully getting it right this time).
Economics is also driving Spark's ascendance. Once it was costly to keep working sets in memory, but with cloud computing and increased computing elasticity, the number of workloads that can’t be loaded into memory (at least on a distributed computing cluster) is diminishing. Again, we're not talking about all your data, but the subset you need in order to calculate a result.
Spark is still rough around the edges -- we’ve really seen this when working with it in a production environment -- but the warts are worth it. It really is that much faster and altogether better.
The irony is that the loudest buzz around Spark relates to streaming, which is Spark's weakest point. Cloudera had that deficiency in mind when it announced its intention to make Spark streaming work for 80 percent of use cases. Nonetheless, you may still need to explore alternatives for subsecond or high-volume data ingestion (as opposed to analytics).
Spark isn’t only obviating the need for MapReduce and Tez; it may also obviate tools like Pig. Moreover, Spark’s RDD/DataFrames APIs aren’t bad ways to do ETL and other data transformations. Meanwhile, Tableau and other data visualization vendors have announced their intent to support Spark directly.
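The "less code" claim is easiest to see with word count, distributed computing's hello world: a classic MapReduce implementation runs dozens of lines of Java, while in Spark it's a handful of chained RDD calls. The map/shuffle/reduce shape of that chain can be sketched in plain Python -- no Spark required, and `word_count` is my name for it, not anyone's API:

```python
from itertools import groupby

def word_count(lines):
    # "map": emit (word, 1) pairs, as a flatMap/map would in Spark
    pairs = [(word.lower(), 1) for line in lines for word in line.split()]
    # "shuffle": bring equal keys together -- the step classic MapReduce
    # spills to disk and Spark keeps in memory where it can
    pairs.sort(key=lambda kv: kv[0])
    # "reduce": sum the counts per key, as reduceByKey would
    return {key: sum(n for _, n in group)
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

print(word_count(["to be or not to be"]))
# {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

In actual Spark the same logic is one chained expression over an RDD; the point is that the framework, not your code, handles distributing each stage.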
2. Hive

Hive lets you run SQL queries against text files or structured files. Those usually live on HDFS when you use Hive, which catalogs the files and exposes them as if they were tables. Your favorite SQL tool can connect via JDBC or ODBC to Hive.
In short, Hive is a boring, slow, useful tool. By default, it converts your SQL into MapReduce jobs. You can switch it to use the DAG-based Tez, which is much faster. You can also switch it to use Spark, but the word "alpha" doesn’t really capture the experience.
You need to know Hive because so many Hadoop projects begin with “let’s dump the data somewhere” and thereafter “oh by the way, we want to look at it in a [favorite SQL charting tool].” Hive is the most straightforward way to do that. You may need other tools to do that performantly (such as Phoenix or Impala).
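Conceptually, Hive is a catalog (the metastore) that maps table names and column definitions onto files, plus an engine that turns your SQL into jobs over those files. A deliberately toy sketch of just the catalog idea, with made-up table and column names -- not Hive's API:

```python
import csv
import io

# Toy model of Hive's core idea: a "metastore" maps a table name and
# schema onto plain files, so SQL tools can treat text as tables.
# (Illustrative only -- real Hive keeps this catalog in a metastore
# service and reads the files from HDFS.)
metastore = {
    "events": {
        "columns": ["user", "action"],
        "data": "alice,login\nbob,logout\n",  # stands in for a file on HDFS
    }
}

def scan(table):
    meta = metastore[table]
    rows = csv.reader(io.StringIO(meta["data"]))
    return [dict(zip(meta["columns"], row)) for row in rows]

print(scan("events"))
```

Everything else -- converting SQL to MapReduce, Tez, or Spark jobs -- is the execution engine layered on top of that mapping.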
3. Kerberos

I loathe Kerberos, and it isn’t all that fond of me, either. Unfortunately, it's the only fully implemented authentication for Hadoop. You can use tools like Ranger or Sentry to reduce the pain, but you’ll still probably integrate with Active Directory via Kerberos.
If you don’t use Ranger or Sentry, then each little bit of your big data platform will do its own authentication and authorization. There will be no central control, and each component will have its own weird way of looking at the world.
4. Ranger/Sentry

But which one to choose: Ranger or Sentry? Well, Ranger seems a bit ahead and more complete at the moment, but it's Hortonworks' baby. Sentry is Cloudera’s baby. Each supports the part of the Hadoop stack that its vendor supports. If you’re not planning to get support from Cloudera or Hortonworks, then I’d say Ranger is the better offering at the moment. However, Cloudera’s head start on Spark and the big plans for security the company announced as part of its One Platform strategy will certainly pull Sentry ahead. (Frankly, if Apache were functioning properly, it would pressure both vendors to work together on one offering.)
5. HBase

HBase is a perfectly acceptable column family data store. It's also built into your favorite Hadoop distributions, it's supported by Ambari, and it connects nicely with Hive. If you add Phoenix, you can even use your favorite business intelligence tool to query HBase as if it were a SQL database. If you’re ingesting a stream of data via Kafka and Spark or Storm, then HBase is a reasonable landing place for that data to persist, at least until you do something else with it.
There are good reasons to use alternatives like Cassandra. But if you use Hadoop you already have HBase -- if you've purchased support from a Hadoop vendor, you already have HBase support -- so it's a good place to start. After all, it's a low-latency, persistent datastore that can provide a reasonable amount of ACID support. If Hive and Impala underwhelm you with their SQL performance, you'll find HBase and Phoenix faster for some datasets.
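HBase's data model is worth internalizing before you choose it: values live under row key, then column family, then column qualifier, and lookups by row key are fast. A toy in-memory sketch of that shape (my own `put`/`get` names, not the HBase client API):

```python
# Toy sketch of HBase's column-family data model: each value sits
# under row key -> column family -> qualifier. Real HBase also keeps
# rows sorted by key and versions every cell by timestamp.
table = {}

def put(row_key, family, qualifier, value):
    table.setdefault(row_key, {}).setdefault(family, {})[qualifier] = value

def get(row_key, family, qualifier):
    return table[row_key][family][qualifier]

put("user#42", "profile", "name", "Ada")
put("user#42", "profile", "email", "ada@example.com")
print(get("user#42", "profile", "name"))  # Ada
```

Phoenix's job, in these terms, is to map SQL tables and queries onto exactly this key/family/qualifier layout so your BI tool never sees it.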
6. Impala

Teradata and Netezza use MPP to process SQL queries across distributed storage. Impala is essentially an MPP solution built on HDFS.
The biggest difference between Impala and Hive is that, when you connect your favorite BI tool, “normal stuff” will run in seconds rather than in minutes. Impala can replace Teradata and Netezza for many applications. Other structures may be necessary for different types of queries or analysis (for those, look toward stuff like Kylin and Phoenix). But generally, Impala lets you escape your hated proprietary MPP system, use one platform for structured and unstructured data analysis, and even deploy to the cloud.
There's plenty of overlap with using straight Hive, but Impala and Hive operate in different ways and have different sweet spots. Impala is supported by Cloudera and not Hortonworks, which supports Phoenix instead. While operating Impala is less complex, you can achieve some of the same goals with Phoenix, toward which Cloudera is now moving.
7. HDFS (Hadoop Distributed File System)
Thanks to the rise of Spark and ongoing migration to the cloud for so-called big data projects, HDFS is less fundamental than it was last year. But it's still the default and one of the more conceptually simple implementations of a distributed filesystem.
8. Kafka

Distributed messaging such as that offered by Kafka will make client-server tools like ActiveMQ completely obsolete. Kafka is used in many -- if not most -- streaming projects. It's also really simple. If you’ve used other messaging tools, it may feel a little primitive, but in the majority of cases, you don't need the granular routing options MQ-type solutions offer anyway.
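Part of why Kafka feels "primitive" next to MQ-style brokers is its core abstraction: a topic is just an append-only log, and each consumer tracks its own offset rather than having the broker route and delete messages. A toy sketch of that model, simplified to a single partition (`produce`/`consume` are my names, not Kafka's client API):

```python
from collections import defaultdict

# Toy sketch of Kafka's abstraction: a topic is an append-only log.
# Messages aren't routed or deleted on delivery; each consumer simply
# remembers how far into the log it has read.
log = defaultdict(list)

def produce(topic, message):
    log[topic].append(message)
    return len(log[topic]) - 1  # the message's offset

def consume(topic, offset):
    # read everything from this consumer's saved offset onward
    return log[topic][offset:]

produce("clicks", "page=/home")
produce("clicks", "page=/pricing")
print(consume("clicks", 0))  # ['page=/home', 'page=/pricing']
print(consume("clicks", 1))  # ['page=/pricing']
```

That design is also why Kafka replays so easily: rewinding a consumer is just resetting an offset.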
9. Storm/Apex

Spark isn't so great at streaming, but what about Storm? It's faster, has lower latency, and uses less memory -- which is important when ingesting streaming data at scale. On the other hand, Storm’s management tools are doggy-doo and its API isn’t as nice as Spark’s. Apex is newer and better, but it isn't widely deployed yet. I’d still default to Spark for everything that doesn’t need to be subsecond.
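The latency gap comes down to processing models: Spark Streaming collects events into micro-batches and processes each batch as a small job, while Storm hands every event to the topology as it arrives. Sketched abstractly in plain Python (function names are mine, purely illustrative):

```python
# Toy contrast of the two streaming models. Micro-batching amortizes
# per-job overhead but adds at least one batch interval of latency;
# per-event processing handles each message the moment it arrives.
def micro_batches(stream, batch_size):
    # Spark Streaming-style: chop the stream into small batches
    return [stream[i:i + batch_size] for i in range(0, len(stream), batch_size)]

def per_event(stream, handler):
    # Storm-style: invoke the handler on every event immediately
    return [handler(event) for event in stream]

events = [1, 2, 3, 4, 5, 6, 7]
print(micro_batches(events, 3))  # [[1, 2, 3], [4, 5, 6], [7]]
print(per_event(events, lambda e: e * 2))
```

If your batch interval is, say, one second, no micro-batched result can arrive faster than that -- which is exactly the subsecond boundary where Storm still wins.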
10. Ambari/Cloudera Manager
I’ve seen people try to monitor and manage Hadoop clusters without Ambari or Cloudera Manager. It isn’t pretty. These solutions have taken management and monitoring of Hadoop environments a long way in a relatively short period of time. Compare this to the NoSQL space, which is nowhere near as advanced in this department -- despite simpler software with far fewer components -- and you gotta wonder where those NoSQL guys spent their massive funding.
11. Pig

I suspect this is the last year that Pig makes my list. Spark is much faster and can be used for a lot of the same ETL cases -- and Pig Latin (yes, that's what they call the language you write with Pig) is a bit bizarre and often frustrating. As you might imagine, running Pig on top of Spark entails hard work.
Theoretically, people doing SQL on Hive can move to Pig in the same way that they used to go from SQL to PL/SQL, but in truth, Pig isn’t as easy as PL/SQL. There might be room for something between plain old SQL and full-on Spark, but I don’t think Pig is it. Coming from the other direction is Apache Nifi, which might let you do some of the same ETL with less or no code. We already use Kettle to reduce the amount of ETL code we write, which is pretty darn nice.
12. YARN/Mesos

YARN and Mesos enable you to queue and schedule jobs across the cluster. Everyone is experimenting with various approaches: Spark to YARN, Spark to Mesos, Spark to YARN to Mesos, and so on. But know that Spark’s Standalone mode isn’t very realistic for busy multijob, multiuser clusters. If you’re not using Spark exclusively and are still running Hadoop batches, go with YARN for now.
13. Nifi/Kettle

Nifi would have had to try hard not to be an improvement over Oozie. Various vendors are calling Nifi the answer to the Internet of things, but that's marketing noise. In truth, Nifi is like Spring Integration for Hadoop: you pipe data through transforms and queues, then land it somewhere on a schedule -- or pull from various sources based on a trigger. Add a pretty GUI and that’s Nifi. Its power is that someone wrote an awful lot of connectors for it.
If you need this today but want something a bit more mature, use Pentaho’s Kettle (along with other associated kitchenware, such as Spoon). These tools have been working in production for a while. We’ve used them. They’re pretty nice, honestly.
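The dataflow pattern both tools implement -- pull from a source, run records through a chain of transforms, land them somewhere -- reduces in its simplest form to function composition over a stream of records. A toy sketch of just that core (the `pipeline` helper is mine, not either tool's API):

```python
# Toy sketch of the source -> transforms -> sink dataflow that Nifi
# and Kettle wrap in GUIs, connectors, queues, and scheduling.
def pipeline(records, *transforms):
    for transform in transforms:
        records = [transform(record) for record in records]
    return records

# e.g., clean up raw user names before landing them in a store
landed = pipeline(["  Alice ", "BOB"], str.strip, str.lower)
print(landed)  # ['alice', 'bob']
```

What you're paying Nifi or Kettle for is everything around that loop: the prebuilt connectors, back-pressure via queues, scheduling, and a GUI for wiring the transforms together.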
14. Knox

While Knox is perfectly adequate edge protection, all it does is provide a reverse proxy, written in Java, with authentication. It isn't very well written; for one thing, it obscures errors. For another, although it's built on URL rewriting, merely adding a new service behind it requires a whole Java implementation.
You need to know Knox because if someone wants edge protection, this is the “approved” means of providing it. Frankly, a small modification or add-on to HTTPD’s mod_proxy would have been more functional and offered a broader set of authentication options.
15. Scala and Python

Technically, you can use Java 8 for Spark or Hadoop jobs. But in reality, Java 8 support is an afterthought, there so salespeople can tell big companies they can still use their Java developers. The truth is that Java 8, used right, is a new language -- and in that context, I consider it a bad knockoff of Scala.
For Spark in particular, Java trails Scala and possibly even Python. I don’t really care for Python myself, but it's reasonably well supported by Spark and other tools, and its robust libraries make it the language of choice for many data science, machine learning, and statistical applications. Scala is your first choice for Spark and, increasingly, other toolsets; for the more “mathy” stuff, Python or R may serve you better.
Remember: If you write jobs in Java 7, you’re silly. If you use Java 8, it's because someone lied to your boss.
16. Zeppelin/Databricks

The notebook concept most of us first encountered in IPython Notebook is a hit: write some SQL or Spark code along with some markdown describing it, add a graph, execute on the fly, then save it so someone else can derive something from your result.
Ultimately, your data science is documented and executed -- and the charts are pretty!
Databricks has a head start, and its solution has matured since I last noted being underwhelmed with it. On the other hand, Zeppelin is open source and isn’t tied to buying cloud services from Databricks. You should know one of these tools. Learn one and it won’t be a big leap to learn the other.
New technologies to watch
I wouldn't throw these technologies into production yet, but you should certainly know about them.
Kylin: Some queries need low latency, which is what HBase gives you; larger analytics queries are usually a poor fit for HBase, so they fall to Hive. Moreover, joining a few tables over and over to calculate a result is slow, so “prejoining” and “precalculating” that data into cubes is a major advantage for such datasets. This is where Kylin comes in.
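Precalculation is the whole trick: the fact table is aggregated over combinations of dimensions ahead of time (a cube), so an analytics query becomes a lookup instead of repeated scans and joins. A toy sketch of cube building, with a made-up fact table and no Kylin API involved:

```python
from itertools import combinations

# Toy sketch of OLAP cube precomputation, the idea behind Kylin:
# sum the measure over every combination of dimensions up front.
facts = [
    {"region": "EU", "product": "A", "sales": 10},
    {"region": "EU", "product": "B", "sales": 5},
    {"region": "US", "product": "A", "sales": 7},
]
dimensions = ["region", "product"]

cube = {}
for r in range(len(dimensions) + 1):
    for dims in combinations(dimensions, r):
        for row in facts:
            key = (dims, tuple(row[d] for d in dims))
            cube[key] = cube.get(key, 0) + row["sales"]

# A query is now a lookup, not a scan-plus-join:
print(cube[(("region",), ("EU",))])  # 15 -- total EU sales
print(cube[((), ())])                # 22 -- grand total
```

The trade-off is visible even in the toy: the cube's size grows with the number of dimension combinations, which is why Kylin suits repeated analytics on known dimensions rather than ad hoc exploration.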
Kylin is this year’s up-and-comer. We’ve already seen people using Kylin in production, but I’d suggest a bit more caution. Because Kylin isn’t for everything, its adoption isn't as broad as Spark's, but it has similar energy behind it. You should know at least a little about it at this point.
Atlas/Navigator: Atlas is Hortonworks’ new data governance tool. It isn’t even close to fully baked yet, but it's making progress. I expect it will probably surpass Cloudera’s Navigator, but if history repeats itself, it will have a less fancy GUI. If you need to know the lineage of a table or, say, map security without having to do so on a column-by-column basis (tagging), then either Atlas or Navigator could be your tool. Governance is a hot topic these days. You should know what one of these doohickies does.
Technologies I'd rather forget
Here's the stuff I am happily throwing under the bus. I have that luxury because new technologies have emerged to perform the same functions better.
Oozie: At All Things Open this year, Ricky Saltzer from Cloudera defended Oozie and said it was good for what it was originally intended to do -- that is, chain a couple MapReduce jobs together -- and dissatisfaction with Oozie stemmed from people overextending its purpose. I still say Oozie was bad at all of it.
Let's make a list: error-hiding, features that don’t work or work differently than documented, totally incorrect documentation with XML errors in it, a broken validator, and more. Oozie simply blows. It was written poorly and even elementary tasks become week-long travails when nothing works right. You can tell who actually works with Hadoop on a day-to-day basis versus who only talks about it because the professionals hate Oozie more. With Nifi and other tools taking over, I don’t expect to use Oozie much anymore.
MapReduce: The processing heart of Hadoop is on the way out. A DAG algorithm is a better use of resources. Spark does this in memory with a nicer API. The economic reasons that justified sticking with MapReduce recede as memory gets ever cheaper and the move to the cloud accelerates.
Tez: To some degree, Tez is a road not taken -- or a neanderthal branch of the evolutionary tree of distributed computing. Like Spark, it's a DAG algorithm, although one of its developers described it as an assembly language.
As with MapReduce, the economic rationale (disk versus memory) for using Tez is receding. The main reason to continue using it: The Spark bindings for some popular Hadoop tools are less mature or not ready at all. However, with Hortonworks joining the move to Spark, it seems unlikely Tez will have a place by the end of the year. If you don’t know Tez by now, don’t bother.
Now's the time
The Hadoop/Spark realm changes constantly. Despite some fragmentation, the core is about to become a lot more stable as the ecosystem coalesces around Spark.
The next big push will be around governance and application of the technology, along with tools to make cloudification and containerization more manageable and straightforward. Such progress presents a major opportunity for vendors that missed out on the first wave.
Good timing, then, to jump into big data technologies if you haven't already. Things evolve so quickly, it's never too late. Meanwhile, vendors with legacy MPP cube analytics platforms should prepare to be disrupted.