Role of Apache Kafka in the Hadoop ecosystem

Apache Kafka is a simple, high-performance, distributed, fault-tolerant messaging system. It was originally developed at LinkedIn and is now used at many companies, including Tumblr, Twitter, and Square.
[Figure: Kafka deployment model]

Kafka is designed to be used as plumbing for a low-latency, very high-throughput pipeline that handles all messaging, tracking, logging, and metrics data. This unified pipeline can provide data feeds to Hadoop clusters as well as to a diverse set of real-time stream-processing applications.

It is broadly similar to ActiveMQ and other JMS-based message-queuing implementations. There are many differences, though, including:
  • No acknowledgment is required for every message, which allows very high throughput at the cost of some delivery reliability
  • Written in Scala
  • No message IDs
  • Implements a pull model: consumers fetch messages from the broker at their own pace, rather than having messages pushed to them
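The pull model can be illustrated with a small sketch. This is plain Python, not Kafka's actual API: the broker keeps an append-only log, and each consumer tracks its own offset and fetches batches at its own pace, so the broker never pushes or waits for per-message acknowledgments.

```python
# Minimal illustration of a pull model (not the real Kafka API):
# the broker holds an append-only log; a consumer fetches from an
# offset it tracks itself, so the broker does no per-consumer pushing.

class Broker:
    def __init__(self):
        self.log = []  # append-only message log

    def append(self, message):
        self.log.append(message)

    def fetch(self, offset, max_messages=10):
        # A consumer asks for messages starting at its own offset.
        return self.log[offset:offset + max_messages]

class Consumer:
    def __init__(self, broker):
        self.broker = broker
        self.offset = 0  # each consumer owns its position in the log

    def poll(self):
        batch = self.broker.fetch(self.offset)
        self.offset += len(batch)  # advance only after a successful fetch
        return batch

broker = Broker()
for i in range(5):
    broker.append(f"event-{i}")

consumer = Consumer(broker)
print(consumer.poll())
```

Because the consumer owns the offset, a slow consumer simply falls behind and catches up later; the broker does not have to buffer state per consumer.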

Use Cases

  • Performance Stats Collection
  • Log Aggregation
  • Real-Time Event Processing

From the Apache Kafka site

"Apache Kafka is a distributed publish-subscribe messaging system. It is designed to support the following:
  • Persistent messaging with O(1) disk structures that provide constant time performance even with many TB of stored messages.
  • High-throughput: even with very modest hardware Kafka can support hundreds of thousands of messages per second.
  • Explicit support for partitioning messages over Kafka servers and distributing consumption over a cluster of consumer machines while maintaining per-partition ordering semantics.
  • Support for parallel data load into Hadoop.
Kafka provides a publish-subscribe solution that can handle all activity stream data and processing on a consumer-scale web site. This kind of activity (page views, searches, and other user actions) are a key ingredient in many of the social features on the modern web. This data is typically handled by "logging" and ad hoc log aggregation solutions due to the throughput requirements. This kind of ad hoc solution is a viable solution to providing logging data to an offline analysis system like Hadoop, but is very limiting for building real-time processing. Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines. The use for activity stream processing makes Kafka comparable to Facebook's Scribe or Apache Flume (incubating), though the architecture and primitives are very different for these systems and make Kafka more comparable to a traditional messaging system."
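The per-partition ordering mentioned in the quote comes from routing each message by key. A common scheme, sketched here in plain Python rather than Kafka's actual partitioner, hashes the key modulo the partition count, so all messages with the same key land in the same partition in the order they were produced.

```python
# Sketch of key-based partitioning (a typical scheme, assumed for
# illustration; not Kafka's exact partitioner): messages with the same
# key always map to the same partition, preserving per-key order.
import zlib

NUM_PARTITIONS = 4

def partition_for(key):
    # crc32 is a stable hash, so the key-to-partition mapping is
    # deterministic across producer restarts.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

partitions = [[] for _ in range(NUM_PARTITIONS)]

events = [("user-1", "login"), ("user-2", "search"),
          ("user-1", "click"), ("user-2", "logout")]

for key, value in events:
    partitions[partition_for(key)].append((key, value))

# All of user-1's events sit in one partition, in production order.
p = partitions[partition_for("user-1")]
print([v for k, v in p if k == "user-1"])  # ['login', 'click']
```

Ordering is guaranteed only within a partition; consumers of different partitions may see events from different keys interleaved arbitrarily.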

Version 0.8 released

A new version with more bug fixes and enhancements was recently released. Important note: since 0.8 is not backward compatible with 0.7.x, if you already have a 0.7 installation you will need to delete all existing ZooKeeper data and Kafka log data (assuming the data is not critical). Alternatively, you can use a different ZooKeeper namespace and a new Kafka log directory.
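The alternative mentioned above can be sketched as a `server.properties` fragment (the host name and paths here are placeholders, not values from the source): pointing `zookeeper.connect` at a chroot path gives 0.8 a fresh ZooKeeper namespace, and a new `log.dirs` keeps it away from the old 0.7 log data.

```properties
# Illustrative server.properties fragment (hosts/paths are examples):
# a ZooKeeper chroot path gives 0.8 its own namespace, leaving the
# 0.7.x data under the ZooKeeper root untouched.
zookeeper.connect=zk1.example.com:2181/kafka-0.8

# A fresh log directory so 0.8 does not touch the old 0.7 log data.
log.dirs=/var/kafka-0.8-logs
```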