HBase Real-time Analytics & Rollbacks via Append-based Updates (Part 2)

This is the second part of a 3-part post series in which we describe how we use HBase at Sematext for real-time analytics with an append-only updates approach.

In our previous post we explained the problem in detail with the help of an example and touched on the idea behind the suggested solution. In this post we will go through the details of the solution and briefly introduce the open-sourced implementation of the described approach.

Suggested Solution

The suggested solution can be described as follows:

  1. replace update (Get+Put) operations at write time with simple append-only writes
  2. defer the processing of updates to periodic compaction jobs (not to be confused with HBase's own minor/major compactions)
  3. perform on-the-fly updates processing only if the user asks for data before the updates have been compacted

Before (standard Get+Put updates approach):

The picture below shows an example of updating search query metrics that can be collected by a Search Analytics system (something we do at Sematext).

  • each new piece of data (blue box) is processed individually
  • to apply an update based on the new piece of data:
    • the existing data (green box) is first read,
    • then changed, and
    • written back (as sketched in code below)
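
In code, the standard approach boils down to a read-modify-write for every incoming piece of data. Here is a rough sketch using the plain HBase client API of that era; the table, column family, qualifier and key layout are made up for illustration, and error handling is omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetPutUpdateSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "search_metrics");      // hypothetical table name
        byte[] row = Bytes.toBytes("query:foo|2012-04-18-10");  // made-up key: one row per query per hour
        byte[] cf  = Bytes.toBytes("d");
        byte[] col = Bytes.toBytes("count");
        long newCount = 5;                                       // value from the new piece of data (blue box)

        // 1. read the existing record (green box)
        Result existing = table.get(new Get(row));
        byte[] val = existing.getValue(cf, col);
        long count = (val == null) ? 0 : Bytes.toLong(val);

        // 2. change the data and 3. write it back
        Put put = new Put(row);
        put.add(cf, col, Bytes.toBytes(count + newCount));
        table.put(put);

        table.close();
      }
    }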

After (append-only updates approach):

1. Writing updates:

2. Periodic updates processing (compacting):

3. Performing on the fly updates processing (only if user asks for data earlier than updates compacted):

Note: the result of processing updates on the fly can optionally be stored back right away, so that the next time the same data is requested no on-the-fly merging is needed.
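
As a rough illustration of the idea (this is not HBaseHUT's actual API, just a sketch continuing with the table and variables from the previous snippet), the write path turns into a blind Put to a new row whose key starts with the logical record's key and ends with a unique suffix, so that all updates for the same logical record sort next to each other and can be merged later with a single range scan:

    // Append-only update sketch: no Get, just a fast write of a new record.
    // Row key = <logical key> + <unique suffix> (made-up scheme).
    byte[] logicalKey   = Bytes.toBytes("query:foo|2012-04-18-10");
    byte[] uniqueSuffix = Bytes.toBytes(System.currentTimeMillis()); // e.g. write time; see the
                                                                     // task-failure section below for a caveat
    Put put = new Put(Bytes.add(logicalKey, uniqueSuffix));
    put.add(Bytes.toBytes("d"), Bytes.toBytes("count"), Bytes.toBytes(newCount));
    table.put(put); // append-only: nothing is read or overwritten at write time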

The idea is simple and not a new one, but given HBase's specific qualities, namely fast range scans and high write throughput, it works especially well. So, what we gain here is:

  • high update throughput
  • real-time updates visibility: despite deferring the actual updates processing, the user always sees the latest data changes
  • efficient updates processing by replacing random Get+Put operations with processing whole sets of records at a time (during a fast scan) and eliminating redundant Get+Put attempts when writing the very first data item
  • better handling of update load peaks
  • ability to roll back any range of updates
  • avoidance of data inconsistency problems caused by tasks that fail after only partially updating data in HBase without rolling back (when used with MapReduce, for example)

Let's take a closer look at each of the above points.

High Update Throughput

Higher update throughput is achieved by not performing a Get operation for every record update. Get+Put operations are replaced with Puts alone (which can be further optimized by using a client-side write buffer), and Puts are really fast in HBase. The processing of updates (compaction) is still needed to perform the actual data merge, but it is done much more efficiently than doing a Get+Put for each update operation (more details below).
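
For example, with the HTable client the Puts can be buffered on the client side like this (a small sketch, continuing with the table from the earlier snippets):

    // Optional client-side buffering of Puts to cut down on RPC round-trips.
    table.setAutoFlush(false);                   // don't ship every Put individually
    table.setWriteBufferSize(2 * 1024 * 1024);   // flush roughly every 2 MB of buffered Puts
    // ... many table.put(...) calls for incoming updates ...
    table.flushCommits();                        // push out whatever remains in the buffer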

Real-time Updates Visibility

Users always see the latest data changes. Even if updates have not been processed yet and are still stored as a series of records, they will be merged on the fly.

By doing periodic merges of appended records we ensure that the amount of data that has to be processed on the fly stays small enough for this merging to be fast.

Efficient Updates Processing

  • N Get+Put operations at write time are replaced with N Puts + 1 Scan (shared) + 1 Put operations
  • Processing N changes at once is usually more effective than applying N individual changes

Let's say we got 360 update requests for the same record: e.g. the record keeps track of some sensor value over a 1-hour interval and we collect data points every 10 seconds. These measurements need to be merged into a single record that represents the whole 1-hour interval. With the standard approach we would perform 360 Get+Put operations (while we could use a client-side buffer to perform partial aggregation and reduce the number of actual Get+Put operations, we want data to be sent immediately as it arrives instead of asking the user to wait 10*N seconds). With the append-only approach, we perform 360 Put operations plus 1 scan (which is actually shared with the processing of other updates) that goes through the 360 records (stored in sequence), calculates the resulting record, and 1 Put operation to store the result back. Fewer operations means using fewer resources, which leads to more efficient processing. Moreover, if the value that needs to be updated is a complex one (takes time to load into memory, etc.) it is much more efficient to apply all updates at once than one by one.
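
A compaction pass over such a group of records could look roughly like the sketch below. HBaseHUT's real implementation is more general; here the "merge" is just summing a counter, the key layout is the made-up one from the earlier snippets, and in addition to the earlier imports it needs java.util.ArrayList/List and the Scan, ResultScanner and Delete client classes:

    // Periodic updates processing (compaction) for one logical record:
    // scan its appended update rows (adjacent thanks to the shared key prefix),
    // merge them into one resulting record, write it back, delete the processed rows.
    byte[] cf  = Bytes.toBytes("d");
    byte[] col = Bytes.toBytes("count");
    Scan scan = new Scan(logicalKey, Bytes.add(logicalKey, Bytes.toBytes(Long.MAX_VALUE)));

    ResultScanner scanner = table.getScanner(scan);
    long merged = 0;
    List<Delete> processed = new ArrayList<Delete>();
    for (Result r : scanner) {
      byte[] v = r.getValue(cf, col);
      if (v != null) {
        merged += Bytes.toLong(v);               // application-specific merge logic goes here
      }
      processed.add(new Delete(r.getRow()));
    }
    scanner.close();

    // Write the merged result as one new record (with its own suffix, so it does not
    // collide with any of the rows we are about to delete), then remove the old ones.
    Put resulting = new Put(Bytes.add(logicalKey, Bytes.toBytes(System.currentTimeMillis())));
    resulting.add(cf, col, Bytes.toBytes(merged));
    table.put(resulting);
    table.delete(processed);                     // clean up the processed update records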

Deferring the processing of updates is especially effective when a large portion of the operations are in essence insertions of new data rather than updates of stored data. In that case a lot of Get operations (checking whether there is something to update) are redundant.

Better Handling of Update Load Peaks

Deferring processing of updates helps handle load peaks without major performance degradation. The actual (periodic) processing of updates can be scheduled for off-peak time (e.g. nights or weekends).

Ability to Rollback Updates

Since updates don't change the existing data (until they are processed), rolling back is easy.

Preserving the rollback ability even after updates have been compacted is also not hard. Updates can be grouped and compacted within time periods of a given length, as shown in the picture below. That means the client that reads the data will still have to merge updates on the fly even right after compaction has finished. However, with proper configuration this isn't going to be a problem, as the number of records to be merged on the fly will be small enough.

Consider the following example where the goal is to keep an all-time average value for a particular sensor. Let's say data is collected every 10 seconds for 30 days, which gives 259,200 separately written data points. While compacting this amount of values on the fly might still be quite fast on a medium-to-large HBase cluster, performing periodic compaction will improve reading speed a lot. Let's say we perform updates processing every 4 hours and use a 1-hour interval as the compaction base (as shown in the picture above). This gives us, at any point in time, fewer than 24*30 + 4*60*6 = 2,160 non-compacted records that need to be processed on the fly when fetching the resulting record for the 30 days: at most 720 hourly compacted records plus at most 4 hours' worth (1,440) of raw data points. This is a small number of records and can be processed very fast. At the same time it remains possible to roll back to any point in time with 1-hour granularity.
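
For instance, with a made-up key layout of <sensorId><timestamp-based suffix> for the update records, rolling back one 1-hour bucket is just a matter of scanning that key range and deleting what is there (again only a sketch, reusing the setup from the earlier snippets):

    // Roll back all updates for one sensor written for a given 1-hour bucket.
    byte[] sensor = Bytes.toBytes("sensor-42");
    long bucketStart = 1335000000000L;                        // start of the hour to roll back (example value)
    byte[] startRow = Bytes.add(sensor, Bytes.toBytes(bucketStart));
    byte[] stopRow  = Bytes.add(sensor, Bytes.toBytes(bucketStart + 3600 * 1000L));

    ResultScanner scanner = table.getScanner(new Scan(startRow, stopRow));
    List<Delete> toRollBack = new ArrayList<Delete>();
    for (Result r : scanner) {
      toRollBack.add(new Delete(r.getRow()));
    }
    scanner.close();
    table.delete(toRollBack);  // the interval simply disappears from subsequent query results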

If the system should store more historical data, but we don't care about rolling it back (if nothing wrong was found during the 30 days the data is most likely OK), then compaction can be configured to process all data older than 30 days as one group (i.e. merge it into one record).

Automatic Handling of Task Failures which Write Data to HBase

Typical scenario: a task that updates HBase data fails in the middle of writing – some data was written, some was not. Ideally we should be able to simply restart the same task (on the same input data) so that the new attempt performs the needed updates without corrupting data due to duplicated write operations.

In the suggested solution every update is written as a new record. To make sure that performing literally the same (not just similar) update operation multiple times does not result in multiple separate update records (which would corrupt the data), every attempt of a given update operation should write to a record with the same row key. This way a retry simply overwrites the single record possibly created by the failed task attempt, the same update is never applied twice, and the data stays consistent.
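
In other words, the unique part of the row key has to be derived deterministically from the input being processed (e.g. from the input record's id or offset), not from something attempt-specific like the current time or a random number. A hedged sketch, reusing the made-up key layout and variables from the earlier snippets:

    // Idempotent append-only update: the row key suffix comes from the input data itself,
    // so a restarted task attempt overwrites the very same row instead of adding a duplicate.
    byte[] logicalKey = Bytes.toBytes("query:foo|2012-04-18-10");
    long inputRecordId = 123456L;                 // stable id/offset of the input record (example value)

    Put put = new Put(Bytes.add(logicalKey, Bytes.toBytes(inputRecordId)));
    put.add(Bytes.toBytes("d"), Bytes.toBytes("count"), Bytes.toBytes(newCount));
    table.put(put);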

This is especially convenient in situations when we write to an HBase table from MapReduce tasks, as the MapReduce framework restarts failed tasks for you. With the given approach we can say that handling task failures happens automatically: no extra effort is needed to manually roll back the previous task's changes and start a new task.

Cons

Below are the major drawbacks of the suggested solution. Usually there are ways to reduce their effect on your system depending on the specific case (e.g. by tuning HBase appropriately or by adjusting parameters involved in data compaction logic).

  • merging on the fly takes time. Properly configuring periodic updates processing is key to keeping data fetching fast.
  • when performing compaction, many records that don't need to be compacted (already compacted or "stand-alone" records) may get scanned. Compaction can usually be limited to data written after the previous compaction, which allows using efficient time-based filters to reduce the impact here (see the sketch below).
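
For example, scans can be restricted to data written since the previous compaction run using the time-range filtering built into HBase scans (a small sketch, continuing with the table from the earlier snippets):

    // Only scan data written after the previous compaction run.
    long lastCompactionTs = 1335000000000L;       // persisted by the previous compaction run (example value)
    Scan scan = new Scan();
    scan.setTimeRange(lastCompactionTs, Long.MAX_VALUE);
    ResultScanner scanner = table.getScanner(scan);
    // ... feed only these recently written records into the updates processing ...
    scanner.close();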

Solving these issues may be implementation-specific. We’ll bring them up again when talking about our implementation in the follow up post.

Implementation: Meet HBaseHUT

The suggested solution was implemented and open-sourced as the HBaseHUT project. HBaseHUT will be covered in a follow-up post shortly.

 

If you like this sort of stuff, we’re looking for Data Engineers!

HBase Real-time Analytics & Rollbacks via Append-based Updates

In this part 1 of a 3-part post series we’ll describe how we use HBase at Sematext for real-time analytics and how we can perform data rollbacks by using an append-only updates approach.

Some bits of this topic were already covered in Deferring Processing Updates to Increase HBase Write Performance and some were briefly presented at BerlinBuzzwords 2011 (video). We will also talk about some of the ideas below during HBaseCon-2012 in late May (see Real-time Analytics with HBase). The approach described in this post is used in our production systems (SPM & SA) and the implementation was open-sourced as HBaseHUT project.

Problem we are Solving

While HDFS & MapReduce are designed for massive batch processing and with the idea of data being immutable (write once, read many times), HBase includes support for additional operations such as real-time and random read/write/delete access to data records. HBase performs its basic job very well, but there are times when developers have to think at a higher level about how to utilize HBase capabilities for specific use-cases.  HBase is a great tool with good core functionality and implementation, but it does require one to do some thinking to ensure this core functionality is used properly and optimally. The use-case we’ll be working with in this post is a typical data analytics system where:

  • new data are continuously streaming in
  • data are processed and stored in HBase, usually as time-series data
  • processed data are served to users who can navigate through most recent data as well as dig deep into historical data

Although the above points frame the use-case relatively narrowly, the approach and its implementation that we’ll describe here are really more general and applicable to a number of other systems, too. The basic issues we want to solve are the following:

  • increase record update throughput. Ideally, despite the high volume of incoming data, changes should be applied in real-time. Usually, due to the limitations of the "normal HBase update", which requires Get+Put operations, updates are applied using a batch-processing approach (e.g. as MapReduce jobs). This, of course, is anything but real-time: incoming data is not immediately visible; it is seen only after it has been processed.
  • ability to roll back changes in the served data. Human errors or any other issues should not permanently corrupt data that system serves.
  • ability to fetch data interactively (i.e. fast enough for impatient humans). Whether one navigates through a small amount of recent data or the selected time interval spans years, retrieval should be fast.

Here is what we consider an “update”:

  • addition of a new record if no record with the same key exists
  • update of an existing record with a particular key

Let’s take a look at the following example to better understand the problem we are solving.

Example Description

Briefly, here are the details of an example system:

  • System collects metrics from a large number of sensors (N) very frequently (each second) and displays them on chart(s) over time
  • User needs to be able to select small time intervals to display on a chart (e.g. several minutes) as well as very large spans (e.g. several years)
  • Ideally, data shown to user should be updated in real-time (i.e. user can see the most recent state of the sensors)

Note that even if some of the above points are not applicable to your system the ideas that follow may still be relevant and applicable.

Possible “direct” Implementation Steps

The following steps are by no means the only possible approach.

Step 1: Write every data point as a new record or new column(s) of some record in HBase. In other words, use a simple append-only approach. While this works well for displaying charts with data from short time intervals, showing a year's worth of data (there are about 31,536,000 seconds in one year) may be too slow to call the experience "interactive".
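
For example (table and key layout are made up, using the standard HBase client classes), each measurement can be written under a key of the form <sensorId><timestamp>, so that rendering a short interval for one sensor is a single small range scan:

    // Step 1 sketch: one new row per measurement, keyed by <sensorId><timestamp>.
    HTable table = new HTable(HBaseConfiguration.create(), "sensor_data");  // made-up table name
    long now = System.currentTimeMillis();
    double sensorValue = 23.7;                                               // the new measurement (example value)

    Put put = new Put(Bytes.add(Bytes.toBytes("sensor-42"), Bytes.toBytes(now)));
    put.add(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(sensorValue));
    table.put(put);

    // Reading a short interval for that sensor is then one small range scan:
    long intervalStart = now - 10 * 60 * 1000L;  // e.g. the last 10 minutes
    Scan scan = new Scan(Bytes.add(Bytes.toBytes("sensor-42"), Bytes.toBytes(intervalStart)),
                         Bytes.add(Bytes.toBytes("sensor-42"), Bytes.toBytes(now + 1)));
    ResultScanner scanner = table.getScanner(scan);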

Step 2: Store extra records with aggregated data for larger time intervals (say 1 hour, so that 1 year = 8,760 data points). As new data comes in continuously and we want data to be seen in real-time, plus we cannot rely on data coming in a strict order (say because one sensor had network connectivity issues, or because we want the ability to import historical data from a new data source), we have to use update operations on those records that hold data for longer intervals. This requires a lot of Get+Put operations to update the aggregated records, which means degradation in performance: writing to HBase in this fashion will be significantly slower than using the append-only approach described in Step 1. This may slow writes down so much that a system like this may not actually be able to keep up with the volume of the incoming data. Not good.

Step 3: Compromise real-time data analysis and process data in small batches (near real-time). This will decrease the load on HBase as we can process (aggregate) data more efficiently in batches and can reduce the number of update (Get+Put) operations. But do we really want to compromise real-time analytics? No, of course not.  While it may seem OK in this specific example to show data for bigger intervals with some delay (near real-time), in real-world systems this usually affects other charts/reports, such as reports that need to show total, up to date figures. So no, we really don’t want to compromise real-time analytics if we don’t have to. In addition, imagine what happens if something goes wrong (e.g. wrong data was fed as input, or application aggregates data incorrectly due to a bug or human error).  If that happens we will not be able to easily roll back recently written data. Utilizing native HBase column versions may help in some cases, but in general, when we want greater control over rollback operation a better solution is needed.

Use Versions for Rolling Back?

Recent additions to managing cell versions make cell versioning even more powerful than before. Things like HBASE-4071 make it easy to store historical data without big overhead by cleaning old data efficiently. While it seems obvious to use versions (a native HBase feature) to allow rolling back data changes, we cannot (and do not want to) rely heavily on cell versions here. The main reason is that HBase is simply not very efficient when dealing with lots of versions of a given cell: a long update history for a record/cell means many versions of that cell. Versions are managed and navigated as a simple list in HBase (as opposed to the Map-like structure used for records and columns), so managing long lists of versions is less efficient than having a bunch of separate records/columns. Besides, using versions will not help us with the Get+Put situation, and we are aiming to kill these two birds with one stone with the solution we are about to describe. One could try to use the append-only updates approach described below with cell versions as the update log, but this would again bring us back to managing long lists inefficiently.

Suggested Solution

Given the example above, our suggested solution can be described as follows:

  • replace update (Get+Put) operations at write time with simple append-only writes and defer the processing of updates to periodic jobs, or perform aggregations on the fly if the user asks for data before the individual additions have been processed.

The idea is simple and not necessarily novel, but given the specific qualities of HBase, namely fast range scans and high write throughput, this approach works very well.  So well, in fact, that we’ve implemented it in HBaseHUT and have been using it with success in our production systems (SPM & SA).

So, what we gain here is:

  • high update throughput
  • real-time updates visibility: despite deferring the actual updates processing, the user always sees the latest data changes
  • efficient updates processing by replacing random Get+Put operations with processing whole sets of records at a time (during a fast scan) and eliminating redundant Get+Put attempts when writing the very first data item
  • ability to roll back any range of updates
  • avoidance of data inconsistency problems caused by tasks that fail after only partially updating data in HBase without rolling back (when used with MapReduce, for example)

In our part 2 post we'll dig into the details of each of the above points and we'll talk more about HBaseHUT, which makes all of the above possible. If you like this sort of stuff, we're looking for Data Engineers!

Berlin Buzzwords 2012 – Three Talks from Sematext

Last year was our first time at Berlin Buzzwords.  We gave 1 full talk about Search Analytics (video) and 2 lightning talks (video, video).  We saw a number of good talks, too.  We also took part in a HBase Hackathon organized by Lars George in Groupon’s Berlin offices and even found time to go clubbing.  So in hopes of paying Berlin another visit this year, a few of us at Sematext (@sematext) submitted talk proposals.  Last week we all got acceptance emails, so this year there will be 3 talks from 3 Sematextans at Berlin Buzzwords!  Here is what we’ll be talking about:

Rafał: Scaling Massive ElasticSearch Clusters

This talk describes how we’ve used ElasticSearch to build massive search clusters capable of indexing several thousand documents per second while at the same time serving a few hundred QPS over billions of documents in well under a second.  We’ll talk about building clusters that continuously grow in terms of both indexing and search rates. You will learn about finding cluster nodes that can handle more documents, about managing shard and replica allocation and prevention of unwanted shard rebalancing, about avoiding expensive distributed queries, etc.  We’ll also describe our experience doing performance testing of several ElasticSearch clusters and will share our observations about what settings affect search performance and how much.  In this talk you’ll also learn how to monitor large ElasticSearch clusters, what various metrics mean, and which ones to pay extra attention to.

Alex: Real-time Analytics with HBase

HBase can store massive amounts of data and allow random access to it – great. MapReduce jobs can be used to perform data analytics on a large scale – great. MapReduce jobs are batch jobs – not so great if you are after Real-time Analytics. Meet append-only writes approach that allows going real-time where it wasn’t possible before.

In this talk we’ll explain how we implemented “update-less updates” (not a typo!) for HBase using append-only approach. This approach shines in situations where high data volume and velocity make random updates (aka Get+Put) prohibitively expensive.  Apart from making Real-time Analytics possible, we’ll show how the append-only approach to updates makes it possible to perform rollbacks of data changes, and avoid data inconsistency problems caused by tasks in MapReduce jobs that fail after only partially updating data in HBase.  The talk is based on Sematext’s success story of building a highly scalable, general purpose data aggregation framework which was used to build Search Analytics and Performance Monitoring services. Most of the generic code needed for append-only approach described in this talk is implemented in our HBaseHUT open-source project.

Otis: Large Scale ElasticSearch, Solr & HBase Performance Monitoring

This talk has all the buzzwords covered: big data, search, analytics, realtime, large scale, multi-tenant, SaaS, cloud, performance… and here is why:

In this talk we’ll share the “behind the scenes” details about SPM for HBase, ElasticSearch, and Solr, a large scale, multi-tenant performance monitoring SaaS built on top of Hadoop and HBase running in the cloud.  We will describe all its backend components, from the agent used for performance metrics gathering, to how metrics get sent to SPM in the cloud, how they get aggregated and stored in HBase, how alerting is implemented and how it’s triggered, how we graph performance data, etc.  We’ll also point out the key metrics to watch for each system type.  We’ll go over various pain-points we’ve encountered while building and running SPM, how we’ve dealt with them, and we’ll discuss our plans for SPM in the future.

We hope to see some of you in Berlin.  If these topics are of interest to you, but you won’t be coming to Berlin, feel free to get in touch, leave comments, or ping @sematext.  And if you love working with things our talks are about, we are hiring world-wide!

Sensei: distributed, realtime, semi-structured database

Once upon a time there was no decent open-source search engine.  Then, at the very beginning of this millennium Doug Cutting gave us Lucene.  Several years later Yonik Seeley wrote Solr.  In 2010 Shay Banon released ElasticSearch.  And just a few days ago John Wang and his team at LinkedIn announced Sensei 1.0.0 (also known as SenseiDB).  Here at Sematext we’ve been aware of Sensei for a while now (2 years?) and are happy to have one more great piece of search software available for our own needs and those of our customers.  As a matter of fact, we are so excited about Sensei that we’ve already started hacking on adding support for Sensei to SPM, our Scalable Performance Monitoring tool and service!  Since Sensei is brand new, we asked John to tell us a little more about it.

Please tell us a bit about yourself.

My name is John Wang, and I am the search architect at LinkedIn.com. I am the creator and the current lead for the Sensei project.

Could you describe Sensei for us?

Sensei is an open-source, elastic, realtime, distributed database with native support for searching and navigating both unstructured text and structured data. Sensei is designed to handle complex semi-structured queries on very large, and rapidly changing datasets.

It was written by the content search team at LinkedIn to support LinkedIn Homepage and Signal.

The core engine is also used for LinkedIn search properties, e.g. people search, recruiter system, job and company search pages.

Why did you write Sensei instead of using Lucene or Solr?

Sensei leverages Lucene.

We weren’t able to leverage Solr because of the following requirements:

  • High update requirement: tens of thousands of updates per second into the system
  • A real distributed solution: Solr's current distributed setup has a SPOF at the master, and SolrCloud is not yet completed.
  • Complex faceting support. Not just your standard terms-based faceting: we needed to facet on the social graph, dynamic time ranges and many other interesting faceting scenarios. Faceting behavior also needs to be highly customizable, which is not available in Solr.

What does Sensei do that existing open-source search solutions don’t provide?

Consider Sensei if your application has the following characteristics:

  • High update rates
  • Non-trivial semi-structured query support

Who should stick with Solr or ElasticSearch instead of using Sensei?

The feature sets, as well as the limitations, of all these systems don't overlap fully. Depending on your application, if you are relying on certain features in one system and it is working out, then I would suggest you stick with it. But for Sensei, our philosophy is to consider performance ahead of features.

Are there currently any Sensei users other than LinkedIn that you are aware of?

We have seen some activities on the mailing list indicating deployments outside of LinkedIn, but I don’t know the specifics. This is a new project and we are keeping track of its usage on http://senseidb.github.com/sensei/usage.html, so let us know if you are using Sensei and want to be listed there.

What are Sensei’s biggest weaknesses and how and when do you plan on addressing them?

Let me address this question by providing a few limitations of Sensei:

  • Each document inserted into Sensei must have a unique identifier (UID) of type long. Though this can be inconvenient, this is a decision made for performance reasons. We have no immediate plans for addressing this, but we are thinking about it.
  • For columns defined in numeric format, e.g. int, float, long…, we don’t yet support negative numbers. We will have support for negative numbers very soon.
  • Static schema. Dynamic schema is something we find useful, and we will support it in the near future.

What’s next for Sensei as a project?

We will continue iterating on Sensei on both the performance and feature front. See below for a list of things we are looking at.

What are some of the key features you plan on adding to Sensei in the coming months?

This may not be a comprehensive list, but it gives you an idea of the areas we are working on:

  • Relevance toolkit
  • Built-in time and geo type columns
  • Parent-node type documents
  • Attribute type faceting (name-value pairs)
  • Online rebalancing
  • Online reindexing
  • Parameter secondary store (e.g. activities on a document, social gestures, etc.)
  • Dynamic schemata
  • Support for aggregation functions, e.g. AVG, MIN, MAX, etc.

The Relevance toolkit sounds interesting.  Could you tell us a bit about what this will do and if, by any chance, this might in any way be close to the idea behind Apache Open Relevance?

This is a big feature for 1.1.0. I am not familiar enough with Open Relevance to comment. The idea behind the relevance toolkit is to allow you to specify a model with the query. One important usage for us is to be able to perform relevance tuning against fast-flowing production data.  Waiting for things to be redeployed to production after relevance model changes does not work if the production data is changing in real-time, like tweets.

Maybe some specific tech questions – feel free to skip the ones that you think are not important or you just don’t feel like answering.

What is the role of Norbert in Sensei?

Sensei currently uses Norbert, whose maintainer is one of our main developers, as a cluster manager and for RPC between a Broker and Sensei nodes. A Broker is a servlet embedded in each Sensei node. Norbert is used as a message transport to the Sensei nodes. Norbert is an elegant wrapper around Zookeeper for cluster management. We do have plans to create an abstraction around this component to allow pluggability of other cluster managers.

When I first saw SQL-like query on Sensei’s home page I thought it was purely for illustration purposes, but now I realize it is actually very real!

BQL (Browse Query Language) is a SQL variant used to query Sensei. It is very real; we plan for BQL to be a standard way to query Sensei.

Can you share with us any Sensei performance numbers?

We have published some performance numbers at http://senseidb.com/performance.html

We have created a separate Github repository containing all our performance evaluation code at:

https://github.com/kwei/search-perf

Does Sensei have a SPOF (Single Point Of Failure)?

No – assuming a Sensei cluster contains more than 1 replica of each document. This is one important design goal of Sensei: every Sensei node in the cluster acts independently in both consuming data as well as handling queries. See the following answers for details.

What has to happen for data loss to occur?

Data loss occurs only if you have data store corruption on all replicas.  If only 1 replica is corrupted, you can always recover from other replicas.

Sensei by design assumes a data source that is ordered and versioned, e.g., a persistent queue. Each Sensei node persists the version at each commit. Thus, to recover, data events can be replayed from that version.

In production at Linkedin, this is very handy to ensure data consistency when bouncing nodes.

You mention recovery from other replicas and recovery by replaying data events from a specific version.  Does that mean once a copy of a document makes it into Sensei in order to recover lost replicas for that document Sensei does not need to reach out to the originator of the data and is self-sufficient, so to speak?  Or does replaying mean getting the lost data from an external data store?

The data stream is external. So to catch-up from an older version, Sensei would just re-play the data events from the stream using this version. But if an entire data replica is lost, a manual copy from other replicas is required (for now).

What happens if a node in a cluster fails?

When a node fails, Zookeeper notifies the other cluster event listeners in the cluster, i.e. the Brokers. The Broker keeps track of the current cluster node topology, and subsequent queries are routed to the live replicas, thus avoiding sending requests to the failed node. If all nodes for one replica are down, then partial results are returned.

What happens when the cluster reaches its capacity?  Can one simply add more nodes to the cluster and Sensei auto-magically takes care of the rest or does one have to manually rebalance shards, or…?

Depending on how data is sharded:

If the over-sharding technique is used, then adding nodes to the cluster is trivial. New nodes just specify which shards they want to handle – every node's sensei.properties indicates the partitions it should handle, e.g., sensei.node.partitions=1,3,8

If using sharding strategy where data migration is not needed as new data is flowing into the system, e.g., sharding by time or consecutive UID, then expanding the cluster is also trivial.

If the sharding strategy requires data migration, e.g. mod-based sharding, then cluster rebalancing is required. This is coming in a future release; we already have designs for online data rebalancing.  For now, one has to reindex all data in order to reshard and rebalance.

Since Sensei is an eventually consistent system, how does one ensure the search client gets consistent results (e.g. when paging through results or filtering results with facets)?

On the Sensei request object, there is a routing parameter. Typically this routing parameter is set to the value of the search session. By default, Sensei applies consistent hashing on the routing parameter to make sure the same replica is used for queries with the same routing parameter or search session.

How does one upgrade Sensei? Can one perform an online upgrade of a Sensei cluster without shutting down the whole cluster? Are new versions of Sensei backwards compatible?

Yes, subsets of the cluster can be shut down, and dynamic routing via Zookeeper would take place. This is very useful when we are pushing out new builds in canary mode to ensure stability and compatibility.

Does Sensei require a schema or is it schemaless?

Sensei requires a schema. But we do have plans to make Sensei schema dynamic like ElasticSearch.

Does Sensei have support for things like Spatial Search, Function Queries, Parent-Child data, or JOIN?

We have in the works a relevance toolkit which should cover features of Solr’s Function Queries.

We also have plans to support Spatial Search and Parent-Child data.

We don’t have immediate plans to support Joins.

How does one talk to Sensei?  Are there existing client libraries?

The Sensei cluster exposes 2 rest end-points: REST/JSON and BQL.
The packaging also includes Java and Python clients (also Ruby if resourcing works out), along with JavaScript helpers for using the REST/JSON API in a web application.

Does Sensei have an administrative/management UI?

Sensei comes with a web application for helping with building queries against the cluster. We use it to tweak relevance models as well as instrumenting an online cluster.

JMX is also exposed to administer the cluster.

In the configuration, users can turn on other types of data reporting, e.g. to RRD, logs, etc.

Big thank you to John and his team for releasing and open-sourcing Sensei and for taking the time to answer all our questions.

@sematext

Event Stream Processor Matrix

We published our first ever UI-focused post on Top JavaScript Dynamic Table Libraries the other day and got some valuable feedback – thanks!

We are back to talking about the backend again.  Our Search Analytics and Scalable Performance Monitoring services/products accept, process, and store huge amounts of data.  One thing both of these services do is process a stream of events in real-time (and in batch, of course).  So what solutions are there that help one process data in real-time and perform some operations on a rolling window of data, such as the last 5 or 30 minutes of an incoming event stream?  We know of several solutions that fit that bill, so we decided to put together a matrix with the essential attributes of those tools in order to compare them and make our pick.  Below is the matrix we came up with.  If you are viewing this on our site, the table is likely going to be too wide, but it should look fine in a proper feed reader.

If you like working on systems that handle large volumes of data, like our Search Analytics and Scalable Performance Monitoring services, we are hiring world-wide.

Matrix part 1:

System | License | Language | Scaling | Add or change rules on the fly | Other infra needed | Rule types
Esper | GPL2, commercial | Java | Scale up | yes | none | Declarative, query-based
Drools Fusion | ASL 2.0 | Java | Scale up | yes | none | Declarative, mostly rule-based, but supports queries too
FlumeBase | ASL 2.0 | Java | Horizontal: natural sharding on top of Flume | yes | Flume | Declarative, query-based
Storm | EPL 1.0 | Clojure | Horizontal | can be implemented on top of Zookeeper | ZeroMQ, Zookeeper | Provides only low-level primitives (like grouping); a rule engine has to be implemented manually on top
S4 | ASL 2.0 | Java | Horizontal | can be implemented on top of Zookeeper | Zookeeper | Provides a set of low-level primitives; some correlation support via joins; the documentation has a "windowing" section, but it is empty
Activeinsight | CPAL 1.0, commercial | Java | Horizontal | yes | | Declarative, query-like
Kafka | APL 2.0 | Java | Horizontal | | Zookeeper | Set of low-level primitives

Matrix part 2:

System | Docs / examples | Maturity | Community | URL | Notes
Esper | very good | mature, stable | medium | esper.codehaus.org |
Drools Fusion | good | 3 years, stable | small | jboss.org/drools/drools-fusion.html |
FlumeBase | good | alpha | small | flumebase.org |
Storm | exists | used in production | growing very fast | tech.backtype.com | good deployment features
S4 | average | alpha, but used in production | medium (will grow under ASF) | s4.io |
Activeinsight | poor | unknown | unknown | activeinsight.org |
Kafka | good | used in production | small (will grow under ASF) | incubator.apache.org/kafka |

So there you have it – we hope you find this useful.  If you have any comments or questions, tweet us (@sematext) or leave a comment here.  If you like working on systems that handle large volumes of data, like our Search Analytics and Scalable Performance Monitoring services, we are hiring world-wide.
