Kafka Real-time Stream Multi-topic Catch up Trick

Half of the world, Sematext included, seems to be using Kafka.

Kafka is the spinal cord that connects various components in SPM, Site Search Analytics, and Logsene.  If Kafka breaks, we’re in trouble (but we have anomaly detection all over the place to catch issues early).  In many Kafka deployments, ours included, the most recent data is the most valuable.  Consider the case of Kafka in SPM, which processes massive amounts of performance metrics for monitoring applications and servers.  Clearly, in a performance monitoring system you primarily care about current performance numbers.  Thus, if SPM’s Kafka pipeline were to break and we then restored it, what we’d really like to avoid is processing all data sequentially, oldest to newest.  What we’d prefer is processing new metrics data first and then processing older data, using any spare capacity we have, in order to “fill the gap” caused by Kafka downtime.

Here’s a very quick “video” that shows this in action:

Kafka Catch Up


How does this work?

We asked about it back in 2013, but didn’t really get good tips.  Shortly after that we implemented the following logic that’s been working well for us, as you can see in the animation above.

The catch-up logic assumes there are multiple topics to consume from, one of which is the “active” topic to which the Producer is publishing messages. The Consumer decides which topic is active, although the Producer can also set it if it has not already been set. The active topic is recorded in ZooKeeper.

The Consumer measures the lag by looking at the timestamp that the Producer adds to each message published to Kafka. If the lag is over N minutes, the Consumer starts paying attention to the offset.  If the offset starts getting smaller and keeps getting smaller M times in a row, the Consumer knows we are able to keep up (i.e. the offset is not getting bigger) and sets another topic as active. This signals the Producer to switch publishing to this new topic, while the Consumer keeps consuming from all topics.

As a result, the Consumer is able to consume both new data and the delayed/old data, and we avoid going without fresh data while we are in catch-up mode, busy processing the backlog.  Consuming from one topic is what keeps new data being processed (this corresponds to the right-most part of the chart above “moving forward”), while consuming from the other topic is where we get the data that fills in the gap.
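To make this a bit more concrete, here is a minimal sketch of the Consumer-side decision, written in JavaScript to match the Node.js examples later in this post. The helper callbacks (lag lookups, the ZooKeeper write, topic naming) are hypothetical placeholders, not our actual implementation or any real Kafka/ZooKeeper client API.

```javascript
// Minimal sketch of the Consumer-side catch-up decision described above.
// The "deps" callbacks are hypothetical placeholders, not the actual
// Sematext implementation or a real Kafka/ZooKeeper client API.

const LAG_THRESHOLD_MS = 5 * 60 * 1000; // the "N minutes" threshold, here N = 5
const REQUIRED_SHRINKS = 3;             // the "M times in a row" check, here M = 3

function makeCatchUpChecker(deps) {
  let shrinkingStreak = 0;
  let previousOffsetLag = Infinity;

  // Call this periodically for the currently active topic.
  return function checkCatchUp(activeTopic) {
    // Time lag = now minus the Producer-supplied timestamp of the last consumed message.
    const timeLagMs = Date.now() - deps.lastConsumedProducerTimestamp(activeTopic);
    if (timeLagMs <= LAG_THRESHOLD_MS) {
      shrinkingStreak = 0;
      return; // close enough to real time, nothing to do
    }

    // We are behind: start paying attention to the offset lag (unconsumed messages).
    const offsetLag = deps.currentOffsetLag(activeTopic);
    shrinkingStreak = offsetLag < previousOffsetLag ? shrinkingStreak + 1 : 0;
    previousOffsetLag = offsetLag;

    // Offset lag shrank M times in a row => we can keep up, so flip the active
    // topic in ZooKeeper. The Producer switches publishing to the new topic,
    // while the Consumer keeps consuming from all topics to fill the gap.
    if (shrinkingStreak >= REQUIRED_SHRINKS) {
      deps.setActiveTopicInZooKeeper(deps.nextTopic(activeTopic));
      shrinkingStreak = 0;
      previousOffsetLag = Infinity;
    }
  };
}
```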

If you run Kafka and want a good monitoring tool for Kafka, check out SPM for Kafka monitoring.


Poll: How do you ship your Logs?

Recently, a few people from Sematext’s Logsene team debated how useful the “structured” part of syslog logs (those using the RFC5424 format) is to people.  Or has shipping logs in other structured formats, like JSON, made RFC5424 irrelevant to most people?  Here is a quick poll to help us all get some insight into that.

NOTE: the question is not what your logs look like initially, when they are generated, but what structure they have when you ship them to a centralized logging service, whether one running within your organization or a logging SaaS like Logsene.

If you choose “None of the above” please do leave a comment to help spread knowledge about alternatives.


SPM Client Async Metrics Collection

Do your SPM charts ever look choppy like this?


Let’s hope not!  However, if you run large Elasticsearch clusters, especially those with thousands of shards, you may find that Elasticsearch sometimes takes a long time to return the metrics information that SPM, of course, uses to render various metrics charts.  While this slowness is something Elasticsearch developers are undoubtedly working on, we’ve also proactively improved the SPM Client (aka agent, aka monitor) to cope with it better, specifically by making metric collection from various sources asynchronous.

There are other goodies in the more recent versions of the SPM Client, including additional optimizations (some specific to Elasticsearch, others beneficial to all SPM users), as well as changes that will enable exciting new functionality we’ll be announcing next month.

To get the latest SPM Client please go to https://apps.sematext.com/spm-reports/client.do.

Custom Metrics from Node.js Apps

We recently added support for Node.js and io.js monitoring to SPM and have received great feedback.  While SPM for Node.js monitors all key Node.js metrics, most applications have additional metrics one often wants to track — things like the number of concurrent users, the number of items placed in a shopping cart, or any other kind of IT metric, business transaction or KPI.  SPM already provides a Custom Metrics API and libraries that make shipping custom metrics from Java and Ruby applications a snap.  But why leave Node.js behind?  Meet spm-metrics-js (it’s on Github) – the npm module for sending custom metrics from Node.js apps to SPM.

This JavaScript module supports measurements using counters, meters, timers, and histograms. These helpers calculate metric values and ship them to SPM, where they are then turned into charts and fed into alert rules and anomaly detection algorithms.

Here’s an example for counting users on login and logout:
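A minimal sketch follows; the exact constructor and method names (getCustomMetric, increment, decrement) are assumptions on our part, so please check the spm-metrics-js README for the authoritative API.

```javascript
// Illustrative sketch only: the constructor and method names below are
// assumptions - see the spm-metrics-js README on Github for the real API.
var SPM = require('spm-metrics-js');

// SPM application token, read here from a hypothetical environment variable.
var spmClient = new SPM(process.env.SPM_TOKEN);

// A counter for the number of logged-in users.
var userCounter = spmClient.getCustomMetric({
  name: 'user.logins',     // metric name shown in SPM's UI
  aggregation: 'avg',      // aggregation type used by SPM's aggregation server
  filter1: 'frontend',     // value for the first filter field in SPM's UI
  filter2: 'users',        // value for the second filter field in SPM's UI
  interval: 60000          // capture and ship the value every 60 seconds
});

function onLogin(user)  { userCounter.increment(); }
function onLogout(user) { userCounter.decrement(); }
```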

Sending custom metrics is really that easy!

Now, let’s have a look at the options used when creating a custom metric object:

  • name – the name of the metric you can find in SPM’s user interface
  • aggregation – the aggregation type: ‘avg’, ‘sum’, ‘min’ or ‘max’, used by SPM’s aggregation server
  • filter1 – the SPM user interface provides two filter criteria; the value will be available in the UI as the first filter
  • filter2 – the filter value for the second filter field in SPM’s UI
  • interval – time in ms to call save() periodically. Defaults to no automatic call to save(). The save() function captures the metric and resets meters, histograms, counters or timers.
  • valueFilter – array of property names for calculated values. Only the specified fields are sent to SPM (e.g. [‘count’, ‘min’, ‘max’]).

Additional measurement functions are available to extend the custom metric object automatically with additional calculated properties:

  • Meter – measures rates and provides the following calculated properties:
    • mean: the average rate since the meter was started
    • count: the total of all values added to the meter
    • currentRate: the rate of the meter since the meter was started
    • 1MinuteRate: the rate of the meter biased toward the last 1 minute
    • 5MinuteRate: the rate of the meter biased toward the last 5 minutes
    • 15MinuteRate: the rate of the meter biased toward the last 15 minutes
  • Histogram – builds percentile, min, max, & sum aggregations over time
    • min: the lowest observed value
    • max: the highest observed value
    • sum: the sum of all observed values
    • variance: the variance of all observed values
    • mean: the average of all observed values
    • stddev: the stddev of all observed values
    • count: the number of observed values
    • median: 50% of all values in the reservoir are at or below this value.
    • p75: see median, 75th percentile
    • p95: see median, 95th percentile
    • p99: see median, 99th percentile
    • p999: see median, 99.9th percentile
  • Timer – measures time and captures rates in an internal meter and histogram

If this is more than you actually need, we recommend selecting only the relevant properties (using the ‘valueFilter’ option). Please note that Custom Metrics are aggregated by the specified aggregation type (‘avg’, ‘sum’, ‘min’, ‘max’).  Moreover, the aggregation type for each property can be defined – for further details please check the package documentation.
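For example, a histogram that ships only a few selected percentiles might look roughly like the sketch below. The histogram option and the update() call are assumptions modeled on common metrics libraries, not confirmed spm-metrics-js API, so treat this as illustrative only.

```javascript
// Illustrative sketch - option and method names are assumptions; see the
// spm-metrics-js documentation for the real API.
var SPM = require('spm-metrics-js');
var spmClient = new SPM(process.env.SPM_TOKEN);

// A histogram tracking response times; only the calculated properties listed
// in valueFilter are shipped to SPM.
var responseTimes = spmClient.getCustomMetric({
  name: 'checkout.response.time',
  aggregation: 'avg',
  histogram: true,                            // hypothetical flag enabling histogram helpers
  valueFilter: ['min', 'max', 'p95', 'p99'],  // ship only these calculated properties
  interval: 60000                             // save() is called automatically every 60 s
});

// Record an observed value, e.g. the duration of a request in milliseconds.
responseTimes.update(123);
```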

Adding instrumentation always raises the question of performance; in spm-metrics-js all metrics are buffered and shipped to SPM efficiently, in bulk, using asynchronous functions. We recommend using a transmit time of 60 seconds.

Once you send custom metrics to SPM you can create alerts on them, have SPM detect and alert you about anomalies, put charts with those metrics on dashboards, share charts with those metrics publicly or just with your team or organization, etc.


Actions for Metrics – e.g. define alerts using anomaly detection


Dashboard with Custom Metric and other Metrics

Please note the free plan has no limits on the number of monitored Applications, Processes, Dashboards or Users and you can share Accounts with your whole DevOps team and integrate SPM with Slack, HipChat, PagerDuty, Webhooks, etc. If you don’t use SPM yet, grab a free account to start monitoring your Node.js and io.js applications and benefit from all standard SPM features such as alerting, anomaly detection, event and log correlation, unlimited dashboards, secure information sharing, etc. Check out spm-metrics-js (or on Github) and drop us a line (or tweet 140 characters to @sematext) — we’d love to hear from you!

Poll Results: HBase Version Distribution

The results for HBase version distribution poll are in.  Thanks to everyone who took the time to vote!

The distribution pie chart is below, but we could summarize it as follows:

  • A big chunk of HBase clusters, about 30%, are still “stuck” on HBase 0.94.x
  • Over 37% of the HBase clusters are on 0.98.x, which, until very recently, was the latest stable version
  • Only about 7% of clusters are on 0.96.x, and we can assume these clusters will soon migrate to either 0.98.x or 1.0.x
  • Somewhat surprisingly, almost 20% of HBase clusters are already on HBase 1.0.0 even though 1.0.0 was released only a few weeks ago

It’s great to see so many clusters moving to 1.0.0 so quickly! As for why there are still so many clusters using 0.94.x, which is several years old, see this comment on the HBase mailing list.  Here at Sematext we make heavy use of HBase and were on 0.94.x for a long time, too.  A few months ago we moved to 0.98.x and have been enjoying all its benefits.  Furthermore, we’ve recently updated SPM for HBase to monitor a pile of new HBase metrics that provide interesting new insights about our HBase clusters through some of the new metric charts.  For example, we are now able to see the dramatic impact of major compactions on data locality (and thus HBase performance!) — see for yourself – https://apps.sematext.com/spm-reports/s/VhOltU14Cy, or the number and size of HLog files over time — https://apps.sematext.com/spm-reports/s/7LU1qvs7ur.

Apache HBase Version Distribution

You may also want to check out the results of our other polls about big data technologies.

HBase Poll: Version You Run?

As we are updating SPM for HBase to make sure SPM collects all the key HBase metrics that were added in 0.98, we thought it would be good to see which HBase versions are being used in the wild.  We’re on 0.98 after being on 0.94 for a long time.  How about you?

Please tweet this poll and help us spread the word, so we can get good, statistically significant results.  We’ll publish the results here and via @sematext (follow us!) in a week.


Poll Results: Kafka Version Distribution

The results for Apache Kafka version distribution poll are in.  Thanks to everyone who took the time to vote!

The distribution pie chart is below, but we could summarize it as follows:

  • Only about 5% of Kafka 0.7.x users didn’t indicate they will upgrade to 0.8.2.x in the next 2 months
  • Only about 14% of Kafka 0.8.1.x users didn’t indicate they will upgrade to 0.8.2.x in the next 2 months
  • Over 42% of Kafka users are already using 0.8.2.x!
  • Over 80% of Kafka users say they will be using 0.8.2.x within the next 2 months!

It’s great to see Kafka users being so quick to migrate to the latest version of Kafka!  We’re extra happy to see such quick 0.8.2 adoption because we put a lot of effort into improving Kafka metrics, as well as making all 100+ Kafka metrics available via SPM Kafka 0.8.2 monitoring a few weeks ago, right after Kafka 0.8.2 was released.

Apache Kafka Version Distribution


You may also want to check out the results of our recent Kafka Producer/Consumer language poll.


