Introducing Logsene Live Tail

We’ve been hard at work on our centralized logging SaaS / On-Premises solution – Logsene – and we’re confident the logging fans among us will enjoy the new Live Tail functionality.

Live Tail Benefits

Logsene Live Tail has several important benefits, including:

  • Shows logs in real time as they arrive in your Logsene app
  • Lets you filter logs to see only those you’re most interested in (e.g., eliminate info logs, which are high-volume, noisy, and don’t contain critical information)
  • Helps when deploying new software versions: see new errors right away and quickly go in and fix them



Logsene Live Tail is best seen, not told.  Check out this short video for a detailed look!

We hope you like this new addition to Logsene.  Got ideas for how we could make it more useful for you?  Let us know via comments, email, or @sematext.

Not using Logsene yet? Check out the free 30-day trial by registering here (ping us if you’re a startup, a non-profit, or educational institution – we’ve got special pricing for you!).  There’s no commitment and no credit card required.  Even better, combine Logsene with SPM and get performance metrics, logs, events, and anomalies together in a single pane of glass.


Introducing Top Database Operations

If you run Elasticsearch, Solr, or any datastore you connect to via JDBC, you’ll like what we’ve just added to SPM.  We call it Database Operations and in SPM you can find it in the new Database report:

If you didn’t watch the video, here’s what Database Operations gives you:

  • Top 5 operation types across all your data stores or filtered to a specific data store type
  • Top 5 operation types by speed, throughput, or simply their volume
  • Time-series reports for volume, throughput, and latency broken down by operation type
  • Ability to view all collected operations, not just the slowest ones, filtered by database or operation type and sorted by average duration, total duration, or throughput
  • Sparklines that show the last 5 minutes of values and trends
  • Top 10 slowest individual operations and drill-in details

  • Integration with Transaction Tracing, so you can correlate slow data store operations with the actual transaction/request that triggered them


  • To get this information, add the SPM agent to the application that is talking to a data store (e.g., Solr or Elasticsearch). This is because the SPM agent captures operations at the client layer, not in the server itself.
  • To start capturing this information, enable Transaction Tracing in your SPM agents.

This, including Distributed Transaction Tracing, works for all Java applications.




Don’t forget – when you enable Database Operations you will also automatically get Transaction Tracing, as well as the cool AppMaps – enjoy! :)

Got ideas for how we could make Database Operations better and more useful to you?  Let us know via comments, email, or @sematext.

Grab a free 30-day SPM trial by registering here (ping us if you’re a startup, a non-profit, or educational institution – we’ve got special pricing for you!).  There’s no commitment and no credit card required.

Kafka Real-time Stream Multi-topic Catch up Trick

Half of the world, Sematext included, seems to be using Kafka.

Kafka is the spinal cord that connects various components in SPM, Site Search Analytics, and Logsene.  If Kafka breaks, we’re in trouble (but we have anomaly detection all over the place to catch issues early).  In many Kafka deployments, ours included, the most recent data is the most valuable.  Consider the case of Kafka in SPM, which processes massive amounts of performance metrics for monitoring applications and servers.  Clearly, in a performance monitoring system you primarily care about current performance numbers.  Thus, if SPM’s Kafka pipeline were to break and we then restored it, what we’d really like to avoid is having to process all data sequentially, oldest to newest.  What we’d prefer is to process new metrics data first and then use any spare capacity to process older data in order to “fill the gap” caused by the Kafka downtime.

Here’s a very quick “video” that shows this in action:

Kafka Catch Up



How does this work?

We asked about it back in 2013, but didn’t really get good tips.  Shortly after that we implemented the following logic that’s been working well for us, as you can see in the animation above.

The catch-up logic assumes there are multiple topics to consume from, with one of these topics being the “active” topic to which the Producer is publishing messages. The Consumer sets which topic is active, although the Producer can also set it if it has not already been set. The active topic is stored in ZooKeeper.

The Consumer determines the lag by looking at the timestamp that the Producer adds to each message published to Kafka. If the lag is over N minutes, the Consumer starts paying attention to the offset lag, i.e., how far behind it is. If that lag starts shrinking and keeps shrinking M times in a row, the Consumer knows it is able to keep up (i.e., the lag is not growing) and sets another topic as active. This signals the Producer to switch publishing to this new topic, while the Consumer keeps consuming from all topics.

As a result, the Consumer is able to consume both new data and the delayed/old data, and we avoid being without fresh data while in catch-up mode, busy processing the backlog.  Consuming from one topic is what keeps new data flowing (this corresponds to the right-most part of the chart above “moving forward”), and consuming from the other topic is where we get the data that fills in the gap.
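
To make the idea a bit more concrete, here is a simplified Java sketch of the switching decision. This is not our production code: the class name, the way N and M are passed in, and the demo values are made up for illustration, and the real implementation also has to persist the new active topic in ZooKeeper so the Producer sees the switch.

    import java.util.concurrent.TimeUnit;

    /**
     * Simplified sketch of the catch-up decision described above.
     * The Consumer computes its lag from the timestamp the Producer adds to
     * each message; once the lag has kept shrinking M times in a row, the
     * Consumer declares it can keep up and switches the active topic (in the
     * real setup the new active topic is written to ZooKeeper so the Producer
     * sees it, too).
     */
    public class ActiveTopicSwitcher {

        private final long maxLagMs;          // the "N minutes" threshold
        private final int requiredDecreases;  // the "M times in a row" counter

        private long previousLagMs = Long.MAX_VALUE;
        private int consecutiveDecreases = 0;

        public ActiveTopicSwitcher(long maxLagMinutes, int requiredDecreases) {
            this.maxLagMs = TimeUnit.MINUTES.toMillis(maxLagMinutes);
            this.requiredDecreases = requiredDecreases;
        }

        /** Called for each consumed message; returns true when it is time to switch the active topic. */
        public boolean onMessage(long producerTimestampMs, long nowMs) {
            long lagMs = nowMs - producerTimestampMs;
            if (lagMs <= maxLagMs) {
                reset();          // within N minutes of real time: nothing to do
                return false;
            }
            // Lag is over N minutes: watch whether it keeps shrinking.
            if (lagMs < previousLagMs) {
                consecutiveDecreases++;
            } else {
                consecutiveDecreases = 0;
            }
            previousLagMs = lagMs;

            if (consecutiveDecreases >= requiredDecreases) {
                reset();
                return true;      // we can keep up: mark another topic as active
            }
            return false;
        }

        private void reset() {
            previousLagMs = Long.MAX_VALUE;
            consecutiveDecreases = 0;
        }

        // Tiny demo: the lag shrinks on every message, so after M messages we switch.
        public static void main(String[] args) {
            ActiveTopicSwitcher switcher = new ActiveTopicSwitcher(5, 3); // N = 5 minutes, M = 3
            long now = System.currentTimeMillis();
            long[] lagMinutes = {20, 15, 10, 8};
            for (long lag : lagMinutes) {
                boolean switchNow = switcher.onMessage(now - TimeUnit.MINUTES.toMillis(lag), now);
                System.out.println("lag = " + lag + " min -> switch active topic? " + switchNow);
            }
        }
    }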

If you run Kafka and want a good monitoring tool for Kafka, check out SPM for Kafka monitoring.


Docker + Elasticsearch: How to Monitor the Official Elasticsearch Image on Docker

The official Elasticsearch Image on Docker Hub has already generated more than 1.6 million pulls. It is probably the easiest way to add Elasticsearch to a development setup for your application stack. The reason for this crazy number? A rapidly growing number of organizations are using Elasticsearch and Docker in production. Needless to say, monitoring Elasticsearch is essential in production, and you can find a detailed analysis of this topic (including the “top 10 Elasticsearch metrics to watch”) in the free eBook: Elasticsearch Monitoring Essentials. Docker is disruptive in many ways, and there are many things that are slightly different and worth mentioning.  These include:

  1. Deployment of Elasticsearch and its monitoring tools changes when using a Dockerfile, Docker Compose, or various orchestration tools
  2. There is a new layer to monitor: container metrics and events; see Docker Events and Metrics monitoring and SPM for Docker
  3. Logging has changed: containers log to the console, and logs need to be retrieved from the Docker daemon instead of from the Elasticsearch log file.  Check out our post on the subject: Innovative Docker Log Management
  4. Official Images may not provide options for monitoring (such as JMX).  However, the official Image for Elasticsearch provides an option to pass parameters to the Java Runtime Environment.  We will use this option for Elasticsearch monitoring in this post. You should also be aware that the official Elasticsearch Image does not include any plugins, and commercial monitoring from Elastic can’t be distributed in this Image for licensing reasons.  Our monitoring tool of choice is SPM.  If you are not familiar with SPM — but have heard of it — or if you use Marvel, have a look at Marvel vs. SPM.

Next, I’m going to demonstrate a setup to monitor multiple Elasticsearch nodes on a single Docker Host. The final setup will provide the full Monitoring and Logging package:

  • Detailed Application Metrics for Elasticsearch, deployed on Docker
  • Detailed Container Metrics and Docker Events  
  • Centralized Logs for all Containers by SPM for Docker

So let’s first decide on one of the following options to monitor Elasticsearch on Docker.  You can:

  1. Build your own Elasticsearch container with the included monitoring components. I’m not going to go into details about this option today; rather, I’m going to focus on the official / trusted build.
  2. Use a standalone agent, which queries metrics from the Elasticsearch container. This requires a JMX setup and Docker networking configuration for both the monitor and Elasticsearch. The metrics gathered by remote agents are limited, and, in the Docker context, running an external monitoring process alongside the Elasticsearch processes consumes more resources. Which brings us to the next option …
  3. Inject an SPM in-process monitoring agent into Elasticsearch. This option has the lowest resource usage and has support for advanced monitoring functions like Transaction Tracing and AppMap.

I chose to implement Option #3 in this blog post because it provides the best insights into Elasticsearch. This means the Elasticsearch container needs file-system access to the SPM monitoring agent. Sematext provides the SPM Client (which includes the monitoring agent and metrics sender) pre-installed in a Docker Image, referred to as the “SPM Client Image/Container” in the following instructions and published on Docker Hub as “sematext/spm-client”.  The main trick here is to mount a volume from the SPM Client Container into the Elasticsearch Containers in order to load the monitoring library.

Let’s have a look at the desired setup and how to get there:


Monitoring Setup for Elasticsearch on Docker


Elasticsearch “Big Picture” – A Creative Flow Chart and Poster

There are many ways to look at Elasticsearch, but here at Sematext we’re pretty confident that you haven’t seen anything like this flowchart to demonstrate how it works:


Download a copy and print your own Elasticsearch poster!

If you’re looking for something unique to show off your Elasticsearch chops, then download a copy today and print your own.  We have files for US letter, A4, Ledger (11”x17”) and Poster (24”x36”) sizes.



Sematext is your “one-stop shop” for all things Elasticsearch: Expert Consulting, Production Support, Elasticsearch Training, Elasticsearch Monitoring, even Hosted ELK!

Doing Centralized Logging with ELK?  We Can Help There, Too

If your log analysis and management leave something to be desired, then we’ve got you covered there as well.  There’s our centralized logging solution, Logsene, which you can think of as your “Managed ELK Stack in the Cloud.”   It is also available as an On-Premises deployment.  Lastly, we offer Logging Consulting should you require more in-depth support.

Questions or Feedback?

If you have any questions or feedback for us, please contact us by email or hit us on Twitter.

Presentation: Large Scale Log Analytics with Solr

In this presentation from Lucene/Solr Revolution 2015, Sematext engineers — and Solr and centralized logging experts — Radu Gheorghe and Rafal Kuć talk about searching and analyzing time-based data at scale.

Documents ranging from blog posts and social media to application logs and metrics generated by smartwatches and other “smart” things share a similar pattern: they have timestamps among their fields, they rarely change, and they are deleted once they become obsolete. Because this kind of data is so voluminous, it often causes scaling and performance challenges.

In this talk, Radu and Rafal focus on these challenges, including: properly designing collections architecture, indexing data fast and without documents waiting in queues for processing, running queries that include time-based sorting and faceting on enormous amounts of indexed data (without killing Solr!), and more.

Here is the video:

…and here are the slides:


Here’s a Taste of What You’ll See

How do Logstash, rsyslog, Redis, and fast-food-hating zombies (?!) relate? You’ll have to check out the presentation to find out…


Solr “One-stop Shop”

Sematext is your “one-stop shop” for all things Solr: Expert Consulting, Production Support, Solr Training, and Solr Monitoring with SPM.

Log Analytics – We Can Help

If your log analysis and management leave something to be desired, then we’ve got you covered there as well.  There’s our centralized logging solution, Logsene.  And we also offer Logging Consulting should you require more in-depth support.

Questions or Feedback?

If you have any questions or feedback for us, please contact us by email or hit us on Twitter.  We love talking Solr — and logs!


Presentation: Log Analysis with Elasticsearch

Fresh from the Velocity NYC conference is the latest presentation from Sematext engineers Rafal Kuć and Radu Gheorghe: “From zero to production hero: Log Analysis with Elasticsearch.”

The talk goes through the basics of centralizing logs in Elasticsearch and all the strategies that make it scale with billions of documents in production. They cover:

  • Time-based indices and index templates to efficiently slice your data (see the small sketch after this list)
  • Different node tiers to de-couple reading from writing, heavy traffic from low traffic
  • Tuning various Elasticsearch and OS settings to maximize throughput and search performance
  • Configuring tools such as logstash and rsyslog to maximize throughput and minimize overhead
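
To give a taste of the first point, a common pattern is to write each day’s logs into its own index (e.g. logs-2015.10.28) so that retention becomes a matter of deleting whole indices rather than individual documents. Here is a tiny, self-contained Java illustration of that naming scheme; the “logs” prefix and the 7-day retention are example values for this post, not something specific from the talk.

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.util.ArrayList;
    import java.util.List;

    /**
     * Tiny illustration of time-based index naming: each day's documents go
     * into their own index, and retention is enforced by dropping whole
     * indices (a near-instant operation) instead of deleting documents.
     */
    public class TimeBasedIndices {

        private static final DateTimeFormatter DAILY = DateTimeFormatter.ofPattern("yyyy.MM.dd");

        /** Index to write a given day's documents to, e.g. "logs-2015.10.28". */
        static String writeIndex(String prefix, LocalDate day) {
            return prefix + "-" + day.format(DAILY);
        }

        /** Daily indices older than retentionDays (looking back lookBackDays) that can simply be deleted. */
        static List<String> expiredIndices(String prefix, LocalDate today, int retentionDays, int lookBackDays) {
            List<String> expired = new ArrayList<>();
            for (int d = retentionDays + 1; d <= lookBackDays; d++) {
                expired.add(writeIndex(prefix, today.minusDays(d)));
            }
            return expired;
        }

        public static void main(String[] args) {
            LocalDate today = LocalDate.of(2015, 10, 28);
            System.out.println("write to:       " + writeIndex("logs", today));
            System.out.println("safe to delete: " + expiredIndices("logs", today, 7, 10));
        }
    }

On the Elasticsearch side, an index template matching logs-* then applies the right mappings and settings to each new daily index as it gets created.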

Here is part 1 of the Video:

Here is part 2 of the Video:

Here are the slides:


And here are the Commands and Demo used in the presentation:


