Using Elasticsearch Mapping Types to Handle Different JSON Logs

By default, Elasticsearch does a good job of figuring out the type of data in each field of your logs. But if you like your logs structured like we do, you probably want more control over how they’re indexed: is time_elapsed an integer or a float? Do you want your tags analyzed so you can search for “big” in “big data”? Or do you need them not_analyzed, so you can show top tags via the terms aggregation? Or maybe both?
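For example, a minimal sketch of a multi-field mapping (using the Elasticsearch 1.x “fields” syntax, which the examples below assume; the sub-field name is our own choice) gets you both behaviors at once:

 "tags" : {
  "type" : "string",
  "fields" : {
   "raw" : { "type" : "string", "index" : "not_analyzed" }
  }
 }

Full-text searches would then hit the analyzed tags field, while a terms aggregation on tags.raw would return whole tags like big data instead of big and data.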

In this post, we’ll look at how to use index templates to manage multiple types of logs across multiple indices. Also, we’ll explain how to use logging tools (such as Logstash and rsyslog) to handle JSON logging and specify types.

Elasticsearch Mapping and Logs

As you may already know, to control these things in Elasticsearch you’ll need to define a mapping. This works similarly in Logsene, our log analytics SaaS, because it uses Elasticsearch and exposes its API.

With logs you’ll probably use time-based indices, because they scale better (in Logsene, for instance, you get daily indices). This means that, to make sure the mapping you define today also applies to the index you create tomorrow, you need to define it in an index template.

Managing Multiple Types

Mappings provide a nice abstraction when you have to deal with multiple types of structured data. Let’s say you have two apps generating logs of different structures: both have a timestamp field, but one recording logins has a user field, and another one recording purchases has an amount field.

To deal with this, you can define the timestamp field in the _default_ mapping, which applies to all types. Then, in each type’s own mapping, we’ll define the fields unique to that type. The following snippet is an example that works with Logsene, provided that aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee is your Logsene app token. If you run your own Elasticsearch, you can use whichever template name you want; just make sure its template pattern matches your index names.

curl -XPUT 'logsene-receiver.sematext.com/_template/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee_MyTemplate' -d '{
 "template" : "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee*",
 "order" : 21,
 "mappings" : {
  "_default_" : {
   "properties" : {
    "timestamp" : { "type" : "date" }
   }
  },
  "firstapp" : {
   "properties" : {
    "user" : { "type" : "string" }
   }
  },
  "secondapp" : {
   "properties" : {
    "amount" : { "type" : "long" }
   }
  }
 }
}'
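
To double-check that the template was stored, you can fetch it back (a sketch; this assumes the endpoint exposes Elasticsearch’s template API the same way):

curl -XGET 'logsene-receiver.sematext.com/_template/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee_MyTemplate'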

Sending JSON Logs to Specific Types

When you send a document to Elasticsearch by using the API, you have to provide an index and a type. You can use an Elasticsearch client for your preferred language to log directly to Elasticsearch or Logsene this way. But I wouldn’t recommend this, because then you’d have to manage things like buffering if the destination is unreachable.
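
For instance, indexing a purchase event into the secondapp type from the template above might look like this (a sketch, using the same placeholder token as the Logstash example below):

curl -XPOST 'logsene-receiver.sematext.com/LOGSENE-APP-TOKEN-GOES-HERE/secondapp' -d '{
 "timestamp" : "2014-11-24T12:00:00Z",
 "amount" : 50
}'

Elasticsearch would generate a document ID and apply the secondapp mapping we defined, so amount gets indexed as a long.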

Instead, I’d keep my logging simple and use a specialized logging tool, such as Logstash or rsyslog, to do the hard work for me. Logging to a file is usually the easiest option: it’s local, and you can have your logging tool tail the file and send its contents over the network. I usually prefer sockets (like syslog) because they let me configure Logstash/rsyslog to:
– write events in a human format to a local file I can tail if I need to (usually in development)
– forward logs without hitting disk if I need to (usually in production)
Whatever you prefer, I think writing to local files or sockets is better than sending logs over the network from your application, unless you’re willing to make a reliability trade-off and use UDP, which gets rid of most of these complexities.

Opinions aside, here’s a Logstash configuration that tails a file of newline-separated JSON logs and sends the resulting documents to Logsene via the Elasticsearch API:

input {
  file {
    path => "/var/log/test"
    codec => "json"   # parse each line as a JSON event
  }
}

output {
  elasticsearch {
    host => "logsene-receiver.sematext.com"
    port => 80
    index => "LOGSENE-APP-TOKEN-GOES-HERE"
    index_type => "fileapp"
    protocol => "http"
    manage_template => false  # keep the index template we defined earlier
  }
}

Note how the JSON codec does the parsing here, instead of the more expensive and maintenance-heavy approach with grok that we’ve shown in an earlier post on getting started with Logstash. Some applications let you configure the log format, so you can make them write JSON (Apache httpd, for example).
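
For example, with Apache httpd you could define a JSON log format along these lines (a sketch; the field names are our own choice, and escaping of special characters in logged values is not handled here):

LogFormat "{ \"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \"client\": \"%h\", \"method\": \"%m\", \"path\": \"%U\", \"status\": %>s, \"time_elapsed\": %D }" json_lines
CustomLog /var/log/apache2/access_json.log json_lines

The file input above could then pick these lines up with the same json codec, no grok required.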

If you want to send JSON over syslog, there’s the JSON-over-syslog (CEE) format that we detailed in a previous post. You can use rsyslog’s JSON parser module to take your structured logs and forward them to Logsene:

module(load="imuxsock")        # can listen to local syslog socket
module(load="omelasticsearch") # can forward to Elasticsearch
module(load="mmjsonparse")     # can parse JSON

action(type="mmjsonparse")  # parse CEE-formatted messages

template(name="syslog-cee" type="list") {  # Elasticsearch documents will contain
  property(name="$!all-json")              # all JSON fields that were parsed
}

action(
  type="omelasticsearch"
  template="syslog-cee"                     # use the template defined earlier
  server="logsene-receiver.sematext.com"
  serverport="80"
  searchType="syslogapp"
  searchIndex="LOGSENE-APP-TOKEN-GOES-HERE"
  bulkmode="on"                                # send logs in batches
  queue.dequeuebatchsize="1000"                # of up to 1000
  action.resumeretrycount="-1"    # retry indefinitely (buffer) if destination is unreachable
)

To send a CEE-formatted syslog message, you can run, for example:

logger '@cee: {"amount": 50}'

rsyslog would forward this JSON to Elasticsearch or Logsene via HTTP. Note that Logsene also supports CEE-formatted JSON over syslog out of the box, if you want to use a syslog protocol instead of the Elasticsearch API.

Filtering by Type

Once your logs are in, you can filter them by type (via the _type field) in Kibana:
[Screenshot: filtering logs by type in Kibana]
However, if you want more refined filtering by source, we suggest using a separate field for storing the application name. This can be useful when you have different applications using the same logging format. For example, both crond and postfix use plain syslog.
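
In Logstash, for example, you could attach such a field with a mutate filter (a sketch; the field name and value are our own choice):

filter {
  mutate {
    # tag every event with the name of the application that produced it
    add_field => { "application" => "crond" }
  }
}

You could then filter on application:crond in Kibana, even when several syslog-speaking applications share the same type.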

If you’re looking for a place to send your logs to, check out Logsene!

Solr vs. Elasticsearch — How to Decide?

by Otis Gospodnetić

[Otis is a Lucene, Solr, and Elasticsearch expert and co-author of “Lucene in Action” (1st and 2nd editions).  He is also the founder and CEO of Sematext.]

“Solr or Elasticsearch?”…well, at least that is the common question I hear from Sematext’s consulting services clients and prospects.  Which one is better, Solr or Elasticsearch?  Which one is faster?  Which one scales better?  Which one can do X, and Y, and Z?  Which one is easier to manage?  Which one should we use?  Which one do you recommend? etc., etc.

These are all great questions, though not always ones with clear, definite, universally applicable answers. So which one do we recommend you use? How do you choose in the end? Well, let me share how I see Solr and Elasticsearch past, present, and future, do a bit of comparing and contrasting, and hopefully help you make the right choice for your particular needs.

Early Days: Youth Vs. Experience

Apache Solr is a mature project with a large and active development and user community behind it, as well as the Apache brand.  First released to open source in 2006, Solr has long dominated the search engine space and was the go-to engine for anyone needing search functionality.  Its maturity translates to rich functionality beyond vanilla text indexing and searching, such as faceting, grouping (aka field collapsing), powerful filtering, pluggable document processing, pluggable search chain components, language detection, etc.

Parsing and Centralizing Elasticsearch Logs with Logstash

No, it’s not an endless loop waiting to happen. The plan here is to use Logstash to parse Elasticsearch logs and send them to another Elasticsearch cluster, or to a log analytics service like Logsene (which conveniently exposes the Elasticsearch API, so you can use it without having to run and manage your own Elasticsearch cluster).

If you’re looking for an ELK stack intro and you think you’re in the wrong place, try our 5-minute Logstash tutorial. Still, if you have non-trivial amounts of data, you might end up here again, because you’ll probably need to centralize Elasticsearch logs for the same reasons you centralize other logs:

  • to avoid SSH-ing into each server to figure out why something went wrong
  • to better understand issues such as slow indexing or searching (via slowlogs, for instance)
  • to search quickly in big logs

In this post, we’ll describe how to use Logstash’s file input to tail the main Elasticsearch log and the slowlogs. We’ll use grok and other filters to parse different parts of those logs into their own fields and we’ll send the resulting structured events to Logsene/Elasticsearch via the elasticsearch output. In the end, you’ll be able to do things like slowlog slicing and dicing with Kibana:

[Screenshot: slicing and dicing Elasticsearch slowlogs in Kibana]

TL;DR note: scroll down to the FAQ section for the whole config with comments.
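
To give a taste of the parsing part, a grok filter along these lines (a sketch, assuming the default Elasticsearch 1.x log layout of [timestamp][level][class] message) would split main-log lines into separate fields:

filter {
  grok {
    # e.g. [2014-11-05 16:31:28,950][INFO ][node                     ] starting ...
    match => [ "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{LOGLEVEL:severity}%{SPACE}\]\[%{DATA:class}%{SPACE}\]%{SPACE}%{GREEDYDATA:log_message}" ]
  }
}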

Elasticsearch Monitoring: SPM vs. Marvel

While many SPM Performance Monitoring users quickly see the benefits of SPM and adopt it in their organizations for monitoring — not just for Elasticsearch, but for their complete application stack — some Elasticsearch users evaluate SPM and compare it to Marvel from Elasticsearch.  We’ve been asked about SPM vs. Marvel enough times that we decided to put together this focused comparison to show some of the key differences and help individuals and organizations pick the right tool for their needs.

Marvel is a relatively young product that provides detailed visualization of Elasticsearch metrics in a Kibana-based UI. It installs as an Elasticsearch plug-in and includes ‘Sense’ (a developer console), plus replay functionality for shard allocation history.

SPM, on the other hand, offers multiple agent deployment modes, has both Cloud and On Premises versions, includes alerts and anomaly detection, is not limited to Elasticsearch monitoring, integrates with third party services, etc. The following Venn diagram shows key areas that SPM and Marvel have in common and also the areas where they differ.

[Venn diagram: key areas where SPM and Marvel overlap and differ]

Looking into the details surfaces many notable differences.  For example:

  • The SPM agent can run independently from the Elasticsearch process and an upgrade of the agent does not require a restart of Elasticsearch
  • Dashboards are defined with different philosophies: Marvel exposes each Metric in a separate chart, while SPM groups related metrics together in a single chart or in adjacent charts (thus making it easy for people to have more information in a single place without needing to jump between multiple views)
  • Both have the ability to show metrics from multiple nodes in a single chart: Marvel draws a separate line for each node, while in SPM you can choose to aggregate values or display them separately.

The following “SPM vs. Marvel Comparison Table” is a starting point for evaluating monitoring products against an organization’s individual needs.

SPM vs. Marvel Comparison Table

Supported Applications
  • SPM: Elasticsearch, Hadoop, Spark, Kafka, Storm, Cassandra, HBase, Redis, Memcached, NGINX(+), Apache, MySQL, Solr, AWS CloudWatch, JVM, …
  • Marvel: Elasticsearch

Agent Deployment Mode
  • SPM: in- and out-of-process (out-of-process allows for seamless updates without requiring Elasticsearch restarts)
  • Marvel: in-process (as an Elasticsearch plug-in; updates require Elasticsearch restarts)

Predefined Dashboard Graphs Organized in Groups
  • SPM: YES
  • Marvel: YES

Saving Individual Dashboards
  • SPM: each user can store multiple dashboards, mixing charts from all applications, including both metrics and logs
  • Marvel: the current view can be saved and reset to defaults; these changes are global

API for Custom Metrics and Business KPIs
  • SPM: YES
  • Marvel: NO

Extra Elasticsearch Metrics
  • SPM: NO (metrics are added based on user demand, and users can always graph them as Custom Metrics)
  • Marvel: YES (Circuit Breakers, ID Cache, Lucene memory, ES Threadpools, Percolator)

OS and JVM Metrics
  • SPM: YES (+) (adds JVM pool sizes and JVM pool utilization)
  • Marvel: YES

Correlation of Metrics with Logs, Events, Alerts, and Anomalies
  • SPM: YES (SPM and Logsene integration; ability to ingest and chart arbitrary external events)
  • Marvel: NO (Cluster Pulse displays only Elasticsearch events)

Deployment Model
  • SPM: SaaS or On Premises
  • Marvel: On Premises

Security / User Roles & Permissions
  • SPM: YES
  • Marvel: NO

Easy & Secure Sharing of Reports with Internal and External Organizations
  • SPM: YES (via short links, via embeds / iframes, via email)
  • Marvel: NO

Machine Learning-based Anomaly Detection
  • SPM: YES
  • Marvel: NO

Threshold-based Alerts
  • SPM: YES
  • Marvel: NO

Heartbeat Alerts
  • SPM: YES
  • Marvel: NO

Forwarding Alerts to 3rd Parties
  • SPM: YES (e-mail, PagerDuty, Nagios / Shinken, HipChat, Slack, webhooks)
  • Marvel: NO

Metrics Aggregation
  • SPM: pre-aggregation at multiple granularity levels, including 1-minute granularity. Advantage: more efficient storage, better scaling, and faster graphing of performance over longer time periods, at the expense of sub-minute precision.
  • Marvel: query-time aggregation, with no pre-aggregation at write time. Advantage: 10-second precision by default, at the expense of storage size, write and read performance, and memory footprint.

As an aside, most of the features in this comparison table would also apply if we compared SPM to BigDesk, ElasticHQ, Statsd, Graphite, Ganglia, Nagios, Riemann, and other application-specific monitoring or alerting tools out there.

If you have any questions about this comparison or have any feedback, please let us know!

Video and Slides: Centralized Logging with Logstash and Elasticsearch

Sematext engineer and Elasticsearch / Logstash expert Rafal Kuc gave a well-received talk at the recent DevOps Days Warsaw event.  The talk was titled “From Zero to Hero – Centralized Logging with Logstash & Elasticsearch” and you can watch the video here:

And check out the slides here:

Brief Summary

Rafal talked about the common problem of digging through logs to find one particular event — or group of them.  And going even further into this pain point — what if you have lots of servers and you don’t have a single place to look for logs?  Do you really want to ssh to one or more servers and grep log files?  Of course not!  It’s 2014 and there are tools and services that help you spend less time hunting around for problems and more time actually fixing them.

To help solve this problem Rafal guided the audience through the basics of using Logstash and Elasticsearch together as the perfect combination for handling logs from multiple applications.  Attendees also learned how to set up Logstash, how to configure it to parse logs and, finally, how to send them to an Elasticsearch cluster.

Rafal also discussed tuning Elasticsearch for log management and centralized logging purposes, and showed how to easily switch between shipping logs to a self-hosted solution like Elasticsearch / Logstash / Kibana (aka ELK) and instead ship logs to Logsene Log Management and Analytics by changing a single line in Logstash configuration.

Enjoy!  And thanks to everyone who attended Rafal’s talk in person and stopped by the Sematext booth.

Job: Sematext is hiring – Elasticsearch Engineer

The Sematext team is more distributed than your average Elasticsearch cluster and, trust me, we’ve seen a good portion of the world’s Elasticsearch clusters.  The thing with Elasticsearch clusters is that they often get new nodes added, and they keep expanding to handle more data and more queries.  Similarly, we are looking to add a new node to the Sematext team so we can reshard our work a bit, distribute it more evenly, and scale further.  In plain English, we are looking for an Engineer who loves working with Elasticsearch, loves large volumes of data, and enjoys a wide variety of projects and challenges involving large-scale data processing, high-volume indexing, and high query rates; who likes working with our clients; and who wants to make Logsene and SPM the killer log management and monitoring platforms.  Advanced knowledge of Elasticsearch is less important than a passion to learn and build, a positive attitude, and the ability to make decisions, work both independently and with the rest of the team, communicate well, and simply be a good person.  We can teach you everything about Elasticsearch and turn you into a bonsai-tree-loving Elasticsearch samurai, but we need you to be all these other things.

As a member of our team you will get to:

  • Work with world-class search experts
  • Design and implement systems (both our own and our clients’) that process 10s of thousands of queries per second and handle billions of documents, logs, data points, etc.
  • Interact with clients and customers world-wide
  • Provide guidance, architecture design, implementation, and production support around Elasticsearch
  • Participate in and contribute to open-source (we’ve contributed to Solr, Lucene, HBase, Flume, rsyslog, Logstash, etc.)
  • Share your knowledge with clients, at conferences and under-conferences, online community, etc.

This position:

  • Offers a lot of independence, learning, and growth
  • Is open to applicants “west of New York City” (this could be South, Central, or North America, of course), though we’ll happily make an exception if you persuade us we should!

Our search team members have written several books about search, regularly give talks at conferences, blog, and participate in open-source projects.  For more info, see 19 things you may like about Sematext.

Interested? Please send your resume to jobs@sematext.com.

For other job openings please see Jobs @ Sematext or even our previous job listings.

Correlating Metrics and Logs — Use Case: Elasticsearch Indexing

Here’s one way users can benefit from the SPM Performance Monitoring, Alerting and Anomaly Detection and Logsene Log Management and Analytics integration we just announced in the latest release.

Problem: CPU Utilization Hits 95%!

  • You get an alarm about a CPU usage jump to 95% (note: using classic threshold-based alerts for CPU usage is a little crazy.  SPM’s anomaly detection feature would be a much better thing to use for CPU usage metrics).
  • You wonder, naturally, why this is happening and investigate immediately.
  • Without access to log graphs — like you would have with an SPM and Logsene combination — you would not be able to tell right away that the indexing rate increased.  It could be anything.  So you would need to connect, via ssh or VPN, to a server (or servers) where the CPU jumped and start looking around and see which process has been using the most CPU.  You’d run tools like top, vmstat, etc., but of course they’d have no historical data.
  • Even knowing which process uses the most CPU is not detailed enough.  You need to start looking at logs — either in another vendor’s log management tool which does not work seamlessly with your monitoring tool or manually “grepping” through one or more potentially very large log files on one or more servers — and try to determine what this application is doing more of now than it did before.  Not surprisingly, this is error-prone, time-consuming, and needlessly manual.  Most people have better things to do and want better tools.

Solution: Use SPM and Logsene Together to Triage

With a dashboard like the one you see here, you can quickly tell what happened — i.e., why CPU usage went up.  In this particular case it is because the Elasticsearch indexing rate increased.  Now that the problem has been identified, you can move on to taking action to fix it, if a fix is needed.  Note: you can even access the actual logs via Logsene, so you can really be sure that there is no increase in errors related to higher CPU usage.

[Screenshot: combined SPM and Logsene dashboard]

We hope you found this use case helpful.  Got other performance monitoring, centralized log management, or search-related use case ideas you’d like to see?  Drop us a line!
