JOB: Elasticsearch / Lucene Engineer (starts in the Netherlands)

In addition to looking for an Elasticsearch / Solr Engineer to join the Sematext team, we are also looking for a Lucene / Elasticsearch Engineer in the EU for a specific project.  This project calls for 6 months of on-site work with our client in the Netherlands.  After 6 months, the collaboration with our client would continue remotely if there is more work to be done for the client or, if the client project(s) are over, this person would join our global team of Engineers and Search Consultants and work remotely (we are all very distributed over several countries and continents). This is a position focused on search – it involves working with Elasticsearch, but also requires enough understanding of Lucene to write custom Elasticsearch/Lucene components, such as tokenizers. Here are some of the skills one should have for this job:

  •  knowledge of different types of Lucene queries/filters (boolean, spans, etc.) and their capabilities
  •  experience extending out-of-the-box Lucene functionality by developing custom queries, scorers, and collectors
  •  understanding of how Lucene analyzes documents during indexing, and experience writing custom analyzers (see the sketch after this list)
  •  experience in mapping advanced hierarchical data structures to Lucene fields
  •  experience in scalable distributed open-source search technologies such as Elasticsearch or Solr
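
To give a flavor of what such a component looks like, here is a minimal, hedged sketch of a custom Lucene analyzer in the 3.x style (class names and signatures vary between Lucene versions):

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

public final class LowercasingWhitespaceAnalyzer extends Analyzer {
  @Override
  public TokenStream tokenStream(String fieldName, Reader reader) {
    // Tokenize on whitespace, then normalize every token to lower case.
    TokenStream stream = new WhitespaceTokenizer(Version.LUCENE_35, reader);
    return new LowerCaseFilter(Version.LUCENE_35, stream);
  }
}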

The above is not much information to go by, but if this piqued your interest and if you think you are a good match, please fix up your resume and send it to jobs@sematext.com quickly.

Community Voting for Sematext Talks at Lucene/Solr Revolution 2014

The biggest open source conference dedicated to Apache Lucene/Solr takes place in November in Washington, DC.  If you are planning to attend — and even if you are not — you can help improve the conference’s content by voting for your favorite talk topics.  The top vote-getters for each track will be added to Lucene/Solr Revolution 2014 agenda.

Not surprisingly for one of the leading Lucene/Solr products and services organizations, Sematext has two contenders in the Tutorial track: Tuning Solr for Logs by Radu Gheorghe and Solr Anti-Patterns by Rafał Kuć.

We’d love your support to help us contribute our expertise to this year’s conference.  To vote, simply click on the above talk links and you’ll see a “Vote” button in the upper left corner.  That’s it!

To give you a better sense of what Radu and Rafal would like to present, here are their talk summaries:

Tuning Solr for Logs – by Radu Gheorghe

Performance tuning is always nice for keeping your applications snappy and your costs down. This is especially the case for logs, social media and other stream-like data that can easily grow into terabyte territory.

While you can always use SolrCloud to scale out of performance issues, this talk is about optimizing. First, we’ll talk about Solr settings by answering the following questions:

  • How often should you commit and merge?
  • How can you have one collection per day/month/year/etc?
  • What are the performance trade-offs for these options?

Then, we’ll turn to hardware. We know SSDs are fast, especially on cold-cache searches, but are they worth the price? We’ll give you some numbers and let you decide what’s best for your use case.

The last part is about optimizing the infrastructure pushing logs to Solr. We’ll talk about tuning Apache Flume for handling large flows of logs and about overall design options that also apply to other shippers, like Logstash. As always, there are trade-offs, and we’ll discuss the pros and cons of each option.

Solr Anti-Patterns – by Rafał Kuć

Working as consultants and software engineers, and helping people in various ways, we see many patterns in how Solr is used – and how it should be used. We usually say what should be done, but we rarely point out which roads should not be taken. That’s why I would like to point out common mistakes and paths that should be avoided at all costs. During the talk I would like not only to show the bad patterns, but also to show the difference before and after fixing them.

The talk is divided into three major sections:

  1. We will start with general configuration pitfalls that people tend to make. We will discuss different use cases, showing the proper path one should take.
  2. Next, we will focus on data modeling and what to avoid when making your data indexable. Again, we will look at real-life use cases, followed by a description of how to handle them properly.
  3. Finally, we will talk about queries and all the juicy mistakes made when searching the indexed data.

Each use case will be illustrated with a before-and-after analysis – we will see how the metrics change, so the talk will bring not only facts, but hopefully also know-how worth remembering.

Thank you for your support!

4 Lucene Revolution Talks from Sematext

Bingo! We’re 4 of 4 at Lucene Revolution 2013 – 4 talk proposals and all 4 accepted!  We are hiring just so next year we can attempt getting 5 talks in. ;)  We’ll also be exhibiting at the conference, so stop by.  We will be giving away Solr and Elasticsearch books.  Here’s what we’ll be talking about in Dublin on November 6th and 7th:

In Using Solr to Search and Analyze Logs Radu will be talking about … well, you guessed it – using Solr to analyze logs.  After this talk you may want to run home (or back to the hotel), hack on Logstash or Flume and Solr, and get Solr to eat your logs… but don’t forget we have to keep Logsene well fed.  Feed this beast your logs like we feed it ours and help us avoid getting eaten by our own creation.

Abstract:

Many of us tend to hate or simply ignore logs, and rightfully so: they’re typically hard to find, difficult to handle, and are cryptic to the human eye. But can we make logs more valuable and more usable if we index them in Solr, so we can search and run real-time statistics on them? Indeed we can, and in this session you’ll learn how to make that happen. In the first part of the session we’ll explain why centralized logging is important, what valuable information one can extract from logs, and we’ll introduce the leading tools from the logging ecosystems everyone should be aware of – from syslog and log4j to LogStash and Flume. In the second part we’ll teach you how to use these tools in tandem with Solr. We’ll show how to use Solr in a SolrCloud setup to index large volumes of logs continuously and efficiently. Then, we’ll look at how to scale the Solr cluster as your data volume grows. Finally, we’ll see how you can parse your unstructured logs and convert them to nicely structured Solr documents suitable for analytical queries.

Rafal will teach about Scaling Solr with SolrCloud in a 75-minute session.  Prepare for taking lots of notes and for scaling your brain both horizontally and vertically while at the same time avoiding split-brain.

Abstract:

Configure your Solr cluster to handle hundreds of millions of documents without even noticing, handle queries in milliseconds, and use Near Real Time indexing and searching with document versioning. Scale your cluster both horizontally and vertically by using shards and replicas. In this session you’ll learn how to make your indexing process blazing fast and make your queries efficient even with large amounts of data in your collections. You’ll also see how to optimize your queries to leverage caches as much as your deployment allows and how to observe your cluster with the Solr administration panel, JMX, and third-party tools. Finally, learn how to make changes to already deployed collections – split their shards and alter their schema using the Solr API.

Rafal doesn’t like to sleep.  He prefers to write multiple books at the same time and give multiple talks at the same conference.  His second talk is about Administering and Monitoring SolrCloud Clusters – something we and our customers do with SPM all the time.

Abstract:

Even though Solr can run without causing any trouble for long periods of time, it is very important to monitor and understand what is happening in your cluster. In this session you will learn how to use various tools to monitor how Solr is behaving at a high level, but also at the Lucene, JVM, and operating system levels. You’ll see how to react to what you see and how to make changes to configuration, index structure, and shard layout using the Solr API. We will also discuss the different performance metrics to which you ought to pay extra attention. Finally, you’ll learn what to do when things go awry – we will share a few examples of troubleshooting and then dissect what was wrong and what had to be done to make things work again.

Otis has aggregation coming out of his ears and dreams about data visualization, timeseries graphs, and other romantic visuals. In Solr for Analytics: Metrics Aggregations at Sematext we’ll share our experience running SPM on top of SolrCloud (vs. HBase, which we currently use).

Abstract:

While Solr and Lucene were originally written for full-text search, they are capable and increasingly used for Analytics, as Key Value Stores, NoSQL databases, and more. In this session we’ll describe our experience with Solr for Analytics. More specifically, we will describe a couple of different approaches we have taken with SolrCloud for aggregation of massive amounts of performance metrics, we’ll share our findings, and compare SolrCloud with HBase for large-scale, write-intensive aggregations. We’ll also visit several Solr new features that are in the works that will make Solr even more suitable for Analytics workloads.

See you in Dublin!

ElasticSearch Cache Usage

We’ve been doing a ton of work with ElasticSearch. Not long ago, we had a few situations where ElasticSearch would “eat” all the JVM heap memory we gave it.  It was so hungry, we could not feed it enough memory to keep it happy.  It was insatiable.  After some troubleshooting and looking at SPM for ElasticSearch (btw. we released a new version of the SPM agent earlier this week, so if you don’t have it, go grab agent v1.5.0) we figured out the cause – the default ElasticSearch field data cache settings were not quite right for our deployment. In this post we’ll share our experience on this topic, explain why this was happening, and show how to minimize the negative effect of large field data caches.

ElasticSearch Cache Types

There are two types of caches in ElasticSearch whose behavior you can control. The first is the filter cache. This cache is responsible for caching the results of filters used in your queries. This is very handy, because after a filter is run once, ElasticSearch will reuse the values stored in the filter cache for subsequent queries, saving precious disk I/O operations and thus speeding up query execution. There are two main implementations of the filter cache in ElasticSearch:

  1. node filter cache (default)
  2. index filter cache

The node filter cache is an LRU cache, which means that the least recently used items are evicted when the filter cache is full. Its size can be limited either to a percentage of the total memory allocated to the Java process or to an exact amount of memory. The second type of filter cache is the index filter cache. It is not recommended for use because in most cases you can’t predict how much memory it will use, since that depends on which shards are located on which node. In addition, you can’t control the amount of memory used by the index filter cache; you can only set its expiration time and the maximum number of entries in the cache.

The second type of cache in ElasticSearch is the field data cache. This cache is used mostly for sorting and faceting. It loads all values from the field you sort or facet on and then performs calculations on the loaded values. You can imagine that the cost of building such a cache for a large amount of data might be very high.  And it is.  Apart from its type (which can be either resident or soft), you can control two additional parameters of the field data cache – the maximum number of entries in it and its expiration time.

The Defaults

The default implementation for the filter cache is the node filter cache, with its size capped at 20% of the memory allocated to the Java process. As you can imagine, there is nothing to worry about here – if the cache fills up, the appropriate cache entries get evicted.  You can then consider adding more RAM to make the filter cache bigger, or you can live with the evictions. That’s perfectly acceptable.

On the other hand, the default settings for the ElasticSearch field data cache make it a resident cache with unlimited size. Yes, unlimited. The cost of rebuilding this cache is very high, and thus you must know how much memory it can use – you must control your queries and watch what you sort on and which fields you facet on.

What Happens When You Don’t Control Your Cache Size?

This is what can happen when you don’t control your field data cache size:

As you can see on the chart above, the field data cache jumped to more than 58 GB, which is enormous. Yes, we got an OutOfMemory exception during that time.

What Can You Do?

There are actually three things you can do to make your field data cache use less memory:

Control its Size and Expiration Time

When using the default, resident field data cache type, you can set its size and expiration time. However, please remember that there are situations when the field data cache needs to hold the values of the particular field you are sorting or faceting on. To change the field data cache size, you specify the following property:

index.cache.field.max_size

It specifies the maximum number of entries in that cache per Lucene segment. It doesn’t limit the amount of memory the field data cache can use, so you have to do some testing to ensure the queries you are using won’t result in an OutOfMemory exception.

The other property you can set is the expiration time.  It defaults to -1, which means that cache entries never expire. To change that, set the following property:

index.cache.field.expire

So if, for example, you would like to have a maximum of 50k field data cache entries per segment and you would like those entries to expire after 10 minutes, you would set the following property values in the ElasticSearch configuration file:

index.cache.field.max_size: 50000
index.cache.field.expire: 10m

Change its Type

The other thing you can do is change the field data cache type from the default resident to soft. Why does that matter? ElasticSearch uses the Google Guava libraries to implement its cache. The soft type wraps cache values in soft references, which means that whenever memory is needed, the garbage collector will clear those references, even if the cached values are still in the cache. This means that when you start hitting the heap memory limit, the JVM won’t throw an OutOfMemory exception, but will instead release those soft references via the garbage collector. More about soft references can be found at:

http://docs.oracle.com/javase/6/docs/api/java/lang/ref/SoftReference.html
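
To make the soft-reference behavior concrete, here is a tiny standalone Java sketch (not ElasticSearch code) showing a cached value that the garbage collector is allowed to reclaim under memory pressure:

import java.lang.ref.SoftReference;

public class SoftReferenceDemo {
  public static void main(String[] args) {
    // Wrap a large array in a soft reference and keep no strong reference to it.
    SoftReference<long[]> cacheEntry = new SoftReference<long[]>(new long[10000000]);

    // When the heap gets tight, the GC may clear the soft reference instead of
    // letting the JVM fail with an OutOfMemory error.
    long[] values = cacheEntry.get();
    System.out.println(values == null ? "entry was reclaimed" : "entry still cached");
  }
}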

So in order to change the default field data cache type to soft, you should add the following property to the ElasticSearch configuration file:

index.cache.field.type: soft

Change Your Data

The last thing you can do requires much more effort than just changing the ElasticSearch configuration – you may want to change your data. Look at your index structure, look at your queries, and think. Maybe you can lowercase some string data and thereby reduce the number of unique values in the field? Maybe you don’t need your dates to be precise down to the second – maybe a minute, or even an hour, is enough? Of course, when doing faceting you can set the granularity, but the data will still be loaded into memory. So if there are parts of your data that can be changed in a way that results in lower memory consumption, you should consider it. As a matter of fact, that is exactly what we did!
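
As a rough illustration of the kind of normalization we mean (the method and field names below are made up), lowercasing strings and rounding timestamps before indexing shrinks the set of unique values the field data cache has to hold:

import java.util.Locale;

public class FieldDataFriendlyValues {
  // Lowercasing collapses "ERROR", "Error" and "error" into a single term.
  static String normalizeString(String raw) {
    return raw.toLowerCase(Locale.ROOT);
  }

  // Rounding timestamps down to the minute turns up to 60,000 distinct
  // millisecond values into a single value per minute.
  static long roundToMinute(long epochMillis) {
    return (epochMillis / 60000L) * 60000L;
  }

  public static void main(String[] args) {
    System.out.println(normalizeString("Payment FAILED"));   // payment failed
    System.out.println(roundToMinute(1325422385123L));       // 1325422380000
  }
}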

Caches After Some Changes

After we made some changes in our ElasticSearch configuration/deployment, this is what the field data cache usage looked like:

As you can see, the cache dropped from 58 GB down to 37 GB.  Impressive drop!  After these changes we stopped running into OutOfMemory exception problems.

Summary

You have to remember that the default field data cache settings in ElasticSearch may not be appropriate for your deployment. You may not need any sorting beyond the default sort on Lucene score, and you may not need faceting on fields with many unique terms. If that’s the case for you, don’t worry about the field data cache. Likewise, if you have enough memory to hold the field data for your facets and sorting, you don’t need to change anything in the default ElasticSearch cache setup. What you do need to remember is to monitor your JVM heap memory usage and cache statistics, so you know what is happening in your cluster and can react before things get worse.

One More Thing

The charts you see in the post are taken from SPM (Scalable Performance Monitoring) for ElasticSearch.  SPM is currently free and, as you can see, we use it extensively in our client engagements.  If you give it a try, please let us know what you think and what else you would like to see in it.

@sematext (Like working with ElasticSearch?  We’re hiring!)

Wanted Dead or Alive: Search Engineer with Client-facing Skills

We are on the lookout for a strong Search Engineer with an interest in and the ability to interact with clients, and with the potential to build and lead local and/or remote development teams.  By “client-facing” we really mean primarily email, phone, and Skype.

A person in this role needs to be able to:

  • design large-scale search systems
  • demonstrate solid knowledge of Solr, ElasticSearch, or both
  • efficiently troubleshoot performance, relevance, and other search-related issues
  • speak and interact with clients

Pluses – beyond pure engineering:

  • ability and desire to expand and lead a development/consulting team
  • ability to think in both business and engineering terms
  • ability to build products and services based on observed client needs
  • ability to present in public, at meetups, conferences, etc
  • ability to contribute to blog.sematext.com
  • active participation in online search communities
  • attention to detail
  • desire to share knowledge and teach
  • positive attitude, humor, agility

We’re small and growing.  Our HQ is in Brooklyn, but our team is spread over 4 continents.  If you follow this blog you know we have deep expertise in search and big data analytics and that our team members are conference speakers, book authors, Apache members, open-source contributors, etc.  While we are truly international, this particular opening is in New York.  Speaking of New York, some of our New York City clients that we are allowed to mention are Etsy, Gilt, Tumblr, Thomson Reuters, Simon & Schuster (more on http://sematext.com/clients/index.html).

Relevant pointers:

If you are interested, please send over some information about yourself, your CV, and let’s talk.

Sematext Presenting Open Source Search Safari at ESS 2012

We are continuing our “new tradition” of presenting at Enterprise Search Summit (ESS) conferences.  We presented at ESS in 2011 (see http://blog.sematext.com/2011/11/02/search-analytics-at-enterprise-search-summit-fall-2011-presentation/).  This year we’ll be giving a talk titled Open Source Search Safari, in which Otis (@otisg) will present a number of open-source search solutions – Lucene, Solr, ElasticSearch, and Sensei, plus maybe one or two others (suggestions?).  We’ll also be chained to booth 26, where we’ll be showcasing our search-vendor-neutral Search Analytics service (the service is currently still free – feel free to use it all you want), along with some of our other search-related products.

ESS East will be held May 15-16 in our own New York City.  If you are a past or prospective client or customer of ours, please get in touch if you are interested in attending ESS at a discount.

Lucene & Solr Year 2011 in Review

The year 2011 is coming to an end and it’s time to reflect on the past 12 months.  Without further fluff, let’s look back and summarize the significant events that happened in the Lucene and Solr world over the course of the last dozen months. In the next few paragraphs we’ll go over major changes in Lucene and Solr, new blood, relevant conferences, and books.

We should start by pointing out that this year Apache Lucene celebrated its 10-year anniversary as an Apache Software Foundation project.  Lucene itself is actually more than 10 years old.  Otis is one of the very few people from the early years who is still around.  While we didn’t celebrate any Solr anniversaries this year, we should note that Solr, too, has been around for quite a while and is in fact approaching its 6th year at the ASF!

This year saw numerous changes and additions in both Lucene and Solr.  As a matter of fact, we’d venture to say we saw more changes in Lucene & Solr this year than in any one year before.  In that sense, both projects are very much like wine – getting better with time. Let’s take a look at a few of the most significant changes in 2011.

The much anticipated Near Real-Time search (NRT) functionality has arrived.  What this means is that documents that were just added to a Lucene/Solr index can immediately be made visible in search results.  This is big!  Of course, work on NRT is still in progress, but NRT is ready and you, like a number of our clients, should start using it.
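
To make this concrete for pure Lucene users, here is a minimal, hedged sketch of what an NRT reader looks like with the Lucene 3.x API (exact class names and signatures vary between versions):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NrtSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35)));

    Document doc = new Document();
    doc.add(new Field("title", "near real-time search", Field.Store.YES, Field.Index.ANALYZED));
    writer.addDocument(doc);

    // Open a near-real-time reader directly from the writer: the new document
    // is searchable without calling writer.commit() first.
    IndexReader reader = IndexReader.open(writer, true);
    IndexSearcher searcher = new IndexSearcher(reader);
    TopDocs hits = searcher.search(new TermQuery(new Term("title", "search")), 10);
    System.out.println("hits: " + hits.totalHits); // 1 – visible without a commit

    searcher.close();
    reader.close();
    writer.close();
  }
}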

Field Collapsing was one of the most watched and voted-for JIRA issues for many months.  This functionality was implemented this year, and Lucene and Solr users can now group results by a field or a query. In addition, you can control the groups and even compute facets on the groups rather than on single documents. A rather powerful feature.
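
On the Solr side, a hedged SolrJ sketch of what a grouped query looks like (the field names are made up for illustration):

import org.apache.solr.client.solrj.SolrQuery;

public class GroupingQuerySketch {
  public static void main(String[] args) {
    SolrQuery query = new SolrQuery("laptop");
    query.set("group", true);             // enable result grouping
    query.set("group.field", "brand");    // one group per brand value (hypothetical field)
    query.set("group.limit", 3);          // top 3 documents per group
    System.out.println(query);            // prints the encoded query parameters
  }
}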

From the Lucene user’s perspective it is also worth noting that Lucene finally got a faceting module.  Until now, faceting was available only in Solr.  If you are a pure Lucene user, you no longer need Solr to calculate facets.

In the past, modeling parent-child relationships in Lucene and Solr indices was not really possible – one had to flatten everything.  No longer – if you need to model a parent-child relationship in your index, you can use the Join contrib module.  This Join functionality lets you join parent and child documents at query time, while relying on some assumptions about how the documents were indexed.

Good and broad language support is hugely important for any search solution, and this year was good for Lucene and Solr in that department: the KStemFilter English stemmer was added, full Unicode 4 support was added, new Japanese and Chinese support was added, a new stemmer-protection mechanism was added, work was done to reduce the synonym filter’s RAM consumption, etc.  Another big addition was the integration with Hunspell, which enables language-specific processing for all languages supported by OpenOffice.  That’s a lot of new languages we can now handle with Lucene and Solr! And there is more.

Lucene 3.5.0 significantly reduced the term dictionary memory footprint. Big!  Lucene now uses 3 to 5 times less memory when dealing with the term dictionary, so it is even less RAM-hungry.

If you use Lucene and need to page through a lot of results, you may run into problems. That’s why Lucene 3.5.0 introduced the searchAfter method, which solves the deep paging problem once and for all!
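
Here is a minimal sketch of how searchAfter-based paging differs from classic offset paging (assuming an existing IndexSearcher and Query):

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class DeepPagingSketch {
  // Fetch the next page of results after the last hit of the previous page.
  static TopDocs nextPage(IndexSearcher searcher, Query query,
                          ScoreDoc lastHitOfPreviousPage, int pageSize) throws Exception {
    if (lastHitOfPreviousPage == null) {
      return searcher.search(query, pageSize);            // first page
    }
    // searchAfter skips everything up to (and including) the given hit, so the
    // cost no longer grows with the page number the way search(query, n * pageSize) does.
    return searcher.searchAfter(lastHitOfPreviousPage, query, pageSize);
  }
}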

There is also a new, fast and reliable Term Vector-based highlighter that both Lucene and Solr can use.
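
A hedged sketch of driving that highlighter from Java (assuming the field was indexed with term vectors, positions, and offsets; method signatures may differ slightly between versions):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.vectorhighlight.FastVectorHighlighter;
import org.apache.lucene.search.vectorhighlight.FieldQuery;

public class HighlightingSketch {
  // Returns up to maxFragments highlighted snippets of the "content" field for one hit.
  static String[] highlight(IndexReader reader, Query query, int docId, int maxFragments)
      throws Exception {
    FastVectorHighlighter highlighter = new FastVectorHighlighter();
    FieldQuery fieldQuery = highlighter.getFieldQuery(query);
    // The "content" field must have been indexed with term vectors, positions, and offsets.
    return highlighter.getBestFragments(fieldQuery, reader, docId, "content", 100, maxFragments);
  }
}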

Dismax is great, but the Extended Dismax (edismax) query parser added to Solr is even better – it extends the Dismax query parser functionality and can further improve the quality of search results.
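
A hedged SolrJ sketch of switching a query over to edismax (the field names and mm value are illustrative, not a recommendation):

import org.apache.solr.client.solrj.SolrQuery;

public class EdismaxQuerySketch {
  public static void main(String[] args) {
    SolrQuery query = new SolrQuery("solr deep paging");
    query.set("defType", "edismax");     // use the Extended Dismax parser
    query.set("qf", "title^3 body");     // hypothetical fields, title boosted
    query.set("mm", "2<75%");            // minimum-should-match: be lenient on longer queries
    System.out.println(query);
  }
}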

You can now also sort by function (imagine sorting results by their distance from a point), and there is new spatial search with filtering.
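
A hedged SolrJ sketch combining function-based sorting with the new spatial filtering (the field name and coordinates are made up):

import org.apache.solr.client.solrj.SolrQuery;

public class SpatialSortSketch {
  public static void main(String[] args) {
    SolrQuery query = new SolrQuery("pizza");
    query.set("sfield", "location");           // hypothetical LatLonType field
    query.set("pt", "52.37,4.89");             // the point we measure distance from
    query.addFilterQuery("{!geofilt d=10}");   // keep only documents within 10 km
    query.set("sort", "geodist() asc");        // sort by function: distance to the point
    System.out.println(query);
  }
}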

Solr also got new suggest/autocomplete functionality based on an FST automaton, which significantly reduces the memory needed for such functionality.  If you need this for your search application, have a look at Sematext’s AutoComplete – it has additional functionality that lots of our customers like.

While not yet officially released, the new transaction log support provides Solr with a real-time get operation – as soon as you add a document you can retrieve it by ID.  This will also be used for recovering nodes in SolrCloud.

And talking about SolrCloud…  We’ve covered SolrCloud on this blog before in Solr Digest, Spring-Summer 2011, Part 2: Solr Cloud and Near Real Time Search, and we’ll be covering it again soon.  In short, SolrCloud will make it easier to operate larger Solr clusters by using more modern design principles and software components, such as ZooKeeper, that make building distributed, cluster-based software and services easier.  Among the core features: there will be no single point of failure, any node will be able to handle any operation, there will be no traditional master-slave setup, cluster management and configuration will be centralized, failover will be automatic, and in general things will be much more dynamic.  SolrCloud has not been released yet, but Solr developers are working on it and the codebase is seeing good progress.  We’ve used SolrCloud in a few recent engagements with our customers and were pleased by what we saw.

After the development of the two projects was merged back in 2010, we saw a speed-up in development and releases: Lucene and Solr committers put out five(!) new versions of both projects this year.

  • In March, Lucene and Solr 3.1 were released with Unicode 4 support, ReusableTokenStream, spatial search, the Vector-based Highlighter, the Extended Dismax parser, and many more features and bug fixes.
  • Less than 3 months later, on June 4th, version 3.2 was released. This release introduced the new and much-desired results grouping module, NRTCachingDirectory, and highlighting performance improvements.
  • Just one month later, on July 1st, Lucene and Solr 3.3 were released. That release included the KStem stemmer, new Spellchecker implementations, Field Collapsing in Solr, and RAM usage reduction for the autocomplete mechanism.
  • By the end of summer there was another release: version 3.4, on the 14th of September. Pure Lucene users got what Solr had been able to do for a very long time – the long-awaited faceting module, contributed by IBM. Version 3.4 also included the new Join functionality, the ability to turn off query and filter caches, and facet calculation for Field Collapsing.
  • The last release of Lucene and Solr saw the light of day in late November. Version 3.5.0 brought a huge memory reduction when dealing with term dictionaries, deep paging support, the SearcherManager and SearcherLifetimeManager classes, language identification provided by Tika, and sortMissingFirst and sortMissingLast support for TrieFields.

During the last 12 months we attended three major conferences focused on search and big data. Lucene Revolution took place in San Francisco in May. Otis gave a talk titled “Search Analytics: What? Why? How?” (slides) during the first day. There were a number of other good talks there, and the complete conference agenda is available at http://lucenerevolution.com/2011/agenda. Some videos are available as well. Next came Berlin Buzzwords, a more grass-roots conference which took place between the 4th and 10th of June. Otis gave an updated version of his “Search Analytics: What? Why? How?” talk. If you want to know more, check the conference’s official site – http://berlinbuzzwords.de. The last conference, focused exclusively on Lucene and Solr, was Lucene Eurocon 2011, held in sunny and tourist-filled Barcelona between the 17th and 20th of October. And guess what – we were there again (surprise!), this time in slightly larger numbers. Otis gave a talk about “Search Analytics: Business Value & BigData NoSQL Backend” (video, slides) and Rafał gave a talk on a pretty popular topic – “Explaining & Visualizing Solr ‘explain’ information” (video, slides).

No open source project can endure without regular injections of new blood. This year, the Lucene and Solr development team was joined by a number of new people whose names may look familiar to you:

These 7 men are now Lucene and Solr committers and we look forward to our next year’s Year in Review post, where we hope to go over the good things these people will have brought to Lucene and Solr in 2012.

You know an open source project is successful when a whole book is dedicated to it.  You know a project is very successful when more than one book and more than one publisher cover it.  There were no new editions of Lucene in Action (amazon, manning) this year, but our own Rafał Kuć published his Solr 3.1 Cookbook (amazon) in July.  Rafał’s cookbook includes a number of recipes that can make your life easier when it comes to solving common problems with Apache Solr. Another book, Apache Solr 3 Enterprise Search Server (amazon) by David Smiley and Eric Pugh was published in November. This is a major update to the first edition of the book and it covers a wide range of functionalities of Apache Solr.

@sematext

Solr Performance Monitoring with SPM

Originally delivered as Lightning Talk at Lucene Eurocon 2011 in Barcelona, this quick presentation shows how to use Sematext’s SPM service, currently free to use for unlimited time, to monitor Solr, OS, JVM, and more.

We built SPM because we wanted to have a good and easy to use tool to help us with Solr performance tuning during engagements with our numerous Solr customers.  We hope you find our Scalable Performance Monitoring service useful!  Please let us know if you have any sort of feedback, from SPM functionality and usability to its speed.  Enjoy!

AutoComplete with Suggestion Groups

While Otis is talking about our new Search Analytics (it’s open and free now!) and Scalable Performance Monitoring (it’s also open and free now!) services at Lucene Eurocon in Barcelona, Pascal, one of the new members of our multinational team at Sematext, is working on improvements to our search-lucene.com and search-hadoop.com sites.  One of the recent improvements is to the AutoComplete functionality we have there.  If you’ve used these sites before, you may have noticed that the AutoComplete there now groups suggestions.  In the screen capture below you can see several suggestion groups divided by pink lines.  Suggestions can be grouped by any criterion, and here we have them grouped by the source of the suggestions.  The very first suggestion is from “mail # general”, which is our name for the “general” mailing list that some of the projects we index have.  The next two suggestions are from “mail # user”, followed by two suggestions from “mail # dev”, and so on.  On the left side of each suggestion you can see icons that signify the type of suggestion and help people more quickly focus on the types of suggestions they are after.

Sematext AutoComplete Segments

Sematext AutoComplete Suggestion Grouping

The other interesting thing you can see in this AutoComplete implementation is a custom footer, which allows one to show any sort of messaging or even advertising to the end user.  We should also point out that one can use Sematext AutoComplete to embed custom suggestions, such as ads, and have them show up in the list of suggestions only when their (meta)data matches the input, yet displayed differently from the rest of the suggestions to make it clear they are ads and not ordinary suggestions.

We hope you find this functionality useful.  Please let us know what you think and if you have other suggestions for how to improve either AutoComplete or search-lucene.com and search-hadoop.com, please let us know – we do listen!

The State of Solandra – Summer 2011

A little over 18 months ago we talked to Jake Luciani about Lucandra – a Cassandra-based Lucene backend.  Since then Jake has moved away from raw Lucene and married Cassandra with Solr, which is why Lucandra now goes by Solandra.  Let’s see what Jake and Solandra are up to these days.

What is the current status of Solandra in terms of features and stability?

Solandra has gone through a few iterations. First as Lucandra, which partitioned data by terms and used Thrift to communicate with Cassandra.  This worked for a few big use cases, mainly managing an index per user, and garnered a number of adopters.  But it performed poorly when you had very large indexes with many dense terms, due to the number and size of remote calls needed to fulfill a query. Last summer I started off on a new approach based on Solr that would address Lucandra’s shortcomings: Solandra.  The core idea of Solandra is to use Cassandra as a foundation for scaling Solr.  It achieves this by embedding Solr in the Cassandra runtime and using the Cassandra routing layer to auto-shard an index across the ring (by document).  This means good random distribution of data for writes (using Cassandra’s RandomPartitioner) and good search performance, since individual shards can be searched in parallel across nodes (using Solr distributed search).  Cassandra is responsible for sharding, replication, failover, and compaction.  The end user now gets a single scalable component for search, without changing APIs, which will scale in the background for them.  Since search functionality is performed by Solr, it will support anything Solr does.

I gave a talk recently on Solandra and how it works: http://blip.tv/datastax/scaling-solr-with-cassandra-5491642

Are you still the sole developer of Solandra?  How much time do you spend on Solandra?
Have there been any external contributions to Solandra?

I still am responsible for the majority of the code, however the Solandra community is quite large with over 500 github followers and 60 forks.  I receive many useful bug reports and patches through the community.  Late last year I started working at DataStax (formerly Riptano) to focus on Apache Cassandra.   DataStax is building a suite of products and services to help customers use Cassandra in production and incorporate Cassandra into existing enterprise infrastructure.  Solandra is a great example of this. We currently have a number of customers using Solandra and we encourage people interested in using Solandra to reach out to us for support.

What are the most notable differences with Solandra and Solr?

The primary difference is the ability to grow your Solr infrastructure seamlessly using Cassandra. I purposely want to avoid altering Solr functionality, since the primary goal here is to make it easy for users to migrate to and from Solandra and Solr.  That being said, Solandra does offer some unique features for managing millions of indexes. One is that different Solr schemas can be injected at runtime using a RESTful interface; another is that Solandra supports the concept of virtual Solr cores, which share the same core but are treated as different indexes. For example, if you have a core called “inbox”, you can create an index per user like “inbox.otis” or “inbox.jake” just by changing the endpoint URL.

Finally, Solandra has a bulk loading interface that makes it easy to index large amounts of data at a time (one known cluster indexes at ~4-5MB of text per second).

What are the most notable differences with Solandra and Elastic Search?

ElasticSearch is more mature and offers a similar architecture for scaling search though not based on Cassandra or Solr.  I think ElasticSearch’s main weakness is it requires users to scrap their existing code and tools to use it.  On the other hand, Solandra provides a scalable platform built on Solr and lets you grow with it.

Solandra doesn’t use the Lucene index file format, so it can grow to support millions of indexes. Systems like Solr and ElasticSearch create a directory per index, which makes managing millions of indexes very hard. The flipside is that a lot of performance tweaks are lost by not using the native file format; most of the current work on Solandra relates to improving single-node performance.

Solandra is a single component that gives you search AND NoSQL database, and is therefore much easier to manage from the operations perspective IMO.

What do you plan on adding to Solandra that will make it clearly stand out from Solr or Elastic Search?

Solandra will continue to grow with Solr (4.0 will be out in the future), as well as with Cassandra. Right now Solandra’s real-time search is limited by the performance of Solr’s field cache implementation. By incorporating Cassandra triggers I think we can remove this bottleneck and get really impressive real-time performance at scale, due to how Solandra pre-allocates shards.

Also, since the Solr index is stored in the Cassandra datamodel, you can now apply some really interesting features of Cassandra to Solr, such as expiring indexes and triggered searches.

When should one use Solandra?

If you say yes to any of the following you should use Solandra:

  • I have more data than can fit on a single box
  • I have potentially millions of indexes
  • I need improved indexing with multi-master writes
  • I need multi-datacenter search
  • I am already using Cassandra and Solr
  • I am having trouble managing my Solr cluster

When should one not use Solandra?

If you are happy with your Solr system today and you have enough capacity to scale the size and number of indexes comfortably, then there is no need to use Solandra.  Also, Solandra is under active development, so you should be prepared to help diagnose unknown issues.  Note also that if you require search features that are currently not supported by Solr distributed search, you should not use Solandra.

Are there known problems with Solandra that users should be aware of?

Yes, currently index sizes can be much larger in Solandra than in Solr (in some cases 10x). This is due to how Solandra indexes data, as well as Cassandra’s file format. Cassandra 1.0 includes compression, so that will help quite a bit. Also, since consistency in Solandra is tunable, your application has to consider the implications of writing data at lower consistency levels. Finally, one thing that keeps coming up quite often is users assuming Solandra automatically indexes the data you already have in Cassandra, since Solandra builds on Cassandra.  This is not the case.  Data must be indexed and searched through the traditional Solr APIs.

Is anyone using Solandra in production? What is the biggest production deployment in terms of # docs, data size on filesystem, query rate?

Solandra is now in production with a handful of users I know of.  Some others are in the testing/pre-production stage. But it’s still a small number AFAIK. The largest Solandra cluster I know of is on the order of ~5 nodes and ~10TB of data, with ~100k indexes and ~2B documents.

If you had to do it all over, what would you do differently?

I’m really excited with the way Lucandra/Solandra has evolved over the past year. It’s been a great learning experience and has allowed me to work with technologies and people I’m really, really excited about. I don’t think I’d change a thing, great software takes time.

When is Solandra 1.0 coming out and what is the functionality/issues that remain to be implemented before 1.0?

I don’t really use the 1.0 moniker as people tend to assume too much when they read that. I think once Solandra is fully documented, supports things like Cassandra based triggers for indexing and search, and has an improved on disk format, I’d be comfortable calling Solandra 0.9 :)

Thank you Jake.  We are looking forward to Solandra 0.9 then.
