Sensei: distributed, realtime, semi-structured database

Once upon a time there was no decent open-source search engine.  Then, at the very beginning of this millennium, Doug Cutting gave us Lucene.  Several years later Yonik Seeley wrote Solr.  In 2010 Shay Banon released ElasticSearch.  And just a few days ago John Wang and his team at LinkedIn announced Sensei 1.0.0 (also known as SenseiDB).  Here at Sematext we’ve been aware of Sensei for a while now (2 years?) and are happy to have one more great piece of search software available for our own needs and those of our customers.  As a matter of fact, we are so excited about Sensei that we’ve already started hacking on adding support for Sensei to SPM, our Scalable Performance Monitoring tool and service!  Since Sensei is brand new, we asked John to tell us a little more about it.

Please tell us a bit about yourself.

My name is John Wang, and I am the search architect at LinkedIn.com. I am the creator and the current lead for the Sensei project.

Could you describe Sensei for us?

Sensei is an open-source, elastic, realtime, distributed database with native support for searching and navigating both unstructured text and structured data. Sensei is designed to handle complex semi-structured queries on very large, and rapidly changing datasets.

It was written by the content search team at LinkedIn to support the LinkedIn homepage and Signal.

The core engine is also used for LinkedIn search properties, e.g., people search, the recruiter system, and the job and company search pages.

Why did you write Sensei instead of using Lucene or Solr?

Sensei leverages Lucene.

We weren’t able to leverage Solr because of the following requirements:

  • High update rates: tens of thousands of updates per second coming into the system
  • A truly distributed solution: Solr’s current distributed setup has a SPOF at the master, and SolrCloud is not yet complete
  • Complex faceting support: not just your standard term-based faceting. We needed to facet on the social graph, dynamic time ranges, and many other interesting faceting scenarios. Faceting behavior also needs to be highly customizable, which is not available in Solr.

What does Sensei do that existing open-source search solutions don’t provide?

Consider Sensei if your application has the following characteristics:

  • High update rates
  • Non-trivial semi-structured query support

Who should stick with Solr or ElasticSearch instead of using Sensei?

The feature sets, as well as the limitations, of all these systems don’t fully overlap. Depending on your application, if you are building on certain features in one system and it is working out, then I would suggest you stick with it. As for Sensei, our philosophy is to consider performance ahead of features.

Are there currently any Sensei users other than LinkedIn that you are aware of?

We have seen some activity on the mailing list indicating deployments outside of LinkedIn, but I don’t know the specifics. This is a new project and we are keeping track of its usage at http://senseidb.github.com/sensei/usage.html, so let us know if you are using Sensei and want to be listed there.

What are Sensei’s biggest weaknesses and how and when do you plan on addressing them?

Let me address this question by listing a few of Sensei’s current limitations:

  • Each document inserted into Sensei must have a unique identifier (UID) of type long. Though this can be inconvenient, it is a decision made for performance reasons. We have no immediate plans for addressing this, but we are thinking about it.
  • For columns defined in a numeric format, e.g., int, float, long…, we don’t yet support negative numbers. We will have support for negative numbers very soon.
  • Static schema. Dynamic schemas are something we find useful, and we will support them in the near future.

What’s next for Sensei as a project?

We will continue iterating on Sensei on both the performance and feature fronts. See below for a list of things we are looking at.

What are some of the key features you plan on adding to Sensei in the coming months?

This may not be a comprehensive list, but it gives you an idea of the areas we are working on:

  • Relevance toolkit
  • Built-in time and geo type columns
  • Parent-node type documents
  • Attribute type faceting (name-value pairs)
  • Online rebalancing
  • Online reindexing
  • Parameter secondary store (e.g. activities on a document, social gestures, etc.)
  • Dynamic schemata
  • Support for aggregation functions, e.g. AVG, MIN, MAX, etc.

The Relevance toolkit sounds interesting.  Could you tell us a bit about what this will do and if, by any chance, this might in any way be close to the idea behind Apache Open Relevance?

This is a big feature for 1.1.0. I am not familiar enough with Open Relevance to comment. The idea behind the relevance toolkit is to allow you to specify a model with the query. One important use for us is to be able to perform relevance tuning against fast-flowing production data.  Waiting for things to be redeployed to production after relevance model changes does not work if the production data is changing in real-time, like tweets.

Maybe some specific tech questions – feel free to skip the ones that you think are not important or you just don’t feel like answering.

What is the role of Norbert in Sensei?

Sensei currently uses Norbert, whose maintainer is one of our main developers, as a cluster manager and as the RPC layer between a Broker and Sensei nodes. A Broker is a servlet embedded in each Sensei node, and Norbert is used as the message transport from Brokers to Sensei nodes. Norbert is an elegant wrapper around ZooKeeper for cluster management. We do have plans to create an abstraction around this component to allow pluggability for other cluster managers.

When I first saw the SQL-like query on Sensei’s home page I thought it was purely for illustration purposes, but now I realize it is actually very real!

BQL – Browse Query Language – is a SQL variant for querying Sensei. It is very real, and we plan for BQL to be a standard way to query Sensei.
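[Editor’s note: for a flavor of the language, here is an illustrative BQL-style query. The exact grammar here is our assumption based on BQL’s SQL heritage – consult the Sensei documentation for the real syntax.]

    SELECT color, year
    FROM cars
    WHERE color IN ("red", "blue")
    ORDER BY year DESC
    LIMIT 10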

Can you share with us any Sensei performance numbers?

We have published some performance numbers at http://senseidb.com/performance.html

We have created a separate GitHub repository containing all our performance evaluation code at:

https://github.com/kwei/search-perf

Does Sensei have a SPOF (Single Point Of Failure)?

No – assuming a Sensei cluster contains more than one replica of each document. This is one important design goal of Sensei: every Sensei node in the cluster acts independently, both in consuming data and in handling queries. See the following answers for details.

What has to happen for data loss to occur?

Data loss occurs only if you have data store corruption on all replicas.  If only one replica is corrupted, you can always recover from the other replicas.

Sensei by design assumes a data source that is ordered and versioned, e.g., a persistent queue. Each Sensei node persists the version with each commit. Thus, to recover, data events can be replayed from that version.

In production at LinkedIn, this is very handy for ensuring data consistency when bouncing nodes.
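[Editor’s note: to make the replay idea concrete, here is a minimal Java sketch of the pattern described above. The types are hypothetical, not Sensei’s actual API; the point is simply that a node persists the last committed version and catches up by replaying the ordered stream from there.]

    // Hypothetical types illustrating the ordered, versioned stream pattern.
    interface DataEvent { long version(); }

    interface EventStream {
        // Replays all events with version > sinceVersion, in order.
        java.util.Iterator<DataEvent> replayFrom(long sinceVersion);
    }

    class SenseiStyleNode {
        private final EventStream stream;
        private long committedVersion; // persisted with each commit

        SenseiStyleNode(EventStream stream, long lastPersistedVersion) {
            this.stream = stream;
            this.committedVersion = lastPersistedVersion;
        }

        // On startup (e.g., after bouncing the node), catch up from the last commit.
        void recover() {
            java.util.Iterator<DataEvent> events = stream.replayFrom(committedVersion);
            while (events.hasNext()) {
                DataEvent e = events.next();
                apply(e);                       // index the event
                committedVersion = e.version(); // a real node persists this on commit
            }
        }

        private void apply(DataEvent e) { /* index the document */ }
    }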

You mention recovery from other replicas and recovery by replaying data events from a specific version.  Does that mean that once a copy of a document makes it into Sensei, Sensei does not need to reach out to the originator of the data in order to recover lost replicas of that document – that it is self-sufficient, so to speak?  Or does replaying mean getting the lost data from an external data store?

The data stream is external. So, to catch up from an older version, Sensei simply re-plays the data events from the stream starting at that version. But if an entire data replica is lost, a manual copy from the other replicas is required (for now).

What happens if a node in a cluster fails?

When a node fails, ZooKeeper notifies the other cluster event listeners in the cluster, which includes the Brokers. A Broker keeps track of the current cluster node topology, and subsequent queries are routed to the live replicas, thus avoiding sending requests to the failed node. If all nodes for one replica are down, then partial results are returned.

What happens when the cluster reaches its capacity?  Can one simply add more nodes to the cluster and Sensei auto-magically takes care of the rest or does one have to manually rebalance shards, or…?

It depends on how the data is sharded:

If the over-sharding technique is used, then adding nodes to the cluster is trivial. New nodes just specify which shards they want to handle – each node’s sensei.properties file indicates the partitions it should handle, e.g., sensei.node.partitions=1,3,8
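[Editor’s note: for example, with 16 over-allocated partitions, two nodes might split them like this in their respective sensei.properties files – values are illustrative.]

    # sensei.properties on node 1
    sensei.node.partitions=0,1,2,3,4,5,6,7

    # sensei.properties on node 2
    sensei.node.partitions=8,9,10,11,12,13,14,15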

If using a sharding strategy where data migration is not needed as new data flows into the system, e.g., sharding by time or by consecutive UID, then expanding the cluster is also trivial.

If the sharding strategy does require data migration, e.g., mod-based sharding, then cluster rebalancing is required. This is coming in a future release; we already have designs for online data rebalancing.  For now, one has to reindex all the data in order to reshard and rebalance.
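[Editor’s note: a generic Java illustration (not Sensei code) of why mod-based sharding forces data migration while time- or UID-range sharding does not.]

    // Generic illustration, not Sensei code.
    class Sharding {
        // Mod-based: shard = uid % numShards. Growing numShards from 4 to 5
        // remaps most existing UIDs to different shards, forcing migration.
        static int modShard(long uid, int numShards) {
            return (int) (uid % numShards);
        }

        // Range-based (e.g., consecutive UIDs): existing ranges keep their
        // shards; a new node simply takes ownership of the next range,
        // so no existing data has to move.
        static int rangeShard(long uid, long docsPerShard) {
            return (int) (uid / docsPerShard);
        }

        public static void main(String[] args) {
            long uid = 123456L;
            System.out.println(modShard(uid, 4));          // 0
            System.out.println(modShard(uid, 5));          // 1 -> same doc, new shard
            System.out.println(rangeShard(uid, 100000L));  // 1, stable as nodes are added
        }
    }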

Since Sensei is an eventually consistent system, how does one ensure the search client gets consistent results (e.g., when paging through results or filtering results with facets)?

On the Sensei request object there is a routing parameter. Typically this routing parameter is set to the value of the search session. By default, Sensei applies consistent hashing to the routing parameter to make sure the same replica is used for queries with the same routing parameter or search session.
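[Editor’s note: here is a generic Java sketch of the consistent-hashing technique itself – an illustration, not Sensei’s implementation. Hashing the routing parameter onto a ring of replicas means the same search session keeps hitting the same replica, so paging and facet filtering see a consistent view.]

    import java.util.SortedMap;
    import java.util.TreeMap;

    // Generic consistent-hash router; an illustration, not Sensei's implementation.
    class ReplicaRouter {
        private final SortedMap<Integer, String> ring = new TreeMap<>();

        // Place each replica at several points on the ring for smoother balance.
        void addReplica(String replica, int virtualNodes) {
            for (int i = 0; i < virtualNodes; i++) {
                ring.put(hash(replica + "#" + i), replica);
            }
        }

        // Same routing parameter (e.g., search session id) -> same replica.
        String route(String routingParam) {
            SortedMap<Integer, String> tail = ring.tailMap(hash(routingParam));
            return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
        }

        private static int hash(String s) {
            return s.hashCode() & 0x7fffffff; // non-negative; a real ring would use a stronger hash
        }
    }

Every query in a session carries the same routing parameter, so page 2 of the results is served by the same replica that served page 1.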

How does one upgrade Sensei? Can one perform an online upgrade of a Sensei cluster without shutting down the whole cluster? Are new versions of Sensei backwards compatible?

Yes – subsets of the cluster can be shut down, and dynamic routing via ZooKeeper takes care of the rest. This is very useful when we are pushing out new builds in canary mode to ensure stability and compatibility.

Does Sensei require a schema or is it schemaless?

Sensei requires a schema. But we do have plans to make Sensei schema dynamic like ElasticSearch.

Does Sensei have support for things like Spatial Search, Function Queries, Parent-Child data, or JOIN?

We have in the works a relevance toolkit which should cover features of Solr’s Function Queries.

We also have plans to support Spatial Search and Parent-Child data.

We don’t have immediate plans to support Joins.

How does one talk to Sensei?  Are there existing client libraries?

The Sensei cluster exposes two REST endpoints: REST/JSON and BQL. The packaging also includes Java and Python clients (also Ruby, if resourcing works out), along with JavaScript helpers for using the REST/JSON API in a web application.

Does Sensei have an administrative/management UI?

Sensei comes with a web application that helps with building queries against the cluster. We use it to tweak relevance models as well as to instrument an online cluster.

JMX is also exposed to administer the cluster.

In the configuration, users can also turn on other types of data reporting to other systems, e.g., RRD, logs, etc.

Big thank you to John and his team for releasing and open-sourcing Sensei and for taking the time to answer all our questions.

@sematext

Sematext Year 2011 in Review

2011 was a good year for Sematext. Here are some highlights.

Products

In 2011 we released several new versions of our popular AutoComplete, Key Phrase Extractor, and DYM ReSearcher products and witnessed a number of organizations adopting them.

SaaS

After months of hard work, we’ve opened up our Search Analytics and Performance Monitoring services to the public. Anyone can sign up for an account and use either or both of these services for free.  Yes, both services are completely free now and can be used without any restrictions.

Services – Tech Support

In addition to our standard consulting services, we’ve successfully started offering commercial tech support for Lucene and Solr.  These annual subscriptions come in several different packages and are made for those running Lucene or Solr in production who want immediate access to Lucene and Solr experts when things go awry.

Services – Consulting Packages

For those who need long-term access to Lucene or Solr experts, we started offering several levels of consulting support packages.  What makes these packages attractive is that one gets immediate access to Lucene or Solr expert consultants while paying a lower rate in exchange for an annual commitment.

Conferences

We attended and presented at a number of conferences (slides and videos) in 2011 – Lucene Revolution in San Francisco in May, Berlin Buzzwords in June, Lucene Eurocon in Barcelona in October, and Enterprise Search Summit Fall in Washington, DC in November.

Open Source

During our work on Search Analytics and Performance Monitoring, specifically the parts of them that use HBase, we’ve forked a couple of open-source projects that we put up on GitHub.  We are looking at open-sourcing a few other things in 2012.  We’ve also contributed patches to Flume, Solr, and HBase, and while in Berlin in June we took part in our first HBase Hackathon.

Team Growth

Our team has roughly doubled in size.  Our people are now in 3 different time zones and 6 countries, and we are looking to expand this further.  We are actively hiring and have a number of open positions, from mobile development and design to system administration, sales, and marketing.  Of course, we are always looking for search and big data experts.  The team didn’t grow only in number, but also in knowledge and expertise – we now have 2 Cloudera Certified Hadoop developers on staff, 2 published book authors, a recent Stanford online Machine Learning and Artificial Intelligence class “graduate”, etc.

Collaboration with Academia

We have partnered with a university lab on the other side of the Atlantic and have been collaborating on some interesting projects, the results of which we’ll share in 2012.


Overall, 2011 was a good year.  But we are in 2012 now and it’s time to look ahead.  Looking into our crystal ball, I see more fun work and more opportunities.  We’ll do our best to make the most of them.

Relevance Tuning and Competitive Advantage via Search Analytics

Here are two cool things about Search Analytics that I’d like to point out.  The slides are stolen from our Search Analytics presentation at Enterprise Search Summit 2011 in Washington, DC.

Search Analytics for A/B testing, relevance tuning and improvements

This slide shows how Search Analytics can be used to help with A/B testing.  Concretely, in this slide we see two Solr Dismax handlers selected on the right side.  If you are not familiar with Solr, think of a Dismax handler as an API that search applications call to execute searches.  In this example, each Dismax handler is configured differently, and thus each of them ranks search hits slightly differently.  On the graph we see the MRR (see the Wikipedia page on Mean Reciprocal Rank for details) for both Dismax handlers, and we can see that the one corresponding to the blue line is performing much better.  That is, users are clicking on search hits closer to the top of the search results page, which is one of several signals that this Dismax handler provides better relevance ranking than the other one.  Once you have a system like this in place, you can add more Dismax handlers and compare two or more of them at a time.  As a result, with the help of Search Analytics you get actual, real feedback about any changes you make to your search engine.  Without a tool like this, you cannot really tune your search engine’s relevance well and will be doing it blindly.
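[Editor’s note: for reference, MRR is the mean of 1/rank of the first clicked hit across a set of queries. A quick sketch of the computation in Java – illustrative, not Sematext’s actual code.]

    class MrrExample {
        // Mean Reciprocal Rank: the average of 1/rank of the first clicked hit.
        static double mrr(int[] firstClickRanks) { // ranks are 1-based
            double sum = 0.0;
            for (int rank : firstClickRanks) {
                sum += 1.0 / rank;
            }
            return sum / firstClickRanks.length;
        }

        public static void main(String[] args) {
            // First clicks at positions 1, 3, and 2 across three searches:
            // (1 + 1/3 + 1/2) / 3 ≈ 0.61 -- higher means clicks closer to the top.
            System.out.println(mrr(new int[] {1, 3, 2}));
        }
    }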


A/B Testing with Search Analytics

Note: while in this slide we see two Solr Dismax handlers, Sematext Search Analytics is search-vendor agnostic – the same thing can be done with search powered by FAST/Microsoft Search, Attivio, Endeca/Oracle, Autonomy/HP, Vivisimo, Dieselpoint, Coveo, ElasticSearch, vanilla Lucene, Xapian, or Sphinx, or …

Gaining Competitive Advantage with Search Analytics

As you can see, the only way to fix or improve things – and in this case we are talking about various aspects of the search experience – is by having something with which you can measure that search experience and your changes.  You need something to tell you when it’s time to improve things, and you need something that gives you feedback about your changes: Did the key metrics improve after your changes?  If so, by how much?  Did any metrics degrade? Etc.

Search Analytics Key Takeaways

I can’t emphasize enough how important Search Analytics is and how few organizations use it, or use it well and consistently.  While this may be a bit mind-boggling for those of us who live and breathe search, from your perspective this is a great thing – it means that if you are smart about using Search Analytics to improve your search engine and your users’ search experience, you will gain a competitive advantage and be ahead of your competitors who still don’t have or don’t use Search Analytics!

If you have any questions or feedback about Search Analytics in general, please leave a comment and we’ll follow up as soon as possible!

@sematext

Hadoop 1.0.0 – Extra Notes

The big Hadoop 1.0.0 release has arrived.  The dev team’s general notes about the release include:

  • Security
  • Better support for HBase (append/hsynch/hflush, and security)
  • WebHDFS (with full support for security)
  • Performance-enhanced access to local files for HBase
  • Other performance enhancements, bug fixes, and features

You can also find the complete release notes here and see all the fixes, improvements, and new features included in the release. To save you time, below is additional information about some of the items from the Hadoop 1.0.0 release that attracted our attention.

Cluster Management Optimizations

HADOOP-7728: hadoop-setup-conf.sh should be modified to enable task memory manager
Adds additional options to manage the memory usage of MR tasks. In particular, this allows setting the max memory usage for map and reduce tasks (separately).
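[Editor’s note: on the job side, the per-task limits map to the standard MRv1 memory properties; a minimal sketch follows. Values are illustrative – double-check the keys against your distribution.]

    import org.apache.hadoop.mapred.JobConf;

    public class TaskMemoryLimits {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Max memory per map task, in MB (illustrative value).
            conf.set("mapred.job.map.memory.mb", "1536");
            // Max memory per reduce task, in MB (illustrative value).
            conf.set("mapred.job.reduce.memory.mb", "2048");
        }
    }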

Performance Improvements

HDFS-2246: Shortcut a local client’s reads to a DataNode’s files directly
This is a short-term solution for HDFS-347, “DFS read performance suboptimal when client co-located on nodes with data”, which is quite a hot issue in the Hadoop dev community nowadays. NOTE: by default this optimization is switched off (or is it? Update: it is not, see the comments), so some config adjustments are required to benefit from it. And you will definitely want to benefit from it: some have reported 2x I/O performance improvements. It is also highly recommended for HBase users.
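[Editor’s note: a sketch of turning this on; the same keys normally go into hdfs-site.xml, and we suggest verifying them against the JIRA before relying on this.]

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitReads {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Let the DFS client read block files directly from local disk,
            // bypassing the DataNode when client and data are co-located.
            conf.setBoolean("dfs.client.read.shortcircuit", true);
            // User allowed to access block files directly, e.g. the HBase user.
            conf.set("dfs.block.local-path-access.user", "hbase");
        }
    }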

HDFS-895: Allow hflush/sync to occur in parallel with new writes to the file
Previously, if an hflush/sync was in progress, an application could not write data to the HDFS client buffer. Again, we stress this improvement for HBase users, as it increases the write throughput of the transaction log in HBase.

MAPREDUCE-2494: Make the distributed cache delete entries using LRU priority
When a certain threshold was reached and the distributed cache was being purged, the previous implementation deleted all entries that were not currently in use. With the new code, more hot data can be left in the cache (the percentage is configurable), thus decreasing cache misses.

New Features

HDFS-2316: [umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP
Allows accessing HDFS over HTTP (read & write).
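[Editor’s note: reading a file over WebHDFS boils down to a plain HTTP GET against the NameNode’s /webhdfs/v1/ endpoint, which redirects to a DataNode. A minimal Java sketch – host and path are illustrative; 50070 is the default NameNode HTTP port.]

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsRead {
        public static void main(String[] args) throws Exception {
            // GET /webhdfs/v1/<path>?op=OPEN reads a file over HTTP.
            URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/test.txt?op=OPEN");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(true); // OPEN redirects to a DataNode
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }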

MAPREDUCE-3169: Create a new MiniMRCluster equivalent which only provides client APIs cross MR1 and MR2
A cleaner MR1- and MR2-compatible API for a mini MR cluster to be used in unit tests.

HADOOP-7710: Create a script to setup application in order to create root directories for application such hbase, hcat, hive etc
Similar to the hadoop-setup-user script, a hadoop-setup-applications script was added to set up root directories for apps to write to (/hbase, /hive, etc.).

Enjoy Hadoop 1.0.0 and we hope you found this quick summary useful!

@sematext
