Besides working with search in general and Lucene/Solr/Nutch in particular, we also work with Hadoop, HBase, and other related technologies. This October we’ll be presenting at Hadoop World (see the schedule). The title of our talk is Search Analytics with Flume and HBase. Here’s the abstract:
In this talk we will show how we use Flume to transport search and clickstream data to HBase, with the ultimate goal of producing Search Analytics reports from that data. We’ll show how data flows through the system from the moment a query or click event is captured in the search application UI until it lands in HBase via Flume’s HBase sink. We’ll also share information about what this system looked like in the pre-Flume days. Finally, we’ll demonstrate various reports the system ultimately produces and the insight we derive from them.
So, if you are interested in search and analytics, and especially the mix of the two, come see us this October in New York. If you can’t wait until then or can’t make it to New York, and need help with search and/or analytics, let us know!
The biggest announcement of the year: Apache Hadoop 0.21.0 has been released and is available for download here. Over 1300 issues have been addressed since 0.20.2; you can find details for Common, HDFS and MapReduce. A note from Tom White, who did an excellent job as the release manager: “Please note that this release has not undergone testing at scale and should not be considered stable or suitable for production. It is being classified as a minor release, which means that it should be API compatible with 0.20.2.” Please find a detailed description of what’s new in the 0.21.0 release here.
Community trends & news:
- A new branch, hadoop-0.20-security, is being created. Apart from the security features, which are in high demand, it will include improvements and fixes from over 12 months of work by Yahoo!. The new security features are going to be a very valuable and welcome contribution (also discussed before).
- A thorough discussion of approaches to backing up HDFS data can be found in this thread.
- Hive was voted to become a Top Level Apache Project (TLP) (also here). Note that we’ll keep Hive under Search-Hadoop.com even after Hive goes TLP.
- Pig was voted to become a TLP too (also here). Note that we’ll keep Pig under Search-Hadoop.com even after Pig goes TLP.
- Tip: if you define a Hadoop object (e.g. a Partitioner) as implementing Configurable, then its setConf() method will be called once, right after it is instantiated.
- For those new to ZooKeeper and pressed for time, here you can find the shortest ZooKeeper description — only 4 sentences short!
- A good read: the “Avoiding Common Hadoop Administration Issues” article.
- Howl: a common metadata layer for Hadoop’s MapReduce, Pig, and Hive (yet another contribution from Yahoo!)
- A PHP library for Avro, which includes schema parsing and Avro data file reading and writing
- avro-scala-compiler-plugin: aimed at auto-generating Avro serializable classes based on simple case class definitions
- How can one programmatically determine the names of the files in a particular Hadoop/HDFS directory?
Use the FileSystem & FileStatus APIs. Detailed examples are in this thread.
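As a minimal sketch of that answer (assuming the Hadoop jars are on the classpath and core-site.xml points at your cluster; the directory path below is a made-up example), listing the file names in a directory looks roughly like this:

```java
// Sketch only: requires the Hadoop jars on the classpath and a reachable
// cluster; "/user/hadoop/input" is a hypothetical example path.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);      // the configured (e.g. HDFS) filesystem
        // listStatus() returns one FileStatus per entry in the directory
        for (FileStatus status : fs.listStatus(new Path("/user/hadoop/input"))) {
            System.out.println(status.getPath().getName());
        }
    }
}
```

FileStatus also exposes size, modification time, and permissions if you need more than the names.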
- How can I restrict HDFS space usage?
Please refer to the HDFS Quotas Guide.
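For a quick taste of what the Quotas Guide covers, quotas are set per directory with the dfsadmin tool. These are illustrative commands only; the path and limits are made-up examples:

```shell
hadoop dfsadmin -setQuota 100000 /user/alice    # limit the number of names (files + directories)
hadoop dfsadmin -setSpaceQuota 1t /user/alice   # limit raw disk usage (replication counts toward it!)
hadoop fs -count -q /user/alice                 # inspect current quotas and usage
hadoop dfsadmin -clrSpaceQuota /user/alice      # remove the space quota again
```

Note that the space quota is checked against raw bytes, so a 1 TB quota with 3x replication leaves room for roughly 333 GB of user data.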
- How can I pass parameters determined at run-time (i.e., not hard-coded) to Hadoop objects (such as a Partitioner or Writable)?
One option is to define the Hadoop object as implementing Configurable. In that case its setConf() method will be called once, right after it is instantiated, and you can use the “native” Hadoop configuration to pass the parameters you need.
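To illustrate the mechanics without needing a Hadoop installation, here is a self-contained sketch using simplified stand-ins for Hadoop’s Configuration and Configurable (the real ones live in org.apache.hadoop.conf, and Hadoop’s ReflectionUtils.newInstance() plays the role of newConfiguredInstance() below); all class and property names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigurableDemo {

    // Simplified stand-in for org.apache.hadoop.conf.Configuration
    public static class Configuration {
        private final Map<String, String> props = new HashMap<String, String>();
        public void set(String key, String value) { props.put(key, value); }
        public String get(String key) { return props.get(key); }
    }

    // Simplified stand-in for org.apache.hadoop.conf.Configurable
    public interface Configurable {
        void setConf(Configuration conf);
    }

    // A Partitioner-like class that reads a run-time parameter from the
    // configuration instead of hard-coding it.
    public static class MyPartitioner implements Configurable {
        public String separator;
        public void setConf(Configuration conf) {
            separator = conf.get("mypartitioner.separator");
        }
    }

    // What the framework does: instantiate the class, then inject the
    // configuration exactly once, before the object is used.
    public static <T> T newConfiguredInstance(Class<T> cls, Configuration conf) {
        try {
            T obj = cls.getDeclaredConstructor().newInstance();
            if (obj instanceof Configurable) {
                ((Configurable) obj).setConf(conf);
            }
            return obj;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("mypartitioner.separator", "|");
        MyPartitioner partitioner = newConfiguredInstance(MyPartitioner.class, conf);
        System.out.println("separator=" + partitioner.separator); // prints "separator=|"
    }
}
```

In a real job you would set the property on the job’s Configuration before submission, and your Partitioner would pick it up in setConf() on each task.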
The second “developer release”, hbase-0.89.201007d, is now available for download. To remind everyone, there are currently two active branches of HBase:
- 0.20 – the current stable release series, being maintained with patches for bug fixes only.
- 0.89 – a development release series with active feature and stability development, not currently recommended for production use.
The first does not support HDFS durability (edits may be lost in the case of a node failure), whereas the second does. You can find more information at this wiki page. The HBase 0.90 release may happen in October! See info from developers.
Community trends & news:
- New HBase AMIs are available for dev release and 0.20.6.
- Looking for a GUI for browsing through tables in HBase? Check out Toad for Cloud, and watch HBase-Explorer and HBase-GUI-Admin.
- How many regions can a RegionServer support, and what are the consequences of having lots of regions in a RegionServer? Check the info in this thread.
- Some more complaints to be aware of regarding HBase performance on EC2 can be found in this thread. For those who missed it, there is more on Hadoop & HBase reliability with regard to EC2 in our March digest post.
- Need guidance in sizing your first Hadoop/HBase cluster? This article will be helpful.
- Where can I find information about data model design with regard to HBase?
Take a look at http://wiki.apache.org/hadoop/HBase/HBasePresentations.
- How can I perform a SQL-like “SELECT … FROM …” query on HBase?
First, consider that HBase is a key-value store and should be treated accordingly. But if you are still up for writing ad-hoc queries in your particular situation, take a look at the Hive & HBase integration.
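With that integration, Hive can map an existing HBase table into its metastore via the HBaseStorageHandler and then run ordinary HiveQL over it. A sketch, with entirely hypothetical table and column names:

```sql
-- Hypothetical names: an HBase table "pages" with a column family "stats"
CREATE EXTERNAL TABLE hbase_pages(rowkey STRING, views INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,stats:views")
TBLPROPERTIES ("hbase.table.name" = "pages");

-- Ordinary HiveQL now works against the HBase data (executed as MapReduce jobs)
SELECT rowkey, views FROM hbase_pages WHERE views > 100;
```

Keep in mind such queries run as batch MapReduce jobs, not low-latency lookups.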
- How can I access Hadoop & HBase metrics?
Refer to HBase Metrics documentation.
- How can I connect to HBase from a Java app running on a machine remote to the cluster?
Check out the client package documentation. Alternatively, one can use the REST interface, Stargate.
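The gist of the client package docs is that a remote client only needs to reach ZooKeeper; everything else is discovered from there. A sketch (requires the HBase client jars on the classpath; the host names, table name, and row key below are made-up examples):

```java
// Sketch only: assumes HBase client jars on the classpath; all host,
// table, and row names are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

public class RemoteHBaseClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point the client at the cluster's ZooKeeper quorum; the master
        // and region servers are discovered from ZooKeeper.
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        HTable table = new HTable(conf, "mytable");
        Result row = table.get(new Get("row1".getBytes()));
        System.out.println(row);
        table.close();
    }
}
```

Also make sure the client machine can resolve the region servers’ host names, which is a frequent stumbling block with remote (and especially EC2) setups.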
August brought a lot of activity to the Solr world. There were many important developments, so we have again compiled the most interesting ones for you, grouped into four categories:
Some new (and already committed) features
- We already wrote about new work done on CollapsingComponent in June’s digest under SOLR-1682. A lot of work was done on this component and it appears that it is very close to being committed. Patches attached to the issue are functional, so you can give it a try.
- SpellCheckComponent got an improvement related to recent Lucene changes – Add support for specifying Spelling SuggestWord Comparator to Lucene spell checkers for SpellCheckComponent. Issue SOLR-2053 is already fixed; the patch is attached if you need it, and it has also been committed to trunk and the 3_x branch.
- Another minor feature is an improvement to WordDelimiterFilter in SOLR-2059 – Allow customizing how WordDelimiterFilter tokenizes text. The patch is already there and committed to trunk and 3_x.
- A performance boost for faceting can be found in SOLR-2089 – Faceting: order term ords before converting to values. Behind this intimidating title hides a very decent speedup in cases where facet.limit is high. The patch is available, and this magic has been committed to trunk and the 3_x branch.
Some new features being discussed and implemented
- One very important (and probably much-wanted) feature just got its Jira issue – SOLR-2080 – Create a Related Search Component. The issue was created by Grant Ingersoll, so we can expect some quality work to be done here. There are no patches (or even discussions) yet, as the issue is in its infancy, but you can watch its progress in Jira. In the meantime, if you’re interested in such functionality, you can check Sematext’s RelatedSearches product.
- Jira issue SOLR-2026 – Need infrastructure support in Solr for requests that perform multiple sequential queries – might add some interesting capabilities to search components, especially if you’re writing some of them on your own. We at Sematext have plenty of experience writing custom Solr components (check, for instance, our DYM ReSearcher or its Relaxer sibling), so we know that it is sometimes not a very pleasant task. If Solr gets better support for executing multiple queries during a single request, writing custom components will become easier. One patch is already posted to this issue, so you can check it out; however, it is still unclear how this feature will evolve. We’re hoping for a flexible and comprehensive solution that can easily be extended to many other features.
- QueryComponent’s default query parser can be made configurable with the patch attached to issue SOLR-2031. You probably haven’t encountered many cases where you needed this functionality, but if you did, it used to be a problem; now that problem will become history.
- It appears that QueryElevationComponent might get an improvement: Distinguish Editorial Results from “normal” results in the QueryElevationComponent. Jira issue SOLR-2037 will be the place to watch the progress.
Some newly found bugs
- DataImportHandler has a bug – Multivalued fields with dynamic names does not work properly with DIH – the fix isn’t available yet, but if you have such problems, you can check the status here.
- Another bug in DataImportHandler points to a connection-leak issue – DIH doesn’t release JDBC connections in conjunction with DB2. There is no fix at the moment but, as usual, you can check the status in Jira.
Other interesting news
- One potentially useful tool we recommend checking out is SolrMeter. It is a standalone tool for stress testing your Solr installation. From their site: The main goal of this open source project is to bring to the solr user community a “generic tool to interact specifically with solr”, firing queries and adding documents to make sure that your Solr implementation will support the real use. With SolrMeter you can simulate your work load over solr index and retrieve statistics graphically.
- In which IDEs do you work with Solr/Lucene? Here at Sematext, we use both Eclipse and IntelliJ IDEA. If you use the latter and you want to set up Lucene or Solr in it, you can check a very useful description and patch in LUCENE-2611 – IntelliJ IDEA setup.
We hope you enjoyed another Solr Digest from @sematext. Come back and read us next month!