HBaseWD and HBaseHUT: Handy HBase Libraries Available in Public Maven Repo

HBaseWD is aimed at helping distribute writes of records with sequential row keys in HBase (and thus avoid RegionServer hotspotting). A good introduction can be found here.

We recently published version 0.1.0 of the library to the Sonatype public Maven repository, so integrating it into your project is now much easier:

  <repositories>
    <repository>
      <id>sonatype release</id>
      <url>https://oss.sonatype.org/content/repositories/releases/</url>
    </repository>
  </repositories>
  <dependency>
    <groupId>com.sematext.hbasewd</groupId>
    <artifactId>hbasewd</artifactId>
    <version>0.1.0</version>
  </dependency>

HBaseHUT is aimed at helping in situations when you need to update a lot of records in HBase in a read-modify-write style. A good introduction can be found here.

We recently published version 0.1.0 of this library to the Sonatype public Maven repository as well. Integration info:

  <repositories>
    <repository>
      <id>sonatype release</id>
      <url>https://oss.sonatype.org/content/repositories/releases/</url>
    </repository>
  </repositories>
  <dependency>
    <groupId>com.sematext.hbasehut</groupId>
    <artifactId>hbasehut</artifactId>
    <version>0.1.0</version>
  </dependency>

For running MapReduce jobs on hadoop-2.0+ (which is part of CDH4.1+), use the 0.1.0-hadoop-2.0 version:

  <dependency>
    <groupId>com.sematext.hbasehut</groupId>
    <artifactId>hbasehut</artifactId>
    <version>0.1.0-hadoop-2.0</version>
  </dependency>

Thank you to all contributors and users of the libraries!

SPM Discountorama Announcement

We are happy to announce the General Availability of SPM, our performance monitoring solution for Apache Solr, ElasticSearch, HBase, SenseiDB, and Java applications, and of course all system metrics. You can also vote for what else you want SPM to monitor.  Over the last N months that we’ve been running SPM we’ve received a lot of good feedback (thanks!), a lot of words of encouragement (thanks!), and even a few nice quotes (another thanks!). Here is one from Jerry Yang, a Software Engineer at Walmart Labs: “I have been using SPM for couple of days and it has been amazing. I learned a lot about my Solr services and was able to optimize based on the results on SPM. Great work.”

Discount Codes

Since the holiday season is coming up, we thought we’d offer some discounts every week between now and the end of the year.  Each of the following discounts can be used only during “its week” specified below.  There is a limit to the number of people who can use each discount, so if you want it, don’t waste too much time.  Each discount will reduce the price of SPM SaaS for 365 days after you’ve used it, which effectively means you will get the discount until the end of 2013.  Note that when you register for SPM you do not need to enter your credit card information.  You also don’t need to provide it when you create the SPM application for the system you want to monitor.  And it is when you create your SPM application that you can enter the discount code.

  • 20% for the remainder of this week until the end of this Sunday, December 9: NY201320
  • 15% for the week of December 10, 2012: NY201315
  • 10% for the week of December 17, 2012: NY201310
  • 5% for the week of December 24, 2012: NY201305

Note that each discount code expires on Sunday at 00:00 UTC.

SPM Flavours

The above discounts are good for our SPM SaaS.  However, if you’d rather run SPM on your own servers, we do offer SPM on Premises – please get in touch if you are interested in the on premises version.  You can also vote for SPM SaaS vs. On Premise and that way tell us which version you prefer or want.

SPM Plans

There are a few different subscription plans available in SPM SaaS:

  • Basic plan that is free and shows the last 30 minutes of performance data
  • Standard plan that shows the last 30 days of data and costs $0.035/server/hour
  • Pro plan that shows the last 60 days of performance data and costs $0.070/server/hour

If you have not used SPM before, here is what you can expect to see – click on the image to see a large, non-fuzzy version:

We hope you will find SPM useful and fun to use.  We are always looking for feedback – just email spm-support@sematext.com or ping @sematext and tell us what you like or don’t like about SPM.

HBase FuzzyRowFilter: Alternative to Secondary Indexes

In this post we’ll explain the usage of FuzzyRowFilter, which can help in many situations where secondary index solutions seem to be the only way to avoid full table scans.

Background

When it comes to HBase, the way you design your row key affects everything. It is a common pattern to have a composite row key which consists of several parts, e.g. userId_actionId_timestamp. This allows for fast fetching of rows (or a single row) based on start/stop row keys, which have to be a prefix of the row keys you want to select. E.g. one may select the last time userX logged in by specifying the row key prefix “userX_login_”, or the last action of userX by fetching the first row with the prefix “userX_”. These partial row key scans work very fast and do not require scanning the whole table: HBase storage is optimized to make them fast.
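
For illustration, here is a minimal sketch of such a partial row key scan written against the standard HBase client API. The table name (“user_actions”) is hypothetical and the usual client imports (HBaseConfiguration, HTable, Scan, PrefixFilter, ResultScanner, Result, Bytes) are assumed:

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "user_actions");  // hypothetical table name
byte[] prefix = Bytes.toBytes("userX_login_");
Scan scan = new Scan(prefix);                     // start at the first "userX_login_..." row
scan.setFilter(new PrefixFilter(prefix));         // stop matching once rows no longer share the prefix
ResultScanner scanner = table.getScanner(scan);
for (Result row : scanner) {
  // process userX login records
}
scanner.close();
table.close();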

Problem

However, there are cases when you need to fetch data based on key parts which happen to be in the middle of the row key. In the example above you may want to find the last logged-in users. When you don’t know the first parts of the key, a partial row key scan turns into a full table scan, which might be very slow and resource intensive.

Possible Solution #1: Data Redundancy

One possible way around this is to create secondary indexes in the form of redundant rows which hold the same data as the original ones but have the key parts in a different order (e.g. actionId_timestamp). This solution may not be suitable for some because of its cons:

  • storing extra indexes (it usually requires storing N times more data for N indexes) results in a lot more data on disk
  • storing (and serving) extra rows brings additional load on the cluster during writing and reading (extra blocks fighting to be in cache, etc.)
  • writing/updating/deleting several rows is not an atomic operation in HBase

Possible Solution #2: Integrated Secondary Indexes

Another way to attack the problem is to use a smart secondary index mechanism integrated in HBase which doesn’t rely on data redundancy, e.g. something like IHBase. The problem here is that there’s no out-of-the-box solution to be used. This may change with the addition of the newer CoProcessors functionality (see e.g. HBASE-2038 or this). But as of now existing solutions have their own limitations and drawbacks, while new solutions are yet to be completed.

Suggested Solution

First of all, I have to say that the solution suggested below is not a silver bullet. Moreover, its performance may be very bad and even close to a full table scan in some cases. It also can’t be applied in every situation described in the Background and Problem sections. But in many cases, depending on your data, this simple solution can be used to avoid the secondary index burden and still allow for very fast scans. In many other cases it can be used to significantly speed up your full table scans.

The suggested solution is not new and is quite simple, but it is usually overlooked by HBase users, though it shouldn’t be.

Fast-Forwarding in Server-side Filter

In recent HBase versions (I believe 0.90.+) there’s a mechanism that allows skipping a whole range of rows when scanning with a server-side filter. The data of these skipped rows may not even be read from disk. Based on the current row key, the filter can tell the scanner to advance to the row with a specific key and thereby jump over many rows, which are simply skipped. For example, this makes it possible to perform fast full-table scans (or large partial key scans) when there’s enough information about the key and the data to provide efficient hints for skipping a lot of rows during the scan.

Most of the time you’ll have to implement your own custom filter that performs fast-forwarding. Hint for these cases: refer to org.apache.hadoop.hbase.filter.Filter.ReturnCode.SEEK_NEXT_USING_HINT in HBase sources.
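
To make this more concrete, below is a hypothetical sketch of such a custom fast-forwarding filter written against the 0.94-era Filter API. The class name, the fixed 4-byte userId key layout, and the hinting logic are illustrative assumptions, and the hint handling is deliberately naive (FuzzyRowFilter, described next, does this properly):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

// Fast-forwards to the next potentially matching row instead of examining every row.
public class ActionSeekingFilter extends FilterBase {
  private static final int USER_ID_LENGTH = 4;  // assumes a fixed-length userId part
  private byte[] wantedActionPart;              // e.g. Bytes.toBytes("_login_")

  public ActionSeekingFilter() {}               // required for deserialization
  public ActionSeekingFilter(byte[] wantedActionPart) {
    this.wantedActionPart = wantedActionPart;
  }

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    byte[] row = kv.getRow();
    if (row.length >= USER_ID_LENGTH + wantedActionPart.length
        && Bytes.compareTo(row, USER_ID_LENGTH, wantedActionPart.length,
                           wantedActionPart, 0, wantedActionPart.length) == 0) {
      return ReturnCode.INCLUDE;
    }
    // Not the action we want: ask the scanner to jump to the row returned by getNextKeyHint().
    return ReturnCode.SEEK_NEXT_USING_HINT;
  }

  @Override
  public KeyValue getNextKeyHint(KeyValue currentKV) {
    // Naive hint: same userId + wanted action part. A production filter would also detect
    // that the current row is already past this user's wanted action and increment the
    // userId part instead.
    byte[] row = currentKV.getRow();
    byte[] hint = new byte[USER_ID_LENGTH + wantedActionPart.length];
    System.arraycopy(row, 0, hint, 0, USER_ID_LENGTH);
    System.arraycopy(wantedActionPart, 0, hint, USER_ID_LENGTH, wantedActionPart.length);
    return KeyValue.createFirstOnRow(hint);
  }

  // Writable serialization so the filter can be shipped to RegionServers.
  @Override
  public void write(DataOutput out) throws IOException {
    Bytes.writeByteArray(out, wantedActionPart);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    wantedActionPart = Bytes.readByteArray(in);
  }
}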

FuzzyRowFilter

FuzzyRowFilter is one of the handy filters which is available and which performs fast-forwarding based on a fuzzy row key mask provided by the user. It will be available out of the box in the next HBase release, but you can already download its sources from HBASE-6509 (use the latest patch) and use it as any other custom filter (there’s no need to patch HBase, etc.; it relies on existing functionality).

FuzzyRowFilter takes as parameters a row key and mask info. In the example above, in case we want to find the last logged-in users and the row key format is userId_actionId_timestamp (where userId has a fixed length of, say, 4 chars), the fuzzy row key we are looking for is “????_login_”. This translates into the following params for FuzzyRowFilter:

FuzzyRowFilter rowFilter = new FuzzyRowFilter(
    Arrays.asList(
        new Pair<byte[], byte[]>(
            // row key to compare with: 4 "any value" bytes followed by the fixed "_login_" part
            Bytes.toBytesBinary("\\x00\\x00\\x00\\x00_login_"),
            // mask info: 1 = non-fixed position, 0 = fixed position
            new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0})));

I.e. the row key to compare with is provided as the first byte array (at the byte positions where any value is allowed, “\x00” is set, which is translated into (byte) 0). To tell which positions are fixed and which are not, a second byte array (mask info) is provided, with zeroes at the positions whose values are “fixed” and ones at the “non-fixed” positions.

Thus one can define different fuzzy row key masks, including those with “non-fixed” positions anywhere in the middle of the key. E.g.: “hb?se” or “??pred?ce”.

Note that FuzzyRowFilter accepts more than one mask: if a row key satisfies at least one of them, the row will be included in the result. E.g. by providing the masks “????_login_” and “????_register_” we can find the last logged-in and registered users.
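
For completeness, here is a minimal sketch of attaching the filter with both masks to a scan. It reuses the hypothetical user_actions table from the earlier sketch; note that the “????_register_” mask is 14 bytes long (4 non-fixed bytes plus the fixed “_register_” part):

FuzzyRowFilter filter = new FuzzyRowFilter(Arrays.asList(
    new Pair<byte[], byte[]>(
        Bytes.toBytesBinary("\\x00\\x00\\x00\\x00_login_"),
        new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0}),
    new Pair<byte[], byte[]>(
        Bytes.toBytesBinary("\\x00\\x00\\x00\\x00_register_"),
        new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})));

Scan scan = new Scan();
scan.setFilter(filter);
ResultScanner scanner = table.getScanner(scan);  // table is the HTable opened earlier
for (Result result : scanner) {
  // process matching user-action rows
}
scanner.close();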

How It Works

In the example above, with the mask “????_login_”, the scan initially navigates to the first row of the table. It is likely to be a user-action record (let the userId be “0001”), but the action may not be “login”. In this case, as the filter knows the “current” user (“0001”) and the action it is looking for, it tells the scan to jump to the row with the key “0001_login_”. By doing that, many rows may be skipped from the scanning (if we track other user actions apart from “login”, there are likely a lot more other user-action records than user logins). The scan then reads user login action records until it encounters a record with an action which is not login, say “0001_logout”. At that point the filter knows there’s no point in scanning this user’s remaining records and tells the scanner to jump to the next user, “0002_login_”, and continue scanning from there. Note: there might be no “0002” user; the filter knows nothing about users, it simply suggests the next user id by increasing the current one by one. In this case the scan will automatically jump to the next existing user, and the steps above will be repeated.

Limitations & Performance Considerations

As you have probably already figured out from the example above, FuzzyRowFilter can be applied only if userId has a fixed length. While it is usually not hard to design the row key format so that its parts have a fixed length (at least those parts that we need to mask with “???”), in many situations it may be problematic.

The efficiency of using FuzzyRowFilter (and any other fast-forwarding filter) is determined by how many records the filter can actually skip and how many jumps it has to do to skip them.

Performance of a scan based on FuzzyRowFilter usually depends on the cardinality of the fuzzy part. E.g. in the example above, if the number of users is in the several hundreds to several thousands, the scan should be very fast: there will be only several hundred or thousand “jumps” and a huge number of rows might be skipped. If the cardinality is high, the scan can take a lot of time. The worst-case scenario is when you have N records and N users, i.e. one record per user: in this case there’s simply nothing to skip.

Even when the performance of a full-table scan with the help of FuzzyRowFilter is not good enough for serving online data, it has still proven to be very efficient when you feed data from HBase into a MapReduce job. Don’t overlook this!

Summary

There are times when you design the row key for the data to be stored in HBase and feel the need for secondary indexes because of very different data access patterns. In this situation consider relying on FuzzyRowFilter for some of the data reading use-cases. Depending on your data, with small adjustments of the row key format (sometimes none are even needed) you can benefit from very fast fetching of records where before you needed to perform full table scans or very large partial key scans.

Plug: if this sort of stuff interests you, we are hiring people who know and love to work with Hadoop, HBase, MapReduce…

@abaranau

Announcing HBase Refcard

We’re happy to announce the very first HBase Refcard proudly authored by two guys from Sematext.  We hope people will find the HBase Refcard useful in their work with HBase, along with the wonderful Apache HBase Reference Guide.  If you think the refcard is missing some important piece of information that deserves to be included or that it contains superfluous content, please do let us know! (e.g., via comments here)

Configuring HBase Memstore: What You Should Know

In this post we discuss what HBase users should know about one of the internal parts of HBase: the Memstore. Understanding the underlying processes related to the Memstore will help you configure your HBase cluster for better performance.

HBase Memstore

Let’s take a look at the write and read paths in HBase to understand what the Memstore is and where, how, and why it is used.

Memstore Usage in HBase Read/Write Paths

(picture taken from the Intro to HBase Internals and Schema Design presentation)

When a RegionServer (RS) receives a write request, it directs the request to a specific Region. Each Region stores a set of rows. Row data can be separated into multiple column families (CFs). Data of a particular CF is stored in an HStore, which consists of a Memstore and a set of HFiles. The Memstore is kept in the RS main memory, while HFiles are written to HDFS. When a write request is processed, data is first written into the Memstore. Then, when certain thresholds are met (obviously, main memory is limited), the Memstore data gets flushed into an HFile.

The main reason for using the Memstore is the need to store data on DFS ordered by row key. As HDFS is designed for sequential reads/writes, with no file modifications allowed, HBase cannot efficiently write data to disk as it is being received: the written data would not be sorted (when the input is not sorted), and hence not optimized for future retrieval. To solve this problem HBase buffers the last received data in memory (in the Memstore), “sorts” it before flushing, and then writes it to HDFS using fast sequential writes. Note that in reality an HFile is not just a simple list of sorted rows; it is much more than that.

Apart from solving the “non-ordered” problem, the Memstore also has other benefits, e.g.:

  • It acts as an in-memory cache which keeps recently added data. This is useful in the numerous cases when recently written data is accessed more frequently than older data.
  • There are certain optimizations that can be done to rows/cells while they are stored in memory, before they are written to the persistent store. E.g. when a CF is configured to store only one version of a cell and the Memstore contains multiple updates for that cell, only the most recent one needs to be kept; older ones can be omitted (and never written to an HFile). See the sketch after this list.
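
As a hypothetical illustration of the second point, this is roughly how a table with a single-version CF could be created with the 0.94-era client API (the table and CF names are made up):

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor tableDesc = new HTableDescriptor("user_actions");  // hypothetical table
HColumnDescriptor cf = new HColumnDescriptor("d");                  // hypothetical CF
cf.setMaxVersions(1);  // keep a single version: duplicate cell updates sitting in the
                       // Memstore can be dropped before they ever reach an HFile
tableDesc.addFamily(cf);
admin.createTable(tableDesc);
admin.close();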

An important thing to note is that every Memstore flush creates one HFile per CF.

On the reading side things are simple: HBase first checks if the requested data is in the Memstore, then goes to HFiles, and returns the merged result to the user.

What to Care About

There are a number of reasons HBase users and/or administrators should be aware of what the Memstore is and how it is used:

  • There are a number of configuration options for the Memstore one can use to achieve better performance and avoid issues. HBase will not adjust the settings for you based on your usage pattern.
  • Frequent Memstore flushes can affect read performance and can bring additional load to the system.
  • The way Memstore flushes work may affect your schema design.

Let’s take a closer look at these points.

Configuring Memstore Flushes

Basically, there are two groups of configuration properties (leaving out region pre-close flushes):

  • The first determines when a flush should be triggered
  • The second determines when a flush should be triggered and updates should be blocked while flushing

The first group is about triggering “regular” flushes, which happen in parallel with serving write requests. The properties for configuring the flush thresholds are:

  • hbase.hregion.memstore.flush.size
<property>
 <name>hbase.hregion.memstore.flush.size</name>
 <value>134217728</value>
 <description>
 Memstore will be flushed to disk if size of the memstore
 exceeds this number of bytes. Value is checked by a thread that runs
 every hbase.server.thread.wakefrequency.
 </description>
</property>
  • hbase.regionserver.global.memstore.lowerLimit
<property>
 <name>hbase.regionserver.global.memstore.lowerLimit</name>
 <value>0.35</value>
 <description>Maximum size of all memstores in a region server before
 flushes are forced. Defaults to 35% of heap.
 This value equal to hbase.regionserver.global.memstore.upperLimit causes
 the minimum possible flushing to occur when updates are blocked due to
 memstore limiting.
 </description>
</property>

Note that the first setting is the size per Memstore, i.e. when you define it you should take into account the number of regions served by each RS. When the number of regions grows (and you configured the setting when there were few of them), Memstore flushes are likely to be triggered by the second threshold earlier.
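
A quick back-of-the-envelope check (with made-up numbers) shows why:

// Hypothetical numbers: 10 GB RS heap, default flush size and lower limit, 100 regions per RS.
long heapBytes = 10L * 1024 * 1024 * 1024;
double lowerLimit = 0.35;             // hbase.regionserver.global.memstore.lowerLimit
long flushSize = 128L * 1024 * 1024;  // hbase.hregion.memstore.flush.size
int regionsPerRs = 100;

long globalThreshold = (long) (heapBytes * lowerLimit);         // ~3.5 GB
long potentialMemstoreTotal = (long) regionsPerRs * flushSize;  // ~12.5 GB

// Prints true: the per-RS lower limit is hit long before most Memstores reach 128 MB,
// so flushes end up being driven by the global threshold rather than the per-Memstore one.
System.out.println(globalThreshold < potentialMemstoreTotal);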

The second group of settings is there for safety reasons: sometimes the write load is so high that flushing cannot keep up with it, and since we don’t want the Memstore to grow without limit, in this situation writes are blocked until the Memstore has a “manageable” size. These thresholds are configured with:

  • hbase.regionserver.global.memstore.upperLimit
<property>
 <name>hbase.regionserver.global.memstore.upperLimit</name>
 <value>0.4</value>
 <description>Maximum size of all memstores in a region server before new
 updates are blocked and flushes are forced. Defaults to 40% of heap.
 Updates are blocked and flushes are forced until size of all memstores
 in a region server hits hbase.regionserver.global.memstore.lowerLimit.
 </description>
</property>
  • hbase.hregion.memstore.block.multiplier
<property>
 <name>hbase.hregion.memstore.block.multiplier</name>
 <value>2</value>
 <description>
 Block updates if memstore has hbase.hregion.block.memstore
 time hbase.hregion.flush.size bytes. Useful preventing
 runaway memstore during spikes in update traffic. Without an
 upper-bound, memstore fills such that when it flushes the
 resultant flush files take a long time to compact or split, or
 worse, we OOME.
 </description>
</property>

Blocking writes on a particular RS may be a big issue on its own, but there’s more to it. Since in HBase one Region is by design served by a single RS, when the write load is evenly distributed over the cluster (over Regions), having one such “slow” RS will make the whole cluster work slower (basically, at its speed).

Hint: watch the Memstore Size and Memstore Flush Queue size. Ideally, Memstore Size should not reach the upper Memstore limit, and the Memstore Flush Queue size should not constantly grow.

Frequent Memstore Flushes

Since we want to avoid blocking writes, it may seem like a good approach to flush earlier, when we are far from the “writes-blocking” thresholds. However, this will cause too-frequent flushes, which can affect read performance and bring additional load to the cluster.

Every time a Memstore flush happens, one HFile is created for each CF. Frequent flushes may create tons of HFiles, and since HBase will have to look at many HFiles during reads, the read speed can suffer.

To prevent opening too many HFiles and avoid read performance deterioration, there’s the HFile compaction process. HBase will periodically (when certain configurable thresholds are met) compact multiple smaller HFiles into a big one. Obviously, the more files created by Memstore flushes, the more work (extra load) for the system. On top of that, while the compaction process is usually performed in parallel with serving other requests, when HBase cannot keep up with compacting HFiles (yes, there are configured thresholds for that too;)) it will again block writes on the RS. As mentioned above, this is highly undesirable.

Hint: watch the Compaction Queue size on RSs. In case it is constantly growing you should take action before it causes problems.

More on HFiles creation & Compaction can be found here.

So, ideally the Memstore should use as much memory as it can (as configured, not all of the RS heap: there are also in-memory caches), but not cross the upper limit. This picture (a screenshot taken from our SPM monitoring service) shows a somewhat good situation:

Memstore Size: Good Situation

“Somewhat”, because we could configure the lower limit to be closer to the upper one, since we barely ever go over it.

Multiple Column Families & Memstore Flush

Memstores of all column families are flushed together (this might change). This means creating N HFiles per flush, one for each CF. Thus, uneven data amounts across CFs will cause too many HFiles to be created: when the Memstore of one CF reaches its threshold, the Memstores of all other CFs are flushed too. As stated above, too-frequent flush operations and too many HFiles may affect cluster performance.

Hint: in many cases having one CF is the best schema design.

HLog (WAL) Size & Memstore Flush

In the RegionServer write/read paths picture above you may also have noticed a Write-ahead Log (WAL), where data is written by default. It contains all the edits of the RegionServer which were written to the Memstore but not yet flushed into HFiles. As data in the Memstore is not persistent, we need the WAL to recover from RegionServer failures: when an RS crashes and the data which was stored in its Memstores and wasn’t flushed is lost, the WAL is used to replay these recent edits.

When the WAL (in HBase it is called the HLog) grows very big, it may take a lot of time to replay it. For that reason there are certain limits on the WAL size which, when reached, cause the Memstore to flush. Flushing Memstores decreases the WAL, as we don’t need to keep edits in the WAL once they have been written to HFiles (the persistent store). This is configured by two properties: hbase.regionserver.hlog.blocksize and hbase.regionserver.maxlogs. As you have probably figured out, the maximum WAL size is determined by hbase.regionserver.maxlogs * hbase.regionserver.hlog.blocksize (2GB by default). When this size is reached, Memstore flushes are triggered. So, when you increase the Memstore size and adjust other Memstore settings, you need to adjust the HLog ones as well. Otherwise the WAL size limit may be hit first and you will never utilize all the resources dedicated to the Memstore. Apart from that, triggering Memstore flushes by reaching the WAL limit is not the best way to trigger flushing, as it may create a “storm” of flushes by trying to flush many Regions at once when the written data is well distributed across Regions.

Hint: keep hbase.regionserver.hlog.blocksize * hbase.regionserver.maxlogs just a bit above hbase.regionserver.global.memstore.lowerLimit * HBASE_HEAPSIZE.
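
A hedged sketch of checking this relationship from client code (normally you would just do the arithmetic against your hbase-site.xml; the heap size below is only a rough stand-in for HBASE_HEAPSIZE):

Configuration conf = HBaseConfiguration.create();
long hlogBlockSize = conf.getLong("hbase.regionserver.hlog.blocksize", 64L * 1024 * 1024);
int maxLogs = conf.getInt("hbase.regionserver.maxlogs", 32);
float lowerLimit = conf.getFloat("hbase.regionserver.global.memstore.lowerLimit", 0.35f);
long heapBytes = Runtime.getRuntime().maxMemory();  // rough stand-in for HBASE_HEAPSIZE

long walCap = hlogBlockSize * maxLogs;
long memstoreLowerLimitBytes = (long) (heapBytes * lowerLimit);

// We want the WAL cap to sit just a bit above the Memstore lower limit, so that flushes
// are normally triggered by the Memstore thresholds rather than by the WAL.
System.out.println("WAL cap above Memstore lower limit: " + (walCap > memstoreLowerLimitBytes));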

Compression & Memstore Flush

With HBase it is advised to compress the data stored on HDFS (i.e. HFiles). In addition to saving space occupied by the data, this reduces disk & network IO significantly. Data is compressed when it is written to HDFS, i.e. when the Memstore flushes. Compression should not slow down the flushing process too much; otherwise we may hit many of the problems described above, such as blocked writes caused by the Memstore getting too big (hitting the upper limit) and the like.

Hint: when choosing the compression type, favor compression speed over compression ratio. SNAPPY has shown itself to be a good choice here.
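
For reference, a minimal sketch of enabling it on a column family with the 0.94-era API (the CF name is made up; the Snappy native libraries must be installed on the RegionServers):

HColumnDescriptor cf = new HColumnDescriptor("d");     // hypothetical CF
cf.setCompressionType(Compression.Algorithm.SNAPPY);   // HFiles get compressed on flush/compaction
// add the descriptor to an HTableDescriptor and create/alter the table as usual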

 

[UPDATE 2012/07/23: added notes about HLog size and Compression types]

 

Plug: if this sort of stuff interests you, we are hiring people who know about Hadoop, HBase, MapReduce…

@abaranau

Presentation: Intro to HBase Internals and Schema Design

Below are the slides (and audio) from the Intro to HBase Internals and Schema Design presentation Alex gave at our inaugural HBase NYC meetup.  See also: Introduction to HBase

We’re hiring people who want to work WITH and ON HBase and other Big Data technologies.  See jobs @ sematext.

Presentation: Intro to HBase

Last week we had our inaugural HBase NYC meetup.  About 30 people turned up – not bad for the first meetup.  Etsy, an old Sematext customer and Brooklyn neighbour, provided the space, AV equipment, and help, as well as their fridge with beer – thanks!  Alex gave two talks: first the Introduction to HBase, whose slides (and audio) are below, and then Intro to HBase Internals and Schema Design.

We’re hiring people who want to work WITH and ON HBase and other Big Data technologies.  See jobs @ sematext.
