Sometimes your id values are already predefined, for example when an external database or content management system assigned them, or when you must use a URI. But if you are free to assign your own ids, what works best for Lucene?
One obvious choice is Java's UUID class, which generates version 4 universally unique identifiers, but it turns out this is the worst choice for performance: it is 4X slower than the fastest. To understand why requires some understanding of how Lucene finds terms.
BlockTree terms dictionary
The purpose of the terms dictionary is to store all unique terms seen during indexing, and map each term to its metadata (docFreq, totalTermFreq, etc.) as well as its postings (documents, offsets, positions and payloads). When a term is requested, the terms dictionary must locate it in the on-disk index and return its metadata.
The default codec uses the BlockTree terms dictionary, which stores all terms for each field in sorted binary order, and assigns the terms into blocks sharing a common prefix. Each block contains between 25 and 48 terms by default. It uses an in-memory prefix-trie index structure (an FST) to quickly map each prefix to the corresponding on-disk block, and on lookup it first checks the index based on the requested term's prefix, and then seeks to the appropriate on-disk block and scans to find the term.
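As a rough sketch of what a primary-key lookup looks like from the application side, each segment is probed in turn. This uses roughly the Lucene 5.x API mentioned in the comments below (older 4.x releases spell the leaf-reader and iterator calls slightly differently), and the field name "id" and the method name are assumptions, not the benchmark's code:

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.util.BytesRef;

// Hypothetical sketch: check whether an application-level id exists by
// probing each segment's terms dictionary in turn.
static boolean idExists(IndexReader reader, BytesRef id) throws IOException {
  for (LeafReaderContext leaf : reader.leaves()) {
    Terms terms = leaf.reader().terms("id");
    // seekExact first consults the in-memory terms index (the FST); only when
    // the term might exist does it seek to and scan the on-disk block
    if (terms != null && terms.iterator().seekExact(id)) {
      return true;
    }
  }
  return false;
}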
In certain cases, when the terms in a segment have a predictable pattern, the terms index can know that the requested term cannot exist on-disk. This fast-match test can be a sizable performance gain, especially when the index is cold (the pages are not cached by the OS's IO cache), since it avoids a costly disk-seek. As Lucene is segment-based, a single id lookup must visit each segment until it finds a match, so quickly ruling out one or more segments can be a big win. It is also vital to keep your segment counts as low as possible!
Given this, fully random ids (like UUID V4) should perform worst, because they defeat the terms index fast-match test and require a disk seek for every segment. Ids with a predictable per-segment pattern, such as sequentially assigned values, or a timestamp, should perform best as they will maximize the gains from the terms index fast-match test.
Testing Performance
I created a simple performance tester to verify this; the full source code is here. The test first indexes 100 million ids into an index with 7/7/8 segment structure (7 big segments, 7 medium segments, 8 small segments), and then searches for a random subset of 2 million of the IDs, recording the best time of 5 runs. I used Java 1.7.0_55, on Ubuntu 14.04, with a 3.5 GHz Ivy Bridge Core i7 3770K.
Since Lucene's terms are now fully binary as of 4.0, the most compact way to store any value is in binary form where all 256 values of every byte are used. A 128-bit id value then requires 16 bytes.
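For instance, a 128-bit UUID can be packed into 16 raw bytes along these lines (a minimal sketch; the benchmark's own encoding may differ):

import java.nio.ByteBuffer;
import java.util.UUID;

// Pack a 128-bit UUID into 16 raw bytes, so every byte uses all 256 values,
// instead of the 36-character hex-and-dash string form.
static byte[] uuidToBytes(UUID uuid) {
  return ByteBuffer.allocate(16)
      .putLong(uuid.getMostSignificantBits())
      .putLong(uuid.getLeastSignificantBits())
      .array();
}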
I tested the following identifier sources (a rough sketch of the JDK-based generators appears after the list):
- Sequential IDs (0, 1, 2, ...), binary encoded.
- Zero-padded sequential IDs (00000000, 00000001, ...), binary encoded.
- Nanotime, binary encoded. But remember that nanotime is tricky.
- UUID V1, derived from a timestamp, nodeID and sequence counter, using this implementation.
- UUID V4, randomly generated using Java's UUID.randomUUID().
- Flake IDs, using this implementation.
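Here is a rough illustration (not the benchmark's actual code) of how the JDK-based sources above might be generated and binary encoded; UUID V4 would use the 16-byte packing shown earlier, and the UUID V1 and Flake ID generators come from the external libraries linked in the list:

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Rough illustration of the JDK-based id sources above.
class IdSources {
  private static final AtomicLong counter = new AtomicLong();

  // Zero-padded sequential: fixed 8-byte width, leading 0x00 bytes kept.
  static byte[] paddedSequential() {
    return ByteBuffer.allocate(8).putLong(counter.getAndIncrement()).array();
  }

  // Plain sequential: the same value with leading zero bytes stripped, so
  // early ids produce shorter terms than later ones.
  static byte[] sequential() {
    byte[] fixed = ByteBuffer.allocate(8).putLong(counter.getAndIncrement()).array();
    int start = 0;
    while (start < fixed.length - 1 && fixed[start] == 0) {
      start++;
    }
    byte[] trimmed = new byte[fixed.length - start];
    System.arraycopy(fixed, start, trimmed, 0, trimmed.length);
    return trimmed;
  }

  // Nanotime, binary encoded (remember: nanotime is tricky).
  static byte[] nanotime() {
    return ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
  }
}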
The results: [chart omitted from this copy] the sequential ids, encoded in binary, are the fastest, while the slowest, UUID V4 (Java's UUID.randomUUID()), is ~4X slower.
But for most applications, sequential ids are not practical. The 2nd fastest is UUID V1, encoded in binary. I was surprised this is so much faster than Flake IDs since Flake IDs use the same raw sources of information (time, node id, sequence) but shuffle the bits differently to preserve total ordering. I suspect the problem is the number of common leading digits that must be traversed in a Flake ID before you get to digits that differ across documents, since the high order bits of the 64-bit timestamp come first, whereas UUID V1 places the low order bits of the 64-bit timestamp first. Perhaps the terms index should optimize the case when all terms in one field share a common prefix.
I also separately tested varying the base across 10, 16, 36, 64 and 256, and in general, for the non-random ids, higher bases are faster. I was pleasantly surprised by this because I expected a base matching the BlockTree block size (25 to 48) would be best.
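As an illustration only (the benchmark's own encoder may differ), encoding an id in base N, one byte per digit, might look like the hypothetical helper below, which assumes a non-negative id:

import java.math.BigInteger;

// Hypothetical helper: encode a non-negative id in the given base, one byte
// per digit, most-significant digit first; higher bases yield shorter terms.
static byte[] encodeInBase(BigInteger id, int base) {
  if (base < 2 || base > 256) {
    throw new IllegalArgumentException("base must be in [2, 256]");
  }
  byte[] scratch = new byte[256];  // enough digits for a 128-bit id even in base 2
  int count = 0;
  BigInteger b = BigInteger.valueOf(base);
  do {
    scratch[count++] = (byte) id.mod(b).intValue();
    id = id.divide(b);
  } while (id.signum() > 0);
  byte[] term = new byte[count];
  for (int i = 0; i < count; i++) {
    term[i] = scratch[count - 1 - i];  // reverse: most-significant digit first
  }
  return term;
}

Note that terms produced this way only sort in numeric order across ids of different magnitudes if they are also padded to a fixed width, which is the difference between the two sequential variants above.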
There are some important caveats to this test (patches welcome)! A real application would obviously be doing much more work than simply looking up ids, and the results may differ since HotSpot must compile much more active code. The index is fully hot in my test (plenty of RAM to hold the entire index); for a cold index I would expect the results to be even more stark since avoiding a disk-seek becomes so much more important. In a real application, the ids using timestamps would be more spread apart in time; I could "simulate" this myself by faking the timestamps over a wider range. Perhaps this would close the gap between UUID V1 and Flake IDs? I used only one thread during indexing, but a real application with multiple indexing threads would spread out the ids across multiple segments at once.
I used Lucene's default TieredMergePolicy, but it is possible a smarter merge policy that favored merging segments whose ids were more "similar" might give better results. The test does not do any deletes/updates, which would require more work during lookup since a given id may be in more than one segment if it had been updated (just deleted in all but one of them).
Finally, I used Lucene's default Codec, but we have nice postings formats optimized for primary-key lookups when you are willing to trade RAM for faster lookups, such as this Google summer-of-code project from last year and MemoryPostingsFormat. Likely these would provide sizable performance gains!
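For example, a sketch only (assuming Lucene 4.8-era APIs, the lucene-codecs and analyzers-common modules on the classpath, and an id field literally named "id"), the id field could be routed to MemoryPostingsFormat while keeping the default TieredMergePolicy:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.codecs.PostingsFormat;
import org.apache.lucene.codecs.lucene46.Lucene46Codec;
import org.apache.lucene.codecs.memory.MemoryPostingsFormat;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.util.Version;

// Build an IndexWriterConfig that keeps the default TieredMergePolicy but
// routes only the "id" field to MemoryPostingsFormat, trading RAM for
// faster primary-key lookups.
static IndexWriterConfig configWithMemoryIdField() {
  IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_48, new StandardAnalyzer(Version.LUCENE_48));
  iwc.setMergePolicy(new TieredMergePolicy());
  iwc.setCodec(new Lucene46Codec() {
    @Override
    public PostingsFormat getPostingsFormatForField(String field) {
      if ("id".equals(field)) {
        return new MemoryPostingsFormat();
      }
      return super.getPostingsFormatForField(field);
    }
  });
  return iwc;
}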
The chart/image is not visible in Firefox.
Hmm I can see it with Firefox on OS X and Windows. Which OS/Firefox version are you using?
Michael, thanks for the informative post!
I have an off-topic question (as usual). This post provides details about the codec internals: "The default codec uses the...". I'm really interested in it. Is there such a detailed explanation already published?
Nevertheless, it seems like this data-structure design exploits some sort of "block" pattern, or is it just common sense? Can you point me to any materials about designing such efficient data structures? I need to design my own.
Thanks!
Hi Mikhail,
Alas, BlockTree is not well described, but it's very similar to burst tries, and I think there's a link to the paper in its javadocs or comments?
Got it in BlockTreeTermsReader! Thanks!
Random UUIDs have another issue: indexing tends to be faster than the box's random number generator.
I go with ids applied at indexing gateways: a v1 UUID, sort64 encoded. https://code.google.com/p/guava-libraries/issues/detail?id=1415. I adapted Cassandra's UUID code, which uses timestamp plus sequence plus node id, for high-frequency events. Then for bucketing/partitioning, a murmur hash of the UUID ngram prefix.
Actually the reason for not going binary with the ids is that the speed improvement wasn't worth losing the ability to easily email and share them.
Good post, useful insight, thanks.
Hi Michael,
It is a very informative post, and I learned a lot about UUIDs.
But there is one place I couldn't understand, about the efficiency difference between Flake IDs and UUID V1.
You point out that the reason for this performance difference is the common prefix in Flake IDs, and that UUID V1 avoids this by reversing the high/low bytes of the timestamp.
If this theory holds, the zero-padded sequential ids should be slower than the plain sequential ids. But your test results seem to tell a different story.
I am a beginner in this field and might not understand it correctly. Can you kindly point out what I may have misunderstood? Thank you.
Edwin.JH.Lee@qq.com
2014/7/25
Hi Anonymous,
That's a good point! I think the explanation is a bit tricky ... when you don't zero-pad the sequential IDs, you end up with blocks in the terms dictionary that mix terms and sub-blocks. Seen as a tree, it means terms can occur on the inside (non-leaf) nodes of the tree.
Whereas with zero-padding, all terms occur on leaf nodes, and the inner nodes just point to other blocks; those inner nodes don't contain their own terms.
This makes a difference in performance because in the terms index (the in-memory structure) we record whether a given block has terms, and if it doesn't have terms, we can avoid seeking to that block in certain cases, which can be a big savings...
Hi Michael,
I have some questions about which id implementation might have better performance for Solr. In our Solr collection (Solr 4.8), we have the unique key defined on the field "id".
In our external Java program, in the past, we first generated a UUID V4 with UUID.randomUUID().toString(). Then we used a cryptographic hash to generate a 32-byte text and used that as the id. So my first question is: will a 32-byte cryptographic hash have better performance than UUID V4?
Now we might need to post more than 20k Solr docs per second, and generating UUID.randomUUID() or the cryptographic hash for each doc might take too much time. We have a simple workaround: share one cryptographic hash across many Solr docs by appending a sequence to it, such as 9AD0BB6DDD7AA9FE4D9EB1FF16B3BDFY000000, 9AD0BB6DDD7AA9FE4D9EB1FF16B3BDFY000001, 9AD0BB6DDD7AA9FE4D9EB1FF16B3BDFY000002, etc.
Secondly, we want to know: if we use a 38-byte id or a 36-byte id (both formed by appending a sequence to the 32-byte cryptographic hash), which might be better? In my understanding, based on what you mentioned in the post, the 38-byte id is better. Right?
Thanks,
Eternal
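A minimal sketch of the sequence-suffix scheme described in the comment above (a hypothetical helper; the 6-digit pad width just matches the examples given):

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one shared hash prefix plus a zero-padded,
// monotonically increasing suffix per document.
final class SequencedIds {
  private final String hashPrefix;           // e.g. a shared 32-character hash
  private final AtomicLong sequence = new AtomicLong();

  SequencedIds(String hashPrefix) {
    this.hashPrefix = hashPrefix;
  }

  String next() {
    // 6 zero-padded digits, matching the ...000000, ...000001 examples above
    return hashPrefix + String.format("%06d", sequence.getAndIncrement());
  }
}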
Hi Anonymous,
I suspect the crypto hash output after processing the UUID V4 will have poor performance, the same as UUID V4 (though maybe a bit worse since you have to spend some CPU to compute the crypto hash), since there will not be any predictability in how IDs are assigned to segments.
On your 2nd question, the length of the ID (36 vs 38 bytes) likely won't affect performance that much, but I would expect the shorter one to be a bit faster, assuming those 2 extra digits appended to the original crypto key are "enough" for your use case.
Mike, this is a great post!
I want to get the "UUID v1 [binary]" performance; obviously the v1 algorithm is up to me, but I want to make sure I'm getting the binary performance. Looking at the UUIDField implementation, it looks like it's just going to be stored as a String, so it sounds like it would boil down to a UTF-8 encoded byte[]. I suppose I need to write my own UUIDField that stores a 16-byte byte[] under the covers, but still operates on the same hex format that UUIDField does (so the field would be interchangeable). Can I just extend BinaryField? How does TermToBytesRefAttribute play into it, if at all?
Hi Ryan,
We just enhanced Lucene's StringField to take a BytesRef ... this is by far the easiest way to get a binary token indexed in Lucene.
See https://issues.apache.org/jira/browse/LUCENE-5989 for details ... it will be included in the Lucene 5.2 release.
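A minimal sketch of what that might look like, assuming the LUCENE-5989 constructor in 5.2 (the field name "id" is an assumption):

import java.nio.ByteBuffer;
import java.util.UUID;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.util.BytesRef;

// Index a 16-byte binary id as a single term, using the LUCENE-5989
// StringField(String, BytesRef, Field.Store) constructor from Lucene 5.2.
static Document docWithBinaryId(UUID uuid) {
  byte[] idBytes = ByteBuffer.allocate(16)
      .putLong(uuid.getMostSignificantBits())
      .putLong(uuid.getLeastSignificantBits())
      .array();
  Document doc = new Document();
  doc.add(new StringField("id", new BytesRef(idBytes), Field.Store.NO));
  return doc;
}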
This is a great improvement, and it will be good motivation to upgrade to 5.2 as soon as it is available. Will UUIDField be updated to make use of this API? It's similar to the ipv6 issue, except that I don't expect numeric/range queries to be of much use.
I'm not sure whether UUIDField will be updated...
All of this is to optimize retrieving documents by IDs? What scenario benefits most from direct ID retrievals?
Assuming I'm building an index to search, most of my queries will be searches, would they not? Could this affect searching somehow too, or is this only for the "lots of direct document id retrievals" case?
Hi Anonymous,
Good questions! "Lookup by ID" most affects application of deletes during indexing, or if you "update" a document in ES or Solr, which is delete + add to Lucene. Even if you append-only with ES, it asks Lucene to update, so Lucene must go and try to find the ID (which won't exist).
Searching itself doesn't normally do ID lookups, unless there's a second pass to retrieve documents by ID, which I think Solr does, whereas ES retrieves by Lucene's docID which is much more efficient since the user's ID has already been resolved to docID.
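In Lucene terms, that update path looks roughly like this (a sketch; the field name "id" and the binary id value are assumptions):

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.util.BytesRef;

// An ES/Solr "update" becomes delete-by-term plus add in Lucene; the id
// term lookup discussed above happens when the buffered delete is applied.
static void updateById(IndexWriter writer, byte[] idBytes, Document doc) throws IOException {
  writer.updateDocument(new Term("id", new BytesRef(idBytes)), doc);
}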
cool
ReplyDeleteHi Michael,
I didn't understand the answer you gave to Ryan Josal. Is it possible to use "UUID v1 [binary]" on Lucene 4.8? Should I adjust UUIDField to take BytesRef?
How will this change impact segment size/merging?
Thanks!
Hi Michael McCandless
Do these findings still hold for Elasticsearch 2.3.0, i.e. Lucene 5.5.0? We are planning to use random IDs to avoid hotspots in our cluster, and our first choice was UUID V4.
I suspect the findings still hold, but I wouldn't worry so much if you have your own reasons for choosing random IDs: the performance cost is likely minor compared to your overall indexing cost.
But, then, you shouldn't see hotspots in the cluster with your IDs unless the hashing function that spreads IDs across shards is somehow struggling with your IDs.
What was the width of your zero-padded, binary encoded sequential ID? 32 bits?