To apply a filter, Lucene must compute the intersection of the documents matching the query against the documents allowed by the filter. Today, we do that in IndexSearcher like this:
  while (true) {
    if (scorerDoc == filterDoc) {
      // Check whether the scorer is exhausted, but only before collecting.
      if (scorerDoc == DocIdSetIterator.NO_MORE_DOCS) {
        break;
      }
      collector.collect(scorerDoc);
      filterDoc = filterIter.nextDoc();
      scorerDoc = scorer.advance(filterDoc);
    } else if (scorerDoc > filterDoc) {
      filterDoc = filterIter.advance(scorerDoc);
    } else {
      scorerDoc = scorer.advance(filterDoc);
    }
  }
We call this the "leapfrog approach": the query and the filter take turns trying to advance to each other's next matching document, often jumping past the target document. When both land on the same document, it's collected.
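To make the dance concrete, here is a minimal, self-contained sketch of the leapfrog loop above, using a toy iterator over sorted doc ID arrays in place of Lucene's real Scorer and DocIdSetIterator (the class names here are illustrative, not Lucene's):

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for Lucene's DocIdSetIterator: wraps a sorted array of
// doc IDs and supports nextDoc() and advance(target).
class SimpleDocIdIterator {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;
    private final int[] docs;
    private int pos = -1;

    SimpleDocIdIterator(int[] docs) { this.docs = docs; }

    int nextDoc() {
        pos++;
        return pos < docs.length ? docs[pos] : NO_MORE_DOCS;
    }

    // Advance to the first doc >= target (always moves at least one doc,
    // and may land past the target).
    int advance(int target) {
        int d;
        do { d = nextDoc(); } while (d < target);
        return d;
    }
}

public class Leapfrog {
    // The same leapfrog loop as above: the two iterators take turns
    // advancing to each other's position until they land on the same doc.
    static List<Integer> intersect(int[] scorerDocs, int[] filterDocs) {
        SimpleDocIdIterator scorer = new SimpleDocIdIterator(scorerDocs);
        SimpleDocIdIterator filter = new SimpleDocIdIterator(filterDocs);
        List<Integer> hits = new ArrayList<>();
        int scorerDoc = scorer.nextDoc();
        int filterDoc = filter.nextDoc();
        while (true) {
            if (scorerDoc == filterDoc) {
                if (scorerDoc == SimpleDocIdIterator.NO_MORE_DOCS) break;
                hits.add(scorerDoc);          // "collect" the match
                filterDoc = filter.nextDoc();
                scorerDoc = scorer.advance(filterDoc);
            } else if (scorerDoc > filterDoc) {
                filterDoc = filter.advance(scorerDoc);
            } else {
                scorerDoc = scorer.advance(filterDoc);
            }
        }
        return hits;
    }
}
```

Intersecting {1, 3, 5, 7, 9} with {2, 3, 6, 7, 10} this way collects 3 and 7, skipping over the non-overlapping docs on each side.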
Unfortunately, for various reasons this implementation is inefficient (these are spelled out in more detail in LUCENE-1536):
- The advance method for most queries is costly.
- The advance method for most filters is usually cheap.
- If the number of documents matching the query is far higher than the number matching the filter, or vice versa, it's better to drive the matching by whichever is more restrictive.
- If the filter supports fast random access, and is not super sparse, it's better to apply it during postings enumeration, like deleted docs.
- Query scorers don't have a random access API, only .advance(), which does unnecessary extra work .next()'ing to the next matching document.
Until then, there is a simple way to get a large speedup in many cases, addressing the 4th issue above. Prior to flexible indexing, when you obtained the postings enumeration for documents matching a given term, Lucene would silently filter out deleted documents. With flexible indexing, the API now allows you to pass in a bit set marking the documents to skip. Normally you'd pass in the IndexReader's deleted docs. But, with a simple subclass of FilterIndexReader, it's possible to use any filter as the documents to skip.
To test this, I created a simple class, CachedFilterIndexReader (I'll attach it to LUCENE-1536). You pass it an existing IndexReader, plus a Filter, and it creates an IndexReader that filters out both deleted documents and documents that don't match the provided filter. Basically, it compiles the IndexReader's deleted docs (if any), and the negation of the incoming filter, into a single cached bit set, and then passes that bit set as the skipDocs whenever postings are requested. You can then create an IndexSearcher from this reader, and all searches against it will be filtered according to the filter you provided.
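The core computation such a class performs can be sketched in plain Java: combine the deleted docs with the negation of the filter into a single skip set. This uses java.util.BitSet as a stand-in for Lucene's actual Bits/OpenBitSet types, so the shapes here are illustrative, not the real API:

```java
import java.util.BitSet;

// Illustrative sketch of the bit-set compilation a class like
// CachedFilterIndexReader performs: a doc is skipped during postings
// enumeration if it is deleted OR the filter does not accept it.
public class SkipDocsBuilder {
    static BitSet buildSkipDocs(BitSet deletedDocs, BitSet filterAccepts, int maxDoc) {
        BitSet skip = new BitSet(maxDoc);
        for (int doc = 0; doc < maxDoc; doc++) {
            // Deleted docs are OR'd with the negation of the filter.
            if (deletedDocs.get(doc) || !filterAccepts.get(doc)) {
                skip.set(doc);
            }
        }
        return skip;
    }
}
```

The resulting bit set is computed once and cached, then handed over as the skipDocs whenever postings are requested, so every term's enumeration silently honors the filter.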
This is just a prototype, and has certain limitations, e.g. it doesn't implement reopen, it's slow to build up its cached filter, etc.
Still, it works very well! I tested it on a 10M Wikipedia index, with a random filter accepting 50% of the documents:
| Query           | QPS Default | QPS Flex | % change |
|-----------------|-------------|----------|----------|
| united~0.7      | 19.95       | 19.25    | -3.5%    |
| un*d            | 43.19       | 52.21    | 20.9%    |
| unit*           | 21.53       | 30.52    | 41.8%    |
| "united states" | 6.12        | 8.74     | 42.9%    |
| +united +states | 9.68        | 14.23    | 47.0%    |
| united states   | 7.71        | 14.56    | 88.9%    |
| states          | 15.73       | 36.05    | 129.2%   |
I'm not sure why the fuzzy query got a bit slower, but the speedups on the other queries are awesome. However, this approach is actually slower if the filter is very sparse. To test this, I ran just the TermQuery ("states") against different filter densities.
The cutover, for TermQuery at least, is somewhere around 1.1%, meaning if the filter accepts more than 1.1% of the index, it's best to use the CachedFilterIndexReader class; otherwise it's best to use Lucene's current implementation.
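That cutover suggests a simple density check when deciding which strategy to use. The helper below is purely illustrative (it is not part of Lucene), and the 1.1% threshold is just the rough TermQuery cutover measured here, not a universal constant:

```java
// Illustrative strategy chooser based on the cutover observed above.
// The threshold is the rough TermQuery measurement from this test, an
// assumption rather than a tuned or general-purpose value.
public class FilterStrategy {
    static final double CUTOVER_DENSITY = 0.011; // ~1.1% of the index

    // True: apply the filter during postings enumeration (random access,
    // like deleted docs). False: fall back to the leapfrog advance() loop.
    static boolean useRandomAccessFilter(int filterCardinality, int maxDoc) {
        double density = (double) filterCardinality / maxDoc;
        return density > CUTOVER_DENSITY;
    }
}
```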
Until we can fix Lucene to properly optimize filter and query intersection, this class, thanks to the new flex API, gives you a viable, fully external means of getting massive speedups for non-sparse filters!