Wednesday, March 14, 2012

New index statistics in Lucene 4.0

In the past, Lucene recorded only the bare minimum of aggregate index statistics necessary to support its hard-wired classic vector-space scoring model.

Fortunately, this situation has greatly improved in trunk (to be 4.0), where we have a selection of modern scoring models, including Okapi BM25, Language models, Divergence from Randomness models and Information-based models. To support these, we now save a number of commonly used index statistics per index segment, and make them available at search time.

To understand the new statistics, let's pretend we've indexed the following two example documents, each with only one field "title":
  • document 1: The Lion, the Witch, and the Wardrobe
  • document 2: The Da Vinci Code
Assume we tokenize on whitespace, remove commas, downcase all terms and don't discard stop-words. Here are the statistics Lucene tracks:
    TermsEnum.docFreq()
How many documents contain at least one occurrence of the term in the field; 3.x indices also save this (TermEnum.docFreq()). For term "lion" docFreq is 1, and for term "the" it's 2.

    Terms.getSumDocFreq()
Number of postings, i.e. sum of TermsEnum.docFreq() across all terms in the field. For our example documents this is 9.

    TermsEnum.totalTermFreq()
Number of occurrences of this term in the field, across all documents. For term "the" it's 4, for term "vinci" it's 1.

    Terms.getSumTotalTermFreq()
Number of term occurrences in the field, across all documents; this is the sum of TermsEnum.totalTermFreq() across all unique terms in the field. For our example documents this is 11.

    Terms.getDocCount()
How many documents have at least one term for this field. In our example documents, this is 2, but if for example one of the documents was missing the title field, it would be 1.

    Terms.getUniqueTermCount()
How many unique terms were seen in this field. For our example documents this is 8. Note that this statistic is of limited utility for scoring, because it's only available per-segment and you cannot (efficiently!) compute this across all segments in the index (unless there is only one segment).

    Fields.getUniqueTermCount()
Number of unique terms across all fields; this is the sum of Terms.getUniqueTermCount() across all fields. In our example documents this is 8. Note that this is also only available per-segment.

    Fields.getUniqueFieldCount()
Number of unique fields. For our example documents this is 1; if we also had a body field and an abstract field, it would be 3. Note that this is also only available per-segment.
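
For example, here's roughly how you could read these per-segment statistics for the title field at search time. This is only a sketch against the trunk APIs as of this writing, so exact class and method names may shift before 4.0 is released:

    // open a reader and visit each segment
    DirectoryReader reader = DirectoryReader.open(dir);
    for (AtomicReaderContext ctx : reader.leaves()) {
      AtomicReader segmentReader = ctx.reader();
      Terms terms = segmentReader.terms("title");     // per-segment, per-field statistics
      if (terms != null) {
        System.out.println("sumDocFreq=" + terms.getSumDocFreq());
        System.out.println("sumTotalTermFreq=" + terms.getSumTotalTermFreq());
        System.out.println("docCount=" + terms.getDocCount());

        // per-term statistics: walk every term in the field
        TermsEnum termsEnum = terms.iterator(null);
        BytesRef term;
        while ((term = termsEnum.next()) != null) {
          System.out.println(term.utf8ToString()
              + " docFreq=" + termsEnum.docFreq()
              + " totalTermFreq=" + termsEnum.totalTermFreq());
        }
      }
    }
    reader.close();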

3.x indices only store TermsEnum.docFreq(), so if you want to experiment with the new scoring models in Lucene 4.0, you should either re-index or upgrade your index using IndexUpgrader. Note that the new scoring models all use the same single-byte norms format, so you can freely switch between them without re-indexing.

In addition to what's stored in the index, these statistics are also available per-field, per-document while indexing, in the FieldInvertState passed to the Similarity.computeNorm method for both 3.x and 4.0:
    length
How many tokens in the document. For document 1 it's 7; for document 2 it's 4.

    uniqueTermCount
For this field in this document, how many unique terms are there? For document 1, it's 5; for document 2 it's 4.

    maxTermFrequency
What was the count of the most frequent term in this document? For document 1 it's 3 ("the" occurs 3 times); for document 2 it's 1.

In 3.x, if you want to consume these indexing-time statistics, you'll have to save them away yourself (e.g., somehow encoding them into the single-byte norm value). However, since 4.0 uses doc values for norms, you have more freedom to encode these statistics however you'd like. Your custom similarity can then pull from these.
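
For example, a custom Similarity along these lines could fold these statistics into the norm. This is just a rough sketch against the 3.x computeNorm signature (the 4.0 signature differs since norms are doc values there), and the class name and formula are made up for illustration:

    public class LengthAndBurstinessSimilarity extends DefaultSimilarity {
      @Override
      public float computeNorm(String field, FieldInvertState state) {
        // favor shorter fields...
        float lengthNorm = (float) (1.0 / Math.sqrt(Math.max(1, state.getLength())));
        // ...and damp documents dominated by a single repeated term
        float burstiness = (float) state.getMaxTermFrequency() / Math.max(1, state.getLength());
        return state.getBoost() * lengthNorm * (1.0f - 0.5f * burstiness);
      }
    }

Remember that in 3.x this float is still squeezed into a single byte, so small differences can be lost to quantization.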

From these available statistics you're now free to derive other commonly used statistics:
  • Average field length across all documents is Terms.getSumTotalTermFreq() divided by maxDoc (or Terms.getDocCount(), if not all documents have the field); see the sketch after this list.

  • Average within-document field term frequency is FieldInvertState.length divided by FieldInvertState.uniqueTermCount.

  • Average number of unique terms per field across all documents is Terms.getSumDocFreq() divided by maxDoc (or Terms.getDocCount(field), if not all documents have the field).
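
For example (a sketch, reusing the per-segment Terms instance from the earlier snippet; "title" is just our example field):

    Terms terms = segmentReader.terms("title");
    if (terms != null) {
      int docCount = terms.getDocCount();   // or segmentReader.maxDoc() if every doc has the field
      // average field length: 11 / 2 = 5.5 for our two example documents
      double avgFieldLength = (double) terms.getSumTotalTermFreq() / docCount;
      // average unique terms per document: 9 / 2 = 4.5 for our two example documents
      double avgUniqueTermsPerDoc = (double) terms.getSumDocFreq() / docCount;
    }
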
Remember that the statistics do not reflect deleted documents until those documents are merged away; in general this also means that segment merging will alter scores! Similarly, if the field omits term frequencies, the statistics will not be correct (though they will still be consistent with one another: we will pretend each term occurred once per document).

Monday, March 5, 2012

Crowd-data can find drug and vaccine side effects

The social crowd has proven to be powerful, if you can find some way to harness it: crowd-sourcing can perform tasks and solve collaborative problems, and crowd-funding can raise substantial financing.

I suspect crowd-data will similarly become an effective way to create large, realistic databases.

A great application of this is the medical world, where many people post to health forums raising medical problems, possible side effects from drugs and vaccines, etc. Why not collect all such posts to find previously undiscovered problems? In fact, this paper describes just that: the authors extracted the nasty side effects of statin drugs based on posts to online health forums. Similarly, this abstract describes a system that used crowd-data to spot nasty side effects from Singulair, years before the FDA issued a warning. The VAERS database, which gathers parent-reported problems after children receive vaccines, is another example.

Unfortunately the drug safety trials that take place before a drug can be released are not especially trustworthy. Here's a scary quote from that interview:

    When you look at the highest quality medical studies, the odds that a study will favor the use of a new drug are 5.3 times higher for commercially funded studies than for noncommercially funded studies.

And that was 7 years ago! I imagine the situation has only gotten worse.

When a new drug is released, the true, unbiased drug trial begins when millions of guinea-pigs start taking it. Crowd-data makes it possible to draw conclusions from that post-market drug trial.

Of course there are challenging tradeoffs: crowd-data, being derived from "ordinary people" without any rigorous standard collection process, can be dirty, incomplete and reflect sampling bias (only people experiencing nasty side effects speak up). For these reasons, old-fashioned journals turn their noses up at papers drawing conclusions from crowd-data.

Nevertheless, I believe such limitations are more than offset by the real-time nature and sheer scale of the millions of people constantly posting information over time. Inevitably, trustworthy patterns will emerge from the noise. Unlike the synthetic drug trial, this data is as real as you can get: sure, perhaps the drug seemed fine in the carefully controlled pre-market testing, but then out in the real world, unexpected interactions can suddenly emerge. Crowd-data will enable us to find such cases quickly and reliably, as long as we still have enough willing guinea-pigs!

Fast forward a few years and I expect crowd-data will be an excellent means of drawing conclusions, and will prove more reliable than the company-funded pre-market drug trials.

Thursday, March 1, 2012

Transactional Lucene

Many users don't appreciate the transactional semantics of Lucene's APIs and how this can be useful in search applications. For starters, Lucene implements ACID properties:
  • Atomicity: when you make changes (adding, removing documents) in an IndexWriter session, and then commit, either all (if the commit succeeds) or none (if the commit fails) of your changes will be visible, never something in-between. Some methods have their own atomic behavior: if you call updateDocument, which is implemented as a delete followed by an add, you'll never see the delete without the add, even if you open a near-real-time (NRT) reader or commit from a separate thread. Similarly, if you add a block of documents using the relatively new addDocuments method, you'll see either none or all of the documents in any reader you obtain (see the sketch after this list).

  • Consistency: if the computer or OS crashes, or the JVM crashes or is killed, or power is lost, your index will remain intact (ie, not corrupt). Note that other problems, such as bad RAM, a bit-flipping CPU or file system corruption, can still easily corrupt the index!

  • Isolation: while IndexWriter is making changes, nothing is visible to any IndexReader searching the index, until you commit or open a new NRT reader. Only one IndexWriter instance at a time can change the index.

  • Durability: once commit returns, all changes have been written to durable storage (assuming your I/O system correctly implements fsync). If the computer or OS crashes, or the JVM crashes or is killed, or power is lost to the computer, all changes will still be present in the index.
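
Here's a small sketch of the atomicity point above; the document variables and "id" field are made up, and the NRT open shown is the 4.0 flavor (in 3.x you'd use IndexWriter.getReader or IndexReader.open(writer, true) instead):

    // implemented as delete-then-add, but no reader will ever see the delete without the add
    writer.updateDocument(new Term("id", "42"), newVersionOfDoc);

    // a block of documents is added atomically: readers see all of them or none
    writer.addDocuments(Arrays.asList(doc1, doc2, doc3));

    // near-real-time reader: reflects only whole changes that had completed when it was opened
    DirectoryReader nrtReader = DirectoryReader.open(writer, true);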

Lucene provides a two-phased commit API: call the prepareCommit method to do all of the hard work (applying buffered deletes, writing buffered documents, fsyncing files). If something is going to go wrong (e.g., disk fills up) it'll almost certainly happen during this first phase. Then, call commit to complete the transaction.
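
In code it looks something like this (a sketch):

    try {
      writer.prepareCommit();   // phase 1: applies deletes, flushes and fsyncs; the most likely place to fail
    } catch (IOException ioe) {
      writer.rollback();        // discard everything since the last successful commit
      throw ioe;
    }
    writer.commit();            // phase 2: small and very unlikely to fail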

When you close the IndexWriter, it calls commit under the hood. If, instead, you want to discard all changes since the last commit, call the rollback method instead, which also closes the writer. You can even rollback a CREATE: if you have an existing index, and you open an IndexWriter on it with OpenMode.CREATE, and then rollback, the index will be unchanged. Likewise, if you call deleteAll and then rollback.
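
For example, this sequence (a sketch; the analyzer and the new document are placeholders) leaves an existing index exactly as it was:

    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40, analyzer);
    config.setOpenMode(IndexWriterConfig.OpenMode.CREATE);   // would wipe the old index once committed
    IndexWriter writer = new IndexWriter(dir, config);
    writer.addDocument(someNewDocument);
    writer.rollback();   // closes the writer; the pre-existing index is untouched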

Note that merely opening an IndexWriter on a new directory does not create an empty commit; ie, you cannot open an IndexReader on the directory until you've called commit yourself.

Lucene does not implement a transaction log itself, but it's easy to build that layer on top. For example, popular search servers such as Solr and ElasticSearch do so.

Multiple commits in one index

A single Lucene index is free to contain more than one commit; this is a powerful yet often overlooked feature. Each commit holds a point-in-time view of the index as it existed when the commit was created.

This is similar to the snapshots and writable clones available in modern filesystems like ZFS and the up-and-coming Btrfs. In fact, Lucene is able to efficiently expose multiple commits for the very same underlying reason: all index segments and files are write-once, just like the file blocks in ZFS and Btrfs.

To save multiple commits in your index, just implement your own IndexDeletionPolicy and pass it to IndexWriter. This is the class Lucene uses to know which commits should be deleted: IndexWriter invokes it on opening an index and whenever a commit succeeds. The default policy, KeepOnlyLastCommitDeletionPolicy, deletes all but the last commit. If you use NoDeletionPolicy then every commit is retained!
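
For example, to retain every commit (a sketch; a real application would more likely implement a policy that deletes commits by age or count):

    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40, analyzer);
    config.setIndexDeletionPolicy(NoDeletionPolicy.INSTANCE);   // never delete any commit
    IndexWriter writer = new IndexWriter(dir, config);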

You can pass a userData (Map<String,String>) to commit, to record custom information (opaque to Lucene) about that commit, and then use IndexReader.listCommits to find all commits in the index. Once you've found a commit, you can open an IndexReader on it to search the index as of that commit.
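
Putting those together (a sketch; the "catalogVersion" key is made up, and in 3.x the listCommits and open methods live on IndexReader rather than DirectoryReader):

    // record some custom metadata with this commit
    Map<String,String> commitUserData = new HashMap<String,String>();
    commitUserData.put("catalogVersion", "2012-03-01");
    writer.commit(commitUserData);

    // later: locate that commit and search the index as of that point in time
    for (IndexCommit commit : DirectoryReader.listCommits(dir)) {
      if ("2012-03-01".equals(commit.getUserData().get("catalogVersion"))) {
        DirectoryReader reader = DirectoryReader.open(commit);
        // ... run searches against this point-in-time view ...
        reader.close();
      }
    }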

You can also open an IndexWriter on a prior commit, to effectively roll back all changes after it: this is just like the rollback method, except it enables you to rollback across commits and not just the changes made in the current IndexWriter session.
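
For example (a sketch; priorCommit is one of the IndexCommits returned by listCommits, and your deletion policy must have kept it alive):

    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40, analyzer);
    config.setIndexDeletionPolicy(NoDeletionPolicy.INSTANCE);
    config.setIndexCommit(priorCommit);   // start the writer from this older commit
    IndexWriter writer = new IndexWriter(dir, config);
    // changes made after priorCommit are effectively rolled back once this writer commits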

Old commits are still kept even when you open an index with OpenMode.CREATE. It's also fine to pass OpenMode.CREATE when IndexReaders are still searching the old commits. This enables fun use cases, such as fully re-indexing your content between each commit without affecting any open readers.

Combining all of these fun transactional features, you can do some cool things:

  • Hot backups, using SnapshotDeletionPolicy or PersistentSnapshotDeletionPolicy: these deletion policies make it trivial to take a "live" backup of the index without blocking ongoing changes with IndexWriter. The backup can easily be incremental (just copy the new files, remove the deleted ones), and you can freely throttle the IO to minimize any interference with searching. See the sketch after the end of this list.

  • Searching different catalog versions: perhaps you run an e-commerce site and ship multiple versions of your catalog. In this case you can keep older commits around, each corresponding to a specific version of your catalog, enabling users to choose which version to search.

  • Repeatable indexing tests from the same initial index: maybe you want to run a bunch of performance tests, perhaps trying different RAM buffer sizes or merge factors, starting from a large initial index. To do this, simply run each test, but in the end, instead of closing the IndexWriter, use the rollback method to quickly return the index to its initial state, ready for the next test.

  • Force all index segments to be merged down to a single segment, but also keep the prior multi-segment commit. Then you can do tests to compare multi-segment vs single-segment performance.

  • Indexing and searching over the NFS file system: because NFS does not protect still-open files from deletion, you must use an IndexDeletionPolicy to keep each commit around until all open readers have finished with the commit (ie, reopened to a newer commit). The simple approach is time-based, for example: don't delete the commit until it is 15 minutes old, and then always reopen your readers every 5 minutes. Without this you'll hit all sorts of scary exceptions when searching over NFS.

  • Distributed commit: if you have other resources that must commit atomically along with the changes to your Lucene index, you can use the two-phased commit API. This is simple, but vulnerable to failures during the 2nd phase; to also recover from such cases, for example if Lucene completed its 2nd phase commit but the database's 2nd phase hit some error or crash or power loss, you can easily rollback Lucene's commit by opening an IndexWriter on the prior commit.

  • Experimental index changes: maybe you want to try re-indexing some subset of your index in a new way, but you're not sure it'll work out. In this case, just keep the old commit around, and then rollback if it didn't work out, or delete the old commit if it did.

  • Time-based snapshots: maybe you'd like the freedom to roll back to your index as it existed 1 day ago, 1 week ago, 1 month ago, etc., so you preserve commits based on their age.
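
Here's a rough hot-backup sketch using SnapshotDeletionPolicy; the snapshot/release API has shifted across releases, so this follows the snapshot(String id)/release(String id) flavor, and the policy must already be installed on your IndexWriter via IndexWriterConfig.setIndexDeletionPolicy:

    SnapshotDeletionPolicy snapshotter =
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    // ... pass snapshotter to IndexWriterConfig before creating the writer ...

    IndexCommit commit = snapshotter.snapshot("backup-1");   // pin the current commit
    try {
      for (String fileName : commit.getFileNames()) {
        // copy fileName from the index directory to the backup location,
        // throttling IO as needed and skipping files an earlier backup already copied
      }
    } finally {
      snapshotter.release("backup-1");   // un-pin, so these files can be deleted again
    }
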
Remember that keeping more than one commit alive will necessarily consume additional disk space; however, the overhead is often small, since the multiple commits will usually share common segments, especially the larger, older ones.