- Apache Tika, implemented in Java, using its `LanguageIdentifier` class
- `language-detection`, a project on Google Code, also implemented in Java
For the test corpus I used the corpus described here, created by the author of `language-detection`. It contains 1,000 texts from each of 21 languages, randomly sampled from the Europarl corpus.
It's not a perfect test (no test ever is!): the content is already very clean plain text; there are no domain, language, or encoding hints to apply (which you'd normally have with HTML content loaded over HTTP); and it "only" covers 21 languages (versus the at least 76 that CLD can detect).
CLD and `language-detection` cover all 21 languages, but Tika is missing Bulgarian (`bg`), Czech (`cs`), Lithuanian (`lt`) and Latvian (`lv`), so I only tested on the remaining subset of 17 languages that all three detectors support. This works out to 17,000 texts totalling 2.8 MB.
Many of the texts are very short, making the test challenging: the shortest is 25 bytes, and 290 (1.7%) of the 17,000 are 30 bytes or less.
In addition to the challenges of the corpora, the differences in the detectors make the comparison somewhat apples to oranges. For example, CLD detects at least 76 languages, while `language-detection` detects 53 and Tika detects 27, so this biases against CLD, and against `language-detection` to a lesser extent, since their classification task is harder relative to Tika's.
For CLD, I disabled its option to abstain (`removeWeakMatches`), so that it always guesses a language even when confidence is low, matching the other two detectors. I also turned off `pickSummaryLanguage`, as it was also hurting accuracy; with that off, CLD simply picks the highest-scoring match as the detected language.
For `language-detection`, I ran with the default `ALPHA` of 0.5, and set the random seed to 0.
Here are the raw results:
CLD results (total 98.82% = 16800 / 17000):
Tika results (total 97.12% = 16510 / 17000):
`language-detection` results (total 99.22% = 16868 / 17000):
Some quick analysis:
- The `language-detection` library gets the best accuracy, at 99.22%, followed by CLD at 98.82% and Tika at 97.12%. Net/net these accuracies are very good, especially considering how short some of the tests are!
- The difficult languages are Danish (confused with Norwegian), Slovene (confused with Croatian) and Dutch (for Tika and `language-detection`). Tika in particular has trouble with Spanish (confusing it with Galician). These confusions are to be expected: the languages are very similar.
- In the cases where `language-detection` was wrong, Tika was also wrong 37% of the time, and CLD was also wrong 23% of the time. These numbers are quite low! They tell us that the errors are somewhat orthogonal, i.e. the libraries tend to get different test cases wrong. For example, it's not the case that they are all always wrong on the short texts.
This means the libraries are using different overall signals to achieve their classification (for example, perhaps they were trained on different training texts). This is encouraging since it means, in theory, one could build a language detection library combining the signals of all of these libraries and achieve better overall accuracy.
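To make the orthogonality observation concrete, here is a small sketch of how such conditional error rates can be computed. The per-document correctness flags below are made up for illustration; they are not the actual test results:

```python
# Hypothetical per-document correctness flags (True = detector got it right);
# the real test used 17,000 Europarl texts.
langdetect_ok = [True, False, True, False, False, True, True, False]
tika_ok       = [True, True,  True, False, True,  False, True, False]

# Of the cases where language-detection was wrong, how often was Tika also wrong?
ld_wrong = [i for i, ok in enumerate(langdetect_ok) if not ok]
both_wrong = sum(1 for i in ld_wrong if not tika_ok[i])
print(f"Tika also wrong in {both_wrong / len(ld_wrong):.0%} of language-detection's errors")
```

A low overlap here means the detectors fail on different documents, which is exactly what makes combining them attractive.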
You could also make a simple majority-rules voting system across these (and other) libraries. I tried exactly that approach: if any language receives 2 or more votes from the three detectors, select that as the detected language; otherwise, go with `language-detection`'s choice. This gives the best accuracy of all: total 99.59% (= 16930 / 17000)!
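The voting rule is simple enough to sketch directly. This is a minimal illustration with placeholder language codes; `vote` and the detector ordering are my own names, not part of any of the libraries:

```python
from collections import Counter

def vote(predictions, fallback_index=2):
    """Majority-rules vote across detector predictions.

    predictions: one guess per detector, e.g. [CLD, Tika, language-detection].
    On a three-way disagreement, fall back to the designated detector's
    choice (language-detection's, in the experiment above).
    """
    label, count = Counter(predictions).most_common(1)[0]
    if count >= 2:
        return label
    return predictions[fallback_index]

# Two detectors agree -> the majority wins:
print(vote(["da", "no", "da"]))  # -> da
# Three-way split -> fall back to language-detection's guess:
print(vote(["da", "no", "sv"]))  # -> sv
```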
Finally, I also separately tested the run time for each package. Each time is the best of 10 runs through the full corpus:
|Detector|Time|Throughput|
|---|---|---|
|CLD|171 msec|16.331 MB/sec|
|`language-detection`|2367 msec|1.180 MB/sec|
|Tika|42219 msec|0.066 MB/sec|
CLD is incredibly fast! `language-detection` is an order of magnitude slower, and Tika is another order of magnitude slower still (not sure why).
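As a sanity check, the throughput column is just corpus size divided by elapsed time. The sketch below uses the rounded 2.8 MB figure, so the results differ slightly from the table, which presumably used the exact byte count:

```python
corpus_mb = 2.8  # rounded corpus size; the table's figures used the exact byte count

for name, msec in [("CLD", 171), ("language-detection", 2367), ("Tika", 42219)]:
    print(f"{name}: {corpus_mb / (msec / 1000.0):.3f} MB/sec")
```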
I used the 09-13-2011 release of `language-detection`, the current trunk (svn revision 1187915) of Apache Tika, and the current trunk (hg revision b0adee43f3b1) of CLD. All sources for the performance tests are available from here.