I write a monthly column for PCPlus, a computer news-views-n-reviews magazine in the UK (actually there are 13 issues a year — there’s an Xmas issue as well — so it’s a bit more than monthly). The column is called Theory Workshop and appears in the Make It section of the magazine. When I signed up, my editor and the magazine were gracious enough to allow me to reprint the articles here after, say, a year or so. So when I purchase the current issue, I publish the article from roughly a year ago here.
Since I picked up April 2010’s issue when I was in England just over a week ago, I’m publishing this article from April 2009 slightly earlier than I usually do (which means May 2009’s article will be posted in June or something).
The topic is pretty interesting: how does a spell-checker give you alternatives to a badly-spelled word? The first step — finding out if a word is misspelled — is simple enough as a basic algorithm: just look it up in a big ol’ list o’ words. The next step can seem daunting though: how can you generate a valid list of possible corrections? The article talks about a few possibilities: the Levenshtein distance, the Damerau algorithm, and the Soundex method.
The Levenshtein distance is a bit impractical for a large list of words, although there are certain techniques you can use to speed things up. It essentially calculates the edit distance (the number of character insertions, deletions, and substitutions needed to turn one word into another) between the misspelled word and every other word in the list. The words with the smallest edit distance are your correction candidates. As I said, possibly too slow for a large dictionary, although it’s a great algorithm for creating diff tools.
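As a concrete illustration (my own code, not the article’s), here’s the standard dynamic-programming formulation of the edit distance in Python, keeping just one row of the table in memory at a time:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: the minimum number of insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance from the first i-1 chars of a
    # to the first j chars of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # delete ca from a
                curr[j - 1] + 1,            # insert cb into a
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]
```

Run this against every word in the dictionary and keep the words with the smallest result — which is exactly why it gets slow as the word list grows.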
The Damerau algorithm essentially calculates all the words that can be formed from the misspelled word by a single typing slip (one letter deleted, inserted, substituted, or transposed with its neighbour) and checks those against the dictionary. Those that are present are valid correction candidates. Although single mistakes are common, two or more mistyped letters in a single word are fairly rare, so the algorithm does pretty well (as the article shows).
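A quick Python sketch of the candidate-generation idea, assuming a lowercase alphabet and a dictionary held as a set (both my assumptions, not details from the article):

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def single_edit_candidates(word: str, dictionary: set) -> set:
    """All dictionary words reachable from `word` by one typing slip:
    a deletion, an insertion, a substitution, or an adjacent
    transposition."""
    variants = set()
    for i in range(len(word) + 1):
        left, right = word[:i], word[i:]
        if right:
            variants.add(left + right[1:])                        # deletion
        if len(right) > 1:
            variants.add(left + right[1] + right[0] + right[2:])  # transposition
        for c in ALPHABET:
            variants.add(left + c + right)                        # insertion
            if right:
                variants.add(left + c + right[1:])                # substitution
    return variants & dictionary
```

For a word of length n there are only a few hundred variants (roughly 54n + 25 of them), so membership tests against even a big dictionary are cheap.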
The Soundex algorithm is a “phonetic” algorithm: it finds candidate words that “sound like” the misspelled word by reducing each word to a short code and matching any dictionary words whose code is the same.
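Here’s my own simplified rendering of the classic American Soundex rules (keep the first letter, map the remaining letters to digit classes, collapse runs of the same digit, pad the code to four characters); it’s a sketch, not the article’s code:

```python
# Letters grouped by how similar they sound; vowels, y, h, and w
# get no code at all.
SOUNDEX_CODES = {
    **dict.fromkeys("bfpv", "1"),
    **dict.fromkeys("cgjkqsxz", "2"),
    **dict.fromkeys("dt", "3"),
    "l": "4",
    **dict.fromkeys("mn", "5"),
    "r": "6",
}

def soundex(word: str) -> str:
    """Simplified American Soundex: first letter plus up to three
    digit codes; similar-sounding words map to the same code."""
    word = word.lower()
    first = word[0].upper()
    digits = []
    prev = SOUNDEX_CODES.get(word[0], "")
    for ch in word[1:]:
        code = SOUNDEX_CODES.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "hw":   # h and w don't break a run of equal codes
            prev = code
    return (first + "".join(digits) + "000")[:4]
```

The classic test cases work out: Robert and Rupert both come out as R163, so one would be offered as a correction for the other.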
Of course the article doesn’t go into deeper, harder algorithms for space reasons. For example, Bloom filters are great for compressing the dictionary at the expense of a few false positives and can be used to replace the article’s hash table for the word list. Different dictionaries could be employed that just store root words and then describe possible suffixes, prefixes, and circumfixes that can go with each word (for example, Hunspell). Better phonetic algorithms now exist than Soundex (for example, Metaphone). Nevertheless, I think the article hangs together quite well and I’m pretty pleased with it.
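To give a flavour of the Bloom filter idea, here’s a minimal Python sketch; the bit-array size and hash count are arbitrary choices of mine, with the k hash values carved out of a single SHA-256 digest:

```python
import hashlib

class BloomFilter:
    """Compact set membership with occasional false positives but
    never a false negative: ideal for a spell-checker dictionary."""

    def __init__(self, m_bits: int = 8192, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, word: str):
        # Derive k bit positions from one SHA-256 digest,
        # four bytes per hash (so k must be at most 8 here).
        digest = hashlib.sha256(word.encode()).digest()
        for i in range(self.k):
            chunk = digest[4 * i : 4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, word: str) -> None:
        for p in self._positions(word):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, word: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(word))
```

Every word you add is guaranteed to be found afterwards; the price is that an occasional misspelling will slip through as a false positive, which for a spell-checker is usually an acceptable trade.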
This article first appeared in issue 280, April 2009.
You can download the PDF here.