Okay, I am going to propose something a bit unusual, but coming from C++ I have been using Boost for a long time, and I have come to know the MultiIndex library well.
The idea of this library is to create a single collection but provide many different ways to query it; in effect, it models a database.
So, let's put our words in a table, and put the necessary indexes in place:

word      | length | c0 | c1 | c2 | ... | c26
----------|--------|----|----|----|-----|----
Singapour | 9      | S  | i  | n  | ... | 0

Now the query will look like:

    Select word From table Where length = 9 And c2 = 'n' And c7 = 'u';

Easy enough, isn't it?
For maximum efficiency, the table should be partitioned on the length, and the indexes (one per cX column) should be local to the partition.
For an in-memory solution, you would have one container per word length, each holding as many indexes as there are character positions, with each index being a hash table pointing to a sorted list of words (which makes the merge easier).
Here is a Python description:

    import collections
    import functools

    class Dictionary:
        def __init__(self, length):
            self.length = length
            self.words = set()
            # Maps (position, character) to the set of words having that
            # character at that position.
            self.indexes = collections.defaultdict(set)

        def add(self, word):
            if len(word) != self.length:
                raise ValueError(word + ' is not ' + str(self.length) + ' characters long')
            if word in self.words:
                raise ValueError(word + ' is already in the dictionary')
            self.words.add(word)
            for i in range(self.length):
                self.indexes[(i, word[i])].add(word)

        def search(self, constraints):
            """constraints: list of tuples (position, character)"""
            if not constraints:
                return set(self.words)
            sets = [self.indexes[constraint] for constraint in constraints]
            # Intersect starting from the smallest set, so that the
            # intermediate results stay as small as possible.
            sets.sort(key=len)
            return functools.reduce(set.intersection, sets)

I have deliberately provided the length argument, to minimize the size of the hash tables and thus make the search faster. Also, the sets are sorted by size so that the intersection is computed starting from the smallest one :)
Go ahead and test it against other solutions if you wish :)