Here's a dynamic programming solution (implemented as a memoized recursive function). Given a dictionary of words with their frequencies, it splits the input text at the positions that give the most likely overall phrase, i.e. the split whose word frequencies have the highest product. You'll have to find a real word list, but I've included some made-up frequencies for a simple test.
    WORD_FREQUENCIES = {
        'file': 0.00123,
        'files': 0.00124,
        'save': 0.002,
        'ave': 0.00001,
        'as': 0.00555,
    }
    def split_text(text, word_frequencies, cache):
        # Memoization: reuse the result for a suffix we've already solved.
        if text in cache:
            return cache[text]
        if not text:
            # The empty string contributes a neutral factor to the product.
            return 1, []
        best_freq, best_split = 0, []
        # Try every prefix that is a known word, and recursively split the rest.
        for i in xrange(1, len(text) + 1):
            word, remainder = text[:i], text[i:]
            freq = word_frequencies.get(word, None)
            if freq:
                remainder_freq, remainder_split = split_text(
                    remainder, word_frequencies, cache)
                # Score of the whole split is the product of its word frequencies.
                freq *= remainder_freq
                if freq > best_freq:
                    best_freq = freq
                    best_split = [word] + remainder_split
        cache[text] = (best_freq, best_split)
        return cache[text]
print split_text('filesaveas', WORD_FREQUENCIES, {})
--> (1.3653e-08, ['file', 'save', 'as'])
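
Since you'll need real frequencies for anything beyond this toy example, here's a minimal sketch of one way to build the WORD_FREQUENCIES dictionary from a plain-text corpus. The file name 'corpus.txt' is just a placeholder; substitute whatever word list or corpus you actually have.

    import collections
    import re

    def build_frequencies(corpus_path):
        # Count word occurrences in a plain-text corpus and normalize them
        # into relative frequencies (values that sum to 1.0).
        with open(corpus_path) as f:
            words = re.findall(r"[a-z']+", f.read().lower())
        counts = collections.Counter(words)
        total = float(sum(counts.values()))
        return dict((word, count / total) for word, count in counts.iteritems())

    # 'corpus.txt' is a hypothetical path -- point it at any large English text.
    # word_frequencies = build_frequencies('corpus.txt')
    # print split_text('filesaveas', word_frequencies, {})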