You are facing a problem known in the field of Information Retrieval as Near-Duplicate Detection.
One well-known solution to it is to use the Jaccard similarity to measure how similar two documents are.
Jaccard similarity works as follows: take the set of words from each document, and call these sets `s1` and `s2`. The Jaccard similarity is then `|s1 ∩ s2| / |s1 ∪ s2|`.
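As a quick sketch in Python (the two example sentences here are made up for illustration, not taken from your question):

```python
def jaccard(s1, s2):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(s1 & s2) / len(s1 | s2)

doc1 = "the quick brown fox"
doc2 = "the quick red fox"

s1, s2 = set(doc1.split()), set(doc2.split())
print(jaccard(s1, s2))  # 3 shared words out of 5 distinct -> 0.6
```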
However, when dealing with near duplicates, the order of words usually matters. To account for it, when generating the sets `s1` and `s2` you actually generate sets of k-shingles (sequences of k consecutive words) instead of sets of single words.
In your example, with k=2, the sets will be:

s1 = { I'm write, write a, a crawler, crawler to }
s2 = { I'm write, write a, a some, some text, text crawler, crawler to, to get }
s1 ∪ s2 = { I'm write, write a, a crawler, crawler to, a some, some text, text crawler, to get }
s1 ∩ s2 = { I'm write, write a, crawler to }
In the above, the Jaccard similarity is 3/8. If you use single words with the same approach (k=1 shingles), you will get your desired 5/8, but in my opinion (and that of most IR experts) this is a worse solution.
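Putting the two pieces together, a short Python sketch (the two document strings below are reconstructed from the shingle sets above, so they are an assumption about your original text):

```python
def shingles(text, k):
    """Return the set of k-shingles (k consecutive words) of a text."""
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(s1, s2):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(s1 & s2) / len(s1 | s2)

d1 = "I'm write a crawler to"
d2 = "I'm write a some text crawler to get"

print(jaccard(shingles(d1, 2), shingles(d2, 2)))  # k=2: 3/8 = 0.375
print(jaccard(shingles(d1, 1), shingles(d2, 1)))  # k=1: 5/8 = 0.625
```

Tuples are used as shingles because sets require hashable elements; joining the k words into a single string works just as well.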
This procedure scales nicely: it can handle huge collections very efficiently, without checking all pairs or materializing huge numbers of sets. More details can be found in these lecture notes (I gave this lecture a few months ago, based on the author's notes).