I need to deduplicate a Kafka stream of messages by similarity in a rolling fashion. We can assume that only messages within one day of each other can be duplicates. The current strategy is to compute the cosine similarity of each new message against the previous day's messages in memory, find the most similar one, and mark those as duplicates. However, this has to be single-threaded because of the in-memory state. I presume we need to persist the last day's messages in some sort of storage to make the process distributed, but computing the cosine similarity is no longer viable once the data is not in memory.
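For reference, here is a minimal sketch of the current in-memory rolling-window approach described above. It assumes each message has already been turned into a numeric vector (the question does not say how), and `SIMILARITY_THRESHOLD` is a hypothetical cutoff; the linear scan over the window is what ties the process to a single thread:

```python
import math
import time
from collections import deque

WINDOW_SECONDS = 24 * 60 * 60      # only messages within 1 day can be duplicates
SIMILARITY_THRESHOLD = 0.9         # hypothetical cutoff for "duplicate"

window = deque()                   # (timestamp, vector) pairs, oldest first


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def is_duplicate(vector, now=None):
    """Return True if the vector is similar to any message seen in the last day."""
    now = now or time.time()
    # Evict messages that have fallen out of the 1-day window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    # Linear scan: compare the new message against every retained one.
    duplicate = any(cosine(vector, v) >= SIMILARITY_THRESHOLD for _, v in window)
    window.append((now, vector))
    return duplicate
```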
Is there a good algorithm to find similarity/duplicates with persistent storage?
question from:
https://stackoverflow.com/questions/65910304/deduplication-by-similarity-of-kafka-messages