
python - Removing duplicates from a large image dataset

I am working with a training data set of 127,000 images scraped from the internet.

I know there are quite a few duplicates in there, and I want to remove them to improve the performance of my deep learning model.

I have tried several different ways to do this. Some did not work at all; others removed only a handful of duplicates, or far too many.

The last one I tried was this:

import hashlib
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
%matplotlib inline

def file_hash(filepath):
    # MD5 of the raw file bytes; byte-identical files get identical hashes
    with open(filepath, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()  # was md5(...), which is undefined

os.chdir('/content/train')

file_list = os.listdir('.')  # '.' = current directory; kept so files can be looked up by index
duplicates = []
hash_keys = dict()
for index, filename in enumerate(file_list):
    if os.path.isfile(filename):
        with open(filename, 'rb') as f:
            filehash = hashlib.md5(f.read()).hexdigest()
        if filehash not in hash_keys:
            hash_keys[filehash] = index  # first occurrence of this hash
        else:
            duplicates.append((index, hash_keys[filehash]))  # (duplicate, original)

# Show up to 30 duplicate pairs side by side as a sanity check
for file_indexes in duplicates[:30]:
    try:
        plt.subplot(121), plt.imshow(imread(file_list[file_indexes[1]]))
        plt.title(file_indexes[1]), plt.xticks([]), plt.yticks([])

        plt.subplot(122), plt.imshow(imread(file_list[file_indexes[0]]))
        plt.title(str(file_indexes[0]) + ' duplicate'), plt.xticks([]), plt.yticks([])
        plt.show()
    except OSError:
        continue

# Delete the newer copy of each duplicate pair
for index in duplicates:
    os.remove(file_list[index[0]])
  

This method found 490 duplicates, but I estimate there are at least a couple of thousand. An MD5 hash only matches byte-identical files, so near-duplicates (resized or re-encoded copies of the same image) slip through entirely.
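
For comparison, a perceptual hash would catch those near-duplicates as well. Below is a minimal sketch using the third-party imagehash package; the Hamming-distance threshold of 2 and the directory path are placeholder assumptions to tune, and the linear scan is O(n²), so this is meant to illustrate the idea on a sample rather than serve as a drop-in solution:

import os
import imagehash  # pip install imagehash
from PIL import Image

image_dir = '/content/train'
hashes = {}       # perceptual hash -> first filename seen with it
near_dupes = []   # (duplicate, original) pairs

for filename in os.listdir(image_dir):
    path = os.path.join(image_dir, filename)
    if not os.path.isfile(path):
        continue
    try:
        # phash survives resizing and mild re-compression,
        # unlike an MD5 over the raw bytes
        h = imagehash.phash(Image.open(path))
    except OSError:
        continue  # skip unreadable or non-image files

    # a small Hamming distance between hashes means
    # "visually the same image"
    match = next((seen for seen in hashes if h - seen <= 2), None)
    if match is None:
        hashes[h] = filename
    else:
        near_dupes.append((filename, hashes[match]))

print(len(near_dupes), 'near-duplicates found')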

I have also tried imagededup with different methods and thresholds.

pip install imagededup

from imagededup.methods import DHash
method_object = DHash()
# returns a list of file names considered duplicates,
# keeping one representative per duplicate group
duplicates = method_object.find_duplicates_to_remove(image_dir='/content/train',
                                                     max_distance_threshold=3)

The last run found 23,919 duplicates, and it is usually somewhere between 20k and 35k depending on the method and the threshold. That is too many: training the model after removing all of these produces a worse result.
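
Before deleting anything at a given threshold, it may help to pull out the actual pairs with their distances and eyeball the borderline matches. Here is a sketch of that audit step, assuming imagededup's documented find_duplicates / plot_duplicates API (the threshold of 3 is just the value from the run above):

from imagededup.methods import PHash
from imagededup.utils import plot_duplicates

phasher = PHash()

# scores=True returns {filename: [(duplicate_filename, hamming_distance), ...]}
duplicates = phasher.find_duplicates(image_dir='/content/train',
                                     max_distance_threshold=3,
                                     scores=True)

# Collect matches sitting exactly at the threshold: if these pairs are
# clearly different images, the threshold is too loose.
borderline = [(f, d, s) for f, dups in duplicates.items()
              for d, s in dups if s == 3]
print(len(borderline), 'borderline pairs at distance 3')

# Visually inspect everything flagged as a duplicate of one query image
if borderline:
    plot_duplicates(image_dir='/content/train',
                    duplicate_map=duplicates,
                    filename=borderline[0][0])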

Does anyone know a better way to remove duplicate images?

question from:https://stackoverflow.com/questions/65651787/removing-duplicates-from-a-large-image-dataset


1 Reply

Waiting for answers
