itertools generally works on unique positions within inputs, not on unique values. So when you want to remove duplicate values, you generally have to either post-process the itertools result sequence or "roll your own". Because post-processing can be very inefficient in this case (a sketch of it appears after the examples below, for contrast), roll your own:
def uprod(*seqs):
    def inner(i):
        if i == n:
            yield tuple(result)
            return
        # Only consider values not already used earlier in the partial tuple.
        for elt in sets[i] - seen:
            seen.add(elt)
            result[i] = elt
            for t in inner(i + 1):
                yield t
            seen.remove(elt)  # backtrack

    sets = [set(seq) for seq in seqs]  # dedupe within each input
    n = len(sets)
    seen = set()  # values used so far in the partial tuple
    result = [None] * n
    for t in inner(0):
        yield t
Then, e.g.,
>>> print list(uprod([1, 2, 1], [2, 4, 4], [5, 6, 5]))
[(1, 2, 5), (1, 2, 6), (1, 4, 5), (1, 4, 6), (2, 4, 5), (2, 4, 6)]
>>> print list(uprod([1], [1, 2], [1, 2, 4], [1, 5, 6]))
[(1, 2, 4, 5), (1, 2, 4, 6)]
>>> print list(uprod([1], [1, 2, 4], [1, 5, 6], [1]))
[]
>>> print list(uprod([1, 2], [3, 4]))
[(1, 3), (1, 4), (2, 3), (2, 4)]
This can be much more efficient, since a duplicate value is never even considered (neither within a single input iterable nor across the inputs).
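For contrast, here is a minimal sketch of the post-processing approach mentioned at the top (the name uprod_slow and its exact filtering rule are illustrative assumptions, not part of the answer above): itertools.product generates every positional combination, duplicates included, and the repeated values only get weeded out afterwards.

from itertools import product

def uprod_slow(*seqs):
    # Post-processing sketch: walk the full positional product, then skip
    # tuples containing a repeated value or tuples already emitted.
    emitted = set()
    for t in product(*seqs):
        if len(set(t)) == len(t) and t not in emitted:
            emitted.add(t)
            yield t

On the examples above this yields the same set of tuples as uprod (possibly in a different order), but it has to generate and then discard every rejected combination; for inputs dominated by repeated values (say three copies of [1] * 100), product churns through a million identical tuples while uprod collapses each input to {1} and backs out almost immediately.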