I would say this is the most pythonic way of handling it:
import random

foo = ['a', 'b', 'c', 'd', 'e', 'f']
random.shuffle(foo)
length = len(foo) // 2
result = list(zip(foo[:length], foo[length:]))
zip combines the elements at the same index from multiple iterables and stops when the shortest one runs out of elements. So you shuffle the list, split it into its first and second halves, and zip pairs the halves up element-wise; if the list has an odd length, the leftover element in the second half is simply dropped.
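A minimal sketch of that behaviour, using a hypothetical seven-element list so that one element is left unpaired and dropped:

import random

foo = ['a', 'b', 'c', 'd', 'e', 'f', 'g']   # odd length for illustration
random.shuffle(foo)
length = len(foo) // 2                       # 3
first, second = foo[:length], foo[length:]   # 3 and 4 elements
# zip stops at the shorter half, so exactly 3 pairs come out
print(list(zip(first, second)))              # e.g. [('d', 'b'), ('g', 'a'), ('c', 'e')]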
Edit: you said you were interested in the performance of different ways of handling it. I made a function for each of the unique answers here and timed them:
import numpy as np

def a(foo):
    # zip the shuffled first and second halves into a list of pairs
    random.shuffle(foo)
    length = len(foo) // 2
    return list(zip(foo[:length], foo[length:]))

def b(foo):
    # list comprehension over adjacent indices of the shuffled list
    random.shuffle(foo)
    return [[foo[i], foo[i + 1]] for i in range(0, len(foo) - (len(foo) % 2), 2)]

def c(foo):
    # the numpy answer; expects foo to be a NumPy array, not a list
    np.random.shuffle(foo)
    return foo[:len(foo) - (len(foo) % 2)].reshape(2, -1)

def d(foo):
    # plain for loop pairing adjacent elements (no shuffle)
    result = []
    for i in range(1, len(foo), 2):
        result.append([foo[i - 1], foo[i]])
    return result

def e(foo):
    # the original approach: pop two randomly chosen elements at a time
    el_pairs = []
    while len(foo) > 1:
        pair_idxs = np.sort(np.random.choice(range(len(foo)), 2, replace=False))
        el_pair = [foo.pop(pair_idxs[0]), foo.pop(pair_idxs[1] - 1)]
        el_pairs.append(el_pair)
    return el_pairs

def f(foo):
    # same as a(), but returns the lazy zip object without materializing it
    random.shuffle(foo)
    length = len(foo) // 2
    return zip(foo[:length], foo[length:])
Zip without list:
%timeit f(foo)
3.96 μs ± 12.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Zip:
%timeit a(foo)
4.36 μs ± 156 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
List comprehension:
%timeit b(foo)
4.38 μs ± 22.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For loop:
%timeit d(foo)
812 ns ± 5.68 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Original:
%timeit e(foo)
154 ns ± 1.11 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
The NumPy answer given didn't run out of the box for me.
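For reference, a version of that idea that does run is sketched below; it assumes the list is first copied into a NumPy array and reshaped into rows of two, and c_fixed is just an illustrative name, not from the original answer:

def c_fixed(foo):
    # sketch: shuffle an array copy and reshape it so each row is a pair
    arr = np.array(foo)
    np.random.shuffle(arr)
    arr = arr[:len(arr) - (len(arr) % 2)]  # drop a leftover element if the length is odd
    return arr.reshape(-1, 2)              # one row per pair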