Update:
With current versions of Spark you can use an array of literals:

from pyspark.sql.functions import array, lit

df.where(df.a == array(*[lit(x) for x in ['list', 'of', 'stuff']]))
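For context, here is a minimal, self-contained sketch of that approach (assuming Spark 2.x+ with a SparkSession; the DataFrame just mirrors the quick test from the original answer below):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array, lit

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, ['list', 'of', 'stuff']), (2, ['foo', 'bar'])],
    ['id', 'a'],
)
# Array equality compares lengths and elements position by position
df.where(df.a == array(*[lit(x) for x in ['list', 'of', 'stuff']])).show()

which should print something like:

## +---+-----------------+
## | id|                a|
## +---+-----------------+
## |  1|[list, of, stuff]|
## +---+-----------------+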
Original answer:
Well, a slightly hacky way to do it, one that doesn't require a Python batch job, is something like this:
from pyspark.sql.functions import col, lit, size
from functools import reduce
from operator import and_

def array_equal(c, an_array):
    same_size = size(c) == len(an_array)  # check the lengths match
    # check that every position holds the expected item
    same_items = reduce(
        and_,
        (c.getItem(i) == an_array[i] for i in range(len(an_array)))
    )
    return and_(same_size, same_items)
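To see what reduce builds here: for an_array = ['list', 'of', 'stuff'], the same_items expression expands to the hand-written conjunction below (shown for illustration only):

same_items = (
    (col('a').getItem(0) == 'list')
    & (col('a').getItem(1) == 'of')
    & (col('a').getItem(2) == 'stuff')
)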
Quick test:
df = sc.parallelize([
    (1, ['list', 'of', 'stuff']),
    (2, ['foo', 'bar']),
    (3, ['foobar']),
    (4, ['list', 'of', 'stuff', 'and', 'foo']),
    (5, ['a', 'list', 'of', 'stuff']),
]).toDF(['id', 'a'])

df.where(array_equal(col('a'), ['list', 'of', 'stuff'])).show()
## +---+-----------------+
## | id| a|
## +---+-----------------+
## | 1|[list, of, stuff]|
## +---+-----------------+
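One caveat: reduce raises TypeError on an empty sequence, so array_equal(col('a'), []) would crash as written. A defensive variant (a sketch, passing lit(True) as reduce's initial value so an empty list is handled by the size check alone):

def array_equal_safe(c, an_array):
    same_size = size(c) == len(an_array)
    same_items = reduce(
        and_,
        (c.getItem(i) == an_array[i] for i in range(len(an_array))),
        lit(True),  # initial value; an empty list then compares by size alone
    )
    return and_(same_size, same_items)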