I've managed to get it working from within the Jupyter notebook that is running from the all-spark container.
I start a Python 3 notebook in JupyterHub and overwrite the PYSPARK_SUBMIT_ARGS environment variable as shown below. The Kafka consumer library was downloaded from the Maven repository and placed in my home directory, /home/jovyan:
import os

# Point spark-submit at the Kafka assembly jar; note the trailing "pyspark-shell"
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--jars /home/jovyan/spark-streaming-kafka-assembly_2.10-1.6.1.jar pyspark-shell'
)

import pyspark
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# Create a SparkContext and a StreamingContext with a 1-second batch interval
sc = pyspark.SparkContext()
ssc = StreamingContext(sc, 1)

# Consume the "test1" topic directly from the Kafka broker
broker = "<my_broker_ip>"
directKafkaStream = KafkaUtils.createDirectStream(
    ssc, ["test1"], {"metadata.broker.list": broker})

directKafkaStream.pprint()
ssc.start()
Note: Don't forget the trailing pyspark-shell in the environment variable!
Extension: If you want to pull code from spark-packages instead of pointing at a local jar, you can use the --packages flag; a minimal sketch follows. An example of how to do this in the all-spark-notebook can be found here
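For illustration, a minimal sketch of the --packages variant. The Maven coordinates below (org.apache.spark:spark-streaming-kafka_2.10:1.6.1) are an assumption; match the group:artifact:version to your Spark and Scala versions. The rest of the notebook code stays the same as above.

import os

# Let spark-submit resolve the Kafka dependency from Maven/spark-packages
# instead of a local jar. Coordinates are assumed; adjust to your versions.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 pyspark-shell'
)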