I have a MapReduce job written in Java. It depends on multiple classes. I want to run the MapReduce job on Spark.
What steps should I follow to do this?
Do I need to make changes only to the MapReduce class?
Thanks!
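
For reference, here is the kind of translation I have in mind: a minimal sketch assuming a WordCount-style job, where the mapper's logic becomes `flatMap`/`mapToPair` and the reducer's logic becomes `reduceByKey`. The class name `WordCountOnSpark` and the `args`-based input/output paths are placeholders, not my actual code:

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

// Hypothetical sketch: a WordCount-style MapReduce job rewritten
// against Spark's RDD API.
public class WordCountOnSpark {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCountOnSpark");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Roughly the role of the MapReduce InputFormat: read lines
            JavaRDD<String> lines = sc.textFile(args[0]);

            // Mapper logic: emit (word, 1) for every token in the line
            JavaPairRDD<String, Integer> pairs = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1));

            // Reducer logic: sum the counts per key (the shuffle happens here)
            JavaPairRDD<String, Integer> counts = pairs.reduceByKey(Integer::sum);

            // Roughly the role of the OutputFormat: write the results
            counts.saveAsTextFile(args[1]);
        }
    }
}
```

Is this the right general approach, and would my job's other helper classes carry over unchanged as long as they are on the classpath?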