
apache spark - WARN cluster.YarnScheduler: Initial job has not accepted any resources

Any Spark job that I run fails with the following error message:

17/06/16 11:10:43 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Spark version is 1.6, running on YARN.

I am submitting the jobs from the pyspark shell.

You can see from the job timeline that the job runs indefinitely and that no resources are ever added or removed.



1 Reply


The first point is that if YARN has enough resources available (nodes, CPUs, and memory), it can use dynamic allocation to create Spark executors with appropriate default cores and memory, as sketched below.
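When resources are sufficient, the usual setup is to leave dynamic allocation on together with the external shuffle service. A minimal sketch of that configuration (the app name and the min/max executor values are illustrative, not from the original answer):

from pyspark import SparkConf, SparkContext

# Sketch only: dynamic allocation also requires the external shuffle service
# to be running on each YARN NodeManager; min/max values are illustrative.
conf = (SparkConf().setAppName("dynamic")
        .set("spark.shuffle.service.enabled", "true")
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.dynamicAllocation.minExecutors", "1")
        .set("spark.dynamicAllocation.maxExecutors", "4"))

sc = SparkContext(conf=conf)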

In my case I needed to turn off dynamic allocation because my resource levels were very low.

So from pyspark I set the following values:

from pyspark import SparkConf, SparkContext

conf = (SparkConf().setAppName("simple")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.cores.max", "1")
        .set("spark.executor.instances", "2")
        .set("spark.executor.memory", "200m")
        .set("spark.executor.cores", "1"))

sc = SparkContext(conf=conf)

Note: the values set here should be less than the resources actually available on the cluster. However, values that are too small can lead to out-of-memory errors or slow performance when the job runs.

The complete code gist of a sample job is available here
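The gist is not reproduced here, but a minimal standalone sample along the same lines might look like the sketch below (the job itself is illustrative):

from pyspark import SparkConf, SparkContext

conf = (SparkConf().setAppName("simple")
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.executor.instances", "2")
        .set("spark.executor.memory", "200m")
        .set("spark.executor.cores", "1"))

sc = SparkContext(conf=conf)

# A trivial action to confirm that executors actually register and accept tasks.
rdd = sc.parallelize(range(1000), 2)
print(rdd.map(lambda x: x * x).sum())

sc.stop()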

Another important point for this pyspark case is that Spark on YARN can run in two modes:

  1. cluster mode - the Spark driver runs inside the YARN application master on the cluster.
  2. client mode - the Spark driver runs in the client process where the interactive shell is launched.

Cluster mode is not well suited to using Spark interactively. Spark applications that require user input, such as spark-shell and pyspark, require the Spark driver to run inside the client process that initiates the Spark application.

Client mode can be set via an environment variable, as below:
export PYSPARK_SUBMIT_ARGS='--master yarn --deploy-mode client pyspark-shell'
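If, instead of launching the pyspark shell, you start a plain Python interpreter that has the pyspark package on its path, the same submit arguments can be set from Python before the SparkContext is created; a sketch, assuming pyspark is importable in that interpreter:

import os

# Must be set before the SparkContext is created, since pyspark reads this
# variable when it launches the JVM gateway.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--master yarn --deploy-mode client pyspark-shell"

from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("client-mode"))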

