Sometimes in a Spark job, we face OOM issues like the ones below:
org.apache.spark.shuffle.FetchFailedException: failed to allocate 16777216 byte(s) of direct memory (used: 42748346368, max: 42749067264)
OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 7633633280, max: 7635730432)
ExecutorLostFailure (executor 71 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 62.3 GB of 62 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead
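The last message itself points at one knob (spark.yarn.executor.memoryOverhead). For reference, here is a minimal sketch of how that setting could be raised; the values are purely illustrative, and on YARN it is normally passed at submit time via --conf:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Illustrative values only; size the overhead for your own workload.
// Equivalent submit-time form: spark-submit --conf spark.yarn.executor.memoryOverhead=4g ...
val conf = new SparkConf()
  .set("spark.executor.memory", "50g")              // example executor heap
  .set("spark.yarn.executor.memoryOverhead", "4g")  // example headroom for native/direct memory

val spark = SparkSession.builder().config(conf).getOrCreate()
```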
Posts discussing these errors suggest tweaking the Spark memory configuration so that Spark doesn't run out of direct memory while reading shuffled data. It turns out these are the shuffle configs to tune to prevent this from happening (a sketch of setting them follows the list):
- spark.reducer.maxSizeInFlight
- spark.reducer.maxReqsInFlight
- spark.reducer.maxBlocksInFlightPerAddress
- spark.maxRemoteBlockSizeFetchToMem
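A minimal sketch of where these would be set, assuming a SparkSession-based job; every value below is illustrative rather than a recommendation:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values only; the right numbers depend on cluster size and shuffle volume.
val spark = SparkSession.builder()
  .appName("shuffle-fetch-tuning-sketch")
  // Cap on the total size of in-flight fetch requests per reduce task.
  .config("spark.reducer.maxSizeInFlight", "24m")
  // Cap on the number of concurrent fetch requests per reduce task.
  .config("spark.reducer.maxReqsInFlight", "64")
  // Cap on blocks fetched concurrently from any single executor address.
  .config("spark.reducer.maxBlocksInFlightPerAddress", "128")
  // Remote blocks bigger than this are streamed to disk instead of held in memory.
  .config("spark.maxRemoteBlockSizeFetchToMem", "200m")
  .getOrCreate()
```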
The downside of tuning these parameters badly is increased job latency due to slow shuffles. To find optimal values, I first want to know what the current values of these metrics are during my jobs. (I already know the defaults from the Spark documentation: https://spark.apache.org/docs/latest/configuration.html.)
But is there a way to evaluate these metrics from the Spark UI or logs, i.e. what in-flight size/requests/blocks are actually being used during shuffles? Or at least, which metrics are good indicators for approximately estimating them?
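For context, Spark does not report "in-flight" counters directly, but the per-task shuffle-read metrics it does expose (remote/local blocks fetched, remote bytes read, fetch wait time) seem like the closest starting point for a rough estimate. A minimal sketch of dumping them with a listener, assuming an active SparkSession named `spark`; the Spark UI shows the same numbers aggregated per stage under "Shuffle Read":

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Logs cumulative per-task shuffle-read totals; these are not live in-flight
// values, only a rough indicator of fetch sizes and block counts per task.
class ShuffleReadLogger extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    Option(taskEnd.taskMetrics).map(_.shuffleReadMetrics).foreach { sr =>
      println(
        s"stage=${taskEnd.stageId} task=${taskEnd.taskInfo.taskId} " +
        s"remoteBlocks=${sr.remoteBlocksFetched} localBlocks=${sr.localBlocksFetched} " +
        s"remoteBytes=${sr.remoteBytesRead} remoteBytesToDisk=${sr.remoteBytesReadToDisk} " +
        s"fetchWaitMs=${sr.fetchWaitTime}")
    }
  }
}

// Registration:
// spark.sparkContext.addSparkListener(new ShuffleReadLogger())
```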
question from:
https://stackoverflow.com/questions/65932792/evaluating-blocksinflightperaddress-from-spark-ui