You aren't actually overwriting anything with this code. To see for yourself, try the following.
As soon as you start the pyspark shell, type:
sc.getConf().getAll()
This will show you all of the current config settings. Then run your code and check again: nothing changes.
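If you only care about one particular setting, you can also query it by key (the key here is just an example):
sc.getConf().get('spark.executor.memory')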
What you should do instead is create a new configuration and use it to create a SparkContext. Since only one SparkContext can be active at a time, stop the existing one before starting the new one:
import pyspark

conf = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'),
                                   ('spark.executor.cores', '3'),
                                   ('spark.cores.max', '3'),
                                   ('spark.driver.memory', '8g')])
sc.stop()
sc = pyspark.SparkContext(conf=conf)
You can then verify it the same way as above:
sc.getConf().getAll()
This should now reflect the configuration you wanted. One caveat: spark.driver.memory is read when the driver JVM launches, so setting it from inside an already-running shell may show up in the conf without actually taking effect; pass it via spark-submit or spark-defaults.conf if that matters to you.
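If you're on Spark 2.x, the same idea is usually expressed through SparkSession instead. A minimal sketch, assuming no session is already running (getOrCreate reuses an existing session, so stop the old context first, as above):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config('spark.executor.memory', '8g')
         .config('spark.executor.cores', '3')
         .config('spark.cores.max', '3')
         .config('spark.driver.memory', '8g')
         .getOrCreate())
sc = spark.sparkContext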