I am trying to retrieve data from a Greenplum database and display it using PySpark. This is the code I have implemented:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("spkapp") \
    .master("local[*]") \
    .config("spark.debug.maxToStringFields", "100") \
    .config("spark.sql.broadcastTimeout", "36000") \
    .config("spark.network.timeout", "600s") \
    .config("spark.executor.cores", "1") \
    .getOrCreate()
gscPythonOptions = {
    "url": "jdbc:postgresql://localhost:5432/db_name",
    "user": "my_user",
    "password": "",
    "dbschema": "public"
}
gpdf_swt = spark.read.format("greenplum") \
    .options(**gscPythonOptions, dbtable="products", partitionColumn="id") \
    .load()
gpdf_swt.printSchema()
gpdf_swt.show()
But when I run my Python file with spark-submit, it fails with the error below.
20/12/30 21:23:33 ERROR TaskSetManager: Task 2 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "/home/credit_card/summary_table_creation2Test.py", line 38, in <module>
gpdf_swt.count()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 524, in show
File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o84.show.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 1 times, most recent failure: Lost task 2.0 in stage 0.0 (TID 2, localhost, executor driver): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at io.pivotal.greenplum.spark.jdbc.Jdbc$.getDistributedTransactionId(Jdbc.scala:500)
at io.pivotal.greenplum.spark.externaltable.GreenplumRowIterator.<init>(GreenplumRowIterator.scala:100)
at io.pivotal.greenplum.spark.GreenplumRDD.compute(GreenplumRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is my spark-submit command:
/usr/local/spark/bin/spark-submit --driver-class-path /root/greenplum/greenplum-spark_2.11-1.6.2.jar summary_table_creation
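(Side note: as far as I understand, --driver-class-path only adds the connector jar to the driver's classpath. A variant that also ships it to the executors via --jars would look like the following, although in local[*] mode everything runs in a single JVM anyway:)

/usr/local/spark/bin/spark-submit \
  --jars /root/greenplum/greenplum-spark_2.11-1.6.2.jar \
  --driver-class-path /root/greenplum/greenplum-spark_2.11-1.6.2.jar \
  summary_table_creation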
Any help overcoming this error is appreciated.
Edit:
My Greenplum version is 6.4.0. There is a similar question here, but its solution applies only to Greenplum versions higher than 6.7.1.
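As a temporary workaround I am considering falling back to Spark's built-in JDBC data source, which talks to the Greenplum master over the plain PostgreSQL protocol and so never reaches the connector's getDistributedTransactionId call where the None.get is thrown. A minimal sketch (the bounds and partition count below are placeholder values for my table, and it needs the PostgreSQL JDBC driver jar on the classpath; it also loses the connector's parallel transfer through the segments):

# Fallback: read through the Greenplum master with Spark's generic JDBC source,
# bypassing the greenplum-spark connector entirely.
jdbc_df = spark.read.format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/db_name") \
    .option("driver", "org.postgresql.Driver") \
    .option("dbtable", "public.products") \
    .option("user", "my_user") \
    .option("password", "") \
    .option("partitionColumn", "id") \
    .option("lowerBound", "1") \
    .option("upperBound", "1000000") \
    .option("numPartitions", "4") \
    .load()

jdbc_df.printSchema()
jdbc_df.show()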