Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
94 views
in Technique by (71.8m points)

sql - How to use -SUM() and -COUNT() for excluding reversals in PySpark?

I am trying to convert a SQL query to PySpark code. Wherever possible, I want to move away from SQL strings and keep things in pure PySpark.

The SQL query I am working from is:

SELECT
    INVESTOR_NUMBER, F_NUM, TARGET_NUMBER, DIST_NUMBER,
    ELECTRONIC_TRANSAC,
    -SUM(SHARES_QUANTITY) AS UNITS_INFLOW,
    -SUM(PURCHASE_TRANSACTION_AMOUNT) AS AMOUNT_INFLOW,
    -COUNT(*) AS CNT_INFLOW,
    -COUNT(DISTINCT(TRANSACTION_REFERENCE_NUMBER)) AS CNT_INFLOW_DIST
FROM INDIA__TRANSACTIONS_FACT
WHERE TRANSACTION_CODE IN ('P', 'S')
GROUP BY DIST_NUMBER, ELECTRONIC_TRANSAC

Specifically, how do I express these negated aggregates, which net out reversals, in PySpark?

-SUM(SHARES_QUANTITY) AS UNITS_INFLOW,
-SUM(PURCHASE_TRANSACTION_AMOUNT) AS AMOUNT_INFLOW,
-COUNT(*) AS CNT_INFLOW,
-COUNT(DISTINCT(TRANSACTION_REFERENCE_NUMBER)) AS CNT_INFLOW_DIST

This SQL works fine, but I cannot find the equivalent construct in PySpark.

Any help with the syntactic conversion from SQL to PySpark is appreciated.

Question from: https://stackoverflow.com/questions/65641796/how-to-use-sum-and-count-for-excluding-reversals-in-pyspark


1 Reply

0 votes
by (71.8m points)
Waiting for answers
