Time-based window function in PySpark
My goal is to compute an additional column, keeping the same number of rows as the original DataFrame, that shows each user's mean balance over the last 30 days.
I guess this can be done with a window function, partitioning by user and somehow restricting the frame to the rows between the current row's date and 30 days before it, but I don't know how to implement it in PySpark.
I have the following Spark DataFrame:
userId | date | balance |
---|---|---|
A | 09/06/2020 | 100 |
A | 03/07/2020 | 200 |
A | 05/08/2020 | 600 |
A | 30/08/2020 | 1000 |
A | 15/09/2020 | 500 |
B | 03/01/2020 | 100 |
B | 05/04/2020 | 200 |
B | 29/04/2020 | 600 |
B | 01/05/2020 | 1600 |
My desired output DataFrame would be:
userId | date | balance | mean_last_30days_balance |
---|---|---|---|
A | 09/06/2020 | 100 | 100 |
A | 03/07/2020 | 200 | 150 |
A | 05/08/2020 | 600 | 600 |
A | 30/08/2020 | 1000 | 800 |
A | 15/09/2020 | 500 | 750 |
B | 03/01/2020 | 100 | 100 |
B | 05/04/2020 | 200 | 200 |
B | 29/04/2020 | 600 | 400 |
B | 01/05/2020 | 1600 | 800 |
from datetime import datetime

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import types as T

spark = SparkSession.builder.getOrCreate()  # reuse the active session if one already exists

data = [("A", datetime.strptime("09/06/2020", '%d/%m/%Y'), 100),
        ("A", datetime.strptime("03/07/2020", '%d/%m/%Y'), 200),
        ("A", datetime.strptime("05/08/2020", '%d/%m/%Y'), 600),
        ("A", datetime.strptime("30/08/2020", '%d/%m/%Y'), 1000),
        ("A", datetime.strptime("15/09/2020", '%d/%m/%Y'), 500),
        ("B", datetime.strptime("03/01/2020", '%d/%m/%Y'), 100),
        ("B", datetime.strptime("05/04/2020", '%d/%m/%Y'), 200),
        ("B", datetime.strptime("29/04/2020", '%d/%m/%Y'), 600),
        ("B", datetime.strptime("01/05/2020", '%d/%m/%Y'), 1600)]

schema = T.StructType([T.StructField("userId", T.StringType(), True),
                       T.StructField("date", T.DateType(), True),
                       T.StructField("balance", T.IntegerType(), True)  # numeric, so a mean can be computed
                       ])

sdf_prueba = spark.createDataFrame(data=data, schema=schema)
sdf_prueba.printSchema()
sdf_prueba.orderBy(F.col('userId').asc(), F.col('date').asc()).show(truncate=False)
You can use the RANGE BETWEEN keyword:
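A minimal sketch of what the SQL route could look like, assuming the sdf_prueba DataFrame from the question is registered under a temporary view name chosen here for illustration (balances); the 30-day interval frame is intended to reproduce the expected output above:

# Register the question's DataFrame so it can be queried with SQL;
# the view name "balances" is an arbitrary choice for this example.
sdf_prueba.createOrReplaceTempView("balances")

sdf_sql = spark.sql("""
    SELECT userId,
           date,
           balance,
           AVG(balance) OVER (
               PARTITION BY userId
               ORDER BY CAST(date AS timestamp)
               RANGE BETWEEN INTERVAL 30 DAYS PRECEDING AND CURRENT ROW
           ) AS mean_last_30days_balance
    FROM balances
""")

sdf_sql.orderBy("userId", "date").show(truncate=False)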
If you want to use the pyspark API, you need to convert days to Unix seconds in order to use rangeBetween:
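A sketch of the equivalent DataFrame-API version: the date column is cast to Unix seconds so that rangeBetween can express the 30-day frame; the 86400-seconds-per-day helper and the output column name are choices made here, not part of the original answer:

from pyspark.sql import Window
from pyspark.sql import functions as F

days = lambda n: n * 86400  # rangeBetween operates on the ORDER BY values, here Unix seconds

# Frame: from 30 days before the current row's date up to and including the current row.
w = (Window.partitionBy("userId")
           .orderBy(F.col("date").cast("timestamp").cast("long"))
           .rangeBetween(-days(30), 0))

sdf_result = sdf_prueba.withColumn("mean_last_30days_balance",
                                   F.mean("balance").over(w))

sdf_result.orderBy("userId", "date").show(truncate=False)

Both variants should yield the mean_last_30days_balance values shown in the desired output above.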