Converting a SAS self inner join to PySpark
I have been trying to convert the below SAS code into PySpark syntax, but I haven't been able to figure out the dates.
inner join (select var1, max(date) as max_date
from table
group by var1) as recent
on a.var1 = recent.var1 and a.date = recent.max_date
1 Answer
For self-joins in Spark, it's recommended to use an alias for both sides. If you don't need the duplicated columns, remove them by selecting everything from the first DataFrame.
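
A minimal PySpark sketch of that approach, assuming the original table is loaded as a DataFrame named df with columns var1 and date; the sample data and names are hypothetical, not taken from the question.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data standing in for `table`; ISO date strings
# compare correctly under max().
df = spark.createDataFrame(
    [("a", "2020-01-01"), ("a", "2020-02-01"), ("b", "2020-01-15")],
    ["var1", "date"],
)

# Equivalent of the SAS subquery: the latest date per var1.
recent = df.groupBy("var1").agg(F.max("date").alias("max_date"))

# Alias both sides of the self-join, then select only the left side's
# columns to drop the duplicated var1 (and the helper max_date).
result = (
    df.alias("a")
    .join(
        recent.alias("recent"),
        (F.col("a.var1") == F.col("recent.var1"))
        & (F.col("a.date") == F.col("recent.max_date")),
        "inner",
    )
    .select("a.*")
)
result.show()

The select("a.*") at the end is the deduplication step the answer describes: it keeps every column from the first DataFrame and none from the aggregated side.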