Count unique column values given another column in PySpark

Posted 2025-01-17 16:54:49


I am trying to count the Date entries for each unique ID in PySpark.

+-------------------+----------+
|               Date|        ID|
+-------------------+----------+
|2022-03-19 00:00:00|   Ax3838J|
|2022-03-11 00:00:00|   Ax3838J|
|2021-11-01 00:00:00|   Ax3838J|
|2021-10-27 00:00:00|   Ax3838J|
|2021-10-25 00:00:00|   Bz3838J|
|2021-10-22 00:00:00|   Bz3838J|
|2021-10-18 00:00:00|   Bz3838J|
|2021-10-15 00:00:00|   Rr742uL|
|2021-09-22 00:00:00|   Rr742uL|
+-------------------+----------+

When I tried

df.groupBy('ID').count('Date').show()

I got the error:
_api() takes 1 positional argument but 2 were given
which makes sense, but I am not sure what other techniques exist in PySpark to do this kind of count.
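
From what I can tell, GroupedData.count() takes no column argument, so a column-specific count has to go through agg with pyspark.sql.functions.count, something like:

from pyspark.sql import functions as F

# counts all non-null Date values per ID, duplicates included
df.groupBy('ID').agg(F.count('Date').alias('count')).show()

That counts every row per ID, though, not the unique dates.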

How do I count the unique Date values per ID? The closest I have is:

df.groupBy('ID').count().show()

Expected output:

+-------------------+----------+
|              count|        ID|
+-------------------+----------+
|                  4|   Ax3838J|
|                  3|   Bz3838J|
|                  2|   Rr742uL|
+-------------------+----------+


2 Answers

Answer by 漫雪独思 (2025-01-24 16:54:50)


Please find a working version that produces the expected output. I am running the code on Spark 3.

from pyspark.sql.functions import countDistinct

data = [
    ["2022-03-19 00:00:00", "Ax3838J"], ["2022-03-11 00:00:00", "Ax3838J"],
    ["2021-11-01 00:00:00", "Ax3838J"], ["2021-10-27 00:00:00", "Ax3838J"],
    ["2021-10-25 00:00:00", "Bz3838J"], ["2021-10-22 00:00:00", "Bz3838J"],
    ["2021-10-18 00:00:00", "Bz3838J"], ["2021-10-15 00:00:00", "Rr742uL"],
    ["2021-09-22 00:00:00", "Rr742uL"],
]
df = spark.createDataFrame(data, ['Date', 'ID'])
df.show()
+-------------------+-------+
|               Date|     ID|
+-------------------+-------+
|2022-03-19 00:00:00|Ax3838J|
|2022-03-11 00:00:00|Ax3838J|
|2021-11-01 00:00:00|Ax3838J|
|2021-10-27 00:00:00|Ax3838J|
|2021-10-25 00:00:00|Bz3838J|
|2021-10-22 00:00:00|Bz3838J|
|2021-10-18 00:00:00|Bz3838J|
|2021-10-15 00:00:00|Rr742uL|
|2021-09-22 00:00:00|Rr742uL|
+-------------------+-------+

df.groupby("ID").agg(countDistinct("Date").alias("count")).show()
+-------+-----+
|     ID|count|
+-------+-----+
|Rr742uL|    2|
|Ax3838J|    4|
|Bz3838J|    3|
+-------+-----+

Please let me know if you need any further help, and if this solves your problem, please accept the answer.
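
On Spark 3.2 and later, the same aggregate is also exposed under the snake_case name count_distinct (countDistinct stays available as an alias), and approx_count_distinct can be used when an approximate count is good enough on very large data. A short sketch using the same df as above:

from pyspark.sql import functions as F

# exact distinct count; count_distinct is the Spark 3.2+ snake_case name
df.groupby("ID").agg(F.count_distinct("Date").alias("count")).show()

# approximate distinct count (HyperLogLog-based), cheaper on very large groups
df.groupby("ID").agg(F.approx_count_distinct("Date").alias("count")).show()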

Answer by 芯好空 (2025-01-24 16:54:50)

Try this:

from pyspark.sql import functions as F

df.groupBy('ID').agg(F.countDistinct('Date')).show()
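
If you want the literal count(DISTINCT ...) form, it exists as Spark SQL; a sketch, assuming df is registered under a hypothetical view name events:

df.createOrReplaceTempView("events")  # "events" is an arbitrary name chosen here
spark.sql("SELECT ID, count(DISTINCT Date) AS count FROM events GROUP BY ID").show()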