PostgreSQL: running count of rows for a query "by minute"
I need to query for each minute the total count of rows up to that minute.
The best I could achieve so far doesn't do the trick. It returns count per minute, not the total count up to each minute:
SELECT COUNT(id) AS count
, EXTRACT(hour from "when") AS hour
, EXTRACT(minute from "when") AS minute
FROM mytable
GROUP BY hour, minute
Return only minutes with activity
Shortest

Use date_trunc(); it returns exactly what you need.

Don't include id in the query, since you want to GROUP BY minute slices.

count() is typically used as a plain aggregate function. Appending an OVER clause makes it a window function. Omit PARTITION BY in the window definition - you want a running count over all rows. By default, that counts from the first row to the last peer of the current row as defined by ORDER BY (the manual's default window frame, RANGE UNBOUNDED PRECEDING). And that happens to be exactly what you need.
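The query itself did not survive on this page; here is a minimal sketch of what the notes above describe, assuming the table mytable and timestamp column "when" from the question:

SELECT DISTINCT
       date_trunc('minute', "when") AS minute
     , count(*) OVER (ORDER BY date_trunc('minute', "when")) AS running_ct
FROM   mytable
ORDER  BY 1;  -- positional reference: orders by minute, the 1st SELECT item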
Use count(*) rather than count(id). It better fits your question ("count of rows"). It is generally slightly faster than count(id). And, while we might assume that id is NOT NULL, the question does not specify it, so count(id) is wrong, strictly speaking, because count(id) does not count NULL values.

You can't GROUP BY minute slices at the same query level. Aggregate functions are applied before window functions, so the window function count(*) would only see 1 row per minute that way.

You can, however, SELECT DISTINCT, because DISTINCT is applied after window functions.

ORDER BY 1 is just shorthand for ORDER BY date_trunc('minute', "when") here; 1 is a positional reference to the 1st expression in the SELECT list.

Use to_char() if you need to format the result, for example as sketched below.
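The original formatting example was lost here; this variant assumes the format string 'YYYY-MM-DD HH24:MI', chosen because ISO-style text still sorts chronologically, so ORDER BY 1 remains valid under DISTINCT:

SELECT DISTINCT
       to_char(date_trunc('minute', "when"), 'YYYY-MM-DD HH24:MI') AS minute
     , count(*) OVER (ORDER BY date_trunc('minute', "when")) AS running_ct
FROM   mytable
ORDER  BY 1;  -- the formatted text column; ISO format keeps chronological order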
Fastest

Much like the above, but:
I use a subquery to aggregate and count rows per minute. This way we get 1 row per minute without DISTINCT in the outer SELECT.

Use sum() as window aggregate function now to add up the counts from the subquery.

I found this to be substantially faster with many rows per minute.
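A sketch of this faster variant, again assuming mytable and its "when" column:

SELECT minute
     , sum(minute_ct) OVER (ORDER BY minute) AS running_ct  -- running total
FROM  (
   SELECT date_trunc('minute', "when") AS minute
        , count(*) AS minute_ct                             -- rows per minute
   FROM   mytable
   GROUP  BY 1
   ) sub
ORDER  BY 1;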
Include minutes without activity
Shortest
@GabiMe asked in a comment how to get one row for every minute in the time frame, including those where no event occurred (no row in the base table):

Generate a row for every minute in the time frame between the first and the last event with generate_series() - here directly based on aggregated values from the subquery.

LEFT JOIN to all timestamps truncated to the minute and count. NULL values (where no row exists) do not add to the running count.
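A sketch of this variant (same assumed names; generate_series() produces one timestamp per minute between the first and the last event):

SELECT DISTINCT
       minute
     , count(c."when") OVER (ORDER BY minute) AS running_ct  -- NULL not counted
FROM  (
   SELECT generate_series(date_trunc('minute', min("when"))
                        , max("when")
                        , interval '1 min')
   FROM   mytable
   ) m(minute)
LEFT   JOIN mytable c ON date_trunc('minute', c."when") = m.minute
ORDER  BY 1;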
With CTE:
Again, aggregate and count rows per minute in the first step; this removes the need for a later DISTINCT.

Different from count(), sum() can return NULL. Default to 0 with COALESCE.

With many rows and an index on "when", this version with a subquery was the fastest among a couple of variants I tested with Postgres 9.1 - 9.4.
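A sketch of the CTE variant described above, under the same naming assumptions:

WITH cte AS (
   SELECT date_trunc('minute', "when") AS minute
        , count(*) AS minute_ct                 -- rows per active minute
   FROM   mytable
   GROUP  BY 1
   )
SELECT m.minute
     , COALESCE(sum(c.minute_ct) OVER (ORDER BY m.minute), 0) AS running_ct
       -- sum() can return NULL; default to 0 with COALESCE
FROM  (
   SELECT generate_series(min(minute), max(minute), interval '1 min')
   FROM   cte
   ) m(minute)                                  -- one row per minute, gaps included
LEFT   JOIN cte c USING (minute)
ORDER  BY 1;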