Can T-SQL query plans help choose the best form of a query?
I have a simple data logging table with two data columns A and B, for which I want to generate the sum of A+B. However, either or both of A and B could be NULL. The table also has a timestamp column, which is the primary key.
I can see several ways to skin this cat, but I was curious to know what the preferred method might be to sum the data over a range of timestamps:
1. SUM(Coalesce(A,0)+Coalesce(B,0))
2. SUM(Coalesce(A+B,A,B,0))
3. Coalesce( Sum(A), 0) + Coalesce( SUM(B), 0)
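For concreteness, the three forms can be written out against a sketch of the table (the table name `DataLog`, column types, and the `@start`/`@end` range parameters are assumptions for illustration):

```sql
-- Hypothetical schema; names and types are assumptions.
CREATE TABLE DataLog (
    ts DATETIME NOT NULL PRIMARY KEY,  -- timestamp, clustered primary key
    A  INT NULL,
    B  INT NULL
);

-- Form 1: replace NULLs with 0 before adding, then sum the row-level results.
SELECT SUM(COALESCE(A, 0) + COALESCE(B, 0)) AS Total
FROM DataLog
WHERE ts BETWEEN @start AND @end;

-- Form 2: A + B is NULL if either operand is NULL, so fall back
-- through A, then B, then 0.
SELECT SUM(COALESCE(A + B, A, B, 0)) AS Total
FROM DataLog
WHERE ts BETWEEN @start AND @end;

-- Form 3: SUM already ignores NULL rows, so sum each column and combine;
-- the outer COALESCE only matters if every value in a column is NULL.
SELECT COALESCE(SUM(A), 0) + COALESCE(SUM(B), 0) AS Total
FROM DataLog
WHERE ts BETWEEN @start AND @end;
```

All three produce the same total for the same range, since `SUM` skips NULL inputs; they differ only in where the NULL handling happens (per row versus once per aggregate).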
I get the following results from the execution plan:

Form  Select  Compute Scalar  Stream Aggregate  Compute Scalar  Clustered Index Seek
1     0%      0%              11%               1%              87%
2     0%      0%              11%               1%              87%

Form  Select  Compute Scalar  Compute Scalar  Stream Aggregate  Clustered Index Seek
3     0%      0%              0%              12%               88%
I have never delved into query plans much before, so I am not sure how to interpret the numbers. Would it be fair to say that, in general, the more the percentages are skewed toward the right-hand side of the plan, the better? That would make Form 3 superior to 1 and 2.
Optimize for the expected case.
Here, if both A and B have values most of the time, then #2 should do better, because the COALESCE will hit that first value and stop. If A or B is NULL very often, you might do a little better with #1. The key here, as always, is to try both and profile actual times on actual data.
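One way to profile actual times, as suggested above, is with SQL Server's statistics switches rather than the plan's estimated cost percentages. A minimal sketch, assuming the hypothetical `DataLog` table used in the question:

```sql
-- Report actual elapsed/CPU time and logical reads for each form.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT SUM(COALESCE(A, 0) + COALESCE(B, 0)) AS Total FROM DataLog;       -- Form 1
SELECT SUM(COALESCE(A + B, A, B, 0)) AS Total FROM DataLog;              -- Form 2
SELECT COALESCE(SUM(A), 0) + COALESCE(SUM(B), 0) AS Total FROM DataLog;  -- Form 3

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

The per-operator percentages in the graphical plan are relative estimated costs within a single plan, so comparing them across different queries can mislead; measured times and I/O on real data are the more reliable tie-breaker.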