Strangely, making copies of the parameters inside the stored procedure drastically speeds up the SP in SQL Server 2008
When running a sproc with SqlDataAdapter.Fill(), I noticed it was taking upwards of 90 seconds, while running the same sproc in Management Studio took only 1-2 seconds. I started messing around with the parameters to try to find the issue, and I eventually did, though it's a strange one. I discovered that if I simply declared three new variables in the sproc, directly copied the contents of the parameters into them, and then used those new variables in the body of the sproc, the Fill() method dropped to 1-2 seconds, just like running the sproc directly in Management Studio. In other words, doing this:
CREATE PROCEDURE [dbo].[TestProc]
    @location nvarchar(100), @startTime datetime, @endTime datetime
AS
    DECLARE @location2 nvarchar(100), @endTime2 datetime, @startTime2 datetime
    SET @location2 = @location
    SET @startTime2 = @startTime
    SET @endTime2 = @endTime
    -- ... query using @location2, @startTime2, @endTime2
If I changed even just one of the references in the query body from @startTime2 back to @startTime (the actual parameter passed in from C#), the query jumped right back up to around 90 seconds or even longer.
So... why in the world does SqlDataAdapter or SQL Server care what I do with its parameters once they're passed into the sproc? Why would this affect execution time? Any guidance on how to root out this issue further is greatly appreciated. Thanks!
Edit: Although I could've sworn there was a difference between running the query from C# using SqlDataAdapter and using Management Studio, as of right now, I can't replicate the difference. Now, Management Studio also takes > 90 seconds to run the sproc when I do NOT copy the parameters. This is a huge relief, because it means the problem isn't somehow with C#; it's just a more run-of-the-mill (though still strange) SQL Server issue. One of the guys on my team, who's excellent at SQL, is looking at the execution plan of the sproc when run with and without first copying the parameters. If we figure it out, I'll post the answer here. Thanks for the help so far!
1 Answer
It's undoubtedly a case of parameter sniffing: improper reuse of an execution plan that was created for a different set of parameters with a very different optimal access pattern.
The sudden change in which both styles of access now perform the same (rather than one being quick) strongly suggests that the cached execution plan was updated to a version that performs slowly with both access methods, or that your data or your parameters changed.
In my experience, the usual culprit in this sort of small/huge difference in execution time is the use of a nested loop join where a hash match is actually required. (For a very small number of rows the nested loop is superior; past a certain, fairly low threshold, the hash match becomes less expensive. Unless you're lucky and your inputs are both already sorted on the join criteria, a merge join is rarely chosen, since sorting large sets tends to be more expensive than hash matching.)
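One way to test this theory is to force each join strategy with a query hint and compare timings and plans in Management Studio. A sketch; the table and column names below are hypothetical stand-ins for the real query:

-- Hypothetical shape of the slow query; run once per hint with
-- "Include Actual Execution Plan" enabled and compare operators and costs.
SELECT e.EventId, e.StartTime
FROM dbo.Events e
JOIN dbo.Locations l ON l.LocationId = e.LocationId
WHERE l.Name = @location2
  AND e.StartTime BETWEEN @startTime2 AND @endTime2
OPTION (LOOP JOIN);   -- force nested loops: fast for a handful of rows

-- Then run the same query with:
-- OPTION (HASH JOIN);  -- force hash match: wins past a fairly low row count

If the forced hash-match version runs in 1-2 seconds while the optimizer's own choice takes 90, that confirms the wrong join strategy is being reused from the cached plan.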
The reason your parameter tweaking in the SP fixed the problem is that SQL Server saw you reassigning the parameters into local variables (regardless of the values involved), so it could no longer plan the query around the specific sniffed parameter values. It had to compute a new execution plan, threw out the old one, and designed a new access path based on the current situation, with better results.
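If sniffing is confirmed, two standard alternatives to the copy-the-parameters trick are a per-call recompile or an explicit hint telling the optimizer which values to plan for. A sketch against the TestProc signature from the question (the query body is a hypothetical stand-in):

CREATE PROCEDURE [dbo].[TestProc]
    @location nvarchar(100), @startTime datetime, @endTime datetime
AS
    SELECT *
    FROM dbo.Events                -- hypothetical table
    WHERE Location = @location
      AND StartTime BETWEEN @startTime AND @endTime
    OPTION (RECOMPILE);            -- compile a fresh plan for these exact values

-- Alternatives that keep a single cached plan:
-- OPTION (OPTIMIZE FOR UNKNOWN)                    -- plan for average statistics (SQL Server 2008+)
-- OPTION (OPTIMIZE FOR (@startTime = '20100101'))  -- plan for a representative value

OPTION (RECOMPILE) trades a compile on every call for a plan that always matches the current parameters, which is usually a good deal when the alternative is a 90-second regression.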
If this problem persists, then experimenting with SP recompilation / clearing the plan cache, combined with using different sets of parameters that touch hugely different numbers of rows, may reveal where the problem is. Look at the execution plans used to run the SP with different parameters and see the effects of the different access strategies being employed under the wrong conditions.
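For those recompilation and plan-cache experiments, these are the usual tools. A sketch, assuming the procedure is still named dbo.TestProc:

-- Evict just this procedure's cached plan (much safer than flushing everything):
EXEC sp_recompile N'dbo.TestProc';

-- Or clear the whole plan cache (dev/test only; every query then recompiles):
DBCC FREEPROCCACHE;

-- Inspect the cached plan; its XML records the parameter values it was
-- compiled with (the "sniffed" values) under ParameterCompiledValue.
SELECT ps.execution_count, ps.total_elapsed_time, qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = 'TestProc';

Comparing the compiled-with values in the plan XML against the values of a slow call is often the quickest way to prove the sniffing diagnosis.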