Strange SQL Server performance problem related to updating statistics
I have a complex report built with Reporting Services. The report connects to a SQL Server 2005 database and calls a number of stored procedures and functions. It worked fine initially, but after a few months (as the data grew) it started running into timeout errors.

I created a few indexes to improve performance. The strange thing is that the report worked right after the indexes were created, but threw the same error the next day. I then tried updating the statistics on the database, and it worked again (the query ran about 10 times faster). But once more, it stopped working the next day.

For now, the temporary workaround is to run an update of the statistics every hour. I can't find a reasonable explanation for this behaviour, though: the database is not very busy, and not much data is updated in a single day. How can updating the statistics make so much of a difference?
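For reference, the hourly workaround described above can be scripted as a SQL Agent job step. A minimal sketch, assuming a hypothetical table `dbo.ReportData` (substitute the tables the report actually reads); the database-wide variant uses the built-in `sp_updatestats` procedure:

```sql
-- Refresh statistics for one table with a full scan
-- (dbo.ReportData is a hypothetical name)
UPDATE STATISTICS dbo.ReportData WITH FULLSCAN;

-- Or refresh every out-of-date statistic in the current database
EXEC sp_updatestats;
```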
2 Answers
I suspect you have parameter sniffing. Updating statistics merely forces all query plans to be discarded, so it only appears to work for a while.
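If parameter sniffing is indeed the cause, two common mitigations can be sketched as below (the procedure and table names are hypothetical): force a fresh plan per execution with `OPTION (RECOMPILE)`, or copy the parameter into a local variable so the optimizer estimates from average density instead of the sniffed value.

```sql
-- Mitigation 1: recompile the statement on every execution
CREATE PROCEDURE dbo.GetReportRows      -- hypothetical procedure
    @CustomerId INT
AS
BEGIN
    SELECT *
    FROM dbo.ReportData                 -- hypothetical table
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);
END;
GO

-- Mitigation 2: copy the parameter into a local variable so the
-- plan is built from average density, not the sniffed value
CREATE PROCEDURE dbo.GetReportRows_NoSniff
    @CustomerId INT
AS
BEGIN
    DECLARE @LocalId INT;
    SET @LocalId = @CustomerId;

    SELECT *
    FROM dbo.ReportData
    WHERE CustomerId = @LocalId;
END;
GO
```

Either approach trades some compilation or estimation accuracy for plan stability, which is usually the right trade-off for a report that runs against skewed data.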
I've seen this problem when the indexes on the underlying tables need to be adjusted, or when the SQL itself needs work.

Rebuilding the indexes and updating the statistics read the table into the buffer cache, which improves performance. By the next day the table has been flushed out of the cache and the performance problems return.

SQL Profiler is very useful in these situations for identifying what changes from run to run.
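To check whether index fragmentation (rather than stale statistics) is the real issue, the dynamic management view available in SQL Server 2005 can be queried as sketched below; the table name `dbo.ReportData` is hypothetical:

```sql
-- Inspect fragmentation of all indexes on a hypothetical table
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.ReportData'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id;

-- Rebuild heavily fragmented indexes (this also refreshes
-- their statistics with the equivalent of a full scan)
ALTER INDEX ALL ON dbo.ReportData REBUILD;
```

If the report runs fast only right after a rebuild, comparing Profiler traces of a fast run and a slow run should show whether the plans differ or only the I/O does.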