MS SQL concurrency, excess locks

Posted 2024-07-07 19:44:35


I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are also intensive reports built with Reporting Services 2005 hitting the same database.

When lots of reports are running and people are using the database concurrently, we see blocking processes build up to the point where the system starts timing out any transaction made after a while in that situation.

Is there a global way to minimize blocking so that transactions can continue to flow?


Comments (4)

征棹 2024-07-14 19:44:35


Use optimistic locking if updates do not happen often and the database is mainly used for reporting.

SQL Server has quite a pessimistic locking default.

A look into SQL Server Table Hints might get you started.
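As a rough illustration of those table hints, the following sketch (table and column names are hypothetical) shows two hints that keep a reporting query from waiting on writers' locks in SQL Server 2000; both trade consistency for concurrency.

    -- Hypothetical reporting queries; table and column names are illustrative.
    -- By default SQL Server 2000 takes shared locks while reading, which can
    -- block writers and be blocked by them.

    -- NOLOCK: read without requesting shared locks (dirty reads are possible).
    SELECT OrderID, CustomerID, OrderTotal
    FROM dbo.Orders WITH (NOLOCK)
    WHERE OrderDate >= '20080101'

    -- READPAST: skip rows that are currently locked instead of waiting on them.
    SELECT OrderID, CustomerID, OrderTotal
    FROM dbo.Orders WITH (READPAST)
    WHERE OrderDate >= '20080101'

Dirty or skipped reads may be acceptable for summary reports but not for anything transactional, so hints like these are best confined to the reporting workload.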

莫言歌 2024-07-14 19:44:35


The reports can use WITH (NOLOCK).

Other possibilities are having the reports run off a read-only replica of the database, or running off a data warehouse version of the database that is optimized for the reporting needs.
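If scattering WITH (NOLOCK) through every report query is impractical, a session-level isolation setting does the same thing in one place. A minimal sketch, assuming a hypothetical reporting procedure:

    -- Hypothetical reporting procedure; READ UNCOMMITTED behaves like
    -- WITH (NOLOCK) on every table referenced in the batch.
    CREATE PROCEDURE dbo.rpt_DailySales
    AS
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    SELECT OrderDate, SUM(OrderTotal) AS DailyTotal
    FROM dbo.Orders
    GROUP BY OrderDate
    ORDER BY OrderDate
    GO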

话少情深 2024-07-14 19:44:35


Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.

You need to start looking at sp_lock to see what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
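As a starting point for that investigation, queries like the following against SQL Server 2000's system tables and procedures show who is blocked and what the blocker is running (the spid passed to DBCC INPUTBUFFER is a placeholder; substitute the blocking spid reported by the first query):

    -- Sessions that are currently blocked, and the spid blocking them.
    SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
    FROM master.dbo.sysprocesses
    WHERE blocked <> 0

    -- All outstanding locks (object, type, mode, status) by spid.
    EXEC sp_lock

    -- The last batch submitted by the blocking spid (placeholder spid value).
    DBCC INPUTBUFFER (64)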

This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking

Also, perhaps you could describe your disk subsystem. It may be undersized.

隔纱相望 2024-07-14 19:44:35


Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure that runs every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so that only a few people can access them. The drawbacks of this method are that the data will be up to one hour out of sync, and that we had to create a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.
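For anyone setting up something similar, a minimal sketch of this kind of hourly log shipping on SQL Server 2000 follows (database name and file paths are illustrative). Restoring each log backup WITH STANDBY keeps the copy readable for reports between restores, and each restore has to drop existing connections to the copy, which is the brief disconnect mentioned above.

    -- On the production server, scheduled hourly (e.g. via a SQL Agent job):
    BACKUP LOG SalesDB
    TO DISK = 'D:\LogShip\SalesDB.trn'

    -- On the reporting server, after copying the file across
    -- (existing connections to the standby copy must be closed first):
    RESTORE LOG SalesDB
    FROM DISK = 'D:\LogShip\SalesDB.trn'
    WITH STANDBY = 'D:\LogShip\SalesDB_undo.dat'
    -- WITH STANDBY leaves the database read-only and available to
    -- reporting queries until the next log restore.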
