Suggestions for minimizing locking on an append-only table in MS SQL Server?

Posted 2024-07-24 09:59:27

I'm writing some logging/auditing code that will be running in production (not just when errors are thrown or while developing). After reading Coding Horror's experiences with deadlocking and logging, I decided I should seek advice. (Jeff's solution of "not logging" won't work for me; this is legally mandated security auditing.)

Is there a suitable isolation level for minimizing contention and deadlocking? Are there any query hints I can add to the insert statement or the stored procedure?

I care deeply about transactional integrity for everything except the audit table. The idea is that so much will be logged that if a few entries fail, it's not a problem. But if the logging stops some other transaction, that would be bad.

I can log to a database or a file, although logging to a file is less attractive because I need to be able to display the results somehow. Logging to a file would (almost) guarantee the logging wouldn't interfere with other code though.

4 Answers

暮光沉寂 2024-07-31 09:59:28

If you don't care about consistency on your logging table, why not perform all the logging from a separate thread?

I probably would not wait for transactions to complete before logging, since the log can be pivotal in diagnosing long-running transactions. Also, this lets you see all the work done by a transaction that later rolled back.

Grab the stack trace and all of your logging data, chuck it on a queue as new logging messages arrive, and have the logging thread flush the queued messages to the db in a single transaction.

Steps to minimizing locking:

  • (KEY) perform all appends to the logging table outside of the main thread/connection/transaction.
  • Ensure your logging table has a monotonically increasing clustered index (e.g. an int IDENTITY) that increases each time you append a log message. This ensures the pages being inserted into are usually in memory and avoids the performance hits you get with heap tables. (See the T-SQL sketch after this list.)
  • Perform multiple appends to the log in a single transaction (10 inserts inside one transaction are faster than 10 autocommit inserts, and usually acquire/release fewer locks).
  • Give it a break. Only write to your logging db every N milliseconds, batching up the work that has accumulated in between.
  • If you need to report on stuff historically, you can consider partitioning your logging table. Example: You could create a new logging table every month, and at the same time have a log VIEW that is a UNION ALL of all the older logging tables. Perform the reporting against the most appropriate source.
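
A minimal T-SQL sketch of the clustered-index, batched-append, and UNION ALL view points above; every table, column, and view name here (AuditLog_202407, vAuditLog, EventType, ...) is an assumed illustration, not something from the question:

```sql
-- Hypothetical monthly log table. The IDENTITY clustered primary key gives
-- a monotonically increasing insert point, so new rows always land on the
-- last page (usually in memory) instead of scattering across a heap.
CREATE TABLE dbo.AuditLog_202407 (
    LogId     int IDENTITY(1,1) NOT NULL,
    LoggedAt  datetime       NOT NULL DEFAULT GETUTCDATE(),
    EventType varchar(50)    NOT NULL,
    Detail    nvarchar(2000) NULL,
    CONSTRAINT PK_AuditLog_202407 PRIMARY KEY CLUSTERED (LogId)
);
GO

-- Flush a small batch of queued messages in one transaction: fewer
-- lock acquire/release cycles than one autocommit insert per row.
BEGIN TRANSACTION;
INSERT INTO dbo.AuditLog_202407 (EventType, Detail) VALUES ('LOGIN',  N'user=alice');
INSERT INTO dbo.AuditLog_202407 (EventType, Detail) VALUES ('UPDATE', N'order=42');
INSERT INTO dbo.AuditLog_202407 (EventType, Detail) VALUES ('LOGOUT', N'user=alice');
COMMIT TRANSACTION;
GO

-- Poor man's partitioning: one table per month plus a UNION ALL view
-- (this assumes a June table created the same way). Report against the
-- view, or against a single month's table when that is enough.
CREATE VIEW dbo.vAuditLog AS
    SELECT LogId, LoggedAt, EventType, Detail FROM dbo.AuditLog_202406
    UNION ALL
    SELECT LogId, LoggedAt, EventType, Detail FROM dbo.AuditLog_202407;
GO
```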

You will get better performance by flushing multiple logging messages in a single (smallish) transaction, and you get the advantage that if 10 threads are doing work and logging stuff, only a single thread is flushing to the logging table. This pipelining actually makes things scale better.

软的没边 2024-07-31 09:59:28

Since you don't care about the transactional integrity of the audit table, you can obviously perform the logging outside of the transaction (i.e. after it completes). That will minimise the impact on the transaction.

Also, if you want to minimize locking, you should try to ensure that as much of your query workload as possible is covered by non-clustered indexes. (On SQL Server 2005 and above, the INCLUDE clause on non-clustered indexes can make a big difference; see the example below.)
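
For instance, a covering index might look like the following; the Orders table and its columns are assumed for illustration. A query whose columns all live in the index key or its INCLUDE list is answered entirely from the non-clustered index, so it never takes shared locks on the base table's pages:

```sql
-- Hypothetical reporting query on the main (non-audit) workload:
--   SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerId = @CustomerId;
-- This index covers it: CustomerId is the seek key, and the INCLUDEd
-- columns are stored at the leaf level, so no bookmark lookup is needed.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);
```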

风筝在阴天搁浅。 2024-07-31 09:59:28

One easy way to prevent your logging from having locking issues with your 'regular' database is to not use the same database. Just create another database for your logging (see the sketch below). As a bonus, the rapid growth of your logging database won't cause fragmentation in your main DB. Personally, I usually prefer to log to a file -- but then again, I'm used to doing heavy text manipulation in my editor, VIM. Logging to a separate DB should help avoid deadlocking issues.
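
A sketch of that separation, with assumed names (AuditDb, dbo.AuditLog); the application writes across databases using a three-part name:

```sql
-- Dedicated database for the audit trail (file sizing/growth omitted).
CREATE DATABASE AuditDb;
GO
USE AuditDb;
GO
CREATE TABLE dbo.AuditLog (
    LogId    int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    LoggedAt datetime       NOT NULL DEFAULT GETUTCDATE(),
    Message  nvarchar(2000) NOT NULL
);
GO
-- From the application's own database, log with a three-part name. The
-- audit table's locks and file growth now live entirely in AuditDb.
INSERT INTO AuditDb.dbo.AuditLog (Message)
VALUES (N'security event: ...');
```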

Just make sure that if you write your own database appender for the logging framework you use, you are very careful about your locks (which I'm guessing is what tripped up Jeff in the blog post you reference). Properly written (see several of the comments on Jeff's post), a logging framework shouldn't give you locking issues unless it does something odd.

请别遗忘我 2024-07-31 09:59:27

A normal transaction (i.e. READ COMMITTED) insert already does the 'minimal' locking. Insert-intensive applications will not deadlock on the insert, no matter how the inserts are interleaved with other operations. At worst, an insert-intensive system may cause page latch contention on the hot spot where the inserts occur, but not deadlocks.
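
As a concrete baseline, that safe pattern is just a plain insert under the default isolation level; a minimal sketch with an assumed table name:

```sql
-- READ COMMITTED is the default; set explicitly here only for emphasis.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
    -- Insert-only: the audit table is never read inside the transaction,
    -- so the statement takes just the locks needed to append the row.
    INSERT INTO dbo.AuditLog (EventType, Detail)
    VALUES ('PERMISSION_CHANGE', N'user=alice granted role=auditor');
COMMIT TRANSACTION;
```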

To cause deadlocks as described by Jeff there has to be more at play, like any one of the following:

  • The system is using a higher isolation level (in which case they had it coming and deserved it)
  • They were reading from the log table during the transaction (so it was no longer 'append-only')
  • The deadlock chain involved application-layer locks (i.e. .NET lock statements in the log4net framework), resulting in deadlocks the database cannot detect (i.e. application hangs). Given that solving the problem involved looking at process dumps, I guess this is the scenario they were facing.

So as long as you do insert-only logging in READ COMMITTED isolation level transactions, you are safe. If you expect the same problem I suspect SO had (i.e. deadlocks involving application-layer locks), then no amount of database wizardry can save you, as the problem can still manifest even if you log in a separate transaction or over a separate connection.
