Suggestions for minimizing locking on an append-only table in MS SQL Server?
I'm writing some logging/auditing code that will be running in production (not just when errors are thrown or while developing). After reading Coding Horror's experiences with dead-locking and logging, I decided I should seek advice. (Jeff's solution of "not logging" won't work for me, this is legally mandated security auditing)
Is there a suitable isolation level for minimizing contention and dead-locking? Are there any query hints I can add to the insert statement or the stored procedure?
I care deeply about the transactional integrity of everything except the audit table. The idea is that so much will be logged that if a few entries fail, it's not a problem. If the logging were to stall some other transaction, that would be bad.
I can log to a database or a file, although logging to a file is less attractive because I need to be able to display the results somehow. Logging to a file would (almost) guarantee the logging wouldn't interfere with other code though.
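For reference, here is a minimal sketch of the kind of append-only audit table and insert statement I have in mind (all table and column names are invented for illustration):

    -- Hypothetical append-only audit table; names are illustrative only.
    CREATE TABLE dbo.AuditLog
    (
        AuditId    bigint IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, -- ever-increasing key, so inserts always append at the end
        OccurredAt datetime      NOT NULL DEFAULT (GETUTCDATE()),
        UserName   nvarchar(128) NOT NULL,
        Action     nvarchar(64)  NOT NULL,
        Detail     nvarchar(max) NULL
    );

    -- The insert that must never block or deadlock anything else:
    INSERT INTO dbo.AuditLog (UserName, Action, Detail)
    VALUES (SUSER_SNAME(), N'LOGIN', N'...');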
4 Answers
If you don't care about consistency on your logging table, why not perform all the logging from a separate thread?

I probably would not wait for transactions to complete before logging, since the log can be pivotal in diagnosing long-running transactions. Also, this lets you see all the work done by a transaction that rolled back.

Grab the stack trace and all of your logging data in the logging thread, chuck it onto a queue when there are new logging messages, and flush them to the db in a single transaction.

Steps to minimize locking:

You will get better performance by flushing multiple logging messages in a single (smallish) transaction, with the advantage that if 10 threads are doing work and logging stuff, only a single thread is flushing stuff to the logging table. This pipelining actually makes stuff scale better.
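As a sketch of that flush step (assuming SQL Server 2008+ table-valued parameters and the hypothetical dbo.AuditLog table from the question), the logging thread can drain its in-memory queue into one batch and hand it to a procedure like this:

    -- One-time setup: a table type matching the audit columns (SQL Server 2008+).
    CREATE TYPE dbo.AuditBatch AS TABLE
    (
        OccurredAt datetime      NOT NULL,
        UserName   nvarchar(128) NOT NULL,
        Action     nvarchar(64)  NOT NULL,
        Detail     nvarchar(max) NULL
    );
    GO

    -- Called by the single logging thread with everything queued since the last flush.
    CREATE PROCEDURE dbo.FlushAuditBatch
        @batch dbo.AuditBatch READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        -- One smallish transaction per batch: only this one writer ever
        -- touches dbo.AuditLog, and it holds its locks very briefly.
        BEGIN TRAN;
        INSERT INTO dbo.AuditLog (OccurredAt, UserName, Action, Detail)
        SELECT OccurredAt, UserName, Action, Detail
        FROM @batch;
        COMMIT;
    END;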
Since you don't care about the transactional integrity of the audit table, you can obviously perform logging outside of the transaction (i.e. after it completes). That will minimise impact on the transaction.

Also, if you want to minimize locking, you should try to ensure that as much of your query workload as possible has covering non-clustered indexes. (In SQL Server 2005 and above, the use of the INCLUDE clause in NC indexes can make a big difference.)
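For example, if audit rows are typically read back by user and time range, a covering index along these lines (hypothetical names) lets those reads be served entirely from the non-clustered index, away from the pages the inserts are appending to:

    -- Covering non-clustered index for a common audit query (SQL Server 2005+).
    -- Reads filtered on UserName/OccurredAt are fully "covered" here and never
    -- touch the clustered index that the inserts are writing to.
    CREATE NONCLUSTERED INDEX IX_AuditLog_UserName_OccurredAt
        ON dbo.AuditLog (UserName, OccurredAt)
        INCLUDE (Action);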
One easy way to prevent your logging from having locking issues with your 'regular' database is to not use the same database. Just create another database for your logging. As a bonus, the rapid growth of your logging database won't result in fragmentation in your main DB. Personally, I usually prefer to log to a file, but then again, I'm used to doing heavy text manipulation in my editor, VIM. Logging to a separate DB should help avoid deadlocking issues.

Just make sure that if you try writing your own database appender for the logging framework you use, you are very careful about your locks (which I'm guessing is what tripped up Jeff in the blog post you reference). Properly written (see several of the comments in Jeff's post), you shouldn't have locking issues with your logging framework unless it does something odd.
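A sketch of the separate-database setup (names invented; it assumes the same audit table is created inside the new database). Since the question says lost entries are acceptable, the simple recovery model also keeps log-side overhead down:

    -- A dedicated database so audit locks, growth and fragmentation
    -- never touch the main application database.
    CREATE DATABASE AuditDb;
    ALTER DATABASE AuditDb SET RECOVERY SIMPLE; -- audit rows are expendable per the question
    GO

    -- Cross-database insert from the main application's connection:
    INSERT INTO AuditDb.dbo.AuditLog (OccurredAt, UserName, Action, Detail)
    VALUES (GETUTCDATE(), SUSER_SNAME(), N'UPDATE', N'...');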
A normal transaction (i.e. READ COMMITTED) insert already does the 'minimal' locking. Insert-intensive applications will not deadlock on the insert, no matter how the inserts are interleaved with other operations. At worst, an intensive insert system may cause page latch contention on the hot spot where the inserts occur, but not deadlocks.
To cause deadlocks as described by Jeff, there has to be more at play, such as the deadlock chain involving application-layer locks (i.e. lock statements in the log4net framework), resulting in undetectable deadlocks (i.e. application hangs). Given that solving the problem involved looking at process dumps, I guess this is the scenario they were having.

So as long as you do insert-only logging in READ COMMITTED isolation level transactions, you are safe. If you expect the same problem I suspect SO had (i.e. deadlocks involving application-layer locks), then no amount of database wizardry can save you, as the problem can still manifest even if you log in a separate transaction or on a separate connection.
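To make the safe case concrete, a minimal sketch (reusing the hypothetical dbo.AuditLog table from the question): the transaction only ever inserts into the log and never reads it back, so there is no lock ordering that can form a cycle with other sessions:

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- the default, shown for emphasis
    BEGIN TRAN;
    -- Insert-only: no reads from dbo.AuditLog inside the transaction.
    INSERT INTO dbo.AuditLog (OccurredAt, UserName, Action, Detail)
    VALUES (GETUTCDATE(), SUSER_SNAME(), N'DELETE', N'...');
    COMMIT;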