High volume site: ADO.NET TransactionScope vs ExecuteCommand with NOLOCK for reading uncommitted data directly?
Just read this interesting article by Omar on his blog, Linq to SQL solve Transaction deadlock and Query timeout problem using uncommitted reads, at the end of which Javed Hasan started arguing with him about his solution to the NOLOCK situation on a high volume site.
The problem we're trying to solve is this: in SQL terms, we need to use SELECT statements with NOLOCK or use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; otherwise, at high volume, rows in the DB will be locked and cause errors. The technology Omar used is Linq2Sql, so the question is: how do we achieve this in our C# data access code so the above does not happen?
Basically, in the post Omar arrives at his solution by working on and testing a real-world site with tools like SqlProfiler, whereas Javed Hasan arrives at his from MSDN documentation, Scott Hanselman's blog posts, etc.
Omar suggests using the following:
using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    var user = db.aspnet_Users.First();
    var pages = user.Pages.ToList();
}
whereas Javed Hasan suggests:
using (new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
    // Your db operation
}
I'm very interested to know what you guys out there do about this particular issue on a high volume site like StackOverflow, or what Jeff and his team did in this regard?
Edit: After reading the first post, I want to point out a few things from Omar's post.
- He did run into a connection problem with his approach, but he solved it; see his post.
- More importantly, he mentioned he tried the ADO.NET Transaction approach, and even tried what Scott Hanselman wrote on his blog, but it did not work for a high volume site; it degrades performance quite a bit. Omar said: "System.Transactions has significant overhead. I have never been able to use it on a high volume website without making CPU go 100% and Req/sec come down to 1/10th. It's made for enterprise applications, not for high volume websites."
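Omar's post doesn't show his exact fix for the connection problem, but one way to address it (an assumption on my part, not his actual code) is to reset the isolation level before the connection goes back to the pool, so later requests don't inherit READ UNCOMMITTED:

```csharp
// Hypothetical sketch: reset the isolation level in a finally block so the
// pooled connection is clean for the next request. DropthingsDataContext2
// is the context from Omar's post; the reset is my assumption, not his fix.
using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    try
    {
        var user = db.aspnet_Users.First();
        var pages = user.Pages.ToList();
    }
    finally
    {
        // Put the connection back to the server default before pooling.
        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ COMMITTED;");
    }
}
```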
I'm a developer on a tools team in the SQL Server group at Microsoft. Many applications are not super-sensitive to transaction consistency, especially if you're writing an app that does reporting, or something where occasionally inconsistent data is not the end of the world. Of course, if you're writing a financial application or something else with very low tolerance for data inconsistency, you probably want to explore other solutions.
If you do choose to use uncommitted reads, I have blogged a handy solution using extension methods in C#.
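The blogged code isn't reproduced here, but an extension-method approach along those lines might look like the following sketch (the method name and shape are my assumptions, not the actual blogged solution):

```csharp
// Hedged sketch: wrap a query delegate in a ReadUncommitted TransactionScope.
// The name ReadUncommitted and the Func<T> shape are illustrative assumptions.
using System;
using System.Transactions;

public static class UncommittedReadExtensions
{
    public static T ReadUncommitted<T>(this Func<T> query)
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadUncommitted
        };
        using (var scope = new TransactionScope(
            TransactionScopeOption.Required, options))
        {
            T result = query();
            scope.Complete(); // complete the scope even for a read-only query
            return result;
        }
    }
}
```

Usage would be something like `Func<List<Page>> q = () => db.Pages.ToList(); var pages = q.ReadUncommitted();`.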
{My (poor) reputation prevents me from posting comments so I put this as an answer}
If you use IsolationLevel via System.Transactions and create a new Linq context within the transaction block, SQL Server ends up trying to call DTC to coordinate the transaction. That just happened to me and was quite unexpected.
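A minimal sketch of the pattern that can trigger this escalation (context and table names are assumptions for illustration): each new connection opened inside the ambient transaction enlists in it, and once a second connection enlists, the transaction can be promoted to MSDTC, particularly with SQL Server 2005.

```csharp
// Illustrative sketch of DTC promotion inside a TransactionScope.
// MyDataContext, Orders, and Customers are hypothetical names.
using System.Transactions;

using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
    using (var db1 = new MyDataContext())
    {
        var order = db1.Orders.First(); // first connection: lightweight local tx
    }

    using (var db2 = new MyDataContext())
    {
        // Second connection enlisting in the same scope can promote the
        // transaction to the Distributed Transaction Coordinator (MSDTC).
        var customer = db2.Customers.First();
    }

    scope.Complete();
}
```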
Regarding transactions in .Net and the (somehow surprising) side-effect of the DTC, this document Introducing System.Transactions in the .NET Framework 2.0 by Juval Lowy explains things very well and is still fully valid (.Net4). Worth reading.
(I would also have posted a comment... if I could.)
First of all, please avoid uncommitted reads; they can cause lots of issues. A much better approach is just to set the database to snapshot isolation. This is what Jeff did.
Jeff basically said: "bla bla bla, be real, bla bla bla, database theoreticians, bla bla bla, READ UNCOMMITTED can be useful for REAL production apps that don't need data consistency." Jeff is not a DBA; fortunately there are many DBAs here on SO.
The problem with Omar's approach is that it can leak connections with the "read uncommitted" isolation level into your connection pool, which could wreak havoc in your website, meaning random statements may be executed under read uncommitted.
Javed's approach would be much better, because on dispose the framework has a chance to clean things up on the connection.
EDIT: If you are having performance issues with Javed's approach, you could look at rolling your own transaction manager.
Some things you probably want to do:
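A minimal sketch of what such a hand-rolled manager might look like (all names here are illustrative assumptions, not from the answer): own one SqlConnection, set the isolation level explicitly, and reset it on dispose so the pooled connection is left clean.

```csharp
// Hypothetical minimal "transaction manager" that avoids System.Transactions
// overhead: explicit isolation level, explicit reset before pooling.
using System;
using System.Data.SqlClient;

public sealed class UncommittedReadScope : IDisposable
{
    public SqlConnection Connection { get; }

    public UncommittedReadScope(string connectionString)
    {
        Connection = new SqlConnection(connectionString);
        Connection.Open();
        Execute("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    }

    public void Dispose()
    {
        // Reset the level before the connection returns to the pool,
        // addressing the leak described above.
        Execute("SET TRANSACTION ISOLATION LEVEL READ COMMITTED;");
        Connection.Dispose();
    }

    private void Execute(string sql)
    {
        using (var cmd = new SqlCommand(sql, Connection))
        {
            cmd.ExecuteNonQuery();
        }
    }
}
```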