LINQ to SQL performance tuning best practices for SQL Server 2008?
My question is: what are the best practices for optimizing performance when using LINQ to SQL? By performance I mean response time in the user interface.

Right now I have some sales data in a SQL Server 2008 database, and I display this data (MAT, yearly, in different segments, growth per segment, percent of market growth, etc.) in charts in an ASP.NET application that uses LINQ to SQL to construct IQueryable expressions, which are then executed.

The challenge I see is that I have a database and use LINQ to construct all the queries, so I have no control over what SQL is generated (I can trace it, but...). I also don't use stored procedures, so the way my data is fetched is like a black box.

Right now I run some unit tests, test the application manually, and use the Database Engine Tuning Advisor to decide what indexes etc. to create.
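One way to open the "black box" a little without switching to stored procedures is to make LINQ to SQL show you the SQL it generates. A minimal sketch (the connection string, table, and entity names here are hypothetical) using the real `DataContext.Log` and `DataContext.GetCommand` members:

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical entity mapped to a sales table.
[Table(Name = "Sales")]
class SalesRow
{
    [Column] public int Year;
    [Column] public decimal Amount;
}

class SqlInspectionDemo
{
    static void Main()
    {
        // Hypothetical connection string.
        using (var db = new DataContext("Server=.;Database=Sales;Integrated Security=true"))
        {
            // Option 1: log every generated statement as it executes.
            db.Log = Console.Out;

            var query = db.GetTable<SalesRow>().Where(s => s.Year == 2008);

            // Option 2: inspect the generated SQL without executing the query.
            Console.WriteLine(db.GetCommand(query).CommandText);
        }
    }
}
```

The text that `GetCommand(...).CommandText` prints is also convenient input for the Database Engine Tuning Advisor, since you can feed it the exact statements your application generates.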
In addition to that, I'll usually use both SQL Profiler and the CLR profiler with some simulated users on a large-ish data set, and watch for long-running queries and/or long-running calls through the DataContext (which may signify multiple round-trips happening under the covers). My personal preference is also to disable deferred loading and object tracking on all my DataContexts by default, so I have to opt in to multiple round-trips in most cases. While you can't directly affect the SQL that's generated, you can be careful with LoadWith/AssociateWith, make sure that you're not fetching horribly large/inefficient result sets, and break up queries that have lots of expensive joins (sometimes multiple round-trips are cheaper than mondo joins on big tables).

It's all about measurement: use whatever tools you can get your hands on.
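The opt-in pattern described above could be sketched like this (the `Order`/`OrderLine` entities are hypothetical; `DeferredLoadingEnabled`, `ObjectTrackingEnabled`, and `DataLoadOptions.LoadWith` are real LINQ to SQL members):

```csharp
using System;
using System.Data.Linq;

// Hypothetical entities with a one-to-many association.
class OrderLine { public int Quantity; }
class Order { public EntitySet<OrderLine> Lines = new EntitySet<OrderLine>(); }

static class ContextFactory
{
    // Creates a read-only DataContext where lazy round-trips are off by
    // default and related rows must be fetched explicitly.
    public static DataContext CreateReadOnlyContext(string connectionString)
    {
        var db = new DataContext(connectionString);
        db.ObjectTrackingEnabled = false;   // read-only: skip change tracking
        db.DeferredLoadingEnabled = false;  // no silent lazy-load round-trips

        var loadOptions = new DataLoadOptions();
        // Opt in: fetch Order.Lines together with the orders instead of
        // paying one extra round-trip per order.
        loadOptions.LoadWith<Order>(o => o.Lines);
        db.LoadOptions = loadOptions;       // must be set before the first query
        return db;
    }
}
```

With this setup, any navigation property you did not ask for via `LoadWith` simply comes back empty, which makes accidental N+1 query patterns show up immediately in testing instead of as slow pages in production.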
Profiling, profiling, profiling. :)

Measure not only timings, but pay attention to I/O as well. A frequently executed, I/O-intensive query can execute fast thanks to caching, yet still have a negative effect on overall database server performance, since fewer resources are left available for other queries.

As you say, L2S can be a bit of a black box, so you need to try to replicate all scenarios and/or profile while the app is in use by real users. Then use that to 1) tweak queries, 2) add indexes, and 3) make any other changes needed to get the performance you need.
I have a profiling tool made specifically for LINQ to SQL to make it a little bit "less black box" - it allows you to do runtime profiling while tying each generated query to the code (call stack) that caused it to be executed. You can download it and get a free trial license at http://www.huagati.com/L2SProfiler/
The background reason for my profiler is outlined in a bit more detail here:
http://huagati.blogspot.com/2009/06/profiling-linq-to-sql-applications.html
...and some advanced profiling options are covered here:
http://huagati.blogspot.com/2009/08/walkthrough-of-newest-filters-and.html
Another thing that may help if you have a lot of tables with a lot of columns is to get index info into the code editor. This is done by adding xml doc-comments with that info to the entity classes and member properties; that info is then displayed in the VS code editor's tooltips:
...that way you can see, already while typing queries, whether there is an index covering the column(s) used in WHERE clauses etc. To avoid having to type all of that in, I have created a tool for that too; see the "update documentation" feature at http://www.huagati.com/dbmltools/
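A hand-written sketch of what those doc-comments might look like (the table, column, and index names here are made up for illustration; the tool above generates the real ones from the database schema):

```csharp
/// <summary>
/// Maps to table dbo.Sales.
/// Indexes: PK_Sales (SalesId), IX_Sales_Year_Segment (Year, Segment).
/// </summary>
public partial class Sales
{
    /// <summary>
    /// Column: Year (int, not null).
    /// Covered by index IX_Sales_Year_Segment - cheap to filter on.
    /// </summary>
    public int Year { get; set; }

    /// <summary>
    /// Column: CustomerName (nvarchar(200), null).
    /// No index covers this column - a WHERE clause on it will scan.
    /// </summary>
    public string CustomerName { get; set; }
}
```

Because Visual Studio surfaces `<summary>` text in IntelliSense tooltips, hovering over `Year` while writing a query immediately tells you whether an index backs the filter you are about to write.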