SQL Server: 12% index scan density and 50% fragmentation. How bad is "bad"?


How much fragmentation is bad? How low a scan density is too low?

I have a table with the following index density and fragmentation levels:

Name                           Scan Density  Logical Fragmentation
=============================  ============  =====================
IX_TransactionEntries_Tran...  12.834        48.392
IX_TransactionEntries_Curr...  15.419        41.239
IX_TransactionEntries_Tran...  12.875        48.372
TransactionEntries17           98.081         0.0049325
TransactionEntries5            12.960        48.180
PK_TransactionEntries          12.869        48.376
TransactionEntries18           12.886        48.480
IX_TranasctionEntries_CDR...   12.799        49.157
IX_TransactionEntries_CDR...   12.969        48.103
IX_TransactionEntries_Tra...   13.181        47.127

You can see that I just defragmented TransactionEntries17, which is why its Scan Density is so high and its fragmentation is so low.

But is 12% scan density horribly low? Is 48% fragmentation horribly high?

I'm getting performance issues when deleting rows (which requires index scanning). Is the index fragmentation a GIANT FLASHING RED ALARM on a 70,000-page index, or is it a possible but unlikely cause?
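
For reference, per-index Scan Density and Logical Fragmentation figures like the table above are what DBCC SHOWCONTIG reports. A minimal sketch of reproducing them, assuming the table is named dbo.TransactionEntries (the schema-qualified name is an assumption):

    -- Report Scan Density and Logical Fragmentation for every index
    -- on the table (table name assumed)
    DBCC SHOWCONTIG ('dbo.TransactionEntries') WITH ALL_INDEXES;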


From SQL Server BOL:

Scan Density [Best Count: Actual Count]

Is a percentage. It is the ratio of Best Count to Actual Count. This value is 100 if everything is contiguous; if this value is less than 100, some fragmentation exists.

Best Count is the ideal number of extent changes if everything is contiguously linked. Actual Count is the actual number of extent changes.

Logical Fragmentation

Percentage of out-of-order pages returned from scanning the leaf pages of an index. This number is not relevant to heaps. An out-of-order page is a page for which the next physical page allocated to the index is not the page pointed to by the next-page pointer in the current leaf page.
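
To make the Scan Density definition concrete: an extent is eight 8 KB pages, so a perfectly contiguous 70,000-page index would be scanned with a Best Count of roughly 70,000 / 8 = 8,750 extent changes. A Scan Density of 12.8% therefore implies an Actual Count of about 8,750 / 0.128 ≈ 68,000 extent switches, i.e., the scan hops to a different extent for nearly every page it reads.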

But there's no guidance about what level of fragmentation is too high, and should be decreased. Nor is there any guidance about what scan density is too low, and should be increased.
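
Worth noting: DBCC SHOWCONTIG was deprecated in SQL Server 2005 in favor of the sys.dm_db_index_physical_stats DMV, whose avg_fragmentation_in_percent column corresponds to the Logical Fragmentation figure above. A sketch, again assuming the table name:

    -- Modern equivalent of the DBCC SHOWCONTIG fragmentation figure
    SELECT i.name,
           ps.avg_fragmentation_in_percent,  -- ~ Logical Fragmentation
           ps.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.TransactionEntries'),
             NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id  = ps.index_id;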


执妄 2024-12-15 01:09:02


Like anything else in SQL, it depends.

It will depend on things like contention (which increases wait times, since there are more data pages to "contend" for), how wide your indexes are, and so on.

From my personal experience, I did some testing of the impact of fragmentation on some builds and loads that I run.

For an aggregate-intensive procedure, I did one run against a table that was 40% fragmented and another against the "same" table with 0% fragmentation.

The 40% fragmented table took 1,200% longer to run the same basic operations.

Your mileage may vary, but it makes a big difference.

Paul Randal, arguably responsible for most of DBCC in SQL Server 2005+, thinks it's a pretty huge deal too.

In short, the only way to know for sure is to do testing. BOL doesn't give guidelines on ranges for density and fragmentation because there are too many variables to make a "general" assessment.

It's like asking "Is my car too damaged?" The answer depends on "too damaged for what?", how far you are driving, how fast, on what kind of roads, what weather, what time of year, and a million other things.
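
If testing does point at fragmentation, rebuilding the indexes is how you would produce the 0%-fragmentation comparison copy for a test like the one above, and reorganizing is the lighter-weight in-place alternative. A minimal sketch using the question's table and index names (the dbo schema is an assumption):

    -- Rebuild every index on the table, driving Logical Fragmentation to ~0%
    ALTER INDEX ALL ON dbo.TransactionEntries REBUILD;

    -- Or reorganize a single index in place (always online, lighter-weight)
    ALTER INDEX TransactionEntries17 ON dbo.TransactionEntries REORGANIZE;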
