Deleting 240 rows in SQL Server takes 11 seconds

Published 2024-12-07 23:41:12


I am running a delete statement:

DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID

The actual execution plan of the delete is:

Execution Tree
--------------
Clustered Index Delete(
   OBJECT:([GrobManagementSystemLive].[dbo].[TransactionEntries].[IX_TransactionEntries_SessionGUIDTransactionGUID]), 
   WHERE:([TransactionEntries].[SessionGUID]=[@SessionGUID])
)

The table is clustered by SessionGUID, so the 240 rows are physically together.

The table has no triggers on it.

The operation takes:

  • Duration: 11821 ms
  • CPU: 297
  • Reads: 14340
  • Writes: 1707

The table contains 11 indexes:

  • 1 clustered index (SessionGUID)
  • 1 unique (primary key) index
  • 9 other non-unique, non-clustered indexes

How can I figure out why this delete operation performs 14,340 reads and takes 11 seconds?

  • the Avg. Disk Read Queue Length reaches 0.8
  • the Avg. Disk sec/Read never exceeds 4ms
  • the Avg. Disk Write Queue Length reaches 0.04
  • the Avg. Disk sec/Write never exceeds 4ms

What are the other reads for? The execution plan gives no indication of what it's reading.
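
One way to see where the reads land (a general measurement technique, not something shown in the trace above) is to wrap the statement in SET STATISTICS IO and SET STATISTICS TIME: reads are reported per table touched, with index reads counted against their owning table, so any reads against other tables (foreign-key checks, for example) show up as separate lines. A minimal sketch, with a placeholder GUID and a rollback so the data is left untouched:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @SessionGUID uniqueidentifier;
SET @SessionGUID = '00000000-0000-0000-0000-000000000000';  -- placeholder value

BEGIN TRANSACTION;

-- The reads reported for this statement include every index on TransactionEntries.
DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID;

ROLLBACK TRANSACTION;  -- measurement only; leave the rows in place

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;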


Update:

EXECUTE sp_spaceused TransactionEntries

TransactionEntries  
  Rows      6,696,199
  Data:     1,626,496 KB (249 bytes per row)
  Indexes:  7,303,848 KB (1117 bytes per row)
  Unused:      91,648 KB    
            ============
  Reserved: 9,021,992 KB (1380 bytes per row)

With 1,380 bytes per row and 240 rows, that's roughly 330 kB to be deleted.

It's counterintuitive that deleting 330 kB can be this difficult.
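
For what it's worth, the 7.3 GB of index space can be broken down per index. On SQL Server 2005 or later, a catalog query along these lines (a sketch, not output from the system above) shows how much of it each of the 11 indexes occupies:

SELECT i.name                        AS index_name,
       i.type_desc,
       SUM(ps.used_page_count) * 8   AS used_kb,    -- pages are 8 KB each
       SUM(ps.row_count)             AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.TransactionEntries')
GROUP BY i.name, i.type_desc
ORDER BY used_kb DESC;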

Update Two: Fragmentation

Name                           Scan Density  Logical Fragmentation
=============================  ============  =====================
IX_TransactionEntries_Tran...  12.834        48.392
IX_TransactionEntries_Curr...  15.419        41.239
IX_TransactionEntries_Tran...  12.875        48.372
TransactionEntries17           98.081         0.0049325
TransactionEntries5            12.960        48.180
PK_TransactionEntries          12.869        48.376
TransactionEntries18           12.886        48.480
IX_TranasctionEntries_CDR...   12.799        49.157
IX_TransactionEntries_CDR...   12.969        48.103
IX_TransactionEntries_Tra...   13.181        47.127
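
For reference, figures like these can be gathered for every index at once; a sketch assuming the classic DBCC SHOWCONTIG syntax, with the newer DMV equivalent alongside:

-- Classic syntax (SQL Server 2000/2005): one result row per index.
DBCC SHOWCONTIG ('TransactionEntries') WITH ALL_INDEXES, TABLERESULTS;

-- SQL Server 2005+ equivalent via the physical-stats DMV.
SELECT i.name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TransactionEntries'),
                                    NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id  = s.index_id;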

I defragmented TransactionEntries17 first, since DBCC INDEXDEFRAG is an "online operation" (it only holds IS (Intent Shared) locks):

DBCC INDEXDEFRAG (0, 'TransactionEntries', 'TransactionEntries17')

I was going to manually defragment the others next, but then business operations called to say the system was dead - and they switched to doing everything on paper.

What say you: would 50% fragmentation and only 12% scan density cause horrible index scan performance?


3 Answers

初心未许 2024-12-14 23:41:12


As @JoeStefanelli points out in comments, it's the extra non-clustered indexes.

You are deleting 240 rows from the table.

This equates to 2,640 index rows (240 rows × 11 indexes), 240 of which include all the fields in the table.

Depending on how wide they are and how many included fields you have, this could equate to all the extra read activity you are seeing.

The non-clustered index rows will definitely NOT be grouped together on disk, which will increase delays.
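
To gauge "how wide they are and how many included fields you have", a catalog query along these lines (assuming SQL Server 2005 or later; only the table name comes from the question) lists the key and included columns of every non-clustered index:

SELECT i.name                 AS index_name,
       c.name                 AS column_name,
       ic.key_ordinal,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id
   AND ic.index_id  = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
   AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.TransactionEntries')
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY i.name, ic.is_included_column, ic.key_ordinal;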

怎会甘心 2024-12-14 23:41:12


I think the indexing might be the likeliest culprit, but I wanted to throw out another possibility. You mentioned no triggers, but are there any tables that have a foreign key relationship to this table? They would have to be checked to make sure none of their records reference the rows being deleted, and if you have cascade delete turned on, those records would have to be deleted as well.
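
A quick way to check that possibility is to list every foreign key that references the table and whether it cascades; the sketch below uses the SQL Server 2005+ catalog views (on 2000, EXEC sp_fkeys @pktable_name = 'TransactionEntries' gives similar information). If a referencing column has no index of its own, verifying the deleted rows can mean scanning the child table, which would also explain extra reads.

SELECT fk.name                           AS foreign_key,
       OBJECT_NAME(fk.parent_object_id)  AS referencing_table,
       fk.delete_referential_action_desc AS on_delete
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.TransactionEntries');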

旧情别恋 2024-12-14 23:41:12


Having banged my head against many a SQL performance issue, my standard operating procedure for something like this is to:

  1. Back up the data
  2. Drop one of the indexes on the table in question
  3. Measure the operation
  4. Restore the DB
  5. Repeat from #2 until #3 shows a drastic change. That index is likely your culprit.
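
A sketch of one iteration of that loop, run against a restored test copy (the index name is just one of the names from the question's fragmentation list, and the GUID is a placeholder):

-- Step 2: drop one index (SQL Server 2005+ syntax; on 2000 use DROP INDEX TransactionEntries.TransactionEntries5).
DROP INDEX TransactionEntries5 ON dbo.TransactionEntries;

-- Step 3: measure the delete.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @SessionGUID uniqueidentifier;
SET @SessionGUID = '00000000-0000-0000-0000-000000000000';  -- placeholder value

DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;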