SQL Server: large transactions and occasional command timeouts

Posted on 2024-11-05 01:24:05

I have a batch process that imports large amounts of data. I do this by reading in a large text file, parsing it, and then executing inserts, updates and deletes as dictated by the data. These are simple statements executed as stored procedures. The batch of stored procedure calls is wrapped inside a transaction to be sure the file is completely processed before moving on. The batch import is done once a week. If a client gets behind, there could be several large transactions in a row. When this happens, I occasionally get command timeouts in transactions after the first one. I have increased the command timeout to 120 seconds. That works fine so far, but on a slow computer it might not. I have noticed that the timeout often occurs on an:

Update <table> set <columns> where <pk = some value>

I thought that perhaps SQL is still updating indexes in the background. Any ideas on what is going on?

I realize that I could use something like SqlBulkCopy, but that is not an option right now.

Thanks,
Scott

Comments (3)

戒ㄋ 2024-11-12 01:24:05

Does your sproc go row-by-row or does it do set-based INSERTS, UPDATES and so forth? The answer to this question will become important a few paragraphs down.

But first, the problem with expanding the timeout is the n+1 problem: no matter how long you make it, there will be a case that still goes over. So expanding the timeout is not a permanent sleep-at-night-like-a-baby solution. Much better is to eliminate the need for the timeout by breaking up the job.

The first thing you do is eliminate that wrapping transaction. The resources required to maintain locks explode as the row count in the operation goes up, so batch operations like this often go faster if you break them up into smaller steps that require smaller transactions.

The next step, since you no longer have that wrapping transaction, is to make sure that each individual step can be safely re-run in the event of a failed job, no matter where the process was when it failed. This is called "idempotent" if you want to be fancy, or "re-runnable" if you want to use plain English.

Now we return to the question: is your sproc going row-by-row, or is it executing INSERTS that affect many rows, then UPDATES, and so on?

CASE ROW-BY-ROW: Easiest, though probably the slowest. Gobble the text file into an "INBOX" table and add a column "Processed" which is Y/N. As you go row by row, you do your INSERT, UPDATE or DELETE, then update the row in the inbox table as Processed=Y. If you pull the plug at any stage, the sproc simply resumes looking at unprocessed rows until there are none left. This gives you the same effect as the great big wrapping transaction without the overhead. You can run dozens of files in a row and the server will never time out.
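A minimal sketch of that pattern in T-SQL (the Inbox table, Payload column, and usp_ApplyRow procedure are illustrative names, not from the original post):

-- Staging table holding the raw file rows, plus a processed flag.
CREATE TABLE dbo.Inbox (
    InboxId   int IDENTITY(1,1) PRIMARY KEY,
    Payload   nvarchar(max) NOT NULL,       -- raw line from the text file
    Processed char(1) NOT NULL DEFAULT 'N'  -- flipped to 'Y' once applied
);

-- Re-runnable driver: each row is applied and flagged inside its own
-- small transaction, so pulling the plug mid-run loses nothing.
DECLARE @id int, @payload nvarchar(max);

WHILE 1 = 1
BEGIN
    SELECT TOP (1) @id = InboxId, @payload = Payload
    FROM dbo.Inbox
    WHERE Processed = 'N'
    ORDER BY InboxId;

    IF @@ROWCOUNT = 0 BREAK;  -- no unprocessed rows left

    BEGIN TRANSACTION;
        EXEC dbo.usp_ApplyRow @payload;  -- your INSERT/UPDATE/DELETE sproc
        UPDATE dbo.Inbox SET Processed = 'Y' WHERE InboxId = @id;
    COMMIT TRANSACTION;
END;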

CASE SET-BASED: If you are doing set-based DML, then you modify the INSERT so it pulls from the INBOX table and INSERTS into the target table all rows not already there. This makes it re-runnable. DELETE statements don't need this check: if you re-run a set-based DELETE and it has already run, it simply finds nothing to delete. UPDATE is basically the same as DELETE in this respect.
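As a sketch, the re-runnable set-based INSERT could look like this (table and column names are illustrative; the NOT EXISTS check against the primary key is what makes it safe to run twice):

-- Insert only the inbox rows whose keys are not already in the target,
-- so re-running after a failure cannot create duplicates.
INSERT INTO dbo.Target (Pk, Col1, Col2)
SELECT i.Pk, i.Col1, i.Col2
FROM dbo.Inbox AS i
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.Target AS t
    WHERE t.Pk = i.Pk
);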

This is general advice based on what you stated about your issue. To get more specific I would need to know more about the process.

江城子 2024-11-12 01:24:05

I would look at the fragmentation of your indexes and rebuild them if necessary.

Also, I would run your queries in SSMS with Include Actual Execution Plan turned on and see if you have any issues.
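A common way to check fragmentation (a sketch using the standard DMVs; the 30 percent rebuild threshold is a widely used rule of thumb, not something from this answer):

-- Report fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild a badly fragmented index (index and table names are illustrative):
ALTER INDEX IX_MyTable_Pk ON dbo.MyTable REBUILD;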

不奢求什么 2024-11-12 01:24:05

You need to first investigate what is causing the timeout. As with any SQL Server performance investigation, the best approach is the Waits and Queues methodology. Basically, you need to look at the wait_type, wait_time and wait_resource in sys.dm_exec_requests for the session that is doing the UPDATE. This will show what is blocking or slowing the UPDATE, and based on that finding you can take the appropriate action. If I had to venture a shot in the dark, I'd say you are hitting log auto-growth events in your testing. Nonetheless, the proper solution is to measure first and cut second: investigate to identify the root cause of the timeout, and only then proceed with changes.
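As a starting point, that looks something like this (run from a second connection while the UPDATE is executing; the session_id value is a placeholder for the SPID doing the UPDATE):

-- What is the UPDATE's session currently waiting on?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,            -- e.g. LCK_M_X, WRITELOG, PAGEIOLATCH_EX
       r.wait_time,            -- milliseconds spent in the current wait
       r.wait_resource,        -- the specific lock/page being waited on
       r.blocking_session_id   -- nonzero if another session is blocking
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;      -- replace with the session running the UPDATE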
