Do transaction blocks degrade SQL Server performance?


My coworker and I are arguing about the effect of non-vital BEGIN TRAN....COMMIT TRAN blocks.
I've written about 140 stored procedures for simple insert-update-delete operations, and since we may later need to do some extra operations in them, I've already included the might-be-necessary BEGIN TRAN and COMMIT TRAN blocks, like so:

CREATE PROCEDURE [Users].[Login_Insert]
    @Username    nvarchar(50) OUTPUT,
    @Password    char(40),
    @FullName    nvarchar(150),
    @LoginTypeId int
AS
SET NOCOUNT ON;

BEGIN TRY
    BEGIN TRAN

    INSERT [Users].[Login]
    (
        [Username],
        [Password],
        [FullName],
        [LoginTypeId]
    )
    VALUES
    (
        @Username,
        @Password,
        @FullName,
        @LoginTypeId
    )

    COMMIT TRAN
    RETURN 1
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    RETURN -1
END CATCH

GO

Now, many of these transactions may never be necessary. Are these extraneous blocks going to affect performance in any noticeable way?
Thanks in advance.


3 Answers

雪花飘飘的天空 2024-11-22 00:22:57


Not enough to notice.

That is, each TXN will be open for an extra OhNoSecond between BEGIN TRAN and INSERT. I'd be impressed if anyone could measure it.

However, if you did BEGIN TRAN and then prompted for user input, your legs would need breaking...

Good idea though: I do this so that all my write procs are 100% consistent, have the same error handling, can be nested, etc.

Edit: After Remus' answer, I see I didn't link to my nested TXN template: Nested stored procedures containing TRY CATCH ROLLBACK pattern? It differs from Remus' in that it always rolls back and has no SAVEPOINTs.
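
To make that concrete, the sketch below shows a nestable TRY/CATCH shell in that spirit (my illustration, not the linked template verbatim). The procedure name and the work it wraps are placeholders: only the outermost caller opens and commits the transaction, and any error triggers a full rollback with no savepoints.

CREATE PROCEDURE dbo.SomeWriteProc       -- placeholder name
AS
SET XACT_ABORT, NOCOUNT ON;              -- XACT_ABORT dooms and rolls back on most errors

DECLARE @starttrancount int;

BEGIN TRY
    SET @starttrancount = @@TRANCOUNT;

    IF @starttrancount = 0
        BEGIN TRANSACTION;               -- only the outermost call opens a transaction

    -- ... the actual INSERT/UPDATE/DELETE work goes here ...

    IF @starttrancount = 0
        COMMIT TRANSACTION;              -- and only the outermost call commits
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 AND @starttrancount = 0
        ROLLBACK TRANSACTION;            -- always a full rollback, no savepoints

    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR (@msg, 16, 1);             -- re-signal the error to the caller
END CATCH
GO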

Edit: a quick and dirty test shows the version with the transaction is quicker around 2/3 of the time.

-- Assumes an existing one-row table: smalltable(smalltableid int, smalltablename varchar)
SET NOCOUNT ON
SET STATISTICS IO OFF

DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT

SET @noTran = 0
SET @withTran = 0

-- @t and @value are left over from the original test and are not used below
DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)

DECLARE 
  @count INT,
  @value INT

SET @count = 1

WHILE @count < 100
BEGIN

  -- Timed update without an explicit transaction (autocommit)
  SET @date = GETDATE()   -- note: GETDATE() is only precise to ~3 ms
  UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
  SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())

  -- The same update wrapped in an explicit transaction
  SET @date = GETDATE()
  BEGIN TRAN
  UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
  COMMIT TRAN
  SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())

  SET @count = @count + 1
END

SELECT 
  @noTran / 1000000. AS Seconds_NoTransaction, 
  @withTran / 1000000. AS Seconds_WithTransaction

Seconds_NoTransaction    Seconds_WithTransaction
2.63200000               2.70400000
2.16700000               2.12300000

Reversing the order of the two updates keeps the same behaviour.

话少心凉 2024-11-22 00:22:57


In the code you posted there will be no measurable effect, but transactions do have an effect on performance: they can dramatically improve performance thanks to log-flush commit grouping, or they can dramatically reduce it through incorrectly managed contention. But the bottom line is that when transactions are needed for correctness, you cannot skip them. That being said, your template is actually quite bad vis-a-vis transactions and try-catch blocks. A transaction in a catch block must be handled with a tri-state logic check on the XACT_STATE return value (-1, 0, 1) so that doomed transactions are dealt with properly. See Exception handling and nested transactions for an example.
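
To illustrate the commit-grouping point, here is a rough sketch of my own (dbo.FlushDemo is an invented scratch table, and absolute timings depend entirely on your log disk and durability settings): each autocommitted INSERT waits for its own log flush, while a single enclosing transaction lets the inserts share flushes and pays the hardening cost once at COMMIT.

CREATE TABLE dbo.FlushDemo (Id int NOT NULL, Payload char(100) NOT NULL);

DECLARE @i int = 1;
DECLARE @t0 datetime2 = SYSDATETIME();

-- 1) Autocommit: every INSERT is its own transaction, one log flush each
WHILE @i <= 10000
BEGIN
    INSERT dbo.FlushDemo (Id, Payload) VALUES (@i, 'x');
    SET @i += 1;
END;
SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS ms_autocommit;

SET @i = 1;
SET @t0 = SYSDATETIME();

-- 2) One explicit transaction: log flushes are grouped, COMMIT pays once
BEGIN TRAN;
WHILE @i <= 10000
BEGIN
    INSERT dbo.FlushDemo (Id, Payload) VALUES (@i, 'x');
    SET @i += 1;
END;
COMMIT TRAN;
SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS ms_one_transaction;

DROP TABLE dbo.FlushDemo;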

Also, you should never, ever mix try-catch error handling with return-code error handling. Pick one and stick with it, preferably try-catch. In other words, your stored procedure should raise the error (RAISERROR or THROW), not return -1. Mixing exceptions with error codes makes your code a nightmare to maintain and to call correctly.
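
For concreteness, here is a minimal sketch of the tri-state XACT_STATE() check applied to the question's procedure (my illustration, not Remus's exact template; the procedure name is made up):

CREATE PROCEDURE [Users].[Login_Insert_Example]   -- illustrative name
    @Username    nvarchar(50),
    @Password    char(40),
    @FullName    nvarchar(150),
    @LoginTypeId int
AS
SET NOCOUNT ON;
BEGIN TRY
    BEGIN TRAN;

    INSERT [Users].[Login] ([Username], [Password], [FullName], [LoginTypeId])
    VALUES (@Username, @Password, @FullName, @LoginTypeId);

    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Tri-state check on XACT_STATE():
    IF XACT_STATE() = -1
        ROLLBACK TRAN;    -- doomed: only a full rollback is legal
    ELSE IF XACT_STATE() = 1
        ROLLBACK TRAN;    -- still committable: undo this proc's work
    -- XACT_STATE() = 0: no transaction is open, nothing to roll back

    THROW;                -- re-raise (SQL Server 2012+) instead of RETURN -1
END CATCH
GO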

你列表最软的妹 2024-11-22 00:22:57


TL;DR - A stored procedure containing two SELECT queries dealing with 33 million records took me 45 seconds to execute without a transaction and 48 seconds with one.

Disclaimer: I spent about 4 hours writing a stored procedure and came across a somewhat measurable answer to this question (note: it's not that significant!). Gaps in the query logic are intentionally omitted due to the sensitivity of the data I was working with.

Methodology: This procedure was developed using two queries - one doing most of the heavy lifting, and the other calculating one additional field on its own, so it doesn't try to calculate that field more often than it needs to. I've broken it down into two steps:

1) I wrote 2 common table expressions and 1 SQL SELECT into a temporary table, then queried it again. I had to do this because the requirements asked me to implement a couple of scalar-valued functions that would otherwise have attempted to run against over 33 million records instead of 355.

2) I attached a scalar-valued function AFTER the first query so it didn't try to look through the 30 million records (it made a huge difference, if you care).

Query: For readability, I've cut out a large part of the query (the CASE statement).

CREATE PROC GET_PAYMENT_SUMS_BY_CLIENT
AS

--Required for reporting in a specific Business Intelligence tool later; optional
SET FMTONLY OFF; 

BEGIN TRANSACTION

--Query 1
--This CTE checks over 30 million records
WITH CTE1 AS (
    SELECT CASE VARIABLE
    --170 case conditions go here
    END AS TheType, 
    Amount,
    PK1
    FROM TABLE1
),

--This CTE pivots the sums to get the data into the shape I want
CTE2 AS (
    SELECT PK1, [PIVOT1], [PIVOT2], [PIVOT3]
    FROM (SELECT * FROM CTE1) AS BaseTable --Alias was just to get it to execute
    PIVOT (
        SUM(Amount)
        FOR TheType IN ([PIVOT1], [PIVOT2], [PIVOT3])
    ) AS PivotTable
)

SELECT TABLE2.NAME, CTE2.* INTO #TEMPORARY_TABLE 
FROM CTE2 
JOIN TABLE2 ON CTE2.PK1 = TABLE2.PK2

--Query 2
--Written to force the function to look at 355 records instead of 33 million
SELECT *, dbo.SCALAR_VALUED_FUNCTION(PK2) FROM #TEMPORARY_TABLE

COMMIT TRANSACTION

Findings:
With transaction - With the transaction logic in place, the resulting query takes 48 seconds to process more than 33 million records through a CASE statement containing 170 lines, pivot the data by sum, place the data into a temporary table, and apply a scalar-valued function AFTER the first query has run.

Without transaction - With the BEGIN TRANSACTION / COMMIT TRANSACTION lines commented out, all of the aforementioned steps are accomplished in 45 seconds. That is roughly 7% faster than with the transaction block (3/45 ≈ 0.067).

Conclusion:
While my efforts cannot tell you whether the same query over 10 records would yield the same proportional difference, they can tell you that it begins to matter more as you move to larger data sets and/or more complicated queries.

I hope this information serves a purpose for someone out there!
