TSQL Snapshot Isolation
Using SQL2k5, I have a staging table that contains columns that will populate numerous other tables. For instance, a statement like this:
INSERT INTO [appTable1] ([colA], [colB])
SELECT [appTable1_colA], [appTable1_colB]
FROM [stageTable]
A trigger on [appTable1] will then populate the identity column values of the newly inserted rows back into [stageTable]; for this example, we'll say it's [stageTable].[appTable1_ID], which is then inserted into other tables as a FK. More similar statements follow, like:
INSERT INTO [appTable2] ([colA], [colB], [colC], [appTable1_FK])
SELECT [appTable2_colA], [appTable2_colB], [appTable2_colC], [appTable1_ID]
FROM [stageTable]
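For context, the write-back trigger mentioned above works along these lines (a simplified sketch, not the actual trigger; the [ID] identity column name and the join columns are placeholders):

-- Hypothetical AFTER INSERT trigger on [appTable1]: copies the generated
-- identity values back onto the matching staged rows. The join columns are
-- assumed; the staging table needs a reliable natural key for this to match
-- rows unambiguously.
CREATE TRIGGER [trg_appTable1_WriteBackID]
ON [appTable1]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE s
    SET s.[appTable1_ID] = i.[ID]
    FROM [stageTable] AS s
    INNER JOIN inserted AS i
        ON i.[colA] = s.[appTable1_colA]
       AND i.[colB] = s.[appTable1_colB];
END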
This process continues through numerous tables like this. As you can see, I'm not including a WHERE clause on the SELECTs from the staging table, as the table gets truncated at the end of the process. However, this leaves the possibility of another process adding records to the staging table in the middle of this transaction, and those records would not contain the FKs previously populated. Would I want to issue the following statement to prevent this?
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
If this is the best solution, what are the downsides of doing it this way?
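One note on the SNAPSHOT option above: it has to be enabled at the database level before a session can actually use it (the SET statement itself succeeds, but the transaction fails as soon as it touches a database where snapshot isolation is not allowed). A minimal sketch, with [myDatabase] as a placeholder name:

-- One-time, database-level setting required before any session can run
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT against this database.
ALTER DATABASE [myDatabase] SET ALLOW_SNAPSHOT_ISOLATION ON;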
Comments (2)
Can you add a batch id to your staging table, so that you can use it in where clauses to ensure that you are only working on the original batch of records? Any process that adds records to the staging table would have to use a new, unique batch id. This would be more efficient (and more robust) than depending on snapshot isolation, I think.
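A minimal sketch of that idea, assuming a new [BatchID] column on [stageTable] and a @batchID variable generated once per load (both names are hypothetical):

-- Each load generates its own id; the process that populates [stageTable]
-- is assumed to stamp this value into [BatchID] on every row it stages.
DECLARE @batchID UNIQUEIDENTIFIER;
SET @batchID = NEWID();

-- Every statement in the process then filters on the batch instead of
-- reading the whole staging table.
INSERT INTO [appTable1] ([colA], [colB])
SELECT [appTable1_colA], [appTable1_colB]
FROM [stageTable]
WHERE [BatchID] = @batchID;

-- Cleanup becomes a targeted DELETE rather than TRUNCATE TABLE, so rows
-- added by a concurrent load are left alone.
DELETE FROM [stageTable] WHERE [BatchID] = @batchID;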
All isolation levels, including snapshot, affect only reads. SELECTs from stageTable will not see uncommitted inserts, nor will they block. I'm not sure that solves your problem of throwing everything into the stageTable without any regard for ownership. What happens when the transaction finally commits and the stageTable is left with all the intermediate results, ready to be read by the next transaction? Perhaps you should use a temporary #stageTable that will ensure a natural isolation between concurrent threads.
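A rough illustration of the temp-table suggestion (the column names and types below are assumptions taken from the question's examples): a local temporary table is private to the connection that created it, so a concurrent load running the same code gets its own copy and no filtering or isolation-level change is needed.

-- #stageTable is visible only to this session (and to triggers/procedures
-- it invokes), so another process loading at the same time cannot see these rows.
CREATE TABLE #stageTable
(
    [appTable1_colA] VARCHAR(50),
    [appTable1_colB] VARCHAR(50),
    [appTable1_ID]   INT NULL    -- written back after the insert into [appTable1]
    -- ...remaining staging columns...
);

-- The rest of the process reads from #stageTable instead of [stageTable].
INSERT INTO [appTable1] ([colA], [colB])
SELECT [appTable1_colA], [appTable1_colB]
FROM #stageTable;

One caveat: the write-back trigger on [appTable1] would also have to reference #stageTable; that should work because the trigger runs in the session that fired it, but it is a dependency worth keeping in mind.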
To understand the cost of using snapshot isolation, read Row Versioning Resource Usage.
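As a rough way to watch that cost while the load runs, one illustrative query (not from the linked article) against the row-version store in tempdb:

-- Approximate number and size of row versions currently held in tempdb by
-- versioning-based isolation (snapshot / read committed snapshot), SQL 2005+.
SELECT COUNT(*) AS version_records,
       SUM(CAST(record_length_first_part_in_bytes AS BIGINT)
           + record_length_second_part_in_bytes) / 1024 AS approx_kb
FROM sys.dm_tran_version_store;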