Alternatives to a delete trigger for tracking deleted records

Posted 2024-10-31 03:23:33

I've got a task to create a large data replication service that will provision our data warehouse. The source databases reside on other servers.

So far, I've been able to implement the updates and inserts into the warehouse using a combination of the SqlBulkCopy class and the T-SQL EXCEPT statement.
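Something like the following is the idea behind the EXCEPT diff (the database, table, and column names here are just placeholders):

-- Find rows present in the source but missing or different in the
-- warehouse; these become the insert/update candidates that get
-- bulk-copied over with SqlBulkCopy (all names are placeholders)
SELECT CustomerID, Name, ModifiedDate
FROM SourceDb.dbo.Customers
EXCEPT
SELECT CustomerID, Name, ModifiedDate
FROM WarehouseDb.dbo.Customers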

My problem now is handling record deletion. The only thing I can think of for handling deletions is to create a custom trigger on the table that inserts deleted records into a temp table my service can read, and then remove them from the warehouse.

I know there are a lot of data replication tools out there, but the company wants a custom in-house service.

Note: some of the tables that will be provisioned have over 100 million records.

Any suggestions?

Comments (5)

魄砕の薆 2024-11-07 03:23:33

I don't think a Delete trigger would be all that bad. 100 million records is definitely a good chunk, but you're running the trigger on the server, in a way that SQL can optimize the execution path.

If you do something else client side, you're going to incur overhead for fetching the records from the server and then issuing delete commands to the warehouse.

What is it about a trigger that is bothering you?
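For what it's worth, a minimal sketch of the kind of delete trigger in question, assuming placeholder object names:

-- Staging table the replication service can poll (names are placeholders)
CREATE TABLE dbo.DeletedCustomers (
    CustomerID int NOT NULL,
    DeletedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
)
GO

CREATE TRIGGER trg_Customers_Delete
ON dbo.Customers
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON

    -- The "deleted" pseudo-table holds every row removed by the
    -- triggering statement, so this stays set-based even when a
    -- single DELETE removes millions of rows
    INSERT INTO dbo.DeletedCustomers (CustomerID)
    SELECT CustomerID
    FROM deleted
END

Because it runs inside the deleting transaction, the staging insert is as set-based as the delete itself.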

把梦留给海 2024-11-07 03:23:33

From what I'm reading, you're trying to re-invent replication (http://msdn.microsoft.com/en-us/library/ms151198.aspx). Does that about sum it up? If so, my suggestion would be not to.

ι不睡觉的鱼゛ 2024-11-07 03:23:33

SQL Server has built-in change tracking (at least in 2008 R2, I'm not sure when exactly this feature was introduced). Read more here: http://msdn.microsoft.com/en-us/library/cc280462.aspx
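A minimal sketch of how that looks, assuming placeholder database and table names:

-- Enable change tracking on the database, then per table
-- (requires a primary key on the table; names are placeholders)
ALTER DATABASE SourceDb
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON)

ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING

-- The sync service persists the version it last copied from and asks
-- for everything since; SYS_CHANGE_OPERATION = 'D' marks deletions
DECLARE @last_sync bigint = 0  -- stored by the service between runs

SELECT ct.CustomerID, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync) AS ct
WHERE ct.SYS_CHANGE_OPERATION = 'D'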

烟织青萝梦 2024-11-07 03:23:33

I might not understand entirely what you are doing, but in SQL Server you can capture the rows that are deleted with the OUTPUT clause. Perhaps this is something you can use.

-- Table to delete from
declare @T table (id int, name varchar(50))

-- Table to capture the deleted rows
declare @DeletedRows table (id int, name varchar(50))

-- Dummy data
insert into @T values
(1, 'Name1'),
(2, 'Name2'),
(3, 'Name3'),
(4, 'Name4'),
(5, 'Name5')

-- Delete every other row
delete from @T
output deleted.id, deleted.name into @DeletedRows
where id % 2 = 0

select *
from @DeletedRows

Result - Deleted rows

id          name
----------- --------------------------------------------------
2           Name2
4           Name4

吃不饱 2024-11-07 03:23:33

I've had good experience using triggers to insert record IDs into staging tables. We then created a set of SSIS packages and SQL Agent jobs that poll the staging tables and take appropriate action based on their contents. This made it possible to implement custom logic. Since we were moving data between an OLTP system and a warehouse, it made a lot of sense - records don't always line up row for row.
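A rough sketch of the consuming side, assuming the trigger stages IDs into a table such as dbo.DeletedCustomers and that the warehouse is reachable as a linked server (all object names here are placeholders):

-- Drain a batch of staged IDs and apply the deletes to the warehouse
BEGIN TRANSACTION

DECLARE @batch TABLE (CustomerID int NOT NULL)

-- Claim and remove a batch from staging in one atomic statement
DELETE TOP (10000) FROM dbo.DeletedCustomers
OUTPUT deleted.CustomerID INTO @batch

-- Mirror the deletions in the warehouse copy
DELETE w
FROM WarehouseServer.WarehouseDb.dbo.Customers AS w
INNER JOIN @batch AS b ON b.CustomerID = w.CustomerID

COMMIT TRANSACTION

Working in bounded batches keeps the transactions short if the staging table ever backs up.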
