Is transaction log shipping affected by a long-running Redgate script?

Posted on 2024-09-18 08:23:26


I have a long-running Redgate script that applies a bunch of schema-type changes to a database. It is going to take 3 hours to run. The script will be run on a production database that has mirroring and transaction log shipping in place.

My specific question is: how is transaction log shipping going to be affected by a huge Redgate-generated script? It's configured as follows (a status-check sketch follows the list):
backed up every 15 minutes
backed up to a local drive
shipped to the DR server drive
applied every 30 minutes
kept for 60 minutes
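
With that schedule, one way to keep an eye on how far behind the shipping chain falls while the script runs is to query the standard log shipping monitor tables in msdb. This is a minimal sketch assuming a default log shipping setup; verify the table and column names against your SQL Server version.

```sql
-- On the primary: which .trn backup was taken last, and when.
SELECT primary_database,
       last_backup_file,
       last_backup_date
FROM   msdb.dbo.log_shipping_monitor_primary;

-- On the secondary (or monitor) server: what has been copied and restored so far.
SELECT secondary_database,
       last_copied_file,
       last_copied_date,
       last_restored_file,
       last_restored_date
FROM   msdb.dbo.log_shipping_monitor_secondary;
```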

Will it still ship the changes incrementally, or, if it's all one Redgate transaction, will nothing be shipped until it completes?

My concerns are:
1. making sure the long-running script isn't affected by the transaction log shipping (given it's going to span several backups), and
2. whether the changes will be shipped incrementally or as one big dump - as I understood it, Redgate typically uses one transaction so that if anything fails it rolls everything back. I know the log file grows by about 80 GB in total, so I'm trying to ensure there is enough room for the transaction log shipping to store whatever it needs to store (see the log-space sketch below).
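
For the 80 GB question specifically, a minimal way to watch log growth and truncation during the run is sketched below; the DMV needs SQL Server 2012 or later, and "ProdDb" is a placeholder database name.

```sql
USE ProdDb;   -- placeholder name for the production database

-- Log size and percentage used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Same information for the current database only (SQL Server 2012+).
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS log_used_mb,
       used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;

-- Why the log can't be truncated right now (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = DB_NAME();
```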

Thanks!


2 Answers

怎会甘心 2024-09-25 08:23:27


You should be able to tell from inspecting the RedGate script whether it is all one big transaction. Just grep the SQL files for "begin transaction" to get an idea.

If it is, then your transaction log shipping won't ship it until the transaction has completed and been committed, so it doesn't matter that it spans syncs. I'm pretty sure this is the case - I'm basing it on the article http://msdn.microsoft.com/en-us/library/ms151706.aspx, which says:

"The distribution database [is] the store-and-forward queue from which changes are sent to Subscribers ..."

"Only committed transactions are sent to the distribution database."

一个人练习一个人 2024-09-25 08:23:27


OK, so I made it through my upgrade (yay!) and discovered that it didn't ship the entire thing as one big chunk. From their DBA I got this information:

It doesn't do it as one big chunk... you'll just have bigger TRN files as you go along. The more often you take TRN backups, ship them, and apply them, the smaller you can keep them. However, taking a backup obviously requires CPU + I/O... so you don't want to run it continuously.

So whilst I thought the log file would grow to 90 GB and then try to ship some kind of 90 GB file across, it didn't. It just incrementally filled up the transaction log shipping folder, and the 60 GB of space it had was sufficient for the upgrade :)
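
For anyone wanting to confirm the same behaviour on their own system, the backup history in msdb shows how the change was split across individual log backups. A hedged sketch - "ProdDb" and the date filter are placeholders to adjust:

```sql
SELECT   bs.backup_finish_date,
         bs.backup_size / 1048576.0 AS backup_size_mb,
         bmf.physical_device_name
FROM     msdb.dbo.backupset         AS bs
JOIN     msdb.dbo.backupmediafamily AS bmf
         ON bmf.media_set_id = bs.media_set_id
WHERE    bs.database_name = 'ProdDb'            -- placeholder database name
  AND    bs.type = 'L'                          -- log backups only
  AND    bs.backup_finish_date >= '2024-09-18'  -- window of the upgrade
ORDER BY bs.backup_finish_date;
```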
