MS-SQL Server 2005: Initialising a merge subscription with an alternate snapshot location

We started some overseas merge replication a year ago and everything has gone fine until now. My problem is that we now have so much data in our system that any crash on one of the subscriber servers would be a disaster: reinitialising a subscription the standard way would take days (our connections are definitely slow, and already very, very expensive)! The ideas I have been considering are the following:

  1. Make a copy of the original database, freeze it, send the files by plane to the subscriber, and initiate replication without a snapshot: this is something that was traditionally done with older versions of SQL Server, but it sounds a little messy to me: I would have to put my publisher's data in read-only mode and stop all replication until the operation is completed.
  2. Make a snapshot of the data, send the snapshot files abroad, install them on the subscriber, and indicate the new snapshot location as an alternate location in the replication properties. This one sounds fair to me (no need to suspend ongoing replication, no data freeze), but on this point Microsoft's help does not ... help (I have put a rough, untested sketch of what I think this looks like right after this list).
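
From what I can piece together, option 2 would be configured roughly like the sketch below (untested; the publication name and UNC path are invented), and apparently the Merge Agent on the subscriber can then be pointed at the copied files with its -AltSnapshotFolder argument:

    -- At the publisher, in the publication database.
    -- Point the merge publication at an alternate snapshot folder
    -- (publication name and UNC path are invented for illustration).
    EXEC sp_changemergepublication
        @publication = N'MyMergePublication',
        @property = N'alt_snapshot_folder',
        @value = N'\\PUBLISHER\ReplData\AltSnapshots',
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 0;

    -- Optionally have the Snapshot Agent write compressed .cab files,
    -- which are easier to ship on physical media or over FTP.
    EXEC sp_changemergepublication
        @publication = N'MyMergePublication',
        @property = N'compress_snapshot',
        @value = N'true',
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 0;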

I am sure some of you have already experienced such a situation. What was your choice?

EDIT: of course, one could say "Why don't you just give your ideas a try?", but that would take hours (multiple instances of SQL Server, virtual machines, and all that stuff...), and I was thinking that someone who has done it would need only 2 minutes to explain his approach. I'd be the happiest man if someone were willing to lose 2 minutes of his time to spare me hours of hard work...

Comments (2)

城歌 2024-07-12 05:15:04

We just went through something just like this, and it is not pretty. Even though all of the servers involved were local, it still took a long time.

Just to make things more difficult, at least with SQL 2000, the snapshot will fail if the compressed .cab file would exceed 4 Gig.

The best advice I could offer is to make sure that each site has good backups available. With that, at least the data would not have to be hand-carried to the subscriber.
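
Nothing fancy is needed for the backups themselves; a plain scheduled full backup on each side would do, something along these lines (database name and path are placeholders):

    -- Plain full backup of the subscriber database to local disk
    -- (database name and backup path are placeholders).
    BACKUP DATABASE [SubscriberDB]
    TO DISK = N'D:\Backups\SubscriberDB_full.bak'
    WITH INIT, CHECKSUM, STATS = 10;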

瘫痪情歌 2024-07-12 05:15:04

I had to do something similar when replicating data from Los Angeles, CA to China. The snapshot would have taken 44 days to load using normal methods.

What I did was configure the SQL replication to use a local path for the snapshot. I then disabled the transactional job (in your case the merge job), ran the snapshot, zipped it up, and FTPed the files from California to China. When they arrived in China, I unzipped them and put them in the same folder path that I had used in California.
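
In T-SQL terms, those publisher-side steps look roughly like this (the job and publication names are placeholders, not the exact ones used; sp_startpublication_snapshot is just one way to kick off the Snapshot Agent):

    -- At the publisher: temporarily disable the agent job that syncs
    -- the remote subscriber (the job name is a placeholder).
    EXEC msdb.dbo.sp_update_job
        @job_name = N'PUBLISHER-MyDB-MyPublication-REMOTESUB-1',
        @enabled = 0;

    -- In the publication database: generate the snapshot files locally.
    EXEC sp_startpublication_snapshot
        @publication = N'MyPublication';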

I then ran distrib.exe from the command line on the server in China. This loaded the data into the tables in China. Once the snapshot was loaded, I shut down the Distribution Agent on the server in China and started up the normal Distribution Agent on the server in California.
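
The distrib.exe call looks roughly like the sketch below (all server, database, and folder names are invented; for a merge publication you would run replmerg.exe with the same kind of switches, including -AltSnapshotFolder):

    :: Run on the subscriber; all names and paths below are invented.
    :: -SubscriptionType 1 = pull; security mode 1 = Windows authentication.
    distrib.exe -Publisher "CA-PUBLISHER" -PublisherDB "SalesDB" ^
        -Publication "SalesPub" -Distributor "CA-PUBLISHER" ^
        -Subscriber "CN-SUBSCRIBER" -SubscriberDB "SalesDB" ^
        -SubscriptionType 1 -DistributorSecurityMode 1 -SubscriberSecurityMode 1 ^
        -AltSnapshotFolder "D:\ReplSnapshots\SalesPub"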

This method only took about 28 hours instead of over a month.

If your data will take more than a couple of days to get to its destination, you will need to edit the publication and increase its retention period (how long changes can stay queued up); otherwise the subscription will expire and a new snapshot will need to be taken.
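
For a merge publication that comes down to a single property change; the 30-day value below is only an example:

    -- In the publication database at the publisher: lengthen how long
    -- merge changes are retained before a subscription is marked expired.
    -- 30 (days) is only an example value.
    EXEC sp_changemergepublication
        @publication = N'MyMergePublication',
        @property = N'retention',
        @value = N'30';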
