SQL 2008 R2 P2P replication alternatives
The requirement: Establish 3 global data centers (sites) and direct users to their closest site using a Cisco Global Site Selector. If site 1 goes down, all site 1 traffic will be directed to site 2. If a user from site 1 travels to site 2 or 3, they should be able to access the information they entered. All data should be exactly the same across all data centers, in near real time. We should also be able to easily add a new data center.
The issue: I have an existing database that needs to be replicated to all 3 sites with exactly the same data. I thought I could use SQL 2008 peer-to-peer replication, but that is a no-go because that method does not support identity columns.
What other replication technologies exist to keep 3+ SQL Server 2008 databases in sync across global data centers? Third-party tools? Block replication? Remote clustering?
Answers (1)
I have used Merge Replication for this purpose with good results.
http://msdn.microsoft.com/en-us/library/ms151329.aspx
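In broad strokes, a merge publication is set up with the system stored procedures below. This is a hedged sketch, not a complete setup: the database, publication, and table names (`SalesDB`, `SalesDB_Merge`, `Orders`) are placeholders, and a real deployment also needs a snapshot agent and subscriptions configured at each site.

```sql
-- Enable the database for merge publishing (run on the publisher).
USE master;
EXEC sp_replicationdboption
    @dbname  = 'SalesDB',
    @optname = 'merge publish',
    @value   = 'true';

-- Create the merge publication.
USE SalesDB;
EXEC sp_addmergepublication
    @publication = 'SalesDB_Merge',
    @retention   = 14,        -- days a subscriber may go unsynced before expiring
    @sync_mode   = 'native';

-- Add a table to the publication as an article.
EXEC sp_addmergearticle
    @publication   = 'SalesDB_Merge',
    @article       = 'Orders',
    @source_owner  = 'dbo',
    @source_object = 'Orders';
```

The `@retention` setting matters for geographically separated sites: a subscriber that stays offline longer than the retention period must be reinitialized from a fresh snapshot.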
The largest system we ever managed like this had 11 separate locations, each with its own on-site server.
Note that we used this for individual locations that accessed a shared DB, so that if the lines ever went down they could still operate.
How well this translates to off-site data centers I don't know.
We set the autonumber increment to 100 and then set the seed value for each location differently.
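The seed/increment trick above can be expressed directly in the table definition. This is a hedged sketch with placeholder table and column names: each site creates the same schema but with a different identity seed, while the shared increment of 100 leaves room for up to 100 sites to insert rows without key collisions.

```sql
-- Site 1: IDENTITY(1, 100) generates 1, 101, 201, ...
-- Site 2: IDENTITY(2, 100) generates 2, 102, 202, ...
-- Site 3: IDENTITY(3, 100) generates 3, 103, 203, ...
CREATE TABLE dbo.Orders (
    OrderID    int IDENTITY(1, 100) PRIMARY KEY,  -- seed differs per site
    CustomerID int      NOT NULL,
    EnteredAt  datetime NOT NULL DEFAULT GETDATE()
);
```

When a table is published, the merge agents mark the identity column NOT FOR REPLICATION, so rows arriving from other sites keep their original key values instead of being renumbered locally.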