SQL Server Change Tracking, Replication, Differential Backups
OK, we have a critical transactional database and it's in full recovery mode on SQL Server 2008. We have two servers in two different data centers in two different time zones. I am trying to work out the best way to keep the database as up to date as possible using the various options. The database is currently only 1.5 GB and is expected to grow 1 GB every 6 months.
We have been using a simple solution: SMO creates a FULL backup at 1 am, and then we take a differential backup every 15 minutes. We transfer these files to the other servers, which act as slaves, and restore them there. So all slaves run 15 minutes behind the current DB, and in case of a crash we will have data up to the last 15 minutes.
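A minimal T-SQL sketch of this kind of schedule (the database name and file paths are placeholders, not from the original post):

```sql
-- Nightly at 1 am: full backup
BACKUP DATABASE OrdersDB
    TO DISK = N'D:\Backups\OrdersDB_full.bak'
    WITH INIT;

-- Every 15 minutes: differential backup (all changes since the last FULL)
BACKUP DATABASE OrdersDB
    TO DISK = N'D:\Backups\OrdersDB_diff.bak'
    WITH DIFFERENTIAL, INIT;

-- On the slave: restore the full once, then the latest differential.
-- NORECOVERY keeps the database restoring so the next diff can still be applied;
-- use WITH RECOVERY only when you need to bring the copy online.
RESTORE DATABASE OrdersDB
    FROM DISK = N'D:\Backups\OrdersDB_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE DATABASE OrdersDB
    FROM DISK = N'D:\Backups\OrdersDB_diff.bak'
    WITH NORECOVERY;
```

Because differentials are cumulative, each 15-minute restore on the slave only needs the newest diff file, not the whole chain.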
Now I want to compare this solution with Replication and Change Tracking.
Both Replication and Change Tracking put some extra metadata in the DB to do their work and use a little extra CPU. However, as far as I understand, they do not put more load on the CPU than a diff backup does. I am assuming that a diff backup will keep some transactions waiting or grow some pending queues, which might cause delays or loss of information while users are working.
I need to know: will a diff backup every 15 minutes put more load on the server? Or is it really not advisable to take diff backups every 15 minutes while transactions are being processed?
Note: transactions are only applied at the primary server, and they reach the slaves via backup and restore. Log shipping doesn't ship schema changes, and if it somehow stops working we don't get any error notification; with our own custom solution we get the logs emailed to us, which helps.
Comments (2)
Forget about replication or change data tracking. Those don't replicate schema and they add significant overhead. Neither is designed as a High Availability or Disaster Recovery solution. They may be used as such, but they pale in comparison with dedicated solutions like log shipping, database mirroring or hardware mirroring.
Log shipping transfers everything in the database, including schema, as well as users, permissions, indexes, data and so on. You did not specify when you transfer the log backups. Doing a differential backup every 15 minutes sounds like overkill. Differential backups are cumulative: they contain every change since the last full backup, so they will grow in size over the course of the day. 15 minutes sounds like a time period for periodic log backups, not differential ones.
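Since the database is in full recovery mode, the 15-minute job this answer suggests would be a transaction log backup rather than a differential; a sketch, with illustrative names:

```sql
-- Every 15 minutes: log backup. It contains only the work since the PREVIOUS
-- log backup, so file size stays roughly constant through the day.
BACKUP LOG OrdersDB
    TO DISK = N'D:\Backups\OrdersDB_20100101_0915.trn';

-- A differential, by contrast, contains everything since the last FULL backup,
-- so each successive one is larger than the one before it:
BACKUP DATABASE OrdersDB
    TO DISK = N'D:\Backups\OrdersDB_diff.bak'
    WITH DIFFERENTIAL;
```

Log backups also truncate the inactive portion of the log, which a full-recovery database needs anyway to keep the LDF file from growing without bound.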
Log shipping relies on file copy operations from a SQL Agent job. As such it needs to access file shares and that requires authentication. Over distinct domains you'll need either Direct Access or a VPN of some sort.
Database Mirroring also creates an identical copy of the database, but its data loss window is measured in seconds, as opposed to the log backup interval in log shipping. Database Mirroring keeps a special connection open between the two servers and the principal ships every transaction to the mirror, as it happens, in real time. Because mirroring endpoints support certificate-based authentication, it can easily be set up across domains and does not require a VPN. DBM can be synchronous (every transaction on the principal waits for the mirror to confirm it before commit, aka high safety mode) or asynchronous (the principal writes ahead of the mirror and commits immediately, aka high performance mode). If connectivity is lost the principal will start running 'exposed', so you do not lose service, but you expose yourself to data loss. As soon as connectivity is regained the principal feeds the mirror the pending queue of transactions (i.e. the part of the LDF file that was not shipped over yet) until the mirror is back up to date. All this is automatic, and there are monitoring tools in SSMS that can be set up to send notifications when connectivity is lost, when the principal is running exposed, or when the unsent queue grows over a pre-set size.
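The mirroring setup described above can be sketched in T-SQL; the endpoint name, port, address and database name here are assumptions for illustration:

```sql
-- On each server: an endpoint for mirroring traffic
-- (Windows auth shown; certificate-based auth is what allows cross-domain setups)
CREATE ENDPOINT MirroringEP
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror server (after restoring the database WITH NORECOVERY),
-- then on the principal, point each at its partner:
ALTER DATABASE OrdersDB
    SET PARTNER = N'TCP://mirror.example.com:5022';

-- Asynchronous / high performance mode, as described above (run on the principal):
ALTER DATABASE OrdersDB SET PARTNER SAFETY OFF;
```

Leaving `SAFETY ON` (the default) gives the synchronous high safety mode, where each commit waits for the mirror's acknowledgement.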
Hardware mirroring: you need to talk with a hardware vendor or your data center operators. It costs a fortune.
Overall, Database Mirroring is by far your best option.
We found our own solution, as follows.
MSDN says differential file backups are faster, so we chose differential file backups. Yes, every 15 minutes looks like a little overkill, but they are the fastest and more reliable. And over 24 hours, the accumulated changes are only a few MB.
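"Differential file backups" here means backing up individual data files with the `DIFFERENTIAL` option, so only the changed extents of that file are written; a sketch with placeholder database and file names:

```sql
-- Differential backup of a single data file: only extents of this file that
-- changed since its last full file backup are included, keeping it small and fast.
BACKUP DATABASE OrdersDB
    FILE = N'OrdersDB_Data'
    TO DISK = N'D:\Backups\OrdersDB_Data_diff.bak'
    WITH DIFFERENTIAL;
```

This assumes a full backup of the same file exists as the differential base; with only a few MB of daily change, each 15-minute file stays tiny.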
The backups are taken by a custom Windows service, and they are also compressed to save network transfer. Plus, we get proper email notifications for everything.
Plus, the slave database can be anywhere on the internet. Backups are secured and compressed with a password. A small built-in HTTP web server transfers the data from one machine to another, so less configuration overhead is required.
When we have many servers, configuring them all is a huge pain. Plus, any network admin may make a mistake and create a disaster.