Data synchronization between two databases

Posted on 2024-12-03 14:56:18

I need to synchronize between two data sources:

I have a web service running on the net. It continuously gathers data from the net and stores it in a database. It also serves data to clients on request. I want to keep a repository of the data as objects to provide faster service.

On the client side, there is a Windows service that calls the web service mentioned previously and synchronizes its local database with the server.

A few of my restrictions:

  • The web service has a very small buffer limit and can transfer fewer than 200 records per call, which is not enough for the data collected in a day.
  • I also can't copy the database files, since the database structures are very different (one is SQL and the other is Access).
  • The data is updated on an hourly basis, and a large amount of data will need to be transferred.
  • Syncing by date or other grouping alone is not possible with the size limit. Paging can be done, but the remote repository keeps changing (and I don't know how to take a chunk of data from the middle of a SQL database table).
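On the last point, keyset pagination is one common way to take stable, fixed-size chunks from anywhere in a SQL table without OFFSET: order by the primary key and remember the last key seen. A minimal sketch (hypothetical table and column names, SQLite standing in for the real database):

```python
import sqlite3

# Sample data: 500 rows in a table with an integer primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [(f"row-{i}",) for i in range(500)])

CHUNK = 200  # the web service's per-call buffer limit

def fetch_chunk(last_id):
    # Rows come back in stable id order, so each call resumes
    # exactly where the previous one stopped.
    return conn.execute(
        "SELECT id, payload FROM records WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, CHUNK)).fetchall()

last_id = 0
chunks = []
while True:
    rows = fetch_chunk(last_id)
    if not rows:
        break
    chunks.append(rows)
    last_id = rows[-1][0]  # resume key for the next call

print(len(chunks))                  # 3 chunks: 200 + 200 + 100
print(sum(len(c) for c in chunks))  # 500
```

Unlike `LIMIT ... OFFSET`, the resume key is unaffected by new rows being appended after it, which matters when the repository keeps changing.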

Given these limitations, how do I keep the repository updated with recent data, or keep the full database in sync?

A better approach to the problem, or an improvement of the current approach, will be accepted as the right answer.


Comments (1)

徒留西风 2024-12-10 14:56:18

You mentioned that syncing by date or by group wouldn't work because the number of records would be too big, but what about syncing by date (or group or whatever) and then paging by that? The benefit is that you will have a defined batch of records and you can now page over that because that group won't change.

For example, if you need to pull data off hourly, as each hour elapses (so, when it goes from 8:59am to 9:00 am), you begin pulling down the data that was added between 8am and 9am in chunks of 200 or whatever size the service can handle.
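The idea above can be sketched in code: once an hour has elapsed, the set of rows stamped inside that hour is frozen, so it can be paged safely in 200-row chunks. This is a hypothetical sketch (table, column names, and timestamps are invented; SQLite stands in for the server database):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements ("
             "id INTEGER PRIMARY KEY, created TEXT, value REAL)")

# Simulate ~80 minutes of collected data, one row every 7 seconds,
# starting at 8:00 am.
base = datetime(2024, 12, 3, 8, 0, 0)
rows = [((base + timedelta(seconds=7 * i)).isoformat(), float(i))
        for i in range(700)]
conn.executemany("INSERT INTO measurements (created, value) VALUES (?, ?)", rows)

CHUNK = 200  # the service's per-call limit

def pull_window(start, end):
    """Page through all rows created in [start, end) in fixed chunks."""
    last_id, out = 0, []
    while True:
        # ISO-8601 strings compare correctly as text, and the id keyset
        # makes each page resume where the previous one stopped.
        batch = conn.execute(
            "SELECT id, created, value FROM measurements "
            "WHERE created >= ? AND created < ? AND id > ? "
            "ORDER BY id LIMIT ?",
            (start.isoformat(), end.isoformat(), last_id, CHUNK)).fetchall()
        if not batch:
            return out
        out.extend(batch)
        last_id = batch[-1][0]

# When the clock rolls past 9:00, pull everything stamped 8:00-9:00.
window = pull_window(base, base + timedelta(hours=1))
print(len(window))
```

Because the 8:00–9:00 window is closed by the time it is pulled, repeated calls page over an unchanging set, sidestepping the "repository keeps changing" problem.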
