Aggregating multiple distributed MySQL databases
I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send its contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer for sending.
Keys will be namespaced, so there shouldn't be any conflict there, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping this to a file and bulk-loading it at the master server.
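Roughly, with a made-up table readings and a timestamp column updated_at as placeholders (not a real schema):

    -- On each distributed machine: give every table a touch-on-write timestamp.
    ALTER TABLE readings
      ADD COLUMN updated_at TIMESTAMP
                 DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP;

    -- Export only the rows changed since the last dump (the cutoff value
    -- would be remembered by whatever script drives the export).
    SELECT * INTO OUTFILE '/tmp/readings.tsv'
    FROM readings
    WHERE updated_at > '2009-06-01 00:00:00';

    -- On the master, after carrying the file over: bulk-load it.
    -- REPLACE overwrites rows that were re-exported after an UPDATE.
    LOAD DATA INFILE '/tmp/readings.tsv' REPLACE INTO TABLE readings;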
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your "SELECT * FROM [table] WHERE timestamp > last_backup_time" approach will miss DELETEd rows.
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog is not thrown away automatically. Copy the binlog files to the USB stick, then use PURGE MASTER LOGS TO ... to erase them on the source server.
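Roughly, with illustrative file names and paths (mysql-bin.000040 through .000042, /media/usb):

    # my.cnf on each source server (server-id value is illustrative):
    #   [mysqld]
    #   log-bin          = mysql-bin
    #   server-id        = 2          # unique per source machine
    #   expire_logs_days = 0          # never discard binlogs automatically

    # Close the current binlog so only finished files need copying:
    mysql -e "FLUSH LOGS"

    # Copy the finished binlog files onto the stick:
    cp /var/lib/mysql/mysql-bin.00004[0-2] /media/usb/

    # Only after the copy is verified, let the server delete them:
    mysql -e "PURGE MASTER LOGS TO 'mysql-bin.000043'"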
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import it as an SQL script.
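For example (same illustrative file names):

    # On the aggregation server: turn the binlogs into an SQL script...
    mysqlbinlog /media/usb/mysql-bin.00004[0-2] > site_a_changes.sql

    # ...and replay it:
    mysql -u root -p < site_a_changes.sql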
The aggregation server must have a copy of each source server's database, but can keep that copy under a different schema name, as long as all your SQL uses unqualified table names (never uses schema.table syntax to refer to a table). The import of the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
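A sketch of that replay, with an illustrative per-site schema name site_a; note that if the generated script carries its own use statements, those have to be rewritten to the per-site schema as well (newer mysqlbinlog versions have a --rewrite-db option for exactly this):

    # Prefix the per-site schema, then replay the changes under it:
    ( echo "USE site_a;"; cat site_a_changes.sql ) | mysql -u root -p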
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
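For instance, with illustrative schema names site_a, site_b and an aggregate schema warehouse:

    -- Keys are namespaced per site, so a plain union-load cannot collide:
    INSERT INTO warehouse.readings
    SELECT * FROM site_a.readings
    UNION ALL
    SELECT * FROM site_b.readings;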