MySQL replication in shared hosting
I need to do some MySQL replication.
Some information:
- I have two database instances, in shared hosting, so I can't use MySQL replication (I don't have access to the configuration files).
- It's for a non-profit project (educational), so we can't afford our own servers.
- If the main server is down for a few minutes it's generally not that bad, but there are specific days when we REALLY need a backup solution synchronized with the main server (time-limited events on the website).
Right now the system uses a revision number on every row of every table, and we periodically check these numbers for modifications (and update the corresponding rows).
It's quite slow.
What I'm thinking of is that every SELECT/INSERT/UPDATE query is logged in a specific table, and the "slave server" periodically asks the "master server" for the content of this table and applies the corresponding queries.
What is your opinion on that idea?
I know it's not perfect (a server might go down before all the queries are propagated), but I want to minimize the possible problems with as few lines of code as possible.
What would be the best way to implement it?
- In the PHP code, on every SELECT/INSERT/UPDATE, I can do another insert into a specific table (I simply insert the query text)
- With a trigger?
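A minimal sketch of the query-log idea (all table and column names here are illustrative, not from the original post): the master appends each write statement to a log table, and the slave remembers the highest id it has already applied.

```sql
-- Hypothetical query-log table on the master (sketch only).
CREATE TABLE query_log (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    query_text TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

-- The PHP layer would record each write alongside executing it, e.g.:
-- INSERT INTO query_log (query_text)
--   VALUES ('UPDATE articles SET title = ''...'' WHERE id = 42');

-- The slave polls with the highest id it has already applied:
SELECT id, query_text FROM query_log WHERE id > 1041 ORDER BY id;
-- ...then executes each query_text locally and stores the new max id.
```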
Comments (2)
I successfully used a combination of triggers and a federated table to simulate replication of data from a MyISAM table on one server to a MyISAM table on a different server in a shared hosting environment.
Any inserts / updates / deletes on my master table are replicated to my federated table on the same server via AFTER INSERT / AFTER UPDATE / AFTER DELETE triggers. That federated table then pushes the changes to a table on a different server.
I can't take the credit for coming up with this approach as it was very helpfully documented by RolandoMySQLDBA on Server Fault:
Is a MySQL stored procedure able to insert/update to a remote backup MySQL server?.
Here are the steps I implemented:
On SERVER2...
I created a table (let's call it slave_table) with columns which matched those in the master table (let's call it master_table) on SERVER1.
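A sketch of that table, assuming master_table has just id and title columns (as in the trigger statements later in this answer):

```sql
-- On SERVER2: an ordinary table that will receive the replicated rows.
CREATE TABLE slave_table (
    id    INT UNSIGNED NOT NULL PRIMARY KEY,
    title VARCHAR(255) NOT NULL
) ENGINE=MyISAM;
```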
On SERVER1...
I created a table (let's call it federated_table) with columns which matched those in master_table, specifying the FEDERATED storage engine and a CONNECTION string to reference slave_table on SERVER2...
CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';
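Putting that together (a sketch; the column definitions are assumed to mirror slave_table):

```sql
-- On SERVER1: a FEDERATED table that forwards writes to slave_table on SERVER2.
CREATE TABLE federated_table (
    id    INT UNSIGNED NOT NULL PRIMARY KEY,
    title VARCHAR(255) NOT NULL
) ENGINE=FEDERATED
  CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';
```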
I added AFTER INSERT, AFTER UPDATE and AFTER DELETE triggers to master_table which contained SQL commands to...
INSERT INTO federated_table VALUES (NEW.id,NEW.title);
UPDATE federated_table SET id=NEW.id,title=NEW.title WHERE id=OLD.id;
DELETE FROM federated_table WHERE id=OLD.id;
...respectively. I hope that helps someone in a similar situation.
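Spelled out as full trigger definitions, the statements above might look like this (a sketch; the trigger names are illustrative, and the two-column table matches the snippets in the answer):

```sql
DELIMITER $$

-- Mirror every insert on master_table into the federated table.
CREATE TRIGGER master_table_ai AFTER INSERT ON master_table
FOR EACH ROW
    INSERT INTO federated_table VALUES (NEW.id, NEW.title)$$

-- Mirror updates, matching the remote row by its old id.
CREATE TRIGGER master_table_au AFTER UPDATE ON master_table
FOR EACH ROW
    UPDATE federated_table SET id = NEW.id, title = NEW.title
    WHERE id = OLD.id$$

-- Mirror deletes.
CREATE TRIGGER master_table_ad AFTER DELETE ON master_table
FOR EACH ROW
    DELETE FROM federated_table WHERE id = OLD.id$$

DELIMITER ;
```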
Two ideas:
A cron job that finds the max(ID) in each backup database table and then fetches all the records in the main database beyond it.
To include the suggestion from my comment: duplicate your writes directly to the second database instead of writing the queries to a table. This may cause a bit of overhead, but might be the easiest to implement.
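A sketch of the first idea (table and column names are illustrative):

```sql
-- Run from a cron job. On the backup database, find the newest replicated row:
SELECT MAX(id) FROM articles;            -- suppose this returns, e.g., 1041

-- Then, against the main database, fetch everything newer than that:
SELECT * FROM articles WHERE id > 1041;
-- ...and insert the returned rows into the backup copy of articles.
-- Note this only catches inserts; updates and deletes need another mechanism.
```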