Archiving MySQL data weekly with PHP

Posted 2024-10-31 17:28:19

I have a MySQL DB that receives a lot of data from a source once every week, on a certain day of the week at a given time (about 1.2 million rows), and stores it in, let's call it, the "live" table.

I want to copy all the data from the "live" table into an archive and truncate the live table to make space for the next "current data" that will come in the following week.

Can anyone suggest an efficient way of doing this? I am really trying to avoid INSERT INTO archive_table SELECT * FROM live. I would like the ability to run this archiver using PHP, so I can't use Maatkit. Any suggestions?

EDIT: Also, the archived data needs to be readily accessible. Since every insert is timestamped, if I want to look for the data from last month, I can just search for it in the archives.

终止放荡 2024-11-07 17:28:19

The sneaky way:

Don't copy records over. That takes too long.

Instead, just rename the live table out of the way, and recreate it:

RENAME TABLE live_table TO archive_table;
CREATE TABLE live_table (...);

It should be quite fast and painless.
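
For reference, a minimal PHP sketch of that idea driven through PDO follows. The connection details and the table names (live_table, the date-suffixed archive name) are placeholders, and it assumes CREATE TABLE ... LIKE is acceptable for cloning the structure; swapping both names in a single RENAME TABLE keeps live_table available at all times.

<?php
// Sketch only: rotate the live table out and put a fresh, empty copy in its place.
// DSN, credentials and table names are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$archive = 'archive_' . date('Y_m_d');   // e.g. archive_2024_10_31

// Clone the structure (including indexes) into an empty table, then swap the
// names in one atomic RENAME TABLE so there is no window without live_table.
$pdo->exec('CREATE TABLE live_table_new LIKE live_table');
$pdo->exec("RENAME TABLE live_table TO `$archive`, live_table_new TO live_table");

Because RENAME TABLE is a metadata change, the runtime does not depend on the 1.2 million rows being rotated.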

EDIT: The method I described works best if you want one archive table per rotation period. If you want to maintain a single archive table, you might need to get trickier. However, if you just want to do ad-hoc queries on historical data, you can probably just use UNION.

If you only wanted to save a few periods worth of data, you could do the rename thing a few times, in a manner similar to log rotation. You could then define a view that UNIONs the archive tables into one big honkin' table.
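
A rough PHP sketch of that rotation, assuming three weekly slots named archive_1 through archive_3 and the same placeholder connection as above (the first few runs would need an extra existence check before renaming a slot that isn't there yet):

<?php
// Sketch only: log-rotation style archiving with a UNION ALL view over the slots.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Shift the snapshots down, discarding the oldest (assumes the slots already exist).
$pdo->exec('DROP TABLE IF EXISTS archive_3');
$pdo->exec('RENAME TABLE archive_2 TO archive_3');
$pdo->exec('RENAME TABLE archive_1 TO archive_2');

// Rotate the current live table into slot 1 and recreate it empty.
$pdo->exec('CREATE TABLE live_table_new LIKE live_table');
$pdo->exec('RENAME TABLE live_table TO archive_1, live_table_new TO live_table');

// One view that UNIONs the archive tables into a single queryable "table".
$pdo->exec('CREATE OR REPLACE VIEW archive_all AS
            SELECT * FROM archive_1
            UNION ALL SELECT * FROM archive_2
            UNION ALL SELECT * FROM archive_3');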

EDIT2: If you want to maintain auto-increment stuff, you might hope to try:

RENAME TABLE live TO archive1; 
CREATE TABLE live (...); 
ALTER TABLE LIVE AUTO_INCREMENT = (SELECT MAX(id) FROM archive1);

but sadly, that won't work. However, if you're driving the process with PHP, that's pretty easy to work around.
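
One way to do that workaround from PHP, sticking with the table and column names used above (archive1, live, id) and a placeholder connection: fetch the maximum id first, then splice it into the ALTER TABLE as a literal, since MySQL won't accept a subquery in that position.

<?php
// Sketch only: set the live table's AUTO_INCREMENT from the archived data.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// COALESCE covers the case of an empty archive.
$maxId = (int) $pdo->query('SELECT COALESCE(MAX(id), 0) FROM archive1')->fetchColumn();

// AUTO_INCREMENT must be an integer literal, so build the statement in PHP.
$pdo->exec('ALTER TABLE live AUTO_INCREMENT = ' . ($maxId + 1));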

悟红尘 2024-11-07 17:28:19

Write a script to run as a cron job (a rough sketch follows the list) to:

  1. Dump the archive data from the "live" table (this is probably more efficient using mysqldump from a shell script)
  2. Truncate the live table
  3. Modify the INSERT statements in the dump file so that the table name references the archive table instead of the live table
  4. Append the archive data to the archive table (again, could just import from dump file via shell script, e.g. mysql dbname < dumpfile.sql)
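
A rough PHP sketch of those four steps, suitable for a cron-invoked CLI script. The database name, table names, credentials, dump path and the availability of the mysqldump/mysql binaries on PATH are all assumptions:

<?php
// Sketch only: dump, truncate, rewrite the table name, then re-import into the archive.
$db    = 'mydb';
$dump  = '/tmp/live_dump.sql';
$creds = '--user=archiver --password=secret';   // better kept in a ~/.my.cnf option file

// 1. Dump only the rows (no CREATE TABLE) from the live table.
exec("mysqldump $creds --no-create-info --single-transaction $db live_table > " . escapeshellarg($dump));

// 2. Truncate the live table for next week's load.
$pdo = new PDO("mysql:host=localhost;dbname=$db;charset=utf8mb4", 'archiver', 'secret');
$pdo->exec('TRUNCATE TABLE live_table');

// 3. Point the INSERT statements at the archive table instead of the live one.
//    (For very large dumps, stream the file or use sed rather than loading it all.)
file_put_contents($dump, str_replace('INSERT INTO `live_table`', 'INSERT INTO `archive_table`', file_get_contents($dump)));

// 4. Append the archive data to the archive table.
exec("mysql $creds $db < " . escapeshellarg($dump));
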
影子是时光的心 2024-11-07 17:28:19

This would depend on what you're doing with the data once you've archived it, but have you considered using MySQL replication?

You could set up another server as a replication slave, and once all the data gets replicated, do your delete or truncate with a SET sql_log_bin = 0 issued first in the same session, so that statement isn't replicated as well.
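
In PHP terms that could look like the sketch below. The table name and credentials are placeholders, and the connecting account needs sufficient privileges to change sql_log_bin.

<?php
// Sketch only: run the truncate on the primary without writing it to the binary
// log, so the replica keeps last week's rows and serves as the archive.
$pdo = new PDO('mysql:host=primary;dbname=mydb;charset=utf8mb4', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->exec('SET sql_log_bin = 0');        // stop replicating this session's statements
$pdo->exec('TRUNCATE TABLE live_table');  // empties the primary only
$pdo->exec('SET sql_log_bin = 1');        // restore normal logging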
