MySQL: create a table with a cron job, or use a view?

Posted 2024-11-05 05:13:07

I have a number of complex queries, the results of which were being stored in MySQL views. The problem is that MySQL views suffer in performance terms.

I set up a cron job to populate a standard table with the same data that the views were populated with:

DROP TABLE user_reports;
CREATE TABLE user_reports
  SELECT col1, col2, col3 FROM 
  /** COMPLEX QUERY **/
;

Now, queries against the cron-populated user_reports table take roughly a tenth of the time of the same queries against the equivalent view.

Is this a common approach? Obviously there is some burden on the server every time the CRON job is run and it means that data is not available live.

Time taken to query user_reports = 0.002 seconds
Time taken to query view_user_reports = 0.018 seconds

This all said, maybe a query that takes 0.018 seconds to run should be run from application code, rather than stored in views? Although I don't think it would scale as well as the cron-driven method.
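One detail of the DROP/CREATE refresh worth noting: between the DROP TABLE and the CREATE TABLE, readers briefly see no table at all. A common variant is to build the new snapshot under a staging name and then swap. A minimal sketch, using Python's bundled sqlite3 module to stand in for MySQL (in MySQL the swap would be a single RENAME TABLE statement, which is atomic on its own; the staging-table name here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly

# The live table the application queries.
conn.execute("CREATE TABLE user_reports (col1, col2, col3)")
conn.execute("INSERT INTO user_reports VALUES (1, 'old', 'row')")

# Refresh: build the new snapshot under a staging name first.
conn.execute("CREATE TABLE user_reports_staging (col1, col2, col3)")
# ... this INSERT stands in for the complex query ...
conn.execute("INSERT INTO user_reports_staging VALUES (2, 'new', 'row')")

# Swap inside one transaction so readers never see a missing table.
# (MySQL equivalent: RENAME TABLE user_reports TO user_reports_old,
#  user_reports_staging TO user_reports;)
conn.execute("BEGIN")
conn.execute("DROP TABLE user_reports")
conn.execute("ALTER TABLE user_reports_staging RENAME TO user_reports")
conn.execute("COMMIT")

print(conn.execute("SELECT col1, col2 FROM user_reports").fetchall())
# [(2, 'new')]
```

The old data also survives until the commit, so a failed rebuild leaves the previous snapshot in place rather than an empty or missing table.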

Comments (2)

仅一夜美梦 2024-11-12 05:13:07

the results of which were being stored in MySQL views

Oh dear. MySQL views DO NOT store data.

And instead of

DROP TABLE user_reports;
CREATE TABLE user_reports
  ...

is there something wrong with

TRUNCATE TABLE user_reports;

?

I don't think it would scale as well as the cron-driven method

It will scale only as long as the query in the cron job doesn't take long to run; once it does, you'll need to start thinking about incrementally adding data to the pre-processed result set. But at 0.018 seconds, this is just premature optimization.
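The incremental idea mentioned above can be sketched with a high-water mark: each run folds in only the rows that arrived since the previous run, instead of rebuilding the whole table. A hypothetical illustration using Python's sqlite3 (the events/user_reports schema and column names are invented for the example, not taken from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INT, amount INT);
    CREATE TABLE user_reports (user_id INT PRIMARY KEY, total INT);
    CREATE TABLE refresh_state (last_id INT);
    INSERT INTO refresh_state VALUES (0);
""")

def incremental_refresh(conn):
    # Fold in only the events that arrived since the previous run,
    # instead of recomputing user_reports from scratch.
    (last_id,) = conn.execute("SELECT last_id FROM refresh_state").fetchone()
    (new_max,) = conn.execute(
        "SELECT COALESCE(MAX(id), ?) FROM events", (last_id,)).fetchone()
    deltas = conn.execute(
        "SELECT user_id, SUM(amount) FROM events WHERE id > ? GROUP BY user_id",
        (last_id,)).fetchall()
    for user_id, delta in deltas:
        changed = conn.execute(
            "UPDATE user_reports SET total = total + ? WHERE user_id = ?",
            (delta, user_id)).rowcount
        if not changed:  # first time we've seen this user
            conn.execute("INSERT INTO user_reports VALUES (?, ?)",
                         (user_id, delta))
    conn.execute("UPDATE refresh_state SET last_id = ?", (new_max,))
    conn.commit()

conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(1, 1, 10), (2, 1, 5), (3, 2, 7)])
incremental_refresh(conn)   # first run processes ids 1..3
conn.execute("INSERT INTO events VALUES (4, 1, 3)")
incremental_refresh(conn)   # second run touches only id 4
print(conn.execute(
    "SELECT user_id, total FROM user_reports ORDER BY user_id").fetchall())
# [(1, 18), (2, 7)]
```

This works cleanly for append-only source data and additive aggregates; queries involving updates or deletes to old rows need more bookkeeping, which is why the full rebuild stays attractive while it is cheap.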

与风相奔跑 2024-11-12 05:13:07

In most cases an 18 ms db search isn't adversely affecting performance. If it is, then it is common to use some kind of caching. If you need to be able to do searches on the data and it's ok if the data is a trifle old, then your approach is good.

If you don't need to be able to search the data, then the cache can be stored directly in memory or in a file, preferably in the same format in which it is delivered to the clients.
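For the no-search case, the in-memory cache described above can be very small: recompute the expensive result only when the cached copy is older than some time-to-live. A hypothetical sketch (the TTLCache class and expensive_report function are illustrative, not from the answer):

```python
import time

class TTLCache:
    """Recompute an expensive result only when the cached copy
    is older than ttl seconds; otherwise serve it from memory."""
    def __init__(self, compute, ttl=60.0, clock=time.monotonic):
        self._compute = compute
        self._ttl = ttl
        self._clock = clock
        self._value = None
        self._stamp = None  # time of last recompute, None = never

    def get(self):
        now = self._clock()
        if self._stamp is None or now - self._stamp > self._ttl:
            self._value = self._compute()
            self._stamp = now
        return self._value

calls = 0
def expensive_report():
    """Stand-in for the slow query or view."""
    global calls
    calls += 1
    return [("alice", 15), ("bob", 7)]

cache = TTLCache(expensive_report, ttl=60.0)
cache.get()
cache.get()
print(calls)  # 1 -- the second call was served from memory
```

The ttl plays the same role as the cron interval: it bounds how stale the data the clients see can be.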
