MySQL efficiency in relation to database/table size

Published 2024-08-30 00:04:49

I'm building a system using django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things:

  1. Is adding all these rows going to seriously degrade the efficiency of my django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly (see the sketch after this question)? Or will this make the database slow all around to the extent that I can't do anything about it at all?

  2. If you scoff at my 40k rows, then, my next question is, at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret.

  3. How is sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx?

Thanks for any thoughts.
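Regarding question 1: one common Django-side inefficiency that only becomes visible as tables grow is the "N+1 query" pattern, where related objects are fetched one query at a time. A minimal sketch, assuming a configured Django project and hypothetical Author/Entry models (not the poster's actual schema):

    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Entry(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE)
        title = models.CharField(max_length=200)

    # N+1 pattern: one query for the entries, then one extra query per entry
    # to fetch its author - cheap at 2,000 rows, painful at 200,000.
    def slow_listing():
        for entry in Entry.objects.all():
            print(entry.author.name, entry.title)

    # Same output from a single query: select_related() adds a JOIN on the
    # author table, so no per-row queries are issued.
    def fast_listing():
        for entry in Entry.objects.select_related("author"):
            print(entry.author.name, entry.title)

The point is that fixes like this are per-query, not a rewrite of the whole app.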

Comments (1)

一个人的旅程 2024-09-06 00:04:49

For ordinary queries, 2000 rows is nothing - even without an index it will be very fast, since the entire table can be cached in memory. 100,000 rows should work fine too in most situations, although if you don't have appropriate indexes, or your queries aren't using the indexes that are available, you will start to notice it at that scale - queries that should take seconds can take minutes when they don't use indexes correctly. But it shouldn't take long to fix: run EXPLAIN on a slow query to see why it is slow, and work out which indexes you need.
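To make the EXPLAIN suggestion concrete, here is a minimal sketch of running it through Django's raw database connection, so you can inspect the plan for exactly the SQL your app generates. The helper and the table/column names are hypothetical, assuming a configured Django project with a MySQL backend:

    from django.db import connection

    # Throwaway helper: prefix a query with EXPLAIN and print MySQL's plan.
    # A "type" of ALL in the output indicates a full table scan, which is
    # usually the first thing to fix with an index.
    def explain(sql, params=None):
        with connection.cursor() as cursor:
            cursor.execute("EXPLAIN " + sql, params or [])
            columns = [col[0] for col in cursor.description]
            for row in cursor.fetchall():
                print(dict(zip(columns, row)))

    # Hypothetical table and column names for illustration.
    explain("SELECT * FROM myapp_entry WHERE title = %s", ["some title"])

Newer Django versions also expose this directly as QuerySet.explain().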

By the time you get to millions or tens of millions of rows, you will have to think more carefully about your database design and your indexing strategy. It's possible to have hundreds of millions of rows in a table if you do things right.
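One concrete way to act on that indexing advice in Django is to declare the indexes on the model, so they are created along with the table. A sketch using a variant of the hypothetical Entry model from above (Meta.indexes requires Django 1.11 or later):

    from django.db import models

    class Entry(models.Model):
        # db_index=True creates a single-column index for lookups on title.
        title = models.CharField(max_length=200, db_index=True)
        created = models.DateTimeField()

        class Meta:
            indexes = [
                # Composite index covering queries that filter or sort on
                # created, or on created and title together.
                models.Index(fields=["created", "title"]),
            ]

Each index speeds up reads but adds work to every INSERT, which matters when bulk-loading 40,000 rows; loading the data first and adding the indexes afterwards is often faster.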
