A table with 30 million entries is slow. Optimize MySQL or switch to MongoDB?

Published 2024-10-12 14:22:03


I have a simple MySQL database running on one server with two tables: products and reviews. The products table has about 10 million entries and the reviews table has about 30 million.

The whole DB is about 30 GB. It feels like it's getting slow and I'm wondering what I should do about it. I created indexes, but they didn't help. For example, the products table has a category field, and even a simple select * from products where category=2 is slow.

Would switching to MongoDB help in this situation, or can I solve this just by optimizing MySQL somehow? Should I shard, or are the tables not that big and they can be optimized some other way?

Tables and my.cnf

CREATE TABLE IF NOT EXISTS `products` (
  `id` int(11) NOT NULL auto_increment,
  `product_title` varchar(1000) NOT NULL,
  `product_id` varchar(100) NOT NULL,
  `title` varchar(1000) NOT NULL,
  `image` varchar(1000) NOT NULL,
  `url` varchar(1000) NOT NULL,
  `price` varchar(100) NOT NULL,
  `reviews` int(11) NOT NULL,
  `stars` float NOT NULL,
  `BrowseNodeID` int(11) NOT NULL,
  `status` varchar(100) NOT NULL,
  `started_at` int(15) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `id_index` (`BrowseNodeID`),
  KEY `status_index` (`status`),
  KEY `started_index` (`started_at`),
  KEY `id_ind` (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=13743335 ;


CREATE TABLE IF NOT EXISTS `reviews` (
  `id` int(11) NOT NULL auto_increment,
  `product_id` varchar(100) NOT NULL,
  `product_title` varchar(1000) NOT NULL,
  `review_title` varchar(1000) NOT NULL,
  `content` varchar(5000) NOT NULL,
  `author` varchar(255) NOT NULL,
  `author_profile` varchar(1000) NOT NULL,
  `stars` float NOT NULL,
  `owner` varchar(100) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `product_id` (`product_id`),
  KEY `id_index` (`product_id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=48129737 ;

Some info from my.cnf

set-variable    = query_cache_size=1512M
set-variable    = thread_cache_size=8
thread_concurrency = 8
skip-innodb
low-priority-updates
delay-key-write=ALL

key_buffer_size = 100M
max_allowed_packet = 1M
#table_open_cache = 4048
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 50M
set-variable    = table_cache=256
set-variable    = query_cache_limit=1024M
set-variable    = query_cache_size=1024M
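As a first diagnostic, it helps to confirm whether the slow query is using an index at all. A hypothetical check, assuming the `category` filter mentioned above corresponds to the indexed `BrowseNodeID` column in the schema:

```sql
-- Show the execution plan for the slow query
-- (assuming `category` maps to the indexed `BrowseNodeID` column)
EXPLAIN SELECT * FROM products WHERE BrowseNodeID = 2;

-- If the `type` column reports ALL, MySQL is doing a full table scan
-- instead of using the `id_index` key; `ref` with key = id_index means
-- the index is being used and the slowness lies elsewhere.
```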


Comments (1)

明月松间行 2024-10-19 14:22:03


Based on your my.cnf, it looks as though your key_buffer_size is way too small, so you're going to disk for every read. Ideally, that value should be set larger than the total size of your MyISAM indexes.
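A sketch of how to measure the total MyISAM index size, using the standard `information_schema.tables` view:

```sql
-- Total size of all MyISAM indexes on this server, in MB.
-- key_buffer_size should ideally be at least this large
-- (subject to available RAM).
SELECT SUM(index_length) / 1024 / 1024 AS myisam_index_mb
FROM information_schema.tables
WHERE engine = 'MyISAM';
```

You can then raise `key_buffer_size` in my.cnf accordingly (for example to `2G` if the query reports ~1.5 GB of indexes and the machine has the memory to spare).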

Before you go changing DB technologies, you may also want to consider changing your table type to InnoDB. Your my.cnf has it disabled, right now. I've gotten pretty stellar performance out of a 300M row table with smart indexes and enough memory. InnoDB will also give you some leeway with longer running reads, as they won't lock your entire table.
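A minimal sketch of the conversion. Note that `ALTER TABLE ... ENGINE` rebuilds the whole table, so on tables this size it will take a while and temporarily needs extra disk space:

```sql
-- Convert each table in place; MySQL rewrites all rows and indexes
ALTER TABLE products ENGINE=InnoDB;
ALTER TABLE reviews  ENGINE=InnoDB;
```

Before running this, remove `skip-innodb` from my.cnf and restart; after the conversion, memory should go to `innodb_buffer_pool_size` rather than `key_buffer_size`, since InnoDB caches both data and indexes in its own buffer pool.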
