Is MySQL UPDATE faster than INSERT INTO?

Posted 2024-09-09 17:26:55

This is more of a theory question.

If I'm running 50,000 queries that insert new rows, and 50,000 queries that update those rows, which one will take less time?

Comments (5)

压抑⊿情绪 2024-09-16 17:26:55

Insert would be faster, because with an update you need to first search for the record that you are going to update and then perform the update.

Though this hardly seems like a valid comparison, since you never really have a choice between inserting and updating; the two fill completely different needs.

EDIT: I should also add that this assumes there are no insert triggers or other situations that could cause potential bottlenecks.
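
A minimal sketch of the two operations being contrasted (the users table and column names here are hypothetical, purely for illustration):

    -- Hypothetical table for illustration.
    CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50));

    -- INSERT: the new row is simply written; no existing row has to be found.
    INSERT INTO users (id, name) VALUES (1, 'alice');

    -- UPDATE: the WHERE clause first has to locate the row (via the primary
    -- key here), and only then is the modification applied.
    UPDATE users SET name = 'bob' WHERE id = 1;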

柏林苍穹下 2024-09-16 17:26:55
Insert Operation : Create  -> Store

Update Operation : Retrieve -> Modify -> Store

The insert operation is faster.
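
If you want to see the "Retrieve" step for yourself, MySQL 5.6+ can EXPLAIN data-modifying statements; this is only a sketch against the hypothetical users table from the answer above:

    -- The plan for the UPDATE shows the row lookup (e.g. primary-key access),
    -- which is the "Retrieve" step in the diagram. An INSERT of new rows has
    -- no such lookup to plan.
    EXPLAIN UPDATE users SET name = 'carol' WHERE id = 1;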

以为你会在 2024-09-16 17:26:55

With inserts into the same table you can always insert all the rows with one query, which is much faster than inserting them one by one. With updates you can update several rows at a time, but that doesn't apply to every update situation, and often you have to run one update query at a time (when updating a specific id). On a big table it is very slow to have to find the row and then update it every time. In my experience it stays slow even when the table is indexed.
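
A sketch of that contrast, again with a hypothetical users table: a single multi-row INSERT versus the one-statement-per-id updates you often end up with.

    -- One statement can insert many rows at once...
    INSERT INTO users (id, name) VALUES
      (1, 'alice'),
      (2, 'bob'),
      (3, 'carol');

    -- ...whereas per-id updates usually mean one statement per row,
    -- each with its own lookup.
    UPDATE users SET name = 'alice2' WHERE id = 1;
    UPDATE users SET name = 'bob2'   WHERE id = 2;
    UPDATE users SET name = 'carol2' WHERE id = 3;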

掌心的温暖 2024-09-16 17:26:55

As an aside here, don't forget that by doing loads more inserts than updates, you have more rows when you come to select, so you'll slow down the read operation.

So the real question then becomes: what do you care about more, a quick insert or a speedy read? Again, this is dependent on certain factors, particularly (and not yet mentioned) the DB engine, such as InnoDB (which is now standard in PHPMyAdmin, incidentally).

I agree with everyone else, though: there's too much to consider on a case-by-case basis, so you really need to run your own tests and assess the situation from there based on your needs.
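
One way to run such a test yourself is a pair of looping stored procedures, timed by the mysql client. This is only a sketch; the table, procedure names, and row counts are made up, and real numbers depend heavily on the storage engine, indexes, and how you batch transactions.

    -- Hypothetical benchmark table.
    CREATE TABLE bench (id INT PRIMARY KEY, val VARCHAR(50)) ENGINE=InnoDB;

    DELIMITER //
    CREATE PROCEDURE bench_inserts(n INT)
    BEGIN
      DECLARE i INT DEFAULT 0;
      WHILE i < n DO
        INSERT INTO bench (id, val) VALUES (i, 'x');
        SET i = i + 1;
      END WHILE;
    END//

    CREATE PROCEDURE bench_updates(n INT)
    BEGIN
      DECLARE i INT DEFAULT 0;
      WHILE i < n DO
        UPDATE bench SET val = 'y' WHERE id = i;
        SET i = i + 1;
      END WHILE;
    END//
    DELIMITER ;

    -- The mysql client prints the elapsed time for each statement.
    -- (Wrap the calls in a transaction if you want to batch the commits.)
    CALL bench_inserts(50000);
    CALL bench_updates(50000);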

层林尽染 2024-09-16 17:26:55

There are a lot of non-practical answers here. Yes, theoretically updates are slower because they have to do the extra step of looking up the row. But that is not at all the full picture if you're working with a database made after 1992.

Short answer: they're the same speed. (Don't pick one operation over the other for the sake of speed; just pick the right operation.)

Long answer: When updating, you're writing to memory pages and marking them as dirty. Any modern database will detect this and keep these pages in cache longer (as opposed to a normal select statement, which doesn't set this flag). The cache is also smart enough to hold on to pages that are accessed frequently (see LRU-K). So subsequent updates to the same rows will be pretty much instant, with no lookups needed. This assumes you're updating based on indexed columns such as IDs (more on that in a second).
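
If you want to peek at this effect (assuming InnoDB), the server exposes counters for dirty pages in the buffer pool; a quick, non-authoritative way to look:

    -- Dirty pages have been modified in the buffer pool but not yet flushed
    -- to disk; repeated updates to hot rows mostly hit these in-memory pages.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';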

Compare this to a rapid stream of inserts: new pages need to be created, and those pages need to be loaded into the cache. Sure, you can put multiple new rows on the same page, but as you continue to insert, each page fills up and is tossed aside, never to be used again. So you don't get to take advantage of re-using pages in the cache. (And just as a note, "loading pages into the cache" is also known as a "page fault", which is the #1 slow-downer in database technology in most environments; MongoDB is always keen to share this idea.)

If you're updating on the basis of a column that isn't indexed: yeah, that is WAY slower than inserting. That situation should be made infrequent in any app. But mind you, if you DO have indexes on a table, they will speed up your updates but also slow down your inserts, because newly inserted rows have to write new index entries as well (whereas updates re-use existing index data instead of generating new entries). See here for more details on how MySQL does that.
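
A short sketch of both points, using the hypothetical bench table from the earlier comment (the index name is made up):

    -- Updating by a non-indexed column forces a scan of the table to find
    -- the matching rows (EXPLAIN will show a full scan on val):
    EXPLAIN UPDATE bench SET val = 'z' WHERE val = 'y';

    -- Adding a secondary index makes that lookup fast, but every later
    -- INSERT must now maintain this index in addition to the primary key.
    CREATE INDEX idx_bench_val ON bench (val);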

Finally, multi-threaded/multi-processing environments can also turn this idea on its head, but I'm not going to get into that; that's a whole 'nother can of worms. You can do your own research on your type of database + storage engine, and gauge your app's use of a concurrent environment... or you can ignore all of that and just use the most intuitive operation.
