mysql - How to commit after multiple inserts
I have 2 cron processes running in parallel.
Process 1 does inserts and process 2 reads these inserts.
The problem I have is process 1 needs to insert multiple rows before process 2 can read them.
For example:
1. Process 1 needs to insert 10 rows.
2. Process 1 inserts 3 rows.
3. Process 2 reads those 3 rows.
4. Process 1 inserts rows 4..10.
5. Process 2 reads rows 4..10.
What I need is:
1. Process 1 inserts rows 1..10.
2. Process 2 reads rows 1..10.
A) Do I lock the table for the inserts in process 1?
B) Do I do a begin transaction, do inserts, and then commit?
If the table is locked, will the other sessions wait for the unlock, or will they get a lock error/warning?
Answers (2)
Don't lock the table. Use a transaction. Transactions are atomic.
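A minimal sketch of that approach, assuming an InnoDB table; the table and column names (`jobs`, `batch_id`, `payload`) are hypothetical:

```sql
-- Process 1: insert the whole batch inside one transaction.
-- Until COMMIT, none of these rows are visible to other sessions
-- (this requires a transactional engine such as InnoDB; MyISAM
-- does not support transactions).
START TRANSACTION;
INSERT INTO jobs (batch_id, payload) VALUES (1, 'row 1');
INSERT INTO jobs (batch_id, payload) VALUES (1, 'row 2');
-- ... rows 3 through 10 ...
COMMIT;

-- Process 2: under the default REPEATABLE READ isolation level,
-- this SELECT sees either none of the batch or all of it.
SELECT * FROM jobs WHERE batch_id = 1;
```

Process 2 never sees a half-inserted batch; at worst it sees an empty result if it runs before process 1 commits.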
If you need to block process 2 until process 1 is complete, you should, as you suggested, lock the tables in question.
If you insert all 10 rows in a single query, you can rely on MyISAM table locking; however, if you are inserting with separate queries or are using InnoDB, you will want to explicitly lock the table with [LOCK TABLES .. WRITE](http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html) in process 1 and release the lock when you're finished.
Process 2 will then gracefully wait for the write lock to be released before reading from the table.
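A rough sketch of that pattern, again using a hypothetical `jobs` table:

```sql
-- Process 1: take an exclusive write lock, insert the full batch, release.
LOCK TABLES jobs WRITE;
INSERT INTO jobs (batch_id, payload) VALUES (1, 'row 1');
INSERT INTO jobs (batch_id, payload) VALUES (1, 'row 2');
-- ... rows 3 through 10 ...
UNLOCK TABLES;

-- Process 2: this read blocks until process 1 issues UNLOCK TABLES;
-- it does not fail with a lock error.
SELECT * FROM jobs WHERE batch_id = 1;
```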
It's very important here to ensure that the server receives process 1's queries (and grants it the lock) before process 2 issues its read, so you may need some extra coordination there.
You could wrap the inserts inside of a transaction if you want to ensure that all of the inserts are atomically added, but you will still need to lock the table to solve the concurrency issue.
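If you do combine the two on InnoDB, note that LOCK TABLES implicitly commits an open transaction, so the MySQL manual's recommended pattern is to disable autocommit rather than use START TRANSACTION, and to release the lock only after the commit. A sketch, with the same hypothetical table name:

```sql
-- Process 1: transaction plus an explicit table lock on InnoDB.
SET autocommit = 0;      -- start an implicit transaction instead of START TRANSACTION
LOCK TABLES jobs WRITE;
INSERT INTO jobs (batch_id, payload) VALUES (1, 'row 1');
-- ... remaining rows ...
COMMIT;                  -- commit the batch first
UNLOCK TABLES;           -- then release the lock
```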