PHP/MySQL performance problem with a large number of INSERT statements - should I use a different approach?
I have a script that compiles a list of items that can be very long - just compiling this list can take 10-15 minutes, but that is acceptable. When I incorporate a function that iterates through the list and inserts them all into a MySQL table, that time increases by about 50%. I was wondering if there is a faster way of serializing this data. Should I explore CSV or something else? Or can I optimize my code to do this faster:
private function toDB(){
    $sql[] = "DROP TABLE IF EXISTS checklisttest";
    $sql[] = "CREATE TABLE checklisttest (
        Incident varchar(12) NOT NULL,
        TestID mediumint(9) NOT NULL AUTO_INCREMENT,
        Element varchar(12) NOT NULL,
        Name varchar(128) NOT NULL,
        Code varchar(512) NOT NULL,
        Expected varchar(512) NOT NULL,
        Actual varchar(512) NOT NULL,
        AutoVerifyResult varchar(32) NOT NULL,
        QAResult varchar(32) DEFAULT NULL,
        Comments text,
        PRIMARY KEY (TestID)
    )";
    // Iterate through the records: $this->records[10001].
    // There can be anywhere from 100-300 records.
    foreach($this->records as $inc => $record){
        // Iterate through the element IDs: $this->records[10001][E02_04].
        foreach($this->records[$inc]["Elements"] as $elementID => $element){
            // Iterate through the entries: $this->records[10001][E02_04][1].
            // There can be anywhere from 150-350 elements per record.
            foreach($element as $key => $val){
                $sql[] = "INSERT INTO checklisttest VALUES (\"$inc\",NULL,\"$elementID\",\"$val[name]\",\"$val[code]\",\"$val[expected]\",\"$val[actual]\",\"$val[match]\",\"$val[QAResult]\",NULL)";
            }
        }
    }
    foreach($sql as $key => $val){
        mysql_select_db("new",$GLOBALS['local']);
        mysql_query($val,$GLOBALS['local']) or die(mysql_error());
    }
    //echo "<textarea style='width:100%;height:400px'>$sql</textarea>";
    //mysql_select_db("new",$GLOBALS['local']);
    //mysql_query($sql,$GLOBALS['local']) or die(mysql_error());
}
There must be a better way to go about doing this, I just don't have much experience performing a lot of queries like this - usually they are just one-and-done for me. Thanks for the help.
Thanks for the answers - I posted my solution in a comment on the accepted answer.
There are a whole bunch of factors that impact performance, including server hardware, average load, MySQL settings, memory use, etc. I'm going to blindly assume that you have an I/O bottleneck and that MySQL is properly configured for the load you're putting on it.
Let's use a prepared statement and a transaction. I'll be using PDO in this example, but you could use mysqli if you want. Just stop using the old and busted mysql extension.
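The code block this answer walks through was lost in this copy; below is a sketch of what it likely looked like, reconstructed from the description and the question's table layout. The connection details (host, database name, credentials) are placeholders you would adjust for your environment.

```php
<?php
// Placeholder connection details -- adjust for your environment.
$db = new PDO('mysql:host=localhost;dbname=new;charset=utf8', 'user', 'password');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One transaction wrapping all of the inserts.
$db->beginTransaction();

// Prepare the statement once; the question marks are placeholders.
$stmt = $db->prepare('
    INSERT INTO checklisttest
        (Incident, TestID, Element, Name, Code, Expected, Actual,
         AutoVerifyResult, QAResult, Comments)
    VALUES (?, NULL, ?, ?, ?, ?, ?, ?, ?, NULL)
');

foreach ($this->records as $inc => $record) {
    foreach ($record['Elements'] as $elementID => $element) {
        foreach ($element as $val) {
            // Execute with an array of values, one per placeholder.
            $stmt->execute(array(
                $inc, $elementID, $val['name'], $val['code'],
                $val['expected'], $val['actual'], $val['match'],
                $val['QAResult'],
            ));
        }
    }
}

// Nothing is made permanent until this point.
$db->commit();
```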
So, what's happening here? First, we're starting a transaction. We're telling the database that we're about to do a bunch of work, and we either want it all done, or none of it.
Second, we're preparing a SQL statement. See those question marks? Those are called placeholders. We'll later tell the database to fill in specific data at each placeholder. Also note that there are no quotation marks. Those are basically added automatically when the placeholders get filled in.
Inside the loop, we're telling the statement to execute, and we're using PDO's execute method to pass in an array of values for the placeholders. Some people prefer doing this one variable at a time using bindParam, but I prefer the array method. Prepared statements repeated in a loop can be faster than unprepared statements, though the difference won't really be noticeable unless you're running tens of thousands of queries, which it sounds like you are.
Finally, once the loop is finished, we tell the database to commit the work we've just done. As I mentioned in the comments, that's where a big performance boost is possible. The database will only actually permanently write changes to disk when you perform the commit. This means that normal book-keeping tasks can wait until the commit happens, instead of needing to happen on every single insert. This way, the bulk of the I/O you need doesn't need to happen live, as you run the insert.
There's one more change that you'd need to make if you use this technique. For years, MySQL has been configured to not create transaction-safe tables by default. This means that we need to actually change your CREATE TABLE statement a bit. The only difference is there at the end, after the close-paren. We're asking MySQL to use the InnoDB storage engine instead of whatever the server default is. This guarantees that we'll get a table that supports transactions.
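The modified statement the answer refers to was also stripped from this copy; reconstructing it from the table definition in the question, it would look like this, with the ENGINE clause added after the closing parenthesis:

```sql
CREATE TABLE checklisttest (
    Incident varchar(12) NOT NULL,
    TestID mediumint(9) NOT NULL AUTO_INCREMENT,
    Element varchar(12) NOT NULL,
    Name varchar(128) NOT NULL,
    Code varchar(512) NOT NULL,
    Expected varchar(512) NOT NULL,
    Actual varchar(512) NOT NULL,
    AutoVerifyResult varchar(32) NOT NULL,
    QAResult varchar(32) DEFAULT NULL,
    Comments text,
    PRIMARY KEY (TestID)
) ENGINE=InnoDB;
```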
Now, I realize that asking you to change database adapters might be a bit silly, but it's not without reason. While you can perform transactions using the old-school mysql interface (by issuing START TRANSACTION and COMMIT/ROLLBACK commands yourself), you can't use prepared statements with it. While that alone isn't a deal-breaker, the prepare-bind-execute process is one that every modern PHP database adapter follows. The old mysql interface is not a modern PHP database adapter, and you really should give some serious consideration to switching to PDO or mysqli.

One more performance factor is actually how you gather the data you're about to write. While this answer focuses on making sure that the database itself is as small of a bottleneck as practical, it could be that your performance problem is earlier in the process. Can you tell us where this data comes from, and how you're building it? You should seriously consider profiling your code, which will reveal the real performance problem. It could be that the database bits are already lightning-quick and the problem is somewhere else entirely.
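A minimal way to start profiling is to time each phase with microtime(true) and see which one actually dominates. In this sketch, buildList() and toDB() are stand-in functions (not from the question) representing the list-compiling and inserting steps:

```php
<?php
// Stand-ins for the real list-building and insert steps, so the
// timing skeleton below is runnable on its own.
function buildList() { usleep(2000); return array('item'); }
function toDB($list) { usleep(1000); }

$start = microtime(true);
$list  = buildList();   // the 10-15 minute compile step in the question
$built = microtime(true);
toDB($list);            // the insert step in the question
$done  = microtime(true);

printf("building: %.3fs, inserting: %.3fs\n",
       $built - $start, $done - $built);
```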
Instead of firing multiple insert statements, you can combine them into one query as below -
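The example this answer refers to is missing from this copy; a sketch of the approach, using a small hardcoded sample shaped like the question's $this->records structure. In real code the interpolated values must be escaped (e.g. with mysql_real_escape_string), or better, bound through a prepared statement:

```php
<?php
// Sample data mirroring $this->records in the question.
$records = array(
    '10001' => array('Elements' => array(
        'E02_04' => array(
            array('name' => 'n1', 'code' => 'c1', 'expected' => 'e1',
                  'actual' => 'a1', 'match' => 'pass', 'QAResult' => 'ok'),
        ),
    )),
);

// Build one "(...)" group per row instead of one INSERT per row.
$values = array();
foreach ($records as $inc => $record) {
    foreach ($record['Elements'] as $elementID => $element) {
        foreach ($element as $val) {
            $values[] = sprintf(
                '("%s",NULL,"%s","%s","%s","%s","%s","%s","%s",NULL)',
                $inc, $elementID, $val['name'], $val['code'],
                $val['expected'], $val['actual'], $val['match'], $val['QAResult']
            );
        }
    }
}

// implode() joins the rows with commas, which avoids ever producing
// the trailing comma the answer warns about.
$sql = 'INSERT INTO checklisttest VALUES ' . implode(',', $values);
// $sql is now a single statement inserting every row at once;
// send it to the server with one query call.
```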
Note: remember to remove the last comma - a comma gets appended after the final value group as well, so strip that trailing comma or you will get a database error.
Now, because the insert queries are merged as above, you only have to query your database once, which improves your query performance a lot.
You can refer to the links below for detailed documentation of this method -
http://www.brainbell.com/tutorials/MySQL/Inserting_Multiple_Rows.htm
http://dev.mysql.com/doc/refman/5.5/en/insert.html