How does PHP PDO work internally?
I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally.
For my application, I want to do something like "SELECT * FROM myTable", which returns around 90,000 rows of data, and write the result into a CSV file.
My question is: if I use PDOStatement->fetch as I am using it here:
// First, prepare the statement (no placeholders are needed for this query)
$query = "SELECT * FROM tableName";
$stmt = $this->connection->prepare($query);

// Execute the statement
$stmt->execute();

// Open the target CSV file for writing (file name is an example)
$data = fopen('export.csv', 'w');

// Fetch one row at a time and export it
while ($row = $stmt->fetch(PDO::FETCH_ASSOC))
{
    fputcsv($data, $row);
}

fclose($data);
After every fetch from the database, will the result of that fetch be stored in memory? Meaning, when I do the second fetch, would memory hold the data of the first fetch as well as the data of the second fetch? And so, if I have 90,000 rows of data and call fetch every time, does memory keep taking in each new fetch result without discarding the previous ones, so that by the last fetch it would already hold 89,999 rows of data?
- Is this how PDOStatement::fetch works?
- Performance-wise, how does this stack up against PDOStatement::fetchAll?
Update: something about fetch and fetchAll from a memory-usage point of view
Just wanted to add something to this question, as I recently found out something regarding fetch and fetchAll; hopefully this will make the question worthwhile for people who visit it in the future to get some understanding of fetch and fetchAll.
fetch does not accumulate results in memory; it works on a row-by-row basis. It goes to the result set and returns row 1, then goes back to the result set and returns row 2. Note that it does not return rows 1 and 2 together, only row 2, so fetch does not keep previous rows in memory, whereas fetchAll stores the entire result set in memory. fetch is therefore the better option compared to fetchAll if we are dealing with a result set of around 100K rows.
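As a rough illustration of that difference, here is a minimal sketch (it assumes an existing PDO connection in $pdo and the myTable table from the question): the fetchAll variant materialises every row in one PHP array, while the fetch loop only hands one row at a time to your code.

// Minimal sketch: assumes an existing PDO connection in $pdo.

// Variant 1: fetchAll() copies the entire result set into one PHP array,
// so all ~90,000 rows are held in PHP memory at the same time.
$stmt = $pdo->query("SELECT * FROM myTable");
$allRows = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo "after fetchAll: " . memory_get_usage() . " bytes\n";

// Variant 2: fetch() in a loop only ever exposes one row to your code;
// $row is overwritten on each iteration.
$stmt = $pdo->query("SELECT * FROM myTable");
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // process the single row here, e.g. with fputcsv()
}
echo "after fetch loop: " . memory_get_usage() . " bytes\n";

Note that with some drivers (for example MySQL in its default buffered mode) the full result set may still be buffered on the client side even when you use fetch(); see the unbuffered-query remark in the first answer below.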
2 Answers
PHP generally keeps its results on the server. It all depends on the driver. MySQL can be used in an "unbuffered" mode, but it's a tad tricky to use. fetchAll() on a large result set can cause network flooding, memory exhaustion, etc. In every case where I need to process more than 1,000 rows, I'm not using PHP. Consider also whether your database engine already has a CSV export operation. Many do.
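For reference, switching the MySQL driver to the unbuffered mode mentioned above is a single attribute change; a minimal sketch (the DSN, credentials and output file name here are placeholders):

// Sketch only: DSN, credentials and the output file name are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'password');

// Disable client-side result buffering for this connection (MySQL driver only).
// Rows are then streamed as you fetch() them, but the statement must be fully
// consumed (or closeCursor() called) before another query can run.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$stmt = $pdo->query("SELECT * FROM myTable");
$out = fopen('export.csv', 'w');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    fputcsv($out, $row);
}
fclose($out);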
I advise you to use PDO::FETCH_LAZY instead of PDO::FETCH_ASSOC for big data. I used it to export to CSV row by row and it works fine, without any "out of memory" errors.
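A minimal sketch of that approach (assuming a PDO connection in $pdo and a myTable table with columns id and name; the column names are only examples): PDO::FETCH_LAZY returns a PDORow object whose column values are read on access, and the same object is reused for every row.

// Sketch only: column names id and name are examples; adjust to your schema.
$stmt = $pdo->prepare("SELECT id, name FROM myTable");
$stmt->execute();

$out = fopen('export.csv', 'w');

// FETCH_LAZY returns a PDORow object; values are read as properties.
while ($row = $stmt->fetch(PDO::FETCH_LAZY)) {
    fputcsv($out, [$row->id, $row->name]);
}

fclose($out);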