What's the best way in PHP to read and then overwrite a file's contents?

Posted on 2024-11-30 12:07:08

What's the cleanest way in php to open a file, read the contents, and subsequently overwrite the file's contents with some output based on the original contents? Specifically, I'm trying to open a file populated with a list of items (separated by newlines), process/add items to the list, remove the oldest N entries from the list, and finally write the list back into the file.

$fp = fopen($path, 'a+');
flock($fp, LOCK_EX);
$contents = fread($fp, filesize($path));
// process contents and remove old entries
fwrite($fp, $contents);
flock($fp, LOCK_UN);
fclose($fp);

Note that I need to lock the file with flock() in order to protect it across multiple page requests. Will the 'w+' flag when fopen()ing do the trick? The php manual states that it will truncate the file to zero length, so it seems that may prevent me from reading the file's current contents.
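One way around the truncation problem is to open with `'c+'`, which creates the file if needed but, unlike `'w+'`, does not truncate it, and to call `ftruncate()` only after the contents have been read. A minimal sketch under that approach (the path, the item processing, and the 100-entry cap are illustrative assumptions):

```php
<?php
// Sketch: read-modify-write under an exclusive lock.
// 'c+' creates the file if it doesn't exist but does not truncate it,
// so the existing contents can still be read first.
$path = '/tmp/items.txt';              // example path (assumption)
$fp = fopen($path, 'c+');
if ($fp === false) {
    throw new RuntimeException("Cannot open $path");
}
flock($fp, LOCK_EX);                   // blocks until the lock is granted
$contents = stream_get_contents($fp);  // pointer starts at the beginning with 'c+'
$items = array_filter(explode("\n", $contents), 'strlen');

$items[] = 'new item';                 // process/add items here (placeholder)
$items = array_slice($items, -100);    // drop the oldest entries; the cap of 100 is illustrative

ftruncate($fp, 0);                     // only now is it safe to discard the old data
rewind($fp);
fwrite($fp, implode("\n", $items) . "\n");
fflush($fp);                           // flush before releasing the lock
flock($fp, LOCK_UN);
fclose($fp);
```

Reading with `stream_get_contents()` also sidesteps `filesize()`, whose result may be served stale from PHP's stat cache.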


Comments (3)

梦晓ヶ微光ヅ倾城 2024-12-07 12:07:08

If the file isn't overly large (that is, you can be confident loading it won't blow PHP's memory limit), then the easiest way to go is to just read the entire file into a string (file_get_contents()), process the string, and write the result back to the file (file_put_contents()). This approach has two problems:

  • If the file is too large (say, tens or hundreds of megabytes), or the processing is memory-hungry, you're going to run out of memory (even more so when you have multiple instances of the thing running).
  • The operation is destructive; when the saving fails halfway through, you lose all your original data.

If either of these is a concern, plan B is to process the file while simultaneously writing to a temporary file; after successful completion, close both files, rename (or delete) the original file, and then rename the temporary file to the original filename.
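Plan B might look roughly like this, streaming line by line so memory use stays flat regardless of file size (`keep_line()` is a hypothetical stand-in for the real processing, and the paths are examples):

```php
<?php
// Sketch of the temp-file approach: stream the original line by line,
// write kept lines to a temp file, then rename over the original.
// keep_line() is a hypothetical filter standing in for the real processing.
function keep_line(string $line): bool {
    return trim($line) !== '';     // placeholder rule: drop blank lines
}

$path = '/tmp/items.txt';          // example path (assumption)
$tmp  = $path . '.tmp';

$in  = fopen($path, 'r');
$out = fopen($tmp, 'w');
while (($line = fgets($in)) !== false) {
    if (keep_line($line)) {
        fwrite($out, $line);       // original data stays untouched until the rename
    }
}
fclose($in);
fclose($out);
rename($tmp, $path);
```

On POSIX filesystems `rename()` is atomic when source and destination are on the same filesystem, so readers see either the old file or the new one, never a half-written state.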

べ繥欢鉨o。 2024-12-07 12:07:08

Read

$data = file_get_contents($filename);

Write

file_put_contents($filename, $data);

囚你心 2024-12-07 12:07:08

One solution is to use a separate lock file to control access.

This solution assumes that only your script, or scripts you have access to, will want to write to the file. This is because the scripts will need to know to check a separate file for access.

$file_lock = obtain_file_lock();
if ($file_lock) {
    $old_information = file_get_contents('/path/to/main/file');
    $new_information = update_information_somehow($old_information);
    file_put_contents('/path/to/main/file', $new_information);
    release_file_lock($file_lock);
}

function obtain_file_lock() {

    $attempts = 10;
    // There are probably better ways of dealing with waiting for a file
    // lock, but this shows the principle of dealing with the original
    // question.

    $lock_file = fopen('/path/to/lock/file', 'c'); // 'c' creates the lock file if missing
    if ($lock_file === false) {
        return false;
    }
    for ($ii = 0; $ii < $attempts; $ii++) {
        if (flock($lock_file, LOCK_EX | LOCK_NB)) { // non-blocking, so failed attempts retry
            return $lock_file;
        }
        // give the other process time to release the lock
        usleep(100000); // 0.1 seconds
    }
    // Only reached if all attempts fail: clean up and report failure,
    // which the `if ($file_lock)` check above will catch.
    fclose($lock_file);
    return false;
}

function release_file_lock($lock_file) {
    flock($lock_file, LOCK_UN);
    fclose($lock_file);
}

This should prevent a concurrently-running script from reading old information and updating it, which would cause you to lose information that another script wrote after you read the file. It allows only one instance of the script at a time to read the file and then overwrite it with updated information.

While this hopefully answers the original question, it doesn't give a good solution to making sure all concurrent scripts have the ability to record their information eventually.
