fopen file locking in PHP (reader/writer type of situation)
I have a scenario where one PHP process is writing a file about 3 times a second, and then several PHP processes are reading this file.
This file is essentially a cache. Our website polls very aggressively for data that changes constantly, and we don't want every visitor to hit the DB every time they poll, so we have a cron process that reads the DB 3 times per second, processes the data, and dumps it to a file that the polling clients can then read.
The problem I'm having is that, sometimes, opening the file to write to it takes a long time, sometimes even up to 2-3 seconds. I'm assuming this happens because it's being locked by reads (or by something), but I don't have any conclusive way of proving that; plus, from what I understand of the documentation, PHP shouldn't be locking anything.
This happens every 2-5 minutes, so it's pretty common.
In the code, I'm not doing any kind of locking, and I pretty much don't care if that file's information gets corrupted, if a read fails, or if data changes in the middle of a read.
I do care, however, if writing to it takes 2 seconds, essentially because the process that has to happen thrice a second has now skipped several beats.
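For illustration, the cron's inner loop might look something like this (a minimal sketch; the read_and_process_db() helper and the pacing logic are assumptions, not taken from the actual setup):

$start = microtime(true);
for ($i = 0; $i < 3; $i++) {
    // made-up helper standing in for "read the DB and process the data"
    $data = read_and_process_db();
    file_put_contents(DIR_PUBLIC . 'filename.txt', $data);
    // sleep until the next 1/3-second slot so the beat stays steady
    $remaining = $start + ($i + 1) / 3 - microtime(true);
    if ($remaining > 0) {
        usleep((int) ($remaining * 1e6));
    }
}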
I'm writing the file with this code:
$handle = fopen(DIR_PUBLIC . 'filename.txt', "w"); // "w" truncates the file the moment it opens
fwrite($handle, $data);                             // write the fresh snapshot
fclose($handle);                                    // flush and release the handle
And I'm reading it directly with:
file_get_contents('filename.txt')
(it's not served directly to the clients as a static file; a regular PHP request reads the file and does some basic processing on it)
The file is about 11 KB, so reading or writing it doesn't take much time: well under 1 ms.
This is a typical log entry when the problem happens:
Open File: 2657.27 ms
Write: 0.05984 ms
Close: 0.03886 ms
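Timings like these can be captured by wrapping each call with microtime(true); a hypothetical sketch, since the original logging code isn't shown:

$t0 = microtime(true);
$handle = fopen(DIR_PUBLIC . 'filename.txt', 'w');
$t1 = microtime(true);
fwrite($handle, $data);
$t2 = microtime(true);
fclose($handle);
$t3 = microtime(true);
// each duration in milliseconds, matching the log format above
printf("Open File: %.2f ms\nWrite: %.5f ms\nClose: %.5f ms\n",
    ($t1 - $t0) * 1000, ($t2 - $t1) * 1000, ($t3 - $t2) * 1000);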
Not sure if it's relevant, but the reads happen in regular web requests, through Apache, while the write is a regular command-line PHP execution started by Linux's cron; it doesn't go through Apache.
Any ideas of what could be causing this big delay in opening the file?
Any pointers on where I could look to help me pinpoint the actual cause?
Alternatively, can you think of something I could do to avoid this? For example, I'd love to be able to set a 50 ms timeout on fopen, so that if it couldn't open the file, it would just skip ahead and let the next run of the cron take care of it.
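fopen itself has no timeout parameter, but two common patterns come close to that behavior. A minimal sketch, purely an assumption about what might help here (the .tmp suffix is made up):

// Pattern 1: skip the beat instead of waiting. If the stall is caused
// by an advisory lock, a non-blocking flock() fails immediately
// instead of blocking. (Note: "w" still truncates the file on open.)
$handle = fopen(DIR_PUBLIC . 'filename.txt', 'w');
if ($handle !== false) {
    if (flock($handle, LOCK_EX | LOCK_NB)) {
        fwrite($handle, $data);
        flock($handle, LOCK_UN);
    }
    fclose($handle);
}

// Pattern 2: write to a temporary file, then rename() it over the
// cache file. On POSIX filesystems rename() is atomic, so readers see
// either the old or the new contents, never a half-written file.
$tmp = DIR_PUBLIC . 'filename.txt.tmp';
file_put_contents($tmp, $data);
rename($tmp, DIR_PUBLIC . 'filename.txt');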
Again, my priority is to keep the cron beating thrice a second, all else is secondary, so any ideas, suggestions, anything is extremely welcome.
Thank you!
Daniel
2 Answers
I can think of 3 possible problems:
Solutions I can think of:
You should use a really fast solution if you want to guarantee consistently low open times. Maybe your OS is doing disk syncs, database file commits, or other things you cannot work around.
I suggest using memcached, Redis, or even MongoDB for such tasks. You could even write your own caching daemon, even in PHP (though this is totally unnecessary and can be tricky).
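For instance, with the PHP Memcached extension the writer and readers might look like this (a sketch under assumptions: the key name poll_cache, the server address, and the 10-second expiry are all made up):

// Writer (the cron process, three times per second):
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->set('poll_cache', $data, 10); // 10 s expiry as a safety net if the cron dies

// Reader (each polling web request):
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$cached = $m->get('poll_cache');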
If you are absolutely, positively sure that this task can only be solved with this file cache, and you are on Linux, try a different disk I/O scheduler, such as deadline, or keep cfq and decrease the PHP process's nice value to -3/-4.
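The nice value can be set from within the cron script itself; a minimal sketch (note that proc_nice() applies a relative delta, and negative nice values require root):

// From the default niceness of 0, this yields -4 (higher scheduling
// priority); it raises a warning and returns false without root.
if (function_exists('proc_nice')) {
    proc_nice(-4);
}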