fopen file locking in PHP (reader/writer type of situation)



I have a scenario where one PHP process is writing a file about 3 times a second, and then several PHP processes are reading this file.

This file is essentially a cache. Our website polls very insistently for data that changes constantly, and we don't want every visitor to hit the DB every time they poll, so we have a cron process that reads the DB 3 times per second, processes the data, and dumps it to a file that the polling clients can then read.

The problem I'm having is that, sometimes, opening the file to write to it takes a long time, occasionally even up to 2-3 seconds. I'm assuming this happens because it's being locked by reads (or by something), but I don't have any conclusive way of proving it, and, from what I understand of the documentation, PHP shouldn't be locking anything.
This happens every 2-5 minutes, so it's pretty common.

In the code, I'm not doing any kind of locking, and I pretty much don't care if the file's information gets corrupted, if a read fails, or if the data changes in the middle of a read.
I do care, however, if writing to it takes 2 seconds, essentially because the process that has to happen thrice a second now skips several beats.

I'm writing the file with this code:

// Truncate and rewrite the cache file on every cron tick.
$handle = fopen(DIR_PUBLIC . 'filename.txt', 'w');
fwrite($handle, $data);
fclose($handle);

And I'm reading it directly with:

file_get_contents('filename.txt')

(it's not served directly to the clients as a static file; a regular PHP request reads the file and does some basic stuff with it)

The file is about 11 KB, so reading or writing it doesn't take a lot of time. Well under 1 ms.

This is a typical log entry when the problem happens:

  Open File:    2657.27 ms
  Write:    0.05984 ms
  Close:    0.03886 ms
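
Timings like these can be collected by wrapping each call with microtime(true); a minimal sketch, assuming the same DIR_PUBLIC constant and $data as in the snippet above:

// Measure each step of the write and log the deltas in milliseconds.
$t0 = microtime(true);
$handle = fopen(DIR_PUBLIC . 'filename.txt', 'w');
$t1 = microtime(true);
fwrite($handle, $data);
$t2 = microtime(true);
fclose($handle);
$t3 = microtime(true);
printf("Open File: %.5f ms\nWrite: %.5f ms\nClose: %.5f ms\n",
    ($t1 - $t0) * 1000, ($t2 - $t1) * 1000, ($t3 - $t2) * 1000);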

Not sure if it's relevant, but the reads happen in regular web requests through Apache, while the write is a regular "command line" PHP execution run by Linux's cron; it doesn't go through Apache.

Any ideas of what could be causing this big delay in opening the file?
Any pointers on where I could look to help me pinpoint the actual cause?

Alternatively, can you think of something I could do to avoid this? For example, I'd love to be able to set a 50ms timeout to fopen, and if it didn't open the file, it just skips ahead, and lets the next run of the cron take care of it.
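
PHP has no timeout option for fopen() on local files, but one pattern that gets close to that skip-ahead behaviour (a sketch, not from the original post; it reuses the DIR_PUBLIC constant and $data from above) is to write to a temporary file and rename() it into place, which is atomic on the same filesystem:

// Write the new payload to a private temp file, then atomically swap it in.
// rename() on the same filesystem replaces the target in one step, so readers
// never block the writer and never see a half-written file.
$target = DIR_PUBLIC . 'filename.txt';
$tmp = $target . '.' . getmypid() . '.tmp';
if (file_put_contents($tmp, $data) !== false) {
    rename($tmp, $target);
} else {
    @unlink($tmp); // skip this beat; the next cron run will retry
}

With this pattern the hot file is never held open for writing, so the readers can keep using file_get_contents() unchanged.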

Again, my priority is to keep the cron beating thrice a second, all else is secondary, so any ideas, suggestions, anything is extremely welcome.

Thank you!
Daniel


灯角 2024-10-01 22:08:39


I can think of 3 possible problems:

  • the file gets locked when reading from/writing to it by lower-level PHP system calls without you knowing it. This should block the file for 1/3 s at most; you are seeing periods longer than that.
  • the fs cache starts an fsync() and the whole system blocks reads/writes to the disk until that is done. As a fix you may try installing more RAM, upgrading the kernel, or using a faster hard disk.
  • your "caching" solution is not distributed, and it hits the worst-performing piece of hardware in the whole system many times a second... meaning that you cannot scale it further by simply adding more machines, only by increasing the hdd speed. You should take a look at memcache or APC, or maybe even shared memory http://www.php.net/manual/en/function.shm-put-var.php (see the sketch after this list).
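
A minimal sketch of the shared-memory route (assuming the sysvshm extension is enabled; the IPC key derivation, segment size, and variable key are illustrative):

// Both writer and reader must derive the same IPC key, e.g. via ftok() on a
// path that exists for both processes.
$key = ftok(DIR_PUBLIC . 'filename.txt', 'c');
$shm = shm_attach($key, 65536);   // 64 KB segment (arbitrary size)

// Writer (the cron process):
shm_put_var($shm, 1, $data);      // store under integer variable key 1

// Reader (a web request):
$cached = shm_has_var($shm, 1) ? shm_get_var($shm, 1) : false;

shm_detach($shm);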

Solutions I can think of:

  • put that file on a ramdisk http://www.cyberciti.biz/faq/howto-create-linux-ram-disk-filesystem/ . This should be the simplest way to avoid hitting the disk so often, without other major changes.
  • use memcache (see the sketch after this list). It's very fast when used locally, it scales well, and it's used by the "big" players. http://www.php.net/manual/en/book.memcache.php
  • use shared memory. It was designed for what you are trying to do here... having a "shared memory zone"...
  • change the cron scheduler to update less often, maybe implementing some kind of event system, so it only updates the cache when necessary rather than on a time basis
  • make the "writing" script write to 3 different files, and make the "readers" read from one of them at random. This may allow the locking to be "distributed" across more files and may reduce the chances that a given file is locked when writing to it... but I doubt it would bring any noticeable benefit.
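
A minimal sketch of the memcache option (assuming the pecl memcache extension from the link above and a memcached server on localhost:11211; the key name and TTL are arbitrary):

$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);

// Writer (cron, thrice a second): a short TTL lets stale data expire on its own.
$mc->set('poll_cache', $data, 0, 2); // flags = 0, expire = 2 s

// Readers (web requests):
$cached = $mc->get('poll_cache');
if ($cached === false) {
    // cache miss: fall back to the file, or serve the last known payload
}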
心的位置 2024-10-01 22:08:39


If you want to guarantee consistently low open times, you should use something really fast. Maybe your OS is doing disk syncs, database file commits, or other things that you cannot work around.

I suggest using memcached, redis, or even mongoDB for such tasks. You could even write your own caching daemon, even in PHP (though this is totally unnecessary and can be tricky).
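
For illustration, a minimal sketch of the redis variant (assuming the phpredis extension and a local Redis server; the key name and TTL are arbitrary):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Writer (cron): store with a 2 s TTL so stale data expires on its own.
$redis->setex('poll_cache', 2, $data);

// Reader (web request): returns false when the key is missing or expired.
$cached = $redis->get('poll_cache');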

If you are absolutely, positively sure that you can only solve this task with this file cache, and you are under Linux, try using a different disk I/O scheduler, like deadline, OR (cfq AND decreasing the PHP process priority to -3 / -4).
