Can I make fcntl and Perl alarms work together?

Posted 2024-09-24 20:04:39


I'm on Linux over NFS, with multiple machines involved.

I'm trying to use fcntl to implement file locking. I was using flock until I discovered it only works between processes on the same machine.

Now, when I call fcntl with F_SETLKW, Perl alarms (which I use to add a timeout) no longer work as before. That alone would be OK, but Ctrl-C doesn't really work either.

What I believe is happening is that fcntl only checks for signals every 30 seconds or so. The alarm does come back eventually, and the Ctrl-C is caught... eventually.

Is there anything I can do to adjust the frequency with which fcntl checks for these signals?


Comments (1)

半透明的墙 2024-10-01 20:04:40


I'm definitely no expert on the matter, but my understanding is that fcntl, as you also stated, won't work in your case: fcntl advisory locks only make sense within the same machine.

So forgive me if this is off-topic. I used File::NFSLock to solve a cache storm/dogpile/stampeding-herd problem. There were multiple application servers reading and writing cache files on an NFS volume (not a very good idea, but that was what we had to start with).

I subclassed/wrapped File::NFSLock to modify its behavior. In particular I needed:

  • Persistent locks that don't go away when a File::NFSLock object goes out of scope. With regular File::NFSLock, your lock vanishes when the object goes out of scope, which was not what I needed.
  • Lock files that also contain the name of the machine that acquired the lock. A process id alone is clearly not enough to decide whether the owning process has terminated, and hence whether I can safely steal the lockfile. So I modified the code to write lockfiles as machine:pid instead of just pid.

This has worked wonderfully for a couple of years.

Until the volume of requests increased tenfold. That is, last month I started to experience the first problems, where a really busy cache file was written to by two backends at the same time, leaving stale locks behind. This happened when we reached around 9-10M overall pageviews per day, just to give you an idea.

The final broken cache file looked like:

<!-- START OF CACHE FILE BY BACKEND b1 -->
... cache file contents ...
<!--   END OF CACHE FILE BY BACKEND b1 -->
... more cache file contents ... wtf ...
<!--   END OF CACHE FILE BY BACKEND b2 -->

This can only happen if two backends write to the same file at the same time... It's not yet clear whether this problem is caused by File::NFSLock plus our mods, or by some bug in the application.
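One mitigation worth noting (my addition, not part of the original answer): whatever takes the lock, interleaved output like the above becomes impossible if each backend writes to its own temporary file and then renames it into place, since rename is atomic within a single filesystem. A sketch in Python:

```python
import os
import tempfile

def write_cache_atomically(path, data):
    """Write `data` (bytes) to a unique temp file in the destination
    directory, then atomically rename it over `path`. Readers see the
    complete old file or the complete new one, never a mix of two
    writers' output."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".cache-tmp-")
    try:
        os.write(fd, data)
        os.fsync(fd)            # flush to disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp, path)        # atomic on POSIX filesystems
```

If two backends race, the loser's rename simply replaces the winner's whole file; the cache may be written twice, but it is never corrupted. (Over NFS, rename atomicity generally holds for files within one directory, but cross-client visibility still has the usual caching caveats.)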

In conclusion: if your app is not terribly busy and heavily trafficked, then go for File::NFSLock; I think it's your best bet. Are you sure you still want to use NFS?
