Can I make fcntl and Perl alarms cooperate?
I'm on linux, nfs, with multiple machines involved.
I'm trying to use fcntl to implement filelocking. I was using flock until I discovered it only works between processes on the same machine.
Now when I call fcntl with F_SETLKW, perl alarms (for adding a timeout) don't work as before. This would normally be ok, but ctrl-c doesn't really work either.
What I believe is happening, is that fcntl is only checking for signals every 30 seconds or so. The alarm does come back eventually. The ctrl-c is caught,... eventually.
Is there anything I can do to adjust the frequency with which fcntl checks for these signals?
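For illustration, here is a hedged sketch of the timeout pattern the question is after, in Python rather than Perl (both go through the same fcntl(2) locking; the file name and helper names are made up for the demo). A SIGALRM handler raises an exception so a blocking exclusive lock gives up after a deadline; Perl's deferred ("safe") signals can delay the handler, which may be part of what the poster is seeing:

```python
import fcntl
import os
import signal
import tempfile
import time

# Hypothetical sketch, not the poster's code: give a blocking
# exclusive lock a timeout by letting SIGALRM raise an exception.
class LockTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise LockTimeout("gave up waiting for the lock")

def lock_with_timeout(fd, seconds):
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # blocking; uses fcntl locking
    finally:
        signal.alarm(0)                 # cancel any pending alarm

path = os.path.join(tempfile.gettempdir(), "lockdemo.%d" % os.getpid())

# A child process takes the lock first and holds it longer than our timeout.
pid = os.fork()
if pid == 0:
    with open(path, "w") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)
        time.sleep(5)
    os._exit(0)

time.sleep(1)  # give the child time to acquire the lock
with open(path, "a") as f:
    try:
        lock_with_timeout(f.fileno(), 2)
        result = "locked"
    except LockTimeout:
        result = "timed out"
print(result)
os.waitpid(pid, 0)
os.unlink(path)
```

On a local filesystem the alarm interrupts the wait promptly; over NFS the blocking wait is serviced by the lock daemon, which is one plausible source of the multi-second delays described above.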
I'm definitely no expert on the matter, but my understanding is that fcntl, as you also stated, won't work in your case: fcntl advisory locks only make sense within the same machine.

So forgive me if this is off-topic. I used File::NFSLock to solve a cache storm/dogpile/stampede problem. There were multiple application servers reading and writing cache files on an NFS volume (not a very good idea, but that was what we had to start with).
I subclassed/wrapped File::NFSLock to modify its behavior. In particular I needed:
machine:pid instead of just pid.

This has worked wonderfully for a couple of years.
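For illustration, the machine:pid idea can be sketched in Python (hypothetical helper names; File::NFSLock's real implementation is more involved), using the classic atomic link(2) trick that works over NFS:

```python
import errno
import os
import socket
import tempfile

def lock_token():
    # machine:pid, so the lock owner is unambiguous across NFS clients
    return "%s:%d" % (socket.gethostname(), os.getpid())

def try_lock(lockfile):
    # Write a uniquely named temp file, then hard-link it to the shared
    # lock name; link(2) is atomic even over NFS, so only one client wins.
    tmp = "%s.%s" % (lockfile, lock_token())
    with open(tmp, "w") as f:
        f.write(lock_token() + "\n")
    try:
        os.link(tmp, lockfile)
        return True          # we own the lock
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False     # someone else holds it
        raise
    finally:
        os.unlink(tmp)

def unlock(lockfile):
    os.unlink(lockfile)

# Demo: first acquisition succeeds, a contender fails, and after
# unlocking the lock can be taken again.
d = tempfile.mkdtemp()
lockfile = os.path.join(d, "cache.lock")
got = try_lock(lockfile)
contender = try_lock(lockfile)
unlock(lockfile)
retaken = try_lock(lockfile)
```

Embedding the hostname in the token matters because a bare pid is only unique per machine, so two NFS clients could otherwise mistake each other's lock for their own.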
Until the volume of requests increased tenfold. That is, last month I started to experience the first problems, where a really busy cache file was being written to by two backends at the same time, leaving dead locks behind. Just to give you an idea, this happened when we reached around 9-10M overall pageviews per day.
The final broken cache file looked like:
This can only happen if two backends write to the same file at the same time... It's not yet clear if this problem is caused by File::NFSLock + our mods or some bug in the application.
In conclusion, if your app is not terribly busy or heavily trafficked, then go for File::NFSLock; I think it's your best bet. Are you sure you still want to use NFS?