File lock not working as expected

I have a Thread-extending class that is supposed to run only one instance at a time (cross-process). In order to achieve that, I'm trying to use a file lock. Here are bits of my code:

import fcntl
import logging
import os
from threading import Thread

import config  # project settings module providing BASEDIR


class Scanner(Thread):

  def __init__(self, path):
    Thread.__init__(self)
    self.path = path
    # Take an exclusive, non-blocking lock on a shared lock file so that
    # only one Scanner (across processes) should get past this point
    self.lock_file = open(os.path.join(config.BASEDIR, "scanner.lock"), 'r+')
    fcntl.lockf(self.lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)

  # Stuff omitted

  def run(self):
    logging.info("Starting scan on %s" % self.path)

    # More stuff omitted

    # Release the lock once the scan is done
    fcntl.lockf(self.lock_file, fcntl.LOCK_UN)

I was expecting the lockf call to throw an exception if a Scanner thread was already running, so that the object would not be initialized at all. However, I can see this in the terminal:

INFO:root:Starting scan on /home/felix/Music
INFO:root:Starting scan on /home/felix/Music
INFO:root:Scan finished
INFO:root:Scan finished

Which suggests that two Scanner threads are running at the same time, no exception thrown. I'm sure I'm missing something really basic here, but I can't seem to figure out what that is. Can anyone help?

陪你搞怪i 2024-10-31 17:02:49

Found the solution myself in the end. It was to use fcntl.flock() instead of fcntl.lockf(), with the exact same parameters. Not sure why that made a difference.
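
For reference, a minimal sketch of what the flock-based constructor might look like, reusing the Scanner class and config.BASEDIR from the question (the comments reflect my own understanding, not something stated in this answer). The likely reason the switch matters: lockf() places POSIX record locks, which are owned by the process as a whole, so a second lockf() call from the same process simply succeeds; flock() locks on Linux are tied to the open file description, so two separate open() calls on the lock file conflict even inside one process.

import fcntl
import logging
import os
from threading import Thread

import config  # the asker's own settings module, assumed to define BASEDIR


class Scanner(Thread):

  def __init__(self, path):
    Thread.__init__(self)
    self.path = path
    # Same parameters as before, but flock() instead of lockf(): a second
    # Scanner (thread or process) opening the same lock file gets
    # EWOULDBLOCK here, so the constructor raises.
    # As in the question, 'r+' assumes scanner.lock already exists.
    self.lock_file = open(os.path.join(config.BASEDIR, "scanner.lock"), 'r+')
    fcntl.flock(self.lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)

  def run(self):
    logging.info("Starting scan on %s" % self.path)
    # ... actual scanning omitted ...
    fcntl.flock(self.lock_file, fcntl.LOCK_UN)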

陈甜 2024-10-31 17:02:49

You're opening the lock file using r+ which is erasing the previous file and creating a new one. Each thread is locking a different file.

Use w or a+ instead.

〗斷ホ乔殘χμё〖 2024-10-31 17:02:49

Along with using flock, I also had to open the file like so:

fd = os.open(lockfile, os.O_CREAT | os.O_TRUNC | os.O_WRONLY)

It does not work otherwise.
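
Putting the two answers together, a rough sketch of the os.open() + flock() combination (the lock path below is a placeholder, not the question's config.BASEDIR):

import fcntl
import os

LOCKFILE = "/tmp/scanner.lock"  # placeholder path for illustration

# Create the lock file if it does not exist yet, truncate any stale
# contents, and open it write-only, as described above.
fd = os.open(LOCKFILE, os.O_CREAT | os.O_TRUNC | os.O_WRONLY)

try:
    # Exclusive, non-blocking lock: raises an error (EWOULDBLOCK) if
    # another process or thread already holds it.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except (IOError, OSError):
    os.close(fd)
    print("Another scanner is already running")
else:
    try:
        pass  # ... do the actual work here ...
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)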
