Is a read lock on a ReentrantReadWriteLock sufficient for concurrent reads of a RandomAccessFile?

Published 2024-08-07 23:30:36


I'm writing something to handle concurrent read/write requests to a database file.

ReentrantReadWriteLock looks like a good match. If all threads access a shared RandomAccessFile object, do I need to worry about the file pointer with concurrent readers? Consider this example:

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Database {

    private static final int RECORD_SIZE = 50;
    private static Database instance = null;

    private ReentrantReadWriteLock lock;
    private RandomAccessFile database;

    private Database() {
        lock = new ReentrantReadWriteLock();

        try {
            database = new RandomAccessFile("foo.db", "rwd");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    };

    public static synchronized Database getInstance() {
        if(instance == null) {
            instance = new Database();
        }
        return instance;
    }

    public byte[] getRecord(int n) {
        byte[] data = new byte[RECORD_SIZE];
        try {
            // Begin critical section
            lock.readLock().lock();
            database.seek(RECORD_SIZE*n);
            database.readFully(data);
            lock.readLock().unlock();
            // End critical section
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }

}

In the getRecord() method, is the following interleaving possible with multiple concurrent readers?

Thread 1 -> getRecord(0)
Thread 2 -> getRecord(1)
Thread 1 -> acquires shared lock
Thread 2 -> acquires shared lock
Thread 1 -> seeks to record 0
Thread 2 -> seeks to record 1
Thread 1 -> reads record at file pointer (1)
Thread 2 -> reads record at file pointer (1)

If there are indeed potential concurrency issues using ReentrantReadWriteLock and RandomAccessFile, what would an alternative be?
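One alternative worth noting: `FileChannel.read(ByteBuffer, long)` reads at an absolute position without using or moving the channel's shared file pointer, so concurrent readers never race on a seek. A minimal sketch under the question's assumptions (the `foo.db` file name and 50-byte record size are taken from the example above):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PositionalReadDemo {
    private static final int RECORD_SIZE = 50;

    // Reads record n at an absolute offset. The channel's own position is
    // untouched, so multiple reader threads can call this concurrently
    // without any seek race and without a lock.
    static byte[] getRecord(FileChannel channel, int n) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(RECORD_SIZE);
        long pos = (long) RECORD_SIZE * n;
        while (buf.hasRemaining()) {
            int read = channel.read(buf, pos + buf.position());
            if (read < 0) throw new IOException("unexpected end of file");
        }
        return buf.array();
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("foo.db", "rw");
             FileChannel channel = raf.getChannel()) {
            byte[] record = new byte[RECORD_SIZE];
            record[0] = 42;
            raf.write(record);                  // write record 0
            byte[] back = getRecord(channel, 0);
            System.out.println(back[0]);        // prints 42
        }
    }
}
```

Writers that also use positional `write(ByteBuffer, long)` still need the write lock for atomicity across records, but readers no longer contend on the pointer.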


乖乖公主 2024-08-14 23:30:36


This is a sample program that locks and unlocks a file.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

try {
    File file = new File("filename");

    // Get a file channel for the file
    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();

    // Use the file channel to create a lock on the file.
    // This method blocks until it can retrieve the lock.
    FileLock lock = channel.lock();
    lock.release();

    // Alternatively, try acquiring the lock without blocking. This method
    // returns null or throws an exception if the file is already locked.
    try {
        lock = channel.tryLock();
        if (lock != null) {
            lock.release();
        }
    } catch (OverlappingFileLockException e) {
        // The lock is already held in this JVM.
    }

    // Close the file
    channel.close();
} catch (Exception e) {
    e.printStackTrace();
}

对你不离不弃 2024-08-14 23:30:36


Yes, this code isn't synchronized properly, just as you outline. A read-write lock isn't useful if the write lock is never acquired; it's as if there is no lock.

Use a traditional synchronized block to make the seek and read appear atomic to other threads, or create a pool of RandomAccessFile instances that are borrowed for the exclusive use of a single thread and then returned. (Or simply dedicate a channel to each thread, if you don't have too many threads.)
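The first suggestion above can be sketched as follows, reusing the class and field names from the question. Synchronizing on the shared RandomAccessFile makes the seek/read pair atomic with respect to every other thread that synchronizes on the same object:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class SyncDatabase {
    private static final int RECORD_SIZE = 50;
    private final RandomAccessFile database;

    SyncDatabase(String path) throws IOException {
        database = new RandomAccessFile(path, "rwd");
    }

    // The monitor on 'database' keeps seek + readFully atomic, so another
    // thread cannot move the file pointer between the two calls.
    public byte[] getRecord(int n) throws IOException {
        byte[] data = new byte[RECORD_SIZE];
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.readFully(data);
        }
        return data;
    }

    public void putRecord(int n, byte[] data) throws IOException {
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.write(data, 0, RECORD_SIZE);
        }
    }

    public static void main(String[] args) throws IOException {
        SyncDatabase db = new SyncDatabase("foo.db");
        byte[] rec = new byte[RECORD_SIZE];
        rec[0] = 7;
        db.putRecord(1, rec);
        System.out.println(db.getRecord(1)[0]);  // prints 7
    }
}
```

The trade-off versus the pool approach is that all readers are serialized; that is fine for a small number of threads but gives up the read concurrency the read-write lock was meant to provide.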

戏蝶舞 2024-08-14 23:30:36


You may want to consider using File System locks instead of managing your own locking.

Call getChannel().lock() on your RandomAccessFile to lock the file via the FileChannel class. This prevents write access, even from processes outside your control.

笑脸一如从前 2024-08-14 23:30:36


Rather than going through the accessor methods each time, operate on the lock objects themselves; a ReentrantReadWriteLock supports up to a maximum of 65535 recursive write locks and 65535 read locks.

Assign a read and a write lock:

private final Lock r = rwl.readLock();
private final Lock w = rwl.writeLock();

Then work on them...

Also: you are not catering for an exception after locking, which would leave the lock held forever. Call lock() as you enter the method (like a mutex), then do your work in a try block and unlock in the finally section, e.g.:

public String[] allKeys() {
  r.lock();
  try { return m.keySet().toArray(new String[0]); }
  finally { r.unlock(); }
}
2024-08-14 23:30:36


OK, 8.5 years is a long time, but I hope it's not necro...

My problem was that we needed to access streams to read and write as atomically as possible. An important part was that our code was supposed to run on multiple machines accessing the same file. However, all the examples on the Internet stopped at explaining how to lock a RandomAccessFile and didn't go any deeper. So my starting point was Sam's answer.

Now, from a distance it makes sense to have a certain order:

  • lock the file
  • open the streams
  • do whatever with the streams
  • close the streams
  • release the lock

However, to allow releasing the lock in Java the streams must not be closed! Because of that the entire mechanism becomes a little weird (and wrong?).

In order to make auto-closing work one must remember that JVM closes the entities in the reverse order of the try-segment. This means that a flow looks like this:

  • open the streams
  • lock the file
  • do whatever with the streams
  • release the lock
  • close the streams

Tests showed that this doesn't work. Therefore, auto-close half way and do the rest in good ol' Java 1 fashion:

try (RandomAccessFile raf = new RandomAccessFile(filename, "rwd");
    FileChannel channel = raf.getChannel()) {
  FileLock lock = channel.lock();
  FileInputStream in = new FileInputStream(raf.getFD());
  FileOutputStream out = new FileOutputStream(raf.getFD());

  // do all reading
  ...

  // that moved the pointer in the channel to somewhere in the file,
  // therefore reposition it to the beginning:
  channel.position(0);
  // as the new content might be shorter it's a requirement to do this, too:
  channel.truncate(0);

  // do all writing
  ...

  out.flush();
  lock.release();
  in.close();
  out.close();
}

Note that the methods using this must still be synchronized. Otherwise the parallel executions may throw an OverlappingFileLockException when calling lock().
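That note can be sketched as follows: serialize the `lock()` calls inside the JVM with a shared monitor, so only cross-process contention ever reaches the OS-level lock (the file name and helper are placeholders, not part of the original answer):

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class JvmSerializedFileLock {
    // All threads in this JVM funnel through this monitor before touching
    // the OS-level file lock, so channel.lock() never sees an overlapping
    // lock held by another thread of the same JVM.
    private static final Object JVM_LOCK = new Object();

    static void withFileLock(String filename, Runnable action) throws Exception {
        synchronized (JVM_LOCK) {
            try (RandomAccessFile raf = new RandomAccessFile(filename, "rwd");
                 FileChannel channel = raf.getChannel()) {
                FileLock lock = channel.lock();
                try {
                    action.run();
                } finally {
                    lock.release();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        withFileLock("foo.db", () -> System.out.println("locked section"));
    }
}
```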

Please share experiences in case you have any...
