Is a read lock from ReentrantReadWriteLock sufficient for concurrent reads of a RandomAccessFile?
I'm writing something to handle concurrent read/write requests to a database file.
ReentrantReadWriteLock looks like a good match. If all threads access a shared RandomAccessFile object, do I need to worry about the file pointer with concurrent readers? Consider this example:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Database {

    private static final int RECORD_SIZE = 50;
    private static Database instance = null;

    private ReentrantReadWriteLock lock;
    private RandomAccessFile database;

    private Database() {
        lock = new ReentrantReadWriteLock();
        try {
            database = new RandomAccessFile("foo.db", "rwd");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    };

    public static synchronized Database getInstance() {
        if (instance == null) {
            instance = new Database();
        }
        return instance;
    }

    public byte[] getRecord(int n) {
        byte[] data = new byte[RECORD_SIZE];
        try {
            // Begin critical section
            lock.readLock().lock();
            database.seek(RECORD_SIZE * n);
            database.readFully(data);
            lock.readLock().unlock();
            // End critical section
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }
}
In the getRecord() method, is the following interleaving possible with multiple concurrent readers?
Thread 1 -> getRecord(0)
Thread 2 -> getRecord(1)
Thread 1 -> acquires shared lock
Thread 2 -> acquires shared lock
Thread 1 -> seeks to record 0
Thread 2 -> seeks to record 1
Thread 1 -> reads record at file pointer (1)
Thread 2 -> reads record at file pointer (1)
If there are indeed potential concurrency issues using ReentrantReadWriteLock and RandomAccessFile, what would an alternative be?
Answers (5)
This is a sample program that locks and unlocks a file.
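A minimal sketch of what such a lock/unlock helper might look like, using java.nio.channels.FileLock (the class and method names here are illustrative, not the original program):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class FileLockDemo {
    private final RandomAccessFile file;
    private FileLock lock;

    public FileLockDemo(String path) throws IOException {
        // Open read/write so the channel supports an exclusive lock
        file = new RandomAccessFile(path, "rw");
    }

    // Acquire an exclusive lock on the whole file (blocks until available)
    public void lockFile() throws IOException {
        FileChannel channel = file.getChannel();
        lock = channel.lock();
    }

    // Release the lock without closing the underlying file
    public void unlockFile() throws IOException {
        if (lock != null) {
            lock.release();
            lock = null;
        }
    }
}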
Yes, this code isn't synchronized properly, just as you outline. A read-write lock isn't useful if the write lock is never acquired; it's as if there is no lock.
Use a traditional synchronized block to make the seek and read appear atomic to other threads, or create a pool of RandomAccessFile instances that are borrowed for the exclusive use of a single thread and then returned. (Or simply dedicate a channel to each thread, if you don't have too many threads.)
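A minimal sketch of the synchronized-block variant, applied to the getRecord() method from the question (same fields as in the question's Database class):

public byte[] getRecord(int n) {
    byte[] data = new byte[RECORD_SIZE];
    try {
        // Make the seek and the read appear atomic to other threads
        // by holding the monitor of the shared RandomAccessFile.
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.readFully(data);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return data;
}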
You may want to consider using File System locks instead of managing your own locking.
Call getChannel().lock() on your RandomAccessFile to lock the file via the FileChannel class. This prevents write access, even from processes outside your control.
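For example, applied to the database field from the question (a sketch; assumes java.nio.channels.FileLock is imported):

FileLock fileLock = database.getChannel().lock();  // exclusive lock on the whole file
try {
    database.seek((long) RECORD_SIZE * n);
    database.readFully(data);
} finally {
    fileLock.release();  // release the OS-level lock but keep the file open
}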
Rather than going through the accessor methods each time, operate on the individual lock objects; a ReentrantReadWriteLock can support up to a maximum of 65535 recursive write locks and 65535 read locks.
Assign a read and write lock
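For example, a sketch of such fields (assumes java.util.concurrent.locks.Lock is imported; the field names are illustrative):

private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private final Lock r = lock.readLock();   // shared lock for readers
private final Lock w = lock.writeLock();  // exclusive lock for writers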
Then work on them...
Also: you are not catering for an exception and a failure to unlock after locking. Acquire the lock as you enter the method (like a mutex), then do your work in a try/catch block with the unlock in the finally section, e.g.:
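A sketch of that pattern applied to the getRecord() method from the question, using the r field from the sketch above:

public byte[] getRecord(int n) {
    byte[] data = new byte[RECORD_SIZE];
    r.lock();                          // acquire the shared lock on entry
    try {
        database.seek((long) RECORD_SIZE * n);
        database.readFully(data);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        r.unlock();                    // always runs, even if an exception was thrown
    }
    return data;
}

As the earlier answer points out, the read lock by itself still does not make the seek and read atomic between readers; this sketch only addresses the unlock-on-exception problem.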
OK, 8.5 years is a long time, but I hope it's not necro...
My problem was that we needed to access streams to read and write as atomically as possible. An important part was that our code was supposed to run on multiple machines accessing the same file. However, all the examples on the Internet stopped at explaining how to lock a RandomAccessFile and didn't go any deeper. So my starting point was Sam's answer. Now, from a distance it makes sense to have a certain order:
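Presumably something along these lines (a sketch of the intended ordering, not the original listing):

// 1. lock the file
// 2. open the streams / channel
// 3. read and/or write
// 4. close the streams / channel
// 5. unlock the file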
However, to allow releasing the lock in Java the streams must not be closed! Because of that the entire mechanism becomes a little weird (and wrong?).
In order to make auto-closing work, one must remember that the JVM closes the entities in the reverse order of the try segment. This means that the flow looks like this:
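Presumably the straightforward try-with-resources form, roughly (a sketch; the file name is illustrative):

try (RandomAccessFile file = new RandomAccessFile("foo.db", "rwd");
     FileChannel channel = file.getChannel();
     FileLock lock = channel.lock()) {
    // read and/or write the file
}
// resources are closed in reverse order: the lock is released first,
// then the channel and the file are closed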
Tests showed that this doesn't work. Therefore, auto-close half way and do the rest in good ol' Java 1 fashion:
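A sketch of that idea: the file and channel are auto-closed, while the lock is acquired and released manually inside, so it is released while the channel is still open (a sketch; the file name and record handling are illustrative, and the java.io / java.nio.channels imports from the earlier sketches are assumed):

public synchronized void writeRecord(long position, byte[] record) throws IOException {
    try (RandomAccessFile file = new RandomAccessFile("foo.db", "rwd");
         FileChannel channel = file.getChannel()) {
        FileLock lock = channel.lock();   // inter-process lock on the whole file
        try {
            file.seek(position);
            file.write(record);
        } finally {
            lock.release();               // good ol' manual release, channel still open
        }
    }
}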
Note that the methods using this must still be synchronized. Otherwise the parallel executions may throw an OverlappingFileLockException when calling lock(). Please share experiences in case you have any...