ReaderWriterLockSlim blocks reads until all queued writes complete

I'm attempting to use the ReaderWriterLockSlim class to manage a list.

There are many reads to this list and few writes; my reads are fast, whereas my writes are slow.

I have a simple test harness written to check how the lock works.

If the following situation occurs

Thread 1 - Start Write
Thread 2 - Start Read
Thread 3 - Start Write

Then the outcome is as follows

Thread 1 starts its write and locks the list.
Thread 2 adds itself to the read queue.
Thread 3 adds itself to the write queue.
Thread 1 finishes writing and releases the lock
Thread 3 acquires the lock and starts its write
Thread 3 finishes writing and releases the lock
Thread 2 performs its read

Is there any way of changing the behaviour of the lock so that any read requests that were queued before a write lock are allowed to complete before the write lock is granted?

EDIT: The code that demonstrates the issue I have is below

using System;
using System.Collections.Generic;
using System.Threading;

public partial class SimpleLock : System.Web.UI.Page
{
    public static ReaderWriterLockSlim threadLock = new ReaderWriterLockSlim();

    protected void Page_Load(object sender, EventArgs e)
    {
        List<String> outputList = new List<String>();

        Thread thread1 = new Thread(
            delegate(object output)
            {
                ((List<String>)output).Add("Write 1 Enter");
                threadLock.EnterWriteLock();
                ((List<String>)output).Add("Write 1 Begin");
                Thread.Sleep(100);
                ((List<String>)output).Add("Write 1 End");
                threadLock.ExitWriteLock();
                ((List<String>)output).Add("Write 1 Exit");
            }
        );
        thread1.Start(outputList);

        // Brief pause so thread 2 queues its read while thread 1 still holds the write lock.
        Thread.Sleep(10);

        Thread thread2 = new Thread(
            delegate(object output)
            {
                ((List<String>)output).Add("Read 2 Enter");
                threadLock.EnterReadLock();
                ((List<String>)output).Add("Read 2 Begin");
                Thread.Sleep(100);
                ((List<String>)output).Add("Read 2 End");
                threadLock.ExitReadLock();
                ((List<String>)output).Add("Read 2 Exit");
            }
        );
        thread2.Start(outputList);

        // Brief pause so thread 3 queues its write behind thread 2's pending read.
        Thread.Sleep(10);

        Thread thread3 = new Thread(
            delegate(object output)
            {
                ((List<String>)output).Add("Write 3 Enter");
                threadLock.EnterWriteLock();
                ((List<String>)output).Add("Write 3 Begin");
                Thread.Sleep(100);
                ((List<String>)output).Add("Write 3 End");
                threadLock.ExitWriteLock();
                ((List<String>)output).Add("Write 3 Exit");
            }
        );
        thread3.Start(outputList);

        thread1.Join();
        thread2.Join();
        thread3.Join();

        Response.Write(String.Join("<br />", outputList.ToArray()));
    }
}
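
For reference, with the 10 ms staggering above, the log produced by this harness comes out roughly as follows; this is simply the ordering described earlier, and the exact interleaving around the lock hand-offs may vary between runs:

Write 1 Enter
Write 1 Begin
Read 2 Enter
Write 3 Enter
Write 1 End
Write 1 Exit
Write 3 Begin
Write 3 End
Write 3 Exit
Read 2 Begin
Read 2 End
Read 2 Exit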

1 Answer

生生不灭 2024-12-24 15:35:57

Is there any way of changing the behaviour of the lock so that any read requests that were queued before a write lock are allowed to complete before the write lock is granted?

What about avoiding the use of locks almost entirely? During a write you can acquire a lock, copy the original data structure, modify the copy, and then publish the new data structure by swapping out the old reference with the new reference. Since you never modify the data structure after it has been "published", you do not need to lock reads at all.

Here is how it works:

using System.Collections.Generic;

public class Example
{
  private object writelock = new object();
  private volatile List<string> data = new List<string>();

  public void Write(string item)
  {
    lock (writelock)
    {
      var copy = new List<string>(data); // Create the copy.
      copy.Add(item); // Modify the data structure.
      data = copy; // Publish the modified data structure.
    }
  }

  public string Read(int index)
  {
    return data[index];
  }
}

The trick we are exploiting here is the immutability of whatever is referenced by the data variable. The only thing we need to do is mark the variable as volatile.

Note that this trick only works if writes are sufficiently infrequent and the data structure is small enough to keep the copy operation cheap. It is not a be-all-and-end-all solution; it is not ideal for every scenario, but it may just work for you.
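
To make the read path concrete, here is a minimal, hypothetical harness (the CopyOnWriteDemo class and its timings are mine, not part of the answer) that assumes the Example class above is in the same project. One thread publishes new snapshots through Write while another thread calls Read concurrently without ever taking a lock:

using System;
using System.Threading;

// Hypothetical usage sketch of the Example class from the answer above.
public static class CopyOnWriteDemo
{
    public static void Main()
    {
        var example = new Example();
        example.Write("initial"); // Ensure index 0 exists before any reader runs.

        // Writer thread: each Write copies the current list, appends one item,
        // and publishes the new list by swapping the reference.
        var writer = new Thread(() =>
        {
            for (int i = 0; i < 10; i++)
            {
                example.Write("item " + i);
                Thread.Sleep(50);
            }
        });

        // Reader thread: never blocks on the writer; every Read sees whichever
        // fully built snapshot was most recently published.
        var reader = new Thread(() =>
        {
            for (int i = 0; i < 20; i++)
            {
                Console.WriteLine(example.Read(0)); // Always prints "initial".
                Thread.Sleep(25);
            }
        });

        writer.Start();
        reader.Start();
        writer.Join();
        reader.Join();
    }
}

Note that the lock inside Write only serializes writers against each other; readers never touch it, so a queued writer can never delay a read the way it does with ReaderWriterLockSlim.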
