MSVC istream implementation locks the buffer

I'm working with some existing code which is deserializing objects stored in text files (I potentially need to read tens of millions of these). The contents of the file are first read into a wstring, and then a wistringstream is constructed from that. Running the Very Sleepy profiler on the program shows that it is spending about 20% of its time in the following call stacks:

Mtxlock or RtlEnterCriticalSection
std::_Mutex::_Lock
std::flush
std::basic_istream<wchar_t, std::char_traits<wchar_t> >::get
<rest of my program>

and similar ones with std::_Mutex::_Unlock. I'm using Visual C++ 2008.

Looking in istream, I see that it constructs a sentry object which calls _Lock and _Unlock methods on the underlying basic_streambuf. These in turn just call _Lock and _Unlock on a _Mutex associated with that buffer, which are then defined as follows:

#if _MULTI_THREAD
    // actually defines non-empty _Lock() and _Unlock() methods
#else /* _MULTI_THREAD */
    void _Lock()
    {   // do nothing
    }

    void _Unlock()
    {   // do nothing
    }
#endif /* _MULTI_THREAD */

It looks like _MULTI_THREAD is set in yvals.h as

#define _MULTI_THREAD   1   /* nontrivial locks if multithreaded */

Now, I know there will never be another thread trying to access this buffer, but it looks to me like there's no way around this locking while using the standard iostreams, which seems both odd and frustrating. Am I missing something? Is there a workaround for this?
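
To make the hot path concrete, here is a minimal sketch of the pattern described above (illustrative only, not the original code; the data is a placeholder): the file contents sit in a wstring, a wistringstream wraps them, and characters are pulled out one get() at a time. With the multithreaded CRT in VC++ 2008, every one of those get() calls constructs a sentry, which locks and unlocks the _Mutex of the underlying buffer.

    #include <sstream>
    #include <string>

    int main()
    {
        typedef std::char_traits<wchar_t> Traits;

        std::wstring contents = L"...file contents...";  // placeholder data
        std::wistringstream in(contents);

        // Each get() builds a sentry, which takes and releases the buffer's
        // _Mutex; that is where the roughly 20% in the profile is spent.
        for (Traits::int_type c = in.get(); c != Traits::eof(); c = in.get())
        {
            // ... deserialization would consume c here ...
        }
        return 0;
    }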

Comments (3)

最终幸福 2024-11-10 21:51:13

Check the value for Runtime Library in Project properties, C/C++, Code Generation. If it's multi-threaded, change it to a non-multithreaded version.

In any version after Visual C++ 7.1 (!), you are out of luck as it's been removed, and you are stuck with the multithreaded CRT.
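
As a quick sanity check (my assumption, not part of the answer: MSVC defines _MT whenever one of the multithreaded CRT options /MT, /MTd, /MD or /MDd is in effect), the compiler can be made to report which CRT flavour a translation unit is built against:

    // With VC++ 2008 this always takes the first branch, since the
    // single-threaded CRT option (/ML) was removed after VC++ 7.1.
    #ifdef _MT
    #pragma message("multithreaded CRT: _Lock()/_Unlock() are non-trivial")
    #else
    #pragma message("single-threaded CRT: _Lock()/_Unlock() are no-ops")
    #endif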

夜还是长夜 2024-11-10 21:51:13

The std::flush seems senseless in your case. I can't see how you'd flush an istream, so I suspect it's a result of a tie. You may want to un-tie, i.e. call tie(NULL) on your wistringstream. That should also reduce the number of locks taken.
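
A minimal sketch of that suggestion (the helper name is mine): note that a freshly constructed wistringstream has a null tie() to begin with, so this only helps if other code has tied an output stream to it.

    #include <cstddef>
    #include <sstream>

    void untie(std::wistringstream& in)
    {
        // tie(NULL) detaches any tied output stream so the input sentry no
        // longer flushes it before each read; the previous tie is returned.
        std::wostream* old_tie = in.tie(NULL);
        (void)old_tie;
    }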

帅气称霸 2024-11-10 21:51:13

It turned out that accessing the underlying buffer directly, by replacing things like

c = _text_in->get();

with things like this

c = _text_in->rdbuf()->sbumpc();

fixed the problem and provided a big boost to performance.
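
For reference, a self-contained sketch of that workaround (the helper name and the wistringstream parameter are mine; _text_in in the snippets above is the asker's own variable). Reading through rdbuf()->sbumpc() bypasses the istream sentry, and with it the per-character lock, but it also bypasses the stream's eofbit/failbit bookkeeping, so end of input has to be detected by comparing against the traits' eof() value:

    #include <sstream>

    void read_all(std::wistringstream& in)
    {
        typedef std::char_traits<wchar_t> Traits;
        std::wstreambuf* buf = in.rdbuf();  // underlying buffer, no sentry, no lock

        for (Traits::int_type c = buf->sbumpc(); c != Traits::eof(); c = buf->sbumpc())
        {
            wchar_t ch = Traits::to_char_type(c);
            // ... deserialization would consume ch here ...
            (void)ch;
        }
    }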
