Is there a way to reduce the number of ostringstream malloc/free calls?

Posted 2024-08-23 08:08:19


I am writing an embedded app. In some places, I use std::ostringstream a lot, since it is very convenient for my purposes. However, I just discovered that the performance hit is extreme since adding data to the stream results in a lot of calls to malloc and free. Is there any way to avoid it?

My first thought was making the ostringstream static and resetting it using ostringstream::str(""). However, this can't be done, as I need the functions to be reentrant.


Comments (4)

对岸观火 2024-08-30 08:08:19


Well, Booger's solution would be to switch to sprintf(). It's unsafe, and error-prone, but it is often faster.

Not always though. We can't use it (or ostringstream) on my real-time job after initialization because both perform memory allocations and deallocations.

Our way around the problem is to jump through a lot of hoops to make sure that we perform all string conversions at startup (when we don't have to be real-time yet). I do think there was one situation where we wrote our own converter into a fixed-sized stack-allocated array. We have some constraints on size we can count on for the specific conversions in question.

For a more general solution, you may consider writing your own version of ostringstream that uses a fixed-sized buffer (with error-checking on the bounds being stayed within, of course). It would be a bit of work, but if you have a lot of those stream operations it might be worth it.
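The fixed-size, stack-allocated conversion mentioned above can be sketched with snprintf, the bounds-checked sibling of the sprintf() this answer starts with. The function name and signature below are illustrative, not from the answer:

```cpp
#include <cstdio>
#include <cstring>

// Format a labeled integer into a caller-supplied stack buffer: no heap
// traffic at all. Returns the number of characters written, or -1 if the
// output would not fit (snprintf reports the required length in that case).
int format_reading(char* buf, size_t len, const char* label, int value) {
    int n = std::snprintf(buf, len, "%s=%d", label, value);
    if (n < 0 || static_cast<size_t>(n) >= len)
        return -1;  // output truncated: caller must handle the error
    return n;
}
```

Called as `format_reading(buf, sizeof buf, "temp", 42)`, this fills `buf` with `"temp=42"` and never allocates, which is what makes it usable after a real-time system has finished initializing.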

旧梦荧光笔 2024-08-30 08:08:19


If you know how big the data is before creating the stream, you could use std::ostrstream, whose constructor can take a buffer as a parameter. The stream then performs no memory management on the data.

再见回来 2024-08-30 08:08:19


Probably the approved way of dealing with this would be to create your own basic_stringbuf object to use with your ostringstream. For that, you have a couple of choices. One would be to use a fixed-size buffer, and have overflow simply fail when/if you try to create output that's too long. Another possibility would be to use a vector as the buffer. Unlike std::string, vector guarantees that appending data will have amortized constant complexity. It also never releases data from the buffer unless you force it to, so it'll normally grow to the maximum size you're dealing with. From that point, it shouldn't allocate or free memory unless you create a string that's beyond the length it currently has available.
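The vector-backed variant could look roughly like this; the class and method names are illustrative, and a production version would also set up a put area or override xsputn rather than routing every character through overflow:

```cpp
#include <ostream>
#include <streambuf>
#include <string>
#include <vector>

// A streambuf backed by a std::vector<char> that grows but is never shrunk,
// so a long-lived instance stops allocating once it has reached the largest
// message size seen so far.
class VectorBuf : public std::streambuf {
public:
    void clear() { buf_.clear(); }  // reuse the buffer without freeing its capacity
    std::string str() const { return std::string(buf_.begin(), buf_.end()); }
protected:
    int_type overflow(int_type ch) override {
        if (ch != traits_type::eof())
            buf_.push_back(static_cast<char>(ch));  // amortized O(1) append
        return ch;
    }
private:
    std::vector<char> buf_;
};
```

Attaching it to a plain std::ostream (`std::ostream os(&vb);`) gives the usual formatted-output operators while the vector quietly retains its capacity across clear() calls.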

平生欢 2024-08-30 08:08:19


std::ostringstream is a convenience interface. It links a std::string to a std::ostream by providing a custom std::streambuf. You can implement your own std::streambuf. That allows you to do the entire memory management. You still get the nice formatting of std::ostream, but you have full control over the memory management. Of course, the consequence is that you get your formatted output in a char[] - but that's probably no big problem if you're an embedded developer.
