libumem says there are no memory leaks, but prstat on Solaris shows a leak?
I have an application that I have been trying to get "memory leak free". It has been through solid testing on Linux using Totalview's MemoryScape, and no leaks were found. I have ported the application to Solaris (SPARC), and there is a leak I am trying to find...
I have used libumem on Solaris, and it seems to me like it also picks up NO leaks...
Here is my startup command:
LD_PRELOAD=libumem.so UMEM_DEBUG=audit ./link_outbound config.ini
Then I immediately checked prstat on Solaris to see what the startup memory usage was:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9471 root 44M 25M sleep 59 0 0:00:00 1.1% link_outbou/3
Then I started to send thousands of messages to the application... and over time the prstat numbers grew:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9471 root 48M 29M sleep 59 0 0:00:36 3.5% link_outbou/3
And just before I eventually stopped it:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9471 root 48M 48M sleep 59 0 0:01:05 5.3% link_outbou/3
Now the interesting part is when I use libumem on this application while it is showing 48 MB of memory, as follows:
pgrep link
9471
# gcore 9471
gcore: core.9471 dumped
# mdb core.9471
Loading modules: [ libumem.so.1 libc.so.1 ld.so.1 ]
> ::findleaks
BYTES LEAKED VMEM_SEG CALLER
131072 7 ffffffff79f00000 MMAP
57344 1 ffffffff7d672000 MMAP
24576 1 ffffffff7acf0000 MMAP
458752 1 ffffffff7ac80000 MMAP
24576 1 ffffffff7a320000 MMAP
131072 1 ffffffff7a300000 MMAP
24576 1 ffffffff79f20000 MMAP
------------------------------------------------------------------------
Total 7 oversized leaks, 851968 bytes
CACHE LEAKED BUFCTL CALLER
----------------------------------------------------------------------
Total 0 buffers, 0 bytes
>
The "7 oversized leaks, 851968 bytes" never changes if I send 10 messages through the application or 10000 messages...it is always "7 oversized leaks, 851968 bytes". Does that mean that the application is not leaking according to "libumem"?
What is so frustrating is that on Linux the memory stays constant, never changes....yet on Solaris I see this slow, but steady growth.
Any idea what this means? Am I using libumem correctly? What could be causing the PRSTAT to be showing memory growth here?
Any help on this would be greatly appreciated....thanks a million.
Answers (2)
The preferred options are UMEM_DEBUG=default, UMEM_LOGGING=transaction, LD_PRELOAD=libumem.so.1. Those are the options I use for debugging Solaris memory leak problems, and they work fine for me.
In my experience with RedHat REL version 5 and Solaris SunOS 5.9/5.10, a Linux process's memory footprint doesn't increase gradually; instead, it seems to grab a large chunk of memory when it needs extra and use it for the long run (purely based on observation; I haven't done any research into its memory allocation mechanism). So you should send a lot more data (10K messages is not big). There is also the dtrace tool for checking memory problems on Solaris. - Jack
If the SIZE column doesn't grow, you're not leaking. RSS (resident set size) is how much of that memory you are actively using, and it's normal for that value to change over time. If you were leaking, SIZE would grow over time (and RSS could stay constant, or even shrink).