How can I profile memory of a multi-threaded program in Python?
Is there a way to profile the memory of a multi-threaded program in Python?

For CPU profiling, I am using cProfile to create separate profiler stats for each thread and later combine them. However, I couldn't find a way to do this with memory profilers. I am using heapy.

Is there a way to combine stats in heapy like with cProfile? Or which other memory profilers would you suggest that are more suitable for this task?
A related question was asked about profiling CPU usage of a multi-threaded program: How can I profile a multithread program in Python?

There is also another question regarding memory profilers: Python memory profiler
4 Answers
If you are happy to profile objects rather than raw memory, you can use the gc.get_objects() function, so you don't need a custom metaclass. In more recent Python versions, sys.getsizeof() will also let you take a shot at figuring out how much underlying memory is in use by those objects.
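A rough sketch of that idea (objects_by_type is a made-up helper name; gc.get_objects() only sees GC-tracked objects, and sys.getsizeof() reports shallow sizes without following references):

```python
import gc
import sys
from collections import Counter

def objects_by_type(limit=10):
    """Tally live, GC-tracked objects per type name.

    Approximate: only objects the garbage collector tracks are seen,
    and sizes are shallow (referenced objects are not included).
    """
    counts, sizes = Counter(), Counter()
    for obj in gc.get_objects():
        name = type(obj).__name__
        counts[name] += 1
        sizes[name] += sys.getsizeof(obj)
    return counts.most_common(limit), sizes

top, sizes = objects_by_type()
for name, n in top:
    print(f"{name:20s} {n:8d} objects  ~{sizes[name]} bytes")
```

Calling this periodically from the main thread gives a snapshot across all threads, since gc.get_objects() is process-wide.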
There are ways to get valgrind to profile memory of python programs: http://www.python.org/dev/faq/#can-i-run-valgrind-against-python
Ok. What I was looking for does not seem to exist, so I found a solution, a workaround for this problem.

Instead of profiling memory, I'll profile objects. This way, I'll be able to see how many objects exist at a specific time in the program. To achieve my goal, I made use of metaclasses with minimal modification to the existing code.

The following metaclass adds a very simple subroutine to the __init__ and __del__ functions of the class. The subroutine for __init__ increases the number of objects with that class name by one, and the one for __del__ decreases it by one. The incAndCall and decAndCall functions use a global variable of the module they live in.

The dummyFunction is just a very simple workaround. I am sure there are much better ways to do it.

Finally, whenever you want to see the number of objects that exist, you just need to look at the counter dictionary. An example:
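The original code blocks did not survive here; a minimal reconstruction of the described approach might look like the following (the counter dict, incAndCall, decAndCall, and dummyFunction names come from the text above; InstanceCounterMeta is my own, and the prompt __del__ calls assume CPython's reference counting):

```python
# counter maps class name -> number of live instances (a module-global,
# as the answer describes).
counter = {}

def dummyFunction(*args, **kwargs):
    """Stand-in used when a class defines no __init__ or __del__."""
    pass

def incAndCall(name, func, *args, **kwargs):
    counter[name] = counter.get(name, 0) + 1
    func(*args, **kwargs)

def decAndCall(name, func, *args, **kwargs):
    counter[name] -= 1
    func(*args, **kwargs)

class InstanceCounterMeta(type):
    """Wraps __init__/__del__ so each class keeps a live-instance count."""
    def __new__(mcs, name, bases, namespace):
        init = namespace.get("__init__", dummyFunction)
        delete = namespace.get("__del__", dummyFunction)
        namespace["__init__"] = (
            lambda self, *a, **kw: incAndCall(name, init, self, *a, **kw)
        )
        namespace["__del__"] = (
            lambda self, *a, **kw: decAndCall(name, delete, self, *a, **kw)
        )
        return super().__new__(mcs, name, bases, namespace)

class A(metaclass=InstanceCounterMeta):
    pass

a, b = A(), A()
print(counter)   # {'A': 2}
del a            # __del__ fires immediately under CPython refcounting
print(counter)   # {'A': 1}
```

Note that counter updates are not atomic, so under heavy multi-threaded churn the counts are best treated as approximate unless you guard them with a lock.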
I hope this helps you. It was sufficient for my case.
I've used Yappi, which I've had success with for a few special multi-threaded cases. It's got great documentation so you shouldn't have too much trouble setting it up.
For memory specific profiling, check out Heapy. Be warned, it may create some of the largest log files you've ever seen!