Python - Using cPickle to load a previously saved pickle uses too much memory?
My pickle file is about 340 MB on disk, but once loaded it takes up 29% of 6 GB of memory (roughly 1.7 GB). That seems like a lot. The pickle file is a dictionary of dictionaries. Is this expected?
Code used:
import cPickle as pickle

# `file` shadows a built-in name in Python 2; use a clearer name for the result
data = pickle.load(file_handle)
Thanks
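For context, each Python dict carries a fixed per-object overhead beyond its contents, which the pickle stream does not store; a quick illustrative check (numbers vary by interpreter version):

```python
import sys

# A dict object costs memory even before counting its keys and values,
# which is one reason a dict-of-dicts can dwarf its pickle file once
# loaded (the pickle byte stream stores no hash tables).
empty = {}
small = {"a": 1, "b": 2, "c": 3}

print(sys.getsizeof(empty))  # bytes for the bare dict object
print(sys.getsizeof(small))  # still excludes the keys' and values' own sizes

# Rough illustration: a million inner dicts cost at least a million times
# this fixed overhead, before counting their contents.
inner_overhead = sys.getsizeof(small)
print(inner_overhead * 1_000_000 / 2**20, "MiB minimum for 1M inner dicts")
```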
I always had memory problems with big pickles and sub-dicts. So I ended up writing my objects to files via pprint, and later importing those files via a custom module loader to get the data back into the process scope. Works fine and doesn't waste memory.
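A minimal sketch of the approach described above, assuming the object is built from plain literals (dicts, lists, strings, numbers); the function names and the `snapshot.py` filename are illustrative, not from the original answer:

```python
import importlib.util
import pprint

# Illustrative payload: a dict of dicts, like the question's data.
data = {"users": {"alice": {"id": 1}, "bob": {"id": 2}}}

def dump_as_module(obj, path):
    # Write the object as a Python literal assigned to `data`,
    # so the file can be imported like any module.
    with open(path, "w") as fh:
        fh.write("data = " + pprint.pformat(obj) + "\n")

def load_module_data(path):
    # Execute the file as a throwaway module and hand back its `data`.
    spec = importlib.util.spec_from_file_location("snapshot", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.data

dump_as_module(data, "snapshot.py")
restored = load_module_data("snapshot.py")
print(restored == data)  # True
```

This only round-trips objects whose repr is a valid literal, but unlike unpickling it builds the data through the normal parser with no pickle memo table held alongside it.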
About 1.7GB seems a bit much, but not impossible. How much memory did the data take before it was pickled?
After unpickling, the data should take about the same amount of memory as it did before it was pickled; how big it is in its on-disk format is not really that significant.
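The gap between on-disk and in-memory size can be sketched directly; this assumes a nested-dict payload like the question's (the `data` shape here is made up for illustration), and uses `pickle.dumps` plus a shallow `sys.getsizeof` sum, which still *under*estimates the live footprint:

```python
import pickle  # cPickle on Python 2; in Python 3 this is the C implementation
import sys

# Illustrative dict-of-dicts payload.
data = {i: {"x": i, "y": i * 2} for i in range(10_000)}

# On-disk size: the pickle byte stream stores no hash tables or
# per-object headers, so it is compact.
blob = pickle.dumps(data, protocol=2)
on_disk = len(blob)

# Shallow in-memory estimate: the outer dict plus each inner dict,
# ignoring the keys' and values' own sizes (so a lower bound).
in_memory = sys.getsizeof(data) + sum(sys.getsizeof(v) for v in data.values())

print("pickle bytes:", on_disk)
print("shallow in-memory bytes (lower bound):", in_memory)
```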