Preventing a MemoryError with itertools.permutations
First, I would like to mention that I have 3 GB of RAM.
I am working on an algorithm that is exponential in time in the number of nodes, so in the code I have

perm = list(itertools.permutations(graph.Nodes))  # graph.Nodes is a tuple of the integers 1, 2, ..., n

which generates all the permutations of the vertices in a list, so that I can then work on one permutation at a time.
However, when I run the program for 40 vertices, it raises a MemoryError.
Is there any simpler way to implement this so that I can generate all the permutations of the vertices without hitting this error?
2 Answers
Try using the iterator returned by permutations instead of building a list from it:
By doing this, Python keeps only the currently used permutation in memory rather than all of them at once (in terms of memory usage, it is much better ;))
On the other hand, once the memory problem is solved, the time to process all the permutations will still grow factorially with the number of vertices...
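A minimal sketch of the lazy approach, assuming graph.Nodes is the tuple from the question and process is a hypothetical stand-in for the per-permutation work:

import itertools

# Iterating over the generator yields one permutation at a time;
# each tuple can be garbage-collected once the loop body finishes,
# so memory use stays constant no matter how many permutations exist.
for perm in itertools.permutations(graph.Nodes):
    process(perm)  # hypothetical: whatever the algorithm does with one permutation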
It won't work. Looping over an iterator won't work either. You see, if the code in the for loop takes 1 microsecond to run, it will take 2.587×10^34 years to run to completion. (See http://www.wolframalpha.com/input/?i=40%21+microseconds+in+years)
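The arithmetic is easy to check for yourself; a quick sanity computation (assuming a year of 365.25 days):

import math

iterations = math.factorial(40)                      # 40! permutations to visit
microseconds_per_year = 1e6 * 60 * 60 * 24 * 365.25  # microseconds in one year
print(iterations / microseconds_per_year)            # ≈ 2.6e34 years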