Timeout for a process inside a multiprocessing Pool
Whenever I use the following code, the pool result always times out. Is there something logically incorrect in what I'm doing?
from multiprocessing import Pool, Process, cpu_count

def add(num):
    return num + 1

def add_wrap(num):
    new_num = ppool.apply_async(add, [num])
    print new_num.get(timeout=3)

ppool = Pool(processes=cpu_count())
test = Process(target=add_wrap, args=(5,)).start()
I'm aware of this bug, and would have thought it would have been fixed in Python 2.6.4?
1 Answer
You can't pass Pool objects between processes. If you run this code, Python will raise an exception: 'NotImplementedError: pool objects cannot be passed between processes or pickled'. Here, add_wrap runs in a child process but references the ppool object created in the parent, which the child cannot use. So if you want your code to work, create the Pool object inside the add_wrap function instead.
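A minimal sketch of that fix, written in Python 3 syntax (the original question uses Python 2's print statement), with the Pool created inside the child process rather than inherited from the parent:

```python
from multiprocessing import Pool, Process, cpu_count

def add(num):
    return num + 1

def add_wrap(num):
    # Create the Pool inside this (child) process: Pool objects
    # cannot be pickled, so they can't be passed in from the parent.
    with Pool(processes=cpu_count()) as pool:
        result = pool.apply_async(add, [num])
        print(result.get(timeout=3))  # no longer times out

if __name__ == '__main__':
    p = Process(target=add_wrap, args=(5,))
    p.start()
    p.join()
```

With the Pool owned by the process that uses it, apply_async dispatches add to a worker and get(timeout=3) returns 6 immediately instead of raising a timeout.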