Python multiprocessing pinning
I am currently using Python multiprocessing to do some simple parallel programming. I use an async decorator:
def async(decorated):
    module = getmodule(decorated)
    decorated.__name__ += '_original'
    setattr(module, decorated.__name__, decorated)
    def send(*args, **opts):
        return async.pool.apply_async(decorated, args, opts)
    return send
and then
@async
def evalfunc(uid, start, end):
    veckernel(Posx, Posy, Posz, Quant, Delta)
    return (uid, GridVal)
def runit(outdir):
    async.pool = Pool(8)
    results = []
    for uid in range(8):
        result = evalfunc(uid, Chunks[uid], Chunks[uid + 1])
        results.append(result)
If I run this on an 8-processor or 8-core machine, it essentially uses only two cores. Why is that? Is there a way to do proper core pinning, like with pthreads?
Thanks a lot,
Mark
Comments (1)
If the function being called by apply_async (e.g. evalfunc) finishes very quickly, then all worker processes in the pool may not be utilized. If that is indeed your situation, then you need to pass more data to each call to evalfunc so each process has more to chew on.