How to use fairseq-interactive with multiple GPUs?
I am trying to generate new predictions from my model, but I found that fairseq is not very intuitive to use. I found that fairseq-interactive can help generate output with a good batch_size setting; however, it seems to use only 1 GPU at a time. I wonder if it is possible to use multiple GPUs? Hope someone can kindly help!
Many thanks :)

1 Answer
You cannot do this natively within fairseq. The best way to do this is to shard your data and run fairseq-interactive on each shard in the background. Be sure to set CUDA_VISIBLE_DEVICES for each shard so you put each shard's generation on a different GPU. This advice also applies to fairseq-generate (which will be significantly faster for large inference jobs).
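In case it helps, here is a minimal sketch of that workflow in Python. The file names (source.txt, shard_*.txt), the data-bin directory, the checkpoint path, the GPU count, and the batch/beam sizes are all assumptions for illustration; adjust them to your setup.

```python
#!/usr/bin/env python3
"""Sketch: split an input file into shards and run one fairseq-interactive
process per GPU, each pinned to its own device via CUDA_VISIBLE_DEVICES.
Paths and sizes below are assumptions; change them for your setup."""
import os
import subprocess

NUM_GPUS = 4
DATA_BIN = "data-bin"               # assumption: binarized data directory
CHECKPOINT = "checkpoint_best.pt"   # assumption: trained model checkpoint
SOURCE = "source.txt"               # assumption: one source sentence per line

# 1. Split the input into one shard per GPU (round-robin by line).
with open(SOURCE, encoding="utf-8") as f:
    lines = f.readlines()
shards = [lines[i::NUM_GPUS] for i in range(NUM_GPUS)]
for i, shard in enumerate(shards):
    with open(f"shard_{i}.txt", "w", encoding="utf-8") as f:
        f.writelines(shard)

# 2. Launch one fairseq-interactive process per shard in the background,
#    each restricted to a single GPU through CUDA_VISIBLE_DEVICES.
procs = []
for i in range(NUM_GPUS):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(i))
    out = open(f"shard_{i}.out", "w", encoding="utf-8")
    cmd = [
        "fairseq-interactive", DATA_BIN,
        "--path", CHECKPOINT,
        "--input", f"shard_{i}.txt",
        "--buffer-size", "64",
        "--batch-size", "64",
        "--beam", "5",
    ]
    procs.append((subprocess.Popen(cmd, stdout=out, env=env), out))

# 3. Wait for all shards to finish.
for proc, out in procs:
    proc.wait()
    out.close()
```

Note that with round-robin sharding the hypotheses in shard_*.out need to be stitched back into the original order; the S-/H- line indices that fairseq-interactive prints are per-shard, so you have to map them back to the original line numbers yourself (or shard into contiguous chunks instead to make merging simpler).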