What is the best practice in the latest version of PyTorch for making all tensors use a specific device by default?
In PyTorch, if I do something like
import torch
x = torch.randn(3)
y = x + 5
all tensors correspond to the "cpu" device by default. Is there some way to make it so that, by default, all tensors are on another device (e.g. "cuda:0")?
I know I can always be careful to add .cuda()
or specify cuda whenever creating a tensor, but it would be great if I could just change the default device directly at the beginning of the program and be done with it, so that torch.randn(3)
comes from the desired device without having to specify it every time.
Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?
PyTorch has an optional function to change the default tensor type, set_default_tensor_type. Apply the default type at the top of the main script:
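A minimal sketch of this approach, guarded so it still runs on a CPU-only machine (note that set_default_tensor_type has been deprecated in newer PyTorch releases in favor of set_default_dtype / set_default_device):

```python
import torch

# Make new floating-point tensors default to the GPU, but only when
# CUDA is actually available, so the script also runs on CPU-only boxes.
if torch.cuda.is_available():
    torch.set_default_tensor_type("torch.cuda.FloatTensor")

x = torch.randn(3)  # allocated on "cuda:0" when CUDA is available,
y = x + 5           # otherwise on "cpu"
print(y.device)
```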
I couldn't find any reference or document that answers this question directly, but in my opinion it's to avoid memory fragmentation in GPU memory.
I'm not an expert, but data in memory should be arranged efficiently; if it isn't, the redundant space can cause an OOM. That's why, by default, TensorFlow takes all of your GPU's memory no matter how many parameters your model has. You can also improve space usage and speed just by making tensor shapes multiples of 8 (see the AMP documentation).
In conclusion, I think it's better to control the device of each tensor manually instead of setting the GPU as the default.
The latest PyTorch has a set_default_device function.
But I wouldn't recommend setting the default device to the GPU: GPUs don't have that much VRAM, so you may want to keep most data on the CPU and only push to the GPU the tensors you are actually using.
A good option, though, is to use
with torch.device(device)
to set the device locally instead.
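A minimal sketch of both approaches (the global default requires PyTorch >= 2.0; "cpu" is used here so the snippet runs anywhere, so substitute "cuda:0" on a GPU machine):

```python
import torch

# Global default: factory functions called without an explicit
# `device` argument now allocate on this device (PyTorch >= 2.0).
torch.set_default_device("cpu")  # e.g. "cuda:0" on a GPU machine

# Local override: only tensors created inside this block use the
# given device, which keeps GPU placement explicit and scoped.
with torch.device("cpu"):        # e.g. torch.device("cuda:0")
    x = torch.randn(3)
    y = x + 5

print(y.device)  # the device that was active when y was created
```

The context-manager form is usually the safer choice: it limits the device change to one region of code instead of silently redirecting every allocation in the program.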