How to solve RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0
I trained a model based on DenseNet-161 and saved it:
torch.save(model_ft.state_dict(),'/content/drive/My Drive/Stanford40/densenet161.pth')
followed by this:
model = models.densenet161(pretrained=False,num_classes=11)
model_ft.classifier=nn.Linear(2208,11)
model.load_state_dict(torch.load('/content/drive/My Drive/Stanford40/densenet161.pth'))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model=model.to(device)
Then, when I proceed to model evaluation:
test_tuple=datasets.ImageFolder('/content/drive/My Drive/Stanford40/body/valid',transform=data_transforms['valid'])
test_dataloader=torch.utils.data.DataLoader(test_tuple,batch_size=1,shuffle=True)
class_names=test_tuple.classes
i=0
length=dataset_sizes['valid']
y_true=torch.zeros([length,1])
y_pred=torch.zeros([length,1])
for inputs, labels in test_dataloader:
    model_ft.eval()
    inputs = inputs.to(device)
    outputs = model_ft(inputs)
    y_true[i][0] = labels
    maxvalues, indices = torch.max(outputs, 1)
    y_pred[i][0] = indices
    i = i + 1
I face this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0
When I check whether my model was moved to the device with this code: next(model.parameters()).is_cuda
The result is True.
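For reference, that device check can be reproduced stand-alone. Below is a minimal sketch using a tiny nn.Linear as a hypothetical stand-in model (the question's actual code would use the DenseNet-161 instance instead):

```python
import torch
import torch.nn as nn

# Tiny stand-in model (hypothetical; the question uses DenseNet-161).
model = nn.Linear(4, 2)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# is_cuda is True only on a machine with a GPU; comparing device types
# works the same way on both CPU-only and CUDA machines.
print(next(model.parameters()).is_cuda)               # True on GPU, False on CPU
on_device = next(model.parameters()).device.type == device.type
print(on_device)                                      # True either way
```

Note that this check only tells you about the object named model; it says nothing about any other model object that might be used elsewhere in the script.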
How can I modify the code to get rid of this error?
My model training part can be found at How to solve TypeError: can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
You have moved model to the GPU, but not model_ft. The runtime error is at outputs = model_ft(inputs). Could it be a case of mixed-up variable names?
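A minimal sketch of the fix along those lines: use one consistent name for the model, move that same object to the device once, and keep the inputs on the matching device. The tiny linear model and the fake two-sample loader below are stand-ins so the sketch runs without the DenseNet-161 weights or the Stanford40 data:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the DenseNet-161 classifier (2208 -> 11 classes),
# so this runs without downloading any weights. The fix itself is simply to
# use ONE variable name and move that same model to the device.
model = nn.Linear(2208, 11)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)   # the model actually called below is on `device`
model.eval()               # set eval mode once, outside the loop

# Fake "dataloader": two (input, label) pairs of random features.
fake_loader = [(torch.randn(1, 2208), torch.tensor([3])),
               (torch.randn(1, 2208), torch.tensor([7]))]

y_true, y_pred = [], []
with torch.no_grad():                        # no gradients needed for evaluation
    for inputs, labels in fake_loader:
        inputs = inputs.to(device)           # inputs on the same device as model
        outputs = model(inputs)              # same name as the model moved above
        _, indices = torch.max(outputs, 1)
        y_true.append(labels.item())
        y_pred.append(indices.cpu().item())  # back to CPU before Python-side use

print(len(y_true), len(y_pred))
```

Collecting predictions into plain Python lists via .cpu().item() also avoids the related "can't convert CUDA tensor to numpy" error mentioned at the end of the question.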