How to solve RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0

Posted 2025-02-10 03:09:38


I trained a model on DenseNet-161 and saved it:

torch.save(model_ft.state_dict(),'/content/drive/My Drive/Stanford40/densenet161.pth')

followed by this:

model = models.densenet161(pretrained=False, num_classes=11)
model_ft.classifier = nn.Linear(2208, 11)

model.load_state_dict(torch.load('/content/drive/My Drive/Stanford40/densenet161.pth'))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
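As an aside on the loading step above: passing `map_location` to `torch.load` remaps the saved tensors onto whichever device is actually available, so a checkpoint saved on a GPU machine still loads on a CPU-only one. A minimal runnable sketch with a stand-in module (the tiny `nn.Linear`, the temp-file path, and the layer sizes here are illustrative, not from the question):

```python
import os
import tempfile

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Stand-in for the DenseNet: save a tiny module's state_dict the same way
# the question saves densenet161.pth.
net = nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), "demo_state.pth")
torch.save(net.state_dict(), path)

# map_location places every saved tensor on `device` at load time,
# so the load works regardless of where the checkpoint was written.
state = torch.load(path, map_location=device)
net.load_state_dict(state)
```

This does not cause the error in the question, but it avoids a related device surprise when the checkpoint moves between machines.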

Then, when I proceed to model evaluation:

test_tuple = datasets.ImageFolder('/content/drive/My Drive/Stanford40/body/valid', transform=data_transforms['valid'])
test_dataloader = torch.utils.data.DataLoader(test_tuple, batch_size=1, shuffle=True)

class_names = test_tuple.classes
i = 0

length = dataset_sizes['valid']
y_true = torch.zeros([length, 1])
y_pred = torch.zeros([length, 1])

for inputs, labels in test_dataloader:
    model_ft.eval()

    inputs = inputs.to(device)
    outputs = model_ft(inputs)

    y_true[i][0] = labels

    maxvalues, indices = torch.max(outputs, 1)
    y_pred[i][0] = indices
    i = i + 1
  

I face the error shown in the picture: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0.

When I check whether my model was moved to the device with

next(model.parameters()).is_cuda

the result is True.

How can I modify the code to get rid of this error?

My model training part can be found at How to solve TypeError: can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first


Comments (1)

娇妻 2025-02-17 03:09:38


You have moved model to the GPU, but not model_ft. The runtime error is at outputs = model_ft(inputs). Could it be a case of mixed-up variable names?
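Following this diagnosis, a corrected evaluation loop uses one variable name consistently (the `model` that was moved to `device`, not the stale `model_ft`) and brings the prediction indices back to CPU before storing them. A minimal runnable sketch: the tiny `nn.Linear` and the two-sample fake loader stand in for the loaded DenseNet and the ImageFolder dataloader, which this snippet does not reproduce.

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Stand-in for the loaded DenseNet; the fix is to call the model that
# was actually moved to `device`, never the stale `model_ft`.
model = nn.Linear(8, 11).to(device)
model.eval()

# Fake dataloader: two (input, label) pairs with batch_size=1.
fake_loader = [(torch.randn(1, 8), torch.tensor([3])),
               (torch.randn(1, 8), torch.tensor([7]))]

length = len(fake_loader)
y_true = torch.zeros([length, 1])
y_pred = torch.zeros([length, 1])

with torch.no_grad():                 # no gradients needed for evaluation
    for i, (inputs, labels) in enumerate(fake_loader):
        inputs = inputs.to(device)    # inputs on the same device as the model
        outputs = model(inputs)       # one consistent name: `model`
        y_true[i][0] = labels
        maxvalues, indices = torch.max(outputs, 1)
        y_pred[i][0] = indices.cpu()  # y_pred lives on CPU, so move back
        i = i + 1
```

With a single variable name and every tensor on the same device, the "Expected all tensors to be on the same device" error no longer triggers.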
