RuntimeError: 0D or 1D target tensor expected, multi-target not supported. I am training a deep learning model and ran into this error.
My Training Model:

def train(model, criterion, optimizer, iters):
    epochs = iters
    train_loss = []
    validation_loss = []
    train_acc = []
    validation_acc = []
    states = ['Train', 'Valid']
    for epoch in range(epochs):
        print("epoch : {}/{}".format(epoch + 1, epochs))
        for phase in states:
            if phase == 'Train':
                model.train()                 # training mode for the train phase
                dataload = train_data_loader
            else:
                model.eval()                  # evaluation mode for the validation phase
                dataload = valid_data_loader
            run_loss, run_acc = 0, 0          # running loss and accuracy for this phase
            for data in dataload:
                inputs, labels = data
                inputs = inputs.to(device)
                labels = labels.to(device)
                labels = labels.byte()
                optimizer.zero_grad()         # reset gradients before the forward pass
                with torch.set_grad_enabled(phase == 'Train'):
                    outputs = model(inputs)
                    loss = criterion(outputs, labels.unsqueeze(1).float())
                    predict = outputs >= 0.5
                    if phase == 'Train':
                        loss.backward()       # backward propagation
                        optimizer.step()
                acc = torch.sum(predict == labels.unsqueeze(1))
                run_loss += loss.item()
                run_acc += acc.item() / len(labels)
            if phase == 'Train':              # train loss and accuracy for this epoch
                epoch_loss = run_loss / len(train_data_loader)
                train_loss.append(epoch_loss)
                epoch_acc = run_acc / len(train_data_loader)
                train_acc.append(epoch_acc)
            else:                             # validation loss and accuracy for this epoch
                epoch_loss = run_loss / len(valid_data_loader)
                validation_loss.append(epoch_loss)
                epoch_acc = run_acc / len(valid_data_loader)
                validation_acc.append(epoch_acc)
            print("{}, loss: {}, accuracy: {}".format(phase, epoch_loss, epoch_acc))
    history = {'Train_loss': train_loss, 'Train_accuracy': train_acc,
               'Validation_loss': validation_loss, 'Validation_Accuracy': validation_acc}
    return model, history
I am getting the error "0D or 1D target tensor expected, multi-target not supported". Could you please help me rectify the code described above? I have referred to previous related articles but could not get the desired result. Which parts of the code do I have to change so that my model runs successfully? Any suggestions are welcome. Thanks in advance.
3 Answers
Your problem is that the labels do not have the correct shape for computing the loss. When you add .unsqueeze(1) to the labels you turn them into shape [32, 1], which is not what the loss function expects. To fix the problem you only need to remove the .unsqueeze(1) on the labels. If you read the documentation of CrossEntropyLoss: the input (your outputs) has shape [32, 3] in your case, and the target (your labels) should have shape [32]. The loss function therefore expects labels to be a 1D target, not a multi-target tensor.
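A minimal sketch of the shape requirement, assuming a 3-class classifier, a batch size of 32, and nn.CrossEntropyLoss (the tensors are made up for illustration):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

outputs = torch.randn(32, 3)          # model output: [batch_size, num_classes]
labels = torch.randint(0, 3, (32,))   # class indices: shape [32], dtype torch.long

loss = criterion(outputs, labels)     # works: 1D target of class indices
# criterion(outputs, labels.unsqueeze(1))  # a [32, 1] target is what raises the multi-target error in the question

Applied to the training loop above, that means passing labels as a 1D tensor of class indices (kept as torch.long rather than .byte()) and computing predictions with outputs.argmax(dim=1) instead of thresholding at 0.5.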
This issue can also be caused by the loss function. Try an alternative loss function that can deal with a multi-target tensor. I used nn.MSELoss() and the error went away.
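A minimal sketch of that workaround, assuming a single-output model with 0/1 labels (the shapes here are illustrative, not taken from the question):

import torch
import torch.nn as nn

criterion = nn.MSELoss()

outputs = torch.sigmoid(torch.randn(32, 1))   # model output in [0, 1], shape [32, 1]
labels = torch.randint(0, 2, (32,))           # binary labels, shape [32]

loss = criterion(outputs, labels.unsqueeze(1).float())  # MSELoss needs matching shapes: [32, 1] vs [32, 1]

Note that MSELoss only sidesteps the error because it compares outputs and targets element-wise; the target must then match the output shape and dtype exactly, so the .unsqueeze(1).float() stays.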
For me the issue was that one_hot was not being used with the num_classes parameter. Without it, a batch might not contain any element of a particular class, which makes the dimension of the one_hot encoding different. Use the num_classes parameter of one_hot to fix the issue. The chance of this happening is higher at the end of the dataloader, because drop_last=False allows the last batch to be smaller than the dataloader's batch_size.
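A minimal sketch of the difference, using torch.nn.functional.one_hot with a made-up batch that happens to contain no samples of class 2:

import torch
import torch.nn.functional as F

labels = torch.tensor([0, 1, 1, 0])            # no class-2 sample in this batch

print(F.one_hot(labels).shape)                 # torch.Size([4, 2]): width inferred from the batch
print(F.one_hot(labels, num_classes=3).shape)  # torch.Size([4, 3]): width fixed by num_classes

Fixing num_classes keeps the target width constant across batches, so the shape passed to the loss no longer depends on which classes a given batch happens to contain.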