PyTorch RuntimeError when freezing EfficientNet layers by setting requires_grad = False
I want to freeze the layers in a PyTorch EfficientNet model. My usual way of doing this doesn't work.
from torchvision.models import efficientnet_b0
from torch import nn
from torch import optim

efficientnet_b0_fine = efficientnet_b0(pretrained=True)

for param in efficientnet_b0_fine.parameters():
    param.requires_grad = False

efficientnet_b0_fine.fc = nn.Linear(512, 10)
optimizer = optim.Adam(efficientnet_b0_fine.parameters(), lr=0.0001)
loss_function = nn.CrossEntropyLoss()

training(net=efficientnet_b0_fine, n_epochs=epochs, optimizer=optimizer,
         loss_function=loss_function, train_dl=train_dl)
The error I get says:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
The training function looks like this:
for xb, yb in train_dl:
    optimizer.zero_grad()
    xb = xb.to(device)
    yb = yb.to(device)
    y_hat = net(xb)
    loss = loss_function(y_hat, yb)
    loss.backward()
    optimizer.step()
It would be great if one of you has a solution!
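
A note on the likely cause: torchvision's EfficientNet exposes its classification head as classifier (a Dropout followed by a Linear with 1280 input features for efficientnet_b0), not fc. Assigning to efficientnet_b0_fine.fc therefore only attaches an unused attribute; the forward pass still runs entirely through frozen parameters, so the loss has no grad_fn and loss.backward() raises the RuntimeError above. Below is a minimal sketch of the usual freezing pattern, assuming the torchvision implementation; the 10-class head and learning rate are taken from the question.

from torchvision.models import efficientnet_b0
from torch import nn
from torch import optim

model = efficientnet_b0(pretrained=True)

# Freeze the whole backbone.
for param in model.parameters():
    param.requires_grad = False

# Replace the actual head. A freshly constructed Linear has
# requires_grad=True by default, so only this layer will train.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 10)

# Pass only the trainable parameters to the optimizer.
optimizer = optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.0001
)

With this setup the training loop above should run unchanged, since the loss now has a trainable path through the new head.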