UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed
I am creating logistic regression in PyTorch from scratch, but I am running into an issue when updating the trainable parameters (weights & bias). This is my implementation:
import torch
from tqdm import tqdm

class LogisticRegression():
    def __init__(self, n_iter, lr):
        self.n_iter = n_iter
        self.lr = lr

    def fit(self, dataset):
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        n = next(iter(dataset))[0].shape[1]
        self.w = torch.zeros(n, requires_grad=True).to(device)
        self.b = torch.tensor(0., requires_grad=True).to(device)
        for i in range(self.n_iter):
            with tqdm(total=len(dataset)) as pbar:
                for x, y in dataset:
                    x = x.to(device)
                    y = y.to(device)
                    y_pred = self.predict(x.float())
                    loss = self.loss(y, y_pred)
                    loss.backward()
                    with torch.no_grad():
                        print(self.w, self.b)
                        self.w -= self.w.grad * self.lr
                        self.b -= self.b.grad * self.lr
                        self.w.grad.zero_()
                        self.b.grad.zero_()
                    pbar.update(1)
            print(f'Epoch: {i} | Loss: {loss}')

    def loss(self, y, y_pred):
        y_pred = torch.clip(y_pred, 1e-7, 1 - 1e-7)
        return -torch.mean(
            y * torch.log(y_pred + 1e-7) +
            (1 - y) * torch.log(1 - y_pred + 1e-7),
            axis=0)

    def predict(self, x):
        return self.sigmoid(torch.matmul(x, self.w) + self.b)

    def sigmoid(self, x):
        return 1 / (1 + torch.exp(-x))
As you can see, when fitting the model I initialize the weights and bias to zeros and set requires_grad=True so I can access the gradients later. I used the sklearn breast cancer dataset:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # load dataset
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2)  # train/test split

# convert all numpy arrays to torch tensors
x_train = torch.tensor(x_train)
x_test = torch.tensor(x_test)
y_train = torch.tensor(y_train)
y_test = torch.tensor(y_test)

# wrap in a TensorDataset, then a DataLoader
train_dataset = torch.utils.data.TensorDataset(x_train, y_train)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
test_dataset = torch.utils.data.TensorDataset(x_test, y_test)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32)

log = LogisticRegression(n_iter=10, lr=0.001)
log.fit(train_loader)
As soon as I fit the dataset into logistic regression, it gives me this error (I also added a print statement just before the gradient update, and the output clearly shows the parameters have a grad_fn attribute):
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0', grad_fn=<ToCopyBackward0>) tensor(0., device='cuda:0', grad_fn=<ToCopyBackward0>)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
Just before the error, it prints this UserWarning:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
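For reference, here is a minimal sketch of what the warning means by a leaf vs. non-leaf tensor (the dtype conversion is just an illustration; any .to() call that returns a new tensor behaves the same way):

import torch

w = torch.zeros(3, requires_grad=True)  # created directly -> leaf tensor
print(w.is_leaf)                        # True

w2 = w.to(torch.float64)                # .to() that converts returns a new, non-leaf tensor
print(w2.is_leaf)                       # False -> w2.grad stays None after backward()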
I need help resolving this error so that the gradient updates work and the model trains successfully!
1 Answer
The breast cancer dataset's features span a wide range of values, roughly 0.001 to 1000, with large variances, and this affects the gradients (when gradients become too large, training becomes unstable and eventually produces NaNs). To remove this dependence, it is common practice to normalize the data after splitting, for example:
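A minimal sketch, assuming sklearn's StandardScaler (any standardization computed from the training split alone works the same way); apply it to the numpy arrays right after train_test_split, before converting to tensors:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)  # learn mean/std from the training split only
x_test = scaler.transform(x_test)        # reuse the same statistics to avoid leakage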
So after that everything should be fine.