PyTorch: optimizer got an empty parameter list


I am new to deep learning with PyTorch and am trying to build a binary classifier model. I have tried some of the solutions here on Stack Overflow, but I can't seem to solve it; maybe it is due to the nature of my code. Can someone figure out what could be causing this error in my code?
Here is my code:

import torch
import torch.nn as nn
import numpy as np
from sklearn.datasets import make_blobs
import matplotlib.pyplot as pyp

# creating a dummy dataset with sklearn's make_blobs
number_of_samples=5000
#divide the dataset into training(80%) and testing(20%)
training_number=int(number_of_samples*0.8)
#creating the dummy dataset
x,y=make_blobs(n_samples=number_of_samples,centers=2,n_features=64,cluster_std=10,random_state=2020)
y=y.reshape(-1,1)
#converting the numpy arrays into torch tensors
x,y=torch.from_numpy(x),torch.from_numpy(y)
x,y=x.float(),y.float()
#splitting the datasets into training and testing
x_train,x_test=x[:training_number],x[training_number:]
y_train,y_test=y[:training_number],y[training_number:]

#printing the shapes of each dataset
print("x_train shape:",x_train.shape)
print("x_test shape:",x_test.shape)
print("y_train shape:",y_train.shape)
print("y_test shape:",y_test.shape)

#a class to define the neural network using the torch nn module
#the network will have 2 hidden layers and 1 output layer
#hidden layers will have 256 and 1024 neurons (the input has 64 features)
#output layer will have a single neuron
class neuralnetwork(nn.Module):
    def _init_(self):
        super().__init__()
        torch.manual_seed(2020)
        self.fc1 = nn.Linear(64, 256)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(256, 1024)
        self.relu2 = nn.ReLU()
        self.out = nn.Linear(1024, 1)
        self.final = nn.Sigmoid()
    
    def forward(self, x):
        op = self.fc1(x) 
        op = self.relu1(op)
        op = self.fc2(op)
        op = self.relu2(op)
        op = self.out(op)
        y = self.final(op)
        return y

#defining the loss,optimizer and training function for the neural network
def train_network(model,optimizer,loss_function,num_epochs,batch_size,x_train,y_train):
    #start model training
    model.train()
    loss_for_every_epoch=nn.ModuleList()
    for epoch in range(num_epochs):
        train_loss=0.0
        for i in range(0,x_train.shape[0],batch_size):
            #extract train batch from x and y
            input_data=x_train[i:min(x_train.shape[0]),i+batch_size]
            labels=y_train[i:min(y_train.shape[0]),i+batch_size]
            #set gradients to zero before beginning optimization
            optimizer.zero_grad()
#forward pass
            output_data=model(input_data)
            #calculate loss
            loss=loss_function(output_data,labels)
            #backpropagate
            loss.backward()
            #update weights
            optimizer.step()
            train_loss+=loss.item()*batch_size
        print("Epoch: {} - Loss:{:.4f}".format(epoch+1,train_loss )) 
        loss_for_every_epoch.extend([train_loss])
    #predict
    y_test_prediction=model(x_test)
    a=np.where(y_test_prediction>0.5,1,0)
    return loss_for_every_epoch

#create an object of the class
model=neuralnetwork()
#define the loss function
loss_function = nn.BCELoss()#binary cross entropy loss function
#define optimizer
adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
#define epochs and batch size
number_of_epochs=100
batch_size=16
#Calling the function for training, passing the model, optimizer, loss and related parameters
adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)

I get the error:

ValueError: optimizer got an empty parameter list

The error is mainly generated from this section of code:

#create an object of the class
model=neuralnetwork()
#define the loss function
loss_function = nn.BCELoss()#binary cross entropy loss function
#define optimizer
adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
#define epochs and batch size
number_of_epochs=100
batch_size=16
#Calling the function for training, passing the model, optimizer, loss and related parameters
adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)

What could be the cause in my code?
Here is the full stack trace:

Traceback (most recent call last)
g:\My Drive\CODE\pythondatascience\simpleneuralnetwork.ipynb Cell 7' in <cell line: 6>()
      4 loss_function = nn.BCELoss()#binary cross entropy loss function
      5 #define optimizer
----> 6 adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
      7 #define epochs and batch size
      8 number_of_epochs=100

File c:\Users\DAVE\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\adam.py:81, in Adam.__init__(self, params, lr, betas, eps, weight_decay, amsgrad, maximize)
     78     raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
     79 defaults = dict(lr=lr, betas=betas, eps=eps,
     80                 weight_decay=weight_decay, amsgrad=amsgrad, maximize=maximize)
---> 81 super(Adam, self).__init__(params, defaults)

File c:\Users\DAVE\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\optimizer.py:49, in Optimizer.__init__(self, params, defaults)
     47 param_groups = list(params)
     48 if len(param_groups) == 0:
---> 49     raise ValueError("optimizer got an empty parameter list")
     50 if not isinstance(param_groups[0], dict):
     51     param_groups = [{'params': param_groups}]

ValueError: optimizer got an empty parameter list

Comments (1)

泪冰清 2025-02-12 15:14:59


It should be def __init__ (double underscores on both sides), not def _init_, in the neuralnetwork class. Because Python never calls your misspelled _init_, the layers are never created, so the model has no parameters to hand to the optimizer.
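To make the fix concrete, here is a minimal sketch of the corrected class. Note that two more bugs will surface in train_network once the optimizer constructs successfully; suggested repairs, which go beyond the accepted answer, are included as comments below.

import torch
import torch.nn as nn

class neuralnetwork(nn.Module):
    def __init__(self):  # double underscores: Python invokes __init__, never _init_
        super().__init__()
        torch.manual_seed(2020)
        self.fc1 = nn.Linear(64, 256)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(256, 1024)
        self.relu2 = nn.ReLU()
        self.out = nn.Linear(1024, 1)
        self.final = nn.Sigmoid()

    def forward(self, x):
        op = self.relu1(self.fc1(x))
        op = self.relu2(self.fc2(op))
        return self.final(self.out(op))

model = neuralnetwork()
# the layers are now registered as submodules, so the parameter list is non-empty
print(sum(p.numel() for p in model.parameters()))  # prints a non-zero parameter count
adam_optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # no longer raises

# Follow-on bug 1: in train_network, min(x_train.shape[0]) calls min() on a single
# integer, which raises a TypeError. Slice each mini-batch along dimension 0 instead:
#     input_data = x_train[i:min(i + batch_size, x_train.shape[0])]
#     labels = y_train[i:min(i + batch_size, y_train.shape[0])]
# Follow-on bug 2: loss_for_every_epoch should be a plain Python list ([]), since
# nn.ModuleList only accepts nn.Module instances and raises a TypeError when
# extended with a float loss value.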
