Why does my GAN discriminator show the wrong classes?
I created a GAN for text using the vanilla GAN training approach. The generated sequences themselves look good, but when I apply nn.Sigmoid to the discriminator's output to inspect its labels, it shows [0] (fake) for generated data that is completely realistic, which is not correct.
Here is my Discriminator code:
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, hidden_size, hidden_size2, dropout):
        super().__init__()
        self.FC1 = nn.Sequential(
            nn.Linear(512, hidden_size),
            nn.LeakyReLU(0.1),
            nn.Dropout(dropout))
        self.FC2 = nn.Sequential(
            nn.Linear(hidden_size, hidden_size2),
            nn.LeakyReLU(0.1),
            nn.Dropout(dropout))
        self.FC3 = nn.Linear(hidden_size2, 1)   # one raw logit per sample
        self.dropout = nn.Dropout(dropout)
        self.bach = nn.BatchNorm1d(512)
        self.bach2 = nn.BatchNorm1d(hidden_size)
        self.bach3 = nn.BatchNorm1d(64)         # hard-coded, so hidden_size2 must be 64

    def forward(self, x):
        z = self.dropout(x)
        z = self.bach(z)
        z = self.FC1(z)
        z = self.bach2(z)
        z = self.FC2(z)
        z = self.bach3(z)
        out = self.FC3(z)  # no sigmoid here; BCEWithLogitsLoss applies it internally
        return out
As input to this Classifier, the hidden states of real and fake sequences are fed into the network.
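For context, here is a minimal smoke-test sketch of the shapes involved (the hidden sizes, batch size, and dropout value are placeholders I picked for illustration; note that hidden_size2 has to be 64 because bach3 is hard-coded as nn.BatchNorm1d(64)):

# Illustrative values only; hidden_size2=64 is forced by the hard-coded BatchNorm1d(64)
clf = Classifier(hidden_size=256, hidden_size2=64, dropout=0.3)

hidden_real = torch.randn(32, 512)  # stand-in for the 512-dim hidden states of 32 real sequences
logits = clf(hidden_real)           # shape (32, 1): one raw logit per sequence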
My loss function is BCEWithLogitsLoss, and this is the training code:
# to create real labels (1s)
def label_real(size):
    data = torch.ones(size, 1)
    return data.to(device)

# to create fake labels (0s)
def label_fake(size):
    data = torch.zeros(size, 1)
    return data.to(device)

# function to train the discriminator network
def train_discriminator(optimizer, data_real, data_fake):
    b_size = data_real.size(1)              # assumes the batch is dimension 1 of data_real
    real_label = label_real(b_size)
    fake_label = label_fake(b_size)
    optimizer.zero_grad()
    output_real = discriminator(data_real)  # logits for real hidden states
    loss_real = criterion(output_real, real_label)
    output_fake = discriminator(data_fake)  # logits for fake hidden states
    loss_fake = criterion(output_fake, fake_label)
    loss_real.backward()
    loss_fake.backward()
    optimizer.step()
    return loss_real + loss_fake
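The globals used above (device, discriminator, criterion) are set up roughly like this; the learning rate and optimizer choice below are placeholders, but the criterion really is BCEWithLogitsLoss as mentioned:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

discriminator = Classifier(hidden_size=256, hidden_size2=64, dropout=0.3).to(device)
criterion = nn.BCEWithLogitsLoss()  # applies the sigmoid internally, so the model returns raw logits
optimizer = torch.optim.Adam(discriminator.parameters(), lr=2e-4)  # placeholder hyperparameters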
I apply nn.Sigmoid only after training, when testing the model. Please help me figure out what is wrong with my neural network.
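Concretely, this is roughly how I inspect the labels at test time (generated_hidden is a placeholder name for the hidden states of my generated sequences; torch.randn is just a stand-in so the snippet runs):

# generated_hidden: hidden states of generated sequences, shape (batch, 512)
generated_hidden = torch.randn(32, 512).to(device)

discriminator.eval()  # disables dropout and switches BatchNorm to its running statistics
with torch.no_grad():
    probs = torch.sigmoid(discriminator(generated_hidden))  # probabilities in (0, 1)
    labels = (probs > 0.5).long()                           # 1 = real, 0 = fake
print(labels)  # with my actual generator outputs this prints 0s, even for realistic sequences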