Converting a CNN from Keras to PyTorch
I need some help. I am trying to convert a CNN from Keras to PyTorch. I am reconstructing an MR image: the input is the image coming from the scanner in the Fourier domain, and the output is the reconstructed image. The input image has two channels (first channel: real part, second channel: imaginary part). Unfortunately, the results are quite different, so I believe I am doing something wrong; I just cannot figure out what it is myself. Here is the Keras code:
def AUTOMAP_Basic_Model(param):
    fc_1 = keras.Input(shape=(64, 64, 2), name='input')
    fc_2 = layers.Conv2D(64, (64, 1), strides=1, padding='same', activation='relu')(fc_1)
    fc_4 = layers.Conv2D(64, (1, 64), strides=1, padding='same', activation='relu')(fc_2)
    fc_4 = layers.Conv2D(64, (64, 1), strides=1, padding='same', activation='relu')(fc_4)
    fc_5 = layers.Conv2D(64, (1, 64), strides=1, padding='same', activation='relu')(fc_4)
    c_1 = layers.Conv2D(64, 5, strides=1, padding='same', activation='relu')(fc_5)
    c_2 = layers.Conv2D(64, 5, strides=1, padding='same', activation='relu')(c_1)
    c_3 = layers.Conv2DTranspose(1, 7, strides=1, activation='sigmoid', padding='same')(c_2)
    model = keras.Model(inputs=fc_1, outputs=c_3)
    return model
And this is my translation to PyTorch:
class AUTOMAP_Basic_Model(nn.Module):
    def __init__(self, inputShape, nrFilters):
        super(AUTOMAP_Basic_Model, self).__init__()
        self.conv1 = nn.Conv2d(2, 64, (64, 1), padding='same')
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(64, 64, (1, 64), padding='same')
        self.conv3 = nn.Conv2d(64, 64, (64, 1), padding='same')
        self.conv4 = nn.Conv2d(64, 64, 5, padding='same')
        self.conv5 = nn.Conv2d(64, 64, 5, padding='same')
        self.convTranspose = nn.ConvTranspose2d(64, 1, 7, padding=3, output_padding=0)
        self.sigmoid = nn.Sigmoid()
        self.tan = nn.Tanh()

    def forward(self, x):
        batch_size = len(x)
        out = self.conv1(x)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.relu(out)
        out = self.conv4(out)
        out = self.relu(out)
        out = self.conv5(out)
        out = self.relu(out)
        out = self.convTranspose(out)
        out = self.sigmoid(out)
        return out
I am new to PyTorch, which is why I don't know what could be wrong. While the Keras model converges to the right image, the PyTorch model outputs a constant value of 0.45 for every pixel in the image.
1 Answer
It seems like you have some strange ordering and naming in both the Keras and the PyTorch code. In the Keras version fc_3 is missing (fc_4 is assigned twice), which might still be viable, but I don't understand why.
In the PyTorch implementation you need four of those conv layers (fc_2, fc_3, fc_4, fc_5), but you only defined three (conv1, conv2, conv3), so forward applies conv2, and its weights, twice.
I'd rewrite the model with something like this:
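A minimal sketch of such a rewrite, assuming PyTorch >= 1.9 (for padding='same' on Conv2d) and giving each of the four directional convolutions its own layer so no weights are shared:

import torch
import torch.nn as nn

class AUTOMAP_Basic_Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, (64, 1), padding='same'),   # fc_2
            nn.ReLU(),
            nn.Conv2d(64, 64, (1, 64), padding='same'),  # fc_3
            nn.ReLU(),
            nn.Conv2d(64, 64, (64, 1), padding='same'),  # fc_4
            nn.ReLU(),
            nn.Conv2d(64, 64, (1, 64), padding='same'),  # fc_5
            nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding='same'),        # c_1
            nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding='same'),        # c_2
            nn.ReLU(),
            # kernel 7, stride 1, padding 3 keeps 64x64, i.e. Keras 'same'
            nn.ConvTranspose2d(64, 1, 7, padding=3),     # c_3
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, 2, 64, 64) -- channels-first, unlike Keras (N, 64, 64, 2)
        return self.net(x)

# quick shape check
model = AUTOMAP_Basic_Model()
print(model(torch.randn(1, 2, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])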
I'd also suggest using Conv+BN+ReLU blocks.
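For instance, each conv stage above could be built from a small helper like this (conv_bn_relu is a hypothetical name; bias is disabled because BatchNorm2d supplies its own learnable shift):

import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, kernel_size):
    # Convolution followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding='same', bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

# Example: the first directional conv stage of the model above.
block = conv_bn_relu(2, 64, (64, 1))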