Autoencoder using encoding and decoding layers in Keras: problem defining the latent layer

Posted 2025-02-10 10:15:53


I've been following a convolutional autoencoder derived from https://medium.com/analytics-vidhya/building-a-convolutional-autoencoder-using-keras-using-conv2dtranspose-ca403c8d144e, using a data set of 72x72 greyscale images. I've been able to obtain a trainable model, but I've run into problems when applying it to my data.

Convolution/deconvolution:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Reshape, Conv2DTranspose
from tensorflow.keras.models import Model

input = Input(shape=(72,72,1))

e_input = Conv2D(32, (3, 3), activation='relu')(input) #70 70 32
e = MaxPooling2D((2, 2))(e_input) #35 35 32
e = Conv2D(64, (3, 3), activation='relu')(e) #33 33 64
e = MaxPooling2D((2, 2))(e) # 16 16 64
e = Conv2D(64, (3, 3), activation='relu')(e) #14 14 64
e = Flatten()(e) #1 1 12544
e_output = Dense(324, activation='softmax')(e) #1 1 324

d = Reshape((18,18,1))(e_output) #18 18 1, found to be generally N/4 units (so 7 for 28x28)
d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d) #36 36 64
d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d) #72 72 64
d = Conv2DTranspose(64,(3, 3), activation='relu', padding='same')(d) #72 72 64
d_output = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d) #72 72 1

And here's the code for separate encoding and decoding, as well as defining the autoencoder model:

autoencoder = Model(input, d_output) 
autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=["mae"]) 
encoder = Model(input, e_output) 
encoded_input = Input(shape=(1,1)) 
decoder_layer = autoencoder.layers[-6](encoded_input)
l2 = autoencoder.layers[-5](decoder_layer) 
decoder = Model(encoded_input, l2) 
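
As a sanity check on those negative indices, the autoencoder's layers can be enumerated (a rough diagnostic sketch, assuming the autoencoder defined above and the standard Keras layer attributes name and output_shape):

# Diagnostic sketch: print each layer with its negative index and output shape.
n_layers = len(autoencoder.layers)
for i, layer in enumerate(autoencoder.layers):
    print(i - n_layers, layer.name, layer.output_shape)
# With the architecture above, layers[-6] is the Dense(324) latent layer and
# layers[-5] is the Reshape, so the two-layer decoder built from them starts
# from the Dense layer's input rather than from the 324-dimensional code.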

Following training, I ran the following, where x_train has shape (2433, 72, 72).

encoded_imgs = encoder.predict(x_train)
decoded_imgs = decoder.predict(encoded_imgs)

And I obtain the following error for decoded_imgs: "ValueError: Input 0 of layer "model_5" is incompatible with the layer: expected shape=(None, 1, 1), found shape=(None, 324)". Changing encoded_input's shape to (324) gives rise to another error (ValueError: Exception encountered when calling layer "dense_2" (type Dense)) when calling back through the layers in decoder_layer...

I speculate I'm not reshaping data at the correct time, or perhaps I'm unnecessarily using some models. Any help would be greatly appreciated.


Comments (1)

谜兔 2025-02-17 10:15:53


The input of the decoder is the output of the encoder. In your model, the encoder's output (the latent layer) has shape (None, 324), but you are passing an input of shape (1, 1). You can create a separate decoder model as follows:

decoder = Model(e_output, d_output)

Please find the working code here. Thank you!
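
A minimal end-to-end sketch of how the three models fit together at inference time (assuming the layers from the question are defined, and that x_train may still need an explicit channel axis):

# Minimal sketch (assumes the encoder/decoder layers from the question).
autoencoder = Model(input, d_output)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=['mae'])

encoder = Model(input, e_output)        # image -> 324-dimensional code
decoder = Model(e_output, d_output)     # 324-dimensional code -> image

# ... fit the autoencoder here ...

# If x_train is (2433, 72, 72), add the channel axis the Conv2D input expects.
x = x_train.reshape(-1, 72, 72, 1).astype('float32')

encoded_imgs = encoder.predict(x)             # (2433, 324)
decoded_imgs = decoder.predict(encoded_imgs)  # (2433, 72, 72, 1)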
