How to increase the number of Lambda layers in a CNN autoencoder?
I am trying to customize a CNN autoencoder like the one below, but I do not understand the meaning of the Lambda layers. What does Lambda(lambda x: x[:,0:1]) mean, and how do I add one more Lambda layer (i.e., val3) in this case?
## assuming TF2-style Keras imports (plain `keras` imports work the same way)
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Reshape, Dense, Lambda

input_img = Input(shape=(384, 192, 2))
## Encoder
x = Conv2D(16, (3, 3), activation='tanh', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Reshape([6*3*4])(x)  ## Flatten(): after six 2x2 poolings, 384/64 = 6, 192/64 = 3, with 4 channels
encoded = Dense(2, activation='tanh')(x)  ## bottleneck: two latent variables
## Two variables
val1 = Lambda(lambda x: x[:, 0:1])(encoded)
val2 = Lambda(lambda x: x[:, 1:2])(encoded)
## Decoder 1
.....
1 comment:
From this blog:
So a Lambda layer is used to perform an operation on the input tensor while still being recognized as a layer in the model. For example, let's say I have the model:
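Such a model might look like the following minimal sketch (the input shape, layer sizes, and the name layer1 are assumptions for illustration; the original answer's code block was not preserved here):

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(4,))                     # hypothetical 4-feature input
layer1 = Dense(8, activation='relu')(inp)   # "layer1" referred to below
out = Dense(1)(layer1)
model = Model(inp, out)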
Now I want to compute x + 2 after layer1. Normally I would just write x = x + 2, but that will not be recognized as a layer in the model. We know it exists because we wrote it, but others inspecting the model will have no way to know, which makes it hard to debug if something goes wrong. So we use Lambda:
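Continuing the sketch above (again an illustration, not the original answer's code), wrapping the operation in a Lambda makes the +2 part of the model graph:

from tensorflow.keras.layers import Lambda

x = Dense(8, activation='relu')(inp)   # layer1
x = Lambda(lambda t: t + 2)(x)         # the +2 is now a real layer, visible in model.summary()
out = Dense(1)(x)
model = Model(inp, out)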
Regarding Lambda(lambda x: x[:,0:1]): it is a Lambda layer used for tensor slicing. x[:, 0:1] means "take all rows (all samples in the batch), but only the columns with indices from 0 up to, but not including, 1" -- that is, just the first column. The slice 1:2 in val2 likewise picks out the second column.