How to add more Lambda layers in a CNN autoencoder?



I am trying to customize a CNN autoencoder like the one below, but I do not understand the meaning of the Lambda layers. What does Lambda(lambda x: x[:, 0:1]) mean, and how do I add one more Lambda layer (i.e., val3) in this case?

from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, Dense, Lambda

input_img = Input(shape=(384, 192, 2))
## Encoder
x = Conv2D(16, (3, 3), activation='tanh', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='tanh', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Reshape([6*3*4])(x)  ## Flatten(): 384/2^6 = 6, 192/2^6 = 3, 4 channels
encoded = Dense(2, activation='tanh')(x)
## Two variables
val1 = Lambda(lambda x: x[:, 0:1])(encoded)
val2 = Lambda(lambda x: x[:, 1:2])(encoded)
## Decoder 1
.....


Comments (1)

咆哮 2025-02-11 11:17:47


From this blog:

Let's say that after the Dense layer named dense_layer_3 we'd like to perform some operation on the tensor, such as adding the value 2 to each element. How can we do that? None of the existing layers does this, so we have to build a new layer ourselves.

So a Lambda layer is used to perform an operation on the input tensor while still being recognized as a layer of the model. For example, let's say I have this model:

layer1 = Dense(...)(x)
layer2 = Dense(...)(layer1)

model.summary() # will list layer1 and layer2

Now I want to compute x + 2 after layer1. Normally I would do:

layer1 = Dense(...)(x)
x = layer1 + 2
layer2 = Dense(...)(x)

model.summary() # will miss the x = layer1 + 2 operation

But x = layer1 + 2 will not be recognized as a layer in the model. We know it exists because we wrote it, but anyone else reading the model has no way to know, which makes it hard to debug if something goes wrong. So we use a Lambda layer instead:

layer1 = Dense(...)(x)
lamb = Lambda(lambda x: x + 2)(layer1)
layer2 = Dense(...)(lamb)

model.summary() # will list the Lambda layer
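
For reference, here is a minimal runnable sketch of that pattern (the layer sizes and names are illustrative, not from the question):

from keras.layers import Input, Dense, Lambda
from keras.models import Model

inp = Input(shape=(4,))
layer1 = Dense(8, activation='tanh')(inp)
lamb = Lambda(lambda t: t + 2)(layer1)  # adds 2 to every element of the tensor
layer2 = Dense(1)(lamb)

model = Model(inp, layer2)
model.summary()  # the Lambda layer appears between the two Dense layers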

Regarding Lambda(lambda x: x[:, 0:1]): it is a Lambda layer that slices the tensor. x[:, 0:1] means "take all rows, but only the column at index 0" (the end of a Python slice is exclusive, and writing 0:1 instead of 0 keeps the column dimension). Likewise, x[:, 1:2] takes only the column at index 1.
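
As for the second part of the question: each Lambda slice pulls one unit out of the bottleneck, so a third variable (val3) needs a third bottleneck unit. A minimal sketch under that assumption (the Dense width changes from 2 to 3; everything before it stays as in the question):

encoded = Dense(3, activation='tanh')(x)  # bottleneck widened from 2 to 3 units
## Three variables, one per bottleneck unit
val1 = Lambda(lambda t: t[:, 0:1])(encoded)  # column 0
val2 = Lambda(lambda t: t[:, 1:2])(encoded)  # column 1
val3 = Lambda(lambda t: t[:, 2:3])(encoded)  # column 2

Each of val1, val2, and val3 can then feed its own decoder branch, as the ## Decoder 1 comment in the question suggests.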
