Is it possible to replace the pooling layer in a CNN with a Principal Component Analysis Network? Please elaborate. I tried the code below:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D
from sklearn.decomposition import PCA

input_shape = keras.Input(shape=(224, 224, 1))
tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(input_shape)
reshape_tower1 = tf.reshape(tower_1, [224 * 224, 16])
Trans_tower1 = tf.transpose(reshape_tower1)
pca_tower1 = PCA(n_components=10)
pca_tower1.fit(Trans_tower1)
result = pca_tower1.transform(Trans_tower1)
Error:
You are passing KerasTensor(type_spec=TensorSpec(shape=(16, 50176),
dtype=tf.float32, name=None), name='tf.compat.v1.transpose_1/transpose:0',
description="created by layer 'tf.compat.v1.transpose_1'"), an intermediate
Keras symbolic input/output, to a TF API that does not allow registering
custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or
`tf.map_fn`. Keras Functional model construction only supports TF API calls
that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other
APIs cannot be called directly on symbolic Keras inputs/outputs. You can work
around this limitation by putting the operation in a custom Keras layer `call`
and calling that layer on this symbolic input/output.
You seem to be using sklearn's PCA, which only works with numpy arrays. You could convert your tensor to a numpy array before the PCA, but you would lose the gradients with that conversion, so you would not be able to train the parameters of the first Conv layer. By the way, even if you find a PCA implementation that accepts tf tensors, you will face a bigger problem: PCA is not differentiable, so you can't train the parameters in any case.
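To illustrate the first point above, here is a minimal sketch (assuming TensorFlow 2.x eager execution and scikit-learn) of converting the conv features to a numpy array before applying sklearn's PCA. The gradient chain is cut at the `.numpy()` call, so this only works as a fixed feature-extraction step, not for end-to-end training:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D
from sklearn.decomposition import PCA

# Build a small feature extractor and call it eagerly on real data,
# so we get concrete tensors instead of symbolic Keras tensors.
inputs = keras.Input(shape=(224, 224, 1))
features = Conv2D(16, (3, 3), padding='same', activation='relu')(inputs)
extractor = keras.Model(inputs, features)

x = np.random.rand(1, 224, 224, 1).astype('float32')  # dummy input batch
feat = extractor(x)  # eager tensor, shape (1, 224, 224, 16)

# Flatten spatial dims and transpose, as in the question:
# one row per channel, 224*224 features per row.
flat = tf.reshape(feat, [224 * 224, 16])
flat_t = tf.transpose(flat).numpy()  # (16, 50176); gradients are lost here

pca = PCA(n_components=10)
reduced = pca.fit_transform(flat_t)  # (16, 10)
print(reduced.shape)
```

Note that `n_components` must not exceed the number of rows fed to `fit` (16 channels here), which is another reason the per-channel layout matters.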