Can the pooling layer in a CNN be replaced with a Principal Component Analysis network?

Posted 2025-02-09 04:09:54


Is it possible to replace the pooling layer in a CNN with a Principal Component Analysis network? Please elaborate. I tried the following:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D
from sklearn.decomposition import PCA

input_shape = keras.Input(shape=(224, 224, 1))

tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(input_shape)

# Flatten the spatial dimensions, then transpose to (channels, pixels)
reshape_tower1 = tf.reshape(tower_1, [224 * 224, 16])
Trans_tower1 = tf.transpose(reshape_tower1)

# Try to reduce each channel's 50176 pixel features to 10 components
pca_tower1 = PCA(n_components=10)
pca_tower1.fit(Trans_tower1)
result = pca_tower1.transform(Trans_tower1)

Error:

You are passing KerasTensor(type_spec=TensorSpec(shape=(16, 50176), 
dtype=tf.float32, name=None), name='tf.compat.v1.transpose_1/transpose:0', 
description="created by layer 'tf.compat.v1.transpose_1'"), an intermediate 
Keras symbolic input/output, to a TF API that does not allow registering 
custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or 
`tf.map_fn`. Keras Functional model construction only supports TF API calls 
that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other 
APIs cannot be called directly on symbolic Keras inputs/outputs. You can work 
around this limitation by putting the operation in a custom Keras layer `call` 
and calling that layer on this symbolic input/output.
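As the error message itself suggests, the tensor manipulation has to live inside a custom Keras layer's `call` rather than at model-construction time. A minimal sketch of that workaround is below; it replaces sklearn's PCA with a TensorFlow-native projection built from `tf.linalg.svd`, so the whole thing stays inside the graph. The layer name `PCAProjection` and the choice to treat the 16 channels (not the pixels) as features are assumptions for illustration, not part of the original post.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D

class PCAProjection(keras.layers.Layer):
    """Illustrative layer: projects the flattened feature maps onto their
    top principal components using tf.linalg.svd inside `call`, which is
    allowed in the Functional API (unlike calling sklearn on a KerasTensor)."""

    def __init__(self, n_components=10, **kwargs):
        super().__init__(**kwargs)
        self.n_components = n_components

    def call(self, x):
        # x: (batch, H, W, C) -> flatten spatial dims to (batch, H*W, C)
        shape = tf.shape(x)
        flat = tf.reshape(x, [shape[0], shape[1] * shape[2], shape[3]])
        # Centre each channel column, as PCA does
        flat = flat - tf.reduce_mean(flat, axis=1, keepdims=True)
        # Thin SVD per batch element; v has shape (batch, C, C)
        s, u, v = tf.linalg.svd(flat)
        components = v[:, :, :self.n_components]
        # Project onto the leading components: (batch, H*W, n_components)
        return tf.matmul(flat, components)

inputs = keras.Input(shape=(224, 224, 1))
tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(inputs)
projected = PCAProjection(n_components=10)(tower_1)
model = keras.Model(inputs, projected)
```

This only resolves the symbolic-tensor error; whether an SVD-based projection is a sensible replacement for pooling is a separate question (see the answer below on differentiability).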


Comments (1)

倦话 2025-02-16 04:09:54


You seem to be using sklearn's PCA, which only works with NumPy arrays. You could convert your tensor to a NumPy array before the PCA, but you would lose the gradients, so you would not be able to train the parameters of the first Conv layer. By the way, even if you found a PCA implementation that accepts TF tensors, you would face a bigger problem: PCA is not differentiable, so the parameters cannot be trained in any case.
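The gradient-loss point can be demonstrated with a short eager-mode sketch (illustrative, not from the answer): the `.numpy()` round-trip detaches the result from TensorFlow's gradient tape, so no gradient flows back to the variable.

```python
import tensorflow as tf
from sklearn.decomposition import PCA

x = tf.Variable(tf.random.normal([50, 16]))

with tf.GradientTape() as tape:
    x_np = x.numpy()                              # leaves the TF graph
    reduced = PCA(n_components=4).fit_transform(x_np)  # plain NumPy from here on
    loss = tf.reduce_sum(tf.constant(reduced) ** 2)

grad = tape.gradient(loss, x)
print(grad)  # None: the NumPy round-trip broke the gradient chain
```

Because `loss` is built from a fresh `tf.constant`, the tape sees no path back to `x`, which is exactly why the Conv layer upstream of such a PCA could not be trained.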
