Why am I getting a Conv2D error when trying to run a Conv1D layer?

Posted 2025-01-19 23:02:58 · 976 chars · 1 view · 0 comments


I am trying to write a simple 1 dimensional convolution with a regression (1 dimensional float) output.

model = Sequential()
model.add(Conv1D(filters=1, kernel_size=8, activation='relu'))
model.add(Dense(1, 'softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x = x_train, y = y_train, epochs=3)

This gives me the error:

TypeError: Exception encountered when calling layer "conv1d" (type Conv1D).
Input 'filter' of 'Conv2D' Op has type float32 that does not match type int32 of argument 'input'.
Call arguments received:
• inputs=tf.Tensor(shape=(None, 30931, 4), dtype=int32)

Even if my code is wrong, how is it possible I am getting a Conv2D error without even having a Conv2D layer?

x_train is a numpy array of 3361 training examples, each 1d array of length 30931, with 4 channels of np.int32 data. shape = (3361,30931, 4)

y_train is a numpy array of 3361 np.float64 values I am training my network to recognize.

Should this format of input data work? Or do I need to transform it or use another data type?

Do I need an input_shape parameter in my Conv1D layer? If so, what should it be?

I realize this is oversimplified, and plan a much more complex network to train against many more examples, but just want this running first.


Comments (1)

妞丶爷亲个 2025-01-26 23:02:58


Your x_train data should be of the data type float. Also, you usually flatten your 2D data into 1D or apply some global pooling operation before feeding it into a softmax output layer:

import tensorflow as tf

# Dummy data: float32 inputs, integer labels one-hot encoded for 2 classes
x_train = tf.random.normal((10, 30931, 4), dtype=tf.float32)
y_train = tf.random.uniform((10,), maxval=2, dtype=tf.int32)
y_train = tf.keras.utils.to_categorical(y_train, 2)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(filters=1, kernel_size=8, activation='relu'))
model.add(tf.keras.layers.Flatten())  # collapse (timesteps, filters) before the Dense head
model.add(tf.keras.layers.Dense(2, 'softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=x_train, y=y_train, epochs=3)

Regarding your error message: both the Conv1D and Conv2D layers use the tf.nn.convolution operation internally, and a 1D convolution is implemented by expanding the input and invoking the 2D convolution op, which is why the traceback mentions a 'Conv2D' op even though your model has no Conv2D layer. The actual failure is a dtype mismatch: the layer's filter argument is float32, while your input tensor is int32, and the convolution op requires both to have the same dtype.
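Since the original question asks for a regression (single float) output rather than classification, a minimal regression-oriented sketch might look like the following. The dummy arrays stand in for the question's x_train/y_train, and the pooling and loss choices here are assumptions on my part, not from the answer above:

```python
import numpy as np
import tensorflow as tf

# Dummy stand-ins for the question's data: int32 features cast to float32
x_train = np.random.randint(0, 10, size=(8, 30931, 4)).astype(np.float32)
y_train = np.random.rand(8).astype(np.float32)  # one float target per example

model = tf.keras.Sequential([
    # input_shape = (timesteps, channels) per example, i.e. (30931, 4) here
    tf.keras.layers.Conv1D(filters=1, kernel_size=8, activation='relu',
                           input_shape=(30931, 4)),
    tf.keras.layers.GlobalAveragePooling1D(),  # collapse the time axis
    tf.keras.layers.Dense(1),  # linear output for a single-float regression target
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(x_train, y_train, epochs=1, verbose=0)
```

Note that softmax with categorical_crossentropy is for classification; for a single-float target, a linear Dense(1) output with mse (or mae) loss is the usual choice.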
