Feeding TF-IDF vectors into an LSTM model

Posted on 2025-02-11 11:26:34

I am trying to feed my TFIDF vector into an LSTM model.
TfidfVectorizer(ngram_range=(1,2), use_idf=True, analyzer='word', max_features = 5000)

Here are the vector shapes:
train_vector.shape = (22895, 5000)
test_vector.shape = (5724, 5000)

I am defining the model as follows:

from tensorflow.keras import models, layers

model = models.Sequential()

model.add(layers.LSTM(64, input_shape=(5000, 1), activation='relu'))
model.add(layers.Dropout(0.2))

model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1, activation='sigmoid'))

Other parameters:

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_vector, y_train, validation_data=(test_vector, y_test), epochs=10, batch_size=1024)

TensorFlow is being used here.

I am getting this error:

ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 5000)

I have tried reshaping the arrays, but the error still shows up. I know an LSTM expects a 3D array, so how can I shape my arrays so they can be fed into the LSTM?


1 Answer

把回忆走一遍 2025-02-18 11:26:34

To add a new dimension to your training and test data, you can try:

# Append a trailing axis: (samples, 5000) -> (samples, 5000, 1)
train_vector = train_vector[..., None]  # or tf.newaxis instead of None
test_vector = test_vector[..., None]    # or tf.newaxis instead of None

or

import tensorflow as tf

train_vector = tf.expand_dims(train_vector, axis=-1)
test_vector = tf.expand_dims(test_vector, axis=-1)
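
One caveat, and this is an assumption about how train_vector was produced: if it is still the scipy sparse matrix returned by TfidfVectorizer.fit_transform / transform, neither the [..., None] indexing nor tf.expand_dims will generally accept it, so convert it to a dense array first. A minimal sketch, using hypothetical names train_sparse and test_sparse for the vectorizer outputs:

import numpy as np

# train_sparse / test_sparse are assumed to be the csr_matrix results of
# vectorizer.fit_transform(...) and vectorizer.transform(...)
train_vector = train_sparse.toarray().astype('float32')[..., None]  # (22895, 5000, 1)
test_vector = test_sparse.toarray().astype('float32')[..., None]    # (5724, 5000, 1)

Note that the dense float32 matrix is roughly 22895 x 5000 x 4 bytes, about 460 MB, so it fits in memory but is not free.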

Also, note that if you have one node in your output layer and you are using a sigmoid activation function, you usually combine it with the binary_crossentropy loss function instead of sparse_categorical_crossentropy, which is usually used for more than 2 classes.
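
Putting both points together, here is a minimal sketch of the corrected compile/fit step, assuming a binary task with 0/1 labels in y_train and keeping the rest of the posted model unchanged:

# Single sigmoid output -> binary_crossentropy instead of sparse_categorical_crossentropy
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_vector, y_train, validation_data=(test_vector, y_test), epochs=10, batch_size=1024)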
