Need to improve the accuracy of my LSTM model

Posted 2025-02-13 16:28:20


This is my model

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

# df is a pandas DataFrame holding the time series (loaded earlier)

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(df)

# split into train and test sets (80/20, no shuffling)
train_size = int(len(dataset) * 0.8)
test_size = len(dataset) - train_size
train = dataset[0:train_size, :]
test = dataset[train_size:len(dataset), :]

# build sliding windows: X is the last look_back values, Y is the next value
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

# reshape into X=t and Y=t+1
look_back = 15
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
print(trainX.shape)

# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))

# stacked LSTM regressor
model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(look_back, 1)))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='sigmoid', return_sequences=False))
model.add(Dense(50))
model.add(Dense(50))
model.add(Dropout(0.2))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

model.optimizer.learning_rate = 0.0001

Xdata_train, Ydata_train = create_dataset(train, look_back)
Xdata_train = np.reshape(Xdata_train, (Xdata_train.shape[0], Xdata_train.shape[1], 1))

# training on all data
history = model.fit(Xdata_train, Ydata_train, batch_size=1, epochs=10, shuffle=False)

The RMSE is around 35 and the accuracy is very low. When I increase the number of epochs, nothing changes. What changes should I make to get high accuracy?
I have attached the graphical results below to give an idea.

[Figures: Dataset; Train and Test Prediction; Result]

How could I fix this?


Comments (2)

夜清冷一曲。 2025-02-20 16:28:21


Just from a once-over of your code, I can think of a few changes. Try using a Bidirectional LSTM, binary_cross_entropy for the loss (assuming this is binary classification), and shuffle=True during training. Also, try adding Dropout between the LSTM layers, as in the sketch below.
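A minimal sketch of those changes, reusing trainX/trainY from the question. Since the target here looks continuous rather than binary, the sketch keeps mean_squared_error as the loss; the batch_size and epochs values are illustrative assumptions, not tuned settings.

from keras.models import Sequential
from keras.layers import LSTM, Bidirectional, Dense, Dropout

look_back = 15

model = Sequential()
# Bidirectional wrappers let each recurrent layer read the window in both directions
model.add(Bidirectional(LSTM(50, return_sequences=True), input_shape=(look_back, 1)))
model.add(Dropout(0.2))  # dropout between the LSTM layers, as suggested
model.add(Bidirectional(LSTM(50, return_sequences=False)))
model.add(Dropout(0.2))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error')

# shuffle=True reorders the (window, target) pairs each epoch; the temporal
# order is already captured inside each window, so this is safe here
history = model.fit(trainX, trainY, batch_size=32, epochs=50, shuffle=True)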

随梦而飞# 2025-02-20 16:28:21


Here are a few suggestions (see the sketch after this list):

  1. First of all, never fit the normalizer on the entire dataset. First
     partition your data into train/test parts, fit the scaler on the train
     data, and then transform both train and test using that scaler. Otherwise
     you are leaking information from your test data into training
     when doing the normalization (such as the min/max values, or the std/mean
     when using a standard scaler).

  2. You seem to be normalizing your y data but never reverting the
     normalization, so you end up with output on a lower scale
     (as we can see in the plots). You can undo the normalization using
     scaler.inverse_transform().

  3. Finally, you may want to remove the sigmoid activation function from the
     LSTM layer; it's generally not a good idea to use sigmoid anywhere
     other than the output layer, as it may cause vanishing gradients.
