Need to improve the accuracy of my LSTM model
This is my model
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(df)
train_size = int(len(dataset) * 0.8)
test_size = len(dataset) - train_size
train = dataset[0:train_size, :]
test = dataset[train_size:len(dataset), :]
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)
# reshape into X=t and Y=t+1
look_back = 15
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
print(trainX.shape)
# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))
from keras.layers import Dropout
from keras.layers import Bidirectional
model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(look_back, 1)))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='sigmoid', return_sequences=False))
model.add(Dense(50))
model.add(Dense(50))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.optimizer.learning_rate = 0.0001
Xdata_train = []
Ydata_train = []
Xdata_train, Ydata_train = create_dataset(train, look_back)
Xdata_train = np.reshape(Xdata_train, (Xdata_train.shape[0], Xdata_train.shape[1], 1))
# training for all data
history = model.fit(Xdata_train, Ydata_train, batch_size=1, epochs=10, shuffle=False)
The RMSE value is around 35 and the accuracy is very low. When I increase the epochs there is no variation. What changes should I make to get the accuracy to a high value?
Here I attached the graphical results to give an idea.
How could I fix this?
2 Answers
Just with a once-over on your code, I can think of a few changes. Try using a Bidirectional LSTM, binary_crossentropy for the loss (assuming it's a binary classification), and shuffle=True in training. Also, try adding Dropout between the LSTM layers.
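A minimal sketch of what those suggestions might look like, assuming the task is regression as in the question (so the loss stays mean_squared_error; swap in binary_crossentropy plus a sigmoid output only if it really is binary classification):

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Bidirectional

model = Sequential()
# the Bidirectional wrapper runs the LSTM over the sequence in both directions
model.add(Bidirectional(LSTM(50, return_sequences=True), input_shape=(look_back, 1)))
model.add(Dropout(0.2))  # Dropout between the LSTM layers, as suggested
model.add(Bidirectional(LSTM(50)))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
# shuffle=True so the updates do not always arrive in time order
history = model.fit(trainX, trainY, batch_size=1, epochs=10, shuffle=True)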
Here are a couple of suggestions:
First of all, never fit the normalizer on the entire dataset. First partition your data into train/test parts, fit the scaler on the train data, and then transform both train and test using that scaler. Otherwise you are leaking information from your test data (such as the min/max values, or the std/mean when using a standard scaler) into training when doing the normalization.
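A minimal sketch of that order of operations, assuming df is a pandas DataFrame as in the question:

from sklearn.preprocessing import MinMaxScaler

# split the raw values first, before any scaling
train_size = int(len(df) * 0.8)
train_raw = df.values[:train_size, :]
test_raw = df.values[train_size:, :]

# fit the scaler on the training portion only...
scaler = MinMaxScaler(feature_range=(0, 1))
train = scaler.fit_transform(train_raw)
# ...then apply the already-fitted scaler to the test portion
test = scaler.transform(test_raw)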
You seem to be normalizing your y data but never reverting the normalization, so you end up with output on a lower scale (as we can see on the plots). You can undo the normalization using scaler.inverse_transform().
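For example (a sketch that assumes df has a single column, since inverse_transform expects the same number of features the scaler was fitted on):

import numpy as np

pred = model.predict(testX)  # predictions are on the normalized 0-1 scale
# map predictions and targets back to the original scale
pred_orig = scaler.inverse_transform(pred)
testY_orig = scaler.inverse_transform(testY.reshape(-1, 1))
# RMSE computed on the original scale is now directly comparable to the data
rmse = np.sqrt(np.mean((pred_orig - testY_orig) ** 2))
print(rmse)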
Finally, you may want to remove the sigmoid activation function from the LSTM layer; it's generally not a good idea to use sigmoid anywhere besides the output layer, as it may cause vanishing gradients.
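Concretely, that means letting the last LSTM layer keep its default tanh activation, e.g.:

# before: model.add(LSTM(50, activation='sigmoid', return_sequences=False))
model.add(LSTM(50, return_sequences=False))  # default tanh activation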