Does an LSTM model retain information from previous training?

Posted 2025-01-09 02:20:02


I want to fit the LSTM model to a new dataset on each pass of a loop, so I have implemented it like this:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

#................................define model...........................
# n_input (window length), n_features (features per step) and the nse
# client are defined earlier in the script
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):
        #-----------Create Training--------------------
        df = pd.read_csv('data\\' + v + '.csv', index_col=0)

        # scale the first 80% of the closing prices to [0, 1]
        train = df[['close']].iloc[:int(len(df) * 0.8)]
        scaler = MinMaxScaler()
        scaler.fit(train)
        scaled_train = scaler.transform(train)

        # sliding windows of length n_input, each predicting the next value
        generator = TimeseriesGenerator(scaled_train, scaled_train,
                                        length=n_input, batch_size=1)

        #fit model
        model.fit(generator, epochs=10)

However, I do not see much change in the loss when the model is trained on new data inside the loop.

So should the model definition be inside the for loop? Or does the model retain the information it learned while training on the previous data and start from there when it is trained on new data?

The output looks like this. As you can see, the loss improves during the first iteration, but in the subsequent iteration, although the loss is already very small, it no longer improves. So I am wondering: does the model start from what it learned on the previous data?

ABB
Epoch 1/10
340/340 [==============================] - 5s 9ms/step - loss: 0.0110
Epoch 2/10
340/340 [==============================] - 4s 11ms/step - loss: 0.0036
Epoch 3/10
340/340 [==============================] - 5s 14ms/step - loss: 0.0030
Epoch 4/10
340/340 [==============================] - 5s 15ms/step - loss: 0.0026
Epoch 5/10
340/340 [==============================] - 5s 15ms/step - loss: 0.0023
Epoch 6/10
340/340 [==============================] - 4s 11ms/step - loss: 0.0021
Epoch 7/10
340/340 [==============================] - 3s 9ms/step - loss: 0.0021
Epoch 8/10
340/340 [==============================] - 4s 12ms/step - loss: 0.0018
Epoch 9/10
340/340 [==============================] - 6s 18ms/step - loss: 0.0019
Epoch 10/10
340/340 [==============================] - 4s 13ms/step - loss: 0.0016
2095.0888767714823
COFORGE
Epoch 1/10
341/341 [==============================] - 5s 15ms/step - loss: 5.6781e-04
Epoch 2/10
341/341 [==============================] - 5s 15ms/step - loss: 7.1337e-04
Epoch 3/10
341/341 [==============================] - 3s 9ms/step - loss: 8.9877e-04
Epoch 4/10
341/341 [==============================] - 4s 10ms/step - loss: 6.3606e-04
Epoch 5/10
341/341 [==============================] - 5s 14ms/step - loss: 6.4658e-04
Epoch 6/10
341/341 [==============================] - 6s 17ms/step - loss: 5.7911e-04
Epoch 7/10
341/341 [==============================] - 4s 13ms/step - loss: 5.4928e-04
Epoch 8/10
341/341 [==============================] - 4s 11ms/step - loss: 5.8189e-04
Epoch 9/10
341/341 [==============================] - 5s 14ms/step - loss: 5.8669e-04
Epoch 10/10
341/341 [==============================] - 5s 15ms/step - loss: 5.9930e-04

Comments (1)

秉烛思 2025-01-16 02:20:02


@Stupid_Intern, it looks like at each iteration you train the model on new data, but use the fit from the previous iterations as a starting point (correct me if I am wrong).

This should reduce the time it takes the model to reach a good fit, but it will not make the model fit all of the data well; it will only fit the data at hand that you are currently training it on.
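To directly address the question of whether the model definition belongs inside the for loop: if an independent model per symbol is the goal, then yes, because rebuilding and recompiling the model re-initializes its weights on every pass. A minimal sketch, reusing the imports and the n_input/n_features variables from the question's code:

def build_model():
    # a fresh, untrained copy of the same architecture
    model = Sequential()
    model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):
        df = pd.read_csv('data\\' + v + '.csv', index_col=0)
        train = df[['close']].iloc[:int(len(df) * 0.8)]
        scaled_train = MinMaxScaler().fit_transform(train)
        generator = TimeseriesGenerator(scaled_train, scaled_train,
                                        length=n_input, batch_size=1)
        model = build_model()  # weights start from scratch for each symbol
        model.fit(generator, epochs=10)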

What you want is to fit the model to all of the data at once (see the sketch below). Or, if you want to keep the loop-and-update structure, you can fit the model to all of the accumulated data each time.
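A minimal sketch of the first option, pooling every symbol's training windows into one dataset before a single fit. It assumes n_features is 1 (the single close column, as in the question); the window construction is a hand-rolled equivalent of TimeseriesGenerator, and the batch size of 32 is an arbitrary choice:

import numpy as np

X_all, y_all = [], []
for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):
        df = pd.read_csv('data\\' + v + '.csv', index_col=0)
        train = df[['close']].iloc[:int(len(df) * 0.8)]
        scaled = MinMaxScaler().fit_transform(train)
        # one (window of n_input values, next value) pair per position
        for i in range(len(scaled) - n_input):
            X_all.append(scaled[i:i + n_input])
            y_all.append(scaled[i + n_input])

X_all = np.array(X_all)  # shape: (samples, n_input, 1)
y_all = np.array(y_all)  # shape: (samples, 1)
model.fit(X_all, y_all, epochs=10, batch_size=32)

Because fit shuffles by default, every batch mixes windows from different symbols, so the final weights reflect all of the data rather than whichever file happened to come last.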
