Validation accuracy is always zero for an LSTM model on categorical data


I am trying to build an LSTM model for a categorical time series, but the validation accuracy is always zero. I suspected my data were the problem, so I replaced the original values with random numbers; the validation accuracy was still zero. Is there anything wrong with the model?

import os
import sys
import pyodbc as pyodbc
import numpy as np
import pandas as pd
import array as arr

from matplotlib import pyplot
from numpy import array, argmax
from time import process_time

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, InputLayer, LSTM, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping

from sklearn.preprocessing import OneHotEncoder, MinMaxScaler

# 17-Mar-2022: Force run on CPU
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

# CONSTANTS

TRAINING_DATA_RATIO = 0.9
DSN_STRING = 'DSN=M6_local'

LOOK_BACK_WINDOW = 4


# This is to get source data
def get_raw_data():
    
    conn = pyodbc.connect(DSN_STRING)
    cursor = conn.cursor()

    SQL_EXTRACTION = 'select hashID_123 from tbl_M6 order by DrawID'
    
    return pd.read_sql(SQL_EXTRACTION, conn)

# This is to generate random numbers for training data
def generate_raw_data(size):
    arr = np.random.rand(size) * size
    return np.trunc(arr)
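# e.g. generate_raw_data(5) might return array([3., 0., 4., 1., 2.]):
# uniform draws in [0, size) truncated to whole numbers (still float dtype).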


raw_df = generate_raw_data(15180)
raw_df = raw_df.reshape(-1, 1)
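
# All_categories is not defined anywhere in the post. OneHotEncoder needs a
# list of category arrays (one per feature); a plausible stand-in, assuming
# the categories are exactly the truncated values generate_raw_data can emit:
All_categories = [np.arange(15180, dtype=float)]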

oh_encoder = OneHotEncoder(categories=All_categories, sparse=False)
encoded_input = oh_encoder.fit_transform(raw_df)

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    """
    Frame a time series as a supervised learning dataset.
    Arguments:
    data: Sequence of observations as a list or NumPy array.
    n_in: Number of lag observations as input (X).
    n_out: Number of observations as output (y).
    dropnan: Boolean whether or not to drop rows with NaN values.
    Returns:
        Pandas DataFrame of series framed for supervised learning.
    """
    n_vars = 1 if type(data) is list else data.shape[1]
    df = pd.DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
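
# Example: series_to_supervised([1, 2, 3, 4], n_in=2, n_out=1) keeps two rows:
#   var1(t-2)=1, var1(t-1)=2, var1(t)=3
#   var1(t-2)=2, var1(t-1)=3, var1(t)=4
# i.e. each row pairs the lagged observations with the value to predict.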

# Split into training/testing datasets with proportions pratio : (1 - pratio)

def Split_data(pdf, pratio):
    train_size = int(len(pdf) * pratio)
    return pdf.iloc[0:train_size], pdf.iloc[train_size:len(pdf)]


draw_reframe = series_to_supervised(encoded_input, LOOK_BACK_WINDOW, 1)
train, test = Split_data(draw_reframe, TRAINING_DATA_RATIO)
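
# POSSIBLE_OUTCOME_COL is not defined in the post either; it should equal the
# number of one-hot columns, which can be read off the encoded input
# (assumption):
POSSIBLE_OUTCOME_COL = encoded_input.shape[1]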

# Total input width = number of one-hot outcome columns * look-back window size.
ALL_INPUT = POSSIBLE_OUTCOME_COL * LOOK_BACK_WINDOW

# split into input and outputs
train_X, train_y = train.iloc[:,:ALL_INPUT], train.iloc[:,ALL_INPUT:]
test_X, test_y = test.iloc[:,:ALL_INPUT], test.iloc[:,ALL_INPUT:]
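
# The slices above are 2-D DataFrames, but the LSTM below indexes
# train_X.shape[2]; the post presumably omitted the reshape to
# (samples, timesteps, features) -- a sketch of that missing step:
train_X = train_X.values.reshape(-1, LOOK_BACK_WINDOW, POSSIBLE_OUTCOME_COL)
test_X = test_X.values.reshape(-1, LOOK_BACK_WINDOW, POSSIBLE_OUTCOME_COL)
train_y, test_y = train_y.values, test_y.values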

def M6_lstm_model():
    # Hyper-parameters
    INPUT_NODES = 45
    LEARNING_RATE = 0.0001

    model = Sequential()

    model.add(LSTM(INPUT_NODES,
                   return_sequences=False,
                   input_shape=(train_X.shape[1], train_X.shape[2]),
                   activation='relu'))

    
    # Output layer (note: linear activation; no softmax is applied here)
    #model.add(Dense(units=POSSIBLE_OUTCOME_COL, activation='relu'))
    model.add(Dense(units=train_y.shape[1]))

    model.compile(
        loss='categorical_crossentropy',
        optimizer=keras.optimizers.Adam(learning_rate=LEARNING_RATE),
        metrics=['categorical_accuracy']
    )
    
    return model

lstm_model = M6_lstm_model()
lstm_model.summary() 

custom_early_stopping = EarlyStopping(
    monitor='categorical_accuracy', 
    patience=10, 
    min_delta=0.001, 
    mode='max'
)
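# Note: 'categorical_accuracy' monitors the training metric; passing
# 'val_categorical_accuracy' would make early stopping track validation instead.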

EPOCHS = 50

history = lstm_model.fit(
    train_X, train_y,
    epochs=EPOCHS,
    batch_size=16,
    validation_data=(test_X, test_y),
    verbose=1,
    shuffle=False,
    callbacks=[custom_early_stopping]
)

The output looks like the following:

Epoch 1/20
854/854 [==============================] - 54s 62ms/step - loss: 11.6208 - categorical_accuracy: 0.0000e+00 - val_loss: 13.1296 - val_categorical_accuracy: 0.0000e+00
Epoch 2/20
854/854 [==============================] - 32s 38ms/step - loss: 12.9591 - categorical_accuracy: 7.3217e-05 - val_loss: 11.5824 - val_categorical_accuracy: 0.0000e+00
Epoch 3/20
854/854 [==============================] - 32s 38ms/step - loss: 12.8105 - categorical_accuracy: 1.4643e-04 - val_loss: 12.4107 - val_categorical_accuracy: 0.0000e+00
Epoch 4/20
854/854 [==============================] - 31s 37ms/step - loss: 12.7316 - categorical_accuracy: 1.4643e-04 - val_loss: 10.9091 - val_categorical_accuracy: 0.0000e+00
Epoch 5/20
854/854 [==============================] - 32s 37ms/step - loss: 13.4749 - categorical_accuracy: 2.1965e-04 - val_loss: 10.9705 - val_categorical_accuracy: 0.0000e+00
Epoch 6/20
854/854 [==============================] - 32s 38ms/step - loss: 13.2239 - categorical_accuracy: 2.9287e-04 - val_loss: 11.6188 - val_categorical_accuracy: 0.0000e+00
Epoch 7/20
854/854 [==============================] - 32s 38ms/step - loss: 13.5012 - categorical_accuracy: 2.9287e-04 - val_loss: 10.6353 - val_categorical_accuracy: 0.0000e+00
Epoch 8/20
854/854 [==============================] - 32s 37ms/step - loss: 13.4562 - categorical_accuracy: 2.9287e-04 - val_loss: 9.8759 - val_categorical_accuracy: 0.0000e+00
Epoch 9/20
854/854 [==============================] - 32s 37ms/step - loss: 13.6172 - categorical_accuracy: 2.1965e-04 - val_loss: 12.6144 - val_categorical_accuracy: 0.0000e+00
Epoch 10/20
854/854 [==============================] - 32s 37ms/step - loss: 13.3903 - categorical_accuracy: 3.6609e-04 - val_loss: 9.6623 - val_categorical_accuracy: 0.0000e+00
Epoch 11/20
854/854 [==============================] - 32s 37ms/step - loss: 12.9621 - categorical_accuracy: 3.6609e-04 - val_loss: 12.8088 - val_categorical_accuracy: 0.0000e+00
Epoch 12/20
854/854 [==============================] - 32s 38ms/step - loss: 13.4995 - categorical_accuracy: 2.1965e-04 - val_loss: 9.7154 - val_categorical_accuracy: 0.0000e+00
Epoch 13/20
854/854 [==============================] - 32s 38ms/step - loss: 13.4103 - categorical_accuracy: 2.1965e-04 - val_loss: 12.4104 - val_categorical_accuracy: 0.0000e+00
Epoch 14/20
854/854 [==============================] - 32s 38ms/step - loss: 13.8077 - categorical_accuracy: 8.0539e-04 - val_loss: 10.1903 - val_categorical_accuracy: 0.0000e+00
Epoch 15/20
854/854 [==============================] - 32s 37ms/step - loss: 13.8100 - categorical_accuracy: 6.5895e-04 - val_loss: 9.7783 - val_categorical_accuracy: 0.0000e+00
Epoch 16/20
854/854 [==============================] - 32s 37ms/step - loss: 13.8371 - categorical_accuracy: 5.8574e-04 - val_loss: 12.1615 - val_categorical_accuracy: 0.0000e+00
Epoch 17/20
854/854 [==============================] - 32s 38ms/step - loss: 14.0756 - categorical_accuracy: 5.1252e-04 - val_loss: 9.9183 - val_categorical_accuracy: 0.0000e+00
Epoch 18/20
854/854 [==============================] - 32s 38ms/step - loss: 14.2117 - categorical_accuracy: 4.3930e-04 - val_loss: 10.1652 - val_categorical_accuracy: 0.0000e+00
Epoch 19/20
854/854 [==============================] - 32s 37ms/step - loss: 14.4263 - categorical_accuracy: 3.6609e-04 - val_loss: 9.9861 - val_categorical_accuracy: 0.0000e+00
Epoch 20/20
854/854 [==============================] - 32s 37ms/step - loss: 14.2520 - categorical_accuracy: 3.6609e-04 - val_loss: 10.3836 - val_categorical_accuracy: 0.0000e+00
