Keras - Error occurred when finalizing GeneratorDataset iterator

Posted 2025-02-11 03:09:05 · 3008 characters · 1 view · 0 comments

I'm attempting to train a neural network to evaluate chess positions. I have around 100 CSV files each with about 10,000 positions, leading to roughly 1,000,000 positions in total. Because of the large dataset size, I am using generators to train the network. I am representing the input to the network as a vector of size 768. Here is the training code:

import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import pandas as pd
import numpy as np

train_directory = '...'
test_directory = '...'

def generate_batches(file_list, batch_size, file_directory):
    cnt = 0
    while True:
        file = file_list[cnt]
        cnt = (cnt + 1) % len(file_list)
        data = pd.read_csv(file_directory + file)

        x = np.array(data['Positions'])
        Y = np.array(data['Evaluations'])

        X = []

        for pos in x:
            curr = []
            for num in pos:
                if(num == '-1'):
                    curr.append(-1)
                elif(num == '1'):
                    curr.append(1)
                elif(num == '0'):
                    curr.append(0)
            X.append(curr)

        X = np.array(X)

        for idx in range(0, X.shape[0], batch_size):
            X_loc = X[idx:(idx + batch_size)]
            Y_loc = Y[idx:(idx + batch_size)]
            
            yield X_loc, Y_loc


train_filenames = []
for file in os.listdir(train_directory):
    if(file.endswith('.csv')):
        train_filenames.append(file)

test_filenames = []
for file in os.listdir(test_directory):
    if(file.endswith('.csv')):
        test_filenames.append(file)


train_generator = generate_batches(train_filenames, 10000, train_directory)
test_generator = generate_batches(test_filenames, 10000, test_directory)


model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2048, input_shape=(768,), activation='elu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(2048, activation='elu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(2048, activation='elu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))


opt = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.7, nesterov=True)
stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=30)
save = tf.keras.callbacks.ModelCheckpoint(filepath='weights.h5', save_weights_only=True, save_best_only=True)
model.compile(optimizer=opt, loss='mse')
model.load_weights('weights.h5')
model.fit(steps_per_epoch=len(train_filenames), workers=1, x=train_generator, max_queue_size=32, epochs=100000, callbacks=[stop, save], validation_data=test_generator, validation_steps=len(test_filenames), batch_size=256)

model.save_weights('weights.h5')
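As a side note on the parsing loop inside generate_batches: the character-by-character comparison can be replaced by a single vectorized conversion. This is a minimal sketch assuming each entry in the Positions column is a string of whitespace-separated "1", "0", and "-1" tokens (the exact CSV format is not shown in the question, so adjust the split accordingly); parse_positions is an illustrative name, not from the original code:

```python
import numpy as np

def parse_positions(position_strings):
    # Convert each encoded position string into a vector of -1/0/1 values.
    # Assumes whitespace-separated tokens; change the split() call to match
    # the real CSV encoding.
    return np.array(
        [[int(tok) for tok in s.split()] for s in position_strings],
        dtype=np.int8,
    )

rows = ["1 0 -1 0", "-1 -1 1 0"]
X = parse_positions(rows)
print(X.shape)  # (2, 4)
```

Note that iterating a string in Python yields single characters, so a comparison like num == '-1' can never be true inside a per-character loop; a token-based split sidesteps that class of bug entirely.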

I'm running into a weird issue with the training - it works fine for exactly 36 epochs. The loss goes down, albeit rather slowly, but after 36 epochs the program crashes with the following error:

 W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
 [[{{node PyFunc}}]]

I've seen others with the same question and same error, but none of their solutions fixed my issue. Does anyone know how to approach this?
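This error message typically comes from Keras's legacy Python-generator input path (the PyFunc node in the log) being torn down while the interpreter is already shutting down. One commonly suggested mitigation is to wrap the generator in tf.data.Dataset.from_generator so TensorFlow manages the iterator's lifetime itself. The sketch below uses a synthetic stand-in generator (synthetic_batches is illustrative, not from the question) and assumes each batch is a float32 (batch, 768) feature array with a float32 (batch,) target array:

```python
import numpy as np
import tensorflow as tf

def synthetic_batches(num_batches=3, batch_size=4, dim=768):
    # Stand-in for generate_batches: yields (positions, evaluations) pairs.
    for _ in range(num_batches):
        X = np.random.randint(-1, 2, size=(batch_size, dim)).astype(np.float32)
        Y = np.random.rand(batch_size).astype(np.float32)
        yield X, Y

# Declaring the output signature lets tf.data build a proper input pipeline
# instead of the legacy PyFunc-backed generator wrapper.
dataset = tf.data.Dataset.from_generator(
    synthetic_batches,
    output_signature=(
        tf.TensorSpec(shape=(None, 768), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.float32),
    ),
).prefetch(tf.data.AUTOTUNE)

for X_batch, Y_batch in dataset:
    print(X_batch.shape, Y_batch.shape)
```

A dataset built this way is passed directly as x to model.fit (with steps_per_epoch, but without the workers/max_queue_size arguments, which only apply to the legacy generator path). Whether this resolves the crash at epoch 36 depends on the actual cause, but it removes the PyFunc finalization step named in the error.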
