Python CNN: why do I get different results on different desktops, and what can I do to get the same result on different machines?

Posted on 2025-01-12 16:16:31


I run the same code on the same dataset to train a CNN (convolutional neural network), using the CPU only (no GPU). I have set the random seeds, so each run on a single machine produces the same result.

seed_value = 0

import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import random
random.seed(seed_value)

import numpy as np
np.random.seed(seed_value)
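To illustrate what the seeding above buys you: on a single machine, re-seeding a generator before each run makes the "random" numbers identical from run to run. A minimal numpy sketch (the helper name `seeded_draw` is mine, for illustration):

```python
import numpy as np

def seeded_draw(seed_value=0):
    # Re-seeding before each run makes the pseudo-random stream repeat exactly
    np.random.seed(seed_value)
    return np.random.rand(3).tolist()

run_1 = seeded_draw()
run_2 = seeded_draw()
print(run_1 == run_2)  # → True: same seed, same machine, same numbers
```

This guarantees repeatability of the random *streams* on one machine; it does not by itself guarantee identical training results across machines, which is the question here.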

But I do not know why the results differ when I run the same code on a different machine. What can I do? The code is as follows:

import tensorflow as tf
from tensorflow import keras

def CNNGETPREDICTVAL(train_xx, train_yy, test_xx, inner_fac_len, loop_lr,
                     nvl_val_1, nvl_val_2, nvl_val_3, loop_dst_num):
    # Drop the non-feature columns before reshaping
    train_xx = train_xx.drop('date_time', axis=1)
    test_xx = test_xx.drop(['date_time', 'key_0'], axis=1)

    x_train = train_xx.values.reshape(-1, 1, inner_fac_len, 1)
    # keras.utils.np_utils.to_categorical in older Keras versions
    y_train = keras.utils.to_categorical(train_yy, num_classes=3)
    x_test = test_xx.values.reshape(-1, 1, inner_fac_len, 1)

    model = keras.models.Sequential()
    init_info = keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=2021)
    model.add(keras.layers.Conv2D(nvl_val_1, (1, 3), activation='relu', padding='same',
                                  input_shape=(1, inner_fac_len, 1), kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Conv2D(nvl_val_2, (1, 3), activation='relu', padding='same',
                                  kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Conv2D(nvl_val_3, (1, 3), activation='relu', padding='same',
                                  kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(loop_dst_num, activation='relu', kernel_initializer=init_info))
    model.add(keras.layers.Dense(3, activation='softmax', kernel_initializer=init_info))

    my_optimizer = tf.optimizers.Adam(learning_rate=loop_lr)
    # note: 'categorical_crossentropy' is the usual loss for a softmax output
    model.compile(optimizer=my_optimizer, loss='mse')

    model.fit(x_train, y_train, batch_size=512, epochs=10)
    result = model.predict(x_test, batch_size=512, verbose=0)

    return result
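For context on the `y_train` line above: `to_categorical` one-hot encodes integer class labels into a `(n_samples, num_classes)` matrix. A rough numpy equivalent (the helper name `one_hot` is mine, for illustration):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Mirrors what keras.utils.to_categorical does for integer labels:
    # row i gets a 1.0 in column labels[i], zeros elsewhere
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```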


Comments (1)

ˉ厌 2025-01-19 16:16:31


You are using Keras, so in addition to the Python, random, and numpy seeds you should also fix its backend's random seed. If you use tensorflow as your backend (it is tensorflow by default, but it may be theano):

# Seed value
# Apparently you may use different seed values at each stage
seed_value = 0

# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)

# 2. Set `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)

# 3. Set `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)

# 4. Set the `tensorflow` pseudo-random generator at a fixed value
import tensorflow as tf
tf.random.set_seed(seed_value)
# for older TF 1.x versions:
# tf.set_random_seed(seed_value)
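Even with all of these seeds fixed, results can still differ *across* machines: different CPUs, BLAS builds, and thread counts reduce sums in different orders, and floating-point addition is not associative, so low-order bits diverge and the differences compound over many training steps. A minimal sketch of the root cause:

```python
# Floating-point addition is not associative, so a different reduction
# order (different BLAS, CPU, or thread count) changes the low-order bits.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)      # → False on IEEE-754 doubles
print(abs(left - right))  # a difference on the order of 1e-16
```

If you need stronger guarantees, recent TensorFlow (2.9+) offers `tf.config.experimental.enable_op_determinism()`, which makes op-level reductions deterministic for a given software/hardware configuration; bit-identical results across *different* hardware and library versions are generally not promised by any framework.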