Python CNN: why do I get different results on different desktops? What can I do to get the same results on different machines?
I run the same code on the same dataset to train a CNN (convolutional neural network), using only the CPU (no GPU). I have set the random seeds, so I get the same result every time I run the code on a single machine.
seed_value = 0
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)    # fix Python's hash seed
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"         # force CPU-only execution
import random
random.seed(seed_value)                           # seed Python's built-in RNG
import numpy as np
np.random.seed(seed_value)                        # seed NumPy's RNG
But I do not know why the results differ when I run the same code on a different machine. What can I do? The code is as follows:
# imports assumed by this snippet (not shown in the question)
import tensorflow as tf
import keras

def CNNGETPREDICTVAL(train_xx, train_yy, test_xx, inner_fac_len, loop_lr,
                     nvl_val_1, nvl_val_2, nvl_val_3, loop_dst_num):
    # drop non-feature columns
    train_xx = train_xx.drop('date_time', axis=1)
    test_xx = test_xx.drop(['date_time', 'key_0'], axis=1)

    # reshape to (samples, 1, inner_fac_len, 1) for the Conv2D layers
    x_train = train_xx.values.reshape(-1, 1, inner_fac_len, 1)
    y_train = keras.utils.np_utils.to_categorical(train_yy, num_classes=3)
    x_test = test_xx.values.reshape(-1, 1, inner_fac_len, 1)

    # build the CNN; all weights use a fixed-seed initializer
    model = keras.models.Sequential()
    init_info = keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=2021)
    model.add(keras.layers.Conv2D(nvl_val_1, (1, 3), activation='relu', padding='same', input_shape=(1, inner_fac_len, 1), kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))
    model.add(keras.layers.Conv2D(nvl_val_2, (1, 3), activation='relu', padding='same', kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))
    model.add(keras.layers.Conv2D(nvl_val_3, (1, 3), activation='relu', padding='same', kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(loop_dst_num, activation='relu', kernel_initializer=init_info))
    model.add(keras.layers.Dense(3, activation='softmax', kernel_initializer=init_info))

    # compile, train, and predict
    my_optimizer = tf.optimizers.Adam(learning_rate=loop_lr)
    model.compile(optimizer=my_optimizer, loss='mse')
    model.fit(x_train, y_train, batch_size=512, epochs=10)
    result = model.predict(x_test, batch_size=512, verbose=0)
    return result
1 Answer
You are using Keras, so in addition to the Python, random, and numpy seeds, you should also fix the backend's random seed. If you use TensorFlow as your backend (it is TensorFlow by default, but it could also be Theano):
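A minimal sketch of how to do that, assuming TensorFlow 2.x is the backend (an assumption; the exact call depends on your TensorFlow version) and reusing the seed_value from the question:

import tensorflow as tf
tf.random.set_seed(seed_value)      # seeds the TensorFlow/Keras backend RNG
# For TensorFlow 1.x installations the equivalent call is:
# tf.compat.v1.set_random_seed(seed_value)

Set this together with the PYTHONHASHSEED, random, and numpy seeds before building the model, so all four sources of randomness are fixed.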