Where to put the random state in Sklearn GridSearch with an MLP?
So I did the following:
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

MLP = MLPRegressor()
parameter_space = {
    'hidden_layer_sizes': [(32,), (32, 16), (32, 16, 8), (32, 16, 8, 4), (32, 16, 8, 4, 2), (32, 32), (32, 32, 32), (32, 32, 32, 32), (32, 32, 32, 32, 32), (16, 8, 4, 2)],
    'activation': ['relu'],
    'solver': ['adam'],
    'learning_rate_init': [1, 0.1, 0.01, 0.001, 0.0001, 0.00001],
    'max_iter': [5000],
    'shuffle': [True, False],
    'random_state': [0],
    'early_stopping': [True, False],
    'n_iter_no_change': [50],
}
gs_MLP = GridSearchCV(estimator=MLP, param_grid=parameter_space, cv=7, n_jobs=-1)
gs_MLP_fit = gs_MLP.fit(X, y)  # X, y defined elsewhere
gs_MLP.score(X, y)
And I noticed that whenever I change the order of the tuples in hidden_layer_sizes, it gives a different answer. First it said (16,8,4,2) was best, and when I moved (16,8,4,2) to the end, it said (32,32,32,32) was best.
I assume this has to do with random_state? Do I have to set it in MLPRegressor() instead, as in MLPRegressor(random_state=0)?
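For reference, the alternative being asked about would look like this; a minimal sketch, assuming X and y are defined as above (the grid is trimmed here for brevity):

from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

# Sketch of the alternative from the question: fix random_state in the
# constructor rather than listing it in the parameter grid.
MLP = MLPRegressor(random_state=0)
parameter_space = {
    'hidden_layer_sizes': [(32,), (32, 16), (16, 8, 4, 2)],  # trimmed for brevity
    'activation': ['relu'],
    'solver': ['adam'],
    'max_iter': [5000],
}
gs_MLP = GridSearchCV(estimator=MLP, param_grid=parameter_space, cv=7, n_jobs=-1)
gs_MLP.fit(X, y)  # X, y assumed defined as in the snippet above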