Can I override the hyperparameters after an Optuna trial object has already suggested values?
Occasionally Optuna will suggest a sample that I don't really want to evaluate - usually either because it is the same as, or too close to, a previously evaluated solution. In this case I would like to evaluate a random sample instead.
Is there a way to overwrite the suggested values from trial.suggest_float() (for example) for a single trial and pass that back to the optimiser?
Given two methods: eval_check(), which takes a list of variables and determines whether the sample should be evaluated, returning True if it is to be evaluated and False if not; and evaluate(), which takes a list of variables and evaluates them, returning a real number - below is a sketch of what I am trying to achieve.
import optuna
import numpy as np
import random

class Objective(object):
    def __call__(self, trial):
        # suggest values
        x1 = trial.suggest_float('x1', 0.0, 1.0)
        x2 = trial.suggest_int('x2', 10, 20)
        x3 = trial.suggest_categorical('x3', ['cat1', 'cat2', 'cat3'])
        # check if we should evaluate; if not, draw random values instead
        while not eval_check([x1, x2, x3]):
            x1 = np.random.uniform(0.0, 1.0)
            x2 = np.random.randint(10, 21)  # upper bound is exclusive, so 21 matches suggest_int(10, 20)
            x3 = random.choice(['cat1', 'cat2', 'cat3'])
        return evaluate([x1, x2, x3])

sampler = optuna.samplers.NSGAIISampler(population_size=100)
study = optuna.create_study(sampler=sampler)
study.optimize(Objective(), n_trials=1000)
Now, I know this won't work as I want it to. Say the Optuna trial object suggests the sample [0.5, 15, 'cat2'], but eval_check doesn't like that sample, so the random sample [0.2, 18, 'cat1'] is drawn instead. If I return the output of evaluate([0.2, 18, 'cat1']), then Optuna will think it is the output of evaluate([0.5, 15, 'cat2']) and associate that score with the wrong sample in its model.
Is there a way that I can overwrite the suggested hyperparameters in the trial object such that, when I return the score, Optuna will associate that score with the new overwritten hyperparameters in its model?
Optuna doesn't assume that users overwrite the parameters of a trial. Instead, how about adding a constraint to the search space?
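Here is a minimal sketch of the constraint route, assuming a recent Optuna version in which NSGAIISampler accepts a constraints_func argument; eval_check and evaluate are the functions from the question, and the "constraint" user-attribute name is just an illustrative choice. A trial is treated as feasible when every value returned by constraints_func is <= 0. Note that the objective is still evaluated for undesired samples; NSGA-II simply de-prioritises infeasible trials during selection.

import optuna

def constraints_func(trial):
    # <= 0 means feasible; read the value stored by the objective below.
    return (trial.user_attrs["constraint"],)

class Objective(object):
    def __call__(self, trial):
        x1 = trial.suggest_float('x1', 0.0, 1.0)
        x2 = trial.suggest_int('x2', 10, 20)
        x3 = trial.suggest_categorical('x3', ['cat1', 'cat2', 'cat3'])
        # Encode eval_check as a constraint value: -1.0 (feasible) or 1.0 (infeasible).
        trial.set_user_attr("constraint", -1.0 if eval_check([x1, x2, x3]) else 1.0)
        return evaluate([x1, x2, x3])

sampler = optuna.samplers.NSGAIISampler(population_size=100, constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(Objective(), n_trials=1000)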
Or, raising optuna.TrialPruned for an undesired combination of parameters also skips computing the objective function when not eval_check([x1, x2, x3]) becomes True.
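And a sketch of the pruning route: raise optuna.TrialPruned inside the objective when eval_check rejects the suggested sample, so the expensive evaluation is skipped and the trial is recorded as pruned rather than being associated with a misleading score (again, eval_check and evaluate are the question's functions). Pruned trials still count towards n_trials.

import optuna

class Objective(object):
    def __call__(self, trial):
        x1 = trial.suggest_float('x1', 0.0, 1.0)
        x2 = trial.suggest_int('x2', 10, 20)
        x3 = trial.suggest_categorical('x3', ['cat1', 'cat2', 'cat3'])
        # Skip the expensive evaluation for an undesired sample; the trial
        # is marked as pruned instead of being given a score.
        if not eval_check([x1, x2, x3]):
            raise optuna.TrialPruned()
        return evaluate([x1, x2, x3])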