Correct use of fmin_l_bfgs_b to fit model parameters
I have some experimental data (for y, x, t_exp, m_exp), and want to find the "optimal" model parameters (A, B, C, D, E) for this data using the constrained multivariate BFGS method. Parameter E must be greater than 0; the others are unconstrained.
def func(x, A, B, C, D, E, *args):
    return A * (x ** E) * numpy.cos(t_exp) * (1 - numpy.exp((-2 * B * x) / numpy.cos(t_exp))) + numpy.exp((-2 * B * x) / numpy.cos(t_exp)) * C + (D * m_exp)
initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
mybounds = [(None,None), (None,None), (None,None), (None,None), (0, None)]
x,f,d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(m_exp, t_exp), bounds=mybounds)
A few questions:
- Should my model formulation `func` include my independent variable `x`, or should `x` be provided from the experimental data `x_exp` as part of `*args`?
- When I run the above code, I get the error `func() takes at least 6 arguments (3 given)`, which I assume are `x` and my two `*args`... How should I define `func`?
EDIT: Thanks to @zephyr's answer, I now understand that the goal is to minimize the sum of squared residuals, not the actual function. I got to the following working code:
def func(params, *args):
    l_exp = args[0]
    s_exp = args[1]
    m_exp = args[2]
    t_exp = args[3]
    A, B, C, D, E = params
    s_model = A * (l_exp ** E) * numpy.cos(t_exp) * (1 - numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp))) + numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)) * C + (D * m_exp)
    residual = s_exp - s_model
    return numpy.sum(residual ** 2)
initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
mybounds = [(None,None), (None,None), (None,None), (None,None), (0,None)]
x, f, d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(l_exp, s_exp, m_exp, t_exp), bounds=mybounds, approx_grad=True)
I am not sure that the bounds are working correctly. When I specify (0, None) for E, I get warnflag 2, abnormal termination. If I set it to (1e-6, None), it runs fine, but selects 1e-6 as E. Am I specifying the bounds correctly?
I didn't want to try to figure out what the model you're using represented, so here's a simple example fitting to a line:
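The answer's code block did not survive in this copy; a minimal sketch consistent with the description below (fit a line by minimizing the sum of squared residuals, first unbounded, then with a bound that excludes the true slope) might look like:

```python
import numpy
from scipy.optimize import fmin_l_bfgs_b

# Synthetic, noise-free data for the line y = 2*x + 1.
x_data = numpy.linspace(0.0, 10.0, 50)
y_data = 2.0 * x_data + 1.0

# Objective: sum of squared residuals. The first argument holds the
# parameters being optimized; the data arrive via args.
def sse(params, x, y):
    m, b = params
    return numpy.sum((y - (m * x + b)) ** 2)

x0 = numpy.array([0.0, 0.0])

# Unbounded fit: recovers m near 2 and b near 1.
p1, f1, d1 = fmin_l_bfgs_b(sse, x0=x0, args=(x_data, y_data),
                           approx_grad=True)

# Bounds that cap the slope at 1 (excluding the true value 2):
# the optimizer pins m at the bound and compensates with b.
p2, f2, d2 = fmin_l_bfgs_b(sse, x0=x0, args=(x_data, y_data),
                           bounds=[(None, 1.0), (None, None)],
                           approx_grad=True)
```

The names `sse`, `x_data`, and `y_data` are illustrative, not from the original answer.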
The first optimization is unbounded and gives the correct answer; the second respects the bounds, which prevent it from reaching the correct parameters.
The important thing you have wrong is that, for almost all of the optimize functions, 'x' and 'x0' refer to the parameters you are optimizing over; everything else is passed as an argument. It's also important that your fit function return the correct data type: here we want a single value, while some routines expect a vector of errors. Also, you need the approx_grad=True flag unless you want to compute the gradient analytically and provide it.
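On the bounds question above: the `(0, None)` syntax itself is valid. A toy check (my own example, not from the original thread) shows L-BFGS-B converging cleanly onto a lower bound of 0 when the unconstrained minimum lies below it:

```python
import numpy
from scipy.optimize import fmin_l_bfgs_b

# Unconstrained minimum is at x = -1, so the (0, None) bound
# should force the solution to x = 0.
def obj(params):
    return (params[0] + 1.0) ** 2

x, f, d = fmin_l_bfgs_b(obj, x0=numpy.array([5.0]),
                        bounds=[(0, None)], approx_grad=True)
# x lands at the bound; d['warnflag'] reports convergence status.
```

If this converges (warnflag 0) while your model does not, the abnormal termination more likely comes from the objective itself misbehaving near E = 0 than from the bounds specification.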