Optimizing relative error with SciPy least_squares

Posted 2025-01-10 11:44:12

I am relatively new to model fitting and SciPy; apologies in advance for any ignorance.

I am trying to fit a non-linear model using scipy.optimize.least_squares.

Here's the function:

import numpy as np

def growthfunction(theta, t):
    # Gompertz growth curve: theta = [asymptote, rate, inflection time]
    return theta[0]*np.exp(-np.exp(-theta[1]*(t - theta[2])))

and some data:

t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]

I first define the residuals:

def fun(theta):
    # residuals: model predictions minus observations
    return growthfunction(theta, t) - observed

Select some random starting parameters to be optimized below:

theta0 = [1, 1, 1]

Then I use least_squares to optimize:

res1 = least_squares(fun, theta0)

This works well, except that least_squares here minimizes the absolute error. My data changes with time, meaning an error of 5 at time point 1 is proportionally larger than an error of 5 at time point 100. I would like to change this so that the relative error is minimized instead.

I tried doing it manually, but if I divide by the predicted values in fun(theta) like so:

def fun(theta):
    # scale each residual by the model prediction -> relative error
    return (growthfunction(theta, t) - observed)/growthfunction(theta, t)

least_squares raises an error that there are too many parameters and it cannot optimize.
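For comparison, here is a sketch of another way to get a relative-error fit, assuming weighting by the observations (rather than the predictions) is acceptable: scipy.optimize.curve_fit takes a sigma argument that divides each residual, so passing sigma=observed minimizes the sum of squared relative errors. The growth wrapper below is just the question's model rewritten in curve_fit's f(t, *params) form.

import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, b, c):
    # the question's Gompertz model, in curve_fit's expected signature
    return a*np.exp(-np.exp(-b*(t - c)))

t = np.array([1, 2, 3, 4])
observed = np.array([3, 10, 14, 17])

# sigma=observed makes curve_fit minimize sum(((growth(t, *p) - observed)/observed)**2)
popt, pcov = curve_fit(growth, t, observed, p0=[1, 1, 1], sigma=observed)
print(popt)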

Comments (2)

聚集的泪 2025-01-17 11:44:13

This works by taking the relative error:

from scipy.optimize import least_squares
import numpy as np

def growthfunction(theta, t):
    return (theta[0]*np.exp(-np.exp(-theta[1]*(t-theta[2]))))

t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]

def fun(theta):
    # relative residuals: each error is scaled by the model prediction
    return (growthfunction(theta, t) - observed)/growthfunction(theta, t)

theta0 = [1,1,1]

res1 = least_squares(fun, theta0)
print(res1)

Output:

>>>  active_mask: array([0., 0., 0.])
        cost: 0.0011991963091748607
         fun: array([ 0.00255037, -0.0175105 ,  0.0397808 , -0.02242228])
        grad: array([ 3.15774533e-13, -2.50283465e-08, -1.46139239e-08])
         jac: array([[ 0.05617851, -0.92486809, -1.94678829],
       [ 0.05730839,  0.28751647, -0.6615416 ],
       [ 0.05408162,  0.27956135, -0.20795969],
       [ 0.05758503,  0.166258  , -0.07376148]])
     message: '`ftol` termination condition is satisfied.'
        nfev: 10
        njev: 10
  optimality: 2.5028346541978996e-08
      status: 2
     success: True
           x: array([17.7550016 ,  1.09927597,  1.52223722])
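To sanity-check the fit, a short sketch (using the names from the answer above): evaluate the model at the fitted parameters res1.x and recompute the relative residuals by hand; the result should reproduce the fun array in the output.

# model predictions at the fitted parameters
fitted = growthfunction(res1.x, t)
# (prediction - observation)/prediction reproduces res1.fun
print((fitted - observed)/fitted)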

梅窗月明清似水 2025-01-17 11:44:13

Without a minimal reproducible example it is very hard to help you, but you can try a more traditional version of relative least squares, which divides by the observations instead:

def fun(theta):
    # divide by the observations, which stay fixed during the fit
    return (growthfunction(theta, t) - observed)/observed

or, perhaps, to guard against small/zero values,

def fun(theta):
    # floor the denominator to guard against small or zero observations
    cutoff = 1e-4
    return (growthfunction(theta, t) - observed)/np.maximum(np.abs(observed), cutoff)
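A minimal self-contained sketch of how this guarded version plugs into least_squares, reusing the question's model and data:

import numpy as np
from scipy.optimize import least_squares

def growthfunction(theta, t):
    return theta[0]*np.exp(-np.exp(-theta[1]*(t - theta[2])))

t = np.array([1, 2, 3, 4])
observed = np.array([3, 10, 14, 17])

def fun(theta):
    cutoff = 1e-4  # guard against (near-)zero observations
    return (growthfunction(theta, t) - observed)/np.maximum(np.abs(observed), cutoff)

res = least_squares(fun, [1, 1, 1])
print(res.x)  # fitted parameters [asymptote, rate, inflection time]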