Weighted standard deviation in NumPy

Posted 2024-08-24 16:52:11


numpy.average() has a weights option, but numpy.std() does not. Does anyone have suggestions for a workaround?


Answers (7)

一世旳自豪 2024-08-31 16:52:11


How about the following short "manual calculation"?

import math
import numpy

def weighted_avg_and_std(values, weights):
    """
    Return the weighted average and standard deviation.

    The weights are in effect first normalized so that they
    sum to 1 (and so they must not all be 0).

    values, weights -- NumPy ndarrays with the same shape.
    """
    average = numpy.average(values, weights=weights)
    # Fast and numerically precise:
    variance = numpy.average((values - average)**2, weights=weights)
    return (average, math.sqrt(variance))
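
A quick usage sketch (the arrays below are made up, just to show the call to the function defined above):

values  = numpy.array([1.0, 2.0, 3.0, 4.0])
weights = numpy.array([1.0, 1.0, 1.0, 5.0])
avg, sd = weighted_avg_and_std(values, weights)
print(avg, sd)  # the weighted mean and the (biased, ddof=0) weighted standard deviation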

    
回首观望 2024-08-31 16:52:11


There is a class in statsmodels that makes it easy to calculate weighted statistics: statsmodels.stats.weightstats.DescrStatsW.

Assuming this dataset and weights:

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

array = np.array([1,2,1,2,1,2,1,3])
weights = np.ones_like(array)
weights[3] = 100

You initialize the class (note that you have to pass in the correction factor, the delta degrees of freedom at this point):

weighted_stats = DescrStatsW(array, weights=weights, ddof=0)

Then you can calculate:

  • .mean the weighted mean:

    >>> weighted_stats.mean      
    1.97196261682243
    
  • .std the weighted standard deviation:

    >>> weighted_stats.std       
    0.21434289609681711
    
  • .var the weighted variance:

    >>> weighted_stats.var       
    0.045942877107170932
    
  • .std_mean the standard error of weighted mean:

    >>> weighted_stats.std_mean  
    0.020818822467555047
    

    Just in case you're interested in the relation between the standard error and the standard deviation: The standard error is (for ddof == 0) calculated as the weighted standard deviation divided by the square root of the sum of the weights minus 1 (corresponding source for statsmodels version 0.9 on GitHub):

    standard_error = standard_deviation / sqrt(sum(weights) - 1)
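
    A quick numerical check of that relation, re-running the setup from above (a sketch; the two printed values should agree up to floating-point rounding):

    import numpy as np
    from statsmodels.stats.weightstats import DescrStatsW

    array = np.array([1, 2, 1, 2, 1, 2, 1, 3])
    weights = np.ones_like(array)
    weights[3] = 100
    weighted_stats = DescrStatsW(array, weights=weights, ddof=0)

    print(weighted_stats.std_mean)
    print(weighted_stats.std / np.sqrt(weights.sum() - 1))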
    
梦开始←不甜 2024-08-31 16:52:11


Here's one more option:

np.sqrt(np.cov(values, aweights=weights))
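
Note that np.cov uses an unbiased-style normalization by default; passing ddof=0 should reduce it to the same biased weighted variance as the "manual calculation" in the first answer. A sketch with placeholder arrays:

import numpy as np

values  = np.array([1.0, 2.0, 3.0, 4.0])
weights = np.array([1.0, 1.0, 1.0, 5.0])

# Default normalization (an unbiased-style weighted estimate)
print(np.sqrt(np.cov(values, aweights=weights)))
# ddof=0 gives sum(w*(x - mean)**2) / sum(w), the biased weighted variance
print(np.sqrt(np.cov(values, aweights=weights, ddof=0)))
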
如痴如狂 2024-08-31 16:52:11


There doesn't appear to be such a function in numpy/scipy yet, but there is a ticket proposing that this functionality be added. There you will find Statistics.py, which implements weighted standard deviations.

不喜欢何必死缠烂打 2024-08-31 16:52:11


There is a very good example proposed by gaborous:

import pandas as pd
import numpy as np

# X is the dataset, as a Pandas DataFrame;
# weights is a 1-D array of sample weights, one per row of X.
# Compute the weighted sample mean (fast, efficient and precise)
mean = np.ma.average(X, axis=0, weights=weights)

# Convert to a Pandas Series (it's just aesthetic and more
# ergonomic; no difference in computed values)
mean = pd.Series(mean, index=list(X.keys()))
xm = X - mean  # xm = X, centered on the weighted mean
# Fill NaN with 0: a variance of 0 is just void, but at least it keeps the
# other covariance values computed correctly
xm = xm.fillna(0)
# Compute the unbiased weighted sample covariance
sigma2 = 1. / (weights.sum() - 1) * xm.mul(weights, axis=0).T.dot(xm)

Correct equation for weighted unbiased sample covariance, URL (version: 2016-06-28)
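
A minimal usage sketch for the snippet above, with a hypothetical two-column DataFrame and a matching weight vector; the per-column weighted standard deviations are the square roots of the diagonal of sigma2:

import numpy as np
import pandas as pd

X = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0],
                  "b": [2.0, 1.0, 4.0, 3.0]})
weights = np.array([1.0, 1.0, 1.0, 5.0])

mean = np.ma.average(X, axis=0, weights=weights)
mean = pd.Series(mean, index=list(X.keys()))
xm = (X - mean).fillna(0)
sigma2 = 1. / (weights.sum() - 1) * xm.mul(weights, axis=0).T.dot(xm)
print(np.sqrt(np.diag(sigma2)))  # weighted (unbiased-style) standard deviation per column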

淡淡的优雅 2024-08-31 16:52:11


A follow-up on the "sample" or "unbiased" standard deviation in the "frequency weights" sense, since a Google search for "weighted sample standard deviation python" leads to this post:

from math import sqrt

def frequency_sample_std_dev(X, n):
    """
    Sample standard deviation for X and n,
    where X[i] is the quantity each person in group i has,
    and n[i] is the number of people in group i.
    See Equation 6.4 of:
    Montgomery, Douglas C., and George C. Runger. Applied Statistics
    and Probability for Engineers, Enhanced eText. Available from:
    WileyPLUS, (7th Edition). Wiley Global Education US, 2018.
    """
    n_people = sum(n)
    lhs_numerator = sum(ni * Xi**2 for Xi, ni in zip(X, n))
    rhs_numerator = sum(Xi * ni for Xi, ni in zip(X, n))**2 / n_people
    denominator = n_people - 1
    var = (lhs_numerator - rhs_numerator) / denominator
    std = sqrt(var)
    return std

Or modifying the answer by @Eric as follows:

from math import sqrt
import numpy as np

def weighted_sample_avg_std(values, weights):
    """
    Return the weighted average and weighted sample standard deviation.

    values, weights -- NumPy ndarrays with the same shape.

    Assumes that weights contains only integers (e.g. how many samples are in each group).

    See also https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Frequency_weights
    """
    average = np.average(values, weights=weights)
    variance = np.average((values - average)**2, weights=weights)
    variance = variance * sum(weights) / (sum(weights) - 1)
    return (average, sqrt(variance))

print(weighted_sample_avg_std(X, n))
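
As a sanity check, here is a sketch with made-up group data, using the two functions defined above: expanding the frequency weights into repeated samples and computing plain np.mean / np.std(..., ddof=1) should give the same numbers.

import numpy as np

X = np.array([1.0, 2.0, 3.0])  # value held by each member of group i
n = np.array([2, 3, 5])        # number of people in group i

print(weighted_sample_avg_std(X, n))
print(frequency_sample_std_dev(X, n))   # same standard deviation

# Same result from the expanded, un-grouped data:
expanded = np.repeat(X, n)
print(np.mean(expanded), np.std(expanded, ddof=1))
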
画尸师 2024-08-31 16:52:11


I was just searching for an API equivalent of the numpy np.std function that also allows the axis parameter to be set:

(I have only tested it with two dimensions, so feel free to improve it if something is incorrect.)

import numpy as np

def std(values, weights=None, axis=None):
    """
    Return the weighted standard deviation.

    axis -- the axis for the std calculation
    values, weights -- NumPy ndarrays with the same shape along the given axis.
    """
    average = np.average(values, weights=weights, axis=axis)
    if axis is not None:
        # Re-insert the reduced axis so the mean broadcasts against `values`
        average = np.expand_dims(average, axis=axis)
    # Fast and numerically precise:
    variance = np.average((values - average)**2, weights=weights, axis=axis)
    return np.sqrt(variance)

Thanks to Eric O Lebigot for the original answer.
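
A quick 2-D usage sketch with placeholder data: with uniform weights the result should match plain np.std along the same axis.

import numpy as np

values  = np.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 7.0]])
weights = np.ones_like(values)

print(std(values, weights=weights, axis=1))
print(np.std(values, axis=1))  # should agree with the line above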
