Efficient numpy zero-order hold

Is there an efficient way to resample a numpy array using zero-order hold? Ideally something with a signature like that of numpy.interp?

I'm aware of scipy.interpolate.interp1d, but I'm sure a vectorised alternative is available for dealing with cases like this.

4 Answers

情绪少女 2025-01-03 14:33:34

Numpy < 1.12

Since you won't be interpolating any new values, the most efficient way would be to leave the original array as is and index it with floats. This is effectively a zero-order hold.

>>> import numpy as np
>>> A = np.array(range(10))
>>> [A[i] for i in np.linspace(0, 9, num=25)]
[0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 9]

Numpy >= 1.12

Indexing with floats was deprecated in numpy v1.11 and removed in v1.12 (Jan 2017). The code shown above raises an IndexError in current numpy versions.

You can reproduce the float indexing behavior of older numpy versions by using a wrapper to access the array, converting float indices to integers on the fly. When memory efficiency is a concern, this would avoid the need to pre-emptively interpolate intermediate values using numpy.interp or scipy.interpolate.interp1d.
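
A minimal sketch of such a wrapper (the class name FloatIndexView is made up for illustration, not part of numpy): it truncates float indices to integers on access, which reproduces the old behaviour of the snippet above without materialising a resampled copy.

import numpy as np

class FloatIndexView:
    """Truncates float indices to integers on access, emulating pre-1.12 numpy."""
    def __init__(self, data):
        self._data = np.asarray(data)

    def __getitem__(self, idx):
        # Convert scalar or array-like float indices to ints (truncation toward zero).
        return self._data[np.asarray(idx).astype(np.intp)]

A = FloatIndexView(np.arange(10))
print([int(A[i]) for i in np.linspace(0, 9, num=25)])
# [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 9]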

热风软妹 2025-01-03 14:33:34

Numpy arrays no longer allow non-integer indexing, so the accepted solution is no longer valid. scipy.interpolate.interp1d is robust, but not the fastest in many cases. Here's a robust code solution that uses np.searchsorted when it can, and reverts to the scipy version when it can't.

import numpy as np
from scipy.interpolate import interp1d

def zero_order_hold(x, xp, yp, left=np.nan, assume_sorted=False):
    r"""
    Interpolates a function by holding at the most recent value.

    Parameters
    ----------
    x : array_like
        The x-coordinates at which to evaluate the interpolated values.
    xp: 1-D sequence of floats
        The x-coordinates of the data points. If they are not sorted in increasing order, the slower scipy fallback is used.
    yp: 1-D sequence of float or complex
        The y-coordinates of the data points, same length as xp.
    left: int or float, optional, default is np.nan
        Value to use for any x value less than all points in xp
    assume_sorted : bool, optional, default is False
        Whether you can assume the data is sorted and do simpler (i.e. faster) calculations

    Returns
    -------
    y : float or complex (corresponding to yp) or ndarray
        The interpolated values, same shape as x.

    Notes
    -----
    #.  Written by DStauffman in July 2020.

    Examples
    --------
    >>> import numpy as np
    >>> xp = np.array([0., 111., 2000., 5000.])
    >>> yp = np.array([0, 1, -2, 3])
    >>> x = np.arange(0, 6001, dtype=float)
    >>> y = zero_order_hold(x, xp, yp)

    """
    # force arrays
    x  = np.asanyarray(x)
    xp = np.asanyarray(xp)
    yp = np.asanyarray(yp)
    # find the minimum value, as anything left of this is considered extrapolated
    xmin = xp[0] if assume_sorted else np.min(xp)
    # check that xp data is sorted, if not, use slower scipy version
    if assume_sorted or np.all(xp[:-1] <= xp[1:]):
        ix = np.searchsorted(xp, x, side='right') - 1
        return np.where(x < xmin, left, yp[ix])
    func = interp1d(xp, yp, kind='zero', fill_value='extrapolate', assume_sorted=False)
    return np.where(x < xmin, left, func(x))

This passes a bunch of test cases:

import unittest

import numpy as np
from scipy.interpolate import interp1d

class Test_zero_order_hold(unittest.TestCase):
    r"""
    Tests the zero_order_hold function with the following cases:
        Subsample high rate
        Supersample low rate
        xp Not sorted
        x not sorted
        Left extrapolation
        Lists instead of arrays

    Notes
    -----
    #.  Uses scipy.interpolate.interp1d as the gold standard (but it's slower)
    """
    def test_subsample(self):
        xp = np.linspace(0., 100*np.pi, 500000)
        yp = np.sin(2 * np.pi * xp)
        x  = np.arange(0., 350., 0.1)
        func = interp1d(xp, yp, kind='zero', fill_value='extrapolate', assume_sorted=True)
        y_exp = func(x)
        y = zero_order_hold(x, xp, yp)
        np.testing.assert_array_equal(y, y_exp)
        y = zero_order_hold(x, xp, yp, assume_sorted=True)
        np.testing.assert_array_equal(y, y_exp)

    def test_supersample(self):
        xp = np.array([0., 5000., 10000., 86400.])
        yp = np.array([0, 1, -2, 0])
        x  = np.arange(0., 86400.,)
        func = interp1d(xp, yp, kind='zero', fill_value='extrapolate', assume_sorted=True)
        y_exp = func(x)
        y = zero_order_hold(x, xp, yp)
        np.testing.assert_array_equal(y, y_exp)
        y = zero_order_hold(x, xp, yp, assume_sorted=True)
        np.testing.assert_array_equal(y, y_exp)

    def test_xp_not_sorted(self):
        xp    = np.array([0, 10, 5, 15])
        yp    = np.array([0, 1, -2, 3])
        x     = np.array([10, 2, 14,  6,  8, 10, 4, 14, 0, 16])
        y_exp = np.array([ 1, 0,  1, -2, -2,  1, 0,  1, 0,  3])
        y     = zero_order_hold(x, xp, yp)
        np.testing.assert_array_equal(y, y_exp)

    def test_x_not_sorted(self):
        xp    = np.array([0, 5, 10, 15])
        yp    = np.array([0, -2, 1, 3])
        x     = np.array([10, 2, 14,  6,  8, 10, 4, 14, 0, 16])
        y_exp = np.array([ 1, 0,  1, -2, -2,  1, 0,  1, 0,  3])
        y     = zero_order_hold(x, xp, yp)
        np.testing.assert_array_equal(y, y_exp)

    def test_left_end(self):
        xp    = np.array([0, 5, 10, 15, 4])
        yp    = np.array([0, 1, -2, 3, 0])
        x     = np.array([-4, -2, 0, 2, 4, 6])
        y_exp = np.array([-5, -5, 0, 0, 0, 1])
        y     = zero_order_hold(x, xp, yp, left=-5)
        np.testing.assert_array_equal(y, y_exp)

    def test_lists(self):
        xp    = [0, 5, 10, 15]
        yp    = [0, 1, 2, 3]
        x     = [-4, -2, 0, 2, 4, 6, 20]
        y_exp = [-1, -1, 0, 0, 0, 1, 3]
        y     = zero_order_hold(x, xp, yp, left=-1)
        np.testing.assert_array_equal(y, y_exp)

Speed testing on the first two tests shows that the numpy solution is about 100x faster when subsampling high-rate data, and about 3x faster when supersampling low-rate data.
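
A rough sketch of how such a timing comparison might be reproduced (assuming zero_order_hold from above is already defined; this only times the subsample case from the first test, and results will vary by machine):

import timeit

import numpy as np
from scipy.interpolate import interp1d

# Data from the subsample test above: dense breakpoints, sparser query points.
xp = np.linspace(0., 100 * np.pi, 500000)
yp = np.sin(2 * np.pi * xp)
x = np.arange(0., 350., 0.1)

t_np = timeit.timeit(lambda: zero_order_hold(x, xp, yp, assume_sorted=True), number=100)
t_sp = timeit.timeit(
    lambda: interp1d(xp, yp, kind='zero', fill_value='extrapolate', assume_sorted=True)(x),
    number=100)
print(f"searchsorted: {t_np:.3f} s   interp1d: {t_sp:.3f} s")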

情独悲 2025-01-03 14:33:34

A bit late to the party, but here's what I came up with:

from numpy import array, sign, zeros

def signal_zoh(x, y, epsilon=0.001):
    """Fills in the data from a Zero-Order Hold (stair-step) signal."""
    # Spacing between consecutive x samples.
    deltaX = array(x[1:], dtype='float') - x[:-1]
    # Tiny offset so each inserted point sits just past its breakpoint.
    fudge = min(deltaX) * epsilon
    retX = zeros((len(x) * 2 - 1,))
    retY = zeros((len(y) * 2 - 1,))
    # Interleave the original samples with an extra point just past each
    # breakpoint that still carries the previous y value (the "hold").
    retX[0::2] = x
    retX[1::2] = x[1:] + fudge * sign(deltaX)
    retY[0::2] = y
    retY[1::2] = y[:-1]
    return retX, retY
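
For example, a hypothetical plotting snippet (matplotlib assumed, not part of the original answer): the returned pair can be passed straight to a line plot to draw the stair-step.

import numpy as np
import matplotlib.pyplot as plt

x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.array([0.0, 1.0, -1.0, 2.0])
stepX, stepY = signal_zoh(x, y)
plt.plot(stepX, stepY)  # plots the held (stair-step) signal
plt.show()
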
故事与诗 2025-01-03 14:33:34

Here's a numpy-free version with the same signature. The query data need to be in increasing order, because the breakpoints get consumed as you go via the "clever" use of mutable defaults (deques) in the nested function (roughly a factor of 100 speed-up):

def interp0(x, xp, yp):
    """Zeroth order hold interpolation w/ same
    (base) signature as numpy.interp."""
    from collections import deque

    # The deques are mutable defaults, so they persist (and get consumed)
    # across calls to func -- which is why x must be queried in increasing order.
    def func(x0, xP=deque(xp), yP=deque(yp)):
        # Drop breakpoints that x0 has already passed; the front of the
        # deques is then the most recently held sample.
        while len(xP) > 1 and x0 >= xP[1]:
            xP.popleft()  # get rid of default
            yP.popleft()  # data as we go.
        return yP[0]

    return map(func, x)
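
Note that in Python 3 map returns a lazy iterator, so the result usually needs to be materialised; a quick hypothetical check (values chosen for illustration):

xp = [0, 5, 10, 15]
yp = [0.0, 1.0, 2.0, 3.0]
x = [-1.0, 2.0, 5.0, 7.5, 20.0]  # query points must be increasing
print(list(interp0(x, xp, yp)))  # [0.0, 0.0, 1.0, 1.0, 3.0]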