Fastest way to compute edges (derivatives) of a large torch tensor

Posted 2025-02-07 21:11:39


Given a tensor with shape (B, C, H, W), I want to extract the edges of the spatial data, that is, compute the x and y direction derivatives over the (H, W) dimensions and then the magnitude I = sqrt(|x_amplitude|^2 + |y_amplitude|^2).


My current implementation is as follows:

import numpy as np
import torch
from scipy import ndimage

row_mat = np.asarray([[0, 0, 0], [1, 0, -1], [0, 0, 0]])  # central difference along x
col_mat = row_mat.T                                        # central difference along y
row_mat = row_mat[None, None, :, :]  # expand dims to convolve with a (batch, channel, height, width) tensor
col_mat = col_mat[None, None, :, :]  # expand dims to convolve with a (batch, channel, height, width) tensor

def derivative(batch: torch.Tensor) -> torch.Tensor:
    """
    Uses convolution to compute x and y derivatives.
    :param batch: input tensor batch
    :return: image derivative magnitudes
    """
    x_amplitude = ndimage.convolve(batch, row_mat)
    y_amplitude = ndimage.convolve(batch, col_mat)
    magnitude = np.sqrt(np.abs(x_amplitude) ** 2 + np.abs(y_amplitude) ** 2)
    return torch.tensor(magnitude)

I was wondering if there's a faster way, since this approach convolves with the finite-difference definition of the derivative, so there might be performance downsides to that.


P.S. To test this, you can use the tensor torch.randn(1000, 128, 28, 28), as these are the dimensions I'm dealing with.
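A minimal timing sketch for that workload (not from the original post; derivative refers to the function defined above, and CPU execution is assumed):

import time

import torch

batch = torch.randn(1000, 128, 28, 28)  # the stated workload

start = time.perf_counter()
out = derivative(batch)  # the function defined above
elapsed = time.perf_counter() - start
print(f"derivative: {elapsed:.3f} s, output shape {tuple(out.shape)}")

# Note: if this is moved to GPU, call torch.cuda.synchronize() before
# reading the clock, otherwise the timing is inaccurate.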


1 Comment

雪落纷纷 2025-02-14 21:11:39


For this specific operation you might be able to speed things up a bit by doing it "manually":

import torch
import torch.nn.functional as nnf

def derivative(batch: torch.Tensor) -> torch.Tensor:
    # pad batch by one pixel on each side so the output keeps (H, W)
    x = nnf.pad(batch, (1, 1, 1, 1), mode='reflect')
    # central differences along the width (dx) and height (dy) dimensions
    dx2 = (x[..., 1:-1, :-2] - x[..., 1:-1, 2:]) ** 2
    dy2 = (x[..., :-2, 1:-1] - x[..., 2:, 1:-1]) ** 2
    mag = torch.sqrt(dx2 + dy2)
    return mag
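An alternative sketch, not from the thread: the same central-difference stencil expressed as a single depthwise (grouped) F.conv2d call, which keeps everything in torch and also runs on GPU. The name derivative_conv2d and the kernel layout are illustrative assumptions.

import torch
import torch.nn.functional as nnf

def derivative_conv2d(batch: torch.Tensor) -> torch.Tensor:
    # Hypothetical alternative: central-difference kernels applied as a
    # depthwise convolution, one (dx, dy) kernel pair per channel.
    c = batch.shape[1]
    kx = torch.tensor([[0., 0., 0.], [1., 0., -1.], [0., 0., 0.]],
                      device=batch.device, dtype=batch.dtype)
    ky = kx.t()
    # (2, 1, 3, 3) tiled per channel -> (2*C, 1, 3, 3); with groups=C each
    # group sees one input channel and emits its dx and dy responses.
    weight = torch.stack([kx, ky]).unsqueeze(1).repeat(c, 1, 1, 1)
    x = nnf.pad(batch, (1, 1, 1, 1), mode='reflect')
    out = nnf.conv2d(x, weight, groups=c)  # (B, 2*C, H, W), dx/dy interleaved
    dx, dy = out[:, 0::2], out[:, 1::2]
    return torch.sqrt(dx ** 2 + dy ** 2)

On the stated (1000, 128, 28, 28) workload this stays in torch end to end, avoiding the round trip through NumPy/SciPy that the original version incurs.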