How do I vectorize this PyTorch snippet?

Posted on 2025-01-18 00:52:55


My PyTorch code is running too slowly because it is not vectorized, and I am unsure how to go about vectorizing it, as I am relatively new to PyTorch. Can someone help me do this or point me in the right direction?

import torch

# H and W are assumed to be defined elsewhere
level_stride = 8
loc = torch.zeros(H * W, 2)
for i in range(H):
    for j in range(W):
        # row index should be i * W + j (i * H + j only lines up when H == W)
        loc[i * W + j][0] = level_stride * (j + 0.5)
        loc[i * W + j][1] = level_stride * (i + 0.5)


Comments (1)

征棹 2025-01-25 00:52:56


First of all, you defined the tensor to be of size (H*W, 2). That is of course entirely optional, but it can be more expressive to preserve the dimensionality explicitly, by keeping H and W as separate dimensions in the tensor. That also makes some operations later on easier.

The values you compute to fill the tensor with originate from ranges. The torch.arange function produces the same ranges, already in the form of tensors, ready to be put into your loc tensor. With that, you can leave out the for loops entirely and treat j and i as tensors in their own right.
If you're not familiar with tensors, this might seem confusing, but operations between scalar values and tensors work just as well, so little of the remaining code has to change.

Here is how your code could look with these changes applied:

level_stride = 8
loc = torch.zeros(H, W, 2)
# column indices 0..W-1, repeated along the H dimension -> shape (H, W)
j = torch.arange(W).expand((H, W))
loc[:, :, 0] = level_stride * (j + 0.5)
# row indices 0..H-1, expanded to (W, H) and transposed -> shape (H, W)
i = torch.arange(H).expand((W, H)).T
loc[:, :, 1] = level_stride * (i + 0.5)

The most notable changes are the assignments to j and i, and the use of slicing to fill the data into loc.
For completeness, let's go over the expressions that are assigned to i and j.

j starts as torch.arange(W), which is just like a regular range, only in the form of a tensor. Then .expand is applied, which you can think of as the tensor being repeated. For example, if H were 5 and W were 2, a range of 2 would be created and expanded to a size of (5, 2). The size of this tensor thereby matches the first two dimensions of the loc tensor.
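To make that concrete, here is a minimal sketch of the expand step for the H = 5, W = 2 example above:

import torch

j = torch.arange(2).expand((5, 2))
print(j)
# tensor([[0, 1],
#         [0, 1],
#         [0, 1],
#         [0, 1],
#         [0, 1]])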

i starts just the same, only with W and H swapping positions, because i originates from a range based on H rather than W. Notable here is the .T applied at the end of the expression: the i tensor still has to match the first two dimensions of loc, so .T transposes it from (W, H) to (H, W).
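For comparison, a minimal sketch of the i expression with the same H = 5, W = 2: the range is expanded to (W, H) = (2, 5) and then transposed back to (H, W):

import torch

i = torch.arange(5).expand((2, 5)).T
print(i)
# tensor([[0, 0],
#         [1, 1],
#         [2, 2],
#         [3, 3],
#         [4, 4]])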

If you have a subject-specific reason to keep the loc tensor in the (H*W, 2) shape but are otherwise happy with this solution, you can reshape the tensor at the end with loc.reshape(H*W, 2).
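As a side note, not part of the answer above: torch.meshgrid with indexing="ij" can build the same i and j grids in a single call. A minimal sketch, assuming H and W are defined elsewhere:

import torch

H, W = 5, 2  # example sizes; substitute your own H and W
level_stride = 8

# indexing="ij" yields i of shape (H, W) varying down rows
# and j of shape (H, W) varying across columns
i, j = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
loc = torch.stack((level_stride * (j + 0.5),
                   level_stride * (i + 0.5)), dim=-1)  # shape (H, W, 2)
loc = loc.reshape(H * W, 2)  # recover the original (H*W, 2) layout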
