How to perform meshgrid on multiple tensors in parallel?
Let's say we have a tensor x of size [60, 9] and a tensor y of size [60, 9]. Is it possible to do an operation like xx, yy = torch.meshgrid(x, y) such that xx and yy are of size [60, 9, 9], and xx[i, :, :], yy[i, :, :] are basically torch.meshgrid(x[i], y[i])?

The built-in torch.meshgrid operation only accepts 1-D tensors. Is it possible to do the above operation without a for loop, which is inefficient because it does not make use of the GPU's parallel execution?
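For concreteness, here is the desired result written out as the explicit loop the question hopes to avoid (a minimal sketch with random inputs; torch.meshgrid is used with its default ij indexing, which newer PyTorch versions may warn about):

```python
import torch

x = torch.randn(60, 9)
y = torch.randn(60, 9)

xx = torch.empty(60, 9, 9)
yy = torch.empty(60, 9, 9)
for i in range(60):
    # xx[i], yy[i] hold the meshgrid of the i-th rows of x and y.
    xx[i], yy[i] = torch.meshgrid(x[i], y[i])
```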
Comments (2)
I don't believe you will gain anything, since the initialization of the tensors is not done on the GPU. So a proposed approach would indeed be to loop over x and y, or to use map as an iterable:
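The snippet that followed was not preserved on this page; a minimal sketch of the loop/map approach being described might look like this (shapes as in the question):

```python
import torch

x = torch.randn(60, 9)
y = torch.randn(60, 9)

# Loop version: build one (9, 9) meshgrid per row, then stack into (60, 9, 9).
grids = [torch.meshgrid(x[i], y[i]) for i in range(x.size(0))]
xx = torch.stack([g[0] for g in grids])
yy = torch.stack([g[1] for g in grids])

# map version: map iterates over the first dimension of x and y,
# calling torch.meshgrid(x[i], y[i]) for each row i.
grids = list(map(torch.meshgrid, x, y))
xx = torch.stack([g[0] for g in grids])
yy = torch.stack([g[1] for g in grids])
```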
I found a way to do this in the case when a grid is being generated for grid_sample, which is being called for a batch of images. In particular, you have 2 tensors grid_x, grid_y with shape (M, N), where M is the batch size and N is the number of points per grid dimension. The grid you need for grid_sample is of size (M, N, N, 2), and you could obtain it by looping over the first dimension and applying a per-sample meshgrid; a much more efficient way is to broadcast instead. Both are sketched below.
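The original snippets were lost in extraction; sketches consistent with the description (assuming ij-style meshgrid output stacked along a trailing dimension) might be:

```python
import torch

M, N = 8, 16                      # batch size, points per grid dimension
grid_x = torch.randn(M, N)
grid_y = torch.randn(M, N)

# Loop version: one per-sample meshgrid, stacked into (M, N, N, 2).
grid = torch.stack([
    torch.stack(torch.meshgrid(grid_x[i], grid_y[i]), dim=-1)
    for i in range(M)
])

# Vectorized version: broadcast each batch of 1-D coordinates instead.
xx = grid_x.view(M, N, 1).expand(M, N, N)   # grid_x[m, i] along dim 1
yy = grid_y.view(M, 1, N).expand(M, N, N)   # grid_y[m, j] along dim 2
grid_fast = torch.stack((xx, yy), dim=-1)   # also (M, N, N, 2)

assert torch.equal(grid, grid_fast)
```

Note that grid_sample additionally expects the last dimension to hold normalized (x, y) coordinates, so the stacking order used above is an assumption to adapt to your convention.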
It's not very easy to read, but it shows an alternative to meshgrid.