How to do 2D convolution only at a specific location?
This question has been asked multiple times, but I still could not get what I was looking for. Imagine

```python
data = np.random.rand(N, N)    # shape N x N
kernel = np.random.rand(M, M)  # shape M x M
```

I know convolution typically means placing the kernel all over the data. But in my case `N` and `M` are of the order of 10000. So I wish to get the value of the convolution at a specific location in the data, say at (10, 37), without doing unnecessary calculations at all locations. The output will then be just a single number. The main goal is to reduce the computation and memory expense. Is there any built-in function that does this with minimal adjustments?
Comments (1)
Indeed, applying the convolution at a particular position coincides with the mere sum over the entries of a (pointwise) multiplication of the corresponding submatrix in `data` and the flipped kernel itself. Here is a reproducible example, with code and output.
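A minimal sketch of this idea, assuming the index marks the upper-left corner of the window (the helper name `conv_at` is my own, and the cross-check against `scipy.signal.convolve2d` is added for verification):

```python
import numpy as np
from scipy.signal import convolve2d

def conv_at(data, kernel, i, j):
    """Convolution value at a single position, where (i, j) is the
    upper-left corner of the window the kernel overlaps."""
    M = kernel.shape[0]
    flipped = kernel[::-1, ::-1]  # flip both axes, per the definition of convolution
    return np.sum(data[i:i + M, j:j + M] * flipped)

N = 100
data = np.random.rand(N, N)
kernel = np.random.rand(3, 3)

value = conv_at(data, kernel, 10, 37)  # a single number
print(value)

# cross-check against the full convolution restricted to 'valid' positions
full = convolve2d(data, kernel, mode='valid')
print(np.isclose(value, full[10, 37]))  # True
```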
The `kernel` is being flipped by the definition of convolution, as explained here; this was kindly pointed out by Warren Weckesser. Thanks! The key is to make sense of the index you provided. I assumed it refers to the upper-left corner of the sub-matrix in `data`. However, it can refer to the midpoint as well when `M` is odd.
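For the midpoint convention, a sketch of the same idea (again, the helper name is mine; it assumes the window stays inside `data`):

```python
import numpy as np

def conv_at_center(data, kernel, i, j):
    """Convolution value with (i, j) interpreted as the midpoint of the
    window; assumes M is odd so the kernel has a well-defined center."""
    M = kernel.shape[0]
    r = M // 2  # half-width of the window
    flipped = kernel[::-1, ::-1]
    return np.sum(data[i - r:i + r + 1, j - r:j + r + 1] * flipped)
```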
Concept

A different example with `N = 7` and `M = 3` exemplifies the idea and is presented here, for a 3 x 3 kernel which, when flipped, is reversed along both axes.
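For concreteness, one such 3 x 3 kernel and its flip; the values below are placeholders of my own choosing:

```python
import numpy as np

kernel = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

print(kernel[::-1, ::-1])
# [[9 8 7]
#  [6 5 4]
#  [3 2 1]]
```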
EDIT 1:
Please note that the lecturer in this video does not explicitly mention that flipping the kernel is required before the pointwise multiplication to adhere to the mathematically proper definition of convolution.
EDIT 2:
For large `M` and a target index close to the boundary of `data`, a `ValueError: operands could not be broadcast together with shapes ...` might be thrown. This can be prevented by padding the matrix `data` with zeros (although this blows up the memory requirement). I.e.
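A minimal sketch of the padding, reusing `data`, `kernel`, and the hypothetical `conv_at` helper from the example above:

```python
import numpy as np

M = kernel.shape[0]
pad = M - 1
# a zero border of width M - 1 ensures that every M x M window anchored
# at a valid index of the original array fits inside the padded one
padded = np.pad(data, pad, mode='constant', constant_values=0)

# position (i, j) of `data` sits at (i + pad, j + pad) in `padded`
value = conv_at(padded, kernel, 10 + pad, 37 + pad)
```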