Loop-free 3D matrix multiplication in Python (numpy)
I am looking to do the following operation in python (numpy).
Matrix A is M x N x R
Matrix B is N x 1 x R
Matrix multiply AB = C, where C is a M x 1 x R matrix.
Essentially, each M x N layer of A (R of them) is matrix-multiplied independently by the corresponding N x 1 vector in B. I am sure this is a one-liner. I have been trying to use tensordot(), but that seems to be giving me answers that I don't expect.
I have been programming in Igor Pro for nearly 10 years, and I am now trying to convert pages of it over to python.
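For reference, here is the operation described above written out as the explicit loop that a one-liner would replace. The sizes (M=4, N=3, R=5) and variable names are assumptions chosen for illustration, not from the question:

```python
import numpy as np

# Small illustrative sizes (assumptions, not from the question)
M, N, R = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N, R))
B = rng.standard_normal((N, 1, R))

# The operation spelled out as an explicit loop over the R layers:
# each M x N layer of A multiplies the corresponding N x 1 vector of B.
C = np.empty((M, 1, R))
for r in range(R):
    C[:, :, r] = A[:, :, r] @ B[:, :, r]
```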
Sorry for the necromancy, but this answer can be substantially improved upon, using the invaluable np.einsum.
Note that it has several advantages: first of all, it's fast. np.einsum is generally well-optimized, but moreover, np.einsum is smart enough to avoid creating an MxNxR temporary array and instead performs the contraction over N directly.
But perhaps more importantly, it's very readable. There is no doubt that this code is correct, and you could make it a lot more complicated without any trouble.
Note that the dummy 'D' axis can simply be dropped from B and the einsum statement if you wish.
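A minimal sketch of the einsum approach described above, both with and without the dummy axis. The sizes and variable names are assumptions for illustration:

```python
import numpy as np

# Small illustrative sizes (assumptions, not from the original post)
M, N, R = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N, R))
B = rng.standard_normal((N, 1, R))

# Contract over n; 'd' is the dummy length-1 axis of B.
C = np.einsum('mnr,ndr->mdr', A, B)           # shape (M, 1, R)

# With the dummy axis dropped from B and from the subscripts:
C2 = np.einsum('mnr,nr->mr', A, B[:, 0, :])   # shape (M, R)
```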
numpy.tensordot() is the right way to do it:
Edit: The first version of this was faulty, and this version computes more than it should and throws away most of it. Maybe a Python loop over the last axis is the better way to do it.
Another edit: I've come to the conclusion that numpy.tensordot() is not the best solution here; a different approach will be more efficient (though even harder to grasp).
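The tensordot approach described above can be sketched as follows; note how it builds an (M, R, 1, R) intermediate and then discards most of it, which is what the edit means by "computes more than it should." The sizes and variable names are assumptions for illustration:

```python
import numpy as np

# Small illustrative sizes (assumptions, not from the original post)
M, N, R = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N, R))
B = rng.standard_normal((N, 1, R))

# Contract A's N axis (axis 1) with B's N axis (axis 0). This yields an
# (M, R, 1, R) array: every layer of A multiplied by every vector of B.
full = np.tensordot(A, B, axes=(1, 0))

# Only the entries where the two R indices match are wanted, so take the
# diagonal over the two R axes; the diagonal becomes the last axis,
# giving shape (M, 1, R).
C = full.diagonal(axis1=1, axis2=3)
```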
Another way to do it (easier for those not familiar with Einstein notation, like me) is np.matmul(). The important thing is just to have the matching dimensions ((M, N) x (N, 1)) in the last two indices. For this, use np.transpose().
Example:
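A sketch of this approach, with sizes and variable names chosen for illustration: move the R axis to the front with np.transpose() so that np.matmul() broadcasts over it, then move it back.

```python
import numpy as np

# Small illustrative sizes (assumptions, not from the original post)
M, N, R = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N, R))
B = rng.standard_normal((N, 1, R))

# Put R first so the last two axes are the matrix dimensions:
# (R, M, N) @ (R, N, 1) -> (R, M, 1); matmul broadcasts over the leading R.
C_rm1 = np.matmul(A.transpose(2, 0, 1), B.transpose(2, 0, 1))

# Move R back to the end to get the desired (M, 1, R) layout.
C = C_rm1.transpose(1, 2, 0)
```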