Loop-free 3D matrix multiplication in Python (numpy)

Posted 2024-10-23 18:24:55

I am looking to do the following operation in python (numpy).

Matrix A is M x N x R
Matrix B is N x 1 x R

Matrix multiply AB = C, where C is a M x 1 x R matrix.
Essentially, each M x N layer of A (there are R of them) is matrix-multiplied independently by the corresponding N x 1 vector in B. I am sure this is a one-liner. I have been trying to use tensordot(), but it seems to be giving me answers that I don't expect.

I have been programming in Igor Pro for nearly 10 years, and I am now trying to convert pages of it over to python.
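For concreteness, the operation described above can be written as an explicit loop over the R layers; this is the baseline any one-liner should reproduce (the shape values here are arbitrary examples):

```python
import numpy as np

M, N, R = 4, 3, 5
A = np.random.rand(M, N, R)
B = np.random.rand(N, 1, R)

# Loop over the R layers: each M x N slice of A multiplies
# the corresponding N x 1 vector of B.
C = np.empty((M, 1, R))
for r in range(R):
    C[:, :, r] = A[:, :, r] @ B[:, :, r]

print(C.shape)  # (4, 1, 5)
```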

Comments (3)

不知在何时 2024-10-30 18:24:55

Sorry for the necromancy, but this answer can be substantially improved upon, using the invaluable np.einsum.

import numpy as np

D,M,N,R = 1,2,3,4
A = np.random.rand(M,N,R)
B = np.random.rand(N,D,R)

print(np.einsum('mnr,ndr->mdr', A, B).shape)

Note that it has several advantages: first of all, it's fast. np.einsum is generally well-optimized, but moreover, it is smart enough to avoid creating an M x N x R temporary array, performing the contraction over N directly.

But perhaps more importantly, it's very readable. There is no doubt that this code is correct, and you could make it a lot more complicated without any trouble.

Note that the dummy 'D' axis can simply be dropped from B and the einsum statement if you wish.
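As a sketch of that last remark (array shapes chosen arbitrarily): with the dummy axis dropped, B becomes shape (N, R) and the subscripts simplify accordingly:

```python
import numpy as np

M, N, R = 2, 3, 4
A = np.random.rand(M, N, R)
B2 = np.random.rand(N, R)  # B without the dummy 'D' axis

# Contract over n directly; the result has shape (M, R).
C2 = np.einsum('mnr,nr->mr', A, B2)
print(C2.shape)  # (2, 4)
```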

薔薇婲 2024-10-30 18:24:55

numpy.tensordot() is the right way to do it:

import numpy

a = numpy.arange(24).reshape(2, 3, 4)
b = numpy.arange(12).reshape(3, 1, 4)
c = numpy.tensordot(a, b, axes=[1, 0]).diagonal(axis1=1, axis2=3)

Edit: The first version of this was faulty, and this version computes more than it should and throws away most of it. Maybe a Python loop over the last axis is the better way to do it.

Another Edit: I've come to the conclusion that numpy.tensordot() is not the best solution here.

c = (a[:,:,None] * b).sum(axis=1)

will be more efficient (though even harder to grasp).
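A quick check (not part of the original answer) that the broadcasting expression matches an explicit per-layer matrix multiply, using the same example arrays:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
b = np.arange(12).reshape(3, 1, 4)

# Broadcasting: a[:, :, None] has shape (2, 3, 1, 4) and b broadcasts
# to (1, 3, 1, 4); summing over axis 1 contracts the shared N axis.
c = (a[:, :, None] * b).sum(axis=1)

# Reference: explicit matrix multiply per layer r.
ref = np.empty((2, 1, 4))
for r in range(4):
    ref[:, :, r] = a[:, :, r] @ b[:, :, r]

print(np.allclose(c, ref))  # True
```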

软糯酥胸 2024-10-30 18:24:55

Another way to do it (easier for those, like me, not familiar with Einstein notation) is np.matmul(). The important thing is just to have the matching dimensions ((M, N) x (N, 1)) in the last two indices; for this, use np.transpose(). Example:

import numpy as np

M, N, R = 4, 3, 10
A = np.ones((M, N, R))
B = np.ones((N, 1, R))

# have the matching dimensions at the very end
C = np.matmul(np.transpose(A, (2, 0, 1)), np.transpose(B, (2, 0, 1))) 
C = np.transpose(C, (1, 2, 0))

print(A.shape)
# out: #(4, 3, 10)
print(B.shape)
# out: #(3, 1, 10)
print(C.shape)
# out: #(4, 1, 10)
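The same idea can be written a bit more compactly with np.moveaxis, which avoids spelling out the full permutation tuples (a variation on the answer above, not part of it):

```python
import numpy as np

M, N, R = 4, 3, 10
A = np.ones((M, N, R))
B = np.ones((N, 1, R))

# Move the R axis to the front so matmul batches over it,
# then move it back to the end afterwards.
C = np.matmul(np.moveaxis(A, -1, 0), np.moveaxis(B, -1, 0))
C = np.moveaxis(C, 0, -1)

print(C.shape)  # (4, 1, 10)
```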