Is it possible to know which SciPy / NumPy functions run on multiple cores?

Posted 2024-11-28 01:18:04


I am trying to figure out explicitly which of the functions in SciPy/NumPy run on multiple processors. I can, for example, read in the SciPy reference manual that SciPy makes use of parallel processing, but I am more interested in exactly which functions actually run parallel computations, because not all of them do. The dream scenario would of course be if this were included when you type help(scipy.foo), but that does not seem to be the case.

Any help will be much appreciated.

Best,

Matias

Comments (1)

怪我鬧 2024-12-05 01:18:04


I think the question is better addressed to the BLAS/LAPACK libraries you use rather than to SciPy/NumPy.

Some BLAS/LAPACK libraries, such as MKL, use multiple cores natively, whereas other implementations might not.
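
A quick way to check which implementation your own build is linked against is the show_config() helpers (the exact output format depends on your NumPy/SciPy versions):

import numpy as np
import scipy

np.show_config()     # prints the BLAS/LAPACK build information for NumPy
scipy.show_config()  # same for SciPy

For the multithreaded implementations, the number of cores used is usually controlled by environment variables such as OMP_NUM_THREADS, MKL_NUM_THREADS (MKL) or OPENBLAS_NUM_THREADS (OpenBLAS), set before the interpreter starts.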

To take scipy.linalg.solve as an example, here is a lightly edited version of its source code from an older SciPy release (with the input validation and error handling omitted for clarity):

from scipy.linalg import LinAlgError, get_lapack_funcs

def solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0,
          debug=0):
    # a1 and b1 are the validated array versions of a and b; the input
    # checking that produces them is part of the code omitted here.
    if sym_pos:
        # symmetric positive definite case: the LAPACK *POSV drivers
        posv, = get_lapack_funcs(('posv',), (a1, b1))
        c, x, info = posv(a1, b1,
                          lower=lower,
                          overwrite_a=overwrite_a,
                          overwrite_b=overwrite_b)
    else:
        # general case: the LAPACK *GESV drivers
        gesv, = get_lapack_funcs(('gesv',), (a1, b1))
        lu, piv, x, info = gesv(a1, b1,
                                overwrite_a=overwrite_a,
                                overwrite_b=overwrite_b)

    if info == 0:
        return x
    if info > 0:
        raise LinAlgError("singular matrix")
    raise ValueError('illegal value in %d-th argument '
                     'of internal gesv|posv' % -info)

As you can see, it's just a thin wrapper around two families of LAPACK functions (exemplified by DPOSV and DGESV).
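
If you want to poke at those drivers yourself, the LAPACK wrappers are exposed directly in scipy.linalg.lapack; a minimal sketch (the 2x2 system here is just an arbitrary example):

import numpy as np
from scipy.linalg.lapack import dgesv

# An arbitrary small system Ax = b, just to show the call.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv, x, info = dgesv(a, b)  # LAPACK DGESV: LU factorization + solve
assert info == 0                # info > 0 would signal a singular matrix
print(x)                        # matches scipy.linalg.solve(a, b)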

There is no parallelism going on at the SciPy level, yet you observe the function using multiple cores on your system. The only possible explanation is that your LAPACK library is capable of using multiple cores, without NumPy/SciPy doing anything to make this happen.
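
One way to convince yourself of this is to cap the BLAS/LAPACK thread pool and time a large solve; a rough sketch, assuming the third-party threadpoolctl package is installed and the matrix is big enough to keep several cores busy:

import time

import numpy as np
from scipy.linalg import solve
from threadpoolctl import threadpool_info, threadpool_limits

rng = np.random.default_rng(0)
n = 3000                               # large enough for threading to matter
a = rng.standard_normal((n, n))
b = rng.standard_normal(n)

print(threadpool_info())               # which BLAS/LAPACK is actually loaded

with threadpool_limits(limits=1, user_api='blas'):
    t0 = time.perf_counter()
    solve(a, b)
    print('1 BLAS thread :', time.perf_counter() - t0)

t0 = time.perf_counter()
solve(a, b)                            # default: the library picks the thread count
print('all BLAS threads:', time.perf_counter() - t0)

If the second call is noticeably faster (and your system monitor shows more than one busy core), the parallelism is coming from the BLAS/LAPACK library, not from SciPy itself.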
