Speed of .NET in numerical computing

Published on 2024-08-13 14:05:49

In my experience, .NET is 2 to 3 times slower than native code. (I implemented L-BFGS for multivariate optimization).

I have traced the ads on stackoverflow to http://www.centerspace.net/products/

The speed is really amazing; it is close to native code. How can they do that? They say:

Q. Is NMath "pure" .NET?

A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap.

Can someone explain more to me?

Comments (8)

千紇 2024-08-20 14:05:50

I've posted up a blog article addressing this question.

飘逸的'云 2024-08-20 14:05:50

The key is C++/CLI. It allows you to compile C++ code into a managed .NET assembly.

始于初秋 2024-08-20 14:05:50

Today it is industry standard to build mixed .NET/native libraries in order to take advantage of both platforms for performance optimization. Not only NMath: many commercial and free libraries with a .NET interface work like this, for example Math.NET Numerics, dnAnalytics, Extreme Optimization, FinMath and many others. Integration with MKL is extremely popular for .NET numerical libraries, and most of them just use a Managed C++ assembly as an intermediate layer. But this solution has a number of drawbacks:

  1. Intel MKL is proprietary software and it's a bit expensive. But some libraries, like dnAnalytics, provide a free replacement of the MKL functionality in pure .NET code. Of course, it's much slower, but it's free and fully functional.

  2. It reduces your compatibility: you need to ship heavy managed C++ kernel DLLs for both 32-bit and 64-bit mode.

  3. Managed-to-native calls require marshaling, which slows down fast, frequently called operations such as Gamma or NormalCDF (a small interop sketch follows below).

The last two problems are solved in the RTMath FinMath library. I don't really know how they did it, but they provide a single pure .NET DLL, compiled for the Any CPU platform, that supports both 32-bit and 64-bit. Also, I didn't see any performance degradation relative to MKL when I needed to call NormalCDF billions of times.
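
On the marshaling point above, here is a minimal sketch of what such an interop boundary looks like from C#. The library name "nativestats" and both entry points are hypothetical, purely for illustration; they are not part of any library mentioned in this thread.

using System.Runtime.InteropServices;

static class NativeStats
{
    // Hypothetical native library and entry points, for illustration only.
    // Every call below crosses the managed/native boundary, so for a tiny
    // function like a normal CDF the transition overhead can rival the
    // computation itself.
    [DllImport("nativestats", CallingConvention = CallingConvention.Cdecl)]
    public static extern double normal_cdf(double x);

    // A batched entry point amortizes that overhead: one managed-to-native
    // transition for n values instead of n transitions. double[] is blittable,
    // so the arrays are pinned rather than copied during the call.
    [DllImport("nativestats", CallingConvention = CallingConvention.Cdecl)]
    public static extern void normal_cdf_batch(double[] xs, double[] result, int n);
}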

泪冰清 2024-08-20 14:05:50

Since the (native) Intel MKL is doing the math, you're actually not doing the math in managed code. You're merely using the memory manager from .NET, so the results are easily used by .NET code.
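
To make that concrete, here is a minimal sketch of the pattern: arrays allocated on the managed heap are handed to a native MKL routine via P/Invoke. This is not NMath's actual interop layer (which is Managed C++); the "mkl_rt" library name and the assumption of MKL's 32-bit-integer (LP64) interface are mine.

using System;
using System.Runtime.InteropServices;

static class MklBlas
{
    // Standard CBLAS enum values: 101 = row-major layout, 111 = no transpose.
    const int CblasRowMajor = 101;
    const int CblasNoTrans = 111;

    // cblas_dgemm is exported by Intel MKL; using "mkl_rt" as the DllImport
    // name is an assumption for this sketch.
    [DllImport("mkl_rt", CallingConvention = CallingConvention.Cdecl)]
    static extern void cblas_dgemm(
        int layout, int transA, int transB,
        int m, int n, int k,
        double alpha, double[] a, int lda,
        double[] b, int ldb,
        double beta, double[] c, int ldc);

    // C = A * B for n-by-n row-major matrices. All three arrays live on the
    // managed heap; the marshaler pins them for the duration of the call, so
    // the native code works directly on .NET-managed memory.
    public static double[] Multiply(double[] a, double[] b, int n)
    {
        var c = new double[n * n];
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
        return c;
    }
}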

面犯桃花 2024-08-20 14:05:50

I learnt more from @Darin Dimitrov's comment on his answer and @Trevor Misfeldt's comment on @Darin's comment, so I am posting it as an answer for future readers.

NMath uses P/Invoke or C++/CLI to call the Intel Math Kernel Library's native functions, which is where the most intensive calculations are done and which is why it is so fast.

The time is spent in the decomposition methods inside Intel's MKL. No copying of data is required, either. So it's not a question of whether the CLI is fast or not; it's about where the execution happens.

Also, @Paul's blog is a good read. Here's the summary.

C# is Fast, Memory Allocation Is Not.
Reuse variables as ref or out parameters, instead of returning new variables from methods. Allocating a new variable consumes memory and slows down execution. @Haymo Kutschbach has explained this well.

If full precision is not necessary, the performance gain from switching from double to single precision is considerable (not to mention the memory saved for data storage).

For many short computations, calling a C++/CLI routine from C# that pins all pointers to data allocated in managed space and then calls the Intel library is generally better than using P/Invoke to call the library directly from C#, because of the cost of marshaling the data.
As mentioned by @Haymo Kutschbach in the comments, for blittable types, however, there is no difference between C++/CLI and C#. Arrays of blittable types, and classes that contain only blittable members, are pinned instead of copied during marshaling. See https://msdn.microsoft.com/en-us/library/75dwhxf7(v=vs.110).aspx for a list of blittable and non-blittable types.
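
A small sketch of the two allocation-related points above: reusing output buffers instead of returning new arrays, and explicit pinning of a blittable array. The helper names are made up for illustration; this is not code from NMath or the blog post.

using System;
using System.Runtime.InteropServices;

static class BufferReuse
{
    // Fill a caller-supplied buffer instead of returning a new array from
    // every call; repeated allocations are what stress the GC.
    public static void AddInto(double[] x, double[] y, double[] result)
    {
        for (int i = 0; i < result.Length; i++)
            result[i] = x[i] + y[i];
    }

    // double[] is blittable: during P/Invoke it is pinned, not copied.
    // GCHandle makes that pinning explicit when a raw pointer must stay
    // valid across a native call.
    public static void WithPinnedPointer(double[] data, Action<IntPtr> useNativePointer)
    {
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            useNativePointer(handle.AddrOfPinnedObject());
        }
        finally
        {
            handle.Free();
        }
    }
}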

只是偏爱你 2024-08-20 14:05:49

How can they do that?

Like most of the numerical libraries for .NET, NMath is little more than a wrapper over the Intel MKL embedded in the .NET assembly, probably built by linking with C++/CLI to create a mixed assembly. You've probably just benchmarked the bits that are not actually written in .NET.

The F#.NET Journal articles Numerical Libraries: special functions, interpolation and random numbers (16th March 2008) and Numerical Libraries: linear algebra and spectral methods (16th April 2008) tested quite a bit of functionality and NMath was actually the slowest of all the commercial libraries. Their PRNG was slower than all others and 50% slower than the free Math.NET library, some basic functionality was missing (e.g. the ability to calculate Gamma(-0.5)) and other basic functionality (the Gamma-related functions they did provide) was broken. Both Extreme Optimization and Bluebit beat NMath at the eigensolver benchmark. NMath didn't even provide a Fourier Transform at the time.

Even more surprisingly, the performance discrepancies were sometimes huge. The most expensive commercial numerical library we tested (IMSL) was over 500× slower than the free FFTW library at the FFT benchmark and none of the libraries made any use of multiple cores at the time.

In fact, it was precisely the poor quality of these libraries that encouraged us to commercialize our own F# for Numerics library (which is 100% pure F# code).

小帐篷 2024-08-20 14:05:49

I am one of the lead developers of ILNumerics. So I am biased, obviously ;) But we are more open about our internals, so I will give some insight into our speed 'secrets'.

It all depends on how system resources are utilized! If you are after pure speed and need to handle large arrays, you will make sure to (ordered by importance, most important first):

  1. Manage your memory appropriately! 'Naive' memory management leads to bad performance, since it stresses the GC badly, causes memory fragmentation and degrades memory locality (hence cache performance). In a garbage-collected environment like .NET, this boils down to preventing frequent memory allocations. In ILNumerics, we implemented a high-performance memory pool to achieve this goal (and deterministic disposal of temporary arrays, to get a nice, comfortable syntax without clumsy function semantics). A minimal pooling sketch follows this list.

  2. Utilize parallelism! This targets both thread-level parallelism and data-level parallelism. Multiple cores are utilized by threading the computation-intensive parts of the calculation. On X86/X64 CPUs, SIMD/multimedia extensions like SSE.XX and AVX allow small but effective vectorization. They are not directly addressable by current .NET languages, and this is the only reason why MKL may still be faster than 'pure' .NET code. (But solutions are already rising.)

  3. To achieve the speed of highly optimized languages like FORTRAN and C++, the same optimizations must be applied to your code as are done for them. C# offers the option to do so.
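
ILNumerics' memory pool is its own implementation and is not shown here; as a rough, assumed illustration of the rent/return idea behind point 1, the framework-provided System.Buffers.ArrayPool<T> can stand in:

using System;
using System.Buffers;

static class PooledTemporaries
{
    // Rent a temporary buffer from a shared pool instead of allocating a
    // fresh array for every intermediate result, then return it when done.
    // The pool may hand back a larger array than requested.
    public static double SumOfElementwiseAdd(double[] x, double[] y)
    {
        double[] tmp = ArrayPool<double>.Shared.Rent(x.Length);
        try
        {
            double sum = 0.0;
            for (int i = 0; i < x.Length; i++)
            {
                tmp[i] = x[i] + y[i];   // temporary intermediate result
                sum += tmp[i];
            }
            return sum;
        }
        finally
        {
            ArrayPool<double>.Shared.Return(tmp);
        }
    }
}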

Note, these precautions should be followed in that order! It does not make sense to care about SSE extensions or even bounds-check removal if the bottleneck is memory bandwidth and the processor(s) spend most of the time waiting for new data. Also, for many simple operations it does not even pay off to invest huge effort to achieve the very last tiny scale-up to peak performance! Consider the common example of the LAPACK function DAXPY. It adds the elements of a vector X to the corresponding elements of another vector Y. If this is done for the first time, all the memory for X and Y will have to be fetched from main memory. There is little to nothing you can do about that. And memory is the bottleneck! So regardless of whether the addition at the end is done the naive way in C#

// naive element-wise add; X, Y and C are double[] of equal length
for (int i = 0; i < C.Length; i++) {
    C[i] = X[i] + Y[i];
}

or done by using vectorization strategies - it will have to wait for memory!
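
One example of such a rising solution (my example, not necessarily what the author had in mind) is System.Numerics.Vector<T> in later .NET versions, which the JIT maps to SSE/AVX where available. A minimal sketch of the same element-wise add written that way, with the caveat above still applying: memory bandwidth remains the limit.

using System;
using System.Numerics;

static class VectorizedAdd
{
    // Processes Vector<double>.Count lanes per iteration; the scalar tail
    // handles lengths that are not a multiple of the lane count.
    public static void Add(double[] x, double[] y, double[] c)
    {
        int lanes = Vector<double>.Count;
        int i = 0;
        for (; i <= x.Length - lanes; i += lanes)
        {
            var vx = new Vector<double>(x, i);
            var vy = new Vector<double>(y, i);
            (vx + vy).CopyTo(c, i);
        }
        for (; i < x.Length; i++)
            c[i] = x[i] + y[i];
    }
}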

I know this answer somewhat 'over-answers' the question, since most of these strategies are currently not utilized by the mentioned product (yet?). By following those points you would eventually end up with much better performance than any naive implementation in a 'native' language.

If you are interested, could you disclose your implementation of L-BFGS? I'd be happy to convert it to ILNumerics and post comparison results, and I am sure other libraries listed here would like to follow. (?)

饮湿 2024-08-20 14:05:49

The point about C++/CLI is correct. To complete the picture, just two additional interesting points:

  • .NET memory management (garbage collector) obviously is not the problem here, as NMath still depends on it

  • The performance advantage is actually provided by Intel MKL, which offers implementations extremely optimized for many CPUs. From my point of view, this is the crucial point. Using straightforward, naive C/C++ code won't necessarily give you performance superior to C#/.NET; it's sometimes even worse. However, C++/CLI allows you to exploit all the "dirty" optimization options.
