Big-O complexity of basic arithmetic operations
What is the Big-O complexity for widespread algorithms of the basic arithmetic operations like multiplication, square root, logarithm, scalar and matrix product?
Are there exotic algorithms which are more efficient, in terms of Big-O complexity, but are not very widespread in practical solutions (e.g. not implemented in popular software libraries)?
See http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
Matrix product of square matrices:
There is also an O(N^2.38) Coppersmith–Winograd algorithm, but I don't think it's widespread due to the huge hidden constant.
Big-int multiplication:
There is also an n log n · 2^(O(log* n)) algorithm published in 2008, but it is too new to be widespread.
Usually the naïve method is good enough for normal-sized input.
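To make the trade-off concrete, here is a minimal sketch of Karatsuba multiplication, the classic O(n^1.585) divide-and-conquer improvement over the O(n^2) schoolbook method (the algorithm itself is standard; the function name and base-case threshold are my own choices for the demo):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's divide-and-conquer."""
    if x < 10 or y < 10:  # base case: tiny operands, just multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> half, x & ((1 << half) - 1)   # split x into high/low halves
    hi_y, lo_y = y >> half, y & ((1 << half) - 1)   # split y into high/low halves
    a = karatsuba(hi_x, hi_y)                       # product of high parts
    b = karatsuba(lo_x, lo_y)                       # product of low parts
    # one extra product replaces the two cross products of the naive method
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * half)) + (c << half) + b

# sanity check against Python's built-in big-int multiplication
print(karatsuba(12345678901234567890, 98765432109876543210))
```

The point of the answer stands, though: for machine-sized operands the recursion overhead dominates, and the naïve method (or the hardware multiplier) wins.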
You'll consider most simple operations as O(1) because your input size is usually fixed (i.e. 32- or 64-bits).
Under normal conditions, your platform will be performing exactly the same operation for multiplication, square root, logarithm etc. regardless of the "size" of your input (i.e. int a = 0; and int b = Int32.MaxValue are both 32-bit integers).
It gets interesting once you start to look at matrices or representing arbitrary-precision numbers, but someone already linked the wikipedia summary, so I won't go into that.
Just don't use Schönhage–Strassen to multiply "normal" little numbers. It'll make me cry. Just because an algorithm is O(n^2) doesn't mean it's bad - particularly when n is almost always 2^5 or 2^6.
Operations don't have complexity, algorithms do. For example, there are various square root algorithms, and they will have different complexity.
Square root and logarithm may be implemented in various ways, greatly affecting the complexity (judged by the required precision).
If they are implemented with lookup tables (and some kind of interpolation), the memory requirement really explodes as more precision is required but the complexity is that of looking up the value in the array and possibly applying the interpolation.
More popularly, they seem to be implemented via their series definitions: recurse or iterate a statement for a number of rounds until you reach the required precision. Here the number of rounds may get very high as more precision is needed, and the calculations themselves are also affected by the increased precision.
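As a small illustration of the iterate-until-precise approach, here is a sketch of square root via Newton's method; each iteration roughly doubles the number of correct digits, so the round count depends on the precision you ask for (the tolerance and starting guess are arbitrary choices for the demo):

```python
def newton_sqrt(a, tol=1e-15):
    """Approximate sqrt(a) for a > 0 by iterating a Newton step for f(x) = x^2 - a."""
    x = a if a >= 1 else 1.0          # crude initial guess
    while True:
        nxt = 0.5 * (x + a / x)       # Newton step: average of x and a/x
        if abs(nxt - x) <= tol * nxt: # stop once the relative change is tiny
            return nxt
        x = nxt

print(newton_sqrt(2.0))  # ≈ 1.4142135623730951
```

A lookup-table implementation, by contrast, would spend memory instead of rounds: one array access plus interpolation, regardless of the input.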
Take a look at BigInteger, for arbitrary-length integers. Everything now has a cost in terms of the size of the input, which is the number of bits (typically O(log K) bits for a number K). I will use N for the number of bits below. For example, addition and subtraction are now O(N). Multiplication is either O(N^2) (naive) or O(N (log N)^(2+epsilon)) with FFT.

Other algorithms include the "power" function, which takes O(N) multiplications (except now each multiplication has a cost!).

And there are additional complexities for BigDecimals, the arbitrary-length decimal equivalent: beyond the more basic operations, some things get more interesting as well (especially if you want to figure out how much precision you want). You can take a look at Java's implementation.
There's a Fourier type algorithm that does integer multiplication as well (Schonhage-Strassen)
I thought that there was a version of Strassen's algorithm that does slightly better than normal for integer multiplication, but now that I think of it, that one ends up being the same as straightforward...
Addition and subtraction are pretty much just addition and subtraction. Division and square root are probably interesting though...

ALSO: Note that so far everyone has talked about INTEGER arithmetic. Once you get to floats/doubles, all bets are off. Then you get into the world of numerical analysis, and that's a whole field of its own...
Division and square roots for huge numbers of bits are not much more complex than multiplication. For both operations, plain old Newton iteration can be arranged so that it uses only multiplications. Since the number of correct digits doubles in each step, we can simply double the precision of the calculation at each step.