Which is better for big numbers: multiplying by 2 or adding the number to itself?

Posted 2024-08-02 03:31:47

I need some help deciding what is better performance-wise.
I'm working with bigints (more than 5 million digits), and most of the computation (if not all) is the part that doubles the current bigint. So I wanted to know: is it better to multiply every cell (part of the bigint) by 2, then take the mod, and so on, or is it better to just add the bigint to itself?

I'm thinking a bit about ease of implementation too (adding 2 bigints is more complicated than multiplying by 2), but I'm more concerned about performance than about code size or ease of implementation.

Other info:
I'll code it in C++. I'm fairly familiar with bigints (I just never came across this problem).
I'm not in need of any source code or similar; I just need a good opinion and an explanation/proof, since I need to make a good decision from the start. The project will be fairly large and mostly built around this part, so it depends heavily on what I choose now.

Thanks.

Comments (7)

浅暮の光 2024-08-09 03:31:47

Try bit-shifting each word. That is probably the fastest method. When you bit-shift an integer to the left, you double it (multiply by 2). If you have several long integers in a chain, you need to store the most significant bit, because it is gone after the shift and you need to use it as the least significant bit of the next long integer.

This doesn't actually matter a whole lot, though. Modern 64-bit computers can add two integers in the same time it takes to bit-shift them (one clock cycle), so either way will take just as long. I suggest you try the different methods and report back if there are any major time differences. All three methods should be easy to implement, and generating a 5-million-digit test number with a random number generator should be easy too.
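
A minimal sketch of that chained shift, assuming the bigint is stored as a little-endian vector of 64-bit limbs (the layout and the function name are illustrative, not taken from the answer):

#include <cstdint>
#include <vector>

// Double a bigint stored as little-endian 64-bit limbs: shift each limb left
// by one and carry the bit that falls off the top into the next limb.
void double_by_shift(std::vector<std::uint64_t>& limbs) {
    std::uint64_t carry = 0;
    for (std::uint64_t& limb : limbs) {
        std::uint64_t top_bit = limb >> 63;  // the bit the shift would lose
        limb = (limb << 1) | carry;          // bring in the carry from the previous limb
        carry = top_bit;                     // pass it on to the next limb
    }
    if (carry) limbs.push_back(1);           // the number grew by one bit
}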

粉红×色少女 2024-08-09 03:31:47

To store a 5-million-digit integer you'll need quite a few bits: 5 million if you were referring to binary digits, or about 17 million bits if those are decimal digits. Let's assume the number is stored in a binary representation and that your arithmetic happens in chunks of some size, e.g. 32 or 64 bits.

  • If adding the number to itself, each chunk is added to itself plus the carry from the addition of the previous chunk. Any carry out is kept for the next chunk. That's a couple of addition operations, plus some bookkeeping to track the carry.

  • If multiplying by two by left-shifting, that's one left-shift operation for the multiplication, plus one right shift and an AND with 1 to obtain the carry. The carry bookkeeping is a little simpler.

Superficially, the shift version appears slightly faster. The overall cost of doubling the number, however, is dominated by the size of the number: a 17-million-bit number far exceeds the CPU's L1 cache, and the processing time is likely to be swamped by memory fetches. On modern PC hardware, a memory fetch is orders of magnitude slower than an addition or a shift.

With that, you might want to pick the one that's simpler for you to implement. I'm leaning towards the left-shift version.
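
For comparison, a sketch of the add-to-itself variant over the same hypothetical limb layout as above. The carry the bullets describe turns out to be exactly the top bit of each limb, so the per-limb work is almost identical to the shift version, which supports the conclusion here:

#include <cstdint>
#include <vector>

// Double a bigint by adding it to itself, one 64-bit limb at a time,
// propagating the carry as in ordinary long addition.
void double_by_add(std::vector<std::uint64_t>& limbs) {
    std::uint64_t carry = 0;
    for (std::uint64_t& limb : limbs) {
        std::uint64_t carry_out = limb >> 63; // limb + limb overflows exactly when the top bit is set
        limb = limb + limb + carry;           // the extra +carry (0 or 1) cannot cause a further overflow
        carry = carry_out;
    }
    if (carry) limbs.push_back(1);
}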

多孤肩上扛 2024-08-09 03:31:47

Did you try shifting the bits?
<< multiplies by 2
>> divides by 2

一杯敬自由 2024-08-09 03:31:47

Left bit-shifting by one is the same as multiplying by two!
This link explains the mechanism and gives examples.

int A = 10;   //...01010 = 10
int B = A<<1; //..010100 = 20

故事↓在人 2024-08-09 03:31:47

If it really matters, you need to write all three methods (including the bit shift!) and profile them on various inputs. (Use small numbers, large numbers, and random numbers, to avoid biasing the results.)

Sorry for the "do it yourself" answer, but that really is the best way. No one cares about this result more than you, which makes you the best person to figure it out.
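
A minimal timing harness for that kind of experiment might look like the following; the limb layout, function names, and sizes are placeholders for whatever you actually implement:

#include <chrono>
#include <cstdint>
#include <cstddef>
#include <iostream>
#include <random>
#include <ratio>
#include <vector>

// Build a random bigint with the given number of 64-bit limbs.
std::vector<std::uint64_t> random_bigint(std::size_t limbs) {
    std::mt19937_64 rng(12345);
    std::vector<std::uint64_t> n(limbs);
    for (std::uint64_t& limb : n) limb = rng();
    return n;
}

// Time one doubling routine over several repetitions and print the average.
template <typename DoubleFn>
void benchmark(const char* name, DoubleFn fn, std::size_t limbs, int reps = 100) {
    std::vector<std::uint64_t> n = random_bigint(limbs);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i) fn(n);
    auto stop = std::chrono::steady_clock::now();
    std::chrono::duration<double, std::micro> per_rep = (stop - start) / reps;
    std::cout << name << ": " << per_rep.count() << " us per doubling\n";
}

int main() {
    // ~5 million decimal digits is roughly 17 million bits, i.e. about 260,000 64-bit limbs.
    benchmark("shift-based doubling", [](std::vector<std::uint64_t>& n) {
        std::uint64_t carry = 0;
        for (std::uint64_t& limb : n) {
            std::uint64_t top = limb >> 63;
            limb = (limb << 1) | carry;
            carry = top;
        }
        // final carry intentionally dropped so the size stays fixed across repetitions
    }, 260000);
}

Passing in an add-based routine (or a library call) alongside the shift-based one, and varying the number of limbs, would show whether any per-limb difference survives once the number no longer fits in cache.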

︶ ̄淡然 2024-08-09 03:31:47

Well-implemented multiplication of bignums is O(N log(N) log(log(N))). Addition is O(N). Therefore, adding the number to itself should be faster than multiplying by two. However, that's only true if you're multiplying two arbitrary bignums; if your library knows you're multiplying a bignum by a small integer, it may be able to optimize that down to O(N).

As others have noted, bit shifting is also an option. It should be O(N) as well, but with a smaller constant factor. That will only work, though, if your bignum library supports bit shifting.
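
If the bignum library is an existing one rather than hand-rolled, it may already expose exactly this. GMP, for instance, has a dedicated multiply-by-2^k primitive; a small sketch (the choice of GMP here is mine, not the answer's):

#include <gmp.h>
#include <iostream>

// Compile with: g++ example.cpp -lgmp
int main() {
    mpz_t n;
    mpz_init(n);
    mpz_ui_pow_ui(n, 10, 5000000);  // a 5-million-digit starting value
    mpz_mul_2exp(n, n, 1);          // n = n * 2^1: one left shift of the whole bignum
    // mpz_add(n, n, n) would be the add-to-itself route, for comparison
    std::cout << "bits now: " << mpz_sizeinbase(n, 2) << "\n";
    mpz_clear(n);
    return 0;
}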

芸娘子的小脾气 2024-08-09 03:31:47

most of the computation (if not all) is in the part of doubling the current bigint

If all your computation is in doubling the number, why not just keep a distinct (base-2) scale field? Then doubling is just adding one to the scale, which can be a plain old int. That is surely faster than any manipulation of several million bits.

In other words, use a bigfloat.
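
A sketch of that idea, assuming the value really is only ever doubled, so it can be kept as mantissa * 2^scale (the struct and its field names are illustrative):

#include <cstdint>
#include <vector>

// Keep the value as mantissa * 2^scale; doubling never touches the millions of
// mantissa bits, it only bumps the exponent.
struct ScaledBigInt {
    std::vector<std::uint64_t> mantissa;  // the bigint part, left alone on doubling
    std::int64_t scale = 0;               // base-2 exponent

    void double_value() { ++scale; }      // O(1) no matter how many digits the mantissa has
};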

random benchmark

use Math::GMP;
use Time::HiRes qw(clock_gettime CLOCK_REALTIME CLOCK_PROCESS_CPUTIME_ID);

my $n = Math::GMP->new(2);
$n = $n ** 1_000_000;

my $m = Math::GMP->new(2);
$m = $m ** 10_000;

my $str;
for ($bits = 1_000_000; $bits <= 2_000_000; $bits += 10_000) {
    my $start = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    $str = "$n" for (1..3);
    my $stop = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    print "$bits,@{[($stop-$start)/3]}\n";
    $n = $n * $m;
}

This seems to show that GMP is somehow doing its conversion in O(n) time (where n is the number of bits in the binary number). This may be due to the special case of having a 1 followed by a million (or two) zeros; the GNU MP docs say it should be slower (but still better than O(N^2)).

http://img197.imageshack.us/img197/6527/chartp.png
