Factoring a huge number programmatically
Alright, so I have a huge number f
. This number is just over 100 digits long, actually. I know that the factors are of approximately the same size.
If I have limited resources and time, what language and algorithm should I use? I am including the length of time to code the algorithm in the restricted time.
Thoughts?
EDIT: By limited, I mean in the least amount of time possible.
The state-of-the-art prime factorization algorithm is the quadratic sieve and its variants. For numbers larger than about 100 digits, the general number field sieve becomes more efficient.
There's an open-source implementation of it here. It's able to factor a 100-digit number into two roughly equal primes in only 4 hours on a 2.2 GHz AMD Athlon.
So there's the algorithm and a sample implementation. That might be enough to give you ideas or get you started.
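The sieves above are substantial projects to implement from scratch. As a much smaller sketch of what a hand-rolled factoring routine looks like, here is Pollard's rho with a Miller-Rabin primality check in Python. To be clear: this is an illustration of the general shape of the problem, not a substitute for the sieve implementations linked above — rho is practical for numbers up to roughly 20 digits, and hopeless for a 100-digit semiprime with two equal-size factors.

```python
import math
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

def pollard_rho(n: int) -> int:
    """Return one non-trivial factor of composite n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)          # tortoise: one step
            y = f(f(y))       # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:            # d == n means the walk degenerated; retry
            return d

def factor(n: int) -> list:
    """Return the prime factors of n, with multiplicity, sorted."""
    if n == 1:
        return []
    if is_probable_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factor(d) + factor(n // d))

print(factor(10403))  # → [101, 103]
```

For a number of the size in the question, you'd feed it to msieve/GGNFS or the ECM applet mentioned in the other answer rather than anything like this.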
This sounds like a good exercise (and possibly a rare good example) for cloud computing. It should be easy to run in parallel: divide the pool of candidate factors across your processes.
Something like this may prove helpful: http://blog.controlgroup.com/2010/10/13/hadoop-and-amazon-elastic-mapreduce-analyzing-log-files/ More details at http://en.wikipedia.org/wiki/Apache_Hadoop#Hadoop_on_Amazon_EC2.2FS3_services .
(In the past month, I watched a nice video demonstration of doing something similar to what I'm suggesting here - but of course, now I can't find the link.)
Especially if you don't need to do this programmatically, take a look at http://www.alpertron.com.ar/ECM.HTM . (Linked to from http://en.wikipedia.org/wiki/Quadratic_sieve.) Pay particular attention to the notes under "Factoring a number in several machines" on the first link. (As the source code is available, you could run this in a programmatically distributed fashion as well, using Hadoop or the like.)
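To make the "divide your pool of factors across processes" idea concrete, here is a minimal sketch (the helper names `chunk_ranges` and `trial_divide` are my own, not from any of the linked tools). It partitions the candidate-divisor range into independent chunks; each chunk could be one process, one EC2 instance, or one Hadoop map task, since the chunks share nothing. Trial division itself is of course useless at 100 digits — the point is only the partitioning pattern.

```python
import math

def chunk_ranges(lo: int, hi: int, parts: int):
    """Split the candidate range [lo, hi) into `parts` near-equal chunks."""
    step = (hi - lo + parts - 1) // parts
    return [(s, min(s + step, hi)) for s in range(lo, hi, step)]

def trial_divide(n: int, lo: int, hi: int):
    """One worker's share of the pool: scan [lo, hi) for a divisor of n."""
    for d in range(max(2, lo), hi):
        if n % d == 0:
            return d
    return None

n = 101 * 103
# Each tuple below is an independent unit of work; here we just loop,
# but each call could be shipped to a separate machine.
chunks = chunk_ranges(2, math.isqrt(n) + 1, 4)
results = [trial_divide(n, lo, hi) for lo, hi in chunks]
found = next(r for r in results if r is not None)
print(found)  # → 101
```

The real distributed tools (the ECM applet's multi-machine mode, or an NFS deployment) partition differently - by curves or by sieving regions - but the structure is the same: independent work units, no shared state, collect the first hit.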