Time complexity of the composite? function (in terms of n)
Assuming n is a positive integer, the composite? procedure is defined as follows:
(define (composite? n)
  (define (iter i)
    (cond ((= i n) #f)
          ((= (remainder n i) 0) #t)
          (else (iter (+ i 1)))))
  (iter 2))
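For concreteness, here is how it behaves on a few inputs (sample calls added for illustration; any standard Scheme should agree):

(composite? 9)   ; => #t, iter finds the factor 3
(composite? 7)   ; => #f, no i in 2..6 divides 7, so iter reaches i = 7 = n
(composite? 2)   ; => #f, iter starts at i = 2, which equals n immediately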
It seems to me that the time complexity here is O(n) with a tight bound, or rather big-Theta(n). I am just eyeballing it right now: since we add 1 to the argument of iter on every pass through the loop, it looks like O(n). Is there a better explanation?
2 Answers
The function as written is O(n). But if you change the test (= i n) to (< n (* i i)), the time complexity drops to O(sqrt(n)), which is a considerable improvement; if n is a million, the number of iterations drops from about a million to about a thousand. That test works because if n = pq, one of p and q must be at most the square root of n while the other is at least the square root of n; thus, if no factor is found up to the square root of n, n cannot be composite. Newacct's answer correctly suggests that the cost of the arithmetic matters when n is large, but that cost is log log n, not log n as newacct suggests.
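For reference, a minimal sketch of that change (the only edit to the procedure in the question is the termination test; any standard Scheme should accept it):

(define (composite? n)
  (define (iter i)
    ;; Stop once i*i exceeds n: if n = p*q, the smaller of p and q
    ;; satisfies p*p <= n, so no new factor can appear past sqrt(n).
    (cond ((< n (* i i)) #f)
          ((= (remainder n i) 0) #t)
          (else (iter (+ i 1)))))
  (iter 2))

With this version, for n around a million the loop runs at most about a thousand times, matching the O(sqrt(n)) bound above.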
Different people will give you different answers depending on what they assume and what they factor into the problem.
It is O(n) assuming that the equality and remainder operations you do inside each loop are O(1). It is true that the processor does these in O(1), but that only works for fixed-precision numbers. Since we are talking about asymptotic complexity, and since "asymptotic", by definition, deals with what happens when things grow without bound, we need to consider numbers that are arbitrarily big. (If the numbers in your problem were bounded, then the running time of the algorithm would also be bounded, and thus the entire algorithm would be technically O(1), obviously not what you want.)
For arbitrary-precision numbers, I would say that equality and remainder in general take time proportional to the size of the number, i.e. its number of digits, which is log n (unless you can somehow optimize that away with amortized analysis). So, if we take that into account, the algorithm would be O(n log n). Some might consider this nitpicky.
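To make that accounting explicit (a back-of-the-envelope sketch under the assumption above, namely that each comparison and remainder on numbers of magnitude n costs about log n):

T(n) = sum over i = 2 .. n-1 of c * log n  ~  c * n * log n  =  O(n log n)

That is, the log n per-operation cost multiplies the n iterations of the loop.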