Analysis of Algorithms

Published 2024-10-17 12:22:27

Why do we always consider large values of input in the analysis of algorithms, for example in Big-O notation?

Comments (6)

倾听心声的旋律 2024-10-24 12:22:27

The point of Big-O notation is precisely to work out how the running time (or space) varies as the size of input increases - in other words, how well it scales.

If you're only interested in small inputs, you shouldn't use Big-O analysis... aside from anything else, there are often approaches which scale really badly but work very well for small inputs.
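To make that last point concrete (a sketch of my own, not part of the answer): insertion sort is O(n^2) and scales badly, yet its low overhead often beats asymptotically better sorts on tiny arrays, which is why many library sorts fall back to it below a small cutoff.

    // Insertion sort: O(n^2) comparisons in the worst case, but with tiny
    // constant factors - often the fastest choice when n is small (say, under ~16).
    void insertionSort(int a[], int n) {
        for (int i = 1; i < n; i++) {
            int key = a[i];
            int j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];   // shift larger elements one slot right
                j--;
            }
            a[j] = key;
        }
    }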

绝不服输 2024-10-24 12:22:27

Because the worst case performance is usually more of a problem than the best case performance. If your worst case performance is acceptable, your algorithm will run fine.
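A small illustration of the same idea (my example, not the answerer's): linear search costs one comparison in the best case but n in the worst case, and it is the worst case you have to budget for.

    // Linear search: 1 comparison in the best case (target at index 0),
    // n comparisons in the worst case (target absent).
    int linearSearch(const int a[], int n, int target) {
        for (int i = 0; i < n; i++)
            if (a[i] == target)
                return i;    // best case: found early
        return -1;           // worst case: scanned all n elements
    }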

一笑百媚生 2024-10-24 12:22:27

Analysis of algorithms does not just mean running them on a computer to see which one is faster. Rather, it is being able to look at an algorithm and determine how it would perform. This is done by looking at the order of magnitude of the algorithm: as the number of items (N) changes, what effect does that have on the number of operations needed to execute (time)? This method of classification is referred to as Big-O notation.

Programmers use Big-O to get a rough estimate of "how many seconds" and "how much memory" various algorithms use for "large" inputs.
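To make the "count operations as N changes" idea concrete, here is a rough sketch of mine (not from the answer) that counts operations instead of timing them; doubling N doubles an O(N) count but quadruples an O(N^2) count:

    #include <stdio.h>

    int main() {
        for (long n = 1000; n <= 8000; n *= 2) {
            long linear = 0, quadratic = 0;
            for (long i = 0; i < n; i++)
                linear++;                  // O(N): one pass over the items
            for (long i = 0; i < n; i++)
                for (long j = 0; j < n; j++)
                    quadratic++;           // O(N^2): a pass per item
            printf("N=%ld  linear=%ld  quadratic=%ld\n", n, linear, quadratic);
        }
        return 0;
    }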

十级心震 2024-10-24 12:22:27

It's because of the definition of Big-O notation. Saying that O(f(n)) bounds g(n) means: there is some value n0 such that for all n > n0, the run-time or space complexity of g(n) is less than G*f(n), where G is an arbitrary constant.

What that means is that once your input goes over a certain size, the function will not grow beyond that bounding function. So if f(x) = x (i.e. the algorithm is O(n)) and the input doubles (n2 = 2 * n1), the function I'm computing will take no more than double the amount of time. Now, note that if O(n) is true, so is O(n^2): if my function will never do worse than double, it will never do worse than square either. In practice, the lowest-order function known is usually given.
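For reference, the definition being paraphrased here can be stated formally (standard textbook phrasing, not the answerer's own words):

    g(n) \in O(f(n)) \iff \exists\, G > 0,\ \exists\, n_0,\ \forall\, n > n_0:\ g(n) \le G \cdot f(n)

The n0 is exactly why the analysis only speaks about large inputs: below n0 the bound promises nothing.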

私野 2024-10-24 12:22:27

Big O says nothing about how well an algorithm will scale. "How well" is relative. It is a general way to quantify how an algorithm will scale, but the fitness or lack of fitness for any specific purpose is not part of the notation.

夜灵血窟げ 2024-10-24 12:22:27

Suppose we want to check whether a number is prime or not, and Ram and Shyam came up with the following solutions.

Ram's solution

    bool isPrime(long long n) {   // long long: the n = 10000000019 used below overflows int
        for (long long i = 2; i <= n - 1; i++)   // try every divisor from 2 to n-1
            if (n % i == 0)
                return false;
        return true;
    }

Now we know that the above algorithm will run n - 2 times (when n is prime, the worst case).

Shyam's solution

    #include <math.h>

    bool isPrime(long long n) {
        for (long long i = 2; i <= sqrt((double)n); i++)   // only need divisors up to sqrt(n)
            if (n % i == 0)
                return false;
        return true;
    }

The above algorithm will run about sqrt(n) - 1 times (again, in the worst case).

Assuming that in both algorithms each iteration takes unit time (1 ms), then:

If n = 101:

1st algorithm: time taken is 99 ms, which is less than the blink of an eye.

2nd algorithm: around 9 ms, which again is not noticeable.

If n = 10000000019:

1st algorithm: time taken is about 115 days, roughly a third of a year.

2nd algorithm: around 1.66 minutes, about as long as sipping a cup of coffee.

I think nothing more needs to be said now. :D
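To see the gap without waiting 115 days, here is a minimal sketch (the function names and the iteration counters are my additions, not part of the original answer) that counts trial divisions instead of timing them:

    #include <math.h>
    #include <stdio.h>

    // Count the trial divisions each approach performs on a prime n.
    long long trialsRam(long long n) {
        long long count = 0;
        for (long long i = 2; i <= n - 1; i++)
            count++;                          // Ram: every divisor up to n-1
        return count;
    }

    long long trialsShyam(long long n) {
        long long count = 0;
        for (long long i = 2; i <= sqrt((double)n); i++)
            count++;                          // Shyam: only up to sqrt(n)
        return count;
    }

    int main() {
        long long n = 101;                    // the small prime from the answer
        printf("n = %lld: Ram %lld trials, Shyam %lld trials\n",
               n, trialsRam(n), trialsShyam(n));
        return 0;
    }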
