Worse is better. Are there examples?

Posted 2024-07-12 00:50:57

Is there a widely-used algorithm that has worse time complexity than another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)?

An acceptable answer might be of the form:

There are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.

Example highlights from the answers:

  • Simplex algorithm -- worst-case is exponential time -- vs. known polynomial-time algorithms for convex optimization problems.

  • A naive median of medians algorithm -- worst-case O(N**2) vs. known O(N) algorithm.

  • Backtracking regex engines -- worst-case exponential vs. O(N) Thompson NFA -based engines.

All these examples exploit worst-case vs. average scenarios.

Are there examples that do not rely on the difference between the worst case vs. average case scenario?


Related:

  • The Rise of "Worse is Better". (For the purpose of this question, the "Worse is Better" phrase is used in a narrower sense, namely algorithmic time complexity, than in the article.)

  • Python's Design Philosophy:

    The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections).

    This example would be the answer if there were no computers capable of storing these large collections (in other words large is not large enough in this case).

  • Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the asymptotically fastest known as of 2008, yet inferior in practice to "worse" algorithms). Any others?
    From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

Comments (24)

无人接听 2024-07-19 00:50:57

Quick-sort has a worst-case time complexity of O(N^2), but it is usually considered better than other sorting algorithms which have O(N log n) time complexity in the worst case.
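
To make the trade-off concrete, here is a minimal quicksort sketch (my illustration, not part of the original answer); using the first element as pivot is an assumption that makes the sorted-input worst case easy to provoke:

    import random

    def quicksort(xs):
        """Average-case O(n log n); degrades to O(n^2) when pivots split badly
        (e.g. a first-element pivot on already-sorted input)."""
        if len(xs) <= 1:
            return xs
        pivot = xs[0]
        smaller = [x for x in xs[1:] if x < pivot]
        larger = [x for x in xs[1:] if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    data = [random.randint(0, 1000) for _ in range(100)]
    assert quicksort(data) == sorted(data)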

十雾 2024-07-19 00:50:57

Simplex is an algorithm which has exponential time complexity in the worst case, but for any real case it is polynomial. Polynomial algorithms for linear programming do exist (e.g. the ellipsoid and interior-point methods), but they are complicated and usually have large constants.
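
For concreteness (my addition, not the answerer's), a tiny linear program solved with SciPy's linprog; this assumes SciPy is installed and uses its default solver rather than a hand-written simplex:

    from scipy.optimize import linprog

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
    # linprog minimizes, so negate the objective.
    c = [-3, -2]
    A_ub = [[1, 1],
            [1, 3]]
    b_ub = [4, 6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal point (4, 0) and maximized objective 12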

无尽的现实 2024-07-19 00:50:57

Monte Carlo integration is a probabilistic method of calculating definite integrals that has no guarantee of returning the correct answer. Yet, in real-world situations it returns an accurate answer far faster than provably correct methods.
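
A minimal sketch of the idea (my illustration): estimate the integral of exp(-x^2) over [0, 1] by averaging the integrand at uniformly random sample points:

    import math
    import random

    def mc_integrate(f, a, b, n=100_000):
        """Estimate the integral of f over [a, b] by averaging f at n uniform samples.
        The error shrinks like O(1/sqrt(n)) regardless of dimension, with no
        guarantee on any single run."""
        total = sum(f(random.uniform(a, b)) for _ in range(n))
        return (b - a) * total / n

    estimate = mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0)
    print(estimate)  # about 0.7468, the true value of the integral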

ゃ懵逼小萝莉 2024-07-19 00:50:57

"Worse is Better" can be seen in languages too, for example the ideas behind Perl, Python, Ruby, Php even C# or Java, or whatever language that isn't assembler or C (C++ might fit here or not).

Basically there is always a "perfect" solution, but many times its better to use a "worse" tool/algorithm/language to get results faster, and with less pain. Thats why people use these higher level languages, although they are "worse" from the ideal computer-language point of view, and instead are more human oriented.

悍妇囚夫 2024-07-19 00:50:57

Coppersmith–Winograd algorithm for square matrix multiplication. Its time complexity is O(n^2.376) vs. O(n^3) for a naive multiplication algorithm or O(n^2.807) for the Strassen algorithm.

From the Wikipedia article:

However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005).
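
As an aside (my illustration, about constant factors rather than Coppersmith–Winograd itself): even within the same rough asymptotic class, implementation constants dominate in practice. A textbook O(n^3) triple loop loses badly to an optimized BLAS-backed routine:

    import time
    import numpy as np

    def naive_matmul(a, b):
        """Textbook O(n^3) triple loop; roughly the same asymptotic class as the
        BLAS routine numpy calls, but with enormously worse constant factors."""
        n, m, p = len(a), len(b), len(b[0])
        c = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                aik = a[i][k]
                row_b = b[k]
                for j in range(p):
                    c[i][j] += aik * row_b[j]
        return c

    n = 200
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.perf_counter(); naive_matmul(a.tolist(), b.tolist()); t1 = time.perf_counter()
    t2 = time.perf_counter(); c_fast = a @ b; t3 = time.perf_counter()
    print(f"naive: {t1 - t0:.3f}s   numpy: {t3 - t2:.6f}s")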

水溶 2024-07-19 00:50:57

This statement can be applied to nearly any parallel algorithm. The reason they were not heavily researched in the early days of computing is because, for a single thread of execution (think uniprocessor), they are indeed slower than their well-known sequential counterparts in terms of asymptotic complexity, constant factors for small n, or both. However, in the context of current and future computing platforms, an algorithm which can make use of a few (think multicore), a few hundred (think GPU), or a few thousand (think supercomputer) processing elements will beat the pants off the sequential version in wall-clock time, even if the total time/energy spent by all processors is much greater for the parallel version.

Sorts, graph algorithms, and linear algebra techniques alike can be accelerated in terms of wall-clock time by bearing the cost of a little extra bookkeeping, communication, and runtime overhead in order to parallelize.
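
A small sketch of the wall-clock point (my illustration, assuming a multi-core machine): the parallel version pays process start-up and communication overhead, yet finishes sooner in wall-clock time for CPU-bound work:

    import math
    import time
    from multiprocessing import Pool

    def work(n):
        # A deliberately CPU-bound task.
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        t0 = time.perf_counter()
        serial = [work(n) for n in jobs]
        t1 = time.perf_counter()

        with Pool() as pool:                  # pays process start-up and IPC costs...
            parallel = pool.map(work, jobs)   # ...but spreads the work over all cores
        t2 = time.perf_counter()

        assert serial == parallel
        print(f"serial: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s")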

淡淡的优雅 2024-07-19 00:50:57

Often an algorithm (like quicksort) that can be easily parallelized or randomized will be chosen over competing algorithms that lack these qualities. Furthermore, it is often the case that an approximate solution to a problem is acceptable when an exact algorithm would yield exponential runtimes as in the Travelling Salesman Problem.

来世叙缘 2024-07-19 00:50:57

This example would be the answer if there were no computers capable of storing these large collections.

Presumably the size of the collection was 641K.


When working in the technical computing group for BAE SYSTEMS, which looked after structural and aerodynamic code for various aircraft, we had a codebase going back at least 25 years (and a third of the staff had been there that long).

Many of the algorithms were optimised for performance on a 16-bit mainframe, rather than for scalability. These optimisations were entirely appropriate for 1970s hardware, but performed poorly on larger datasets on the 32- and 64-bit systems which replaced it. If you're choosing something with worse scalability which works better on the hardware you are currently working on, be aware that this is an optimisation, and it may not apply in the future. At the time those 1970s routines were written, the data sizes we put into them in the 2000s were not practical. Unfortunately, trying to extract a clear algorithm from those codes which could then be implemented to suit modern hardware was not trivial.

Short of boiling the oceans, what counts as 'all practical situations' is often a time dependent variable.

幸福不弃 2024-07-19 00:50:57

One example is from computational geometry. Polygon triangulation has a worst-case O(N) algorithm due to Chazelle, but it is almost never implemented in practice because of the difficulty of implementation and a huge constant factor.

逆光下的微笑 2024-07-19 00:50:57

Not quite on the mark, but backtracking-based regular expressions have an exponential worst case versus O(N) for DFA-based regular expressions, yet backtracking-based regular expressions are almost always used rather than DFA-based ones.

EDIT: (JFS)

Regular Expression Matching Can Be Simple And Fast (but is slow in Java, Perl, PHP, Python, Ruby, ...):

The power that backreferences add comes at great cost: in the worst case, the best known implementations require exponential search algorithms.

Regular Expression Engines:

This method (DFA) is really more efficient, and can even be adapted to allow capturing and non-greedy matching, but it also has important drawbacks:

  • Lookarounds are impossible
  • Back-references are also impossible
  • Regex pre-compilation is longer and takes more memory

On the bright side, as well as avoiding worst-case exponential running times, DFA approaches avoid worst-case stack usage that is linear in the size of the input data.
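
To see the blow-up in practice (my example, not part of the answer), the classic pathological pattern (a+)+$ run against strings of a's with a non-matching tail makes Python's backtracking re engine take time that roughly doubles with each extra character, whereas a Thompson-NFA/DFA engine stays linear:

    import re
    import time

    pattern = re.compile(r"(a+)+$")   # nested quantifiers: classic catastrophic backtracking

    for n in range(16, 25, 2):
        text = "a" * n + "b"          # the trailing "b" forces the match to fail
        t0 = time.perf_counter()
        pattern.match(text)
        dt = time.perf_counter() - t0
        print(f"n={n:2d}  {dt:.3f}s")  # time roughly quadruples per row (doubles per extra 'a')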

那伤。 2024-07-19 00:50:57

There exists a polynomial time algorithm for determining primality, but in practice, it's always faster to use an exponential time algorithm or to perform enough probabilistic calculations to have sufficient certainty.
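
As an illustration of the probabilistic route (my sketch; the answer does not name a specific algorithm), a Miller-Rabin test, where each independent random witness cuts the error probability by at least a factor of four:

    import random

    def is_probable_prime(n, rounds=20):
        """Miller-Rabin: a composite n is detected with probability >= 1 - 4**(-rounds)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True

    print(is_probable_prime(2**127 - 1))  # True: a Mersenne prime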

萝莉病 2024-07-19 00:50:57

Radix sort has time complexity O(n) for fixed-length inputs, but quicksort is more often used, despite the worse asymptotic runtime, because the per-element overhead of radix sort is typically much higher.
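
A minimal LSD radix sort sketch (mine, not the answerer's) showing where the per-element overhead comes from: several full passes and bucket bookkeeping per pass, even though the total work is O(n) for fixed-width keys:

    import random

    def radix_sort(xs, key_bits=32, digit_bits=8):
        """LSD radix sort for non-negative integers below 2**key_bits.
        O(n * key_bits / digit_bits) overall, i.e. O(n) for fixed-width keys,
        but with several full passes and 2**digit_bits buckets per pass."""
        mask = (1 << digit_bits) - 1
        for shift in range(0, key_bits, digit_bits):
            buckets = [[] for _ in range(1 << digit_bits)]
            for x in xs:
                buckets[(x >> shift) & mask].append(x)  # stable per-digit bucketing
            xs = [x for bucket in buckets for x in bucket]
        return xs

    data = [random.getrandbits(32) for _ in range(1000)]
    assert radix_sort(data) == sorted(data)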

千纸鹤带着心事 2024-07-19 00:50:57

Ok, consider solving the traveling salesman problem. The ONLY perfect solution is to test all possible routes. However, that becomes impossible with our hardware and time limits as N increases. So we have come up with many heuristics.

Which brings us to the answer to your question. Heuristics (worse) are better than brute force for NP-complete problems. This describes the situation in which "Worse is Better" is always true.
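
As one concrete heuristic (my sketch; the answer names none), a nearest-neighbour tour construction: O(n^2) work and no optimality guarantee, but usable at sizes where exact search is hopeless:

    import math
    import random

    def nearest_neighbour_tour(points):
        """Greedy TSP heuristic: repeatedly visit the closest unvisited city.
        O(n^2) and not optimal, but feasible where the exact O(n!) search is not."""
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    cities = [(random.random(), random.random()) for _ in range(200)]
    tour = nearest_neighbour_tour(cities)
    length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    print(f"tour length: {length:.2f}")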

指尖上的星空 2024-07-19 00:50:57

When calculating the median of a group of numbers, you can use an algorithm very similar to quicksort. You partition around a number, and all the bigger ones go to one side, and all the smaller ones go to the other side. Then you throw away one side and recursively calculate the median of the larger side. This takes O(n^2) in the worst case, but is pretty fast (O(n) with a low constant) in the average case.

You can get guaranteed worst-case O(n) performance, with a constant of about 40. This is called the median of medians algorithm. In practice, you would never use this.
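
For reference (my sketch of the quickselect idea the answer describes), a randomized selection routine with average O(n) and worst-case O(n^2):

    import random

    def quickselect(xs, k):
        """Return the k-th smallest element (0-based). Average O(n), worst case O(n^2)
        with random pivots; the deterministic median-of-medians variant guarantees
        O(n) but is slower in practice."""
        while True:
            if len(xs) == 1:
                return xs[0]
            pivot = random.choice(xs)
            smaller = [x for x in xs if x < pivot]
            equal = [x for x in xs if x == pivot]
            larger = [x for x in xs if x > pivot]
            if k < len(smaller):
                xs = smaller
            elif k < len(smaller) + len(equal):
                return pivot
            else:
                k -= len(smaller) + len(equal)
                xs = larger

    data = [random.randint(0, 10_000) for _ in range(1001)]
    assert quickselect(data, len(data) // 2) == sorted(data)[len(data) // 2]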

何必那么矫情 2024-07-19 00:50:57

If I understand the question, you are asking for algorithms that are theoretically better but practically worse in all situations. Therefore, one would not expect them to actually be used unless by mistake.

One possible example is universal memoization. Theoretically, all deterministic function calls should be memoized for all possible inputs. That way complex calculations could be replaced by simple table lookups. For a wide range of problems, this technique productively trades time for storage space. But suppose there were a central repository of the results of all possible inputs for all possible functions used by all of humanity's computers. The first time anyone anywhere did a calculation it would be the last time. All subsequent tries would result in a table lookup.

But there are several reasons I can think of for not doing this:

  1. The memory space required to store all results would likely be impossibly large. It seems likely the number of needed bits would exceed the number of particles in the universe. (But even the task of estimating that number is daunting.)

  2. It would be difficult to construct an efficient algorithm for doing the memoization of that huge a problem space.

  3. The cost of communication with the central repository would likely exceed the benefit as the number of clients increases.

I'm sure you can think of other problems.

In fact, this sort of time/space trade-off is incredibly common in practice. Ideally, all data would be stored in L1 cache, but because of size limitations you always need to put some data on disk or (horrors!) tape. Advancing technology reduces some of the pain of these trade-offs, but as I suggested above there are limits.


In response to J.F. Sebastian's comment:

Suppose that instead of a universal memoization repository, we consider a factorial repository. And it won't hold the results for all possible inputs. Rather, it will be limited to the results of 1! through N!. Now it's easy to see that any computer that did factorials would benefit from looking up the result rather than doing the calculation. Even for calculating (N+1)! the lookup would be a huge win, since that calculation would reduce to N! * (N+1).

Now to make this "better" algorithm worse, we could either increase N or increase the number of computers using the repository.

But I'm probably not understanding some subtlety of the question. The way I'm thinking of it, I keep coming up with examples that scale well until they don't.
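
A tiny local version of the factorial-repository idea (my sketch, using Python's standard functools.lru_cache): each factorial value is computed at most once per process, trading memory for speed on repeated calls:

    from functools import lru_cache

    @lru_cache(maxsize=None)        # an unbounded local "repository" of results
    def fact(n):
        return 1 if n < 2 else n * fact(n - 1)

    fact(500)      # computed once, filling the cache for 0..500
    fact(501)      # reduces to one multiplication plus a cache lookup of fact(500)
    assert fact(10) == 3628800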

抽个烟儿 2024-07-19 00:50:57

Mergesort versus Quicksort

Quick sort has an average time complexity of O(n log n). It can sort arrays in place, i.e. a space complexity of O(1).

Merge sort also has an average time complexity of O(n log n), however its space complexity is much worse: Θ(n). (there is a special case for linked lists)

Because the worst-case time complexity of quicksort is Θ(n^2) (i.e. all elements fall on the same side of every pivot), and mergesort's worst case is O(n log n), mergesort is the default choice for library implementers.

In this case, I think that the predictability of mergesort's worst-case time complexity trumps quicksort's much lower memory requirements.

Given that it is possible to vastly reduce the likelihood of the worst case of quicksort's time complexity (via random selection of the pivot for example), I think one could argue that mergesort is worse in all but the pathological case of quicksort.
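
For comparison with the quicksort sketch earlier (my illustration), a merge sort whose merge step allocates the Θ(n) auxiliary space the answer mentions:

    import random

    def merge_sort(xs):
        """Guaranteed O(n log n) time, but the merge step allocates Θ(n) auxiliary space."""
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:          # "<=" keeps the sort stable
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    data = [random.randint(0, 100) for _ in range(50)]
    assert merge_sort(data) == sorted(data)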

小女人ら 2024-07-19 00:50:57

I've always understood the term 'worse is better' to relate to problems whose correct solutions are very complex, but where an approximate (or good-enough) solution exists that is relatively easier to comprehend.

This makes for easier design, production, and maintenance.

忱杏 2024-07-19 00:50:57

There's an O(n) algorithm for selecting the k-th largest element from an unsorted set, but it is rarely used instead of sorting, which is of course O(n log n).

我家小可爱 2024-07-19 00:50:57

Insertion sort, despite having O(n^2) complexity, is faster for small collections (n < 10) than any other sorting algorithm. That's because the nested loop is small and executes fast. Many libraries (including the STL) that implement a sort method actually use it for small subsets of the data to speed things up.
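
A minimal insertion sort sketch (my illustration) of the kind hybrid library sorts fall back to for tiny ranges; the n < 10 cutoff above is the answerer's figure, not something the code enforces:

    import random

    def insertion_sort(xs, lo=0, hi=None):
        """O(n^2) comparisons, but with a tiny inner loop and no recursion or
        allocation, which is why hybrid sorts hand small ranges to it."""
        hi = len(xs) if hi is None else hi
        for i in range(lo + 1, hi):
            x = xs[i]
            j = i - 1
            while j >= lo and xs[j] > x:
                xs[j + 1] = xs[j]
                j -= 1
            xs[j + 1] = x

    data = [random.randint(0, 100) for _ in range(8)]
    insertion_sort(data)
    assert data == sorted(data)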

瀟灑尐姊 2024-07-19 00:50:57

Monte Carlo integration was already suggested, but a more specific example is Monte Carlo pricing in finance. Here the method is much easier to code and can do more things than some others, but it is much slower than, say, finite differences.

It's not practical to run 20-dimensional finite-difference algorithms, but a 20-dimensional pricing run is easy to set up.
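
A minimal Monte Carlo pricing sketch (my illustration, with made-up parameter values): price a European call by simulating terminal prices under geometric Brownian motion and discounting the average payoff:

    import numpy as np

    def mc_european_call(s0, strike, rate, vol, maturity, n_paths=100_000, seed=0):
        """Simulate terminal prices under geometric Brownian motion and return
        the discounted average payoff of a European call."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        st = s0 * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
        payoff = np.maximum(st - strike, 0.0)
        return np.exp(-rate * maturity) * payoff.mean()

    # Illustrative parameters only; close to the Black-Scholes value of about 10.45.
    print(mc_european_call(s0=100, strike=100, rate=0.05, vol=0.2, maturity=1.0))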

强者自强 2024-07-19 00:50:57

The Spaghetti sort is better than any other sorting algorithm in that it is O(n) to set up, O(1) to execute and O(n) to extract the sorted data. It accomplishes all of this in O(n) space complexity. (Overall performance: O(n) in time and space both.) Yet, for some strange (obvious) reason, nobody uses it for anything at all, preferring the far inferior O(n log n) algorithms and their ilk.

别想她 2024-07-19 00:50:57

  1. The Y-fast trie has O(log log u) time complexity for successor / predecessor queries, but it has relatively big constants, so a BST (which is O(log n)) is probably better; this is because log(n) is very small anyway in any practical use, so the constants matter the most.

  2. Fusion trees have an O(log n / log log u) query complexity but with very big constants, and a BST can achieve the same in O(log n), which is better again (also log log u is extremely small, so O(log n / log log u) = O(log n) for any practical purpose).

  3. The deterministic median algorithm is very slow even though it's O(n), so using a sort (O(n log n)) or the probabilistic version (whose worst case is far worse in theory, but which takes O(n) with very high probability, and the probability that it takes T*O(n) drops exponentially with T and n) is much better.

帅气称霸 2024-07-19 00:50:57

Iterative Deepening

When compared to a trivial depth-first search augmented with alpha-beta pruning, an iterative deepening search used in conjunction with a poor (or non-existent) branch-ordering heuristic would result in many more nodes being scanned. However, when a good branch-ordering heuristic is used, a significant portion of the tree is eliminated due to the enhanced effect of the alpha-beta pruning. A second advantage, unrelated to time or space complexity, is that a guess of the solution over the problem domain is established early, and that guess gets refined as the search progresses. It is this second advantage that makes it so appealing in many problem domains.
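
A minimal iterative-deepening depth-first search over an explicit acyclic graph (my illustration; the game-tree alpha-beta machinery the answer discusses is omitted):

    def depth_limited_dfs(graph, node, goal, limit):
        """Plain DFS that refuses to descend below the given depth limit."""
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in graph.get(node, []):
            path = depth_limited_dfs(graph, child, goal, limit - 1)
            if path is not None:
                return [node] + path
        return None

    def iterative_deepening(graph, start, goal, max_depth=20):
        """Re-runs depth-limited DFS with growing limits: shallow levels are revisited,
        but the shallowest solution is found first (as in BFS) while memory use stays
        proportional to the current depth (as in DFS)."""
        for limit in range(max_depth + 1):
            path = depth_limited_dfs(graph, start, goal, limit)
            if path is not None:
                return path
        return None

    # The example graph is acyclic; for cyclic graphs, also track the current path to avoid loops.
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": ["f"], "e": ["f"]}
    print(iterative_deepening(graph, "a", "f"))  # ['a', 'b', 'd', 'f']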

倚栏听风 2024-07-19 00:50:57
Quick-sort has worst case time complexity of O(N^2)! It is considered better than other sorting algorithms like mergesort, heapsort etc. which have O(N log n) time complexity in the worst case. The reasons may be:

  1. in-place sorting
  2. stability
  3. very small amount of code involved