Heap vs Binary Search Tree (BST)

What is the difference between a heap and BST?

When to use a heap and when to use a BST?

If you want to get the elements in a sorted fashion, is BST better over heap?

Comments (8)

怪异←思 2024-11-17 04:07:25

Summary

          Type      BST (*)   Heap
Insert    average   log(n)    1
Insert    worst     log(n)    log(n) or n (***)
Find any  worst     log(n)    n
Find max  worst     1 (**)    1
Create    worst     n log(n)  n
Delete    worst     log(n)    log(n)

All average times on this table are the same as their worst times except for Insert.

  • *: everywhere in this answer, BST == Balanced BST, since unbalanced sucks asymptotically
  • **: using a trivial modification explained in this answer
  • ***: log(n) for pointer tree heap, n for dynamic array heap

Advantages of binary heap over a BST

  • average time insertion into a binary heap is O(1); for a BST it is O(log(n)). This is the killer feature of heaps.

    There are also other heaps which reach O(1) amortized (stronger) like the Fibonacci Heap, and even worst case, like the Brodal queue, although they may not be practical because of non-asymptotic performance: Are Fibonacci heaps or Brodal queues used in practice anywhere?

  • binary heaps can be efficiently implemented on top of either dynamic arrays or pointer-based trees, while BSTs need pointer-based trees. So for the heap we can choose the more space-efficient array implementation, if we can afford occasional resize latencies.

  • binary heap creation is O(n) worst case, O(n log(n)) for BST.
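
For a rough illustration of these complexities with the standard containers (a minimal sketch of my own, assuming std::vector/std::make_heap for the heap and std::set for the balanced BST):

    #include <algorithm>
    #include <set>
    #include <vector>

    int main() {
        std::vector<int> v = {5, 3, 8, 1, 9, 2, 7};

        // Heap creation: std::make_heap builds a max-heap in O(n).
        std::make_heap(v.begin(), v.end());      // v.front() is now the max

        // BST creation: inserting n unsorted elements into a red-black tree is O(n log n).
        std::set<int> bst(v.begin(), v.end());

        // Heap insert: O(1) on average, O(log n) worst case.
        v.push_back(4);
        std::push_heap(v.begin(), v.end());

        // BST insert: O(log n).
        bst.insert(4);
    }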

Advantage of BST over binary heap

  • search for arbitrary elements is O(log(n)). This is the killer feature of BSTs.

    For heap, it is O(n) in general, except for the largest element which is O(1).

"False" advantage of heap over BST

  • heap is O(1) to find max, BST O(log(n)).

    This is a common misconception, because it is trivial to modify a BST to keep track of the largest element, and update it whenever that element could be changed: on insertion of a larger one swap, on removal find the second largest. Can we use binary search tree to simulate heap operation? (mentioned by Yeo).

    Actually, this is a limitation of heaps compared to BSTs: the only efficient search is that for the largest element.
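
As a sketch of how cheap this is in practice (assuming std::set as the balanced BST), the min and max are simply the first and last elements of the ordered container, and common implementations keep the leftmost/rightmost nodes cached:

    #include <cassert>
    #include <set>

    int main() {
        std::set<int> bst = {5, 3, 8, 1, 9};

        // Min and max are the first and last elements of the ordered set.
        // begin() is guaranteed O(1); the max is also effectively O(1) in
        // common implementations, which cache the rightmost node.
        assert(*bst.begin() == 1);
        assert(*bst.rbegin() == 9);

        // The extremes stay up to date across modifications.
        bst.insert(10);
        bst.erase(9);
        assert(*bst.rbegin() == 10);
    }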

Average binary heap insert is O(1)

Sources:

  • Paper: http://i.stanford.edu/pub/cstr/reports/cs/tr/74/460/CS-TR-74-460.pdf
  • WSU slides: https://web.archive.org/web/20161109132222/http://www.eecs.wsu.edu/~holder/courses/CptS223/spr09/slides/heaps.pdf

Intuitive argument:

  • bottom tree levels have exponentially more elements than top levels, so new elements are almost certain to go at the bottom
  • heap insertion starts from the bottom, BST must start from the top

In a binary heap, increasing the value at a given index is also O(1) for the same reason. But if you want to do that, it is likely that you will want to keep an extra index up-to-date on heap operations (How to implement O(logn) decrease-key operation for min-heap based Priority Queue?), e.g. for Dijkstra. Possible at no extra time cost.
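
A minimal sketch of that idea (a hypothetical IndexedMinHeap of my own; keys and priorities are plain ints here), where a side index maps each key to its slot so decrease-key can locate the node in O(1) and sift it up in O(log n):

    #include <cassert>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Minimal indexed min-heap: `pos` maps each key to its slot in `data`,
    // so decrease_priority can find the node in O(1) and sift it up in O(log n).
    struct IndexedMinHeap {
        std::vector<std::pair<int, int>> data;   // (priority, key)
        std::unordered_map<int, std::size_t> pos;  // key -> index in data

        void swap_nodes(std::size_t a, std::size_t b) {
            std::swap(data[a], data[b]);
            pos[data[a].second] = a;
            pos[data[b].second] = b;
        }

        void sift_up(std::size_t i) {
            while (i > 0 && data[(i - 1) / 2].first > data[i].first) {
                swap_nodes(i, (i - 1) / 2);
                i = (i - 1) / 2;
            }
        }

        void push(int priority, int key) {
            data.push_back({priority, key});
            pos[key] = data.size() - 1;
            sift_up(data.size() - 1);
        }

        // Dijkstra-style relaxation: lower the priority of an existing key.
        void decrease_priority(int key, int new_priority) {
            std::size_t i = pos.at(key);
            if (new_priority < data[i].first) {
                data[i].first = new_priority;
                sift_up(i);
            }
        }
    };

    int main() {
        IndexedMinHeap h;
        h.push(10, /*key=*/1);
        h.push(7, /*key=*/2);
        h.decrease_priority(1, 3);       // key 1 now has the smallest priority
        assert(h.data[0].second == 1);
    }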

GCC C++ standard library insert benchmark on real hardware

I benchmarked the C++ std::set (Red-black tree BST) and std::priority_queue (dynamic array heap) insert to see if I was right about the insert times, and this is what I got:

(figure: average insert time per element for std::set vs std::priority_queue)

  • benchmark code
  • plot script
  • plot data: https://github.com/cirosantilli/media/blob/f5e3457835746c2a319664160a897ed264e16622/data/bst_vs_heap_vs_hashmap.dat
  • tested on Ubuntu 19.04, GCC 8.3.0 in a Lenovo ThinkPad P51 laptop with CPU: Intel Core i7-7820HQ CPU (4 cores / 8 threads, 2.90 GHz base, 8 MB cache), RAM: 2x Samsung M471A2K43BB1-CRC (2x 16GiB, 2400 Mbps), SSD: Samsung MZVLB512HAJQ-000L7 (512GB, 3,000 MB/s)
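
For a flavor of what such a measurement could look like (an illustrative timing loop under my own assumptions, not the benchmark code linked above):

    #include <chrono>
    #include <cstdio>
    #include <queue>
    #include <random>
    #include <set>

    // Time every batch of 10k inserts separately, mirroring the
    // "average every 10k inserts" idea described below; all names
    // and sizes here are illustrative choices.
    template <class Insert>
    void run(const char* label, Insert insert) {
        std::mt19937 rng(42);
        const int n = 1'000'000, batch = 10'000;
        for (int done = 0; done < n; done += batch) {
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < batch; ++i) insert(static_cast<int>(rng()));
            auto t1 = std::chrono::steady_clock::now();
            double ns_per_insert =
                std::chrono::duration<double, std::nano>(t1 - t0).count() / batch;
            std::printf("%s %d %.1f\n", label, done + batch, ns_per_insert);
        }
    }

    int main() {
        std::set<int> bst;                 // red-black tree BST
        std::priority_queue<int> heap;     // dynamic array heap
        run("set",  [&](int x) { bst.insert(x); });
        run("heap", [&](int x) { heap.push(x); });
    }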

So clearly:

  • heap insert time is basically constant.

    We can clearly see dynamic array resize points. Since we are averaging every 10k inserts to be able to see anything at all above system noise, those peaks are in fact about 10k times larger than shown!

    The zoomed graph excludes essentially only the array resize points, and shows that almost all inserts fall under 25 nanoseconds.

  • BST is logarithmic. All inserts are much slower than the average heap insert.

  • BST vs hashmap detailed analysis at: What data structure is inside std::map in C++?

GCC C++ standard library insert benchmark on gem5

gem5 is a full system simulator, and therefore provides an infinitely accurate clock with m5 dumpstats. So I tried to use it to estimate timings for individual inserts.

(figure: individual insert timings for std::set vs std::priority_queue, measured on gem5)

Interpretation:

  • heap is still constant, but now we see in more detail that there are a few lines, and each higher line is more sparse.

    This must correspond to the memory access latencies incurred by higher and higher inserts.

  • TODO I can't really interpret the BST one fully, as it does not look very logarithmic and looks somewhat more constant.

    With this greater detail however we can also see a few distinct lines, but I'm not sure what they represent: I would expect the bottom line to be thinner, since we insert top to bottom?

Benchmarked with this Buildroot setup on an aarch64 HPI CPU.

BST cannot be efficiently implemented on an array

Heap operations only need to bubble up or down a single tree branch, so O(log(n)) worst case swaps, O(1) average.

Keeping a BST balanced requires tree rotations, which can change the top element for another one, and would require moving the entire array around (O(n)).

Heaps can be efficiently implemented on an array

Parent and children indexes can be computed from the current index as shown here.

There are no balancing operations like BST.

Delete min is the most worrying operation as it has to be top down. But it can always be done by "percolating down" a single branch of the heap as explained here. This leads to an O(log(n)) worst case, since the heap is always well balanced.
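
A compact sketch of both operations on a 0-based array min-heap (illustrative code of my own, not taken from the linked explanation):

    #include <cassert>
    #include <utility>
    #include <vector>

    // 0-based array min-heap: parent/children are found by index arithmetic,
    // so no pointers and no balancing are needed.
    static std::size_t parent(std::size_t i) { return (i - 1) / 2; }
    static std::size_t left(std::size_t i)   { return 2 * i + 1; }
    static std::size_t right(std::size_t i)  { return 2 * i + 2; }

    // Delete-min: move the last element to the root, then "percolate down"
    // a single branch; O(log n) worst case because the tree stays complete.
    static void pop_min(std::vector<int>& h) {
        h.front() = h.back();
        h.pop_back();
        std::size_t i = 0;
        while (true) {
            std::size_t smallest = i;
            if (left(i)  < h.size() && h[left(i)]  < h[smallest]) smallest = left(i);
            if (right(i) < h.size() && h[right(i)] < h[smallest]) smallest = right(i);
            if (smallest == i) break;
            std::swap(h[i], h[smallest]);
            i = smallest;
        }
    }

    // Insert: append at the bottom and bubble up; O(log n) worst, O(1) average.
    static void push(std::vector<int>& h, int x) {
        h.push_back(x);
        for (std::size_t i = h.size() - 1; i > 0 && h[parent(i)] > h[i]; i = parent(i))
            std::swap(h[i], h[parent(i)]);
    }

    int main() {
        std::vector<int> h;
        for (int x : {5, 1, 4, 2, 3}) push(h, x);
        assert(h.front() == 1);
        pop_min(h);
        assert(h.front() == 2);
    }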

If you are inserting a single node for every one you remove, then you lose the advantage of the asymptotic O(1) average insert that heaps provide as the delete would dominate, and you might as well use a BST. Dijkstra however updates nodes several times for each removal, so we are fine.

Dynamic array heaps vs pointer tree heaps

Heaps can be efficiently implemented on top of pointer trees: Is it possible to make efficient pointer-based binary heap implementations?

The dynamic array implementation is more space efficient. Suppose that each heap element contains just a pointer to a struct:

  • the tree implementation must store three pointers for each element: parent, left child and right child. So the memory usage is always 4n (3 tree pointers + 1 struct pointer).

    Tree BSTs would also need to store further balancing information, e.g. red-black coloring.

  • the dynamic array implementation can be of size 2n just after a doubling. So on average it is going to be 1.5n.

On the other hand, the tree heap has better worst case insert, because copying the backing dynamic array to double its size takes O(n) worst case, while the tree heap just does new small allocations for each node.

Still, the backing array doubling is O(1) amortized, so it comes down to a maximum latency consideration. Mentioned here.

Philosophy

  • BSTs maintain a global property between a parent and all descendants (left smaller, right bigger).

    The top node of a BST is the middle element, which requires global knowledge to maintain (knowing how many smaller and larger elements are there).

    This global property is more expensive to maintain (log n insert), but gives more powerful searches (log n search).

  • Heaps maintain a local property between parent and direct children (parent > children).

    The top node of a heap is the big element, which only requires local knowledge to maintain (knowing your parent).

Comparing BST vs Heap vs Hashmap:

  • BST: can be a reasonable:

    • unordered set (a structure that determines if an element was previously inserted or not). But hashmap tends to be better due to O(1) amortized insert.
    • sorting machine. But heap is generally better at that, which is why heapsort is much more widely known than tree sort
  • heap: is just a sorting machine. Cannot be an efficient unordered set, because you can only check for the smallest/largest element fast.

  • hash map: can only be an unordered set, not an efficient sorting machine, because the hashing mixes up any ordering.

Doubly-linked list

A doubly linked list can be seen as a subset of the heap where the first item has the greatest priority, so let's compare them here as well:

  • insertion:
    • position:
      • doubly linked list: the inserted item must be either the first or last, as we only have pointers to those elements (unless we have a pointer to the position of interest e.g. during iteration)
      • binary heap: the inserted item can end up in any position. Less restrictive than linked list.
    • time:
      • doubly linked list: O(1) worst case since we have pointers to the items, and the update is really simple
      • binary heap: O(1) average, thus worse than linked list. Tradeoff for having more general insertion position.
  • search: O(n) for both

A use case for this is when the key of the heap is the current timestamp: in that case, new entries will always go to the beginning of the list. So we can even forget the exact timestamp altogether, and just keep the position in the list as the priority.

This can be used to implement an LRU cache. Just like for heap applications like Dijkstra, you will want to keep an additional hashmap from the key to the corresponding node of the list, to find which node to update quickly.
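
A minimal sketch of that combination (an illustrative LruCache class of my own, assuming std::list as the doubly linked list and std::unordered_map as the key-to-node index):

    #include <cassert>
    #include <cstddef>
    #include <list>
    #include <unordered_map>
    #include <utility>

    // LRU cache sketch: the list order itself is the priority (front = most
    // recently used), and the hashmap maps a key to its list node so we can
    // move or evict it in O(1).
    class LruCache {
        std::size_t capacity_;
        std::list<std::pair<int, int>> items_;  // (key, value), front = newest
        std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index_;

    public:
        explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

        void put(int key, int value) {
            auto it = index_.find(key);
            if (it != index_.end()) items_.erase(it->second);  // drop old position
            items_.emplace_front(key, value);
            index_[key] = items_.begin();
            if (items_.size() > capacity_) {                   // evict least recently used
                index_.erase(items_.back().first);
                items_.pop_back();
            }
        }

        bool get(int key, int& value) {
            auto it = index_.find(key);
            if (it == index_.end()) return false;
            items_.splice(items_.begin(), items_, it->second);  // move to front, O(1)
            value = it->second->second;
            return true;
        }
    };

    int main() {
        LruCache cache(2);
        int v = 0;
        cache.put(1, 10);
        cache.put(2, 20);
        cache.get(1, v);          // key 1 becomes most recently used
        cache.put(3, 30);         // evicts key 2
        assert(!cache.get(2, v));
        assert(cache.get(1, v) && v == 10);
    }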

Comparison of different Balanced BST

Although the asymptotic insert and find times for all data structures that are commonly classified as "Balanced BSTs" that I've seen so far are the same, different BBSTs do have different trade-offs. I haven't fully studied this yet, but it would be good to summarize these trade-offs here:

  • Red-black tree. Appears to be the most commonly used BBST as of 2019, e.g. it is the one used by the GCC 8.3.0 C++ implementation
  • AVL tree. Appears to be a bit more strictly balanced than BST, so it could be better for find latency, at the cost of slightly more expensive inserts. Wiki summarizes: "AVL trees are often compared with red–black trees because both support the same set of operations and take [the same] time for the basic operations. For lookup-intensive applications, AVL trees are faster than red–black trees because they are more strictly balanced. Similar to red–black trees, AVL trees are height-balanced. Both are, in general, neither weight-balanced nor mu-balanced for any mu < 1/2; that is, sibling nodes can have hugely differing numbers of descendants."
  • WAVL. The original paper mentions advantages of that version in terms of bounds on rebalancing and rotation operations.

See also

Similar question on CS: https://cs.stackexchange.com/questions/27860/whats-the-difference-between-a-binary-search-tree-and-a-binary-heap

蓝海 2024-11-17 04:07:25

Heap just guarantees that elements on higher levels are greater (for max-heap) or smaller (for min-heap) than elements on lower levels, whereas BST guarantees order (from "left" to "right"). If you want sorted elements, go with BST.

揪着可爱 2024-11-17 04:07:25

When to use a heap and when to use a BST

Heap is better at findMin/findMax (O(1)), while BST is good at all finds (O(logN)). Insert is O(logN) for both structures. If you only care about findMin/findMax (e.g. priority-related), go with heap. If you want everything sorted, go with BST.

First few slides from here explain things very clearly.

酒浓于脸红 2024-11-17 04:07:25

As mentioned by others, Heap can do findMin or findMax in O(1) but not both in the same data structure. However I disagree that Heap is better in findMin/findMax. In fact, with a slight modification, the BST can do both findMin and findMax in O(1).

In this modified BST, you keep track of the min node and max node every time you do an operation that can potentially modify the data structure. For example, in the insert operation you can check if the min value is larger than the newly inserted value, and then assign the min to the newly added node. The same technique can be applied to the max value. Hence, this BST contains this information, which you can retrieve in O(1). (same as binary heap)

In this BST (Balanced BST), when you pop min or pop max, the next min value to be assigned is the successor of the min node, whereas the next max value to be assigned is the predecessor of the max node. Thus it performs in O(1). However we need to re-balance the tree, so it will still run in O(log n). (same as binary heap)

I would be interested to hear your thought in the comment below. Thanks :)

Update

Cross reference to similar question Can we use binary search tree to simulate heap operation? for more discussion on simulating Heap using BST.

习ぎ惯性依靠 2024-11-17 04:07:25

A binary search tree uses the definition that, for every node, the node to the left of it has a lesser value (key) and the node to the right of it has a greater value (key).

Whereas the heap, being an implementation of a binary tree, uses the following definition:

If A and B are nodes, where B is a child node of A, then the value (key) of A must be larger than or equal to the value (key) of B. That is,
key(A) ≥ key(B).

http://wiki.answers.com/Q/Difference_between_binary_search_tree_and_heap_tree
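
As a tiny illustration (my own, not from the linked page), this heap ordering is exactly what std::is_heap checks for an array-backed max-heap, while the BST's total order corresponds to sortedness under an in-order traversal:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    int main() {
        // Max-heap stored in an array: key(parent) >= key(child) for every child.
        std::vector<int> heap = {9, 5, 8, 1, 3, 7};
        assert(std::is_heap(heap.begin(), heap.end()));

        // In contrast, a BST's in-order traversal yields a fully sorted sequence.
        std::vector<int> in_order = {1, 3, 5, 7, 8, 9};
        assert(std::is_sorted(in_order.begin(), in_order.end()));
    }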

I ran into the same question today in my exam and I got it right. Smile... :)

若有似无的小暗淡 2024-11-17 04:07:25

Another use of a BST over a heap, because of an important difference:

  • finding the successor and predecessor in a BST will take O(h) time (O(log n) in a balanced BST),
  • while in a heap, it would take O(n) time to find the successor or predecessor of some element.

Use of a BST over a heap: let's say we use a data structure to store the landing times of flights. We cannot schedule a flight to land if the difference in landing times is less than 'd'. And assume many flights have already been scheduled to land in a data structure (BST or heap).

Now, we want to schedule another flight which will land at time t. Hence, we need to calculate the difference of t with its successor and predecessor (both should be > d).
Thus, we need a BST for this, which does it fast, i.e. in O(log n) if balanced.
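
A small sketch of that check (hypothetical schedule() helper, assuming std::set as the balanced BST):

    #include <cassert>
    #include <cstdlib>
    #include <iterator>
    #include <set>

    // Try to schedule a landing at time t: the successor and predecessor
    // already in the BST must both be more than d away, otherwise reject.
    // O(log n) per request with a balanced BST; a heap would need O(n) here.
    bool schedule(std::set<int>& landings, int t, int d) {
        auto next = landings.lower_bound(t);  // successor (or an equal time)
        if (next != landings.end() && std::abs(*next - t) <= d) return false;
        if (next != landings.begin() && std::abs(*std::prev(next) - t) <= d) return false;
        landings.insert(t);
        return true;
    }

    int main() {
        std::set<int> landings;
        const int d = 3;
        assert(schedule(landings, 10, d));
        assert(!schedule(landings, 12, d));   // too close to 10
        assert(schedule(landings, 14, d));
    }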

Edit:

A BST takes O(n) time to print its elements in sorted order (in-order traversal), while a heap can do it in O(n log n) time.
The heap extracts the min element and re-heapifies the array, which makes it do the sorting in O(n log n) time.
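
For a concrete feel of this difference (an illustrative snippet of my own): iterating a std::set is the O(n) in-order traversal, while getting sorted output from a std::priority_queue means popping n times at O(log n) each:

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <set>
    #include <vector>

    int main() {
        std::vector<int> input = {5, 1, 4, 2, 3};

        // BST: in-order iteration prints sorted order in O(n) (after an O(n log n) build).
        std::set<int> bst(input.begin(), input.end());
        for (int x : bst) std::printf("%d ", x);
        std::printf("\n");

        // Heap: each pop re-heapifies, so draining everything in order is O(n log n).
        std::priority_queue<int, std::vector<int>, std::greater<int>> heap(
            input.begin(), input.end());
        while (!heap.empty()) {
            std::printf("%d ", heap.top());
            heap.pop();
        }
        std::printf("\n");
    }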

胡大本事 2024-11-17 04:07:25

Inserting all n elements from an array into a BST takes O(n log n). The n elements of an array can be inserted into a heap in O(n) time, which gives the heap a definite advantage.

初见你 2024-11-17 04:07:25

Heap is a concrete data structure to implement a priority queue.

There is no reason to keep our branching factor fixed and equal to 2.
In general we could have d-ary trees.

Trees used for heaps are complete trees. Heaps are created to get the highest-priority element in the data. Binary search trees, however, as the name says, are for searching.

  • Heaps are not good for checking whether or not an element is stored in them. You have no choice other than going through all the elements until you find the one you are looking for, or you reach the end of the array. This means a linear-time algorithm. It is O(log(n)) for binary search trees.

  • A Binary Search Tree doesn't allow duplicates, whereas the heap does.

  • The Binary Search Tree is ordered, but the Heap is not.

  • Insert and remove take O(log(n)) time in a heap. In a Binary Search Tree, this may take up to O(n) time if the tree is completely unbalanced. In the image below you see two BSTs: the right one is chained, so insert and remove might take O(n), but for the left one they take O(log(n)).

(figure: a balanced BST on the left vs. a chained, degenerate BST on the right)

  • A heap can be built in linear time (with heapify), whereas the BST needs O(n log(n)) to be created.

  • Heaps are complete trees. This is a very important fact, because when you insert, you have to insert at the last position so that after swapping you still have a complete tree structure. Similarly, when you delete, deletion starts from the root: you delete the root, and then, to keep the complete tree structure, the last element in the tree should be inserted into the root's place.

Reference
