Why does Dijkstra's algorithm use decrease-key?

Published 2025-01-05 02:53:02

Dijkstra's algorithm was taught to me as follows:

while pqueue is not empty:
    distance, node = pqueue.delete_min()
    if node has been visited:
        continue
    else:
        mark node as visited
    if node == target:
        break
    for each neighbor of node:
        pqueue.insert(distance + distance_to_neighbor, neighbor)

But I've been doing some reading regarding the algorithm, and a lot of versions I see use decrease-key as opposed to insert.

Why is this, and what are the differences between the two approaches?
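For reference, the insert-based ("lazy deletion") pseudocode above can be fleshed out into runnable Python with the standard `heapq` module. The adjacency-dict representation below is an assumption for illustration, not something specified in the question:

```python
import heapq

def dijkstra_lazy(graph, source, target=None):
    """Insert-based ("lazy deletion") Dijkstra.

    graph: dict mapping node -> list of (neighbor, weight) pairs
           (an assumed representation for this sketch).
    Returns a dict of shortest distances from source to each settled node.
    """
    dist = {}                               # settled nodes = "visited"
    pq = [(0, source)]                      # (distance, node) entries
    while pq:
        distance, node = heapq.heappop(pq)  # delete_min
        if node in dist:                    # stale entry: node already visited
            continue
        dist[node] = distance               # mark visited, record distance
        if node == target:
            break
        for neighbor, weight in graph.get(node, ()):
            if neighbor not in dist:        # re-insert with the new priority
                heapq.heappush(pq, (distance + weight, neighbor))
    return dist
```

Note that a node may sit in the queue several times with different priorities; the `if node in dist: continue` check discards the stale copies, which is exactly the behavior the answers below analyze.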


阳光下的泡沫是彩色的 2025-01-12 02:53:02

The reason for using decrease-key rather than reinserting nodes is to keep the number of nodes in the priority queue small, thus keeping the total number of priority queue dequeues small and the cost of each priority queue balance low.

In an implementation of Dijkstra's algorithm that reinserts nodes into the priority queue with their new priorities, one node is added to the priority queue for each of the m edges in the graph. This means that there are m enqueue operations and m dequeue operations on the priority queue, giving a total runtime of O(m Te + m Td), where Te is the time required to enqueue into the priority queue and Td is the time required to dequeue from the priority queue.

In an implementation of Dijkstra's algorithm that supports decrease-key, the priority queue holding the nodes begins with n nodes in it and on each step of the algorithm removes one node. This means that the total number of heap dequeues is n. Each node will have decrease-key called on it potentially once for each edge leading into it, so the total number of decrease-keys done is at most m. This gives a runtime of O(n Te + n Td + m Tk), where Tk is the time required to call decrease-key.
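As a concrete sketch of this variant: Python's `heapq` offers no decrease-key, so the indexed binary heap below is hand-rolled purely for illustration (the graph representation, a dict of node → (neighbor, weight) lists, is also an assumption):

```python
import math

class IndexedMinHeap:
    """Minimal binary min-heap with decrease-key (illustrative, not optimized)."""

    def __init__(self, keys):
        # keys: dict node -> priority; pos tracks each node's heap index
        self.key = dict(keys)
        self.heap = list(self.key)
        self.pos = {v: i for i, v in enumerate(self.heap)}
        for i in range(len(self.heap) // 2 - 1, -1, -1):
            self._sift_down(i)

    def _swap(self, i, j):
        h = self.heap
        h[i], h[j] = h[j], h[i]
        self.pos[h[i]], self.pos[h[j]] = i, j

    def _sift_up(self, i):
        while i > 0 and self.key[self.heap[i]] < self.key[self.heap[(i - 1) // 2]]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            c = 2 * i + 1
            if c >= n:
                return
            if c + 1 < n and self.key[self.heap[c + 1]] < self.key[self.heap[c]]:
                c += 1                      # pick the smaller child
            if self.key[self.heap[c]] >= self.key[self.heap[i]]:
                return
            self._swap(i, c)
            i = c

    def pop_min(self):
        node = self.heap[0]
        key = self.key.pop(node)
        last = self.heap.pop()
        del self.pos[node]
        if self.heap:                       # move last element to the root
            self.heap[0] = last
            self.pos[last] = 0
            self._sift_down(0)
        return key, node

    def decrease_key(self, node, new_key):
        if new_key < self.key[node]:        # keys only ever move down
            self.key[node] = new_key
            self._sift_up(self.pos[node])

def dijkstra_decrease_key(graph, source):
    """Decrease-key Dijkstra: all n nodes enqueued once up front,
    exactly n pop_mins, and at most m decrease-key calls (one per edge)."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    heap = IndexedMinHeap(dist)
    done = {}
    while heap.heap:
        d, node = heap.pop_min()
        done[node] = d
        for neighbor, weight in graph[node]:
            if neighbor not in done:
                heap.decrease_key(neighbor, d + weight)  # relax the edge
    return done
```

Unlike the insert-based version, the heap never holds more than n entries, which is the point made above.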

So what effect does this have on the runtime? That depends on what priority queue you use. Here's a quick table that shows off different priority queues and the overall runtimes of the different Dijkstra's algorithm implementations:

Queue          |  T_e   |  T_d   |  T_k   | w/o Dec-Key |   w/Dec-Key
---------------+--------+--------+--------+-------------+---------------
Binary Heap    |O(log N)|O(log N)|O(log N)| O(M log N)  |   O(M log N)
Binomial Heap  |O(log N)|O(log N)|O(log N)| O(M log N)  |   O(M log N)
Fibonacci Heap |  O(1)  |O(log N)|  O(1)  | O(M log N)  | O(M + N log N)

As you can see, with most types of priority queues, there really isn't a difference in the asymptotic runtime, and the decrease-key version isn't likely to do much better. However, if you use a Fibonacci heap implementation of the priority queue, then indeed Dijkstra's algorithm will be asymptotically more efficient when using decrease-key.

In short, using decrease-key, plus a good priority queue, can drop the asymptotic runtime of Dijkstra's beyond what's possible if you keep doing enqueues and dequeues.

Besides this point, some more advanced algorithms, such as Gabow's Shortest Paths Algorithm, use Dijkstra's algorithm as a subroutine and rely heavily on the decrease-key implementation. They use the fact that if you know the range of valid distances in advance, you can build a super efficient priority queue based on that fact.
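To illustrate that last point: when distances are small integers with a known upper bound, a simple bucket queue (the structure behind Dial's variant of Dijkstra; this is a sketch of the bounded-range idea, not Gabow's actual data structure) supports O(1) insert, and all pop-min calls together cost only O(max_key + n):

```python
class BucketQueue:
    """Monotone bucket priority queue for integer keys in [0, max_key].

    Correct only when keys are popped in nondecreasing order, as in
    Dijkstra; decrease-key can be simulated by inserting again at the
    lower key and skipping stale copies lazily in the caller.
    """

    def __init__(self, max_key):
        self.buckets = [[] for _ in range(max_key + 1)]
        self.cursor = 0  # smallest bucket that may still be nonempty

    def insert(self, key, item):
        self.buckets[key].append(item)

    def pop_min(self):
        while not self.buckets[self.cursor]:
            self.cursor += 1  # never rewinds: keys are monotone
        return self.cursor, self.buckets[self.cursor].pop()
```

The cursor only ever moves forward, which is what makes the total scan cost linear in the key range rather than logarithmic per operation.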

Hope this helps!

不必你懂 2025-01-12 02:53:02

In 2007, there was a paper that studied the differences in execution time between using the decrease-key version and the insert version. See http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf

Their basic conclusion was not to use decrease-key for most graphs. Especially for sparse graphs, the non-decrease-key version is significantly faster than the decrease-key version. See the paper for more details.

趁微风不噪 2025-01-12 02:53:02

There are two ways to implement Dijkstra: one uses a heap that supports decrease-key, and the other uses a heap that doesn't.

They are both valid in general, but the latter is usually preferred.
In the following I'll use 'm' to denote the number of edges and 'n' to denote the number of vertices of our graph:

If you want the best possible worst-case complexity, you would go with a Fibonacci heap that supports decrease-key: you'll get a nice O(m + nlogn).

If you care about the average case, you could use a Binary heap as well: you'll get O(m + nlog(m/n)logn). Proof is here, pages 99/100. If the graph is dense (m >> n), both this one and the previous tend to O(m).

If you want to know what happens if you run them on real graphs, you could check this paper, as Mark Meketon suggested in his answer.

What the experimental results show is that a "simpler" heap gives the best results in most cases.

In fact, among the implementations that use a decrease-key, Dijkstra performs better when using a simple Binary heap or a Pairing heap than when it uses a Fibonacci heap. This is because Fibonacci heaps involve larger constant factors and the actual number of decrease-key operations tends to be much smaller than what the worst case predicts.

For similar reasons, a heap that doesn't have to support a decrease-key operation has even smaller constant factors and actually performs best, especially if the graph is sparse.
