Difference between breadth-first search and iterative deepening

Posted on 2024-09-04 20:21:13


I understand BFS and DFS, but for the life of me I cannot figure out the difference between iterative deepening and BFS. Apparently iterative deepening has the same memory usage as DFS, but I am unable to see how this is possible, as it just keeps expanding like BFS. If anyone can clarify, that would be awesome.

Tree to work on, if required:

    A
   / \
  B   C
 /   / \
D   E   F


Comments (5)

雪花飘飘的天空 2024-09-11 20:21:13


From what I understand, iterative deepening does DFS down to depth 1, then DFS down to depth 2, ... down to depth n, and so on until it finds no more levels.

For example, I think that tree would be read:

    read       visited        depth
    A          A              1
    ABC        ABAC           2
    ABDCEF     ABDBACECF      3

I believe it's pretty much doing a separate depth-limited DFS for each level and throwing away the memory.
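
A minimal sketch of that in Python (the dict encoding of the question's tree and the function names are mine, just for illustration); each pass prints one row of the "read" column above:

    # One depth-limited DFS per iteration, discarding all state between passes.
    TREE = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E', 'F'],
            'D': [], 'E': [], 'F': []}

    def depth_limited_dfs(node, limit, visited):
        """Plain DFS that refuses to descend more than `limit` levels."""
        visited.append(node)
        if limit > 1:
            for child in TREE[node]:
                depth_limited_dfs(child, limit - 1, visited)

    def iterative_deepening(root, max_depth):
        for limit in range(1, max_depth + 1):
            visited = []                    # the previous pass's memory is thrown away
            depth_limited_dfs(root, limit, visited)
            print(limit, ''.join(visited))  # prints: 1 A / 2 ABC / 3 ABDCEF

    iterative_deepening('A', 3)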

时光无声 2024-09-11 20:21:13


From my understanding of the algorithm, IDDFS (iterative-deepening depth-first search) is simply a depth-first search performed multiple times, deepening the level of nodes searched at each iteration. Therefore, the memory requirements are the same as depth-first search because the maximum depth iteration is just the full depth-first search.

Therefore, for the example of the tree you gave, the first iteration would visit node A, the second iteration would visit nodes A, B, and C, and the third iteration would visit all of the nodes of the tree.

The reason it is implemented like this is that, if the search has a time constraint, the algorithm will at least have some idea of what a "good scoring" node is when it hits the time limit before completing a full traversal of the tree.

This is different from a breadth-first search because at each iteration the nodes are visited just as they would be in a depth-first search, not in breadth-first order. Typically, an IDDFS implementation would store the "best scoring" node found in each iteration.
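
A rough sketch of that "anytime" use in Python, assuming a hypothetical score(node) evaluation function and a children(node) successor callback (neither appears in the answer above):

    import time

    def iddfs_best_so_far(root, children, score, deadline):
        # `children` and `score` are assumed callbacks; this is an
        # illustrative sketch, not the answerer's actual implementation.
        best_score, best_node = score(root), root
        depth = 1
        while time.monotonic() < deadline:
            stack, cut_off = [(root, depth)], False
            while stack:
                node, remaining = stack.pop()
                if score(node) > best_score:   # remember the best node seen so far
                    best_score, best_node = score(node), node
                if remaining > 1:
                    stack.extend((c, remaining - 1) for c in children(node))
                elif children(node):
                    cut_off = True             # the depth limit hid part of the tree
            if not cut_off:
                break        # the whole tree fit within the limit; we're done
            depth += 1       # deepen and restart from scratch
        return best_node

A real implementation would also check the clock inside the inner loop, so that a single long iteration cannot overshoot the deadline.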

随遇而安 2024-09-11 20:21:13


The memory usage is the maximum number of nodes it saves at any point, not the number of nodes visited.

At any time, IDDFS needs to store only the nodes on the branch it is expanding: only A and C if we are expanding C (in your example). BFS must save all the nodes at the depth it is searching. To see the effect, take a tree with branching factor 8 rather than 2. To search to a depth of 3, BFS needs to store a massive 64 nodes; IDDFS needs only 3.
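
To make those numbers concrete, here is a small self-contained check in Python (the tuple encoding of the complete tree is just for illustration):

    from collections import deque

    BRANCHING, DEPTH = 8, 3    # the numbers from the answer above

    def children(node):
        # Complete tree encoded as tuples of child indices; the root is ()
        # and sits at depth 1, matching the answer's counting.
        if len(node) + 1 >= DEPTH:
            return []
        return [node + (i,) for i in range(BRANCHING)]

    # BFS: the queue must hold an entire level at its widest point.
    frontier, peak = deque([()]), 1
    while frontier:
        frontier.extend(children(frontier.popleft()))
        peak = max(peak, len(frontier))
    print("BFS peak frontier size:", peak)   # -> 64

    # Depth-limited DFS: only the current root-to-node path is ever held.
    def deepest_path(node, depth=1):
        return max([depth] + [deepest_path(c, depth + 1) for c in children(node)])
    print("IDDFS peak path length:", deepest_path(()))   # -> 3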

你的往事 2024-09-11 20:21:13


In each iteration of iterative-deepening search, we have a limit, and we traverse the graph using the DFS approach; however, at each step of each iteration, we only need to keep track of the nodes on the path from the root down to depth d. That's the saving in memory.

For example, look at the last row of the picture below. If we had used BFS, we would have to keep track of all nodes up to depth 2. But because we are using DFS, we don't need to keep all of them in memory: some nodes have already been visited, so we don't need them, and others have not been visited yet, so we will add them later. We just keep the path to the root (the gray path).

[figure: iterative-deepening search trace on a binary tree; the gray path marks the root-to-node path kept in memory]

The picture is from Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.
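
A tiny sketch of that "keep only the path" behaviour in Python (the tree encoding and the printing are mine, for illustration); the recursion stack is exactly the gray root-to-node path from the figure:

    TREE = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E', 'F']}

    def dls(node, limit, path):
        # `path` is the only state kept: the root-to-current-node path.
        path.append(node)
        print(f"{' -> '.join(path)}   (nodes in memory: {len(path)})")
        if limit > 1:
            for child in TREE.get(node, []):
                dls(child, limit - 1, path)
        path.pop()   # backtrack: the node leaves memory once fully explored

    dls('A', 3, [])

Running it on the question's tree never holds more than three nodes at once, which is the depth of the tree rather than its width.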

路还长,别太狂 2024-09-11 20:21:13


The concept of DFS with iterative deepening is quite straightforward!

Main Idea:

  1. Modify the original DFS algorithm to stop at a provided maximum depth.
  2. Call that DFS with increasing maximum depths, i.e., 1, 2, 3, ..., n.

So in your case (for the given tree):

depth = 1

                   A

depth = 2

                   A
                  / \
                 B   C

depth = 3

                   A
                  / \
                 B   C
                /   / \
               D   E   F
                                

Now, let's suppose that there are 'b' actions per state and the solution has 'd' actions. Then,

  • Space : O(d)
  • Time : O(b^d)

Now the question arises: what, then, is the difference between DFS with iterative deepening and BFS?

The answer: DFS with iterative deepening is far better in terms of memory consumption than BFS. The worst-case space complexity of BFS is O(b^d).
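
A quick back-of-the-envelope check of those two bounds in Python (b = 8 and d = 3 are just sample numbers):

    # Level i of the tree holds b^i nodes.
    b, d = 8, 3

    bfs_nodes = sum(b**i for i in range(d + 1))              # each node expanded once
    # IDDFS re-expands level i in every iteration with limit >= i,
    # i.e. (d + 1 - i) times -- yet the deepest level still dominates.
    iddfs_nodes = sum((d + 1 - i) * b**i for i in range(d + 1))

    print(bfs_nodes, iddfs_nodes)         # 585 vs 668: the same O(b^d) growth
    print("BFS space  ~ O(b^d):", b**d)   # the whole deepest level in memory
    print("IDDFS space ~ O(d): ", d)      # just the current path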
