Dynamic programming and divide-and-conquer
I was reading notes on dynamic programming, and I encountered the following comment:

    If the subproblems are not independent, i.e. subproblems share
    subsubproblems, then a divide-and-conquer algorithm repeatedly
    solves the common subsubproblems. Thus, it does more work than
    necessary.

What does this mean? Can you give me examples to make the above point clear?
The author refers to the fact that many divide-and-conquer algorithms have subproblems that overlap with one another. Consider, for example, this very simple Fibonacci implementation:
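(a minimal Python sketch, assuming the usual convention Fibonacci(0) = 0 and Fibonacci(1) = 1)

    def Fibonacci(n):
        # Base cases: Fibonacci(0) = 0, Fibonacci(1) = 1.
        if n <= 1:
            return n
        # Naive divide and conquer: solve both subproblems independently,
        # even though they share subsubproblems.
        return Fibonacci(n - 1) + Fibonacci(n - 2)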
If you trace out the calls made to compute Fibonacci(4), you get the following (indentation shows which call spawned which recursive calls):
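    Fibonacci(4)
        Fibonacci(3)
            Fibonacci(2)
                Fibonacci(1)
                Fibonacci(0)
            Fibonacci(1)
        Fibonacci(2)
            Fibonacci(1)
            Fibonacci(0)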
In other words, 9 total function calls are made, but there are only five unique calls here (Fibonacci of 0 through 4, inclusive). This algorithm could be made much more efficient if the results of the recursive calls were shared across the subproblems rather than recomputed from scratch each time. This is one of the key ideas behind dynamic programming.
To compute F(n) (the nth Fibonacci number), the above code will make a total of 2F(n+1) - 1 recursive calls. Since the Fibonacci numbers grow exponentially quickly, this requires an exponential amount of work. However, it's possible to exploit the fact that many recursive calls are identical to simplify this dramatically. Rather than starting at Fibonacci(4) and working down, let's start at Fibonacci(0) and work up. Specifically, we'll build a table of length 5 (let's call it FTable) and fill it in as follows:
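(a sketch of the initial table; only the two base cases are known at this point)

    FTable = [None] * 5   # one slot for each of Fibonacci(0) through Fibonacci(4)
    FTable[0] = 0         # base case
    FTable[1] = 1         # base case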
Now, suppose we want to compute FTable[2]. This requires us to know FTable[0] and FTable[1], but we already know those values because they're in the table. We can thus set
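    FTable[2] = FTable[0] + FTable[1]   # 0 + 1 = 1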
Using similar logic, we can compute FTable[3] from FTable[2] and FTable[1]:
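    FTable[3] = FTable[2] + FTable[1]   # 1 + 1 = 2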
And FTable[4] from FTable[2] and FTable[3]:
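    FTable[4] = FTable[3] + FTable[2]   # 2 + 1 = 3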
Notice how we avoided making lots of overlapping recursive calls by just building up the answers in the reverse order! This computes Fibonacci numbers in time O(n), which is exponentially faster than before. Using some more advanced math we can do even better than this (for example, matrix exponentiation brings it down to O(log n) arithmetic operations), but this does illustrate why dynamic programming can take infeasible problems and make them suddenly feasible.
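Putting the table idea together into a single function, a bottom-up version might look like the following sketch (fibonacci_dp is just an illustrative name, not from the original answer):

    def fibonacci_dp(n):
        # Bottom-up dynamic programming: fill the table from the base cases
        # upward, so each subproblem is solved exactly once, in O(n) time.
        if n <= 1:
            return n
        FTable = [0] * (n + 1)
        FTable[1] = 1
        for i in range(2, n + 1):
            FTable[i] = FTable[i - 1] + FTable[i - 2]
        return FTable[n]

    print(fibonacci_dp(4))   # prints 3, matching the trace of Fibonacci(4) above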
Hope this helps!