Why does the 0/1 Knapsack problem need a 2-D array for memoization while the House Robber problem only needs a 1-D array?

Posted 2025-01-15 12:08:49

I'm asking this in reference to dynamic programming; I'm a beginner at it. I understood the House Robber problem well and found 0/1 Knapsack similar to it. But when I tried to code it up in a similar way using a 1-D array, it gave wrong answers. The reference solution uses a 2-D array, which confuses me: why is a 2-D array needed to store the remaining/occupied weight when we are already passing the remaining weight down through the recursion? Any help would be appreciated.

def knapSack(self,sackw, weights, values, n):
    # my approach using a 1-D array to memoize
    dp = [-1]*n
    def recur(i, weightleft):
        if weightleft <= 0 or i >= n:
            return 0
        if weights[i] > weightleft:
            return recur(i+1, weightleft)
        if dp[i] != -1:
            return dp[i]
        else:
            res = dp[i] = max(values[i] + recur(i+1, weightleft - weights[i]), recur(i+1, weightleft))
        return res
    
    return recur(0, sackw)

The given solution, which uses a 2-D array, is below:

def solve_knapsack(profits, weights, capacity):
    # memo table: one row per item index, one column per remaining capacity
    dp = [[-1 for x in range(capacity + 1)] for y in range(len(profits))]
    return knapsack_recursive(dp, profits, weights, capacity, 0)


def knapsack_recursive(dp, profits, weights, capacity, currentIndex):
    # base checks
    if capacity <= 0 or currentIndex >= len(profits):
        return 0

    # if we have already solved a similar problem, return the result from memory
    if dp[currentIndex][capacity] != -1:
        return dp[currentIndex][capacity]

    # recursive call after choosing the element at the currentIndex
    # if the weight of the element at currentIndex exceeds the capacity, we
    # shouldn't process this
    profit1 = 0
    if weights[currentIndex] <= capacity:
        profit1 = profits[currentIndex] + knapsack_recursive(
            dp, profits, weights, capacity - weights[currentIndex], currentIndex + 1)

    # recursive call after excluding the element at the currentIndex
    profit2 = knapsack_recursive(
        dp, profits, weights, capacity, currentIndex + 1)

    dp[currentIndex][capacity] = max(profit1, profit2)
    return dp[currentIndex][capacity]

Test Cases:

  1. weights = [4,5,1], values = [1,2,3], n = 3, sackweight = 4. Expected output = 3.

This is the only test case that passes; the rest give wrong answers, and they are too big to post here.
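
For reference, the expected value of 3 here comes from taking only the third item (weight 1, value 3). With the 2-D solution above, a quick check would look like:

# weights = [4, 5, 1], values/profits = [1, 2, 3], capacity = 4
print(solve_knapsack([1, 2, 3], [4, 5, 1], 4))  # expected output: 3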

Answer by 久随 (2025-01-22 12:08:49):

"found 0/1 Knapsack similar to it"

From what I understand, this assumption is incorrect.

In the 0/1 Knapsack problem, you have to maximize the value while keeping the total weight under a given limit. There's no restriction on which combination of items you may select (no constraint like "you can't select consecutive items" or so). The DP state has to track both the current index and the remaining weight, with the best value being what you store for that state, so you use a 2-D memo: the same index with a different remaining weight is a different subproblem.
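
To make the two dimensions concrete, here is a bottom-up sketch (my own illustration with my own names, not code from the original solution): one axis of the table is the item index, the other is the remaining capacity, and each cell stores the best value for that pair.

def knapsack_bottom_up(profits, weights, capacity):
    # dp[i][c] = best profit achievable with items i..n-1 when capacity c is left
    n = len(profits)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]  # row n: no items left
    for i in range(n - 1, -1, -1):
        for c in range(capacity + 1):
            exclude = dp[i + 1][c]
            include = 0
            if weights[i] <= c:  # item i fits, try taking it
                include = profits[i] + dp[i + 1][c - weights[i]]
            dp[i][c] = max(include, exclude)
    return dp[0][capacity]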

In the House Robber problem, you're maximizing the value without having to track any count or weight of the houses robbed. You only have to avoid picking consecutive houses, which can be handled with the index alone, with no additional property needed. So the state requires only the index, with the house value being what you store, and it can be done with a 1-D memo, as sketched below.
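
For contrast, a minimal House Robber memoization sketch (again my own illustration, not code from the question): the subproblem is identified by the index alone, so a 1-D memo is enough.

def rob(house_values):
    n = len(house_values)
    dp = [-1] * n  # dp[i] caches the best loot from houses i..n-1

    def recur(i):
        if i >= n:
            return 0
        if dp[i] == -1:
            # either rob house i and skip the next one, or skip house i
            dp[i] = max(house_values[i] + recur(i + 2), recur(i + 1))
        return dp[i]

    return recur(0)

# e.g. rob([2, 7, 9, 3, 1]) == 12  (rob the houses worth 2, 9 and 1)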

A small note on why your 1D approach is incorrect:

Suppose your dp table holds the value dp[5] = 10, which was calculated for a remaining weight of x. Later the recursion reaches i = 5 again, but with a different remaining weight. dp[5] = 10 will be returned anyway, which is incorrect, because a different remaining weight can lead to a different best value.
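
If you want to keep the shape of your original function, one possible fix (just a sketch, with self dropped so it runs standalone) is to key the memo on both the index and the remaining weight, for example with a dict:

def knapSack(sackw, weights, values, n):
    dp = {}  # memo keyed on (index, weight left): the pair identifies the subproblem

    def recur(i, weightleft):
        if weightleft <= 0 or i >= n:
            return 0
        if (i, weightleft) not in dp:
            skip = recur(i + 1, weightleft)
            take = 0
            if weights[i] <= weightleft:
                take = values[i] + recur(i + 1, weightleft - weights[i])
            dp[(i, weightleft)] = max(take, skip)
        return dp[(i, weightleft)]

    return recur(0, sackw)

This stores the same information as the 2-D table; a dict simply allocates entries lazily instead of up front.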
