How are dynamic programming algorithms implemented in idiomatic Haskell?

Published 2024-10-17 00:18:07

Haskell and other functional programming languages are built around the premise of not maintaining state. I'm still new to how functional programming works and concepts in it, so I was wondering if it is possible to implement DP algorithms in an FP way.

What functional programming constructs can be used to do that?

7 Answers

倾城泪 2024-10-24 00:18:07

The common way to do this is via lazy memoization. In some sense, the recursive fibonacci function can be considered dynamic programming, because it computes results out of overlapping subproblems. I realize this is a tired example, but here's a taste. It uses the data-memocombinators library for lazy memoization.

import qualified Data.MemoCombinators as Memo

fib = Memo.integral fib'
    where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n-1) + fib (n-2)

fib is the memoized version, and fib' just "brute forces" the problem, but computes its subproblems using the memoized fib. Other DP algorithms are written in this same style, using different memo structures, but the same idea of just computing the result in a straightforward functional way and memoizing.

Edit: I finally gave in and decided to provide a memoizable typeclass. That means that memoizing is easier now:

import Data.MemoCombinators.Class (memoize)

fib = memoize fib'
    where
    fib' :: Integer -> Integer  -- but type sig now required 
    ...

Instead of needing to follow the type, you can just memoize anything. You can still use the old way if you like it.
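For what it's worth, the same lazy-memoization idea can be sketched with no dependency at all, using an infinite list as the memo table. This list-backed variant is my own illustration, not part of data-memocombinators:

```haskell
-- fib is the memoized version: it indexes into a lazily built table.
-- fib' is the brute-force recursion, but its subproblems go through fib,
-- so each result is computed at most once and then shared.
fib :: Int -> Integer
fib = (memo !!)
  where
    memo = map fib' [0..]   -- one thunk per index; forced results are shared
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
```

Indexing a list with !! is linear, which is why data-memocombinators builds trie-shaped memo tables instead; the shape of the solution is identical, though.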

森林散布 2024-10-24 00:18:07

Rabhi and Lapalme's Algorithms: A Functional Programming Approach has a nice chapter on this which illustrates some FP concepts being put to use, namely higher-order functions and lazy evaluation. I assume it's OK for me to reproduce a simplified version of their higher-order function.

It's simplified in that it only works on functions that take Int as input and produce Int as output. Because we're using Int in two different ways, I'll make synonyms for them, "Key" and "Value". But don't forget that because these are synonyms, it's perfectly possible to use Keys as Values and vice versa. They're only used for readability.

type Key = Int
type Value = Int

dynamic :: (Table Value Key -> Key -> Value) -> Key -> Table Value Key
dynamic compute bnd = t
 where t = newTable (map (\coord -> (coord, compute t coord)) [0..bnd])

Let's dissect this function a little.

First, what does this function do? From the type signature we can see that it somehow manipulates tables. Indeed the first argument "compute" is a function (hence dynamic is a "higher order" function) which produces some sort of value from a table, and the second argument is just some kind of upper bound, telling us where to stop. And as output, the "dynamic" function gives us some kind of Table. If we want to get the answer to some DP-friendly problem, we run "dynamic" and then look up the answer from our Table.

To use this function to compute fibonacci, we would run it a little like this:

fib n = findTable (dynamic helper n) n
 where
  helper t i =
    if i <= 1
       then i
       else findTable t (i-1) + findTable t (i-2)

Don't worry too much about understanding this fib function for now. It'll become a bit clearer as we explore "dynamic".

Second, what sort of prerequisites do we need to know about to understand this function? I'll assume you're more or less familiar with the syntax, the [0..x] to indicate a list from 0 to x, the -> in type signatures like Int -> Table -> ... versus the -> in anonymous functions like \coord -> ... If you're not comfortable with these, they might get in the way.

Another prerequisite to tackle is this lookup Table. We don't want to worry about how it works, but let's assume that we can create them from lists of key-value pairs and also look up entries in them:

newTable :: [(k,v)] -> Table v k
findTable :: Table v k -> k -> v

Three things to note here:

  • For simplicity, we're not using the equivalent from the Haskell standard library
  • findTable will crash if you ask it to look up a non-existent value from the table. We can use a fancier version to avoid this if needed, but that's a subject for a different post
  • Strangely enough, I didn't mention any sort of "add a value to the table" function, even though the book and standard Haskell libraries provide one. Why not?

Finally, how does this function actually work? What's going on here? We can zoom in a bit on the meat of the function,

t = newTable (map (\coord -> (coord, compute t coord)) [0..bnd])

and methodically tear it apart. Going from the outside in, we've got t = newTable (...), which seems to tell us that we're building a table from some sort of list. Boring. What about the list?

map (\coord -> (coord, compute t coord)) [0..bnd]

Here we've got the higher order map function walking down a list from 0 to bnd and producing a new list as a result. To compute the new list, it's using a function \coord -> (coord, compute t coord).
Keep in mind the context: we're trying to build a table from key value-pairs, so if you study the tuple, the first part coord must be the key and the second part compute t coord must be the value. That second part is where things get exciting. Let's zoom in a little further

compute t coord

We're building up a table from key-value pairs and the value we're plugging into these tables comes from running "compute t coord". Something I didn't mention earlier is that compute takes a table and a key as input and tells us what value we ought to plug into the table, in other words, what value we should associate with that key. The idea then, to bring this back to dynamic programming, is that the compute function uses previous values from the table to compute that new value we ought to plug in.

And that's all! To do dynamic programming in Haskell we can build up some kind of table by successively plugging values into cells using a function that looks up prior values from the table. Easy, right?... or is it?
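For the curious, here is one way to fill in the missing pieces so the walkthrough actually runs. The Data.Map-backed Table is my own stand-in, not the book's implementation; any table that is lazy in its values will tie the knot the same way:

```haskell
import qualified Data.Map as Map

-- A minimal Table so the snippets above compile. Data.Map's fromList
-- is strict in the keys but lazy in the values, which is exactly what
-- the self-referential definition of t in dynamic needs.
newtype Table v k = Table (Map.Map k v)

newTable :: Ord k => [(k, v)] -> Table v k
newTable = Table . Map.fromList

findTable :: Ord k => Table v k -> k -> v
findTable (Table m) k = m Map.! k

type Key = Int
type Value = Int

dynamic :: (Table Value Key -> Key -> Value) -> Key -> Table Value Key
dynamic compute bnd = t
 where t = newTable (map (\coord -> (coord, compute t coord)) [0..bnd])

-- fib takes the index as an explicit parameter here.
fib :: Key -> Value
fib n = findTable (dynamic helper n) n
 where
  helper t i =
    if i <= 1
       then i
       else findTable t (i-1) + findTable t (i-2)
```

The keys [0..bnd] never depend on t, so building the map terminates; only the values are knot-tied thunks, forced on demand by findTable.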

Perhaps your experience will be similar to mine. So I want to share my current progress in grappling with this function. When I first read it, it seemed to make a kind of intuitive sense and I didn't think much more of it. Then I read it closer and did a sort of double-take: wait, what?! How can this possibly work? Take a second look at this snippet of code here.

compute t coord

To compute the value at a given cell and thus fill the table, we pass in t, the very table we're trying to create in the first place. If functional programming is about immutability as you point out, how can this business of using values we haven't computed yet possibly work? If you have a little bit of FP under your belt, you might be asking yourself as I did, "is that an error?", shouldn't this be a "fold" instead of a "map"?

The key here is lazy evaluation. The little bit of magic that makes it possible to create an immutable value from bits of itself all comes down to laziness. Being a sort of long-term-yellow-belt Haskeller, I still find the notion of laziness a bit baffling. So I'll have to let somebody else take over here.

In the meantime, I simply tell myself that this is OK. I content myself with visualising the Table as a sort of dot with lots of arrows sticking out of it. Taking fib as an example:

o
|
|--0--> 1
|
|--1--> 1
|
|--2--> 2
|
|--3--> 2
.
.
.

The bits of the table we haven't seen yet are undiscovered territory. When we first walk down the list, it's all undiscovered:

o
.
.
.

When we want to compute the first value, we don't need to know anything more about the table because i <= 1.

  helper t i =
    if i <= 1
       then i
       else findTable t (i-1) + findTable t (i-2)


o
|
|--0--> 1
.
.
.

When we want to compute successive values, we're always only looking back into already discovered parts of the table (dynamic programming, hey-hey!). The key thing to remember is that we're working with 100% immutable values here; no fancy tricks besides laziness. "t" really means the table, not "the table in its current state at iteration 42". It's just that we only discover the bits of the table that tell us what value corresponds to 42 when we actually ask for it.

Hopefully, with others on StackOverflow, you'll go further than me and not be left mumbling vaguely, "uhm yeah, laziness something or another." It's really not a big deal :-)

旧夏天 2024-10-24 00:18:07

If you want to use DP with 2 or 3 parameters (for example, when processing strings), you can use an immutable array:

import Data.Array.IArray

answer :: String -> Int
answer s = table ! (1, l)
  where
    l = length s

    -- signatures are needed, because GHC doesn't know what kind of Array we need
    -- the string is stored in an Array because we need quick access to individual chars
    a :: Array Int Char
    a = listArray (1, l) s

    table :: Array (Int, Int) Int
    table = listArray ((1, 1), (l, l)) [f i j | i <- [1..l], j <- [1..l]]

    f i j |    i    >     j    = 0
          |    i    ==    j    = 1
          | (a ! i) == (a ! j) = 2 + table ! (i+1, j-1)
          | otherwise          = maximum [table ! (i+1, j), table ! (i, j-1)]

This code solves the following task: given a string S, find a maximum-length subsequence of S that is a palindrome (the subsequence doesn't need to be contiguous).

Basically, 'f' is the recursive function, and the array 'table' is a matrix of all its possible values. Because Haskell is lazy, only the values of 'f' needed for the answer are computed. In other words, this is recursion with memoization. So use Data.MemoCombinators, which is just the same, but already written by somebody else :)
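A quick sanity check of the recurrence (this is the answer's code reproduced verbatim as a self-contained block; the test strings are mine): for "bbbab" the longest palindromic subsequence is "bbbb", length 4.

```haskell
import Data.Array.IArray

answer :: String -> Int
answer s = table ! (1, l)
  where
    l = length s

    a :: Array Int Char
    a = listArray (1, l) s

    table :: Array (Int, Int) Int
    table = listArray ((1, 1), (l, l)) [f i j | i <- [1..l], j <- [1..l]]

    f i j |    i    >     j    = 0
          |    i    ==    j    = 1
          | (a ! i) == (a ! j) = 2 + table ! (i+1, j-1)
          | otherwise          = maximum [table ! (i+1, j), table ! (i, j-1)]
```

Note that answer expects a non-empty string; the (1, l) index is out of bounds for "".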

瑾兮 2024-10-24 00:18:07

Dynamic programming in Haskell can be expressed elegantly thanks to laziness; see the first example on this page.
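The example that page opens with is, if I remember right, along these lines (my reproduction of the classic lazy-list fibonacci, not a quote from the page):

```haskell
-- The classic knot-tied Fibonacci stream: each element is defined in
-- terms of two earlier elements of the very same list, and laziness
-- fills in the "table" on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```

Unlike the naive exponential recursion, each entry here is computed once, because every lookup shares the same list.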

那小子欠揍 2024-10-24 00:18:07

Dynamic programming algorithms usually exploit the idea of reducing a problem to simpler problem(s). Such problems can be formulated as some basic fact (say, the shortest path from a square cell to itself has length 0) plus a set of recurrence rules which show exactly how to reduce the problem "find the shortest path from cell (i,j) to (0,0)" to the problems "find the shortest paths from cells (i-1,j) and (i,j-1) to (0,0); select the best". AFAIK this can be easily expressed in a functional-style program; no state involved.
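That formulation can be sketched directly; the cost grid and its Array representation below are my own illustration of the recurrence, not code from the answer:

```haskell
import Data.Array

-- Cheapest path from (0,0) to the bottom-right cell, moving only down
-- or right: the base fact plus the two recurrence rules described
-- above, with a lazily filled table doing the memoization.
shortestPath :: Array (Int, Int) Int -> Int
shortestPath cost = table ! snd (bounds cost)
  where
    table = listArray (bounds cost)
              [ go i j | (i, j) <- range (bounds cost) ]
    go 0 0 = cost ! (0, 0)
    go i 0 = cost ! (i, 0) + table ! (i - 1, 0)
    go 0 j = cost ! (0, j) + table ! (0, j - 1)
    go i j = cost ! (i, j) + min (table ! (i - 1, j)) (table ! (i, j - 1))
```

No cell of table is ever mutated; each is a pure function of earlier cells, discovered by lazy evaluation in whatever order demand dictates.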

夕嗳→ 2024-10-24 00:18:07

Going over the answers, I felt a bit unsure whether we're talking about recursion + caching or simply dynamic programming (DP).

Because if it's just DP, the following code from https://jelv.is/blog/Lazy-Dynamic-Programming/ does exactly that:

-- imports added so the snippet compiles as-is
import Data.Array ((!))
import qualified Data.Array as Array

basic a b = d m n
  where (m, n) = (length a, length b)
        d i 0 = i
        d 0 j = j
        d i j
          | a !! (i - 1) ==  b !! (j - 1) = ds ! (i - 1, j - 1)
          | otherwise = minimum [ ds ! (i - 1, j)     + 1
                                , ds ! (i, j - 1)     + 1
                                , ds ! (i - 1, j - 1) + 1
                                ]

        ds = Array.listArray bounds
               [d i j | (i, j) <- Array.range bounds]
        bounds = ((0, 0), (m, n))

And this DP version isn't too different from other languages; when I tried it in JavaScript, it came out a bit more verbose, but written in a similar way.

function levenshtein(str1, str2) {
    const m = str1.length + 1
    const n = str2.length + 1
    const mat = new Array(m).fill(0).map(() => 
        new Array(n).fill(0)
    )
    
    for (let i = 0; i < m; i++) {
        mat[i][0] = i
    }
    for (let j = 0; j < n; j++) {
        mat[0][j] = j
    }
    
    for (let i = 1; i < m; i++) {
        const ic = str1[i-1]
        for (let j = 1; j < n; j++) {
            const jc = str2[j-1]
            if (ic == jc) {
                mat[i][j] = mat[i-1][j-1]
            } else {
                mat[i][j] = Math.min(
                    mat[i-1][j],
                    mat[i][j-1],
                    mat[i-1][j-1]
                ) + 1
            }
        }
    }

    return mat[m-1][n-1]
}

So I wonder: is the question really about recursion + caching?

预谋 2024-10-24 00:18:07

How about this solution?

 fibs = [0,1] ++ [n | i <- [2..], let n = fibs !! (i-1) + fibs !! (i-2)]

Is this efficient? It uses the concepts of lazy evaluation and infinite lists.
