Understanding the bind function in Haskell

Posted 2025-01-01 15:15:47 · 327 characters · 3 views · 0 comments

I am familiar with monads in category theory (they are a very easy concept, in fact), yet the >>= function in Haskell completely puzzles me. Ok, so applying bind to a value of M a and a function a -> M u is the same as first applying the monad to this function, then evaluating it at the specified value and multiplying the result: a >>= f is the same as join $ (fmap f) $ a. But how is this a natural description of computation? Is there some useful way of looking at it that would help me understand it?

Is there some nice article somewhere that is not geared towards someone fresh from the C++ jungle?


Comments (4)

摘星┃星的人 2025-01-08 15:15:48

Perhaps =<< is easier to understand from a computation point of view (it's just flip (>>=)). It has typing (=<<) :: (Monad m) => (a -> m b) -> m a -> m b, and corresponds to the type of function application, cf ($) :: (a -> b) -> a -> b. So >>= is just flipped function application on the monadic level.

Also, (>>=) is used in desugaring do notation, and do syntactically corresponds very much to imperative code (in a suitable monad).
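
A small sketch of that correspondence using Maybe, where monadic application short-circuits on failure (safeRecip is a name invented here for illustration):

```haskell
-- Monadic function application (=<<) mirrors plain application ($).
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

plain :: Double
plain = (1 /) $ 4                -- 0.25

monadic :: Maybe Double
monadic = safeRecip =<< Just 4   -- Just 0.25

failing :: Maybe Double
failing = safeRecip =<< Nothing  -- Nothing: application short-circuits
```

The shapes line up exactly: `($)` applies a plain function to a plain value, while `(=<<)` applies a monadic function to a monadic value.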

醉梦枕江山 2025-01-08 15:15:48

Here's a rough idea of how it works out as a model of computation: A type constructor M with an instance of Monad represents a parametric data structure, and the non-parametric parts of that structure can contain other information. The return and join correspond to some sort of monoid for those parts of the structure. A function a -> M b introduces information in that structure, based on an input of type a. So by lifting a function a -> M b to M a -> M b we're using the parametric information of M to create some non-parametric information, then combining that with the information already present in a value of type M a.

The asymmetric nature of the type a -> M b gives an inherent direction to the flow of non-parametric information, while the associativity requirement means that the overall order is the only thing that matters.

The end result is augmenting functions with some sort of context that has a built-in notion of cause-and-effect.
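
A minimal hand-rolled sketch of this picture, taking a Writer-like log as the "non-parametric information" (the Logged type and its helpers are invented here for illustration; the real thing lives in Control.Monad.Writer):

```haskell
-- The non-parametric part of the structure is a log of type [String].
data Logged a = Logged [String] a deriving (Eq, Show)

-- return introduces the monoid's identity (the empty log) ...
unit :: a -> Logged a
unit = Logged []

-- ... and join combines the two logs with the monoid operation (++).
flatten :: Logged (Logged a) -> Logged a
flatten (Logged outer (Logged inner x)) = Logged (outer ++ inner) x

-- Bind, written as fmap-then-join, threads the log in a fixed direction.
bind :: Logged a -> (a -> Logged b) -> Logged b
bind (Logged w x) f = flatten (Logged w (f x))

-- A function a -> Logged b introduces new non-parametric information.
step :: Int -> Logged Int
step n = Logged ["doubled " ++ show n] (n * 2)

example :: Logged Int
example = unit 3 `bind` step `bind` step
-- Logged ["doubled 3","doubled 6"] 12
```

The log entries come out in the order the steps were sequenced, which is the "inherent direction" and "built-in notion of cause-and-effect" described above.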

捂风挽笑 2025-01-08 15:15:47

Consider the monadic function composition operator <=<. This is analogous to . except it works on monadic functions. It can be defined simply in terms of >>=, so learning about one will educate us about the other.

(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
(f <=< g) x =  g x >>= f

(.) :: (b -> c) -> (a -> b) -> a -> c
(f . g) x = g x |> f
  where z |> h = h z

In the case of ., g is "performed" first, and then f is performed on the output of g. In the case of <=<, g and its effects are "performed" first, and then f and its effects are performed. It's a bit of a misnomer to say that one happens "before" the other, actually, since not all monads work that way.

Perhaps it is more accurate to say that f can take advantage of additional contextual information provided by g. But that's not entirely correct, since g could potentially take away contextual information. If you want to 100% correctly describe monads, you really have to walk on eggshells.

But in almost all nontrivial cases, f <=< g means that the effects (as well as the result) of the monadic function g will subsequently influence the behavior of the monadic function f.
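
For instance, a sketch with Maybe, where the "effect" of g is possible failure and it plainly influences whether f runs at all (the lookup table is made up for illustration):

```haskell
import Control.Monad ((<=<))

-- Two lookups chained with <=<: if the first fails, the second never runs.
managerOf :: String -> Maybe String
managerOf = flip lookup [("ann", "bob"), ("bob", "carol")]

grandManagerOf :: String -> Maybe String
grandManagerOf = managerOf <=< managerOf

-- grandManagerOf "ann" == Just "carol"
-- grandManagerOf "bob" == Nothing   (carol has no manager)
```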


To address the question about v >>= f = join (fmap f v):

Consider f :: a -> m b and v :: m a. What does it mean to fmap f v? Well fmap :: (c -> d) -> m c -> m d, and in this case c = a and d = m b, so fmap f :: m a -> m (m b). Now, of course, we can apply this function to v :: m a, resulting in m (m b). But what exactly does that result type m (m b) mean?

The inner m represents the context produced by f. The outer m represents the context originating from v (n.b. fmap should not disturb this original context).

And then you join that m (m b), smashing those two contexts together into m b. This is the heart of the definition of Monad: you must provide a way to smash contexts together. You can inspect the implementation details of various Monad instances to try and understand how they "smash" contexts together. The takeaway here, though, is that the "inner context" is unobservable until you merge it with the "outer context". If you use v >>= f, then there is no actual notion of the function f receiving a pure value a and producing a simple monadic result m b. Instead, we understand that f acts inside the original context of v.
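
The two sides of that equation can be checked concretely in the list monad, where join is just concatenation:

```haskell
import Control.Monad (join)

f :: Int -> [Int]
f x = [x, x * 10]

v :: [Int]
v = [1, 2]

viaBind, viaJoin :: [Int]
viaBind = v >>= f          -- [1,10,2,20]
viaJoin = join (fmap f v)  -- fmap f v = [[1,10],[2,20]]; join flattens it
```

Here the "inner" lists produced by f are invisible in the final result precisely because join has merged them into the outer one.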

并安 2025-01-08 15:15:47

Hmm. I think a good way to think of it is that >>= lets you compose computations; the computations themselves are in the form a -> m b. So m b just represents the result of a computation.

So a computation just takes some value and produces some result. A good example here is the list type: a -> [b] represents a non-deterministic computation. It takes one input but can produce multiple results. By itself, the a -> [b] as a computation makes sense. But how would you combine these? The natural answer is that you would perform each consecutive "computation" on all of the previous results. And this is exactly what >>= does for lists.

One thing that really helped me see the practical value of this was thinking about DFAs and NFAs. You can imagine trivially writing a DFA in Haskell something like this:

data State = S1 | S2 | S3 | S4 | Q
data Input = A | B
transition :: State -> Input -> State
transition S1 A = S2
transition S1 B = S3
-- and so on...

Then we could just fold over input:

 foldl transition S1 [A, A, B, B]

Now, how would we take this code and generalize it to NFAs? Well, the transition "function" for an NFA can be thought of as a non-deterministic computation. So we define something like:

transition S1 A = [S1, S2]
transition S1 B = []

But now we'd have to do some weird gymnastics to use foldl! Happily, we can just foldM instead. So here the "computation" modeled by the monad is the non-deterministic transition function.
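
A runnable version of this sketch, with the state set and transition table filled in for illustration (the specific transitions are invented here, not taken from the answer above):

```haskell
import Control.Monad (foldM)

data State = S1 | S2 | S3 deriving (Eq, Show)
data Input = A | B deriving Show

-- Non-deterministic transition function: each step may yield
-- zero, one, or several successor states.
transition :: State -> Input -> [State]
transition S1 A = [S1, S2]
transition S1 B = []
transition S2 A = [S3]
transition S2 B = [S1]
transition S3 _ = [S3]

-- foldM threads the non-determinism for us: every state reachable
-- after one input is fed into the transition for the next input.
runNFA :: [Input] -> [State]
runNFA = foldM transition S1

-- runNFA [A, A] == [S1,S2,S3]
-- runNFA [A, B] == [S1]        (the S1 branch dies on B; S2 goes to S1)
```

Note that runNFA has exactly the shape of the DFA fold above; only the monad in which the fold runs has changed.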
