How to understand the universal quantification in Yoneda's natural isomorphism?
While learning about the Yoneda lemma, I came across the following encoding of the underlying natural isomorphism in Haskell:
forward :: Functor f => (forall r . (a -> r) -> f r) -> f a
forward f = f id
backward :: Functor f => f a -> (forall r. (a -> r) -> f r)
backward x f = fmap f x
I tried to simplify the implementation of backward to flip fmap, but failed, as the latter has type f a -> (a -> r) -> f r.
From here on I'm stuck pinpointing what precisely the differences between the two implementations are, all the more so as applying either function to a concrete functor yields the same result type:
ghci> :t backward (Just "")
backward (Just "") :: (String -> r) -> Maybe r
ghci> :t (flip fmap) (Just "")
(flip fmap) (Just "") :: (String -> r) -> Maybe r
Questions:
- What exactly is the difference between the two?
- Is there something I can do with one that can't be done with the other?
- Why is the universal quantification needed at all for backward?
1 Answer
In Haskell we write lambdas for the values we pass:
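For instance, with the identity function as the assumed running example:

-- a term-level lambda: x binds a value
identity :: a -> a
identity = \x -> x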
but when we compile to the lower-level Core language, which is close to System F, we also see type lambdas. The code becomes more like the following.
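In Core-like pseudo-syntax (a sketch, not GHC's literal output):

-- the extra \ @a is a type lambda: it binds the type a, not a value
identity = \ @a -> \ (x :: a) -> x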
which means: the caller should choose and pass a type @a, then a value x of that type, and finally receive that same value x back.

Now, your issue arises from the two (isomorphic) types:
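Schematically (T and U are placeholder type names assumed here):

forall a . T -> U a      -- the quantifier at the very top
T -> (forall a . U a)    -- the quantifier to the right of the arrow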
Here, T does not depend on a. The two types are indeed isomorphic, and at the lower level it's just a matter of "flipping" the term-lambda with the type-lambda, as sketched below.

Since in Haskell we do not usually write or see the type-lambdas, it is hard to distinguish between these. Further, when the compiler performs type inference it will (as far as I can understand) infer the first one, where the type-lambdas are all at the beginning.
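In the same Core-style pseudo-syntax, with the body elided as e, the flip looks like this:

\ @a -> \ (t :: T) -> e    -- inhabits  forall a . T -> U a
\ (t :: T) -> \ @a -> e    -- inhabits  T -> (forall a . U a)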
What exactly is the difference between the two?

Not much. They have isomorphic types, after all. One has the forall a at the topmost level, while the other has it in another position.

Once you apply them to arguments as you did, GHC will automatically choose the right place to add the inferred @a argument, so you won't easily notice the difference. The two calls are of the forms sketched below. On top of that, GHC can re-generalize the type, hence we obtain the same reported type for both.
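Schematically, eliding the Functor dictionary and the @Maybe/@String type arguments that GHC also inserts (a rough sketch of the elaboration, not GHC's literal output):

backward    (Just "") @r     -- the type argument goes after the value argument
(flip fmap) @r (Just "")     -- the type arguments all go up front

-- after re-generalization both calls are reported at the same type,
-- with r implicitly quantified at the top:
--   backward  (Just "") :: (String -> r) -> Maybe r
--   flip fmap (Just "") :: (String -> r) -> Maybe r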
Is there something I can do with one that can't be done with the other?

Not really.
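A minimal sketch backing this up (flatten and nest are made-up names; RankNTypes is needed for the nested forall): each shape is definable from the other, so nothing is gained or lost either way.

{-# LANGUAGE RankNTypes #-}

-- the definition from the question
backward :: Functor f => f a -> (forall r . (a -> r) -> f r)
backward x f = fmap f x

-- flip fmap's shape, recovered from backward by instantiating the inner forall
flatten :: Functor f => f a -> (a -> r) -> f r
flatten x f = backward x f

-- backward's shape, recovered from flip fmap; the lambda is checked at every r
nest :: Functor f => f a -> (forall r . (a -> r) -> f r)
nest x = \f -> flip fmap x f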
Why is the universal quantification needed at all for backward?

Usually, when defining an isomorphism between A and B we want a pair of functions forw :: A -> B and back :: B -> A. If either type has a forall (say, A = forall r . V r), then we get a forall in that position: back :: B -> (forall r . V r). Note that the type for back in this case is also isomorphic to forall r . B -> V r, which is what we would get if we omitted the quantifier (and let Haskell re-add it implicitly at the top level).
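Concretely, for the question's backward (backwardTop is a made-up name for this sketch): dropping the inner quantifier still compiles, and Haskell re-generalizes the signature at the top level, giving exactly the type of flip fmap.

-- Haskell reads this as  forall f a r . Functor f => f a -> (a -> r) -> f r
backwardTop :: Functor f => f a -> (a -> r) -> f r
backwardTop x f = fmap f x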
(By contrast, the quantifier in the type of forw :: (forall r . V r) -> B cannot be moved to the top level.)
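Tying that back to forward from the question: its argument has to stay polymorphic because the body, not the caller, picks r (it picks r := a by passing id). A sketch:

{-# LANGUAGE RankNTypes #-}

-- as in the question: the forall stays inside the argument's type
forward :: Functor f => (forall r . (a -> r) -> f r) -> f a
forward f = f id   -- the body instantiates r := a

-- If the quantifier were floated to the top,
--   forwardTop :: forall r f a . Functor f => ((a -> r) -> f r) -> f a
-- then r would be fixed by the caller, and `f id` would be rejected:
-- inside the body, id :: a -> a cannot be used at the unknown type a -> r.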