Von Neumann machines and lambdas
Brian's premise in his answer to the question "Are side effects a good thing?" is interesting:
computers are von-Neumann machines that are designed to work well with effects (rather than being designed to work well with lambdas)
I am confused by the juxtaposition of the two approaches; I cannot see them as black and white. What is the proof value of:
computers are von-Neumann machines that are designed to work well with effects [1]
The last part confuses me:
rather than being designed to work well with lambdas [2]
Are lambdas used as a symbol for functional programming? Or are they a euphemism for functional programming? What is the real message?
In what sense are parts [1] and [2] of the premise right? What are the hidden premises in the reply? Can someone justify the original premise? How do von Neumann machines and lambdas really work?
2 Answers
Here is more depth on what I meant, though it will be interesting to see whether others agree or what they have to say.
Consider how computers work today. You have hardware that has integer and floating-point registers, and a vast array of random access memory, and instructions which are mostly of the form 'based on reading the value of this register/memory cell, go poke this new value into this register/cell'. (Updating memory cells has all kinds of perf implications when it comes to cache lines and coherency and memory models and whatnot.) Integers are 32 or 64 bits, and nearly all programming languages surface these data types that exactly match the hardware. Nearly every runtime works with a small call stack where stack-allocated objects are cheap, and a more expensive 'heap' where other objects can be created and destroyed when non-stack-based-lifetimes are needed.
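To make that concrete, here is a minimal sketch (OCaml, chosen only for consistency with the later examples; the names are mine) of the kind of code this model rewards: one mutable accumulator, in-place reads of a flat array, and nothing allocated per element.

```ocaml
(* Sum an array the "von Neumann" way: repeatedly read a cell and
   poke the new running total into an accumulator. *)
let sum_imperative (arr : int array) : int =
  let acc = ref 0 in                  (* the accumulator "register" *)
  for i = 0 to Array.length arr - 1 do
    acc := !acc + arr.(i)             (* read a cell, poke the new total *)
  done;
  !acc
```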
Now consider most modern functional programming languages. Immutability is the norm; you will rarely 'poke' memory with new values. (This means you create more new objects, which means you allocate more.) Lambdas and continuations are the norm; you more rarely have object lifetimes that correspond to the stack. (Indeed, some FP runtimes don't use a stack; in a CPS implementation the notions of stack and program counter are out of place.) Recursion is a looping construct, so you at least need 'tail' calls to avoid consuming the stack anyway. Practically everything needs to "heap" allocate, and of course you need a garbage collector. Algebraic data types provide tagged data; in theory these tags would only require an extra 2 or 3 bits of data, but to match the runtime they often need an extra word of memory or more. ... I am kind of meandering, but the things you do most often in an FP language tend to correspond exactly to the things that scale worst or are most expensive on the typical computer hardware architecture and basic language runtime.
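For contrast, the same sum written the way an FP language encourages (a sketch, not a benchmark): an immutable list whose cons cells are heap-allocated, tagged values, with a tail call standing in for the loop.

```ocaml
(* Sum an immutable list: every (::) cell was a heap allocation
   carrying a tag word, and the "loop" is a tail-recursive function
   so the recursion runs in constant stack space. *)
let sum_functional (xs : int list) : int =
  let rec go acc = function
    | [] -> acc
    | x :: rest -> go (acc + x) rest  (* tail call: no stack growth *)
  in
  go 0 xs
```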
It doesn't have to be that way. One can imagine a world where the runtime eschews a stack, and makes heap/allocation fast (and not a bottleneck for multi-threaded apps). One can imagine a world where the interoperable integer types have 29 or 60 bits, and the runtime/hardware use the extra leftover bits of the word for e.g. the GC, or algebraic type tags, or whatnot. (I think some FP implementations/runtimes do some of these tricks.) Etc. etc... The point is, if you take a modern functional language as a given, and then design runtime/hardware around it, it would look very different from the typical hardware/runtimes of today.
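For what it's worth, OCaml's runtime is a real example of the integer trick: it steals the low bit of every word, which is why its native int is 63 bits on 64-bit hardware. A sketch of that encoding (the function names are mine):

```ocaml
(* Pointer tagging, as in OCaml's runtime: an integer n is stored as
   2n + 1, so its low bit is always 1, while heap pointers are
   word-aligned and keep a low bit of 0. The GC can then tell them
   apart without any extra header. *)
let encode n = (n lsl 1) lor 1    (* immediate integer: low bit set  *)
let decode v = v asr 1            (* arithmetic shift recovers n     *)
let is_immediate v = v land 1 = 1 (* low bit clear => heap pointer   *)
```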
(I don't think I communicated that terrifically, and I am imprecise about many details I don't know precisely, but hopefully you grok the gist of my thesis here.)
I'm not entirely sure what you're asking, but as I read it, you're asking what he means by lambdas?
He is referring to lambda calculus, which forms much of the theoretical basis for functional programming. It is an abstract notation for (among other things) describing and reasoning about higher-order functions.
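To give that some concrete flavor, here is the classic lambda-calculus term λf.λx.f (f x) ("apply f twice") written directly in OCaml, whose core language is essentially a typed lambda calculus (my example, not Brian's):

```ocaml
(* The term λf.λx.f (f x), a higher-order function: it takes a
   function f and returns a new function that applies f twice. *)
let twice = fun f -> fun x -> f (f x)

let () = assert (twice (fun n -> n + 1) 0 = 2)  (* twice succ 0 = 2 *)
```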
A Von Neumann machine is basically what we have. Programs are executed by instructions manipulating and accessing a store (our RAM). That is, everything is implicitly done through side effects. Data is read from some area in RAM, processed a bit, and written back to some (other, perhaps) area in RAM.
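In miniature, each instruction is something like this (a sketch; the int array stands in for RAM and the indices are illustrative):

```ocaml
(* One step of the fetch-compute-store cycle: read a cell from the
   store, compute, and write the result back as a side effect. *)
let step (store : int array) ~src ~dst =
  let v = store.(src) in   (* read from "RAM"                    *)
  store.(dst) <- v + 1     (* side effect: write the result back *)
```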
Without side effects, the CPU would be limited to manipulating whatever garbage data happened to be in its registers when it was powered on.
Lambda calculus has no notion of side effects, so a machine based around this principle would not have the distinction between "what the CPU can access" (our registers, essentially), and "what can be accessed indirectly" (our RAM). Everything in such a machine would be based around functional principles, of functions taking one or more arguments, and returning a new value, never modifying existing ones. (And no, I'm not sure how that would work in hardware... :))
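A sketch of what that pure style looks like in practice (the point type is illustrative): an "update" never modifies the existing value, it allocates and returns a new one.

```ocaml
type point = { x : int; y : int }

(* A pure update: returns a fresh point; p itself is never modified. *)
let move_right (p : point) : point =
  { p with x = p.x + 1 }
```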
Does that answer your question?