What is a network analog of a recursive function?

Posted on 2024-11-16 02:51:31

This is an ambitious question from a Wolfram Science Conference: Is there such a thing as a network analog of a recursive function? Maybe a kind of iterative "map-reduce" pattern? If we add interaction to iteration, things become complicated: continuous iteration of a large number of interacting entities can produce very complex results. It would be nice to have a way of seeing the consequences of the myriad interactions that define a complex system. Can we find a counterpart of a recursive function in an iterative network of connected nodes that contains nested propagation loops?

One of the basic patterns of distributed computation is Map-Reduce: it can be found in Cellular Automata (CA) and Neural Networks (NN). Neurons in a NN collect information through their synapses (reduce) and send it to other neurons (map). Cells in a CA act similarly: they gather information from their neighbors (reduce), apply a transition rule (reduce), and offer the result to their neighbors again (map). Thus *if* there is a network analog of a recursive function, then Map-Reduce is certainly an important part of it. What kinds of iterative "map-reduce" patterns exist? Do certain kinds of "map-reduce" patterns result in certain kinds of streams, or even vortices or whirls? Can we formulate a calculus for map-reduce patterns?
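
As an illustration (my addition, not part of the original post), here is a minimal Python sketch of one such iterative map-reduce pattern: an elementary cellular automaton in which each cell reduces its three-cell neighborhood to a next state via the transition rule, and that state is mapped back out to its neighbors on the following iteration. The rule number and initial grid are assumptions chosen for the example.

```python
# A minimal sketch (not from the original post): one iteration of an
# elementary cellular automaton phrased as reduce + map.
# Assumption: Rule 110 on a periodic 1-D grid of binary cells.

RULE = 110  # Wolfram rule number (chosen for illustration)

def transition(left: int, center: int, right: int) -> int:
    """Reduce a 3-cell neighborhood to the cell's next state."""
    index = (left << 2) | (center << 1) | right
    return (RULE >> index) & 1

def step(cells: list[int]) -> list[int]:
    """Map the reduce over every cell; each result is re-offered to its neighbors next step."""
    n = len(cells)
    return [transition(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

# Iterating the map-reduce: each generation feeds the next.
cells = [0] * 15 + [1] + [0] * 15
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Running it prints the familiar Rule 110 triangle patterns, a small instance of how repeated reduce-then-map steps can generate complex structure from simple local rules.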


Comments (1)

情话已封尘 2024-11-23 02:51:31

I'll take a stab at the question about recursion in neural networks, but I really don't see how map-reduce plays into this at all. I get that a neural network can perform distributed computation and then reduce it to a more local representation, but the term map-reduce is a very specific brand of this distributed/local piping, mainly associated with Google and Hadoop.

Anyway, the simple answer to your question is that there isn't a general method for recursion in neural networks; in fact, the closely related and simpler problem of implementing general-purpose role-value bindings in neural networks is still an open question.

The general reason why things like role-binding and recursion are so hard in artificial neural networks (ANNs) is that ANNs are highly interdependent by nature; indeed, that is where most of their computational power comes from. Function calls and variable bindings, by contrast, are sharply delineated operations: what they include is an all-or-nothing affair, and that discreteness is a valuable property in many cases. So implementing one inside the other without sacrificing any computational power is very tricky indeed.
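
For concreteness (my addition, not the answerer's), here is a toy NumPy sketch of one classic partial solution explored in this literature, tensor-product role-filler binding in the style of Smolensky: a role and a filler are bound with an outer product, bindings are superposed into one memory, and a filler is retrieved by contracting the memory with its role. The vectors are illustrative assumptions; note how exact retrieval depends on the roles being orthogonal, which is exactly the kind of discreteness that distributed representations tend to erode.

```python
# A toy sketch (assumption, not the answerer's code) of tensor-product
# role-filler binding, one classic partial answer to the binding problem.
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

# Toy distributed codes: roles are (here) orthonormal, fillers are random.
role_agent, role_patient = np.eye(8)[0], np.eye(8)[1]
filler_john, filler_mary = unit(rng.normal(size=8)), unit(rng.normal(size=8))

# Bind each filler to its role with an outer product, superpose the pairs.
memory = np.outer(role_agent, filler_john) + np.outer(role_patient, filler_mary)

# Unbind by contracting the memory with a role vector.
retrieved = role_agent @ memory
print(np.allclose(retrieved, filler_john))  # True: exact only for orthogonal roles

# With correlated (non-orthogonal) roles, retrieval picks up crosstalk from
# the other bindings, losing the all-or-nothing character of a symbolic
# variable binding — the tension described above.
```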

Here is a small sampling of papers that try their hand at partial solutions. Lucky for you, a great many people find this problem very interesting!

Visual Segmentation and the Dynamic Binding Problem: Improving the Robustness of an Artificial Neural Network Plankton Classifier (1993)

A Solution to the Binding Problem for Compositional Connectionism

A (Somewhat) New Solution to the Binding Problem
