What are practical proofs of the Turing completeness of neural networks? Which NNs can execute code/algorithms?
I'm interested in the computational power of neural nets. It is generally accepted that recurrent neural nets are Turing complete. Now I'm searching for some papers that prove this.
What I found so far:
Hava T. Siegelmann and Eduardo D. Sontag, "Turing Computability with Neural Nets", 1991
I think this is interesting only from a theoretical point of view, because it requires neuron activations of unbounded precision (the entire machine state is somehow encoded in the digits of a rational number; a toy version of this encoding is sketched below, after the list).
S. Franklin and M. Garzon, "Neural Computability"
This needs an unbounded number of neurons and doesn't seem very practical either.
(Note that another question of mine tries to point out this gap between such theoretical results and practice.)
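To make the precision issue concrete, here is a minimal sketch of the kind of encoding I mean. It is my own toy construction in the spirit of Siegelmann–Sontag, not the exact one from the paper: an unbounded binary stack stored in a single rational-valued activation. With exact rational arithmetic it works for any stack depth; that exactness is precisely what real hardware doesn't give you.

```python
from fractions import Fraction

# A binary stack stored in one rational "activation" in [0, 1):
# pushing bit b maps s -> (2*b + 1)/4 + s/4, so the top bit is
# readable by comparing against 3/4, and pop inverts the affine map.

def push(s, b):
    return Fraction(2 * b + 1, 4) + s / 4

def top(s):
    return 1 if s >= Fraction(3, 4) else 0

def pop(s):
    return 4 * s - (2 * top(s) + 1)

s = Fraction(0)              # empty stack
for bit in [1, 0, 1]:
    s = push(s, bit)

print(s)             # 55/64 -- the whole stack lives in one rational number
print(top(s))        # 1     -- the bit pushed last
print(top(pop(s)))   # 0     -- the bit below it
```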
I'm mostly searching for a neural net that can really execute some code, which I could also simulate and test in practice. Of course, in practice, it would have some kind of limited memory.
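For what it's worth, the stack primitives from the sketch above can themselves be written as ordinary neurons with the saturated-linear activation sigma(x) = min(1, max(0, x)), which is the activation Siegelmann and Sontag use. Here is a self-written float version (again my own toy, not from a paper); note how the 53-bit double mantissa caps the stack at roughly 26 bits, which is exactly the kind of limited memory I expect in practice.

```python
# The stack primitives as affine maps followed by a saturated-linear
# activation sigma(x) = min(1, max(0, x)). In float arithmetic the
# encoding degrades after ~26 pushes (53 mantissa bits / 2 bits per push).

def sigma(x):
    return min(1.0, max(0.0, x))

def push(s, b):    # neuron: s' = sigma(0.25*s + 0.5*b + 0.25)
    return sigma(0.25 * s + 0.5 * b + 0.25)

def top(s):        # neuron: t = sigma(4*s - 2), saturates to exactly 0 or 1
    return sigma(4.0 * s - 2.0)

def pop(s):        # neuron: s' = sigma(4*s - 2*top(s) - 1)
    return sigma(4.0 * s - 2.0 * top(s) - 1.0)

s = 0.0
for bit in [1, 0, 1, 1]:
    s = push(s, bit)

out = []                     # read the stack back out, top first
for _ in range(4):
    out.append(int(top(s)))
    s = pop(s)
print(out)                   # -> [1, 1, 0, 1], the pushed bits reversed
```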
Does anyone know something like this?
Comments (2)
Maybe this paper? Heikki Hyötyniemi, "Turing Machines are Recurrent Neural Networks" (STeP'96): http://lipas.uwasa.fi/stes/step96/step96/hyotyniemi1/
A bit off topic, but perhaps useful in your search (which sounds like a Masters/Ph.D. thesis). In my experience using learning algorithms for things like classification and segmentation, Bayesian learning is superior to all forms of neural nets, genetic algorithms, and other nifty-sounding algorithms because of its strong mathematical basis.
A foundation in mathematics, in my book, makes a technique superior to ad-hoc methods. For example, the result from a Bayesian network can be mathematically interpreted as a probability (even with a p-value if you like), while a neural net is often guesswork. Unfortunately, Bayesian statistics doesn't sound as sexy as "neural network" even though it's arguably more useful and well founded.
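(To illustrate what I mean by a mathematically interpretable result, here is a minimal example, all numbers invented for illustration, of how a Bayesian classifier's output is a posterior probability rather than an uncalibrated score:)

```python
# Made-up two-class example: P(spam | feature) via Bayes' rule.
p_spam = 0.4       # prior P(spam), invented for illustration
p_f_spam = 0.8     # likelihood P(feature | spam)
p_f_ham = 0.1      # likelihood P(feature | ham)

evidence = p_f_spam * p_spam + p_f_ham * (1 - p_spam)
posterior = p_f_spam * p_spam / evidence
print(posterior)   # ~0.842: a calibrated probability you can reason about
```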
I would love to see somebody shake this out formally in an academic setting.