Number of hidden layers in a neural network model
Would someone be able to explain to me or point me to some resources of why (or situations where) more than one hidden layer would be necessary or useful in a neural network?
Comments (3)
Basically, more layers allow more functions to be represented. The standard book for AI courses, "Artificial Intelligence, A Modern Approach" by Russell and Norvig, goes into some detail about why multiple layers matter in Chapter 20.
One important point is that with a sufficiently large single hidden layer, you can represent every continuous function, but you will need at least 2 layers to be able to represent every discontinuous function.
In practice, though, a single layer is enough at least 99% of the time.
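The "sufficiently large single hidden layer" point can be sketched numerically. This is an illustrative example, not from the book: it fits a continuous target (here `sin`, chosen arbitrarily) with one hidden layer of random `tanh` features plus a least-squares linear readout, and the width of 50 is just a convenient choice.

```python
import numpy as np

# Sketch: a single, sufficiently wide hidden layer can approximate a
# continuous function. Target sin(x), hidden width 50, and tanh features
# are all illustrative choices.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

hidden = 50
W = rng.normal(scale=2.0, size=(1, hidden))   # random input-to-hidden weights
b = rng.normal(scale=2.0, size=hidden)        # random hidden biases
H = np.tanh(x @ W + b)                        # hidden-layer activations

# Fit only the linear output layer by least squares.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

err = np.max(np.abs(H @ w_out - y))
print(f"max approximation error: {err:.4f}")
```

Increasing `hidden` drives the error down further, which is the practical face of the universal-approximation property for continuous functions.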
Each additional layer effectively raises the potential "complexity" of what the network can fit in an exponential fashion (as opposed to the merely multiplicative effect of adding more nodes to a single layer).
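One way to see this exponential effect, as a hedged sketch rather than a proof: a tiny "tent" map (expressible with two ReLU units) doubles the number of linear pieces in a function every time it is composed, so L stacked layers produce on the order of 2^L pieces. The grid size and depth below are arbitrary choices for illustration.

```python
import numpy as np

def tent(x):
    # tent(x) = 2x for x < 0.5, 2 - 2x otherwise; a 2-unit ReLU layer
    # can compute this piecewise-linear "fold".
    return np.minimum(2 * x, 2 - 2 * x)

x = np.linspace(0, 1, 10001)
y = x
depth = 4
for _ in range(depth):        # compose the same small layer `depth` times
    y = tent(y)

# Count linear pieces via sign changes of the discrete slope.
slopes = np.sign(np.diff(y))
pieces = 1 + np.count_nonzero(np.diff(slopes))
print(pieces)   # 2^4 = 16 linear pieces from only 4 composed layers
```

A single hidden layer needs roughly one unit per linear piece, so matching those 2^L pieces with one layer requires exponentially many nodes, while depth gets them with a constant number of units per layer.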