Connections in an artificial neural network
If I have an artificial neural network with any number of layers, and the last (output) node only accepts 3 inputs (1, x1, x2), what do I do if the hidden layer before it has 4 nodes, meaning it creates 4 connections to the last node? Do I just ignore one? Or do I have to force the last hidden layer to have 3 nodes?
I'm also having a hard time with the case where the last hidden layer has only 2 nodes, or just 1.
I know this isn't really a programming question; I'm not having a hard time programming this (so far), just dealing with these design problems.
So yeah, the idea is that the user can tell me the number of hidden layers and the number of nodes per layer.
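To make the setup concrete, here is a minimal sketch of what "user-specified layer sizes" implies for the weight storage, assuming every layer is fully connected to the one before it and a constant bias input is appended to each layer (the names, sizes, and initialization are my own illustration, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# User-specified architecture: 2 inputs, hidden layers of 4 and 2 nodes, 1 output.
layer_sizes = [2, 4, 2, 1]

# Each layer's weight matrix has shape (n_out, n_in + 1): one column per
# node of the previous layer, plus one column for the constant bias input.
# Nothing needs to be "ignored" or "forced" -- the matrix adapts to the
# previous layer's width.
weights = [rng.standard_normal((n_out, n_in + 1))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

for w in weights:
    print(w.shape)  # (4, 3), (2, 5), (1, 3)
```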
Comments (1)
The input which is constant is called the bias. Assuming you use a typical function for the neuron (a sigmoid of the weighted sum of its inputs), if a neuron is not connected to the bias input, then whenever all of its inputs are 0 its weighted sum is 0 regardless of the weights, so its output is pinned to sigmoid(0) = 0.5. Thus you lose the universal function approximation capability of a feed-forward neural network. To avoid that, there are two ways:

1. Give every neuron an explicit bias term of its own, so it computes sigmoid(w1 * x1 + ... + wn * xn + b), where b is just another trainable weight.
2. Append a constant 1 to every layer's input, so the input is (x1, x2, 1) and the neurons simply compute sigmoid(w1 * x1 + w2 * x2 + ... + wn * xn). Each layer is fully connected to the previous one, including the constant node.
I tend to prefer the 2nd solution, as it makes the code and the data more streamlined: there is no special case, and the bias is seen as just another input, a bit like ground voltage in an electric circuit.