To which weights is the correction added in a perceptron?
I'm experimenting with single-layer perceptrons, and I think I understand (mostly) everything. However, what I don't understand is to which weights the correction (learning rate × error) should be added. In the examples I've seen, it seems arbitrary.
2 Answers
Well, it looks like you half answered your own question: it's true that you correct all of the non-zero weights, but you don't correct them all by the same amount.

Instead, you correct the weights in proportion to their incoming activation. So if unit X activated really strongly and unit Y activated only a little bit, and there was a large error, then the weight going from unit X to the output would be corrected far more than unit Y's weight to the output.

The technical term for this process is the delta rule, and its details can be found in its Wikipedia article. Additionally, if you ever want to upgrade to using multilayer perceptrons (single-layer perceptrons are very limited in computational power; see a discussion of Minsky and Papert's argument against using them here), an analogous learning algorithm called backpropagation is discussed here.
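To make this concrete, here is a minimal sketch of the update, assuming a single-layer perceptron with a step activation. The function and variable names (`predict`, `update`, `learning_rate`) are illustrative, not from the original post:

```python
def predict(weights, bias, inputs):
    """Step activation: fire (1) if the weighted sum exceeds zero, else 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def update(weights, bias, inputs, target, learning_rate=0.1):
    """One delta-rule step: each weight is corrected in proportion
    to its incoming activation (its input value)."""
    error = target - predict(weights, bias, inputs)
    new_weights = [w + learning_rate * error * x
                   for w, x in zip(weights, inputs)]
    new_bias = bias + learning_rate * error  # the bias "input" is always 1
    return new_weights, new_bias
```

Note that the per-weight correction `learning_rate * error * x` is exactly the "in proportion to incoming activation" idea: a strongly activated input gets a large correction, a weakly activated one gets a small correction.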
Answered my own question.
According to http://intsys.mgt.qub.ac.uk/notes/perceptr.html, "add this correction to any weight for which there was an input". In other words, do not add the correction to weights whose input neurons had a value of 0.
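This rule falls out of the update formula automatically, since the correction for weight *i* is learning rate × error × xᵢ, which is zero whenever the input xᵢ is zero. A small sketch (variable names are illustrative):

```python
learning_rate = 0.1
error = 1            # target - output, for this example
inputs = [1, 0, 1]   # the middle input unit did not fire
weights = [0.5, 0.5, 0.5]

# Per-weight correction: zero wherever the input was zero.
corrections = [learning_rate * error * x for x in inputs]
weights = [w + c for w, c in zip(weights, corrections)]

print(corrections)   # [0.1, 0.0, 0.1]
print(weights)       # [0.6, 0.5, 0.6] -- the zero input's weight is unchanged
```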