MATLAB neural network gradient descent and mean squared error
I want to know how the gradient descent algorithm works in MATLAB network training and how the MSE is calculated. I have my own application, but it doesn't behave like the MATLAB NN toolbox, and I want to know why.
My algorithm looks like this:
foreach epoch
    gradient_vector = 0                  // accumulated gradient (a vector)
    rmse = 0
    foreach sample in data set
        output = CalculateForward(sample.input)
        error = sample.target - output
        rmse += DotProduct(error, error)
        gradient_part = CalculateBackward(error)
        gradient_vector += (gradient_part / number_of_samples)
    end
    network.AddToWeights(gradient_vector * learning_rate)
    rmse = sqrt(rmse / number_of_samples)
end
Is this similar to what MATLAB does?
Replies (1)
It looks close to what MATLAB does, but keep in mind that the toolbox is designed for a broad range of applications. Your algorithm presents each data entry to the network exactly once per epoch and applies a single weight update per epoch. MATLAB's toolbox can present the data multiple times per epoch, update the weights multiple times per epoch, and update them in a number of different ways. I assure you that your exact method can be duplicated with the existing MATLAB toolbox, but only with a very specific configuration, which you can find by digging through the help files for the training functions you're using. Some of them will be closer to what you're doing than others, so be discerning. Good luck!
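As a rough illustration (not the asker's actual setup), the batch gradient descent training function traingd is probably the closest match to the pseudocode above: it presents the whole data set once per epoch and applies one weight update per epoch. The network size, learning rate, epoch count, and data in this sketch are placeholder assumptions, not values from the question.

% Minimal sketch, assuming dummy data and an arbitrary network size
x = rand(3, 100);                       % 3 inputs,  100 samples (dummy data)
t = rand(2, 100);                       % 2 targets, 100 samples (dummy data)

net = feedforwardnet(10);               % one hidden layer with 10 neurons (assumed size)
net.trainFcn          = 'traingd';      % plain batch gradient descent
net.trainParam.lr     = 0.01;           % learning rate (assumed)
net.trainParam.epochs = 500;            % number of epochs (assumed)
net.performFcn        = 'mse';          % mean squared error as the performance measure
net.divideFcn         = 'dividetrain';  % train on every sample, no validation/test split

[net, tr] = train(net, x, t);           % tr.perf records the MSE after each epoch

y        = net(x);                      % simulate the trained network on the whole set
mseValue = perform(net, t, y);          % MSE over all output elements and samples

Note that the toolbox's mse performance function averages the squared errors over the total number of error elements (samples times output dimensions) and takes no square root, which differs slightly from the RMSE accumulated in your loop. Other trainFcn choices such as 'traingdm', 'traingda' or 'trainlm' update the weights differently, which is what the answer above is referring to.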