What happens in MATLAB's neural network backpropagation?
I am a newbie in MATLAB, and I want to verify my online back-propagation (BP) code in C. I need to test whether the C code behaves exactly the same under the same network settings. The network is a plain BP network for the XOR problem: 2 inputs, 2 hidden nodes, and 1 output. The learning rate is 0.01, the momentum is 0.95, the stopping criterion (goal) is 0.01, and the performance measure is sse. The number of epochs is 1, because I want to check the exact calculation from forward propagation through backward propagation, to verify that the network setting is exactly the same as in C.
Here is my code:
clear all; clc
input  = [0 0; 0 1; 1 0; 1 1]';   % each column is one XOR pattern
target = [0 1 1 0];
state0 = 1367;
rand('state',state0)              % fix the seed so the initial weights are reproducible
net = newff(input,target,2,{},'traingd');
net.divideFcn = '';               % train on all four patterns (no train/val/test split)
% set max epochs, goal, learning rate, show step
net.trainParam.epochs = 1;
net.trainParam.goal   = 0.01;
net.performFcn        = 'sse';
net.trainParam.lr     = 0.01;
net.adaptFcn          = '';
net.trainParam.show   = 100;
net.trainParam.mc     = 0.95;     % NB: 'trainParam' is case-sensitive; also, mc is only
                                  % used by traingdm -- traingd ignores momentum
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
% snapshot of the INITIAL weights -- taken before train(), so they do not
% include the weight update performed below
wih  = net.IW{1,1};   % input-to-hidden weights (2x2)
wihb = net.b{1};      % hidden biases
who  = net.LW{2,1};   % hidden-to-output weights (1x2)
whob = net.b{2};      % output bias
% Train for one epoch (batch update), then simulate
net = train(net,input,target);
y = sim(net,input);
e = target - y;
perf = sse(e)
After running, I found that y(1) is 0.818483286935909, which is different from my manual calculation of 0.609299823823181. (I rechecked by computing:
for i = 1:size(input,2)
    % MATLAB linear indexing is column-major, so wih(2) is wih(2,1), not wih(1,2);
    % index the weight matrix with (row,col) and select column i of the input
    hidden(1) = logsig( wih(1,1)*input(1,i) + wih(1,2)*input(2,i) + wihb(1) );
    hidden(2) = logsig( wih(2,1)*input(1,i) + wih(2,2)*input(2,i) + wihb(2) );
    out(i)    = logsig( who(1)*hidden(1) + who(2)*hidden(2) + whob(1) );
end )
My questions are:
1) Does MATLAB's original train here actually use traingd?
2) What do
net = train(net,input,target);
y = sim(net,input);
really do, given that my manual calculation gives 0.609299823823181 rather than the 0.818483286935909 produced by train and sim?
3) How does my crude forward propagation in C differ from the MATLAB code above?
Please, please help me.
Comments (2)
1) I believe that Matlab's "train" command uses batch learning, not online learning. Perhaps you should look into the "adapt" function in Matlab for online training; I don't know if it's any good, though. Are you asking whether train() and traingd() are actually the same method, or whether train also uses gradient descent?
2) The Matlab help says: "Typically one epoch of training is defined as a single presentation of all input vectors to the network. The network is then updated according to the results of all those presentations."
I guess this means train will backpropagate and "train" the network one time, and then you simulate an answer based on this trained network.
3) Is the C code listed here all the code in your program? If so, I guess the difference is that Matlab updates the weights once and then feeds forward, while your C code only seems to feed forward? Or have I missed something / have you left something out?
Hope I have understood all your questions correctly; they were a bit unclear at times. Please comment if I got something wrong.
Thanks Niclas. I have looked at the adapt function; I guess the newff function initializes different weights (during newff init, and re-init when the activation functions are changed).
2) I also believe traingd uses batch training, but when I checked the output:
3) the C code is as follows:
Thanks.