Optimizing Neural Network Parameters with the Differential Evolution Algorithm
In a conventional feed-forward neural network, the weights and biases are optimized by gradient descent. In practice, gradient descent can converge slowly and deliver limited prediction accuracy even after many iterations. Differential evolution (DE) is a strong optimizer for both linear and nonlinear problems, converging quickly and accurately, which makes it a natural fit for tuning neural-network parameters.
Recall that a feed-forward network finds its weights and biases by descending the gradient of an error function.
We can instead exploit DE's optimization power directly: take the error function as the fitness function and the weights and biases as the decision variables. Training then becomes a multi-dimensional, single-objective optimization problem, and DE's strengths carry over to solving the weight/bias optimization well.
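The HSDE routine called in the script below is not shown in this post. As a rough illustration of the idea above, here is a minimal classic DE loop (DE/rand/1/bin) in Python that fits the flattened weights and biases of the same 4-5-1 network against a mean-squared-error fitness. The DE constants (F, CR), the iteration count, and the toy data are illustrative assumptions, not the author's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

insize, hidesize, outsize = 4, 5, 1
# W1 (5x4) + B1 (5) + W2 (1x5) + B2 (1) flattened into one vector
dim = hidesize * (insize + 1) + outsize * (hidesize + 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(theta, X):
    """Unpack the flat parameter vector and run the 4-5-1 network."""
    W1 = theta[:hidesize * insize].reshape(hidesize, insize)
    B1 = theta[hidesize * insize:hidesize * (insize + 1)].reshape(hidesize, 1)
    rest = theta[hidesize * (insize + 1):]
    W2 = rest[:outsize * hidesize].reshape(outsize, hidesize)
    B2 = rest[outsize * hidesize:].reshape(outsize, 1)
    return sigmoid(W2 @ sigmoid(W1 @ X + B1) + B2)

def fitness(theta, X, y):
    """Mean squared error -- the DE fitness function."""
    return np.mean((forward(theta, X) - y) ** 2)

def de_optimize(X, y, popsize=10, iters=200, F=0.5, CR=0.9):
    pop = rng.uniform(-1, 1, size=(popsize, dim))
    cost = np.array([fitness(p, X, y) for p in pop])
    for _ in range(iters):
        for i in range(popsize):
            others = [j for j in range(popsize) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                # mutation: rand/1
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True         # guarantee one gene crosses
            trial = np.where(cross, mutant, pop[i]) # binomial crossover
            f = fitness(trial, X, y)
            if f <= cost[i]:                        # greedy one-to-one selection
                pop[i], cost[i] = trial, f
    best = pop[np.argmin(cost)]
    return best, cost.min()

# Toy data standing in for the iris features/labels used in the post.
X = rng.uniform(0, 1, size=(insize, 40))
y = sigmoid(X.sum(axis=0, keepdims=True) - 2)  # arbitrary smooth target in (0, 1)
best, err = de_optimize(X, y)
```

The `forward` helper also shows how a single flat DE individual maps back onto `W1`, `B1`, `W2`, `B2` — the same unpacking the MATLAB script performs after optimization.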
The full test-script code is as follows:
clc
clear
close all
%%%%%%%%%%%%%%%% Known issues %%%%%%%%%%%%%%%%%%
% The construction of the fitness function still needs refinement
% Improve the optimization efficiency, and work out how to choose the best hidden-layer size dynamically
%%%%%%%%%%%%%%%%%%%%%%%%% Set the number of neurons in each layer %%%%%%%%%%%%%%%%%%%%%%%%%%%
insize = 4;   % number of input-layer neurons -------- determined by the dataset
hidesize = 5; % number of hidden-layer neurons ------- chosen by the user
outsize = 1;  % number of output-layer neurons ------- determined by the labels
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Load the feature data %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
load('iris.mat')
data1=iris(:,1:4);
input=data1';
testdata=input(:,80:110); % 31 test samples (note: these columns also fall inside the training range below)
input_traindata=input(:,1:140)';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Load the labels %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
output_train=iris(1:140,end)';
output_test=iris(80:110,end)';
[output_testLable,outputps]=mapminmax(output_train); % normalize the training outputs to [-1,1]
%%%%%%%%%% Optimize the parameters with HSDE -- train the model %%%%%%%%%%%%%%%%%%%
output_traindata=output_testLable'; % the normalized training outputs
popsize=10;
invidualsize=hidesize*(insize +outsize+1)+outsize; % length of one DE individual: all weights and biases flattened
iter=500;
[best,optvalue]=HSDE(popsize,invidualsize,iter,input_traindata,output_traindata); % the optimized parameters still need some work
loglog(optvalue,'m-','linewidth',1.5);
xlabel('FEs');ylabel('error value')
%%%%%%%%%%%%%% Convert the optimized vector back into weights and biases, and save them %%%%%%%%%%%%%%%%%%%%%%
c=best(1:hidesize*(insize +outsize));
p=zeros(hidesize,insize +outsize);
pp=zeros(outsize,insize +outsize);
for i=1:size(c,2)/(insize+1)
for j=1:(insize+1)
p(i,j)=c(1);
c(1)=[];
end
end
W1=p(:,1:insize);
B1=p(:,(insize+1));
d=best(hidesize*(insize +outsize)+1:end);
for i=1:size(d,2)/(hidesize+1)
for j=1:hidesize+1
pp(i,j)=d(1);
d(1)=[];
end
end
W2=pp(:,1:hidesize);
B2=pp(:,hidesize+1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Test the model %%%%%%%%%%%%%%%%%%%%%%%%%
accuracy_numbers=0;
tempyout = zeros(outsize,size(testdata,2));
for i = 1:size(testdata,2)
testdata_x = testdata(:,i);
testdata_hidein = W1*testdata_x+B1; % hidden-layer input
testdata_hideout = zeros(hidesize,1); % hidden-layer output
for j = 1:hidesize
testdata_hideout(j) = SigmoidFun(testdata_hidein(j));
end
testdata_yin= W2*testdata_hideout+B2; % output-layer input
testdata_yout = zeros(outsize,1);
for j = 1:outsize
testdata_yout(j) = SigmoidFun( testdata_yin(j));
end
tempyout(:,i) =testdata_yout; % final test-set outputs ----- compare these with the test-set labels
% if testdata_yout>0 && testdata_yout<=1.5
% result_out(i)=1;
% elseif testdata_yout>1.5 && testdata_yout<=2.5
% if result_out(i)==testdata(3,i)
% accuracy_numbers= accuracy_numbers+1;
% end
% hold on;
% else
% scatter(testdata_x(1),testdata_x(2),'g') % mark a class boundary -- how should the border line be handled?
% result_out(i)=-1;
% if result_out(i)==testdata(3,i)
% accuracy_numbers= accuracy_numbers+1;
% end
% hold on;
% title('BP neural network classification on the test set');
% end
end
test_simu=mapminmax('reverse', tempyout,outputps); % map the simulated outputs back to the original scale
error=abs(test_simu-output_test); % absolute error between predicted and true values
%%%%%%%%%%%% Evaluate the prediction accuracy %%%%%%%%%%%%%%%%%%%%%
successful_numbers=0;
unsuccessful_numbers=0;
successful_accuracy=abs(output_test-test_simu);
for i=1:size(successful_accuracy,2)
if successful_accuracy(:,i)/output_test(:,i)<0.05
successful_numbers= successful_numbers+1;
else
unsuccessful_numbers= unsuccessful_numbers+1;
end
end
prec_accuracy= successful_numbers/size(output_test,2);
disp(['Prediction accuracy: ',num2str( prec_accuracy)]);
%% Step 10: compare the true values, the predicted values, and their error
figure(2)
plot(output_test(1,:),'bo-')
hold on
plot(test_simu(1,:),'r*-')
hold on
plot(error(1,:),'square','MarkerFaceColor','b')
legend('expectancy value','predicted value','error');
xlabel('data array');
ylabel('output value');
The model is trained and tested on the iris dataset; columns 80-110 of the features (31 samples) are used as the test set, and the measured accuracy reaches:
Prediction accuracy: 0.96774
Trend of the prediction plot (Figure 2 produced by the script):
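As a quick sanity check on the reported figure (an inference from the indices in the script, not stated by the author): columns 80..110 give 31 test samples, and 30 of them passing the 5% relative-error criterion reproduces the printed accuracy.

```python
# The script's test set spans columns 80..110 of the feature matrix.
total = 110 - 80 + 1            # 31 test samples
accuracy = round(30 / total, 5) # 30 of 31 within the 5% relative-error bound
print(total, accuracy)
```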