A detailed look at the comprehensive learning particle swarm optimization algorithm CLPSO (with the author's MATLAB code)

Recently, our team has published two improved algorithms based on CLPSO [1]. Here I analyze this comprehensive learning algorithm according to my own understanding and share the author's source code for everyone to study: https://download.csdn.net/download/Bernard_S/12612817 (free download; it is simply mirrored from the author's home page, ha ha). First, an overall evaluation of CLPSO: as the authors themselves note, the algorithm performs very well on multimodal (multi-peak) problems but poorly on unimodal (single-peak) ones. My own experiments confirm this, and I am still amazed by its performance on multimodal problems. Accordingly, most improvements to CLPSO focus on strengthening its exploitation ability to make it more competitive on unimodal problems. Below I analyze CLPSO's exploration and exploitation behavior through a detailed walkthrough, and briefly describe the improvements in some published papers. (The formulas are taken from the references.)
Particle swarm optimization
As we all know, particle swarm optimization endows each particle in the population with two key attributes: a velocity and a position, and evolution is realized through the changes of these two attributes over the iterations. Concretely:
$$v_i^d \leftarrow w \cdot v_i^d + c_1 \cdot r_1^d \cdot \left(pbest_i^d - x_i^d\right) + c_2 \cdot r_2^d \cdot \left(gbest^d - x_i^d\right), \qquad x_i^d \leftarrow x_i^d + v_i^d$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration constants, and $r_1^d$, $r_2^d$ are uniform random numbers in $[0,1]$.
That is, each particle's evolution is driven by two components: its own historical best solution (pbest) and the global historical best (gbest). These two components also control the exploration and exploitation abilities of the algorithm: the pbest term pulls the particle to explore promising regions over the whole search space, while the gbest term pulls it toward the best region found so far, exploiting the current best solution!
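To make the update concrete, here is a minimal one-step sketch for a single particle (the values of w, c1 and c2 are common defaults chosen for illustration, not taken from CLPSO):

D = 2; w = 0.9; c1 = 2.0; c2 = 2.0;
x = rand(1,D); v = zeros(1,D);          % current position and velocity
pbest = rand(1,D); gbest = rand(1,D);   % stand-ins for the two attractors
v = w*v + c1*rand(1,D).*(pbest - x) + c2*rand(1,D).*(gbest - x);
x = x + v;                              % the particle moves along its new velocity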
CLPSO improvements
The brief description above is meant to make it easier to analyze how CLPSO's modification shifts the balance between exploration and exploitation in particle swarm optimization, which is what produces its current results. The modification itself is very simple yet effective. The comprehensive learning strategy is analyzed in detail next.
Its core is the following formula. It only modifies the velocity update of classical PSO: the particle's own historical-best term is replaced by a new comprehensive-learning term, and the global-best term is removed.
$$v_i^d \leftarrow w \cdot v_i^d + c \cdot r^d \cdot \left(pbest_{f_i(d)}^d - x_i^d\right), \qquad x_i^d \leftarrow x_i^d + v_i^d$$

where $f_i(d)$ gives the index of the particle whose historical best particle $i$ learns from in dimension $d$ (the exemplar), and $r^d$ is uniform in $[0,1]$.
So next we focus on analyzing this comprehensive-learning exemplar. First, let's see how it is generated:

for k=1:ps
    ar=randperm(D);
    ai(k,ar(1:m(k)))=1;                  % mark m(k) random dimensions to learn from gbest (m(k)=0 here, so none are marked)
    fi1=ceil(ps*rand(1,D));              % first random particle index, drawn independently per dimension
    fi2=ceil(ps*rand(1,D));              % second random particle index, drawn independently per dimension
    fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;  % per-dimension tournament: keep the particle with the better (lower) pbest fitness
    bi=ceil(rand(1,D)-1+Pc(k));          % Bernoulli mask: 1 with probability Pc(k) in each dimension
    if bi==zeros(1,D),rc=randperm(D);bi(rc(1))=1;end  % ensure at least one dimension learns from another particle
    f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);         % exemplar index: tournament winner where bi==1, own pbest elsewhere
end

In short: for each dimension of each particle, two particles are randomly selected from the whole population, their pbest fitness values are compared, and the better one becomes the candidate; the candidate is then crossed with the particle's own historical best, dimension by dimension, according to the learning probability Pc defined in the paper. This produces the comprehensive-learning exemplar, i.e. $pbest_{f_i(d)}$ in the formula above. During the iterations, a stagnation counter records how long each particle's pbest has gone without improving (note that in the source code the counter is not reset to zero when the pbest does improve). When a particle's stagnation count reaches the threshold (5 in this code; the paper calls it the refreshing gap), a new exemplar is generated for it.
Pc is defined as follows:

t=0:1/(ps-1):1;t=5.*t;
Pc=0.0+(0.5-0.0).*(exp(t)-exp(t(1)))./(exp(t(ps))-exp(t(1)));

It is also easy to understand: each particle in the population is assigned its own Pc value, increasing exponentially with the particle index over the range [0, 0.5], and Pc does not change with the iterations.
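A quick numerical check of this shape (a small sketch, assuming ps = 30 as in the demo call of the source code below):

ps = 30;
t = 0:1/(ps-1):1; t = 5.*t;
Pc = 0.0+(0.5-0.0).*(exp(t)-exp(t(1)))./(exp(t(ps))-exp(t(1)));
disp([Pc(1) Pc(15) Pc(ps)])   % roughly [0  0.035  0.5]

So although Pc spans [0, 0.5], the exponential shape means most particles keep a small Pc and mostly learn from their own pbest, while only the last few particles learn heavily from others.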
That is actually the whole of the algorithm's modification: very simple to understand, yet very effective. Here is my understanding of why it works:
First, why remove the global-best term?
In fact, many CLPSO variants deliberately add the global-best term back to enhance exploitation. I think HCLPSO [2] is one of the more effective variants: it introduces multiple sub-swarm topologies, keeping one purely comprehensive-learning (CL) sub-swarm and adding a second CL sub-swarm whose velocity update also uses the global-best term. But like many such modifications, it disturbs CLPSO's own topology, and experiments show that HCLPSO is inferior to CLPSO on multimodal problems. My understanding of why the global-best term is removed: first, the comprehensive-learning exemplar already selects the better of two random particles, so in each iteration the global best is very likely to be chosen as the learning target by one or more particles anyway. Second, a global-best term would largely cancel the exploration ability of the exemplar (the oscillation problem described below), because the exemplar is itself a probabilistic mix of the particle's own best and randomly chosen learning targets.
Why the comprehensive-learning exemplar works on multimodal problems
The original paper explains this via the oscillation problem inherent in PSO's evolution. From the classical velocity-update formula, besides the inertia term $w \cdot v$, the latter two terms control the direction of the particle's evolution and determine the relative emphasis on exploration versus exploitation. When $(pbest - x)$ and $(gbest - x)$ point in opposite directions in many dimensions, one of them inevitably loses its effect or even counteracts the other, just as two mechanical forces acting in opposite directions cancel. A particle trapped this way over many generations loses its search ability, wasting function evaluations and easily leading to premature convergence. This is why CLPSO mixes randomly selected targets into the particle's own historical best and removes the global best: it avoids exactly this phenomenon, and the results show the effect is remarkable!
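As a one-dimensional illustration of this cancellation (a sketch of my own, not from the paper): suppose that in some dimension $d$ the particle currently lies between its two attractors, so that $pbest^d - x^d = +a$ and $gbest^d - x^d = -a$ for some $a > 0$. The non-inertia part of the classical update then becomes

$$c_1 r_1 \left(pbest^d - x^d\right) + c_2 r_2 \left(gbest^d - x^d\right) = a\,(c_1 r_1 - c_2 r_2),$$

which flips sign at random from one iteration to the next (for $c_1 = c_2$ it is positive or negative with equal probability), so the particle merely jitters in that dimension instead of making progress. CLPSO's single exemplar term $c\, r\, (pbest_{f_i(d)}^d - x_i^d)$ pulls toward exactly one target per dimension, so this tug-of-war cannot arise.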
Finally, the full source code is pasted below for everyone to study; the experiment scripts are in the download link above.

function [gbest,gbestval,fitcount]= CLPSO_new_func(fhd,Max_Gen,Max_FES,Particle_Number,Dimension,VRmin,VRmax,varargin)
%[gbest,gbestval,fitcount]= CLPSO_new_func('f8',3500,200000,30,30,-5.12,5.12)
rand('state',sum(100*clock));   % seed the (legacy) random number generator from the clock
me=Max_Gen;
ps=Particle_Number;
D=Dimension;
cc=[1 1];   %acceleration constants (overridden below)
t=0:1/(ps-1):1;t=5.*t;
Pc=0.0+(0.5-0.0).*(exp(t)-exp(t(1)))./(exp(t(ps))-exp(t(1)));
% Pc=0.5.*ones(1,ps);
m=0.*ones(ps,1);   % m(k): number of dimensions that learn from gbest; fixed at 0, so the gbest term is never active
iwt=0.9-(1:me)*(0.7/me);   % inertia weight, decreasing linearly from 0.9 to 0.2
% iwt=0.729-(1:me)*(0.0/me);
cc=[1.49445 1.49445];   % acceleration constants actually used
if length(VRmin)==1
    VRmin=repmat(VRmin,1,D);
    VRmax=repmat(VRmax,1,D);
end
mv=0.2*(VRmax-VRmin);   % maximum velocity: 20% of the search range
VRmin=repmat(VRmin,ps,1);
VRmax=repmat(VRmax,ps,1);
Vmin=repmat(-mv,ps,1);
Vmax=-Vmin;
pos=VRmin+(VRmax-VRmin).*rand(ps,D);

for i=1:ps
    e(i,1)=feval(fhd,pos(i,:),varargin{:});   % evaluate the initial population
end

fitcount=ps;
vel=Vmin+2.*Vmax.*rand(ps,D);%initialize the velocity of the particles
pbest=pos;
pbestval=e; %initialize the pbest and the pbest's fitness value
[gbestval,gbestid]=min(pbestval);
gbest=pbest(gbestid,:);%initialize the gbest and the gbest's fitness value
gbestrep=repmat(gbest,ps,1);

stay_num=zeros(ps,1);   % stagnation counters (not reset when pbest improves; see the note above)

ai=zeros(ps,D);
f_pbest=1:ps;f_pbest=repmat(f_pbest',1,D);   % f_pbest(k,d): index of the particle k learns from in dimension d (initially itself)
for k=1:ps
    ar=randperm(D);
    ai(k,ar(1:m(k)))=1;      %%% it looks as if the author intended some dimensions to learn from the global best, but m is never updated from 0, so the global best is in fact never used
    fi1=ceil(ps*rand(1,D));
    fi2=ceil(ps*rand(1,D));
    fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;
    bi=ceil(rand(1,D)-1+Pc(k));
    if bi==zeros(1,D),rc=randperm(D);bi(rc(1))=1;end
    f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);
end

stop_num=0;   % (declared but unused)
i=1;


while i<=me&fitcount<=Max_FES
    i=i+1;
    for k=1:ps
        
        if stay_num(k)>=5   % pbest has stagnated for 5 steps: reset the counter and regenerate the exemplar
            %     if round(i/10)==i/10%|stay_num(k)>=5
            stay_num(k)=0;
            ai(k,:)=zeros(1,D);
            f_pbest(k,:)=k.*ones(1,D);
            ar=randperm(D);
            ai(k,ar(1:m(k)))=1;
            fi1=ceil(ps*rand(1,D));
            fi2=ceil(ps*rand(1,D));
            fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;
            bi=ceil(rand(1,D)-1+Pc(k));
            if bi==zeros(1,D),rc=randperm(D);bi(rc(1))=1;end
            f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);
        end
        
        for dimcnt=1:D
            pbest_f(k,dimcnt)=pbest(f_pbest(k,dimcnt),dimcnt);   % assemble the comprehensive-learning exemplar dimension by dimension
        end
        aa(k,:)=cc(1).*(1-ai(k,:)).*rand(1,D).*(pbest_f(k,:)-pos(k,:))+cc(2).*ai(k,:).*rand(1,D).*(gbestrep(k,:)-pos(k,:));   % attraction term: exemplar where ai==0, gbest where ai==1 (never, since m=0)
        vel(k,:)=iwt(i).*vel(k,:)+aa(k,:);
        vel(k,:)=(vel(k,:)>mv).*mv+(vel(k,:)<=mv).*vel(k,:);          % clamp velocity from above at mv
        vel(k,:)=(vel(k,:)<(-mv)).*(-mv)+(vel(k,:)>=(-mv)).*vel(k,:); % clamp velocity from below at -mv
        pos(k,:)=pos(k,:)+vel(k,:);
        
        if (sum(pos(k,:)>VRmax(k,:))+sum(pos(k,:)<VRmin(k,:)))==0   % evaluate only if the particle stayed inside the search bounds
            e(k,1)=feval(fhd,pos(k,:),varargin{:});
            fitcount=fitcount+1;
            tmp=(pbestval(k)<=e(k));   % tmp==1: the new position did not improve on pbest
            if tmp==1
                stay_num(k)=stay_num(k)+1;   % count another stagnation step
            end
            temp=repmat(tmp,1,D);
            pbest(k,:)=temp.*pbest(k,:)+(1-temp).*pos(k,:);
            pbestval(k)=tmp.*pbestval(k)+(1-tmp).*e(k);%update the pbest
            if pbestval(k)<gbestval
                gbest=pbest(k,:);
                gbestval=pbestval(k);
                gbestrep=repmat(gbest,ps,1);%update the gbest
            end
        end
        
    end
    
    % if round(i/100)==i/100
    %     plot(pos(:,D-1),pos(:,D),'b*');hold on;
    %     for k=1:floor(D/2)
    %         plot(gbest(:,2*k-1),gbest(:,2*k),'r*');
    %     end
    %     hold off
    %     title(['PSO: ',num2str(i),' generations, Gbestval=',num2str(gbestval)]);
    %     axis([VRmin(1,D-1),VRmax(1,D-1),VRmin(1,D),VRmax(1,D)])
    %     drawnow
    % end
    
    if fitcount>=Max_FES
        break;
    end
    if (i==me)&(fitcount<Max_FES)
        i=i-1;   % generations exhausted but evaluation budget remains: repeat the last generation
    end
end
gbestval   % no trailing semicolon: MATLAB displays the final best fitness
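To try the function you only need a fitness handle that takes a row vector and returns a scalar. A minimal smoke test (assuming a simple sphere objective as a stand-in; the 'f8' benchmark handle from the demo comment is not included in this post):

sphere_obj = @(x) sum(x.^2);   % hypothetical stand-in objective
[gb, gbval, fes] = CLPSO_new_func(sphere_obj, 1000, 30000, 30, 30, -100, 100);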
