An Improved Particle Swarm Optimization Algorithm

Lin Lu, Qi Luo, Jun-yong Liu, Chuan Long
School of Electrical Information, Sichuan University, Chengdu, Sichuan, China, 610065

Abstract

A hierarchical structure poly-particle swarm optimization (HSPPSO) approach using the hierarchical structure concept of control theory is presented. In the bottom layer, parallel optimization is performed on multiple particle swarms, which enlarges the particle search domain. In the top layer, each particle swarm of the bottom layer is treated as a particle of a single particle swarm. The best position found by each bottom-layer swarm is regarded as the best position of the corresponding single particle in the top-layer swarm, and the result of the top-layer optimization is fed back to the bottom layer. If some particles tend toward a local extremum during the particle swarm optimization (PSO) run, their velocities are re-initialized. Tests of the proposed method on four typical functions show that HSPPSO outperforms PSO in both convergence rate and accuracy.

1. Introduction

The particle swarm optimization algorithm, first proposed by Kennedy and Eberhart in 1995, is an intelligent optimization algorithm developed in recent years that simulates the migration and aggregation of a bird flock seeking food [1]. Like evolutionary algorithms, PSO adopts a strategy based on a particle swarm and parallel global random search. PSO determines its search path according to the velocity and current position of each particle, without more complicated evolutionary operations. Compared with earlier intelligent algorithms, PSO offers better calculation speed and memory occupation, has fewer parameters, and is easier to implement. At present, PSO is attracting more and more attention from researchers and has been widely used in fields such as function optimization, combinatorial optimization, neural network training, robot path planning, pattern recognition, and fuzzy system control [2]. The study of PSO has also spread into the electricity, communications, and economics fields.

Like other random search algorithms, PSO suffers from a certain degree of premature convergence. To improve optimization efficiency, many scholars have proposed improvements to the basic PSO algorithm, such as PSO with inertia weight [3], PSO with a constriction factor [4], and hybrids with other intelligent algorithms [5]. These modified algorithms further improve calculation efficiency, convergence rate, and related properties. Proper coordination between global search and local search is critical for an algorithm to converge to the global optimum. Basic PSO has a simple concept and easily controlled parameters, but it treats the entire optimization as a whole without detailed division and searches at the same intensity throughout, which to a certain extent leads to premature convergence.

This paper presents a hierarchical structure poly-particle swarm optimization approach that uses the hierarchy concept of control theory. The bottom layer performs parallel optimization on multiple particle swarms, which is equivalent to increasing the number of particles and enlarges the search domain. To keep the algorithm from getting trapped in a local optimum and becoming premature, a disturbance strategy is introduced: a particle's velocity is re-initialized when it falls below a minimum restriction during iteration. The best position found by each poly-particle swarm in the bottom layer is regarded as the best position of a single particle in the top layer. The top layer performs PSO and feeds the global optimal solution back to the bottom layer. The independent search of the poly-particle swarms in the bottom layer ensures that optimization is carried out over a wide area, while the top-layer swarm's tracking of the current global optimum ensures the convergence of the algorithm. Several benchmark functions are used to test the algorithm in this paper, and the results show that the new algorithm performs well in both optimization result and convergence characteristics and can effectively avoid the premature phenomenon.

2. Basic PSO algorithm

PSO is a swarm-intelligence-based evolutionary computation technique. Each individual of the swarm is a volume-less particle in a multidimensional search space. The particle's position represents a potential solution of the optimization problem, and its flying velocity determines the direction and step size of the search. A particle flies through the search space at a velocity that is dynamically adjusted according to its own flying experience and that of its companions; that is, it constantly adjusts its direction and velocity by tracking the best position found so far by the particle itself and the best found by the whole swarm, which forms the positive feedback of swarm optimization. The particle swarm tracks these two current best positions, gradually moves toward better regions, and finally arrives at the best position of the whole search space.

Suppose the objective search space is D-dimensional and PSO randomly initializes a swarm of m particles. The position of the ith particle (a potential solution of the optimization problem) is $X_i = (x_{i1}, x_{i2}, \dots, x_{iD})$; substituting it into the objective function yields an adaptive value that is used to evaluate the solution. The flying velocity is $V_i = (v_{i1}, v_{i2}, \dots, v_{iD})$. The individual extremum $P_i = (p_{i1}, p_{i2}, \dots, p_{iD})$ is the best previous position of the ith particle, and the global extremum $P_g = (p_{g1}, p_{g2}, \dots, p_{gD})$ is the best previous position of the whole swarm. Velocity and position are updated at each iteration according to the formulas below:

$$v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 \left(p_{id}^{k} - x_{id}^{k}\right) + c_2 r_2 \left(p_{gd}^{k} - x_{id}^{k}\right)$$

$$v_{id}^{k+1} = \begin{cases} v_{\max}, & \text{if } v_{id}^{k+1} > v_{\max} \\ v_{\min}, & \text{if } v_{id}^{k+1} < v_{\min} \end{cases} \qquad x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \tag{1}$$

where w is the inertia weight, $c_1$ and $c_2$ are learning factors, and $r_1$, $r_2$ are random numbers in [0, 1]. The bottom and top layers of HSPPSO use variants of this update, referred to below as formulas (2) and (3) respectively.
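As a minimal sketch of this update rule, the following Python function performs one iteration of formula (1) for a whole swarm. The function name and array layout are our own illustrative choices, with the inertia weight and learning factors defaulted to the values used in Section 5.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7298, c1=1.4962, c2=1.4962,
             v_min=-1.0, v_max=1.0):
    """One iteration of formula (1) for a swarm.
    x, v, p_best: (m, D) arrays; g_best: (D,) array."""
    r1 = np.random.rand(*x.shape)   # fresh uniform [0, 1) numbers each step
    r2 = np.random.rand(*x.shape)
    # velocity update: inertia term + cognitive pull + social pull
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v = np.clip(v, v_min, v_max)    # velocity clamping to [v_min, v_max]
    x = x + v                       # position update
    return x, v
```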

3. HSPPSO algorithm

(1) Adopt a hierarchical structure. The top-layer PSO adjusts particle velocities according to the global optimum found by each swarm in the bottom layer. The independent search of the L poly-particle swarms in the bottom layer ensures that the optimization is carried out over a wider area, while the top-layer swarm's tracking of the current global optimum ensures the convergence of the algorithm, so that attention is paid to both the precision and the efficiency of the optimization process.

(2) Introduce a disturbance strategy. In PSO, optimization is guided by the cooperation and competition between particles. Once a particle finds a position that is currently optimal, the other particles quickly move to that spot and gather around it. The swarm then loses diversity, and there is no longer any mutual influence between particles. If the particles have no variability, the whole swarm stagnates at that point; if the point is a local optimum, the particle swarm cannot search the other areas and gets trapped in the local optimum, the so-called premature phenomenon. To avoid prematurity, whenever the velocity of a bottom-layer particle is smaller than the boundary value, so that its position can no longer be usefully updated, the particle's velocity is re-initialized instead of being updated by the usual rule.

4. Algorithm flow

(1) Initialization. Set the number of swarms L, the swarm size m, and the algorithm parameters: inertia weight, learning factors, velocity boundary value, and the maximum iteration number.

(2) Each bottom-layer swarm randomly generates m initial solutions, which are also taken as the particles' current individual optima p_ij. The adaptive value of every particle is computed, and the best adaptive value among all particles is taken as the swarm's current optimum p_ig, which is transferred to the top layer.

(3) The top layer accepts the L values p_ig from the bottom layer as the initial positions of its particles, which are also taken as those particles' current individual optima. The best adaptive value among all top-layer particles is taken as the current global optimum p_g of the swarm, which is transferred to the bottom layer.

(4) The bottom layer accepts p_g from the top layer and updates particle velocities and positions according to formula (2); if a velocity is below the boundary value, that particle's velocity is re-initialized.

(5) The bottom layer computes the adaptive values of its particles and compares them with the current individual extrema; the better value becomes the particle's current optimum p_ij. The minimum of the p_ij is compared with the swarm's global extremum, and the better value becomes the swarm's current global extremum p_ig, which is transferred to the top layer.

(6) The top layer accepts the p_ig from the bottom layer. The best adaptive value of the p_ig is compared with the current global extremum p_g, and the better value becomes the current global extremum p_g. The velocity and position of each top-layer particle are then updated according to formula (3).

(7) The top layer computes the adaptive values of its particles and compares them with the current individual extrema; the better value becomes the particle's current optimum p_ig. The minimum of the p_ig is compared with the global extremum p_g, and the better value becomes the swarm's current global extremum p_g, which is transferred to the bottom layer.

(8) Check the termination condition: if it is satisfied, output the result p_g; otherwise go to step (4).
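The eight steps above can be summarized in the following Python sketch, which reuses `pso_step` and `disturb` from the earlier sketches. Since formulas (2) and (3) do not survive in this copy of the paper (the third learning factor c3 set in Section 5 suggests the top-layer update has an extra term), the sketch assumes both layers use the standard update of formula (1); the function name `hsppso` and all bookkeeping details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hsppso(fitness, D, L=5, m=40, iters=500, v_max=1.0):
    """Hierarchical-structure poly-particle swarm optimization (sketch).
    `fitness` maps a (D,)-vector to a scalar; lower is better."""
    rng = np.random.default_rng()
    # steps (1)-(2): L bottom-layer swarms of m random particles each
    X = rng.uniform(-100, 100, (L, m, D))       # positions
    V = rng.uniform(-v_max, v_max, (L, m, D))   # velocities
    P = X.copy()                                # individual bests p_ij
    P_f = np.array([[fitness(x) for x in sw] for sw in X])
    G = P[np.arange(L), P_f.argmin(axis=1)]     # per-swarm bests p_ig
    G_f = P_f.min(axis=1)
    # step (3): the top layer has one particle per bottom-layer swarm
    tx, tv = G.copy(), rng.uniform(-v_max, v_max, (L, D))
    tp, tp_f = tx.copy(), G_f.copy()            # top-layer memories
    g, g_f = tp[tp_f.argmin()].copy(), tp_f.min()   # global best p_g
    for _ in range(iters):                      # step (8): iteration budget
        for s in range(L):
            # step (4): bottom-layer update toward p_g, plus disturbance
            X[s], V[s] = pso_step(X[s], V[s], P[s], g,
                                  v_min=-v_max, v_max=v_max)
            V[s] = disturb(V[s], v_min=-v_max, v_max=v_max)
            # step (5): refresh individual and per-swarm bests
            f = np.array([fitness(x) for x in X[s]])
            better = f < P_f[s]
            P[s][better], P_f[s][better] = X[s][better], f[better]
            if P_f[s].min() < G_f[s]:
                G_f[s] = P_f[s].min()
                G[s] = P[s][P_f[s].argmin()].copy()
        # step (6): top layer absorbs the bottom-layer bests, then moves
        improved = G_f < tp_f
        tp[improved], tp_f[improved] = G[improved], G_f[improved]
        if tp_f.min() < g_f:
            g_f, g = tp_f.min(), tp[tp_f.argmin()].copy()
        tx, tv = pso_step(tx, tv, tp, g, v_min=-v_max, v_max=v_max)
        # step (7): refresh top-layer memories and p_g (fed back via `g`)
        tf = np.array([fitness(x) for x in tx])
        better = tf < tp_f
        tp[better], tp_f[better] = tx[better], tf[better]
        if tp_f.min() < g_f:
            g_f, g = tp_f.min(), tp[tp_f.argmin()].copy()
    return g, g_f
```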

5. Example

To study the algorithm's performance, tests are done on four common benchmark functions: Spherical, Rosenbrock, Griewank, and Rastrigin. The optimal adaptive value of all four functions is zero. Spherical and Rosenbrock are unimodal functions, while Griewank and Rastrigin are multimodal functions with a great number of local minima.

f1, Spherical function:

$$f_1(x) = \sum_{i=1}^{n} x_i^2, \quad -100 \le x_i \le 100$$

f2, Rosenbrock function:

$$f_2(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right], \quad -100 \le x_i \le 100$$

f3, Griewank function:

$$f_3(x) = \frac{1}{4000} \sum_{i=1}^{n} \left(x_i - 100\right)^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i - 100}{\sqrt{i}}\right) + 1, \quad -100 \le x_i \le 100$$

f4, Rastrigin function:

$$f_4(x) = \sum_{i=1}^{n} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right), \quad -100 \le x_i \le 100$$

The dimension of the four benchmark functions is set to 10, the maximum iteration number to 500, the swarm size m to 40, the number of swarms to 5, the inertia weight to 0.7298, and c1, c2, and c3 to 1.4962. Each test is run 10 times with random initialization.
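The four benchmark functions translate directly into Python; the vectorized NumPy definitions below (with our own function names) follow the formulas above, including the shift of 100 in the Griewank function.

```python
import numpy as np

def spherical(x):                      # f1
    return np.sum(x ** 2)

def rosenbrock(x):                     # f2
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def griewank(x):                       # f3, shifted by 100 as in the paper
    z = x - 100.0
    i = np.arange(1, x.size + 1)
    return np.sum(z ** 2) / 4000.0 - np.prod(np.cos(z / np.sqrt(i))) + 1.0

def rastrigin(x):                      # f4
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```

Under the stated settings, a call such as `hsppso(rastrigin, D=10, L=5, m=40, iters=500)` using the earlier sketch mirrors one run of the experiment, up to the details of formulas (2) and (3) that are not reproduced here.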

Table 1 presents the results of the four benchmark tests with PSO and HSPPSO.

Table 1. Comparison of simulation results

Function  Algorithm  Minimum      Maximum      Average
f1        PSO        3.0083e-9    8.9688e-5    3.5601e-8
          HSPPSO     3.5860e-12   1.9550e-7    4.2709e-11
f2        PSO        5.7441       8.8759       7.65997
          HSPPSO     4.2580       7.8538       5.5342
f3        PSO        0            24.9412      7.36575
          HSPPSO     0            2.3861       0.23861
f4        PSO        4.9750       13.9392      10.15267
          HSPPSO     0.9950       7.9597       4.4806

Table 1 shows that HSPPSO outperforms PSO in solution quality, obtaining better results in tests of both local and global search. Although the difference between HSPPSO and PSO on the unimodal functions is not obvious, HSPPSO performs clearly better on the multimodal functions, which indicates that HSPPSO has good prospects for avoiding premature convergence and finding the global optimum. Figures 2 to 9 show the adaptive value evolution curves of the four benchmark functions over 10 random runs with HSPPSO and PSO; they indicate that HSPPSO is better than PSO in both initial convergence speed and iteration count, and in some tests PSO cannot find the optimum value at all.

[Figures 2 to 9, omitted: adaptive value versus iteration number for each test. Figure 2: HSPPSO on f1. Figure 3: PSO on f1. Figure 4: HSPPSO on f2. Figure 5: PSO on f2. Figure 6: HSPPSO on f3. Figure 7: PSO on f3. Figure 8: HSPPSO on f4. Figure 9: PSO on f4.]

6. Conclusion

Compared with basic PSO, the HSPPSO proposed in this paper obtains better adaptive values and a faster convergence rate for the same number of iteration steps. HSPPSO provides a new idea for large-scale system optimization problems.

References

[1] J. Kennedy, R. Eberhart. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, 1995, Vol. 4: 1942-1948.
[2] F. van den Bergh, A. P. Engelbrecht. Training product unit networks using cooperative particle swarm optimization. In: IEEE International Joint Conference on Neural Networks, Washington DC, USA, 2001: 126-131.
[3] Y. Shi, R. C. Eberhart. A modified particle swarm optimizer. In: IEEE World Congress on Computational Intelligence, 1998: 69-73.
[4] R. C. Eberhart, Y. Shi. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2000: 84-88.
[5] M. Løvbjerg, T. K. Rasmussen, T. Krink. Hybrid particle swarm optimiser with breeding and subpopulations. In: Proceedings of the Third Genetic and Evolutionary Computation Conference. Piscataway, NJ: IEEE Press, 2001.
[6] Wang Ling. Intelligent Optimization Algorithms with Applications. Beijing: Tsinghua University Press, 2001.
