Identification and Prediction Using Neuro-Fuzzy Networks with Symbiotic Adaptive Particle Swarm Optimization

Cheng-Jian Lin and Chun-Cheng Peng
Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung County, Taiwan 411, R.O.C.
E-mail: cjlin@ncut.edu.tw, goudapeng@gmail.com

Chi-Yung Lee
Department of Computer Science and Information Engineering, Nankai University of Technology, Nantou, Taiwan 542, R.O.C.
E-mail: cylee@nkut.edu.tw

Keywords: particle swarm optimization, symbiotic evolution, neuro-fuzzy network, identification, prediction

Received: October 16, 2009

This study presents a novel symbiotic adaptive particle swarm optimization (SAPSO) for neuro-fuzzy network design. The proposed SAPSO uses symbiotic evolution and adaptive particle swarm optimization with a neighborhood operator (APSO-NO) to improve the performance of the traditional PSO. In APSO-NO, we combine the neighborhood operator with adaptive particle swarm optimization to tune the most significant particles. Simulation results show that the proposed SAPSO performs better and requires less computation time than the traditional PSO.

Povzetek: Razvita je nova metoda nevronskih mrež z uporabo roja delcev.

1 Introduction

Neuro-fuzzy networks (NFNs) have been demonstrated to be successful [1]-[9]. Two typical types of NFNs are the Mamdani-type and TSK-type models. In Mamdani-type NFNs [3]-[4], the minimum fuzzy implication is used in fuzzy reasoning. In TSK-type NFNs [5]-[8], the consequent of each rule is a function of the input variables; the generally adopted function is a linear combination of the input variables plus a constant term. Many researchers [6]-[7] have shown that TSK-type NFNs achieve better network size and learning accuracy than Mamdani-type NFNs.

Training the parameters is the main problem in the design of an NFN. To solve this problem, back-propagation (BP) training is widely used [3]-[8]. It is a powerful training technique that can be applied to networks. Nevertheless, BP training uses the steepest descent technique to minimize the error function, so the algorithm may reach a local minimum very quickly yet never find the global solution. In addition, the performance of BP training depends on the initial values of the system parameters, and new mathematical expressions must be derived for each network layer of every new topology. Given these disadvantages, one may face suboptimal performance even for a suitable NFN topology. Hence, techniques that can train the network parameters and find a global solution while optimizing the overall structure are needed. In this respect, a newer algorithm, called particle swarm optimization (PSO), appears to be a better choice than the BP algorithm. It is an evolutionary computation technique developed by Kennedy and Eberhart in 1995 [10]. The underlying motivation for the development of the PSO algorithm was the social behavior of animals, such as birds flocking, fish schooling, and insects swarming. Several researchers have used the PSO method to solve optimization problems, such as control problems [11]-[13] and neural network design [14]-[15]. The performance of most stochastic optimization algorithms, including PSO and genetic algorithms (GAs), declines as the dimensionality of the search space increases.
These algorithms stop when they generate a solution that falls in the optimal region, a small volume of the search space surrounding the global optimum. The probability of generating such a solution decreases exponentially as the dimensionality of the search space increases, so for a similar topology it is harder to find the global optimum in a high-dimensional problem than in a low-dimensional one. One way to overcome this exponential increase in difficulty is to partition the search space into lower-dimensional subspaces, as long as the optimization algorithm can still search every possible region of the search space.

In this paper, a novel learning algorithm, called symbiotic adaptive particle swarm optimization (SAPSO), is proposed to tune the parameters of NFNs. The proposed SAPSO differs from the traditional PSO: in the traditional PSO each particle represents a whole fuzzy system, while in SAPSO each particle represents only one fuzzy rule. An R-rule fuzzy system is constructed by selecting and combining R particles from a given swarm. The proposed SAPSO combines symbiotic evolution with adaptive particle swarm optimization with a neighborhood operator to improve the performance of the traditional PSO. The advantages of the proposed SAPSO are summarized as follows: (1) SAPSO can use smaller population sizes; (2) SAPSO requires less computation than the traditional PSO in each generation; (3) in the learning process, the parameters belonging to one fuzzy rule are searched together, which prevents interference from the other parameters and helps locate the best parameter values; (4) the parameter-adjustment strategy of SAPSO is more effective than that of the traditional PSO.

The rest of this paper is organized as follows. After a review of training algorithms for NFNs in Section 2, Section 3 illustrates the structure of the TSK-type fuzzy model. An overview of PSO is given in Section 4. The novel symbiotic adaptive particle swarm optimization (SAPSO) is proposed in Section 5. Sections 6 and 7 respectively present the simulation results and discussion. Finally, the conclusion is given in the last section.

2 Related works

Besides the most widely applied BP algorithm, other traditional optimization approaches have been applied to training NFNs, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) [16]-[17], conjugate gradient (CG) [18]-[19], and Levenberg-Marquardt (LM) [20]-[21] methods. In the context of deterministic unconstrained optimization, quasi-Newton (QN) methods, sometimes called variable metric methods, are well-known algorithms for finding local minima of specific functions. QN methods are based on Newton's method for finding the stationary point of a function, where the gradient is zero. Newton's method assumes that the function can be locally approximated by a quadratic in the region around the optimum, and requires the first and second derivatives [22], i.e. the gradient vector and the Hessian matrix, to find the stationary point. Moreover, Newton's method and its variants require the Hessian to be positive definite, a condition that is difficult to guarantee in practice. Conjugate gradient methods are, in principle, approaches suitable for large-scale problems [23]. The basic idea of CG methods is to find a stepsize along a search direction formed as a linear combination of the current gradient vector and the previous search direction, as sketched below.
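To make the search direction concrete, the following is a minimal sketch of a nonlinear CG minimizer. It is not taken from the paper; it assumes Fletcher-Reeves coefficients and a simple backtracking line search, whereas the works cited in [18]-[19] may use other CG variants.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, iters=100, tol=1e-8):
    """Minimal nonlinear CG (Fletcher-Reeves): each new search direction is a
    linear combination of the current gradient and the previous direction."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(iters):
        step, fx = 1.0, f(x)                 # backtracking line search along d
        while f(x + step * d) > fx and step > 1e-12:
            step *= 0.5
        x_new = x + step * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:      # gradient near zero: stationary point
            return x_new
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d                # mix gradient with previous direction
        x, g = x_new, g_new
    return x

# usage: minimize a simple quadratic bowl in two dimensions
x_min = conjugate_gradient(lambda x: x @ x, lambda x: 2.0 * x,
                           np.array([3.0, -4.0]))
```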
On the other hand, equipped with a damping factor, the LM (so-called damped Gauss-Newton) methods are capable of relaxing the difficulties of Hessian-based training, i.e. the ill-conditioning of the Hessian matrix. When the damping factor is zero, the LM method is identical to the Gauss-Newton approach, while as the damping factor approaches infinity, the LM method becomes equivalent to the steepest descent method. As indicated in the Introduction, although the traditional second-order approaches generally converge faster, they can still become trapped in local optima. Evolutionary approaches such as particle swarm optimization (PSO) [10], differential evolution (DE) [21], and symbiotic evolution (SE) [25] have therefore been developed for training NFNs [26]-[28]. Since this paper focuses on the PSO approach, the concepts of the DE and SE methods are omitted here, and the reader is referred to the relevant literature for further details; an overview of PSO and our proposed symbiotic adaptive PSO are presented below.

3 Structure of a TSK-type neuro-fuzzy network (TNFN)

A fuzzy model is a knowledge-based system characterized by a set of rules that model the relationship between the control input and output. The reasoning process is defined by means of the inference method, aggregation operators, and fuzzy connectives. The fuzzy knowledge base contains the definitions of fuzzy sets, which are stored in a fuzzy database, and a collection of fuzzy rules. Fuzzy rules are defined by their antecedents and consequents, which relate an observed input state to a desired control action. Most fuzzy systems employ the inference method proposed by Mamdani, in which the consequent parts are defined by fuzzy sets [1]. A Mamdani-type fuzzy rule has the form

IF x_1 is A_1j AND x_2 is A_2j AND ... AND x_n is A_nj THEN y is B_j    (1)

whereas a TSK-type fuzzy rule replaces the consequent fuzzy set B_j by a linear function of the inputs:

IF x_1 is A_1j AND ... AND x_n is A_nj THEN y = w_0j + Σ_i w_ij·x_i    (2)

The TNFN realizes such a rule base as a five-layer network. Layer 1 (Input Node): nodes in this layer only transmit the input values to the next layer, u_i^(1) = x_i (3). Layer 2 (Membership Function Node): each node corresponds to one linguistic label and computes a Gaussian membership grade

u_ij^(2) = exp(−(u_i^(1) − m_ij)² / σ_ij²)    (4)

where m_ij and σ_ij are the mean and deviation of the Gaussian membership function of the ith input for the jth rule. Layer 3 (Rule Node): each node represents one fuzzy rule and computes its firing strength as the product of the incoming membership grades, u_j^(3) = Π_i u_ij^(2) (5). Layer 4 (Consequent Node): each node multiplies the firing strength by the TSK linear consequent,

u_j^(4) = u_j^(3) · (w_0j + Σ_i w_ij·x_i)    (6)

where the summation is over all the inputs and the w_ij are the corresponding parameters of the consequent part; w_ij can be any real value. If w_ij = 0 for i > 0, the model is called the zero-order TNFN model. Layer 5 (Output Node): each node in this layer corresponds to one output variable. The node integrates all the actions recommended by layers 3 and 4 and acts as a defuzzifier with

y = u^(5) = Σ_{j=1}^R u_j^(4) / Σ_{j=1}^R u_j^(3)    (7)
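As a summary of this section, here is a minimal sketch of the five-layer TNFN forward pass of Eqs. (3)-(7). The array shapes and the random test values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def tnfn_forward(x, m, sigma, w):
    """Forward pass of a first-order TSK neuro-fuzzy network.
    x: (n,) inputs; m, sigma: (R, n) Gaussian means/deviations per rule;
    w: (R, n+1) consequent weights [w_0j, w_1j, ..., w_nj] per rule."""
    mu = np.exp(-((x - m) ** 2) / sigma ** 2)   # Layer 2: membership grades, Eq. (4)
    u3 = mu.prod(axis=1)                        # Layer 3: firing strengths, Eq. (5)
    u4 = u3 * (w[:, 0] + w[:, 1:] @ x)          # Layer 4: TSK consequents, Eq. (6)
    return u4.sum() / u3.sum()                  # Layer 5: defuzzification, Eq. (7)

# usage with R = 5 rules and n = 2 inputs
rng = np.random.default_rng(0)
R, n = 5, 2
y = tnfn_forward(rng.uniform(-1, 1, n),
                 rng.uniform(-1, 1, (R, n)),
                 rng.uniform(0.5, 1.5, (R, n)),
                 rng.uniform(-1, 1, (R, n + 1)))
```

Setting the columns w[:, 1:] to zero reduces this sketch to the zero-order TNFN model described above.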
4 An overview of particle swarm optimization

Particle swarm optimization (PSO) [10] is a comparatively recent high-performance optimizer with several desirable attributes, including a basic algorithm that is easy to understand and implement. The algorithm is similar to genetic algorithms and evolutionary algorithms, but requires less computational memory and fewer lines of code. PSO conducts its search using a population of particles, which correspond to the individuals in a GA. Each particle has a velocity vector v_i and a position vector x_i that represents a possible solution. Consider an optimization problem that requires the simultaneous optimization of N variables. A collection, or swarm, of particles is defined, and each particle is assigned a random position in the N-dimensional problem space, so that each particle's position corresponds to a candidate solution of the optimization problem. The particles then fly through the space at their own respective velocities and search it.

PSO follows a simple rule: each particle has three choices in evolution: (1) insist on itself; (2) move towards its own current best position (each particle remembers the best position it has found, called the local best); (3) move towards the current best position of the population (each particle also knows the best position found by any particle in the swarm, called the global best). PSO reaches a balance among these three choices. At each time step, each particle position is scored to obtain a fitness value based on how well it solves the current problem. Using the local best position (Lbest) and the global best position (Gbest), a new velocity for each particle is computed as

v_i(k+1) = ω·v_i(k) + φ1·rand()·(Lbest − x_i(k)) + φ2·rand()·(Gbest − x_i(k))    (8)

where ω, φ1, and φ2 are called the coefficient of inertia, the cognitive study, and the society study, respectively. The term rand() denotes a uniformly distributed random number in [0, 1]. The velocity v_i is limited to the range ±v_max; if it violates this limit, it is set to the corresponding bound. The concept of the updated velocity is illustrated in Fig. 2.

Figure 2: The diagram of the updated velocity in the PSO.

A variable velocity enables every particle to search around its individual best position and the global best position. Based on the updated velocities, each particle changes its position according to

x_i(k+1) = x_i(k) + v_i(k+1)    (9)

After every particle is updated, the fitness value of each particle is calculated again. If the fitness value of the new particle is higher than that of the local best or the global best, the local best or global best is replaced with the new particle. As these updating processes are repeated step by step, the whole population evolves toward the optimum solution; one update step is sketched below. A detailed flowchart is shown in Fig. 3.

Figure 3: The flowchart of the traditional PSO.
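The following is a minimal sketch of one traditional PSO iteration according to Eqs. (8)-(9). The vectorized (P, N) particle layout is an assumption of this sketch; the defaults ω = 0.4 and φ1 = φ2 = 2.0 follow the traditional-PSO row of Table 1.

```python
import numpy as np

def pso_step(x, v, lbest, gbest, w=0.4, phi1=2.0, phi2=2.0, vmax=1.0):
    """One PSO update: x, v, lbest are (P, N); gbest is (N,).
    The velocity mixes inertia, the pull toward Lbest, and the pull toward
    Gbest (Eq. 8); positions then move by the new velocity (Eq. 9)."""
    P, N = x.shape
    v = (w * v
         + phi1 * np.random.rand(P, N) * (lbest - x)
         + phi2 * np.random.rand(P, N) * (gbest - x))
    v = np.clip(v, -vmax, vmax)        # enforce the +/- v_max velocity limit
    return x + v, v
```

After each such step, every particle is re-scored, and Lbest/Gbest are replaced whenever the new position has a higher fitness, exactly as described above.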
5 The symbiotic adaptive particle swarm optimization (SAPSO)

In this section, we introduce the symbiotic adaptive particle swarm optimization (SAPSO) for NFN design. SAPSO uses symbiotic evolution and adaptive particle swarm optimization with a neighborhood operator. The detailed process is described below.

5.1 The Design of a Neuro-Fuzzy Network Using SAPSO

Symbiotic evolution was first proposed in an implicit fitness-sharing algorithm used in an immune system model [31]. Unlike the traditional PSO, which uses each particle in a swarm as a full solution to a problem, symbiotic evolution lets each individual in a population represent only a partial solution. The goal of each individual is to form a partial solution that can be combined with other partial solutions currently in the population to build an effective full solution. In a normal evolutionary algorithm, a single individual is responsible for the overall performance and is assigned a fitness value according to its performance. In symbiotic evolution, the fitness of an individual (a partial solution) is calculated by summing the fitness values of all possible combinations of that individual with other current individuals (partial solutions) and then dividing the sum by the total number of combinations.

The representation of a fuzzy system in SAPSO is shown in Fig. 4: if we need R rules to construct a fuzzy system, we have R sub-swarms, and each sub-swarm produces its own sub-particles. The current best parameters of the fuzzy system, called the cooperative best (Cbest), are recorded. As in the traditional PSO, the velocities and sub-particles in every sub-swarm need to be updated. The evolution process of SAPSO includes coding, initialization, fitness assignment, and sub-particle updating.

Figure 4: The representation of a fuzzy system by SAPSO.

Figure 5: Coding a rule of SAPSO into a sub-particle.

The coding step is concerned with the membership functions and the fuzzy rules of a fuzzy system that represent the particles in SAPSO. The initialization step assigns the sub-swarm values before the evolution process. The fitness assignment step gives a suitable fitness value to each fuzzy system during the evolution process. The complete learning process is described step by step below; a sketch of the fitness assignment follows step E.

A. Coding step: The first step in SAPSO is to code a fuzzy rule into a sub-particle. Figure 5 shows a fuzzy rule of the form given by Eq. (2), where m_ij and σ_ij represent the mean and deviation of the Gaussian membership function in the ith dimension of the jth rule node.

B. Initialization step: Before SAPSO is designed, an initial sub-swarm must be generated. As in the traditional PSO, the initial sub-swarm is generated randomly within a fixed range.

C. Fitness assignment step: As mentioned above, in SAPSO the fitness value of a rule (a sub-particle) is calculated by summing the fitness values of all the randomly selected combinations in which it participates and then dividing the sum by the total number of combinations. The details of assigning the fitness value are as follows.

Step 1: Randomly choose one sub-particle from each sub-swarm and assemble them into a particle. This particle represents a fuzzy system composed of the selected sub-particles.
Step 2: Evaluate the performance of every fuzzy system generated in Step 1 to obtain a fitness value.
Step 3: The fitness records are initially set to zero. Add the fitness value of each fuzzy system to the fitness record of every sub-particle that participated in it.
Step 4: Repeat the above steps until each rule (sub-particle) in each sub-swarm has been selected a sufficient number of times, recording how many times each sub-particle has participated in a fuzzy system.
Step 5: Divide the accumulated fitness value of each sub-particle by the number of times it has been selected. The average fitness value represents the performance of a rule.

In this paper, the fitness value is designed according to the following formulation:

Fitness Value = 1 / (1 + √(E(y, ȳ) / T))    (10)

where

E(y, ȳ) = Σ_{i=1}^T (y_i − ȳ_i)²    (11)

and y_i represents the true value of the ith output, ȳ_i represents the predicted value, E(y, ȳ) is the error function, and T represents the number of training data in each generation.

D. Updating velocities and sub-particles: When the fitness value of each sub-particle is obtained from the fitness assignment step, the Lbest of each sub-particle and the Gbest of each sub-swarm are updated simultaneously using adaptive particle swarm optimization with neighborhood operator (APSO-NO). The APSO-NO algorithm is described in subsection 5.2.

E. Updating the cooperative best (Cbest): When the fitness value of every fuzzy system is obtained, we can find the best fuzzy system in each generation. If the fitness value of any fuzzy system is higher than that of the cooperative best, the cooperative best is replaced. The steps mentioned above are repeated until the predetermined condition is achieved.
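A minimal sketch of the fitness assignment of Steps 1-5 with Eqs. (10)-(11) follows. The helper evaluate_rmse is hypothetical: it is assumed to assemble a TNFN from one rule per sub-swarm and return √(E/T) on the training data.

```python
import numpy as np

def assign_symbiotic_fitness(subswarms, evaluate_rmse, trials=200):
    """subswarms: list of R arrays, each (P, D), one row per rule sub-particle.
    Returns the (R, P) average fitness of every sub-particle."""
    R, P = len(subswarms), subswarms[0].shape[0]
    fit_sum = np.zeros((R, P))                      # accumulated fitness records
    count = np.zeros((R, P))                        # participation counts
    for _ in range(trials):
        picks = [np.random.randint(P) for _ in range(R)]             # Step 1
        rules = np.stack([subswarms[j][p] for j, p in enumerate(picks)])
        fitness = 1.0 / (1.0 + evaluate_rmse(rules))                 # Step 2, Eq. (10)
        for j, p in enumerate(picks):                                # Steps 3-4
            fit_sum[j, p] += fitness
            count[j, p] += 1
    return fit_sum / np.maximum(count, 1)                            # Step 5
```

In practice, trials should be chosen large enough that every sub-particle is selected a sufficient number of times, as Step 4 requires.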
5.2 The Adaptive Particle Swarm Optimization with Neighborhood Operator (APSO-NO)

In recent years, many researchers [30], [32] have applied stability analysis to the PSO dynamics in order to understand how it searches for a globally optimal solution and how its parameters should be tuned. In this paper, the velocity of a particle at the (k+1)-th iteration is redefined for SAPSO as follows:

v_i(k+1) = ω·v_i(k) + φ1·rand()·(Lbest − x_i(k)) + φ2·rand()·(Gbest − x_i(k)) + φ3·rand()·(Cbest − x_i(k))    (12)

where ω, φ1, φ2, and φ3 are called the coefficient of inertia, the cognitive study, the group study, and the society study, respectively. We hope to accelerate every sub-particle toward its own best position (Lbest), the best partial solution (Gbest), and the best full solution (Cbest). For ease of analysis, the particle is reduced to one dimension. Thus, Eq. (12) is rewritten as

v(k+1) = ω·v(k) + α·(Z − x(k))    (13)

where

α = φ1·rand() + φ2·rand() + φ3·rand()    (14)

and Z = (φ1·rand()·Lbest + φ2·rand()·Gbest + φ3·rand()·Cbest) / α. The eigenvalues of the resulting dynamic system are

λ_{1,2} = ((ω + 1 − α) ± √((ω + 1 − α)² − 4ω)) / 2    (18)

According to stability theory, the behavior of a particle is stable if and only if |λ1| < 1 and |λ2| < 1. Since the eigenvalues λ_{1,2} are a function of the parameters ω, φ1, φ2, and φ3, eigenvalue analysis is carried out under the following four conditions to find the stable region of the system (for the detailed proofs, refer to [30]):

(1) ω = 0 ⟹ 0 < α < 2
(2) α < ω + 1 − 2√ω ⟹ 0 < ω < 1
(3) ω + 1 − 2√ω ≤ α ≤ ω + 1 + 2√ω ⟹ 0 < ω < 1
(4) α > ω + 1 + 2√ω ⟹ 0 < ω < 1    (19)

Based on the above analysis, and in terms of the parameters α and ω, the criterion of convergence is |ω| < 1 and 0 < α < 2ω + 2. From Eq. (14), α is a random number distributed in [0, φ1 + φ2 + φ3], with average value (φ1 + φ2 + φ3)/2. We therefore use three parameters κ1, κ2, and κ3, where 0 < κ1 + κ2 + κ3 ≤ 1, and the velocity update becomes

v_i(k+1) = ω·v_i(k) + 3κ1·α·rand()·(Lbest − x_i(k)) + 3κ2·α·rand()·(Vbest − x_i(k)) + 3κ3·α·rand()·(Cbest − x_i(k))    (27)

where Vbest is defined as the best solution in the neighborhood of the sub-particle waiting to be updated. The neighborhood is identified by calculating the distances between the candidate sub-particle and the other sub-particles; the pseudo code is shown in Fig. 6. The number of neighbors gradually increases with the generations, and as the generations approach the terminal condition, Vbest tends toward Gbest.

1. Get dist[i] by calculating the distances between the candidate sub-particle and all other sub-particles.
2. Find dist_max from dist[i].
3. Define a threshold δ = 0.5 + 0.5·(iteration_now / iteration_max).
4. If δ < 0.8, sub-particle i belongs to the neighborhood if δ > dist[i] / dist_max.

Figure 6: The pseudo code for finding Vbest in every iteration.
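Below is a minimal sketch of the APSO-NO update of Eq. (27) together with the Vbest search of Fig. 6. Because the extracted pseudo code is incomplete, treating the whole sub-swarm as the neighborhood once the threshold reaches 0.8 is our assumption, chosen so that Vbest tends toward Gbest late in the run, as the text describes.

```python
import numpy as np

def find_vbest(i, x, fitness, it, it_max):
    """Fig. 6: Vbest is the fittest sub-particle within a distance
    neighborhood of candidate i; the neighborhood widens over the run."""
    dist = np.linalg.norm(x - x[i], axis=1)
    delta = 0.5 + 0.5 * it / it_max                 # threshold of Fig. 6
    if delta < 0.8:
        member = dist / (dist.max() + 1e-12) < delta
    else:
        member = np.ones(len(x), dtype=bool)        # assumption: whole sub-swarm
    idx = np.where(member)[0]
    return x[idx[np.argmax(fitness[idx])]]

def apso_no_step(x, v, lbest, cbest, fitness, it, it_max,
                 w=0.4, k=(0.1, 0.3, 0.6), alpha=0.5):
    """One APSO-NO update of a sub-swarm per Eq. (27).
    x, v, lbest: (P, D); cbest: (D,) slice of the best full fuzzy system."""
    k1, k2, k3 = k
    D = x.shape[1]
    x_new, v_new = x.copy(), v.copy()
    for i in range(len(x)):
        vbest = find_vbest(i, x, fitness, it, it_max)
        v_new[i] = (w * v[i]
                    + 3 * k1 * alpha * np.random.rand(D) * (lbest[i] - x[i])
                    + 3 * k2 * alpha * np.random.rand(D) * (vbest - x[i])
                    + 3 * k3 * alpha * np.random.rand(D) * (cbest - x[i]))
        x_new[i] = x[i] + v_new[i]
    return x_new, v_new
```

The defaults mirror the SAPSO7 row of Table 1; they are illustrative, not prescriptive.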
6 Simulation results

In this section, the proposed SAPSO is applied to TNFN design and compared with the traditional PSO. Both SAPSO and the traditional PSO are used to adjust the antecedent and consequent parameters of the fuzzy rules in the TNFN. We use three different simulations for all methods. The first simulation uses the example given by Narendra and Parthasarathy [33], the second simulation predicts a chaotic time series [34], and the third example approximates a piecewise function [35]. In our simulations, the sizes of the swarm and of each sub-swarm are set to 50 and 10, respectively. The initial parameters of the traditional PSO and SAPSO are given in Table 1. In SAPSO, we use different parameter values in order to observe their effect on performance. All programs are developed in MATLAB 6.1, and each problem is simulated on a Pentium III 1 GHz desktop computer. Each experiment is run 20 times.

Model           |  ω  | κ1 (φ1) | κ2 (φ2) |  κ3 |    α
Traditional PSO | 0.4 |   2.0   |   2.0   |  NA |   NA
SAPSO1          | 0.4 |   0.3   |   0.3   | 0.3 |  0.5
SAPSO2          | 0.4 |   0.5   |   0.5   |  0  |  0.5
SAPSO3          | 0.4 |   0.5   |    0    | 0.5 |  0.5
SAPSO4          | 0.4 |   0.2   |   0.4   | 0.4 |  0.5
SAPSO5          | 0.4 |   0.2   |   0.4   | 0.4 | 0.6-0.4
SAPSO6          | 0.4 |   0.1   |   0.3   | 0.6 |  0.3
SAPSO7          | 0.4 |   0.1   |   0.3   | 0.6 |  0.5
SAPSO8          | 0.4 |   0.1   |   0.3   | 0.6 |  0.7
SAPSO9          | 0.4 |   0.1   |   0.3   | 0.6 | 0.6-0.4

Table 1: The initial parameters of the traditional PSO and the SAPSO (for the traditional PSO, the second and third parameter columns are φ1 and φ2).

Example 1 - Identification of a Nonlinear Dynamic System

The first example used for identification is described by the difference equation

y(k+1) = y(k) / (1 + y²(k)) + u³(k)    (28)

The output of this equation depends nonlinearly on both its past value and the input, but the effects of the input and output values are not additive. The training input patterns are randomly generated in the interval [−2, 2]. In this problem, we use five fuzzy rules, and evolution progresses for 1000 generations. After 1000 generations, the average best root mean square error (RMSE) of the output is approximately 0.016. Figures 7(a)-(b) show the outputs of the two methods for the input u(k) = sin(2πk/25). Figure 8 and Table 2 show the learning curves and the performance of PSO and SAPSO with different parameter values.

Figure 7: Results of the desired output and the model output of (a) the PSO method and (b) the SAPSO method.

Figure 8: The learning curves of the PSO and the SAPSO with different parameter values.

Model           | RMSE (Ave) | RMSE (Best)
Traditional PSO |   0.023    |   0.011
SAPSO1          |   0.100    |   0.059
SAPSO2          |   0.068    |   0.024
SAPSO3          |   0.066    |   0.035
SAPSO4          |   0.031    |   0.012
SAPSO5          |   0.038    |   0.024
SAPSO6          |   0.099    |   0.057
SAPSO7          |   0.037    |   0.020
SAPSO8          |   0.119    |   0.108
SAPSO9          |   0.016    |   0.012

Table 2: The performance comparison of the two methods.

Example 2 - Prediction of the Chaotic Time Series

The Mackey-Glass chaotic time series x(t) considered here is generated from the following delay differential equation:

dx(t)/dt = 0.2·x(t−τ) / (1 + x¹⁰(t−τ)) − 0.1·x(t)    (29)

Crowder [34] extracted 1000 input-output data pairs consisting of four past values of x(t), i.e.

[x(t−18), x(t−12), x(t−6), x(t); x(t+6)]    (30)

where τ = 17 and x(0) = 1.2. There are four inputs to the model, corresponding to past values of x(t), and one output representing the value x(t+Δt), where Δt is a prediction time into the future. The first 500 pairs (from x(1) to x(500)) are the training data set, while the remaining 500 pairs (from x(501) to x(1000)) are the testing data set used for validating the proposed method. The number of fuzzy rules is set to 6. The average best RMSE of the prediction output is approximately 0.009 after 1000 generations. Figures 9(a) and (b) show the prediction results of PSO and SAPSO. Table 3 shows the comparison of the prediction performance of all methods. Figure 10 shows the RMSE curves of the two models.
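A minimal sketch of generating the data set of Eqs. (29)-(30) follows. The unit-step Euler integrator and the constant initial history x(t) = 1.2 for t ≤ 0 are assumptions, since the paper does not state how the series was integrated, so the exact number of extracted pairs may differ slightly from the 1000 reported.

```python
import numpy as np

def mackey_glass(n=1000, tau=17, x0=1.2):
    """Integrate Eq. (29) with a simple Euler scheme and a unit time step."""
    x = np.full(n + tau, x0)              # constant history for t <= 0
    for t in range(tau, n + tau - 1):
        x_tau = x[t - tau]
        x[t + 1] = x[t] + 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[t]
    return x[tau:]

x = mackey_glass()
# Eq. (30): inputs [x(t-18), x(t-12), x(t-6), x(t)], target x(t+6)
t = np.arange(18, len(x) - 6)
X = np.stack([x[t - 18], x[t - 12], x[t - 6], x[t]], axis=1)
y = x[t + 6]
X_train, y_train = X[:500], y[:500]       # first 500 pairs for training
X_test, y_test = X[500:], y[500:]         # remaining pairs for testing
```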
Figure 9: The prediction results of (a) the PSO and (b) the SAPSO.

Model           | RMSE (Ave) | RMSE (Best)
Traditional PSO |   0.012    |   0.006
SAPSO1          |   0.025    |   0.013
SAPSO2          |   0.015    |   0.010
SAPSO3          |   0.015    |   0.012
SAPSO4          |   0.010    |   0.006
SAPSO5          |   0.010    |   0.007
SAPSO6          |   0.029    |   0.012
SAPSO7          |   0.011    |   0.008
SAPSO8          |   0.019    |   0.010
SAPSO9          |   0.009    |   0.006

Table 3: The performance comparison of the two methods.

Figure 10: The learning curves of the PSO and the SAPSO with different parameter values for the prediction problem.

Example 3 - Approximation of the Piecewise Function

The piecewise function was studied by Zhang [35] and Xu [36] and is defined as

f(x) = −2.186x − 12.864,                        −10 ≤ x < −2
f(x) = 4.246x,                                   −2 ≤ x < 0
f(x) = 10·e^(−0.05x−0.5)·sin[(0.03x + 0.7)x],     0 ≤ x ≤ 10    (31)

over the domain D = [−10, 10]. The piecewise function is continuous and can be analyzed. However, traditional analytical tools are inadequate and often fail, for two reasons: the wide-band information hidden at the turning points and the amalgamation of linearity and nonlinearity. In this example, 200 training input patterns are uniformly generated from Eq. (31), and seven fuzzy rules are used. The RMSE curves of all methods are shown in Fig. 11. Figures 12(a)-(b) show the outputs of the function f with the PSO method and the SAPSO9 method; the solid line represents the output of the function f, and the dotted line represents the approximation of each method. The results comparing our model with PSO are tabulated in Table 4.

Figure 11: The learning curves of the PSO and the SAPSO with different parameter values for the piecewise problem.

Figure 12: The results of approximation using (a) the PSO method and (b) the SAPSO9 method.

Model           | RMSE (Ave) | RMSE (Best)
Traditional PSO |    0.28    |    0.12
SAPSO1          |    3.35    |    3.15
SAPSO2          |    1.15    |    0.45
SAPSO3          |    0.33    |    0.16
SAPSO4          |    0.43    |    0.14
SAPSO5          |    0.24    |    0.13
SAPSO6          |    2.96    |    2.18
SAPSO7          |    0.21    |    0.09
SAPSO8          |    0.64    |    0.36
SAPSO9          |    0.20    |    0.09

Table 4: The performance comparison of the two methods.

The average computation time per generation for the three examples with PSO and SAPSO is tabulated in Table 5. In SAPSO, only the sub-particles of each sub-swarm are updated, and the total number of adjusted parameters in SAPSO is smaller than in PSO; therefore, the computation time required by SAPSO is less than that required by PSO.

Model | Identification of Nonlinear Dynamic System | Prediction of the Chaotic Time Series | Approximation of the Piecewise Function
PSO   | 1.21 | 7.5  | 3.15
SAPSO | 0.35 | 1.70 | 0.65

Table 5: The average computation time of the three examples for the PSO and the SAPSO (unit: sec).

7 Discussion

From the above experimental results, we find that the parameters κ1, κ2, κ3, and α affect the performance of SAPSO. In order to examine the relationship between the search trajectory and the parameter α, we use the same values of κ1, κ2, and κ3 in SAPSO6, SAPSO7, SAPSO8, and SAPSO9. We define the collection degree (CD) of a sub-swarm for each generation as

CD = Σ_{i=1}^N Σ_{j=i+1}^N ||particle(i) − particle(j)||    (32)

where N is the number of sub-particles in the sub-swarm and ||·|| is the 2-norm (the Euclidean norm for a vector). When CD is small, the particles are close to each other.
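A minimal sketch of the collection degree of Eq. (32):

```python
import numpy as np

def collection_degree(particles):
    """Eq. (32): sum of pairwise Euclidean (2-norm) distances; a small CD
    means the sub-particles have gathered close to one another."""
    n = len(particles)
    return sum(np.linalg.norm(particles[i] - particles[j])
               for i in range(n) for j in range(i + 1, n))
```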
Figures 13(a)-(c) and 14(a)-(c) show the simulation results of Example 1 when