Differential Evolution: Fundamentals and Applications in Electrical Engineering


The book covers the basics of differential evolution with detailed step-by-step analysis and demonstrates different applications of DE in electrical engineering. DE has proved very efficient and robust in function optimization and has been applied to solve problems in many scientific and engineering fields.


According to Anyong Qing, differential evolution is a very simple but very powerful stochastic optimizer. The book is also presented as an ideal text for students pursuing diploma programmes in Electrical Engineering.

The algorithm has user-defined, algorithm-dependent control parameters, as do almost all population-based metaheuristics. These control parameters, consisting of the crossover probability Cr, the mutation scale factor F and the population size Np, can greatly affect the success, performance and convergence characteristics of the algorithm (Yang). Even though the number of generations Gmax is not considered a control parameter, it can still serve as one of the stopping criteria. As can clearly be seen from the flowchart, the algorithm consists of an initialization stage and an evolution cycle driven by genetic operators (i.e. mutation, crossover and selection). In the initialization stage, population vectors representing individual solutions are randomly generated within a predefined search space, respecting the upper and lower bounds of each parameter, as presented in the flowchart.
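As a concrete illustration, bounded random initialization can be sketched in a few lines of Python. This is a minimal sketch; the parameter bounds and population size below are hypothetical, not values from the study:

```python
import random

def initialize_population(bounds, np_size, seed=0):
    """Randomly generate Np individuals, each parameter drawn uniformly
    within its own [lower, upper] bound, as in the initialization stage."""
    rng = random.Random(seed)
    return [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
            for _ in range(np_size)]

# hypothetical 2-parameter search space
bounds = [(-10.0, 10.0), (0.0, 5.0)]
pop = initialize_population(bounds, np_size=20)
```

Every individual produced this way is guaranteed to respect the per-parameter bounds, which is all the evolution cycle assumes about the initial population.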

Evolution of the solutions is then achieved by the genetic operators mentioned above, via the equations given in the flowchart, until a predefined termination criterion is satisfied. The vector providing the lowest f(x) value is then selected as the optimal solution for the optimization problem under consideration (Price et al). During the operations, the base vector is chosen to be the best vector of the current generation, only a pair of differential vectors is used, and a binomial crossover is applied.
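The evolution cycle just described can be sketched compactly. The sketch below assumes a reading of the variant as DE/best/1/bin (best vector as base, one pair of differential vectors, binomial crossover, greedy selection); the test function and all settings are illustrative, not those of the study:

```python
import random

def de_best_1_bin(f, bounds, np_size=20, F=0.5, Cr=0.9, g_max=200, seed=1):
    """Minimal DE/best/1/bin sketch: best-of-generation base vector, one
    differential pair, binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(np_size)]
    cost = [f(ind) for ind in pop]
    for _ in range(g_max):
        best = pop[min(range(np_size), key=cost.__getitem__)]
        for i in range(np_size):
            r1, r2 = rng.sample([j for j in range(np_size) if j != i], 2)
            j_rand = rng.randrange(D)          # guarantees one mutant gene
            trial = []
            for j in range(D):
                if rng.random() < Cr or j == j_rand:
                    v = best[j] + F * (pop[r1][j] - pop[r2][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # keep trial inside the bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, fc
    return min(zip(cost, pop))

# toy run on a 3-D sphere function
cost, x = de_best_1_bin(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 3)
```

The greedy selection step is what makes DE elitist at the individual level: a trial vector replaces its target only if it is at least as good.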

More details of the algorithm and the notation used in the equations of figure 1 can be found in previous papers on DE applications to geophysical problems (e.g. Balkaya, Ekinci et al). The produced anomaly was also corrupted with normally distributed zero-mean pseudo-random numbers at different noise levels. Note that red circles are located every two metres. Before the inversion studies, the convergence and parameter-resolution characteristics were analysed by producing prediction error maps for each pair of model parameters, setting the remaining parameters to their true values and using relatively narrow search spaces (Balkaya, Ekinci et al). To perform these analyses, equation 2 was used for the forward-modelling procedure.

White circles in the error maps show the true values for the parameter pairs, and the axis values indicate the search-space bounds for each model parameter (figure 3). The topographies in these error images clearly illustrate the nature of the inverse problem under consideration. The nearly circular closed contours surrounding the global minima for the pairs A–xo, zo–xo and q–xo show that these model parameters are uncorrelated with each other and are most likely resolvable independently using an efficient inversion technique. Elliptical contours sloping towards the error-energy axes are clearly seen in the map of A versus zo.

This feature points to a positive correlation; that is, an increase in one parameter also increases the other. Similar elliptical contours, but indicating a negative correlation, are evident in the error map of zo versus q (figure 3), which means that an increase in one parameter decreases the other. In contrast to the parameter pairs mentioned above, unclosed contours and a narrow valley topography in the vicinity of the global minimum are easily seen for the pair A–q.

Although the contour lines seem nearly parallel to the A axis, the gently sloping contours suggest a positive correlation between these parameters. Thus, there may be many equivalent solutions within the same error limits. Nevertheless, the contour lines lying approximately parallel to the A axis imply that the probability of solving q successfully is higher than that of solving A.

Considering the produced prediction error maps for each pair of the parameters, it can be stated that the depth, shape and exact origin of the causative body can be estimated reliably through the amplitude inversion of the 2D analytical signal of a TMA. Error energy maps for parameter pairs of the noise-free synthetic data case.

White circles in each map indicate the true solution, and the axis values indicate the search-space bounds for each model parameter. In the second stage, considering the synthetic ASA given in figure 2 (upper right panel), parameter tuning studies relying mainly on choosing optimum control parameters (i.e. F, Cr and Np) were carried out to increase the efficacy of the metaheuristic. First, we investigated the effect of the F and Cr pair on the solution.

Relatively large search spaces, presented in table 1, were used for the model parameters. Table 2 shows the results obtained through the analysis of rms values produced by using various F versus Cr values at the end of 30 independent runs. As clearly seen from the table, the best statistical results were obtained by using the values F of 0. Additionally, considering these values, we statistically analysed the Np parameter using 10, 15, 20 and 30 times D (the number of unknown parameters), respectively, for the synthetic noise-free case.
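A tuning study of this kind amounts to a grid search over the F–Cr pairs with repeated independent runs. The helper below is a hypothetical sketch (the real study used its own rms misfit and 30 runs); it records the minimum, mean and standard deviation of the final rms value per pair and picks the pair with the lowest mean:

```python
import statistics

def tune_f_cr(run_algorithm, f_grid, cr_grid, n_runs=30):
    """Grid-search F and Cr: run the optimizer independently n_runs times
    per pair, summarize the final rms values, and return the best pair."""
    results = {}
    for F in f_grid:
        for Cr in cr_grid:
            rms = [run_algorithm(F, Cr, seed=s) for s in range(n_runs)]
            results[(F, Cr)] = (min(rms), statistics.mean(rms),
                                statistics.stdev(rms))
    # select the pair with the lowest mean rms
    return min(results, key=lambda k: results[k][1]), results

# toy stand-in for the real inversion run: rms is best near F=0.5, Cr=0.9
best_pair, summary = tune_f_cr(
    lambda F, Cr, seed: abs(F - 0.5) + abs(Cr - 0.9) + 0.01 * seed,
    f_grid=[0.3, 0.5, 0.7], cr_grid=[0.5, 0.9], n_runs=5)
```

Ranking by mean rms (with the standard deviation as a tiebreaker of robustness) mirrors the min/mean/std presentation of table 2.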

Table 3 gives a brief presentation of the results obtained at the end of 30 independent runs of the algorithm. A threshold value (VTR) was used as the stopping condition: the algorithm was terminated when the error-energy value dropped below the VTR between two successive generations, or when Gmax was reached. Table 2 caption: the best minimum (top), mean (middle) and standard deviation (bottom) of the rms values obtained from parameter tuning studies considering various F and Cr values for the DE algorithm; the best statistical results are shown in boldface type. Considering the minimum, mean and standard deviation of the rms values presented in table 3, the highest Np value, i.e. 30D, gave the best statistical results.
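The stopping rule can be sketched as a small predicate, reading the criterion as "the change in error energy between two successive generations falls below the VTR, or Gmax is reached" (the default VTR and Gmax values here are placeholders, not the study's):

```python
def should_stop(prev_error, curr_error, generation, vtr=1e-8, g_max=1000):
    """Terminate when the error-energy improvement between two successive
    generations drops below the value-to-reach (VTR), or at G_max."""
    return abs(prev_error - curr_error) < vtr or generation >= g_max
```

In the main loop this would be checked once per generation, after the selection step has updated the population.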

It is quite obvious that more generations increase the mean number of function evaluations and the total elapsed time. However, because the anomaly equation used in our case does not require much computation time during the optimization, the Np value was set to 30D. Additionally, based on the parameter tuning results mentioned above, we also set the control parameters as F: 0.

After determining the best control parameters for our problem, the ASAs of synthetic TMAs with various percentages of noise content were inverted using the wider search-space bounds given in table 1. Considering the estimated model parameters (table 4), it can be concluded that the DE algorithm yielded satisfactory results, except for the parameter A, even in the presence of various noise contents.

Furthermore, these reasonable solutions obtained through the inversions clearly substantiate the results of the prediction error maps (figure 3). As mentioned previously, the prediction error maps indicated that the parameters zo, xo and q are most likely resolvable. The inversion results illustrate the fit for both the noise-free data and the data with the highest noise content. The fits between the synthetic and calculated ASAs, generated using the actual and the best-estimated parameters respectively, are quite good.

After some trial-and-error applications, an optimum N value was determined for the NFG procedure. In order to show the effect of the selection of the optimum harmonic interval, two NFG sections are shown in figure 4 (upper panels). It is obvious that the fully closed symmetric contours showing the main local maximum (white circle) indicate the exact solutions when the optimum N value is used (upper right panel in figure 4).

Satisfactory solutions were also obtained via the EUL technique, although to a lesser extent than with the NFG technique (figure 4, lower panel). When considering the noisy data case (figure 5), the NFG technique produced reasonable model parameters, as did the DE algorithm; the EUL technique, on the other hand, only partially met expectations in this case. Note that the optimum value of N was found to be 16 for the NFG computation. Figure 5 caption: NFG sections (upper panels) and EUL solutions (lower panel) obtained from the noise-added response of Model 1 (figure 2, lower left panel); the optimum value of N was found to be 20 for this NFG computation.

Using the model parameters given in table 5, a synthetic TMA caused by two thin dikes (Model 2) was produced (figure 6, upper left panel) to test the performance of the DE algorithm. In the evaluation, the same best control parameters given previously were used. The search-space bounds and the parameters estimated through the inversion are given in table 5. The DE optimization revealed a pleasing agreement between the synthetic and calculated ASAs (figure 6, upper right panel).

Additionally, the independent runs terminated at nearly the same number of generations, indicating the stability of the inversion algorithm in this case. In contrast to the solutions obtained from the noisy data of Model 1, the NFG solutions could not fulfil the expectations for Model 2. However, it must be noted that the model parameters closest to the actual ones were achieved via the DE algorithm. Table 5 caption: true parameters, search spaces and estimated parameters of Model 2 and Model 3 obtained through the DE algorithm. Figure 6 caption: TMAs of Model 2 (upper left panel) and Model 3 (lower left panel).

ASA inversion results of Model 2 (upper right panel) and Model 3 (lower right panel). NFG sections (upper panels) and EUL solutions (lower panel) obtained from the response of Model 2 (figure 6, upper left panel). NFG sections (upper panels) and EUL solutions (lower panel) obtained from the response of Model 3 (figure 6, lower left panel); note that the optimum value of N was found to be 25 for the NFG computation. In the next simulation, a more complicated TMA (Model 3), having interference from a neighbouring causative source, was analysed to test the efficacy of the DE optimization.

In this case, we used two thin dikes having opposite polarities, and we also reduced the distance between the sources to get a complex TMA figure 6 lower left panel. The true parameters of each causative body are given in table 5. It is clearly seen from the produced TMA that the anomaly pattern does not seem to be caused by multiple sources at first sight. However, after the ASA computation, the magnetic traces of two anomalous bodies explicitly became visible. This simulation also emphasized the significance of the ASA computation in magnetic anomaly interpretation. Search space bounds and the estimated model parameters through the DE optimization are given in table 5.

Despite such large search-space bounds, the model parameters were resolved successfully, confirming the robustness and validity of the algorithm. The good agreement between the synthetic and calculated ASA curves is also demonstrated in figure 6 (lower right panel). Table 6 shows the best statistical results, indicating that low rms values were obtained in each run.

Additionally, it was observed that the maximum number of generations is adequate for the inversion of ASAs due to two causative sources (table 6). It is clearly seen that, unlike the DE algorithm, the NFG and EUL techniques could not provide reasonable results when multiple bodies are located relatively close to each other.

The ASAs of both anomalies were obtained using the method of Agarwal and Srivastava. Since the geological structures causing the TMAs may not be represented by exact idealized subsurface models, the field data sets were inverted without fixing the shape factor to an idealized body, in order to obtain possible geometries whose ASAs are similar to those of the observed data.

The same statistical analyses performed in the synthetic simulations were also considered. The field anomalies are most likely caused by single causative sources.

It is encouraging to note that there is only a marginal difference between the optimal intrinsic control parameter values, apart from the optimal population size for the Ackley function.

Nowadays electrical and electronic products are an indispensable part of daily life. However, although differential evolution has been applied to many engineering fields, its application in electrical and electronic engineering is still in its infancy.

Its potential here has not been evident to most application engineers involved and accordingly has yet to be exploited. To boost the awareness of differential evolution in electrical and electronic engineering, differential evolution is applied to solve some representative electrical and electronic engineering problems. The third part of this book is dedicated to presenting a consistent introduction to applications of differential evolution to various electrical and electronic engineering problems.

Chapter 9 presents an introductory survey on applications of differential evolution in electrical and electronic engineering. Communication, computer engineering, control theory and engineering, electrical engineering, electromagnetics, electronics, magnetics, power engineering, signal and information processing are covered.

Finally, Chapters 10–16 are given over to invited contributions from researchers with unique expertise in specific fields. Topics covered include the next-generation internet, grid computing, antennas, integrated circuits, power engineering, license plate tracking, and color map generation. These further demonstrate the versatility and potential of differential evolution.

What is Available from the Companion Website: as already mentioned, two literature surveys were conducted, one on differential evolution, the other on test beds for single-objective optimization.

Numerous publications have been gathered together into two separate bibliographies. Due to limitations of space, it is impossible to present a hard copy of these two bibliographies. More importantly, it might be more convenient and beneficial for fellow researchers in these areas to have an electronic and online copy. Therefore, these two bibliographies are posted online as part of the companion website. Through the literature survey on test beds for single-objective optimization, more than toy functions and tens of application problems for single-objective optimization have been collected.

The formulation, mathematical features, and historical citation mistakes of each toy function and application problem have been documented. Such a collection will be extremely helpful to fellow researchers working on single-objective optimization as well as in other relevant fields. If I find a book on algorithms, I will always check whether any source code is available. Given sufficient computational resources, I will try out those algorithms myself. This invariably gives me a better understanding of those algorithms. Acknowledgement: I owe a great deal to Prof.

Lee of Nanyang Technological University, Singapore. He is the best man I have ever met in my life. I would like to take this opportunity to thank Prof. Lim, director of Temasek Laboratories, National University of Singapore, for his support and encouragement of my study on differential evolution. I would also like to thank my colleagues and former colleagues at Temasek Laboratories, National University of Singapore, especially Mr Y. Gan, Dr X. Xu, and Mr C. Lin, for fruitful discussions and considerable assistance. Special thanks go to Prof.

Yang, Prof. Nie, Ms M. Meng, and Mr Y. Feng and Prof. My appreciation also goes to Mr J. Murphy and R. Their professionalism and patience have facilitated improvements to the book. Writing a book is a long and arduous process. I could not have done it without the support of my family. Thank you very much, Jiaoli, Chen, and Tian, for your support. I am greatly indebted to my brother, Anbing, and sister-in-law, Juhua Liu, for voluntarily taking on the responsibility of my upbringing when I suddenly lost my parents. Without their loving care, I would not have had the opportunities I have had, and my life would doubtless have taken a less satisfying course.

A one-dimensional discontinuous objective function. A continuous non-differentiable one-dimensional objective function. A two-dimensional multimodal objective function. A one-dimensional unimodal objective function. A two-dimensional unimodal function. Fortran-style pseudo-code for dichotomous algorithms. Fortran-style pseudo-code for one implementation of the cubic interpolation algorithm. Fortran-style pseudo-code for univariate search algorithms. Fortran-style pseudo-code for pattern search algorithms. Fortran-style pseudo-code for downhill simplex algorithm.


Fortran-style pseudo-code for BFGS algorithm. Fortran-style pseudo-code for conjugate gradient algorithm. Fortran-style pseudo-code for the Monte Carlo algorithm. Fortran-style pseudo-code for simulated annealing algorithm. General flow chart of genetic algorithms. Fortran-style pseudo-code for initialization of genetic algorithms. Fortran-style pseudo-code for binary tournament selection.

One-point crossover. Two-point crossover.


Differential Evolution: Fundamentals and Applications in Electrical Engineering

Fortran-style pseudo-code for binomial crossover. Fortran-style pseudo-code for exponential crossover. Exponential crossover. Arithmetic one-point crossover. Arithmetic two-point crossover. Fortran-style pseudo-code for arithmetic binomial crossover. Arithmetic exponential crossover. Non-uniform arithmetic one-point crossover. Non-uniform arithmetic multi-point crossover. Fortran-style pseudo-code for non-uniform arithmetic binomial crossover. Fortran-style pseudo-code for non-uniform arithmetic exponential crossover.

Non-uniform arithmetic exponential crossover. General flow chart of evolution strategies. Block diagram for particle swarm optimization. Flow chart of classic differential evolution. Fortran-style pseudo-code for random reinitialization. Fortran-style pseudo-code for bounce-back. Flow chart of dynamic differential evolution. Fortran-style pseudo-code of opposition-based differential evolution. Fortran-style pseudo-code for non-uniform mutation. Flow chart of Pareto set Pareto differential evolution.

Flow chart of non-dominated sorting differential evolution. Performance of standard binary genetic algorithm for 8-dimensional translated sphere function. Efficiency of differential evolution, real-coded genetic algorithm and particle swarm optimization for 8-dimensional translated sphere function. Robustness of differential evolution, real-coded genetic algorithm and particle swarm optimization for 8-dimensional translated sphere function. Efficiency of differential evolution, real-coded genetic algorithm and particle swarm optimization for dimensional translated sphere function.

Robustness of differential evolution, real-coded genetic algorithm and particle swarm optimization for dimensional translated sphere function. Effect of dimension on differential evolution. Efficiency of differential evolution, real-coded genetic algorithm and particle swarm optimization for 8-dimensional Qing function. Robustness of differential evolution, real-coded genetic algorithm and particle swarm optimization for 8-dimensional Qing function. Efficiency of differential evolution, real-coded genetic algorithm and particle swarm optimization for dimensional Qing function.

Robustness of differential evolution, real-coded genetic algorithm and particle swarm optimization for dimensional Qing function. Landscape of Qing function. Robustness of differential evolution for the 8-dimensional sphere function. Efficiency of differential evolution for the 8-dimensional sphere function. Robustness of differential evolution for the dimensional sphere function. Efficiency of differential evolution for the dimensional sphere function. Effect of dimension on robustness of differential evolution for the sphere function.

Effect of dimension on efficiency of differential evolution for the sphere function. Robustness of differential evolution for the 8-dimensional step function 2. Efficiency of differential evolution for the 8-dimensional step function 2. Robustness of differential evolution for the dimensional step function 2.

Efficiency of differential evolution for the dimensional step function 2. Effect of dimension on robustness of differential evolution for step function 2. Effect of dimension on efficiency of differential evolution for step function 2. Robustness of differential evolution for the 8-dimensional hyper-ellipsoid function. Efficiency of differential evolution for the 8-dimensional hyper-ellipsoid function. Robustness of differential evolution for the dimensional hyper-ellipsoid function. Efficiency of differential evolution for the dimensional hyper-ellipsoid function. Effect of dimension on robustness of differential evolution for the hyper-ellipsoid function.

Effect of dimension on efficiency of differential evolution for the hyper-ellipsoid function. Robustness of differential evolution for the 8-dimensional Qing function. Efficiency of differential evolution for the 8-dimensional Qing function. Robustness of differential evolution for the dimensional Qing function. Efficiency of differential evolution for the dimensional Qing function. Effect of dimension on robustness of differential evolution for the Qing function.

Effect of dimension on efficiency of differential evolution for the Qing function. Robustness of differential evolution for the 8-dimensional Schwefel function 2. Efficiency of differential evolution for the 8-dimensional Schwefel function 2. Robustness of differential evolution for the dimensional Schwefel function 2. Efficiency of differential evolution for the dimensional Schwefel function 2. Effect of dimension on robustness of differential evolution for the Schwefel function 2.

Effect of dimension on efficiency of differential evolution for the Schwefel function 2. Robustness of differential evolution for the 8-dimensional Schwefel function 1. Efficiency of differential evolution for the 8-dimensional Schwefel function 1. Robustness of differential evolution for the dimensional Schwefel function 1. Efficiency of differential evolution for the dimensional Schwefel function 1. Effect of dimension on robustness of differential evolution for the Schwefel function 1. Effect of dimension on efficiency of differential evolution for the Schwefel function 1. Robustness of differential evolution for the 8-dimensional Rastrigin function.

Efficiency of differential evolution for the 8-dimensional Rastrigin function. Robustness of differential evolution for the dimensional Rastrigin function. Efficiency of differential evolution for the dimensional Rastrigin function. Effect of dimension on robustness of differential evolution for the Rastrigin function. Effect of dimension on efficiency of differential evolution for the Rastrigin function.

Robustness of differential evolution for the 8-dimensional Ackley function. Efficiency of differential evolution for the 8-dimensional Ackley function. Efficiency of differential evolution for the dimensional Ackley function. Robustness of differential evolution for the dimensional Ackley function. Effect of dimension on robustness of differential evolution for the Ackley function.

Effect of dimension on efficiency of differential evolution for the Ackley function. Topology of the first network. Request success rate. The grid architecture. Configuration of a linear array. Optimized excitations of Figure Radiation patterns for the time-modulated hexagonal planar array at the center frequency f0: a sum pattern; b difference pattern; c double-difference pattern.

Diagram of an adaptive time-modulated antenna array controlled by HDE. The adapted time-modulated antenna array patterns, with suppressed sideband radiations. Null depth as a function of the number of power measurements. The adapted time-modulated antenna array patterns sideband radiations have not been suppressed.

Penalty function for performance measure m. Flow chart for DESA. Trial point generation. Acceleration mechanism. Topology for DAMP2. Topology for NAND. Cost profile of DAMP1 at the initial point. Deregulated electricity market operation. A market information flow. Market clearing price obtained by the intersection of demand and supply curves. Flow chart of the proposed bidding strategy method.

The real bidding production versus random bidding production of generator 1. The real bidding production versus random bidding production of generator 2. The real bidding production versus random bidding production of generator 3. Real profits versus simulated profits of 11 generators. Profits of generator G1 by setting different confidence levels. VLPR outdoor system. Applications of differential evolution in the automobile and automotive industry.

Applications of differential evolution in defense.


Applications of differential evolution in economics. Applications of differential evolution in environmental science and engineering. Applications of differential evolution in the gas, oil, and petroleum industry. Applications of differential evolution in water management.


Applications of differential evolution in the iron and steel industry. Applications of differential evolution in enterprise and management. Applications of differential evolution in mechanics. Applications of differential evolution in medicine and pharmacology.

Applications of differential evolution in optics. Applications of differential evolution in seismology. Applications of differential evolution in thermal engineering. Deterministic optimization algorithms hybridized with differential evolution.


Other stochastic algorithms hybridized with differential evolution. Biased case studies with one intrinsic control parameters flexible. Homepages of some outstanding researchers. Hardness of toy functions in the tentative benchmark test bed. Optimal population size and safeguard zone of population size for the sphere function. Optimal population size and safeguard zone of population size for step function 2. Optimal population size and safeguard zone of population size for the hyper-ellipsoid function. Optimal population size and safeguard zone of population size for the Qing function.

Optimal population size and safeguard zone of population size for the Schwefel function 2. Optimal population size and safeguard zone of optimal population size for the Schwefel function 1. Optimal population size and safeguard zone of optimal population size for the Rastrigin function. Optimal population size and safeguard zone of optimal population size for the Ackley function. Standard and alternative search spaces for fully simulated member toy functions in the tentative benchmark test bed. Design and optimization of LDPC codes using differential evolution.


Controllers designed by using differential evolution. Applications of differential evolution in robotics. Motor modeling and design by using differential evolution. Synthesized ideal antenna arrays using differential evolution. Applications of differential evolution to one-dimensional electromagnetic inverse problems. Two-dimensional electromagnetic inverse problems solved by using differential evolution. Three-dimensional electromagnetic inverse problems solved by using differential evolution. Microwave devices design by using differential evolution.

Applications of differential evolution in analysis of electronic circuits and systems. Filter design by using differential evolution. Non-filter circuit design by using differential evolution. Applications of differential evolution in magnetics. Fuzzy rules. Quantization results.

Loosely speaking, optimization is the process of finding the best way to use available resources, while at the same time not violating any of the constraints that are imposed. More precisely, we may say that we wish to define a system mathematically, identify its variables and the conditions they must satisfy, define the properties of the system, and then seek the state of the system (the values of the variables) that gives the most desirable (largest or smallest) properties.

This general process is referred to as optimization. It is not our purpose here to define a system. This is the central problem of various disciplines which are sciences or are struggling to become sciences. Our concern here is, given a meaningful system, what variables will make the system have the desirable properties. N is the number of optimization parameters, or the dimension of the optimization problem. Di, either continuous or discrete, is the search space of xi, the ith optimization parameter. Di and Dj are not necessarily identical, from the point of view of either type or size. Objective and constraint functions in many real-world application problems can be formulated in many ways.
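A minimal way to represent such a set of N optimization parameters in code, with one search space Di per parameter xi (continuous or discrete, and not necessarily of the same type or size), might look like this; the parameter names and domains below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Parameter:
    """One optimization parameter x_i together with its search space D_i."""
    name: str
    domain: Union[tuple, list]  # (lo, hi) if continuous, a list if discrete

# hypothetical mixed problem: one continuous and one discrete parameter
problem = [
    Parameter("thickness", (0.5, 3.0)),              # continuous D_i
    Parameter("material", ["FR4", "PTFE", "foam"]),  # discrete D_i
]
```

The point of the representation is that Di and Dj can differ freely in both type and size, exactly as the text states.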

There might be better formulations of the objective and constraint functions to describe a particular optimization problem. Any knowledge about the optimization problem should be worked into the objective and constraint functions; good objective and constraint functions can make all the difference. The rate of administration charge for fund i is ci. Suppose he invests an amount xi in the ith fund. How can he plan his investment so that he will get the maximum return? A total of m experiments have been done, during which experimental data y have been collected.
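Under the simplest linear reading of the fund problem (assuming each fund also has a known expected return rate ri, a fixed total budget, and no further constraints), the return-maximizing plan concentrates the whole budget in the fund with the highest net rate ri - ci. A toy sketch with invented rates and charges:

```python
def best_allocation(rates, charges, budget):
    """Toy linear version of the fund problem: maximize total net return
    sum_i (r_i - c_i) * x_i subject only to sum_i x_i = budget."""
    net = [r - c for r, c in zip(rates, charges)]
    i = max(range(len(net)), key=net.__getitem__)
    x = [0.0] * len(net)
    x[i] = budget          # all-in on the best net fund
    return x, budget * net[i]

# fund 2 has the highest net rate (0.10 - 0.005 = 0.095)
x, ret = best_allocation([0.08, 0.12, 0.10], [0.01, 0.03, 0.005], 1000.0)
```

Real formulations add risk and diversification constraints, which is exactly why the choice of objective and constraint functions "can make all the difference".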

It is then required to fit the mathematical model h(x) to the collected experimental data y. Materials for a radome can only be chosen from the available materials database. A radome has to meet certain electromagnetic and mechanical demands. It is also subject to economic constraints and fabrication limitations. The optimization parameters affect the values of the objective and constraint functions. If there are no optimization parameters, we cannot define the objective and constraint functions.
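Fitting h(x) to the data y by minimizing the summed squared deviation has a closed form when h is a straight line; the following is a small sketch with invented data, not an example from the book:

```python
def fit_line(xs, ys):
    """Least-squares fit of h(x) = a*x + b, minimizing the summed squared
    deviation between model predictions and observed data (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# invented data lying exactly on y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

For nonlinear models h there is generally no closed form, and the fitting problem becomes exactly the kind of optimization problem a method such as differential evolution can address.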

In the investment fund management problem, the optimization parameters are the amounts of money invested in each fund. In experimental data fitting problems, the optimization parameters are the parameters that define the model. In the radome design problem, the optimization parameters might include the material index in the materials database, material thickness, and some other parameters.

An optimization parameter can be continuous, discrete, or even symbolic. The desirable properties of the system are captured by objective functions. For instance, in the investment fund management problem, the fund manager wants to maximize the return. In fitting experimental data to a user-defined model, we might minimize the total deviation of the observed data from predictions based on the model.

In the radome design problem, we have to maximize the strength and minimize the distortion and cost. Almost all optimization problems have objective functions. However, in some cases, such as the design of integrated circuit layouts, the goal is to find optimization parameters that satisfy the constraints of the model.
The user does not particularly want to optimize anything, so there is no reason to define an objective function. This type of problem is usually called a feasibility problem. On the other hand, in some optimization problems, there is more than one objective function. For instance, in the radome design problem, it would be nice to minimize weight and maximize strength simultaneously. An objective function has at least one global optimum, and may have multiple local optima as shown in Figure 1.

For an optimization problem with multiple objective functions, optimal solution points corresponding to different objective functions may be inconsistent. These features are very important for choosing optimization algorithms to solve the optimization problem of interest. The optimization process of a decomposable objective function can be performed as a sequence of N independent optimization processes, where each parameter is optimized independently. However, many objective functions are not decomposable in this manner. The Chung–Reynolds function is partially decomposable and is therefore easy to optimize.
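Decomposability can be illustrated with the sphere function, whose coordinates can be optimized independently. The crude per-coordinate sampling below is an illustrative stand-in for any one-dimensional optimizer:

```python
# Sketch: optimizing a decomposable objective one coordinate at a time.
# The sphere function and the per-coordinate grid search are illustrative.

def sphere(x):
    """Decomposable objective: f(x) = sum of xi**2."""
    return sum(xi ** 2 for xi in x)

def minimize_coordinate(lo, hi, samples=101):
    """Minimize the 1-D term xi**2 over [lo, hi] by uniform sampling."""
    candidates = [lo + (hi - lo) * k / (samples - 1) for k in range(samples)]
    return min(candidates, key=lambda xi: xi ** 2)

# N independent 1-D optimizations assemble the N-dimensional solution.
solution = [minimize_coordinate(-5.0, 5.0) for _ in range(3)]
print(solution, sphere(solution))  # → [0.0, 0.0, 0.0] 0.0
```

A non-decomposable objective, by contrast, couples the coordinates, so optimizing them one at a time need not reach the optimum.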

A continuous non-differentiable function is shown in Figure 1. An objective function with a single optimum is unimodal; otherwise, it is multimodal. Noise is most often used to represent randomness in an objective function. Scalable objective functions, on the other hand, can be scaled to any dimension. As the dimension increases, the search space size also increases exponentially, and the difficulty of finding an optimal solution increases accordingly. The aforementioned sphere function and Chung–Reynolds function are symmetric, while the Rosenbrock saddle function is non-symmetric, or asymmetric.

Objective function values at all globally optimal solution points are equal; in this regard, no optimal solution is distinguishable from the others. Some people confuse uniqueness with modality. Optimization problems may also involve constraints: for the investment fund management problem, the total amount of money invested must not exceed the available money.

Constraints are not absolutely necessary for an optimization problem. In fact, the field of unconstrained optimization is a large and important one for which a lot of algorithms and software are available. However, it has been argued that almost all problems really do have constraints. For example, any optimization parameter denoting the number of objects in a system can only be meaningful if it is less than the number of elementary particles in the known universe! Nevertheless, in practice, answers that make good sense in terms of the underlying physics can often be obtained without putting constraints on the optimization problems.

Sometimes, constraint functions and objective functions for an optimization problem are exchangeable, depending on the priority of the desirable properties. In fact, later in this book, we avoid distinguishing objective functions from constraint functions whenever possible. Moreover, without loss of generality, in the rest of this book we are exclusively concerned with minimization problems unless explicitly stated otherwise. As a matter of fact, it is very easy to convert a maximization problem into a minimization problem.
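The conversion is a simple sign change: any point that maximizes f(x) minimizes -f(x), and the two optimal values differ only in sign:

```latex
\max_{\mathbf{x} \in D} f(\mathbf{x}) = -\min_{\mathbf{x} \in D} \bigl( -f(\mathbf{x}) \bigr).
```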

Many people regard the search space for an optimization problem as a constraint. However, in this book, we do not take this approach. In fact, many optimization problems come directly from real-world applications. Publications on optimization that do not mention applications are very rare. There is no need for us to prove the usefulness of optimization by presenting a long list of fields of application in which optimization has been involved.

Space considerations also do not permit us to give an exhaustive and in-depth review of applications of optimization. Therefore, no further discussion on applications of optimization will be given here. Numerous optimization algorithms have been proposed. In general, these algorithms can be divided into two major categories: deterministic and stochastic. Hybrid algorithms, which combine deterministic and stochastic features, are stochastic in essence and regarded as such, although it is acceptable to treat them as a third category from the point of view of purity. A deterministic algorithm involves no randomness: if the algorithm is run multiple times on the same computer, the search time for each run will be exactly the same.

In other words, deterministic optimization is clonable. Dimension is a good criterion for classifying deterministic optimization algorithms. Deterministic optimization algorithms are accordingly divided into one-dimensional and multi-dimensional deterministic optimization algorithms.

Some multi-dimensional deterministic algorithms need the help of one-dimensional deterministic optimization algorithms. Prominent one-dimensional algorithms include the exhaustive search algorithm, dichotomous algorithms, the parabolic interpolation algorithm, and the Brent algorithm. If the minimum of the objective function f(x) is known, a nonlinear equation can be formulated. In this case, the Secant algorithm for nonlinear equations is applicable.

Usually, the sample points are equally spaced within [a, b]. The smallest objective function value over all sample points is regarded as the optimum, and the corresponding sample point is regarded as the optimal solution. The exhaustive search algorithm is also known as the enumeration algorithm or brute force algorithm [3], p. The parabolic interpolation algorithm [4], p. fits a parabola through three existing points; the minimum point of the parabola, a new estimate of the minimum point of the objective function, replaces one of the three previous points. This process is repeated until the termination conditions are fulfilled.
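A minimal sketch of the one-dimensional exhaustive search; the sample count and the quadratic test function are illustrative choices:

```python
# Sketch of the 1-D exhaustive (brute force) search on [a, b].

def exhaustive_search(f, a, b, m=1001):
    """Sample f at m equally spaced points in [a, b]; return the best point."""
    best_x, best_f = a, f(a)
    for k in range(1, m):
        x = a + (b - a) * k / (m - 1)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_star, f_star = exhaustive_search(lambda x: (x - 2.0) ** 2, 0.0, 4.0)
print(x_star, f_star)  # → 2.0 0.0
```

The accuracy is limited by the grid spacing (b - a) / (m - 1), which is why the method is usually only practical in one dimension.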

The objective function in each iteration is approximated by an interpolating parabola through three existing points. The minimum point of the parabola is taken as a guess for the minimum point of the objective function. It is accepted, and used to generate a smaller interval, if it lies within the bounds of the current interval; otherwise, the algorithm falls back to an ordinary golden section step. This process is repeated until an acceptable solution is located. The same need for repeated refinement motivates the update scheme of the Newton algorithm for nonlinear equations [4], pp. When the derivative required by the Newton update is unavailable, it is approximated by a finite difference.

The Newton update scheme accordingly becomes the Secant update scheme [4], p. Three algorithms in this category are commonly used. The basic logic is similar to that of the parabolic interpolation algorithm; however, in this instance, evaluation of both the objective function and its derivative at each point is required. Consequently, the approximating polynomial can be constructed using fewer points.
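The Secant update scheme for a nonlinear equation g(x) = 0 can be sketched as follows; the tolerance, iteration cap, and test equation are illustrative choices:

```python
# Sketch of the secant update for solving the 1-D nonlinear equation g(x) = 0,
# as obtained when Newton's derivative is replaced by a finite difference.

def secant(g, x0, x1, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - g(x_k) * (x_k - x_{k-1}) / (g(x_k) - g(x_{k-1}))."""
    g0, g1 = g(x0), g(x1)
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, g(x1)
    return x1

root = secant(lambda x: x ** 2 - 2.0, 1.0, 2.0)
print(root)  # ≈ 1.4142135 (the positive root of x**2 = 2)
```

Unlike the Newton update, each iteration needs only one new function evaluation and no derivative.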

The Fortran-style pseudo-code for one implementation is given in Figure 1. Obviously, this approach soon becomes prohibitive as the dimension N and the number of sample points mi for each dimension increase, since the total number of sample points is the product of all the mi. For the pattern search algorithm, the Fortran-style pseudo-code is given in Figure 1. The essential idea behind it is to move from one solution point to the next.

There are two kinds of moves in the pattern search algorithm: the exploratory move and the pattern move. An exploratory move consists of a series of univariate searches. Each univariate search moves a little along its coordinate direction. The pattern move is larger in size. Fortran-style pseudo-code is given in Figure 1.
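The two kinds of moves can be sketched in a minimal pattern search of the Hooke-Jeeves style; the step size, shrink factor, and quadratic test function are illustrative assumptions, not the book's implementation:

```python
# Minimal pattern search sketch: exploratory moves probe each coordinate,
# and a pattern move extrapolates along the promising direction.

def exploratory_move(f, x, step):
    """Univariate probes: try +/- step along each coordinate, keep improvements."""
    x = list(x)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = list(x)
            trial[i] += delta
            if f(trial) < f(x):
                x = trial
                break
    return x

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    base = list(x0)
    while step > tol:
        new = exploratory_move(f, base, step)
        if f(new) < f(base):
            # Pattern move: a larger jump along the base -> new direction.
            pattern = [2 * n - b for n, b in zip(new, base)]
            trial = exploratory_move(f, pattern, step)
            base = trial if f(trial) < f(new) else new
        else:
            step *= shrink  # no improvement: refine the step size
    return base

x = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
print(x)  # ≈ [1.0, -2.0]
```

When no exploratory move improves the objective, the step size is shrunk, so the search refines itself around the current base point.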

However, it may not converge to the optimal solution point in the case of non-decomposable objective functions; Powell proposed a simple but vastly superior variation. Another approach works with a simplex, a polytope of N + 1 vertices in N dimensions: for example, a two-dimensional simplex is a triangle and a three-dimensional simplex is a tetrahedron. The vertices with the highest, second highest, and lowest objective function values are identified as xH, xS, and xL.
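The vertex ordering and the standard reflection step can be sketched as follows; the reflection coefficient alpha = 1 and the two-dimensional triangle are illustrative assumptions:

```python
# Sketch of the simplex reflection step: the worst vertex xH is reflected
# through the centroid of the remaining vertices.

def reflect(simplex, f, alpha=1.0):
    """Return the reflection of the highest vertex through the centroid."""
    ordered = sorted(simplex, key=f)       # best (xL) ... worst (xH)
    x_h = ordered[-1]
    others = ordered[:-1]
    n = len(x_h)
    centroid = [sum(v[i] for v in others) / len(others) for i in range(n)]
    return [c + alpha * (c - w) for c, w in zip(centroid, x_h)]

f = lambda v: v[0] ** 2 + v[1] ** 2
triangle = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # a 2-D simplex
print(reflect(triangle, f))  # → [1.0, -1.0]
```

In a full simplex algorithm, the reflected point is then accepted, expanded, or contracted depending on how its objective value compares with those at xL, xS, and xH.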

A schematic three-dimensional reflection is drawn in Figure 1. It is also straightforward to generalize the corresponding one-dimensional Secant algorithm for nonlinear equations to multiple dimensions [4], pp. Four algorithms in this category are commonly used.