NOTICE TO READERS
If your library is not already a standing/continuation order customer or subscriber to this series, may we recommend that you place a
standing/continuation or subscription order to receive immediately upon publication all new volumes. Should you find that these volumes no longer serve
your needs your order can be cancelled at any time without notice.
Copies of all previously published volumes are available. A fully descriptive catalogue will be gladly sent on request.
ROBERT MAXWELL
Publisher
IFAC Related Titles
BROADBENT & MASUBUCHI: Multilingual Glossary of Automatic Control Technology
EYKHOFF: Trends and Progress in System Identification
ISERMANN: System Identification Tutorials (Automatica Special Issue)
U.K. Pergamon Press, Headington Hill Hall, Oxford OX3 0BW, England
U.S.A. Pergamon Press Inc., Maxwell House, Fairview Park, Elmsford, New York 10523, U.S.A.
CANADA Pergamon Press Canada, Suite 104, 150 Consumers Road, Willowdale, Ontario M2J 1P9,
Canada
AUSTRALIA Pergamon Press (Aust.) Pty. Ltd., P.O. Box 544, Potts Point, N.S.W. 2011, Australia
FEDERAL REPUBLIC Pergamon Press GmbH, Hammerweg 6, D-6242 Kronberg, Federal Republic of Germany
OF GERMANY
JAPAN Pergamon Press, 8th Floor, Matsuoka Central Building, 1-7-1 Nishishinjuku, Shinjuku-ku,
Tokyo 160, Japan
BRAZIL Pergamon Editora Ltda., Rua Eça de Queirós, 346, CEP 04011, São Paulo, Brazil
PEOPLE'S REPUBLIC Pergamon Press, Qianmen Hotel, Beijing, People's Republic of China
OF CHINA
Copyright © 1986 IFAC
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in
any form or by any means: electronic, electrostatic, magnetic tape, mechanical, photocopying, recording or other
wise, without permission in writing from the publishers.
First edition 1986
Library of Congress Cataloging in Publication Data
IFAC Workshop on Applications of Nonlinear Program
ming to Optimization and Control (5th : 1985 : Capri, Italy)
Control applications of nonlinear programming and optimization.
Includes indexes.
1. Automatic control—Congresses. 2. Nonlinear
programming—Congresses. 3. Mathematical
optimization—Congresses. I. Di Pillo, G.
II. International Federation of Automatic Control.
III. Title.
TJ212.2.I339 1985 629.8 86-4957
British Library Cataloguing in Publication Data
IFAC Workshop (5th : 1985 : Capri)
Control applications of nonlinear programming
and optimization.—(IFAC proceedings)
1. Automatic control—Mathematical
models 2. Nonlinear programming
I. Title II. Di Pillo, G.
III. International Federation of Automatic Control
IV. Series
629.8'312 TJ213
ISBN 0-08-031665-4
These proceedings were reproduced by means of the photo-offset process using the manuscripts supplied by the
authors of the different papers. The manuscripts have been typed using different typewriters and typefaces. The
lay-out, figures and tables of some papers did not agree completely with the standard requirements: consequently
the reproduction does not display complete uniformity. To ensure rapid publication this discrepancy could not be
changed; nor could the English be checked completely. Therefore, the readers are asked to excuse any deficiencies
of this publication which may be due to the above mentioned reasons.
The Editor
Printed in Great Britain by A. Wheaton & Co. Ltd., Exeter
CONTROL APPLICATIONS OF
NONLINEAR PROGRAMMING
AND OPTIMIZATION
Proceedings of the Fifth IFAC Workshop,
Capri, Italy, 11-14 June 1985
Edited by
G. DI PILLO
Dipartimento di Informatica e Sistemistica,
University of Rome "La Sapienza", Rome, Italy
Published for the
INTERNATIONAL FEDERATION OF AUTOMATIC CONTROL
by
PERGAMON PRESS
OXFORD · NEW YORK · TORONTO · SYDNEY · FRANKFURT
TOKYO · SAO PAULO · BEIJING
FIFTH IFAC WORKSHOP ON CONTROL APPLICATIONS OF
NONLINEAR PROGRAMMING AND OPTIMIZATION
Sponsored by
The International Federation of Automatic Control (IFAC)
Technical Committee on Mathematics of Control
Technical Committee on Theory
Co-sponsored by
University of Rome "La Sapienza", Rome, Italy
University of Calabria, Cosenza, Italy
University of Salerno, Salerno, Italy
Technical Committees on Engineering and Technological Sciences, National Council of
Researches, Italy
Consorzio Campano di Ricerca per l'Informatica e l'Automazione Industriale, Naples,
Italy
Azienda Autonoma di Cura, Soggiorno e Turismo, Capri, Italy
International Programme Committee
A. Miele, USA (Chairman)
G. Di Pillo, Italy
F. M. Kirillova, USSR
D. Q. Mayne, UK
N. Olhoff, Denmark
B. L. Pierson, USA
H. E. Rauch, USA
A. Ruberti, Italy
R. W. H. Sargent, UK
K. H. Well, FRG
National Organizing Committee
G. Di Pillo (Chairman)
L. Grandinetti
A. Miele
F. Zirilli
PREFACE
This volume contains a selection of papers presented at the
Workshop on Control Applications of Nonlinear Programming and
Optimization held in Capri, Italy, during 11-14 June 1985.
The purpose of the Workshop was to exchange ideas and
information on the applications of optimization and nonlinear
programming techniques to real life control problems, to
investigate new ideas that arise from this exchange and to look
for advances in nonlinear programming and optimization theory
which are useful in solving modern control problems.
The Workshop benefited from the sponsorship of the
International Federation of Automatic Control (IFAC) through the
Committees on Theory and on Mathematics of Control. It was the
fifth IFAC Workshop on the subject.
The Workshop was attended by fifty-five experts from
sixteen countries. Four invited and twenty-six contributed papers
were presented and discussed; invited speakers were A.E. Bryson,
Jr., R. Bulirsch, H.J. Kelley and J.L. Lions.
The scientific program of the Workshop covered various
aspects of the optimization of control systems and of the
numerical solution of optimization problems; specific
applications concerned the optimization of aircraft trajectories,
of mineral and metallurgical processes, of wind tunnels, of
nuclear reactors; computer aided design of control systems was
also considered in some papers.
The scientific program was arranged by an International
Committee chaired by Angelo Miele (USA), with other members being
G. Di Pillo (Italy), F.M. Kirillova (USSR), D.Q. Mayne (UK), N.
Olhoff (Denmark), B.L. Pierson (USA), H.E. Rauch (USA), A.
Ruberti (Italy), R.W.H. Sargent (UK) and K.H. Well (FRG).
All contributed papers included in this volume have been
reviewed; thanks are due, for their contribution in the reviewing
procedure, to R. Bulirsch, J.L. de Jong, P. Fleming, H.J. Kelley,
F.M. Kirillova, J.L. Lions, D.Q. Mayne, A. Miele, H.J. Oberle,
B.L. Pierson, A.L.Tits, K.H. Well and F. Zirilli.
Finally, it was a great pleasure for me to have served as
chairman of the organizing committee.
Gianni Di Pillo
Copyright © IFAC Control Applications of Nonlinear
Programming and Optimization, Capri, Italy, 1985
ON THE ORTHOGONAL COLLOCATION AND
MATHEMATICAL PROGRAMMING
APPROACH FOR STATE CONSTRAINED
OPTIMAL CONTROL PROBLEMS
O. E. Abdelrahman* and B. M. Abuelnasr**
* Department of Mathematics and Computer Sciences, Zagazig University,
Zagazig, Egypt
** Department of Computer Sciences and Automatic Control, Faculty of Engineering,
University of Alexandria, Egypt
Abstract. The orthogonal collocation approach is now well known to solve,
effectively, state constrained optimal control problems. The mathematical
programming technique has also been used as an effective tool to construct the
optimal trajectories. In this paper, a study is made of the efficiency
and accuracy requirements of the combined orthogonal collocation and
mathematical programming approach, as regards the employed optimization
algorithm and the number of orthogonal collocation points. It is shown,
by experimentation with numerical examples, that the Fletcher-Powell
optimization algorithm is much faster to produce convergence than the
Fletcher-Reeves algorithm. The efficiency can be a ratio of six-to-one. The
results are compared with an alternative approach to solve the same problem.
It is shown that the present algorithm attains a lower cost than the alternative
approach, although it requires more computation time. The choice is then a
compromise one. As the number of orthogonal points increases, the resulting
solutions are more accurate, but the convergence speed decreases.
Experimentation with N shows that a saving of five-to-one in computing time can
be achieved with almost the same cost function. Finally, it is shown, by
a numerical example, that uniformly distributed collocation points result
in non-optimal solutions, which also violate the problem constraints. This
is numerical evidence of the superiority of the orthogonal collocation
approach.
Keywords. Orthogonal collocation; mathematical programming; optimization
algorithms; convergence speed; state constrained problems.
INTRODUCTION

State constrained optimal control problems pose a challenging two point boundary value problem (TPBVP). Different approaches exist which solve the resulting TPBVP. The orthogonal collocation approach, as a method of approximating functions, is used to construct the problem solutions with good to excellent accuracies (Oh and Luus, 1977, and Abdelrahman, 1980). Combined with mathematical programming, orthogonal collocation was used to solve the state constrained optimal control problem (Abuelnasr and Abdelrahman, 1981). The emphasis on just getting a numerically programmed solution, without examining the optimization algorithm which finally produces the required solution, sometimes results in inefficient solutions as far as computation time is concerned. In this paper, a look at two optimization algorithms, namely those of Fletcher-Reeves and Fletcher-Powell (Kuester and Mize, 1973), shows a comparatively large difference in efficiency. We also look at the approximating method of orthogonal collocation. It is found possible to obtain almost optimal solutions with a reasonable number of collocation points. The orthogonality of the collocation points is also shown to be the right choice for approximating the solution of the problem, as an otherwise choice based on non-orthogonal collocation points gives erroneous results.

STATEMENT OF THE PROBLEM AND ITS SOLUTION

Given the state description of the dynamic system as

    ẋ = f(x,u,t) ,   x(0) = x_0                                        (1)

where x(t) is an n×1 state vector, u(t) is an r×1 control vector, and f(x,u,t) is an n×1 vector function of x, u, and t. The control vector u(t) is assumed unconstrained.

It is required to minimize, with respect to u, the cost functional

    J(u) = ∫_0^t_f L(x,u,t) dt ,                                       (2)

subject to the differential constraints of Eq.(1), and the state inequality constraint

    s(x,t) ≤ 0                                                         (3)

The solution of the above posed problem, using the orthogonal collocation approach, is well known (Abuelnasr and Abdelrahman, 1981). Here, we give a brief outline of the steps which lead finally to the posed problem solution. Thus, the solution of the problem consists of three stages. The first stage formulates a TPBVP for the following unconstrained optimization problem.
Minimize J(u), given by Eq.(2), with respect to u, subject to the differential constraints given by Eq.(1). This is done by defining the Hamiltonian of the problem

    H(x,u,λ,t) = L(x,u,t) + λ^T f(x,u,t) ,                             (4)

where λ(t) is an n×1 adjoint vector, known also as the Lagrange multiplier vector, and ( )^T denotes vector transposition. The following canonic equations and the necessary condition of optimality result

    λ̇ = -∂H/∂x                                                         (5)

    ∂H/∂u = 0                                                          (6)

Using Eq.(6) in Eq.(5) gives the following two sets of equations

    ẋ = f(x,λ,t) ,   x(0) = x_0
    λ̇ = g(x,λ,t) ,   λ(t_f) = 0                                        (7)

while the optimal control, u(t), is obtained from Eq.(6). In Eqs.(7), g is an n×1 vector function of x, λ, and t. Also, the solution of Eqs.(7) poses a TPBVP.

The second stage is the transformation of the TPBVP, obtained from the first stage, into a corresponding set of algebraic equations by using the collocation of x(t) and λ(t) over the time interval (0, t_f) (see Appendix I). Good to excellent accuracies can be achieved using collocation points chosen as the zeros of transformed Legendre polynomials (a transformed Legendre polynomial is a Legendre polynomial defined on (0,1)).

Denote the set of algebraic equations by

    F(y) = 0 ,                                                         (8)

where y is a vector of order 2n×(N+1), n being the problem dimension, and N is the number of interior collocation points. The details of obtaining Eq.(8) are illustrated in Appendix I (the case of the unconstrained optimal control problem is illustrated, where the state equations are linear and the cost is quadratic in x and u).

The components of y are those of x(t) and λ(t) at the interior collocation points, i.e., y can be written as

    y = [y_x , y_λ]^T                                                  (9)

where y_x and y_λ are, respectively, the approximations of x(t) and λ(t), using orthogonal collocation. Also, y can be partitioned into two components, y_c and y_~c, where y_c is an m×(N+1) vector of constrained state components, m being the dimension of the constrained variables in the inequality (3), while y_~c is a (2n-m)×(N+1) vector of the unconstrained components and the adjoint vector at the (N+1) points.

Finally, the third stage solves a constrained optimization problem by using the technique of mathematical programming. Thus

(i) The inequality constraint (3) is transformed into an equality constraint by introducing a slack variable α(t), to obtain

    s(x,t) + 0.5 α²(t) = 0                                             (10)

The equality (10) is then evaluated at the N interior collocation points chosen as the zeros of transformed Legendre polynomials to give

    G(y_c, α) = 0 ,                                                    (11)

where G is an m×(N+1) vector.

(ii) Construct a cost function, CF, as follows

    CF = Σ_{i=1}^{2n(N+1)} F_i²(y)                                     (12)

where F_i is the ith component of F.

(iii) Minimize the cost function, CF, subject to the equality constraint given by Eq.(11), using a suitable optimization algorithm.

NUMERICAL EXAMPLES

The examples introduced below are extracted from the control literature (Jacobson and Lele, 1969). A comparison of this work with other authors' work can thus be made to evaluate the presented algorithms.

Example 1.

Consider

    ẋ_1 = x_2 ,        x_1(0) = 0
    ẋ_2 = -x_2 + u ,   x_2(0) = -1                                     (13)

with the following performance index to be minimized with respect to u

    J(u) = ∫_0^t_f (x_1² + x_2² + 0.005 u²) dt ,                       (14)

subject to the inequality constraint

    x_1(t) - 8(t-0.5)² + 0.5 ≤ 0                                       (15)

The inequality (15) is of the second order, which means it has to be differentiated twice to obtain the control u(t).

Formulation of the Solution

The technique of mathematical programming and orthogonal collocation of the previous section is used to solve the example. Thus, first, the unconstrained TPBVP is formulated as follows

    ẋ_1 = x_2 ,                   x_1(0) = 0
    ẋ_2 = -x_2 - 100 λ_2 ,        x_2(0) = -1
    λ̇_1 = -2 x_1 ,                λ_1(1) = 0                           (16)
    λ̇_2 = -2 x_2 - λ_1 + λ_2 ,    λ_2(1) = 0

while the optimal control, u(t), is given by

    u = -100 λ_2

Then, introduce the slack variable, α(t), into inequality (15) to obtain the equality

    x_1(t) - 8(t-0.5)² + 0.5 + 0.5 α²(t) = 0                           (17)

By following the procedure of the last section, a set of algebraic equations is obtained from Eqs.(16) and Eq.(17). The optimization algorithms of Fletcher-Reeves and Fletcher-Powell are then applied to obtain the problem solution.
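To make step (iii) concrete, the short sketch below (ours, not the authors') minimizes a cost of the form of Eq.(12) with two readily available routines: SciPy's nonlinear conjugate gradient ("CG"), a modern relative of Fletcher-Reeves, and "BFGS", a quasi-Newton relative of Fletcher-Powell. The residual vector F used here is only a small stand-in for the collocation residuals of Eq.(8), not the discretized system of Eqs.(16) and (17).

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative residual vector F(y): a stand-in for the collocation
    # residuals of Eq.(8); the actual F would collect the discretized
    # Eqs.(16) together with the slack-variable equality of Eq.(17).
    def F(y):
        return np.array([y[0]**2 + y[1] - 3.0,
                         y[0] + y[1]**2 - 5.0,
                         y[2] - y[0] * y[1]])

    def CF(y):
        # cost function of Eq.(12): sum of squared residual components
        r = F(y)
        return r @ r

    y0 = np.zeros(3)
    for method in ("CG", "BFGS"):   # conjugate-gradient vs quasi-Newton search
        res = minimize(CF, y0, method=method)
        print(method, "iterations:", res.nit, "CF:", res.fun)

The paper's own results in Table 1 were obtained with the Fletcher-Reeves and Fletcher-Powell algorithms as described by Kuester and Mize (1973); the SciPy methods are used here only as conveniently available analogues.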
Table 1 shows a comparison of the speed of convergence of the obtained solutions, using the two algorithms. For comparison purposes, the corresponding results of Jacobson and Lele (1969) are included in Table 1.

TABLE 1  Results of Two Optimization Algorithms

  Algorithm                   Number of Iterations   Cost Function, J
  Fletcher-Reeves*                     466                0.7045
  Fletcher-Powell                       76                0.6742
  Jacobson and Lele (1969)              16                0.75

  * N = 7, the number of collocation points

It is observed, by looking at Table 1, that the second algorithm, namely Fletcher-Powell, is much more efficient than Fletcher-Reeves. The efficiency can be measured in terms of the ratio of the numbers of iterations needed to produce convergence. In the table, Fletcher-Powell is six times more efficient than Fletcher-Reeves. By comparison, the Jacobson and Lele approach produces results which are more efficient, although giving a slightly higher cost. A common feature of all the methods used in Table 1 is the use of conjugate gradients to search for the minimum of the objective function. Besides, Jacobson and Lele used the conjugate gradient in the function space, which proved to be more efficient than the normal conjugate gradient (Lasdon, Mitter, and Waren, 1967). The wide variation in the number of iterations in the first two lines of the table arises because Fletcher-Powell is a second order method, which produces quadratic convergence (see Appendix II, where a matrix H is used to accelerate convergence), while Fletcher-Reeves is a first order method, which gives linear convergence.

Another point to be discussed is the effect of the orthogonality of the collocation points on the approximation of the solution of the problem. For this purpose, Example 1 is re-solved, but using collocation points not based on the zeros of orthogonal polynomials. The points are chosen on an equal interval basis. The obtained results are shown in Table 2.

TABLE 2  Results of Two Sets of Collocation Points (N = 7)

                                                      Cost Function, J
  1. Collocation points are zeros of
     Legendre polynomials*                                 0.7045
  2. Collocation points distributed
     equally on (0,1)*                                     1.0411

  * Fletcher-Reeves algorithm is implemented

The resulting trajectories for the choices in Table 2 are plotted in Fig. 1. Table 2 and Fig. 1 show a non-optimal solution for the equal interval collocation points, which motivates the use of the orthogonal collocation method for approximating the problem solutions.
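For reference, the two sets of collocation points compared in Table 2 can be generated as in the following short sketch (a NumPy illustration of ours; the variable names are not from the paper): the orthogonal set consists of the zeros of the degree-N Legendre polynomial mapped from (-1,1) onto (0,1), i.e. the zeros of the transformed Legendre polynomial, while the second set is equally spaced on (0,1).

    import numpy as np

    N = 7
    # zeros of the degree-N Legendre polynomial on (-1,1), mapped onto (0,1);
    # these are the zeros of the transformed Legendre polynomial
    z, _ = np.polynomial.legendre.leggauss(N)
    orthogonal_points = 0.5 * (z + 1.0)

    # N interior points distributed equally on (0,1), as in row 2 of Table 2
    uniform_points = np.linspace(0.0, 1.0, N + 2)[1:-1]

    print(np.round(orthogonal_points, 4))
    print(np.round(uniform_points, 4))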
Example 2.

Same as Example 1, except that this case treats a constraint of the first order, given by

    x_2(t) - 8(t-0.5)² + 0.5 ≤ 0                                       (18)

In this example, N, the number of orthogonal collocation points, is given the values 4, 5, 6, and 7. The results are illustrated in Table 3 and Fig. 2.

TABLE 3  Results of Varying N on the Convergence Speed and the Optimal Cost Function

  N        Number of Iterations   Cost Function, J
  4                 149               0.134405
  5                 112               0.145040
  6                 119               0.145990
  7                 505               0.136334

  Jacobson and Lele (1969): 16 iterations, J = 0.164 (using the conjugate gradient method)
  Note: the Fletcher-Reeves algorithm is implemented in this example.

The table shows, at first, a decrease, then an increase in the convergence iteration cycles. This phenomenon is due to the interplay between the truncation and roundoff errors. As N increases, the accuracy increases, and consequently the truncation error decreases. But the roundoff error increases with increasing N. There is, thus, a value of N at which the combined truncation and roundoff error is a minimum. The figure also confirms the above claims. The optimum cost function, J, is not very sensitive to the variation of N. Thus, based on the number of iterations and the associated graphs in Fig. 2, an optimum value for N can be selected. The presented values of the table show that N = 5 is an optimum choice. For comparison, the results of Jacobson and Lele are included. The same observations and comments concerning the number of iterations and accuracy apply as before. In addition, an increase of N is associated with an increase of the number of equations to be solved, of the form 2n(N+1); and thus more computation time is needed for convergence. But a gain in accuracy is evident, as shown in the third column of Table 3. The advantage of the conjugate gradient in the function space is also noted.

CONCLUSIONS

Looking at the results in the tables and their associated figures, several conclusions can be reached. The orthogonal collocation and mathematical programming approach is presented as an alternative approach to solve the state constrained optimal control problem. It seems appropriate to compare it to other approaches, as was done in Table 1 and Table 3, as far as accuracy is concerned. It is also concluded that the conjugate gradient in the function space produces faster convergence than the normal conjugate gradient, as used in the Fletcher-Reeves and Fletcher-Powell algorithms. The last conclusion accounts for the relatively smaller numbers of iterations of the Jacobson and Lele results in Table 1 and Table 3. The final conclusion of the paper is a strong preference for the use of orthogonal collocation rather than any other method of approximating the variables of the problem.

REFERENCES

Abdelrahman, O. E. (1980). Orthogonal collocation for optimal control problems. M.Sc. thesis, Dept. Comput. Sci., Univ. of Alexandria, Egypt.
Abuelnasr, B. M., and Abdelrahman, O. E. (1981). Extension of orthogonal collocation for optimal control problems with state inequality constraints. Paper presented at the 8th IFAC Congress, Kyoto, Japan, Aug. 21-25, 1981.
Jacobson, D. H., and Lele, M. M. (1969). A transformation technique for optimal control problems with state inequality constraints. IEEE Trans. Autom. Control.
Kuester, J. L., and Mize, J. H. (1973). Optimization Techniques with Fortran. McGraw-Hill.
Lasdon, L. S., Mitter, S. K., and Waren, A. D. (1967). The conjugate gradient method for optimal control problems. IEEE Trans. Autom. Control, 12, 152-158.
Oh, S. H., and Luus, R. (1977). Use of orthogonal collocation method in optimal control problems. Int. J. Control, 26, 657-673.

APPENDIX I
The Orthogonal Collocation Approach for Unconstrained TPBVP

This appendix is a summary of the necessary steps followed to solve an unconstrained TPBVP using orthogonal collocation. The procedure is taken from Oh and Luus (1977). Thus, consider the following problem: a system is described by

    ẋ = f(x,u,t) ,   x(0) = x_0                                        (19)

where x, u, and f have been defined before, and x_0 is the initial state vector. It is required to find the optimal control function, u(t), which minimizes

    J(u) = ∫_0^t_f L(x,u,t) dt ,                                       (20)

subject to the differential constraint of Eq.(19). The steps of solving the above posed problem are as follows.

(1) Define the Hamiltonian, H,

    H(x,u,λ,t) = L(x,u,t) + λ^T(t) f(x,u,t) ,                          (21)

From H, obtain the following canonic equations

    ẋ = f(x,λ,t) ,   x(0) = x_0
    λ̇ = g(x,λ,t) ,   λ(t_f) = 0                                        (22)

where λ̇ is obtained from

    λ̇ = -∂H/∂x ,                                                       (23)

and the optimal control, u, is obtained from the necessary condition

    ∂H/∂u = 0                                                          (24)

The set of equations (22) constitutes a TPBVP.

(2) Choose a number of interior collocation points, N, as the N zeros of a transformed Legendre polynomial. Then expand x(t) and λ(t) into finite power series in t as follows

    x(t) = Σ_{j=1}^{N+2} c_j t^(j-1)
    λ(t) = Σ_{j=1}^{N+2} d_j t^(j-1)                                   (25)

for 0 ≤ t ≤ t_f. The unknown constants c_j, d_j, j = 1,2,...,N+2, can be determined by satisfying Eqs.(25) at t = 0, at t = t_f, and at the N interior collocation points. By imposing that Eqs.(22) are satisfied at t = 0, t_1, t_2, ..., t_N, 2n(N+1) equations result. By using the initial condition on x(t) and the final condition on λ(t), another set of 2n equations is obtained. Then, in all, 2n(N+2) algebraic equations can be formed in the unknowns c_j, d_j, j = 1,2,...,N+2.

(3) Depending on the form of f and L, the resulting set of 2n(N+2) algebraic equations can be solved, giving the unknown coefficients in x(t) and λ(t). If f is linear and L is quadratic in x and u, then Eqs.(22) become linear in x and λ. In this case, the unknown coefficients can be obtained by solving the following matrix equation

    A C = B ,                                                          (26)

where C = [c_1, c_2, ..., c_{N+2}, d_1, d_2, ..., d_{N+2}]^T is a 2n(N+2) vector of coefficients, A is a known square matrix of dimension 2n(N+2), and B is a known 2n(N+2) vector. If A is invertible, then C is obtained from

    C = A⁻¹ B                                                          (27)

where A⁻¹ is the inverse matrix of A.

(4) Once C is obtained, the coefficients c_j, d_j, j = 1,2,...,N+2, are known. By substituting C in Eqs.(25), approximations of x(t) and λ(t), using N interior collocation points, can be obtained. Also, the optimal control vector, u(t), can be approximated by using Eq.(24) and the obtained value of C.
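As a concrete illustration of steps (2)-(4), the sketch below assembles and solves the linear system A C = B of Eq.(26) for a simple scalar linear-quadratic instance chosen by us, not taken from the paper: ẋ = -x + u, J = ∫_0^1 (x² + u²) dt, for which Eqs.(22) become ẋ = -x - 0.5 λ and λ̇ = -2x + λ with x(0) = 1 and λ(1) = 0. Enforcing Eqs.(22) at t = 0 and at the N interior points is one consistent reading of step (2); all variable and helper names are illustrative.

    import numpy as np

    N, tf, x0 = 5, 1.0, 1.0
    M = N + 2                       # power-series coefficients per variable

    # interior collocation points: zeros of the degree-N Legendre polynomial,
    # mapped from (-1,1) onto (0,tf) (the transformed Legendre polynomial)
    z, _ = np.polynomial.legendre.leggauss(N)
    t_int = 0.5 * (z + 1.0) * tf

    # enforce the canonic equations at t = 0 and the N interior points
    t_col = np.concatenate(([0.0], t_int))

    def basis(t):
        # powers t^0 ... t^(M-1) and their time derivatives
        p = np.array([t**j for j in range(M)])
        dp = np.array([j * t**(j - 1) if j > 0 else 0.0 for j in range(M)])
        return p, dp

    rows, rhs = [], []
    for t in t_col:
        p, dp = basis(t)
        # xdot + x + 0.5*lambda = 0
        rows.append(np.concatenate((dp + p, 0.5 * p)))
        rhs.append(0.0)
        # lambdadot + 2x - lambda = 0
        rows.append(np.concatenate((2.0 * p, dp - p)))
        rhs.append(0.0)

    p0, _ = basis(0.0)
    pf, _ = basis(tf)
    rows.append(np.concatenate((p0, np.zeros(M))))   # x(0) = x0
    rhs.append(x0)
    rows.append(np.concatenate((np.zeros(M), pf)))   # lambda(tf) = 0
    rhs.append(0.0)

    A, B = np.array(rows), np.array(rhs)
    C = np.linalg.solve(A, B)                        # C = A^-1 B, as in Eq.(27)
    c, d = C[:M], C[M:]

    # approximate adjoint and control; from dH/du = 0 here u(t) = -0.5*lambda(t)
    t_plot = np.linspace(0.0, tf, 5)
    lam = np.polynomial.polynomial.polyval(t_plot, d)
    print("u(t) ~", -0.5 * lam)

Solving the dense system directly is reasonable only for the small dimensions used here; when f is nonlinear the paper instead minimizes the residual cost CF of Eq.(12) with a mathematical programming algorithm.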
APPENDIX II
Fletcher-Reeves and Fletcher-Powell Optimization Algorithms

The purpose of these algorithms is to find a local minimum of an unconstrained function of more than one variable. Both algorithms use conjugate gradients to generate the necessary changes in the function variables. Thus, consider the unconstrained minimization of the function F(x_1, x_2, ..., x_p), using, first, the Fletcher-Reeves algorithm. The steps proceed as follows.

(1) A starting point is selected, i.e., x_1, x_2, ..., x_p are chosen.

(2) The direction of steepest descent is determined by computing the following direction vector components (normalized form) at the starting point,

    M_i^(k) = -(∂F/∂x_i)^(k) / [ Σ_{j=1}^{p} ((∂F/∂x_j)^(k))² ]^(1/2) ,   i = 1,2,...,p,

where k = 0 for the starting point.

(3) A one dimensional search is then conducted along the direction of steepest descent, utilizing the relation

    x_i(new) = x_i(old) + s M_i ,   i = 1,2,...,p,

where s is the distance moved in the M_i directions. When a minimum is obtained along the direction of steepest descent, a new "conjugate direction" is evaluated at the new point, with components

    M_i^(k) = -(∂F/∂x_i)^(k) + [ Σ_{j=1}^{p} ((∂F/∂x_j)^(k))² / Σ_{j=1}^{p} ((∂F/∂x_j)^(k-1))² ] M_i^(k-1) ,   i = 1,2,...,p,

normalized, as in step (2), to unit length.

(4) A one dimensional search is then conducted in this direction. When a minimum is found, an overall convergence check is made. If convergence is achieved, the procedure terminates. If convergence is not achieved, new "conjugate direction" vector components are evaluated per step (3). This process continues until convergence is achieved or p+1 directions have been used. If a cycle of p+1 directions has been completed, a new cycle is started consisting of a steepest descent direction (step 2) and p "conjugate directions" (step 3).

The Fletcher-Powell algorithm proceeds as follows.

(1) Select a starting point.

(2) Compute a direction of search. Its components, before normalization to unit length, are

    M_i^(k) = - Σ_{j=1}^{p} H_ij^(k) (∂F/∂x_j)^(k) ,   i = 1,2,...,p,

where k is the iteration index (k = 0 at the starting point), M_i are the direction vector components, and (∂F/∂x_j)^(k) is the jth component of the gradient vector. H_ij^(k) is the (i,j) element of a symmetric positive definite p×p matrix, which is initially chosen to be the identity matrix. Thus, the initial direction of search is the path of steepest descent.

(3) A one dimensional search is conducted in the direction chosen by the previous step until a minimum is located, using the relation

    x_i(new) = x_i(old) + s M_i ,   i = 1,2,...,p,

where s is the step size in the direction of search.

(4) A convergence check is made. If convergence is achieved, the procedure is terminated. Otherwise, a new search direction is chosen per step (2), except that H^(k+1) is calculated as follows

    H^(k+1) = H^(k) + A^(k) + B^(k)

where

    A^(k) = ΔX^(k) (ΔX^(k))^T / [ (ΔX^(k))^T ΔG^(k) ]

    B^(k) = - H^(k) ΔG^(k) (ΔG^(k))^T H^(k) / [ (ΔG^(k))^T H^(k) ΔG^(k) ]

    ΔX^(k) = X^(k+1) - X^(k)

    ΔG^(k) = (∂F/∂X)^(k+1) - (∂F/∂X)^(k)

A new one dimensional search is performed in the new direction. The process is repeated until convergence is obtained.
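For readers who prefer code to formulas, the following minimal sketch (ours, not reproduced from Kuester and Mize, 1973) implements the Fletcher-Powell update of step (4), with a crude backtracking search standing in for the exact one dimensional search of step (3); the quadratic test function and all names are illustrative.

    import numpy as np

    def fletcher_powell(F, grad, x0, tol=1e-8, max_iter=200):
        # minimal Fletcher-Powell (DFP) sketch following Appendix II
        x = np.asarray(x0, dtype=float)
        H = np.eye(x.size)          # H(0) = identity: first step is steepest descent
        g = grad(x)
        for _ in range(max_iter):
            d = -H @ g              # search direction of step (2), unnormalized
            s, f_old = 1.0, F(x)    # crude backtracking replaces the exact 1-D search
            while F(x + s * d) > f_old and s > 1e-12:
                s *= 0.5
            x_new = x + s * d
            g_new = grad(x_new)
            if np.linalg.norm(g_new) < tol:      # convergence check of step (4)
                return x_new
            dx, dg = x_new - x, g_new - g        # delta X^(k) and delta G^(k)
            A = np.outer(dx, dx) / (dx @ dg)
            B = -(H @ np.outer(dg, dg) @ H) / (dg @ H @ dg)
            H = H + A + B                        # H^(k+1) = H^(k) + A^(k) + B^(k)
            x, g = x_new, g_new
        return x

    # usage on a small quadratic test function
    F = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
    print(fletcher_powell(F, grad, np.zeros(2)))   # approaches (1, -2)

If H is held equal to the identity the loop reduces to steepest descent; it is the H update that produces the second order behaviour discussed in connection with Table 1.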