Systems with Delays
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106
Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])
Systems with Delays
Analysis, Control, and
Computations
A.V. Kim and A.V. Ivanov
Copyright © 2015 by Scrivener Publishing LLC. All rights reserved.
Co-published by John Wiley & Sons, Inc. Hoboken, New Jersey, and Scrivener Publishing LLC, Salem,
Massachusetts.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to
the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax
(978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030,
(201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts
in preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your
situation. You should consult with a professional where appropriate. Neither the publisher nor author
shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact
our Customer Care Department within the United States at (800) 762-2974, outside the United States at
(317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic formats. For more information about Wiley products, visit our web site
at www.wiley.com. For more information about Scrivener products please visit www.scrivenerpublishing.com.
Cover design by Kris Hackerott
Library of Congress Cataloging-in-Publication Data:
ISBN 978-1-119-11758-2
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
Contents

Preface

1 Linear time-delay systems
  1.1 Introduction
    1.1.1 Linear systems with delays
    1.1.2 Wind tunnel model
    1.1.3 Combustion stability in liquid propellant rocket motors
  1.2 Conditional representation of differential equations
    1.2.1 Conditional representation of ODE and PDE
    1.2.2 Conditional representation of DDE
  1.3 Initial value problem. Notion of solution
    1.3.1 Initial conditions (initial state)
    1.3.2 Notion of a solution
  1.4 Functional spaces
    1.4.1 Space C[−τ, 0]
    1.4.2 Space Q[−τ, 0]
    1.4.3 Space Q[−τ, 0)
    1.4.4 Space H = Rⁿ × Q[−τ, 0)
  1.5 Phase space H. State of time-delay system
  1.6 Solution representation
    1.6.1 Time-varying systems with delays
    1.6.2 Time-invariant systems with delays
  1.7 Characteristic equation and solution expansion into a series
    1.7.1 Characteristic equation and eigenvalues
    1.7.2 Expansion of solution into a series on elementary solutions

2 Stability theory
  2.1 Introduction
    2.1.1 Statement of the stability problem
    2.1.2 Eigenvalue criteria of asymptotic stability
    2.1.3 Stability via the fundamental matrix
    2.1.4 Stability with respect to a class of functions
  2.2 Lyapunov-Krasovskii functionals
    2.2.1 Structure of Lyapunov-Krasovskii quadratic functionals
    2.2.2 Elementary functionals and their properties
    2.2.3 Total derivative of functionals with respect to systems with delays
  2.3 Positiveness of functionals
    2.3.1 Definitions
    2.3.2 Sufficient conditions of positiveness
    2.3.3 Positiveness of functionals
  2.4 Stability via Lyapunov-Krasovskii functionals
    2.4.1 Stability conditions in the norm ‖ · ‖_H
    2.4.2 Stability conditions in the norm ‖ · ‖
    2.4.3 Converse theorem
    2.4.4 Examples
  2.5 Coefficient conditions of stability
    2.5.1 Linear system with discrete delay
    2.5.2 Linear system with distributed delays

3 Linear quadratic control
  3.1 Introduction
  3.2 Statement of the problem
  3.3 Explicit solutions of generalized Riccati equations
    3.3.1 Variant 1
    3.3.2 Variant 2
    3.3.3 Variant 3
  3.4 Solution of the exponential matrix equation
    3.4.1 Stationary solution method
    3.4.2 Gradient methods
  3.5 Design procedure
    3.5.1 Variants 1 and 2
    3.5.2 Variant 3
  3.6 Design case studies
    3.6.1 Example 1
    3.6.2 Example 2
    3.6.3 Example 3
    3.6.4 Example 4
    3.6.5 Example 5: Wind tunnel model
    3.6.6 Example 6: Combustion stability in liquid propellant rocket motors

4 Numerical methods
  4.1 Introduction
  4.2 Elementary one-step methods
    4.2.1 Euler's method
    4.2.2 Implicit methods (extrapolation)
    4.2.3 Improved Euler's method
    4.2.4 Runge-Kutta-like methods
  4.3 Interpolation and extrapolation of the model pre-history
    4.3.1 Interpolational operators
    4.3.2 Extrapolational operators
    4.3.3 Interpolation-extrapolation operator
  4.4 Explicit Runge-Kutta-like methods
  4.5 Approximation orders of ERK-like methods
  4.6 Automatic step size control
    4.6.1 Richardson extrapolation
    4.6.2 Automatic step size control
    4.6.3 Embedded formulas

5 Appendix
  5.1 i-Smooth calculus of functionals
    5.1.1 Invariant derivative of functionals
    5.1.2 Examples
  5.2 Derivation of generalized Riccati equations
  5.3 Explicit solutions of GREs (proofs of theorems)
    5.3.1 Proof of Theorem 3.2
    5.3.2 Proof of Theorem 3.3
    5.3.3 Proof of Theorem 3.4
  5.4 Proof of Theorem 1.1 (Solution representation)

Bibliography

Index
Preface
Effective control and numerical methods, and the corresponding software, have been developed for the analysis and simulation of various classes of ordinary differential equations (ODE) and partial differential equations (PDE). Progress in this direction has led to the wide practical application of these types of equations. Another class of differential equations is represented by delay differential equations (DDE), also called systems with delays, time-delay systems, hereditary systems, or functional differential equations.
Delay differential equations are widely used for describing and mathematically modeling various processes and systems in different applied problems [3, 5, 1, 27, 32, 33, 34, 40, 50, 62, 63, 183, 91, 107, 108, 111, 127, 183].
Delay in dynamical systems can have several causes, for example: technological lag, signal transmission and information delay, incubation period (infectious diseases), time of mixing reactants (chemical kinetics), time of spreading drugs in a body (pharmaceutical kinetics), latent period (population dynamics), etc.
Though at present different theoretical aspects of time-delay theory (see, for example, [3, 1, 27, 32, 34, 50, 62, 63, 67, 72, 73, 183, 91, 107, 111, 127] and references therein) are developed with almost the same completeness as the corresponding parts of ODE theory, the practical implementation of many methods is very difficult because of the infinite-dimensional nature of systems with delays.
It is also necessary to note that, unlike ODE, even for linear DDE there are no methods for finding solutions in explicit form, and the absence of generally available general-purpose software packages for simulating such systems is a big obstacle to the analysis and application of time-delay systems.
In this book we try to fill this gap.
The main aim of the book is to present new constructive methods of DDE theory and to give readers practical tools for the analysis, control design and simulation of linear systems with delays¹.
The main outstanding features of this book are the following:
1. on the basis of i-smooth analysis we give a complete description of the structure and properties of quadratic Lyapunov-Krasovskii functionals²;
2. we describe a new control design technique for systems with delays, based on an explicit form of solutions of linear quadratic control problems;
3. we present new numerical algorithms for simulating DDE.
Acknowledgements
N. N. Krasovskii, A. B. Lozhnikov, Yu. F. Dolgii, A. I. Korotkii, O. V. Onegova, M. V. Zyryanov, Young Soo Moon, Soo Hee Han.
Research was supported by the Russian Foundation for Basic Research (projects 08-01-00141, 14-01-00065, 14-01-00477, 13-01-00110), the program "Fundamental Sciences for Medicine" of the Presidium of the Russian Academy of Sciences, and the Ural-Siberia interdisciplinary project.
¹ The present volume is devoted to linear time-delay system theory. We plan to prepare a special volume devoted to the analysis of nonlinear systems with delays.
² Including properties of positiveness, and a constructive presentation of the total derivative of functionals with respect to time-delay systems.
Chapter 1

Linear time-delay systems

1.1 Introduction

1.1.1 Linear systems with delays

In this book we consider methods of analysis, control and computer simulation of linear systems with delays

ẋ(t) = A(t) x(t) + Aτ(t) x(t − τ(t)) + ∫_{−τ(t)}^{0} G(t, s) x(t + s) ds + u,   (1.1)

where A(t), Aτ(t) are n × n matrices with piece-wise continuous elements, G(t, s) is an n × n matrix with piece-wise continuous elements on R × [−τ, 0], u is a given n-dimensional vector-function, τ(t) : R → [0, τ] is a continuous function, and τ is a positive constant.
Much attention will be paid to the special class of linear time-invariant systems

ẋ(t) = A x(t) + Aτ x(t − τ) + ∫_{−τ}^{0} G(s) x(t + s) ds + u,   (1.2)

where A, Aτ are n × n constant matrices, G(s) is an n × n matrix with piece-wise continuous elements on [−τ, 0], and τ is a positive constant¹.

¹ I.e. in this case τ(t) ≡ τ.
Usually we will consider u as the vector of control parameters. There are two possible variants:
1) u = u(t) is a function of time t;
2) u depends on the current and previous states of the system, for example,

u = C x(t) + ∫_{−τ}^{0} D(s) x(t + s) ds.   (1.3)
Consider some models of control systems with delays.
1.1.2 Wind tunnel model

A linearized model of the high-speed closed-air unit wind tunnel is [134, 135]

ẋ1(t) = −a x1(t) + a k x2(t − τ),
ẋ2(t) = x3(t),   (1.4)
ẋ3(t) = −ω² x2(t) − 2ξω x3(t) + ω² u3(t),

with a = 1/1.964, k = −0.117, ω = 6, ξ = 0.8, τ = 0.33 s.
The state variables x1, x2, x3 represent deviations from a chosen operating point (equilibrium point) of the following quantities: x1 = Mach number, x2 = actuator position (guide vane angle in a driving fan), x3 = actuator rate. The delay represents the transport time between the fan and the test section.
The system can be written in the matrix form

ẋ(t) = A0 x(t) + Aτ x(t − τ) + B u(t),   (1.5)
where

A0 = [ −a    0      0
        0    0      1
        0   −ω²   −2ξω ],

Aτ = [ 0   ak   0
       0    0   0
       0    0   0 ],

B = [ 0
      0
      ω² ].
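For numerical experiments the data of (1.5) can be entered directly from the values given above. The following sketch (assuming NumPy; the variable names are ours) assembles the matrices A0, Aτ and B of the wind tunnel model.

```python
import numpy as np

# Parameter values of the wind tunnel model (1.4)-(1.5).
a, k, omega, xi, tau = 1 / 1.964, -0.117, 6.0, 0.8, 0.33

A0 = np.array([[-a, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, -omega**2, -2 * xi * omega]])
Atau = np.array([[0.0, a * k, 0.0],
                 [0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [omega**2]])
```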
1.1.3 Combustion stability in liquid propellant rocket motors

A linearized version of the feed system and combustion chamber equations, assuming nonsteady flow, is given by²

φ̇(t) = (γ − 1) φ(t) − γ φ(t − δ) + μ(t − δ),
μ̇1(t) = (1/(ξJ)) [ −ψ(t) + (p0 − p1)/(2Δp) ],
μ̇(t) = (1/((1 − ξ)J)) [ −μ(t) + ψ(t) − P φ(t) ],   (1.6)
ψ̇(t) = (1/E) [ μ1(t) − μ(t) ].

Here
φ(t) = fractional variation of pressure in the combustion chamber,
t is the unit of time normalized with the gas residence time θg in steady operation,
τ̃ = value of the time lag in steady operation,
p̃ = pressure in the combustion chamber in steady operation, with τ̃ p̃^γ = const for some number γ,
δ = τ̃ / θg,
μ(t) = fractional variation of the injection and burning rate,
ψ(t) = relative variation of p1,
p1 = instantaneous pressure at the place in the feeding line where the capacitance representing the elasticity is located,
ξ = fractional length for the constant pressure supply,
J = inertial parameter of the line,
P = pressure drop parameter,
μ1(t) = fractional variation of the instantaneous mass flow upstream of the capacitance,
Δp = injector pressure drop in steady operation,
p0 = regulated gas pressure for the constant pressure supply,
E = elasticity parameter of the line.

² The example is adapted from [36, 58].

For our purpose we have taken
u = (p0 − p1)/(2Δp)
as a control variable and, guided by [36], have adopted the following representative numerical values:
γ = 0.8, ξ = 0.5, δ = 1, J = 2, P = 1, E = 1.
This gives, for x(t) = (φ(t), μ1(t), μ(t), ψ(t)),

ẋ(t) = A0 x(t) + Aτ x(t − 1) + B u(t),   (1.7)

where

A0 = [ −0.2   0    0    0
         0    0    0   −1
        −1    0   −1    1
         0    1   −1    0 ],

Aτ = [ −0.8   0   1   0
         0    0   0   0
         0    0   0   0
         0    0   0   0 ],

B = [ 0
      1
      0
      0 ].
The system (1.7) has two roots with positive real part:
λ1,2 = 0.11255 ± 1.52015 i.
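This claim can be checked numerically. The sketch below (assuming NumPy and SciPy; the starting guess is ours) searches for a zero of the characteristic quasipolynomial det(λI − A0 − Aτ e^{−λ}) of system (1.7) near the reported value; the characteristic equation itself is introduced in Section 1.7.

```python
import numpy as np
from scipy.optimize import fsolve

A0 = np.array([[-0.2, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, -1.0],
               [-1.0, 0.0, -1.0, 1.0],
               [0.0, 1.0, -1.0, 0.0]])
Atau = np.array([[-0.8, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])

def char_eq(v):
    # Real and imaginary parts of det(lambda*I - A0 - Atau*exp(-lambda)), delay delta = 1.
    lam = complex(v[0], v[1])
    d = np.linalg.det(lam * np.eye(4) - A0 - Atau * np.exp(-lam))
    return [d.real, d.imag]

root = fsolve(char_eq, x0=[0.1, 1.5])
print(root)   # expected to converge to approximately [0.11255, 1.52015]
```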
1.2 Conditional representation of differential equations

1.2.1 Conditional representation of ODE and PDE

Let us recall that for the ODE
ẋ(t) = g(t, x(t)),   (1.8)
the conditional representation is
ẋ = g(t, x),   (1.9)
i.e. the argument t is not pointed out in the state variable x(t).
The conditional representation of the partial differential equation
∂y(t, x)/∂t = a ∂²y(t, x)/∂x²
is
∂y/∂t = a ∂²y/∂x²,   (1.10)
i.e. the arguments t and x are not pointed out in the function y(t, x).
Thus, in order to obtain the conditional representation of an ODE it is necessary to make the following substitutions in the equation:
x(t) is replaced by x,
x′(t) is replaced by x′.   (1.11)

Example 1.1. The linear control ODE
x′(t) = a(t)x(t) + u(t)
can be written in the conditional form as
x′ = a(t)x + u(t);
note that we omit the variable t only in the state variable x(t), but not in the coefficient a(t) and the control u(t). One can also omit t in the control variable u(t); in this case the conditional representation is
x′ = a(t)x + u.

Remark 1.1. It is necessary to emphasize that the conditional representation is very useful for describing local properties of differential equations and for the application of geometrical language and methods.
1.2.2 Conditional representation of DDE

Let us introduce the conditional representation of systems with delays (1.1). First of all it is necessary to note that differential equations with time lags differ from ODE by the presence of the point x(t − τ) and/or the segment x(t + s), −τ ≤ s < 0, which characterize the previous history (pre-history) of the solution x(t).
The conditional representation of time-delay systems (1.1) can be introduced in the following way. In H an element of the trajectory of the system is written as a pair xt ≡ {x(t); x(t + s), −τ ≤ s < 0} ∈ H. Then, using the notation
xt ≡ {x(t); x(t + s), −τ ≤ s < 0} ≡ {x(t); x(t + ·)} ≡ {x, y(·)}t,   (1.12)
we obtain the conditional representation
ẋ = A(t) x + Aτ(t) y(−τ(t)) + ∫_{−τ(t)}^{0} G(t, s) y(s) ds + u   (1.13)
for system (1.1) in the space H.
Correspondingly, the conditional representation of the time-invariant system (1.2) is
ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds + u.   (1.14)
Conditional representations (1.9), (1.13) and (1.14) have no "physical sense"; formulas (1.13) and (1.14) are understood as systems (1.1) and (1.2) considered in the phase space H. It is convenient to use representations (1.9), (1.13) and (1.14) for investigating local properties of differential equations.
Remark 1.2. It is necessary to emphasize that in {x, y(·)}t (see (1.13)) we use different letters for denoting the current vector x(t) = x and the function-delay x(t + ·) = y(·), because they play different roles in the dynamics of time-delay systems. One can use a fruitful analogy between a train and the presentation {x, y(·)}t of an element of the trajectory of the system: the current point x plays the role of a locomotive and the function-delay y(·) represents the trucks which follow the locomotive. This is not only an imaginary analogy; in many examples the solutions have some kind of "inertness" because of the presence of delay terms.

So, in order to obtain the conditional representation of a DDE we make the following replacements in the equation:
1) substitution (1.11) for the current point x(t);
2) the substitution
x(t + s) is replaced by y(s), −τ ≤ s < 0,   (1.15)
for the pre-history x(t + s), −τ ≤ s < 0.
In particular,
x(t − τ(t)) is replaced by y(−τ(t)),
x(t − τ) is replaced by y(−τ),
x(t + s) is replaced by y(s) for −τ ≤ s < 0.
Remark 1.3. Employment of the conditional representations (1.13) and (1.14) allows one to clearly separate, in the structure of time-delay systems, the finite-dimensional component x and the infinite-dimensional component y(·), and to formulate results in such a way that if the function-delay y(·) disappears then the results turn into the corresponding results of ODE theory (up to notation). This allows one to carry out a methodological analysis of results and methods of the theory of differential equations with deviating arguments.
1.3 Initial value problem. Notion of solution

1.3.1 Initial conditions (initial state)

In the present section we consider the statement of the initial value problem for time-delay systems. Recall that for the ODE (1.9) the initial condition has the form (t0, x0), where t0 is the initial time moment and x0 is the initial state.
In order to define the solution x(t) of the time-delay system (1.13) (or (1.14)) it is necessary to know an initial point x0 and an initial function y0(·), i.e. at the initial time moment t0 the solution x(·) should satisfy the initial conditions
x(t0) = x0,   (1.16)
x(t0 + s) = y0(s), −τ ≤ s < 0.   (1.17)
So an initial state of a system with delays will be considered as a pair h0 = {x0, y0(·)}.

Remark 1.4. In the general case an initial point x0 and an initial function y0(·) are not related, i.e. they can be chosen independently.

Thus, we can formulate the initial value problem for system (1.13): for a given initial state (position) h0 = {x0, y0(·)} and an initial time moment t0, find the solution x(t), t ≥ t0 − τ, of system (1.13) which satisfies the initial conditions (1.16), (1.17).
Now we can give the definition of a solution of an initial value problem, but first it is necessary to discuss which functions y0(·) we will consider as initial functions.
In many papers and books the classes of continuous and measurable initial functions are considered.
These classes of functions are very useful for investigating different aspects of time-delay systems; however:
1) in many applied problems initial conditions are discontinuous, so in this case the class of continuous initial functions is insufficient;
2) the consideration of systems with delays with respect to measurable initial functions requires the application of mathematical methods which could be complicated for engineers and applied mathematicians; besides, measurable initial conditions are very rare in practical problems for time-delay systems.
So in the present book we develop different aspects of the theory of time-delay systems for initial conditions h0 = {x0, y0(·)} with piece-wise continuous functions y0(·) = {y0(s), −τ ≤ s < 0}, because this class of initial functions covers almost all admissible initial conditions.
It is necessary to emphasize that, in the first place, piece-wise continuous functions include the class of continuous functions and, in the second place, measurable functions can be approximated by piece-wise continuous functions.
1.3.2 Notion of a solution

Now let us define what we will understand by a solution of a time-delay system.

Definition 1.1. The solution of system (1.13) corresponding to an initial time moment t0 and an initial state h0 = {x0, y0(·)} is the function x(t) = x(t; t0, h0) which satisfies the following conditions:
1) x(t) is defined on some interval [t0 − τ, t0 + κ), κ > 0;
2) x(t) satisfies the initial conditions (1.16)–(1.17);
3) x(t) is continuous on [t0, t0 + κ) and has a piece-wise continuous derivative on this time interval;
4) x(t) satisfies equation (1.13) on [t0, t0 + κ)³.

³ At points of discontinuity of the derivative x′(t), equation (1.13) should be satisfied for the right-hand side derivative x′(t + 0).

It is necessary to make some comments concerning this definition.
1) The initial state h0 = {x0, y0(·)} can be discontinuous, so the corresponding solution can be discontinuous on the initial time interval [t0 − τ, t0]. However, the solution should be continuous for t ∈ [t0, t0 + κ).
2) The derivative x′(t) of the solution can have discontinuities on [t0, t0 + κ), but, and this is very important, we require that at points of discontinuity equation (1.13) is satisfied for the right-hand side derivative x′(t + 0).
3) It is necessary to note that at the initial time moment t0 the derivative of the solution x(t) can be discontinuous; in this case it is supposed that at t0 equation (1.13) is satisfied for the right-hand side derivative x′(t0 + 0).
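As an illustration of the definition, the following sketch (a minimal fixed-step Euler scheme, assuming NumPy; the scalar equation ẋ(t) = −x(t − τ) and its initial data are our illustrative choices) integrates an initial value problem with a piece-wise continuous initial function: the computed trajectory may jump on [t0 − τ, t0], but it is continuous for t ≥ t0, in agreement with Definition 1.1.

```python
import numpy as np

tau, dt, T = 1.0, 1e-3, 5.0
lag = int(tau / dt)

# Piece-wise continuous initial function y0(s) on [-tau, 0) and an unrelated initial point x0.
s = np.arange(-tau, 0, dt)
y0 = np.where(s < -0.5, 1.0, -1.0)           # jump at s = -0.5
x0 = 0.3                                     # x(t0) need not match y0 (Remark 1.4)

hist = list(y0)                              # stored samples of the solution for past times
x = x0
traj = [x]
for k in range(int(T / dt)):
    x_delayed = hist[-lag]                   # x(t - tau)
    hist.append(x)
    x = x + dt * (-x_delayed)                # Euler step of dx/dt = -x(t - tau)
    traj.append(x)

# traj contains x(t) on [t0, t0 + T]: no jumps appear there, although y0 and x0 are discontinuous.
print(traj[:3], traj[-3:])
```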
1.4 Functional spaces

Contemporary time-delay system theory is developed on the basis of a functional approach to the description and investigation of such equations: segments of solutions are considered as elements of some functional space. This approach will be discussed in the following sections.
In the present section we describe the functional spaces C[−τ, 0], Q[−τ, 0] and H = Rⁿ × Q[−τ, 0) which will be used for the realization of this approach.
1.4.1 Space C[−τ, 0]

C[−τ, 0] is the set of n-dimensional functions continuous on [−τ, 0]. For any two functions φ(·), ψ(·) ∈ C[−τ, 0] the distance is defined as
‖φ(·) − ψ(·)‖_C = max_{−τ ≤ s ≤ 0} ‖φ(s) − ψ(s)‖.

1.4.2 Space Q[−τ, 0]

Q[−τ, 0] is the set of n-dimensional functions q(s), −τ ≤ s ≤ 0, with the properties:
1) q(·) is continuous on the interval [−τ, 0] except, possibly, at a finite set of points of discontinuity of the first kind (at which q(·) is continuous on the right);
2) q(·) is bounded on [−τ, 0].
Let us make some remarks:
1) the term discontinuity of the first kind at a point s∗ ∈ (−τ, 0) means that at this point the function q(·) has finite but unequal left-side and right-side limits⁴;
2) the term continuous on the right means that at a point s∗ of discontinuity we set
q(s∗) = lim_{s→s∗+0} q(s),
i.e. at this point the function takes the value q(s∗) equal to the right-side limit;
3) different functions of Q[−τ, 0] can have different points of discontinuity.
The distance between two elements q⁽¹⁾(·), q⁽²⁾(·) of this space is defined as
‖q⁽¹⁾(·) − q⁽²⁾(·)‖_Q = sup_{−τ ≤ s ≤ 0} ‖q⁽¹⁾(s) − q⁽²⁾(s)‖.
Every function continuous on [−τ, 0] belongs to Q[−τ, 0], hence C[−τ, 0] ⊂ Q[−τ, 0].

⁴ These points are called points of discontinuity.
1.4.3 Space Q[−τ, 0)

The set Q[−τ, 0) consists of n-dimensional functions y(s), −τ ≤ s < 0, with the properties:
1) y(·) is continuous on the half-interval [−τ, 0) except, possibly, at a finite set of points of discontinuity of the first kind (at which y(·) is continuous on the right);
2) y(·) is bounded on [−τ, 0);
3) there exists a finite left-side limit at zero, lim_{s→0−} y(s).

Remark 1.5. For example, the function y*(s) = sin(1/s), −τ ≤ s < 0, does not belong to Q[−τ, 0) because the left-side limit lim_{s→0−} y*(s) does not exist. The function y_*(s) = 1/s, −τ ≤ s < 0, also does not belong to Q[−τ, 0) because it is unbounded at zero.

The distance between two elements y⁽¹⁾(·), y⁽²⁾(·) ∈ Q[−τ, 0) is defined as
‖y⁽¹⁾(·) − y⁽²⁾(·)‖_τ = sup_{−τ ≤ s < 0} ‖y⁽¹⁾(s) − y⁽²⁾(s)‖.
It is necessary to note that functions of the space Q[−τ, 0) are restrictions of functions of the space Q[−τ, 0] to the half-interval [−τ, 0), i.e. they can be obtained from functions of the space Q[−τ, 0] by removing their values at zero. For example, for a function q(·) ∈ Q[−τ, 0] the corresponding restriction to the half-interval [−τ, 0) is the function y(·) = {q(s), −τ ≤ s < 0}, which satisfies all the above-mentioned properties of elements of Q[−τ, 0).
However, there is no one-to-one correspondence between the spaces Q[−τ, 0] and Q[−τ, 0)! The space Q[−τ, 0] is broader than Q[−τ, 0): to one element y(·) ∈ Q[−τ, 0) there corresponds an infinite number of functions
q_x(·) = { x for s = 0,  y(s) for −τ ≤ s < 0 }
of the space Q[−τ, 0], because for every x ∈ Rⁿ the function q_x(·) is an element of Q[−τ, 0].
1.4.4 Space H = Rⁿ × Q[−τ, 0)

The space Rⁿ × Q[−τ, 0) is the Cartesian product of the finite-dimensional space Rⁿ and the infinite-dimensional space Q[−τ, 0), i.e. elements of this space are pairs {x, y(·)}, where x is a vector from Rⁿ and y(·) is a function from Q[−τ, 0).
For convenience we denote the space Rⁿ × Q[−τ, 0) by one letter H, i.e. H = Rⁿ × Q[−τ, 0), and correspondingly its elements are also denoted by one letter, h = {x, y(·)}.
The distance between two elements h⁽¹⁾ = {x⁽¹⁾, y⁽¹⁾(·)}, h⁽²⁾ = {x⁽²⁾, y⁽²⁾(·)} ∈ H is defined as
‖h⁽¹⁾ − h⁽²⁾‖_H = max{ ‖x⁽¹⁾ − x⁽²⁾‖, ‖y⁽¹⁾ − y⁽²⁾‖_τ }.
One can see that the spaces H and Q[−τ, 0] are isometric, that is, there exists a one-to-one mapping A : Q[−τ, 0] → H which preserves the distance between elements. For example, to a function q(·) ∈ Q[−τ, 0] there corresponds the pair {q(0); q(s), −τ ≤ s < 0} ∈ H, and vice versa, to an element {x, y(·)} ∈ H there corresponds the function
q(·) = { x for s = 0,  y(s) for −τ ≤ s < 0 },
which is an element of Q[−τ, 0]. Hence, from the point of view of their mathematical properties the two spaces H and Q[−τ, 0] are identical, i.e. they are one and the same space! So sometimes we will also use the term "function" for a pair {x, y(·)} ∈ H.
However, for the description of properties of systems with delays it will be more convenient to use the presentation H = Rⁿ × Q[−τ, 0).
1.5 Phase space H. State of time-delay system

The phase space of a dynamical system is the set of all possible instantaneous states of the system.
In the case of ODE attention is usually not paid to the notion of the phase space because it is almost always Rⁿ. But infinite-dimensional systems (functional differential equations, partial differential equations) can be considered in different phase spaces, so the choice of a suitable phase space plays a very important role in this case.
For the time-delay system (1.13) the phase space is usually taken as the set of definition of the right-hand side of the system, and it is defined in its turn by the class of admissible initial functions.
In the present book we consider systems with respect to piece-wise continuous initial conditions. So it will be convenient to take the set H = Rⁿ × Q[−τ, 0) as the phase space of systems with delays. In the phase space H the state at a time moment t of a system with delays is the pair {x(t); x(t + ·)} of the current point x(t) and the function-delay x(t + ·) = {x(t + s), −τ ≤ s < 0}. We will denote this pair as
xt = {x(t); x(t + ·)}.   (1.18)
Note that (1.18) is an element of the space H = Rⁿ × Q[−τ, 0).

Remark 1.6. As was noted in the previous section, the spaces H and Q[−τ, 0] are isometric. Nevertheless, for our goals it is convenient to use the phase space in the form H = Rⁿ × Q[−τ, 0) in order to explicitly distinguish the finite-dimensional and infinite-dimensional components in the structure of systems and functionals. This structure of the space H (as the Cartesian product of a finite-dimensional and an infinite-dimensional space) allows one to clarify the structure of systems with delays and to give more precise descriptions of some constructions and properties.

Remark 1.7. Though we chose H as the basic phase space for describing and investigating time-delay systems, in many cases it is nevertheless convenient to use (and we will do so!) the space C[−τ, 0] for describing some properties of nonlinear systems. The space C[−τ, 0] is convenient for investigating and describing asymptotic properties of time-delay systems, and the space H is convenient for investigating local properties of systems.

Remark 1.8. Linear time-delay systems are often considered in the phase space Rⁿ × L2[−τ, 0), because this space is a Hilbert space and has many constructions convenient for describing and investigating linear systems.

1.6 Solution representation
In this section we give special representations of solutions of
linear time-varying and time-invariant systems with delays.
1.6.1 Time-varying systems with delays

Consider a linear system
ẋ = A(t) x + Aτ(t) y(−τ) + ∫_{−τ}^{0} G(t, s) y(s) ds + u(t),   (1.19)
where A(t), Aτ(t) and u(t) are n × n, n × n and n × 1 matrices with piece-wise continuous coefficients on R, and G(t, s) is an n × n matrix with piece-wise continuous elements on R × [−τ, 0].
Similarly to ODE, the solutions of system (1.19) can be expressed in terms of the fundamental matrix⁵

F[t, ξ] = [ f11(t, ξ)  f12(t, ξ)  ...  f1n(t, ξ)
            f21(t, ξ)  f22(t, ξ)  ...  f2n(t, ξ)
            ...
            fn1(t, ξ)  fn2(t, ξ)  ...  fnn(t, ξ) ],   (1.20)

which is the solution of the matrix delay differential equation
∂F[t, ξ]/∂t = A(t) F[t, ξ] + Aτ(t) F[t − τ, ξ] + ∫_{−τ}^{0} G(t, s) F[t + s, ξ] ds,  t > ξ,   (1.21)
under the conditions
F[ξ, ξ] = I,  F[t, ξ] = 0 for t < ξ.   (1.22)

⁵ Also called the state transition matrix.
Theorem 1.1. The solution x(t) = x(t; t0, h0) of system (1.19) corresponding to an initial condition
t0 ∈ R,  h0 = {x0, y0(·)} ∈ H = Rⁿ × Q[−τ, 0)   (1.23)
has the form

x(t) = F[t, t0] x0 + ∫_{−τ}^{0} F[t, t0 + τ + s] Aτ(t0 + τ + s) y0(s) ds
      + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t, t0 + s − ν] G(t0 + s − ν, ν) dν ] y0(s) ds
      + ∫_{t0}^{t} F[t, ρ] u(ρ) dρ.   (1.24)

The proof of the theorem is given in the Appendix.
Example 1.2. Let us find the fundamental matrix of the system
ẋ1 = aτ(t) y2(−τ),
ẋ2 = 0,   (1.25)
where aτ(t) is a function continuous on R.
The corresponding matrices A(t) and Aτ(t) are

A = [ 0  0
      0  0 ],   Aτ = [ 0  aτ(t)
                       0    0   ].

The fundamental matrix

F[t, ξ] = [ f11(t, ξ)  f12(t, ξ)
            f21(t, ξ)  f22(t, ξ) ]

should satisfy the system of differential equations

∂F[t, ξ]/∂t = [ ḟ11(t, ξ)  ḟ12(t, ξ)
                ḟ21(t, ξ)  ḟ22(t, ξ) ]
            = [ 0  aτ(t)
                0    0   ] [ f11(t − τ, ξ)  f12(t − τ, ξ)
                             f21(t − τ, ξ)  f22(t − τ, ξ) ],

where the dot "·" denotes the derivative with respect to the variable t.
Thus we have the system of 4 differential equations
ḟ11(t, ξ) = aτ(t) f21(t − τ, ξ),
ḟ12(t, ξ) = aτ(t) f22(t − τ, ξ),
ḟ21(t, ξ) = 0,
ḟ22(t, ξ) = 0,
with the initial conditions (see (1.22))
f11(ξ, ξ) = f22(ξ, ξ) = 1,
f21(ξ, ξ) = f12(ξ, ξ) = 0,
fij(t, ξ) = 0 for t < ξ,  i, j = 1, 2.
Calculating successively we find
f21(t, ξ) = 0,
f22(t, ξ) = 1 for t ≥ ξ,  0 for t < ξ,
f11(t, ξ) = 1 for t ≥ ξ,  0 for t < ξ,
f12(t, ξ) = ∫_{ξ+τ}^{t} aτ(ν) dν for t ≥ ξ + τ,  0 for t < ξ + τ,
hence

F[t, ξ] = [ 0  0
            0  0 ]  for t < ξ,

F[t, ξ] = [ 1  0
            0  1 ]  for t ∈ [ξ, ξ + τ],

F[t, ξ] = [ 1  ∫_{ξ+τ}^{t} aτ(ν) dν
            0            1          ]  for t > ξ + τ.
1.6.2 Time-invariant systems with delays

For the linear time-invariant system
ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds + u(t)   (1.26)
the fundamental matrix F[t, ξ] has the property
F[t, ξ] = F[t − ξ],
and the solution of the initial value problem (1.26), (1.23) has the form

x(t) = F[t − t0] x0 + ∫_{−τ}^{0} F[t − t0 − τ − s] Aτ y0(s) ds
      + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t − t0 − s + ν] G(ν) dν ] y0(s) ds
      + ∫_{t0}^{t} F[t − ξ] u(ξ) dξ.   (1.27)

Taking into account that for time-invariant systems we can always take the initial moment t0 = 0, in this
case the fundamental matrix

F[t] = [ f11(t)  f12(t)  ...  f1n(t)
         f21(t)  f22(t)  ...  f2n(t)
         ...
         fn1(t)  fn2(t)  ...  fnn(t) ]   (1.28)

is the solution of the matrix delay differential equation
dF[t]/dt = A F[t] + Aτ F[t − τ] + ∫_{−τ}^{0} G(s) F[t + s] ds,  t > 0,   (1.29)
under the conditions
F[0] = I,  F[t] = 0 for t < 0.   (1.30)
Hence the solution corresponding to an initial pair {x0, y0(s)} (with t0 = 0) can be presented in the form

x(t) = F[t] x0 + ∫_{−τ}^{0} F[t − τ − s] Aτ y0(s) ds
      + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t − s + ν] G(ν) dν ] y0(s) ds
      + ∫_{0}^{t} F[t − ξ] u(ξ) dξ.   (1.31)
Example 1.3. Let us find the fundamental matrix of the system
ẋ1 = x2,
ẋ2 = −y1(−τ).   (1.32)
For this system the corresponding constant matrices A and Aτ have the form

A = [ 0  1
      0  0 ],   Aτ = [  0  0
                       −1  0 ].

The fundamental matrix

F[t − ξ] = [ f11(t − ξ)  f12(t − ξ)
             f21(t − ξ)  f22(t − ξ) ]

is the solution of the following system of differential equations:

∂F[t − ξ]/∂t = [ ḟ11(t − ξ)  ḟ12(t − ξ)
                 ḟ21(t − ξ)  ḟ22(t − ξ) ]
             = [ 0  1
                 0  0 ] [ f11(t − ξ)  f12(t − ξ)
                          f21(t − ξ)  f22(t − ξ) ]
             + [  0  0
                 −1  0 ] [ f11(t − ξ − τ)  f12(t − ξ − τ)
                           f21(t − ξ − τ)  f22(t − ξ − τ) ].
So, in order to find the elements of the fundamental matrix F it is necessary to solve the system of 4 differential equations with delays
ḟ11(t − ξ) = f21(t − ξ),
ḟ12(t − ξ) = f22(t − ξ),   (1.33)
ḟ21(t − ξ) = −f11(t − ξ − τ),
ḟ22(t − ξ) = −f12(t − ξ − τ),
with respect to the initial conditions (see (1.22))
f11(0) = f22(0) = 1,  f12(0) = f21(0) = 0,   (1.34)
fij(t − ξ) = 0 for t < ξ,  i, j = 1, 2.   (1.35)
Let us solve this system using the step method.
STEP 1. Because of condition (1.35), on the time interval [ξ, ξ + τ] system (1.33) has the form
ḟ11(t − ξ) = f21(t − ξ),
ḟ12(t − ξ) = f22(t − ξ),
ḟ21(t − ξ) = 0,
ḟ22(t − ξ) = 0.
Taking into account the initial conditions (1.34) we obtain
f21(t − ξ) = 0,  f22(t − ξ) = 1  for t ∈ [ξ, ξ + τ],
and hence
f11(t − ξ) = 1,  f12(t − ξ) = t − ξ  for t ∈ [ξ, ξ + τ].
Thus

F[t − ξ] = [ 1  t − ξ
             0    1   ]  for t ∈ [ξ, ξ + τ].
STEP 2. On the next time interval [ξ + τ, ξ + 2τ] system (1.33) has the form
ḟ11(t − ξ) = f21(t − ξ),
ḟ12(t − ξ) = f22(t − ξ),
ḟ21(t − ξ) = −1,
ḟ22(t − ξ) = −(t − ξ − τ),
with the initial conditions
f11(τ) = 1,  f12(τ) = τ,  f21(τ) = 0,  f22(τ) = 1.
In the general case the coefficients of the fundamental matrix F have the form
f11(t − ξ) = f22(t − ξ) = Σ_{m=0}^{κ[(t−ξ)/τ]} (−1)^m (t − ξ − mτ)^{2m} / (2m)!,
where κ[t] denotes the integer part of t,
f12(t − ξ) = Σ_{m=0}^{κ[(t−ξ)/τ]} (−1)^m (t − ξ − mτ)^{2m+1} / (2m + 1)!,
and f21(t − ξ) = ḟ11(t − ξ) = Σ_{m=1}^{κ[(t−ξ)/τ]} (−1)^m (t − ξ − mτ)^{2m−1} / (2m − 1)!.
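The closed-form coefficients above can be cross-checked numerically. The sketch below (assuming NumPy; a simple fixed-step Euler scheme is used instead of the exact step method) integrates the first column (f11, f21) of (1.33) with ξ = 0 and compares f11 at a test time with the series formula.

```python
import numpy as np
from math import factorial

tau, dt = 1.0, 1e-4
N, lag = int(3 * tau / dt), int(tau / dt)    # integrate on [0, 3*tau], take xi = 0

F1 = np.zeros((N + 1, 2))                    # first column (f11, f21) of F[t]
F1[0] = [1.0, 0.0]                           # F[0] = I and F[t] = 0 for t < 0
for k in range(N):
    f11_delayed = F1[k - lag, 0] if k >= lag else 0.0
    F1[k + 1] = F1[k] + dt * np.array([F1[k, 1], -f11_delayed])   # first column of (1.33)

def f11_series(t):
    # Series formula for f11(t) derived above.
    return sum((-1) ** m * (t - m * tau) ** (2 * m) / factorial(2 * m)
               for m in range(int(t // tau) + 1))

t_check = 2.5 * tau
print(F1[int(t_check / dt), 0], f11_series(t_check))   # the two values should nearly coincide
```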
1.7 Characteristic equation and solution expansion into a series

1.7.1 Characteristic equation and eigenvalues

Consider a linear time-invariant system with delays
ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds,   (1.36)
where A, Aτ are constant n × n matrices and G(s) is an n × n matrix with piece-wise continuous elements on [−τ, 0].
Similarly to the ODE case, let us look for solutions of (1.36) in the exponential form
x(t) = e^{λt} C,   (1.37)
where λ is a complex number and C ∈ Rⁿ. Substituting (1.37) into (1.36) gives
λ e^{λt} C = A e^{λt} C + Aτ e^{λ(t−τ)} C + ∫_{−τ}^{0} G(s) e^{λ(t+s)} C ds.
Cancelling the factor e^{λt} and rearranging terms we obtain
[ A − λ I_{n×n} + Aτ e^{−λτ} + ∫_{−τ}^{0} G(s) e^{λs} ds ] C = 0
or
χ(λ) C = 0,   (1.38)
where
χ(λ) = A − λ I_{n×n} + Aτ e^{−λτ} + ∫_{−τ}^{0} G(s) e^{λs} ds.   (1.39)
A nonzero vector C satisfying (1.38) exists if and only if the matrix χ(λ) is singular, i.e.
det χ(λ) = 0.   (1.40)
This is the so-called characteristic equation. The determinant Δ(λ) = det χ(λ) is called the characteristic quasipolynomial (characteristic function).
A complex number λ = α + iβ which is a solution of the characteristic equation (1.40) is called an eigenvalue. The corresponding vector C ∈ Rⁿ satisfying (1.38) is called an eigenvector.
The dynamics of system (1.36) is completely defined by the roots⁶ of equation (1.40). However, unlike ODE, these roots can be found in explicit form only in some rare cases. Nevertheless, there are qualitative results concerning the distribution of the eigenvalues.

⁶ Characteristic roots.
Theorem 1.2. Either χ(λ) is a polynomial⁷ or χ(λ) has infinitely many roots λ1, λ2, ..., such that Re λk → −∞ as k → ∞.

⁷ Hence it has a finite number of roots.

Theorem 1.3. Let λ be an eigenvalue. Then
1) |λ| ≤ Var_{[−τ,0]} A if Re λ ≥ 0;
2) |λ| ≤ e^{−τ Re λ} Var_{[−τ,0]} A if Re λ ≤ 0.

Corollary 1.1. For every specific system (1.36) there exists a real number γ such that the characteristic function of system (1.36) has no zeros in the half-plane Re λ > γ.
1.7.2 Expansion of solution into a series on elementary solutions

We have already shown that for every characteristic root λ there exists a vector C ∈ Rⁿ such that the function e^{λt} C satisfies system (1.36) for every t ∈ R. If the root has multiplicity m > 1 then, in the general case, there can be solutions of the form
φ(t) = e^{λt} p(t),   (1.41)
where p(t) : R → Rⁿ is a polynomial of degree less than m.
The maximal number of linearly independent solutions of the form (1.41) corresponding to a characteristic root λ is equal to its multiplicity. We will call these solutions elementary solutions of system (1.36).
With every solution of system (1.36) one can associate a series [192, 193]
x(t) ∼ Σ_{k=1}^{∞} pk(t) e^{zk t},   (1.42)
where the zk, k = 1, 2, ..., are the poles of system (1.36) and pk(t) is a polynomial whose degree is 1 less than the multiplicity of the root zk. If for some Δ > 0 and α there are no poles of system (1.36) in the strip α − Δ < Re z < α + Δ, then the asymptotic formula holds:
x(t) = Σ_{Re zk > α} pk(t) e^{zk t} + O(e^{αt}).   (1.43)
From this result follows an important corollary: if the real parts of all eigenvalues are negative, i.e. Re zk < 0, then every solution of system (1.36) tends to zero.
Chapter 2

Stability theory

2.1 Introduction

The method of Lyapunov functions¹ is one of the most effective methods for the investigation of ODE dynamics. The efficiency of the Lyapunov function method for ODE is based on the fact that application of a Lyapunov function allows us to investigate stability of solutions without solving the corresponding ODE.
In the case of DDE the direct Lyapunov method was elaborated in [111, 112] in terms of infinite-dimensional Lyapunov-Krasovskii functionals.
In this chapter we
1) describe the general structure of quadratic Lyapunov-Krasovskii functionals;
2) derive the constructive formula for the total derivative of such functionals with respect to systems with delays;
3) present the basic theorems of the Lyapunov-Krasovskii functional method for investigating stability of systems with delays.

¹ This method is also called the direct or the second Lyapunov method.
2.1.1 Statement of the stability problem

In this chapter we consider linear time-invariant systems with delays
ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds,   (2.1)
h = {x, y(·)} ∈ H.
Obviously, system (2.1) has the zero solution x(t) ≡ 0. Further we will investigate the stability of this solution.
The origin (the zero element) of the space H is a stationary point of system (2.1), hence, generally speaking, we can identify the zero solution x(t) ≡ 0 with the origin of H. So in the following the terms "stability of the zero solution" and "stability of the origin" will be used as synonyms.
Further we will use the following definitions.

Definition 2.1. The zero solution x(t) ≡ 0 of system (2.1) is stable if for any positive ε there exists a positive δ such that if ‖h‖_H < δ then ‖x(t; t0, h)‖ ≤ ε for all t ≥ t0.

Definition 2.2. The zero solution x(t) ≡ 0 of system (2.1) is asymptotically stable if it is stable and
x(t; t∗, h) → 0 as t → ∞.

Definition 2.3. The zero solution x(t) ≡ 0 of system (2.1) is exponentially stable if there exist positive constants a and b such that for any (t∗, h) ∈ R × H
‖x(t; t∗, h)‖ ≤ a ‖h‖_H e^{−b(t−t∗)} for t ≥ t∗.

Remark 2.1. The interval [−τ, 0] is compact, so in all the above definitions one can use the functional norm ‖xt‖_H of the solutions instead of the finite-dimensional norm ‖x(t)‖.
Note that, using a suitable substitution, we can reduce the investigation of stability of an arbitrary solution of a specific DDE system to the investigation of stability of the zero solution of some "perturbed" DDE.
Moreover, if a solution of system (2.1) corresponding to some initial function is (asymptotically) stable, then a solution corresponding to any other initial function will also be (asymptotically) stable. Hence for linear DDE we can speak about (asymptotic) stability of the DDE system, and not only of a specific solution.
Also note the following useful proposition (A. Zverkin).

Theorem 2.1.
1) System (2.1) is stable if and only if for every (t0, h) ∈ R × H the corresponding solution x(t; t0, h) is bounded.
2) If for every (t0, h) ∈ R × H the corresponding solution x(t; t0, h) of system (2.1) tends to zero, then the system is asymptotically stable.
2.1.2 Eigenvalue criteria of asymptotic stability

As we already mentioned in the subsection "Expansion of solution into a series on elementary solutions", every solution of system (2.1) tends to zero if all eigenvalues have negative real parts. In other words, the condition
Re zk < 0
for all eigenvalues is a necessary and sufficient condition for asymptotic stability of system (2.1).
We can also note now that for linear DDE asymptotic stability and exponential stability are equivalent.
2.1.3 Stability via the fundamental matrix

At present there are no effective algorithms for computing the eigenvalues of linear systems with distributed delays in order to check stability.
In this subsection we discuss another method of practical verification of stability of the closed-loop system. The method is very simple to implement and consists of computing the fundamental matrix of the system. The fundamental matrix can be numerically calculated using the Time-delay system toolbox [4].
Consider the homogeneous time-invariant system
ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds.   (2.2)
We can formulate (see, for example, [76, 32]) the following stability conditions in terms of the fundamental matrix.

Theorem 2.2. System (2.2) is
1) stable if and only if there exists a constant k > 0 such that
‖F[t]‖_{n×n} ≤ k,  t ≥ 0;   (2.3)
2) asymptotically stable if and only if there exist constants k > 0 and α > 0 such that
‖F[t]‖_{n×n} ≤ k e^{−αt},  t ≥ 0.   (2.4)

The fundamental matrix can be found numerically, using the Time-delay system toolbox [4], as the solution of system (1.29), (1.30).
Remark 2.2. Thus, one can easily check stability (or instability) of system (1.26) by solving system (1.29), (1.30) numerically and verifying the corresponding property (2.3) or (2.4) of the matrix F[t].
Note that if at least one of the coefficients of the matrix F[t] is not uniformly bounded, then system (1.26) is unstable.
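As a sketch of the procedure in Remark 2.2 (assuming NumPy; the scalar equation, its coefficients and the window test are illustrative choices and not part of the toolbox mentioned above), one can integrate (1.29), (1.30) for ẋ = a x(t) + b x(t − τ) with an Euler scheme and inspect the envelope of |F[t]|.

```python
import numpy as np

a, b, tau = -1.0, 0.5, 1.0          # test coefficients; decay of F[t] is expected here
dt, T = 1e-3, 40.0
lag, n = int(tau / dt), int(T / dt)

F = np.zeros(n + 1)
F[0] = 1.0                          # conditions (1.30): F[0] = 1 and F[t] = 0 for t < 0
for k in range(n):
    delayed = F[k - lag] if k >= lag else 0.0
    F[k + 1] = F[k] + dt * (a * F[k] + b * delayed)

# Empirical check of (2.3)-(2.4): the maxima of |F[t]| over successive windows stay bounded
# and, for an asymptotically stable system, decrease roughly geometrically.
for t0 in (0.0, 10.0, 20.0, 30.0):
    window = slice(int(t0 / dt), int((t0 + 10.0) / dt))
    print(t0, np.max(np.abs(F[window])))
```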
2.1.4 Stability with respect to a class of functions
First of all it is necessary to note that, as emphasized by many authors (see, for example, [111, 107]), a complete and correct statement of a stability problem for a concrete system with delays should include a description of the class of admissible initial functions (initial disturbances).
In this case it is sufficient to consider stability of solutions of the specific time-delay system only with respect to admissible initial disturbances.

Remark 2.3. In [172] one can find an example of a time-delay system which is unstable with respect to the class of all continuous disturbances, but is stable with respect to a narrower class of admissible initial functions.

Of course, classes of admissible initial disturbances are different in different problems, so in general stability theory usually the class of all continuous or piece-wise continuous initial functions (disturbances) is considered. Nevertheless, in some problems such a class of initial functions can be superfluous.
Let L be a subset (a system of functions) of the space H.

Definition 2.4. System (2.1) is stable with respect to a class of functions L if for any h ∈ L the corresponding solution is bounded.

Definition 2.5. System (2.1) is asymptotically stable with respect to a class of functions L if
lim_{t→∞} x(t; h) = 0   (2.5)
for any h ∈ L.
Note that from the linearity of system (2.1) it follows that if the system is (asymptotically) stable with respect to a class L, then the system will also be (asymptotically) stable with respect to the space L∗ = span{L} spanned by L, and, moreover, the system will be (asymptotically) stable with respect to the class
L̄ = span ∪_{h∈L, t≥0} xt(h).
As we already mentioned, in many cases it is difficult to prove stability of a system with respect to the class of all continuous initial functions. In this case one can check stability of the system with respect to a class of test initial functions L by computer simulation.
The corresponding class of functions can be chosen, for example, in the following way.
It is well known that there exist orthogonal systems of functions {φi(·)}_{i=0}^{∞}, continuous on the interval [−τ, 0], such that every function ψ(·) ∈ C[−τ, 0] can be expanded in a series²
ψ(s) = Σ_{i=0}^{∞} γi φi(s),  −τ ≤ s ≤ 0,   (2.6)

² For example, the trigonometric system.
with some coefficients {γi}_{i=0}^{∞} ⊂ R.
One can consider the first k functions
φ1(·), φ2(·), ..., φk(·) ∈ C[−τ, 0]   (2.7)
as basic (test) functions, and investigate stability of system (2.1) with respect to this finite class of functions. In this case the system will be stable with respect to the subspace of functions which are linear combinations of the "basic" functions (2.7).
From the linearity of system (2.1) it follows that if for every basic function φ1, ..., φk the corresponding solution x(t, φi) tends to zero as t → ∞, then for arbitrary constants γ1, ..., γk the solution x(t, ψ), corresponding to an initial function (2.9), also tends to zero. So it is sufficient to check convergence to zero only for the functions φ1, ..., φk.

Definition 2.6. System (2.1) is asymptotically stable with respect to a class of functions (2.7) if
lim_{t→∞} x(t; φi) = 0,  i = 1, ..., k.   (2.8)
It is also necessary to note that, though the series (2.6) contains an infinite number of terms, nevertheless, taking into account the presence of some uncertainties in every specific (applied) problem, one can consider a class of admissible initial disturbances as a finite sum
ψ(s) = Σ_{i=0}^{k} γi φi(s),  −τ ≤ s ≤ 0,   (2.9)
γ1, ..., γk ∈ R, assuming that the remainder Σ_{i=k+1}^{∞} γi φi(·) of the series (2.6) corresponds to uncertainties.
Depending on the concrete problem one can choose one's own system of (linearly independent) test functions.
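A computer check of this kind can be organized as in the sketch below (assuming NumPy; the scalar test equation, the trigonometric test functions and the Euler scheme are our illustrative choices): each test function φi is used as the initial function and the corresponding solution is integrated to see whether it tends to zero.

```python
import numpy as np

a, b, tau = -1.0, 0.5, 1.0                  # scalar test system dx/dt = a*x(t) + b*x(t - tau)
dt, T, k_test = 1e-3, 40.0, 5
lag = int(tau / dt)
s = np.arange(-tau, 0, dt)

# First k_test functions of a trigonometric system on [-tau, 0] used as test initial functions.
basis = [np.ones_like(s)] + [np.cos(i * np.pi * s / tau) for i in range(1, k_test)]

for i, phi in enumerate(basis):
    hist = list(phi)                        # samples of y0(s) = phi(s), -tau <= s < 0
    x = phi[-1]                             # take the initial point x0 equal to the last sample
    for step in range(int(T / dt)):
        delayed = hist[-lag]                # x(t - tau)
        hist.append(x)
        x = x + dt * (a * x + b * delayed)
    print(i, abs(x))                        # each final value should be close to zero
```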
2.2 Lyapunov-Krasovskii functionals

2.2.1 Structure of Lyapunov-Krasovskii quadratic functionals

For investigating linear finite-dimensional systems
ẋ = A x   (2.10)
one usually uses the quadratic Lyapunov functions
v(x) = x′ P x   (2.11)
(here P is an n × n symmetric matrix).
For linear DDE (2.1) a similar role is played by the quadratic Lyapunov-Krasovskii functionals, which in the general case have the following form:

V[x, y(·)] = x′ P x + 2 x′ ∫_{−τ}^{0} D(s) y(s) ds + ∫_{−τ}^{0} y′(s) Q(s) y(s) ds
           + ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) R(s, ν) y(ν) ds dν + ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) Π(s) y(s) ds dν
           + ∫_{−τ}^{0} [ ( ∫_{ν}^{0} y(s) ds )′ Γ ( ∫_{ν}^{0} y(s) ds ) ] dν,   (2.12)

where P, D(s), Q(s), R(s, ν), Π(s), Γ are n × n matrices and s, ν ∈ [−τ, 0].
One can see that the general quadratic functional (2.12) is composed of a system of simpler elementary functionals:
V[x, y(·)] = W1[x] + W2[x, y(·)] + W3[y(·)] + W4[y(·)] + W5[y(·)] + W6[y(·)],
where
W1[x] = x′ P x,   (2.13)
W2[x, y(·)] = 2 x′ ∫_{−τ}^{0} D(s) y(s) ds,   (2.14)
W3[y(·)] = ∫_{−τ}^{0} y′(s) Q(s) y(s) ds,   (2.15)
W4[y(·)] = ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) R(s, ν) y(ν) ds dν,   (2.16)
W5[y(·)] = ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) Π(s) y(s) ds dν,   (2.17)
W6[y(·)] = ∫_{−τ}^{0} [ ( ∫_{ν}^{0} y(s) ds )′ Γ ( ∫_{ν}^{0} y(s) ds ) ] dν.   (2.18)
So the properties of functional (2.12) are defined by the
properties of these elementary functionals.
In the next subsection we describe some properties of
these functionals.
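For computations the decomposition is convenient because each term can be evaluated by quadrature. The sketch below (assuming NumPy; the scalar data, the constant kernels and the rectangle rule are our illustrative choices) evaluates W1, W2 and W3 for a sampled element h = {x, y(·)}; the double-integral terms W4, W5, W6 are handled analogously.

```python
import numpy as np

tau = 1.0
s = np.linspace(-tau, 0, 201)
ds = s[1] - s[0]

# Illustrative scalar data: constant kernels P, D, Q and a sampled pair h = {x, y(.)}.
P, D, Q = 2.0, 0.3, 1.0
x = 0.5
y = np.exp(s[:-1])                    # samples of y(s) on [-tau, 0); the value at s = 0 is excluded

W1 = x * P * x
W2 = 2 * x * np.sum(D * y) * ds       # rectangle rule for 2*x*int_{-tau}^{0} D(s) y(s) ds
W3 = np.sum(y * Q * y) * ds           # rectangle rule for int_{-tau}^{0} y(s) Q(s) y(s) ds
print(W1, W2, W3, W1 + W2 + W3)
```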
2.2.2 Elementary functionals and their properties

In the Appendix we present the basic constructions of i-smooth calculus and examples of calculating invariant derivatives of some general classes of functionals.
In this subsection we present the corresponding formulas for the elementary functionals described above.
The following formulas are valid.
For functional (2.13):
∂W1[x]/∂x = 2 P x,  ∂y W1[x] = 0
(P is a symmetric matrix).

For functional (2.14):
∂W2[x, y(·)]/∂x = 2 ∫_{−τ}^{0} D(s) y(s) ds,
∂y W2[x, y(·)] = 2 x′ [ D(0) x − D(−τ) y(−τ) − ∫_{−τ}^{0} (dD(s)/ds) y(s) ds ].
The formulas follow from Example 5.4 with ω[x, s, y(s)] = 2 x′ D(s) y(s).

For functional (2.15):
∂W3[y(·)]/∂x = 0,
∂y W3[x, y(·)] = x′ Q(0) x − y′(−τ) Q(−τ) y(−τ) − ∫_{−τ}^{0} y′(s) (dQ(s)/ds) y(s) ds.
The formulas follow from Example 5.4 with ω[s, y(s)] = y′(s) Q(s) y(s).

For functional (2.16):
∂W4[y(·)]/∂x = 0,
∂y W4[x, y(·)] = x′ ∫_{−τ}^{0} [ R(0, s) + R′(s, 0) ] y(s) ds
               − y′(−τ) ∫_{−τ}^{0} [ R(−τ, s) + R′(s, −τ) ] y(s) ds
               − ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) [ ∂R(s, ν)/∂s + ∂R(s, ν)/∂ν ] y(ν) ds dν.
The formulas follow from Example 5.7 with γ[s, y(s); ν, y(ν)] = y′(s) R(s, ν) y(ν).

For functional (2.17):
∂W5[y(·)]/∂x = 0,
∂y W5[x, y(·)] = τ x′ Π(0) x − ∫_{−τ}^{0} y′(s) Π(s) y(s) ds − ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) (dΠ(s)/ds) y(s) ds dν.
The formulas follow from Example 5.5 with ω[s, y(s)] = y′(s) Π(s) y(s).

For functional (2.18):
∂W6[y(·)]/∂x = 0,
∂y W6[x, y(·)] = 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν − ( ∫_{−τ}^{0} y(s) ds )′ Γ ( ∫_{−τ}^{0} y(s) ds ).
The formulas follow from Example 5.6.
2.2.3 Total derivative of functionals with respect to systems with delays

The total derivative of the quadratic function (2.11) with respect to system (2.10) has the simple form
v̇_(2.10) = x′ [ A′ P + P A ] x.   (2.19)
By analysing the total derivative (2.19) one can check various properties of the original system (2.10) without calculating its solutions.
Let us derive a formula for the total derivative of the quadratic Lyapunov-Krasovskii functional (2.12) with respect to system (2.1).
The constructive formula for the total derivative of a Lyapunov-Krasovskii functional V[h] with respect to system (2.1) has the form
V̇_(2.1)[x, y(·)] = ∂y V[x, y(·)] + (∂V[x, y(·)]/∂x) [ A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds ],   (2.20)
where ∂y V[x, y(·)] is the invariant derivative of the functional V[x, y(·)].
Taking into account that
∂V[x, y(·)]/∂x = ∂W1[x]/∂x + ∂W2[x, y(·)]/∂x + ∂W3[y(·)]/∂x + ∂W4[y(·)]/∂x + ∂W5[y(·)]/∂x + ∂W6[y(·)]/∂x
and
∂y V[x, y(·)] = ∂y W1[x] + ∂y W2[x, y(·)] + ∂y W3[x, y(·)] + ∂y W4[x, y(·)] + ∂y W5[x, y(·)] + ∂y W6[x, y(·)],
and substituting into (2.20) the expressions for the elementary functionals given in Subsection 2.2.2, after collecting terms we obtain
we obtain
⎡
V̇(2.1) [x, y(·)] = 2 ⎣ x P +
0
⎤
y (s) D (s) ds ⎦ ×
−τ
0
× A x + Aτ y(−τ ) +
G(s) y(s) ds +
−τ
0
+ 2 x D(0) x − D(−τ ) y(−τ ) −
−τ
+x Q(0) x−y (−τ ) Q(−τ ) y(−τ )−
0
−τ
0
+x
dD(s)
y(s) ds +
ds
y (s)
dQ(s)
y(s) ds+
ds
R(0, s) + R (s, 0) y(s) ds −
−τ
0
− y (−τ )
−τ
R(−τ, s) + R (s, −τ ) y(s) ds −
42
Systems with Delays
0 0
−
∂R(s, ν) ∂R(s, ν)
y(ν) ds dν +
+
∂s
∂ν
y (s)
−τ −τ
0
+ τ x Π(0) x −
y (s) Π(s) y(s) ds −
−τ
0 0
−
y (s)
−τ ν
0 0
+2 x Γ
⎛ 0
⎞ ⎛ 0
⎞
y(s) ds dν−⎝ y(s) ds⎠ Γ ⎝ y(s) ds⎠ =
−τ ν
dΠ(s)
y(s) ds dν +
ds
−τ
−τ
= 2 x P A x + 2 x P Aτ y(−τ ) + 2 x P
0
G(s) y(s) ds +
−τ
0
+2x A
D(s) y(s) ds + 2 y
(−τ )Aτ
−τ
0
D(s) y(s) ds +
−τ
0 0
+2
y (s) D(s) G(ν) y(ν) ds dν +
−τ −τ
0
+ 2 x D(0) x − D(−τ ) y(−τ ) −
−τ
0
+x Q(0) x−y (−τ ) Q(−τ ) y(−τ )−
−τ
0
+x
−τ
dD(s)
y(s) ds +
ds
y (s)
dQ(s)
y(s) ds+
ds
R(0, s) + R (s, 0) y(s) ds −
Stability Theory
0
− y (−τ )
43
R(−τ, s) + R (s, −τ ) y(s) ds −
−τ
0 0
∂R(s, ν) ∂R(s, ν)
+
y(ν) ds dν +
∂s
∂ν
y (s)
−
−τ −τ
0
+ τ x Π(0) x −
y (s) Π(s) y(s) ds −
−τ
0 0
−
y (s)
−τ ν
0 0
+ 2 x Γ
dΠ(s)
y(s) ds dν +
ds
⎛ 0
⎞ ⎛ 0
⎞
y(s) ds dν − ⎝ y(s) ds⎠ Γ ⎝ y(s) ds⎠ ,
−τ ν
−τ
−τ
thus finally we obtain the following formula of total
derivative:
V̇(2.1) [x, y(·)] =
= x 2 P A + 2 D(0) + Q(0) + τ Π(0) x +
+ x 2 P Aτ − 2 D(−τ ) y(−τ ) +
0
+x
2 P G(s) − 2
−τ
+ y (−τ )
0
dD(s)
+ R(0, s) + R (s, 0) + 2 A D(s) y(s) ds+
ds
2 Aτ D(s) − R(−τ, s) + R (s, −τ ) y(s) ds +
−τ
0
0
+
−τ −τ
y (s) 2 D(s) G(ν) −
∂R(s, ν) ∂R(s, ν)
−
y(ν) ds dν −
∂s
∂ν
44
Systems with Delays
− y (−τ ) Q(−τ ) y(−τ ) −
0
−
y (s) Π(s) +
−τ
0 0
−
−τ ν
+ 2 x Γ
0 0
−τ ν
y (s)
dQ(s)
y(s) ds −
ds
dΠ(s)
y(s) ds dν +
ds
⎛ 0
⎞ ⎛ 0
⎞
y(s) ds dν − ⎝ y(s) ds⎠ Γ ⎝ y(s) ds⎠ ,
−τ
−τ
(2.21)
{x, y(·)} ∈ H.
Note that the total derivative (2.21) is defined on elements of H, not along solutions.
The relation between the total derivative (2.21) and the derivative of Lyapunov-Krasovskii functionals along solutions is given by the following theorem.

Theorem 2.3. Functional (2.12) has right-hand derivatives along solutions of system (2.1), and for any (t0, h) ∈ R × H
(d/dt) V[xt(t0, h)] = V̇_(2.1)[xt(t0, h)]  for t ≥ t0.   (2.22)

In many cases it is sufficient to consider DDE in spaces of smoother functions than H, for example C[−τ, 0], C¹[−τ, 0], Lip_k[−τ, 0]. This is connected with the fact that the solutions xt(t0, h) of DDE belong to these spaces when t ≥ t0 + τ. So often we can require the invariant differentiability of Lyapunov functionals not on the whole space H, but only on a subset of it.
Remark 2.4. The most general form of the total derivative is the right-hand Dini derivative along solutions [111, 112]:
V̇⁺_(2.1)[h] = lim_{Δt→+0} (1/Δt) [ V[x_{t∗+Δt}(t∗, h)] − V[h] ].   (2.23)
From the mathematical point of view the application of formula (2.23) in the theorems of the Lyapunov functional method is natural and allows us to prove the corresponding converse theorems. Nevertheless, direct implementation of formula (2.23) is difficult because it requires, at least formally, calculating the solution xt of system (2.1).
R. Driver [49] proposed to calculate the total derivative as
V̇_(2.1)[h] = lim_{Δt→+0} (1/Δt) [ V[h_Δt] − V[h] ],   (2.24)
where h = {x, y(·)}, h_Δt = {x + f(h)Δt, y^{(Δt)}(·)},
y^{(Δt)}(s) = { x + f(h)s for 0 ≤ s < Δt,  y(s) for −τ ≤ s < 0 },
and
f(h) = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds.
Though formula (2.24) does not require calculating solutions, its utilization is nevertheless also complicated because of the computation of right-hand Dini derivatives.
2.3 Positiveness of functionals

In applications usually positive and non-negative Lyapunov-Krasovskii functionals are used. In this section we discuss these properties of functionals.

2.3.1 Definitions

Definition 2.7. A functional V[x, y(·)] : Rⁿ × Q[−τ, 0) → R is
1) positive definite on H (on C[−τ, 0]) if there exists a function a ∈ K such that for any h ∈ H (h ∈ C[−τ, 0]) the following inequality holds:
V[h] ≥ a(‖h‖_H);
2) positive on H (on C[−τ, 0]) if
V[h] > 0
for h ≠ 0, h ∈ H (h ∈ C[−τ, 0]);
3) non-negative on H (on C[−τ, 0]) if for any h ∈ H (h ∈ C[−τ, 0])
V[h] ≥ 0.
Note that positiveness and positive definiteness are not equivalent for functionals on H (and on C[−τ, 0]), i.e. not every functional that is positive on C[−τ, 0] will also be positive definite on C[−τ, 0] (see Remark 2.5 below).
2.3.2 Sufficient conditions of positiveness

The Lyapunov-Krasovskii functional (2.12) is positive if, for example,

[ (1/τ) P   D(s)
  D′(s)     Q(s) ] > 0,   (2.25)

Π(s) ≥ 0,  R(s, ν) ≥ 0,  Γ ≥ 0   (2.26)

for s, ν ∈ [−τ, 0]. Condition (2.25) guarantees positiveness of the sum
W1[x] + W2[x, y(·)] + W3[y(·)],
and conditions (2.26) guarantee non-negativeness on H of the functionals W4[y(·)], W5[y(·)] and W6[y(·)], respectively.
In the general case the analysis of positiveness of quadratic functionals is a very difficult task. It requires a special investigation in every concrete case.
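For constant matrices the block condition (2.25) can be verified directly. The sketch below (assuming NumPy; the matrices P, D, Q are arbitrary illustrative data) forms the block matrix of (2.25) and tests whether its smallest eigenvalue is positive.

```python
import numpy as np

tau = 1.0
# Illustrative constant matrices (n = 2); D and Q are taken independent of s for simplicity.
P = np.array([[3.0, 0.5], [0.5, 2.0]])
D = np.array([[0.2, 0.0], [0.0, 0.1]])
Q = np.array([[1.0, 0.0], [0.0, 1.0]])

block = np.block([[P / tau, D],
                  [D.T, Q]])
eigs = np.linalg.eigvalsh(block)          # the block matrix is symmetric, so eigvalsh applies
print(eigs, bool(eigs.min() > 0))         # all eigenvalues positive means condition (2.25) holds
```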
2.3.3 Positiveness of functionals

Recall that for continuous finite-dimensional functions v(x) : Rⁿ → R there are two equivalent definitions of positive definiteness. Let us formulate them as a proposition.

Lemma 2.1. Let a function v(x) be continuous in the region {x ∈ Rⁿ : ‖x‖ < L}. Then the following conditions are equivalent:
1. v(x) > 0 for 0 < ‖x‖ < L;³
2. there exists a function a ∈ K such that v(x) ≥ a(‖x‖) for ‖x‖ < L.

³ If L = ∞ then it is also supposed that lim inf_{‖x‖→∞} v(x) ≠ 0.

For a continuous functional V[h] a similar proposition can be proved in the space SL_k[−τ, 0] consisting of functions z(·) : [−τ, 0] → Rⁿ which satisfy the Lipschitz condition ‖z(s1) − z(s2)‖ ≤ k |s1 − s2| for s1, s2 ∈ [−τ, 0], where the constant k > 0. In this space we consider the metric ρ(z⁽¹⁾(·), z⁽²⁾(·)) = ‖z⁽¹⁾(·) − z⁽²⁾(·)‖_C. Functions from SL_k[−τ, 0] are not supposed to be differentiable, hence Lip_k[−τ, 0] ⊂ SL_k[−τ, 0]. This space has the following properties.

Lemma 2.2.
1. SL_k[−τ, 0] is a nonlinear complete metric space;
2. S_ε = { z(·) ∈ SL_k[−τ, 0] : ρ(z(·), 0) = ε } (ε > 0) is a compact set.
Theorem 2.5. For a continuous functional V[·] : SLk[−τ, 0] → R the following conditions are equivalent:
1. the functional V is positive on SLk[−τ, 0], i.e. V[z(·)] > 0 for 0 ≠ z(·) ∈ SLk[−τ, 0], and lim inf_{‖z(·)‖_C → ∞} V[z(·)] ≠ 0;
2. there exists a function a ∈ K such that V[z(·)] ≥ a(‖z(·)‖_C) for z(·) ∈ SLk[−τ, 0].
Proof. It is evident that condition 2 of the theorem implies condition 1. Let us prove the converse implication. Consider the function

w(r) = min_{z(·) ∈ Sr} V[z(·)] ,  r ≥ 0 ,   (2.27)

which is continuous, and w(r) > 0 for r > 0. The functional V is continuous and positive definite, and for any r > 0 the sphere Sr is compact, hence lim_{r→∞} w(r) > 0, and therefore there exists a function a ∈ K such that w(r) > a(r) for r > 0.
Theorem 2.6. If V : SLk[−τ, 0] → R is a continuous functional, then there exists b ∈ K such that V[z(·)] ≤ b(‖z(·)‖_C) for z(·) ∈ SLk[−τ, 0].
Proof. One can easily check that the function b(r) = max_{z(·) ∈ Sr} V[z(·)] satisfies the conditions of the theorem.
Remark 2.5. In the space C[−τ, 0] a proposition similar to Theorem 2.5 is not valid, because the sphere in C[−τ, 0] is not a compact set. Consider, for example, the functional V[z(·)] = ∫_{−τ}^{0} z²(s) ds. Obviously V[z(·)] > 0 for z(·) ≠ 0. Let us fix an arbitrary ε > 0 and construct a sequence {z⁽ⁱ⁾}_{i=1}^{∞} ⊂ C[−τ, 0] by the rule z⁽ⁱ⁾(s) = ε sⁱ/τⁱ, −τ ≤ s ≤ 0. Calculate

V[z⁽ⁱ⁾(·)] = (ε²/τ²ⁱ) ∫_{−τ}^{0} s²ⁱ ds = τ ε²/(2i + 1) .

Hence V[z⁽ⁱ⁾(·)] → 0 as i → ∞, meanwhile ‖z⁽ⁱ⁾(·)‖_C = max_{−τ ≤ s ≤ 0} ε|s|ⁱ/τⁱ = ε.
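This effect is easy to reproduce numerically; the following short sketch (assumed values ε = 1, τ = 2) shows V[z⁽ⁱ⁾(·)] decreasing to zero while the uniform norm stays equal to ε.

```python
# Hedged numerical illustration of Remark 2.5 (assumed eps, tau).
import numpy as np

eps, tau = 1.0, 2.0
s = np.linspace(-tau, 0.0, 20001)
for i in (1, 5, 25, 125):
    z = eps * s**i / tau**i            # z^(i)(s) = eps * s^i / tau^i
    V = np.trapz(z**2, s)              # V[z^(i)] = int_{-tau}^0 z^2 ds
    print(i, V, np.max(np.abs(z)))     # V -> 0, sup-norm stays ~eps
```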
2.4
Stability via Lyapunov-Krasovskii functionals
As we already mentioned, necessary and sufficient conditions for asymptotic stability of a linear time-invariant
system consist in negativeness of real parts of all roots of
the corresponding characteristic equation. However, unlike
ODE, for DDE at present there are no effective methods of
direct verification of this property of eigenvalues.
Application of Lyapunov-Krasovskii functionals allows
one to avoid these difficulties and investigate stability of
DDE without calculating eigenvalues or DDE solutions.
Though, using this approach we can obtain, as a rule,
only sufficient conditions of stability, utilization of different types of Lyapunov-Krasovskii functionals enables us
to obtain various forms of stability conditions in terms of
parameters of systems.
In this section we present basic theorems of LyapunovKrasovskii functional method for linear systems with delays. The results of this section are based on [111, 112, 49].
Let us define K as the set of continuous strictly increasing functions a(·) : [0, +∞) → [0, +∞), a(0) = 0.
2.4.1
Stability conditions in the norm · H
Theorem 2.7. If there exist a quadratic Lyapunov-Krasovskii functional V[x, y(·)] and a function a ∈ K such that for all h = {x, y(·)} ∈ H
1. V[h] ≥ a(‖h‖_H) ,
2. V̇_(2.1)[h] ≤ 0 ,
then system (2.1) is stable.
Theorem 2.8. If there exist a quadratic Lyapunov-Krasovskii functional V[x, y(·)] and functions a, b, c ∈ K such that for any h = {x, y(·)} ∈ H the following conditions are satisfied
1. a(‖h‖_H) ≤ V[h] ≤ b(‖h‖_H) ,
2. V̇_(2.1)[h] ≤ −c(‖h‖_H) ,
then system (2.1) is asymptotically stable.
In many cases we can construct the Lyapunov functionals, for which total derivatives are only non-positive (but
not negative definite). Nevertheless under some additional
conditions it can be sufficient for asymptotic stability of
DDE [111].
Theorem 2.9. If there exist a quadratic Lyapunov-Krasovskii functional V[x, y(·)] and a function a ∈ K such that for all h = {x, y(·)} ∈ H \ {0}
1. V[h] ≥ a(‖h‖_H) ,
2. V̇_(2.1)[h] < 0 ,
then system (2.1) is asymptotically stable.
2.4.2 Stability conditions in the norm ‖·‖
Theorem 2.10. If there exist a quadratic Lyapunov-Krasovskii functional V[x, y(·)] and a function a ∈ K such that for all {x, y(·)} ∈ H
1. V[x, y(·)] ≥ a(‖x‖) ,
2. V̇_(2.1)[x, y(·)] ≤ 0 ,
then system (2.1) is stable.
Theorem 2.11. If there exist a quadratic Lyapunov-Krasovskii functional V[x, y(·)] and functions a, b, c ∈ K such that for any h = {x, y(·)} ∈ H the following conditions are satisfied
1. a(‖x‖) ≤ V[x, y(·)] ≤ b(‖h‖_H) ,
2. V̇_(2.1)[x, y(·)] ≤ −c(‖x‖) ,
then system (2.1) is asymptotically stable.
Remark 2.6. Taking into account that hH ≥ x for
any h = {x, y(·)} ∈ H , hence we can substitute the first
condition of Theorem 11.3 by
1. V [h] ≥ a(x) .
Remark 2.7. Because of the smoothing of DDE solutions one can substitute in the above theorems the space
H by the spaces of more smooth functions, for example,
C[−τ, 0] or Lipk [−τ, 0].
2.4.3
Converse theorem
The following converse theorem is valid [111, 76].
Theorem 2.12. If system (2.1) is asymptotically stable, then for any positive definite n × n matrix W there exist a positive definite quadratic Lyapunov-Krasovskii functional V[x, y(·)] and a constant k > 0 such that

V[x, y(·)] ≤ k ‖h‖²_H ,   (2.28)
V̇_(2.1)[x, y(·)] ≤ −x′ W x .   (2.29)

Moreover, for any L > 0 there exists a constant c_L such that

c_L ‖x‖³ ≤ V[x, y(·)]   (2.30)

for ‖h‖_H ≤ L, h = {x, y(·)}.
Unfortunately, the theorem does not give us rules for finding the parameters of Lyapunov-Krasovskii functionals; nevertheless it guarantees that we can follow this way and our attempts can be successful.
2.4.4 Examples
Consider two examples.
Example 2.1. For the equation

a ẋ = −b x − ∫_{−τ}^{0} (τ + s) y(s) ds   (a > 0, b ≥ 0)   (2.31)

one can consider the Lyapunov functional

V[x, y(·)] = a x² + ∫_{−τ}^{0} ( ∫_{s}^{0} y(u) du )² ds .

The functional V is invariantly differentiable and its total derivative with respect to equation (2.31) has the form⁴

V̇_(2.31)[x, y(·)] = 2x ( −b x − ∫_{−τ}^{0} (τ + s) y(s) ds ) + 2x ∫_{−τ}^{0} ∫_{s}^{0} y(u) du ds − ( ∫_{−τ}^{0} y(s) ds )² = −2b x² − ( ∫_{−τ}^{0} y(s) ds )² .

Thus, if b = 0 then the zero solution of (2.31) is uniformly stable, and if b > 0 then the zero solution is globally uniformly asymptotically stable.

Example 2.2. [111]. Let us apply the invariantly differentiable Lyapunov functional

V[x, y(·)] = x²/(2α) + μ ∫_{−τ}^{0} y²(s) ds   (α, μ > 0)

for investigating stability of the origin of the linear equation

ẋ = −α x + β y(−τ) ,   (2.32)

⁴ Here we also use the equality ∫_{−τ}^{0} (τ + s) y(s) ds = ∫_{−τ}^{0} ∫_{s}^{0} y(u) du ds .
where α and β are constants. The total derivative of V with respect to (2.32) is the quadratic form of the variables x and y(−τ):

V̇_(2.32)[x, y(·)] = −x² + (β/α) x y(−τ) + μ x² − μ y²(−τ) .

This quadratic form is negative definite if

4(1 − μ)μ > β²/α² ,   (2.33)

hence, if there exists μ > 0 that satisfies condition (2.33), then the zero solution of (2.32) is uniformly stable. For μ = 0.5 the left-hand side of (2.33) achieves its maximum, and in this case inequality (2.33) takes the form β² < α², or |β| < α.
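As a rough numerical illustration of Example 2.2, one can simulate the scalar equation x′(t) = −α x(t) + β x(t − τ) with a forward Euler scheme and observe decay when |β| < α. This is an assumed illustrative script, not part of the original text.

```python
# Hedged sketch: forward Euler for x'(t) = -alpha*x(t) + beta*x(t - tau).
import numpy as np

alpha, beta, tau = 2.0, 1.0, 1.0     # |beta| < alpha -> expect decay
dt, T = 1e-3, 20.0
m = int(round(tau / dt))             # delay in steps
n_steps = int(round(T / dt))

x = np.empty(n_steps + m + 1)
x[: m + 1] = 1.0                     # constant initial function on [-tau, 0]
for n in range(m, n_steps + m):
    x[n + 1] = x[n] + dt * (-alpha * x[n] + beta * x[n - m])
print("x(T) =", x[-1])               # tends to 0 for |beta| < alpha
```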
2.5 Coefficient conditions of stability
In this section we present some stability conditions (in terms of system coefficients) obtained using specific Lyapunov-Krasovskii functionals. More refined conditions and further references can be found, for example, in [4, 107, 108, 51].
2.5.1
Linear system with discrete delay
Consider a system

ẋ = A x + Aτ y(−τ)   (2.34)

where A and Aτ are constant n × n matrices. Suppose that the eigenvalues of the matrix A have negative real parts. Then there exists a symmetric matrix C such that the matrix D = A′ C + C A is negative definite.
Let us consider the Lyapunov-Krasovskii functional

V[x, y(·)] = x′ P x + ∫_{−τ}^{0} y′(s) Q y(s) ds   (2.35)

where Q is an n × n constant positive definite matrix. Obviously there exist positive constants a and b such that

a ‖x‖² ≤ V[x, y(·)] ≤ b ‖h‖²_H .
One can easily calculate the total derivative

V̇_(2.34)[x, y(·)] = −x′ D x + 2 x′ P Aτ y(−τ) + x′ Q x − y′(−τ) Q y(−τ) .   (2.36)

The right-hand side of (2.36) is a quadratic form of the variables x and y(−τ). Let us estimate this quadratic form. Let the matrices Q and (P − Q) be positive definite. Then there exist positive constants λ and μ such that

x′ (P − Q) x ≥ λ ‖x‖² ,   (2.37)
x′ Q x ≥ μ ‖x‖² .   (2.38)
Let us suppose that

√(λ μ) − ‖P Aτ‖_{n×n} > 0 .   (2.39)

Note that if this inequality is valid, then there exists a constant α ∈ (0, min{λ, μ}) such that

√((λ − α)(μ − α)) − ‖P Aτ‖_{n×n} > 0 ,
then one can estimate

V̇_(2.34)[x, y(·)] ≤ −λ ‖x‖² + 2 ‖P Aτ‖_{n×n} ‖x‖ ‖y(−τ)‖ − μ ‖y(−τ)‖²
 = −(λ − α) ‖x‖² − (μ − α) ‖y(−τ)‖² + 2 ‖P Aτ‖_{n×n} ‖x‖ ‖y(−τ)‖ − α ( ‖x‖² + ‖y(−τ)‖² ) ≤⁵
 ≤ −2 √((λ − α)(μ − α)) ‖x‖ ‖y(−τ)‖ + 2 ‖P Aτ‖_{n×n} ‖x‖ ‖y(−τ)‖ − α ( ‖x‖² + ‖y(−τ)‖² ) ≤
 ≤ −2 ( √((λ − α)(μ − α)) − ‖P Aτ‖_{n×n} ) ‖x‖ ‖y(−τ)‖ − α ‖x‖² − α ‖y(−τ)‖² ≤
 ≤ −α ‖x‖² .

⁵ Here we use the inequality −(a + b) ≤ −2 √(a b) for a, b ≥ 0.
Thus all conditions of Theorem 2.11 are satisfied and we
can formulate the following proposition.
Theorem 2.13 [76]. Let conditions (2.37) – (2.39) be satisfied; then system (2.34) is asymptotically stable.
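A minimal numerical check of the hypotheses of Theorem 2.13 can be organized as follows; the matrices below are assumed illustrative data, and the norm ‖·‖_{n×n} is taken to be the spectral norm.

```python
# Hedged sketch: checking the sufficient conditions (2.37)-(2.39).
import numpy as np

Atau = np.array([[0.3, 0.0], [0.1, 0.2]])   # assumed delay matrix
P    = np.eye(2)                            # assumed P (A is assumed Hurwitz)
Q    = 0.4 * np.eye(2)                      # assumed Q

lam = np.min(np.linalg.eigvalsh(P - Q))     # lambda in (2.37)
mu  = np.min(np.linalg.eigvalsh(Q))         # mu in (2.38)
norm_PAtau = np.linalg.norm(P @ Atau, 2)    # spectral norm ||P Atau||

ok = lam > 0 and mu > 0 and np.sqrt(lam * mu) - norm_PAtau > 0   # (2.39)
print("lambda =", lam, "mu =", mu, "||P Atau|| =", norm_PAtau)
print("sufficient condition (2.37)-(2.39) holds:", ok)
```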
2.5.2 Linear system with distributed delays
Consider a system

ẋ = A x + ∫_{−τ}^{0} G(s) y(s) ds   (2.40)

where A is a constant n × n matrix and G(s) is an n × n matrix with continuous elements on [−τ, 0].
Consider an n × n nonsingular matrix C(s) with continuous elements on [−τ, 0], and define the matrix

Q(s) = ∫_{−τ}^{s} C′(ν) C(ν) dν ,  s ∈ [−τ, 0] .   (2.41)
Let P be a symmetric positive definite n × n matrix. Consider the Lyapunov-Krasovskii functional of the form

V[x, y(·)] = x′ P x + ∫_{−τ}^{0} y′(s) Q(s) y(s) ds .   (2.42)

One can easily prove that there exist positive constants a, b, c such that

a ‖x‖² ≤ V[x, y(·)] ≤ b ‖x‖² + c ‖y(·)‖²_τ .
The total derivative of functional (2.42) with respect to system (2.40) can be presented in the following form:

V̇_(2.40)[x, y(·)] = x′ ( A′ P + P A + Q(0) ) x + 2 x′ P ∫_{−τ}^{0} G(s) y(s) ds − y′(−τ) Q(−τ) y(−τ) − ∫_{−τ}^{0} y′(s) (dQ(s)/ds) y(s) ds
 = x′ M x − y′(−τ) Q(−τ) y(−τ) − ∫_{−τ}^{0} ξ′_{x,y(·)}(s) ξ_{x,y(·)}(s) ds   (2.43)

where

M = A′ P + P A + Q(0) + P [ ∫_{−τ}^{0} G(s) C⁻¹(s) (C⁻¹(s))′ G′(s) ds ] P ,
ξ′_{x,y(·)}(s) = y′(s) C′(s) − x′ P G(s) C⁻¹(s) .
Taking into account that Q(−τ) = 0 and that the last term in (2.43) is non-negative, we obtain

V̇_(2.40)[x, y(·)] ≤ x′ M x .   (2.44)
Hence we can formulate the following proposition.
Theorem 2.14 [51]. If there exist a symmetric positive definite matrix P, a symmetric negative definite matrix M, and a nonsingular matrix C(s), −τ ≤ s ≤ 0, such that

−M + A′ P + P A + ∫_{−τ}^{0} C(s) C′(s) ds + P ( ∫_{−τ}^{0} G(s) C⁻¹(s) (C⁻¹(s))′ G′(s) ds ) P = 0 ,   (2.45)

then system (2.40) is asymptotically stable.
Note, if we fix some matrices M and C(s) then equation
(2.45) is the classic matrix Riccati equation with respect
to the matrix P .
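For constant matrices C(s) ≡ C0 and G(s) ≡ G0 the integrals in (2.45) can be evaluated in closed form, and one can simply solve (2.45) for M given a candidate P and test whether M is negative definite. The sketch below uses assumed illustrative matrices.

```python
# Hedged sketch: evaluating (2.45) for constant C(s) = C0, G(s) = G0.
import numpy as np

tau = 0.5
A  = np.array([[-1.0, 0.2], [0.0, -1.5]])   # assumed system matrix
G0 = 0.1 * np.eye(2)                        # assumed distributed-delay kernel
C0 = np.eye(2)                              # assumed nonsingular C(s)
P  = np.eye(2)                              # candidate P

S = tau * G0 @ np.linalg.inv(C0) @ np.linalg.inv(C0).T @ G0.T
M = A.T @ P + P @ A + tau * C0 @ C0.T + P @ S @ P   # solve (2.45) for M

print("M =\n", M)
print("M negative definite:",
      np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))   # Theorem 2.14 test
```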
Chapter 3
Linear quadratic control
3.1
Introduction
In this chapter we discuss a problem of designing stabilizing
controller for linear systems with delays.
Further we will be interested mainly in investigating asymptotic stability of systems, so, for brevity, in the sequel
the word “stability” will be often used instead of “asymptotic stability”1 .
For linear finite-dimensional systems, linear quadratic
regulator (LQR) theory plays a special role among various
approaches because an optimal gain can be easily calculated by solving an Algebraic Riccati Equation (ARE) and
the corresponding control stabilizes the closed-loop system
under mild conditions.
For systems with delays, theoretical aspects of LQR
problem have been also well developed in different directions [14, 45, 46, 52, 66, 91, 104, 114, 121, 129, 165, 179,
182], and it was shown that the optimal control (which is
a linear operator on a space of functions) is given by solutions of some specific differential equations, the so-called
generalized Riccati equations (GREs) [52, 165, 166]. But,
unfortunately, for systems with delays the above mentioned
1 I.e. in this chapter under stability we understand the asymptotic stability of
systems.
advantages of finite-dimensional systems are not preserved
because there are no effective methods of solving GREs.
Approximate numerical methods [52, 105, 165, 166, 174] for
the system of GREs (which consists of the algebraic matrix
equation, ordinary and partial differential equations) are
very complicated and their practical realization is far more
difficult than that for the corresponding algebraic Riccati
equation.
Among various papers devoted to LQR problems an explicit solution was obtained in [180, 181] under some special
conditions for generalized quadratic cost functional.
However, in order to find an explicit solution of GREs
it is necessary to calculate unstable poles of an open-loop
system and to compute a set of special functions, which are
still difficult tasks.
In this chapter we describe methods of finding explicit
solutions of GREs using special choices of the parameters
of the generalized quadratic functional.
The approach is based on the principles that generalized quadratic cost functional and its coefficients are again
design parameters.
3.2
Statement of the problem
In this chapter we consider an LQR problem for systems with delays

ẋ = A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds + B u   (3.1)

where A, Aτ, B are n × n, n × n, n × r constant matrices, G(s) is an n × n matrix with continuous elements on [−τ, 0], x ∈ Rn and u ∈ Rr.
We consider the system (3.1) in the phase space H = Rn × Q[−τ, 0).
Remember that (3.1) is the conditional representation of the system

ẋ(t) = A x(t) + Aτ x(t − τ) + ∫_{−τ}^{0} G(s) x(t + s) ds + B u(t) .   (3.2)
We consider a state of the time-delay system as a pair {x, y(·)} ∈ H, hence the corresponding representation of a linear state feedback control is

u(x, y(·)) = C x + ∫_{−τ}^{0} D(s) y(s) ds ,   (3.3)

where C is an r × n constant matrix and D(s) is an r × n matrix with elements continuous on [−τ, 0].
Calculated along a specific trajectory x_t, the closed-loop control (3.3) has the presentation

u(x_t) = C x(t) + ∫_{−τ}^{0} D(s) x(t + s) ds .   (3.4)
−τ
Closed-loop system, corresponding to system (3.1) and
state feedback control (3.3), can be easily constructed as
0 !
"
ẋ = (A + B C) x + Aτ y(−τ ) +
G(s) + B D(s) y(s) ds
−τ
(3.5)
that corresponds to the conventional representation
ẋ(t) = (A + B C) x(t) + Aτ x(t − τ ) +
0 !
+
−τ
"
G(s) + B D(s) x(t + s) ds .
We investigate a problem of constructing a stabilizing feedback control (3.3) on the basis of minimization of the generalized quadratic cost functional

J = ∫_{0}^{∞} { x′(t) Φ0 x(t) + 2 x′(t) ∫_{−τ}^{0} Φ1(s) x(t + s) ds + ∫_{−τ}^{0} ∫_{−τ}^{0} x′(t + s) Φ2(s, ν) x(t + ν) ds dν + ∫_{−τ}^{0} x′(t + s) Φ3(s) x(t + s) ds + ∫_{−τ}^{0} ∫_{ν}^{0} x′(t + s) Φ4(s) x(t + s) ds dν + x′(t − τ) Φ5 x(t − τ) + u′(t) N u(t) } dt   (3.6)

on trajectories of system (3.1).
Here Φ0 and Φ5 are constant n × n matrices; Φ1(s), Φ3(s) and Φ4(s) are n × n matrices with piece-wise continuous elements on [−τ, 0]; Φ2(s, ν) is an n × n matrix with piece-wise continuous elements on [−τ, 0] × [−τ, 0]; N is an r × r symmetric positive definite matrix.
The state weight functional in (3.6) is the quadratic functional

Z[x, y(·)] = x′ Φ0 x + 2 x′ ∫_{−τ}^{0} Φ1(s) y(s) ds + ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) Φ2(s, ν) y(ν) ds dν + ∫_{−τ}^{0} y′(s) Φ3(s) y(s) ds + ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) Φ4(s) y(s) ds dν + y′(−τ) Φ5 y(−τ)   (3.7)
on the space H = Rn × Q[−τ, 0), so we can write cost functional (3.6) in the compact form

J = ∫_{0}^{∞} { Z[x_t] + u′(t) N u(t) } dt .   (3.8)
Remark 3.1. It is noted that most papers consider the quadratic functional

J∗ = ∫_{0}^{∞} { x′(t) Φ0 x(t) + u′(t) N u(t) } dt ,   (3.9)

however, taking into account that the matrices Φ0, Φ1(s), Φ2(s, ν), Φ3(s), Φ4(s) and Φ5 are, generally speaking, design parameters, the problem (3.1), (3.6) has more degrees of freedom.
To the generalized LQR problem (3.1), (3.6) corresponds the following system of matrix generalized Riccati equations²

P A + A′ P + D(0) + D′(0) + F(0) + τ Π(0) + Φ0 = P K P ,   (3.10)
dD(s)/ds + [P K − A′] D(s) − P G(s) = R(0, s) + Φ1(s) ,   (3.11)
∂R(s, ν)/∂s + ∂R(s, ν)/∂ν = D′(s) G(ν) + G′(s) D(ν) − D′(s) K D(ν) + Φ2(s, ν) ,   (3.12)
dF(s)/ds + Π(s) = Φ3(s) ,   (3.13)
dΠ(s)/ds = Φ4(s) ,   (3.14)

with the boundary conditions

D(−τ) = P Aτ ,   (3.15)
R(−τ, s) = Aτ′ D(s) ,   (3.16)
F(−τ) = Φ5 ,   (3.17)

and the symmetry conditions

P = P′ ,  R(s, ν) = R′(ν, s) ,

for −τ ≤ s ≤ 0, −τ ≤ ν ≤ 0. Here

K = B N⁻¹ B′ .   (3.18)

² Derivation of the generalized Riccati equations is given in the Appendix.
In the next section we show that on the basis of suitable choices of the matrices Φ0, Φ1(s), Φ2(s, ν), Φ3(s), Φ4(s) and Φ5 we can simplify equations (3.10) – (3.16) and find solutions in explicit form.
Theorem 3.1. If:
1) the state weight quadratic functional (3.7) is positive definite on H = Rn × Q[−τ, 0);
2) the GREs (3.10) – (3.16) have a solution P, D(s), R(s, ν), F(s) and Π(s) such that the quadratic functional

W[x, y(·)] = x′ P x + 2 x′ ∫_{−τ}^{0} D(s) y(s) ds + ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) R(s, ν) y(ν) ds dν + ∫_{−τ}^{0} y′(s) F(s) y(s) ds + ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) Π(s) y(s) ds dν   (3.19)

is positive definite on H = Rn × Q[−τ, 0),
then system (3.2) is stabilizable and the feedback control

u∗(x_t) = −N⁻¹ B′ ( P x(t) + ∫_{−τ}^{0} D(s) x(t + s) ds )   (3.20)

provides the optimal solution of the generalized LQR problem (3.1), (3.6) in the stabilizing class of controls, and the optimal value of the cost functional J for an initial position {x, y(·)} is given by (3.19).
Proof. To prove the theorem we show that the closed-loop system (3.5), corresponding to the control (3.20), is asymptotically stable.
Let us consider the positive definite functional (3.19) as a Lyapunov-Krasovskii functional for the closed-loop system. Taking into account that the matrices P, D(s), R(s, ν), F(s) and Π(s) satisfy the system of GREs (3.10) – (3.16), one finds that the total derivative of functional (3.19) with respect to the closed-loop system (3.5) has the form

Ẇ_(3.5)[x, y(·)] = −Z[x, y(·)] .   (3.21)

The weight quadratic functional Z[x, y(·)] is positive definite, hence the derivative (3.21) is negative definite on H. Thus the closed-loop system is asymptotically stable.
Remarks 3.2.
1) to prove the theorem it is sufficient to check positive definiteness of functionals (3.7) and (3.19) not on the whole H = Rn × Q[−τ, 0), but only on SLk[−τ, 0];
2) the theorem remains valid if, instead of positive definiteness of functionals (3.7) and (3.19), the following conditions are satisfied:
Z[x, y(·)] ≥ a(‖x‖) ,  W[x, y(·)] ≥ b(‖x‖) ,  for a(·), b(·) ∈ K;
3) positiveness of the quadratic functionals (3.7) and (3.19) can be verified using matrix inequality methods.
Remarks 3.3. Note that
1) the functional Z[x, y(·)] is positive definite if, for example,

⎡ (1/τ) Φ0   Φ1(s) ⎤
⎣ Φ1′(s)     Φ3(s) ⎦ > 0 ,  Φ2(s, ν) ≥ 0 ,  Φ4(s) ≥ 0 ,  Φ5 ≥ 0

for s, ν ∈ [−τ, 0];
2) the functional W[x, y(·)] is positive definite if, for example,

⎡ (1/τ) P   D(s) ⎤
⎣ D′(s)     F(s) ⎦ > 0 ,  Π(s) ≥ 0 ,  R(s, ν) ≥ 0

for s, ν ∈ [−τ, 0].
3.3 Explicit solutions of generalized Riccati equations
Now we present an approach to finding explicit solutions
of GREs (3.10) – (3.16). The approach is based on an
appropriate choice of matrices Φ0 , Φ1 and Φ2 in the cost
functional (3.7) or (3.8).
3.3.1 Variant 1
Theorem 3.2. Let
1) the matrix P be the solution of the matrix equation

P A + A′ P + M = P K P ,   (3.22)

where M is a symmetric n × n matrix;
2) the matrices D(s) and R(s, ν) be defined by

D(s) = e^{−[P K − A′](s+τ)} P Aτ ,   (3.23)
R(s, ν) = { Q(s) D(ν) for (s, ν) ∈ Ω1 ;  D′(s) Q′(ν) for (s, ν) ∈ Ω2 } ,   (3.24)

where
Ω1 = { (s, ν) ∈ [−τ, 0] × [−τ, 0] : s − ν < 0 } ,
Ω2 = { (s, ν) ∈ [−τ, 0] × [−τ, 0] : s − ν > 0 } ,
and

Q(s) = Aτ′ e^{[P K − A′](s+τ)} ,   (3.25)

3) F(s) and Π(s) be n × n matrices with continuously differentiable elements on [−τ, 0].
Then the matrices P, D(s), R(s, ν), F(s) and Π(s) are solutions of GREs (3.10) – (3.16) with the matrices

Φ0 = M − [ D(0) + D′(0) ] − F(0) − τ Π(0) ,
Φ1(s) = P G(s) − R(0, s) ,
Φ2(s, ν) = D′(s) K D(ν) − D′(s) G(ν) − G′(s) D(ν) ,
Φ3(s) = dF(s)/ds + Π(s) ,
Φ4(s) = dΠ(s)/ds ,
Φ5 = F(−τ) .   (3.26)

Proof. The statement of the theorem can be verified by direct substitution (a detailed proof is given in the Appendix).
Remark 3.4. From Theorem 3.2 it follows that

Φ1(s) = −D′(0) Q′(s) .   (3.27)
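The matrices (3.23) and (3.25) are straightforward to evaluate with a matrix exponential. Below is a hedged sketch with assumed data (for which P = I happens to solve (3.22) with M = I), including a check of the boundary condition (3.15).

```python
# Hedged sketch: evaluating D(s) and Q(s) of Variant 1 via expm.
import numpy as np
from scipy.linalg import expm

A    = np.array([[0.0, 1.0], [0.0, 0.0]])
Atau = np.array([[1.0, 0.0], [0.0, 0.0]])
B    = np.array([[1.0], [1.0]])
N    = np.array([[1.0]])
P    = np.eye(2)                        # assumed ARE solution for M = I
tau  = 1.0
K = B @ np.linalg.inv(N) @ B.T          # K = B N^{-1} B'

D = lambda s: expm(-(P @ K - A.T) * (s + tau)) @ P @ Atau     # (3.23)
Q = lambda s: Atau.T @ expm((P @ K - A.T) * (s + tau))        # (3.25)

print("D(-tau) =\n", D(-tau))           # should equal P Atau, cf. (3.15)
print("D(0)    =\n", D(0.0))
```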
3.3.2 Variant 2
Theorem 3.3. Let
1) the matrix P be the solution of the exponential matrix equation (EME)

P A + A′ P + e^{−[P K − A′] τ} P Aτ + Aτ′ P e^{−[P K − A′]′ τ} + M = P K P ,   (3.28)

where M is a symmetric n × n matrix;
2) the matrices D(s) and R(s, ν) have the forms (3.23) – (3.25);
3) F(s) and Π(s) be n × n matrices with continuously differentiable elements on [−τ, 0].
Then the matrices P, D(s), R(s, ν), F(s) and Π(s) are solutions of GREs (3.10) – (3.16) with the matrices

Φ0 = M − F(0) − τ Π(0) ,
Φ1(s) = P G(s) − R(0, s) ,
Φ2(s, ν) = D′(s) K D(ν) − D′(s) G(ν) − G′(s) D(ν) ,
Φ3(s) = dF(s)/ds + Π(s) ,
Φ4(s) = dΠ(s)/ds ,
Φ5 = F(−τ) .   (3.29)

Proof. The statement of the theorem can be verified by direct substitution (a detailed proof is given in the Appendix).
3.3.3 Variant 3
Theorem 3.4. Let
1) F(s) and Π(s) be n × n matrices with continuously differentiable elements on [−τ, 0], and M be a symmetric n × n matrix;
2) the matrix P be the solution of the Riccati matrix equation

P (A + Aτ) + (A′ + Aτ′) P + F(0) + τ Π(0) + M = P K P ,   (3.30)

3) the matrices D(s) and R(s, ν) have the form

D(s) ≡ P Aτ ,   (3.31)
R(s, ν) ≡ Aτ′ P Aτ .   (3.32)
Then the matrices P, D(s), R(s, ν), F(s) and Π(s) are solutions of GREs (3.10) – (3.16) with the matrices

Φ0 = M ,
Φ1(s) = [P K − A′ − Aτ′] P Aτ − P G(s) ,
Φ2(s, ν) = Aτ′ P K P Aτ − Aτ′ P G(ν) − G′(s) P Aτ ,
Φ3(s) = dF(s)/ds + Π(s) ,
Φ4(s) = dΠ(s)/ds ,
Φ5 = F(−τ) .   (3.33)

Proof. The statement of the theorem can be verified by direct substitution (a detailed proof is given in the Appendix).
Remark 3.5. If we take, for instance,

Φ0 = q² K − q (A + Aτ) − q (A + Aτ)′ − F(0) − τ Π(0) ,

where q is an arbitrary number, then the matrix equation (3.30) has the solution P = q E, which is a positive definite matrix (for q > 0).
The simple form of the solution allows us to examine non-negativeness of the corresponding quadratic functional. The substitution of D(s) and R(s, ν) into (3.19) yields

W[x, y(·)] = x′ P x + 2 x′ ∫_{−τ}^{0} P Aτ y(s) ds + ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) Aτ′ P Aτ y(ν) ds dν
 = x′ P x + 2 x′ P ( ∫_{−τ}^{0} Aτ y(s) ds ) + ( ∫_{−τ}^{0} y′(s) Aτ′ ds ) P ( ∫_{−τ}^{0} Aτ y(ν) dν )
 = ( x + Aτ ∫_{−τ}^{0} y(s) ds )′ P ( x + Aτ ∫_{−τ}^{0} y(s) ds ) .

Hence, if the matrix P is positive definite, then the functional W[x, y(·)] is non-negative on H = Rn × Q[−τ, 0).
In the case of system (3.1) with discrete delay only (i.e. G(s) ≡ 0) we can obtain sufficient conditions of positiveness of the weight functional Z[x, y(·)] (3.7).
First of all, in this case the matrix Φ2 has the form

Φ2 = Aτ′ P B N⁻¹ B′ P Aτ ,

hence the corresponding term in the functional (3.7) can be presented in the following form:

∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) Φ2 y(ν) ds dν = ( ∫_{−τ}^{0} y′(s) ds ) Aτ′ P B N⁻¹ B′ P Aτ ( ∫_{−τ}^{0} y(ν) dν )
 = ( B′ P Aτ ∫_{−τ}^{0} y(s) ds )′ N⁻¹ ( B′ P Aτ ∫_{−τ}^{0} y(s) ds ) ,   (3.34)

and, obviously, this term is non-negative on H, because the matrix N⁻¹ is positive definite.
From the presentations (3.33) it follows that the fifth and sixth terms of functional (3.7) are non-negative on H if

Φ4(s) = dΠ(s)/ds ≥ 0 for s ∈ [−τ, 0] ,  Φ5 = F(−τ) ≥ 0 .

Note that the quadratic functional

x′ Φ0 x + 2 x′ ∫_{−τ}^{0} Φ1(s) y(s) ds + ∫_{−τ}^{0} y′(s) Φ3(s) y(s) ds

in (3.7) is positive if, for example,

⎡ (1/τ) Φ0   Φ1(s) ⎤
⎣ Φ1′(s)     Φ3(s) ⎦ > 0  for s ∈ [−τ, 0] .
Thus we obtain the following sufficient conditions for positiveness of the weight functional (3.7) with the coefficients (3.33):

M ≥ 0 ,  dΠ(s)/ds ≥ 0 for s ∈ [−τ, 0] ,  F(−τ) ≥ 0 ,

⎡ (1/τ) M                        [P K − A′ − Aτ′] P Aτ ⎤
⎣ Aτ′ P [P K − A′ − Aτ′]′        dF(s)/ds + Π(s)       ⎦ > 0

for s ∈ [−τ, 0].
We emphasize once more that the matrices M, F(s) and Π(s) are chosen by ourselves. Also remember that formula (3.34) was obtained under the assumption G(s) ≡ 0.
3.4
Solution of Exponential Matrix Equation
To construct explicit solutions of GREs on the basis of
the described approach it is necessary to solve ARE (3.22)
or specific EME (3.28). ARE appears in various control
problems and methods of its solving are well-developed,
including effective software realizations.
We will not discuss theoretical aspects of solvability of
EME. Probably it is connected with controllability and observability of system (2.1). The aim of this section is to
discuss approximate methods of solving EME.
Approximate solutions of EME can be found on the basis of general methods for solving matrix equations

F(P) = 0 ,   (3.35)

where P is an n × n matrix and F(P) is given by

F(P) ≡ P A + A′ P + M − P K P + e^{−[P K − A′] τ} P Aτ + Aτ′ P e^{−[P K − A′]′ τ} .   (3.36)

In this section we describe two such methods.
3.4.1 Stationary solution method
The method consists of the following procedure. Fix an n × n matrix P0, which is considered as the initial approximation, and solve the initial value problem

Ṗ(t) + F(P(t)) = 0 ,  t > 0 ,  P(0) = P0 .   (3.37)

If there exists a finite limit P∗ = lim_{t→∞} P(t) of the solution P(t) of problem (3.37), then we can consider the limit matrix P∗ as an approximate solution of (3.35).
The initial value problem (3.37) can be solved using standard numerical procedures. The described stationary solution method is realized in the Time-delay System Toolbox [99].
3.4.2 Gradient methods
To solve matrix equation (3.35) one can also use gradient-type methods. Consider, for example, application of the Newton method.
Denote by Γ(P) the one-to-one operator that maps an n × n matrix P into the n²-dimensional vector P̄ according to the rule

Γ( [p11 p12 ··· p1n ; p21 p22 ··· p2n ; ··· ; pn1 pn2 ··· pnn] ) = (p11, p12, ..., p1n, p21, p22, ..., p2n, ..., pn1, pn2, ..., pnn) .

Then we can rewrite equation (3.35) in the form

Γ(F(P)) = F̄(P̄) = 0 .   (3.38)

Obviously, a matrix P is a solution of (3.35) if and only if the corresponding vector P̄ is a solution of (3.38). Then one can realize the following iteration procedure:

( dF̄(P̄k)/dP̄ ) ( P̄k+1 − P̄k ) = −F̄(P̄k) ,   (3.39)

where dF̄(P̄)/dP̄ is the Jacobian (whose determinant should be non-zero at every iteration).
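A generic sketch of iteration (3.39) with a finite-difference Jacobian is given below; the residual F passed to it can be, for example, the EME residual (3.36). The toy Lyapunov-type residual in the example is only there to make the sketch self-contained.

```python
# Hedged sketch of Newton's iteration (3.39) on the vectorized equation.
import numpy as np

def newton_matrix_equation(F, P0, tol=1e-10, max_iter=50, h=1e-7):
    n = P0.shape[0]
    p = P0.ravel().astype(float)
    for _ in range(max_iter):
        r = F(p.reshape(n, n)).ravel()
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n * n, n * n))
        for j in range(n * n):                 # finite-difference Jacobian
            dp = np.zeros(n * n); dp[j] = h
            J[:, j] = (F((p + dp).reshape(n, n)).ravel() - r) / h
        p = p + np.linalg.solve(J, -r)         # (3.39)
    return p.reshape(n, n)

A = np.array([[-1.0, 0.5], [0.0, -2.0]])       # toy residual: P A + A'P + I = 0
F = lambda P: P @ A + A.T @ P + np.eye(2)
print(newton_matrix_equation(F, np.eye(2)))
```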
3.5 Design procedure
3.5.1 Variants 1 and 2
To construct the feedback control u∗(x, y(·)) according to the approach described above, it is necessary only to find the matrix P which is the solution of ARE (3.22) or EME (3.28). Then, taking into account the explicit form of the matrix D(s) (3.23), we obtain the following explicit form of the feedback controller in the case of Variants 1 and 2:

u∗(x, y(·)) = −N⁻¹ B′ ( P x + ∫_{−τ}^{0} e^{−[P K − A′](s+τ)} P Aτ y(s) ds ) .   (3.40)
Stabilizing properties of this controller can be checked by Theorem 3.1.
Note that to prove the stabilizing properties of the feedback control (3.40) it is sufficient to check asymptotic stability of the corresponding closed-loop system

ẋ = (A − B N⁻¹ B′ P) x + Aτ y(−τ) + ∫_{−τ}^{0} [ G(s) − B N⁻¹ B′ e^{−[P K − A′](s+τ)} P Aτ ] y(s) ds .   (3.41)

Stability of the closed-loop system can be checked using some sufficient conditions.
It is necessary to note that verification of asymptotic stability of the closed-loop system (3.41) with respect to all initial functions of H is a very laborious and difficult task. However, using computer simulation and special functions of the Time-delay System Toolbox one can check stability of system (3.41) with respect to special classes of functions L ⊂ H.
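For such simulations, the feedback (3.40) itself can be evaluated by quadrature; the following hedged sketch (illustrative data, trapezoidal rule for the distributed term) shows one way to do this.

```python
# Hedged sketch: evaluating the feedback law (3.40) for a state {x, y(.)}.
import numpy as np
from scipy.linalg import expm

A    = np.array([[0.0, 1.0], [0.0, 0.0]])
Atau = np.array([[1.0, 0.0], [0.0, 0.0]])
B    = np.array([[1.0], [1.0]])
Ninv = np.array([[1.0]])
P, tau = np.eye(2), 1.0
K = B @ Ninv @ B.T

def u_star(x, y, n_quad=200):
    """u*(x, y(.)) = -N^{-1} B' (P x + int_{-tau}^0 D(s) y(s) ds)."""
    s_grid = np.linspace(-tau, 0.0, n_quad)
    vals = np.array([expm(-(P @ K - A.T) * (s + tau)) @ P @ Atau @ y(s)
                     for s in s_grid])
    return -Ninv @ B.T @ (P @ x + np.trapz(vals, s_grid, axis=0))

x0 = np.array([1.0, 0.0])
y0 = lambda s: np.array([1.0, 0.0])   # assumed constant initial function
print("u*(x0, y0) =", u_star(x0, y0))
```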
3.5.2 Variant 3
In this case D(s) = P Aτ, hence the explicit form of the feedback control is

u∗(x, y(·)) = −N⁻¹ B′ ( P x + ∫_{−τ}^{0} P Aτ y(s) ds )   (3.42)

and the corresponding closed-loop system has the form

ẋ = (A − B N⁻¹ B′ P) x + Aτ y(−τ) + ∫_{−τ}^{0} [ G(s) − B N⁻¹ B′ P Aτ ] y(s) ds .   (3.43)
3.6 Design case studies
In this section we apply the proposed approach to designing feedback controllers for linear time-delay systems. In all examples the simulation was realized using the software package [99].
3.6.1 Example 1
Consider the system [58]

ẋ = [0 1; 0 0] x + [1 0; 0 0] y(−1) + [1; 1] u .   (3.44)

Note that the open-loop system has two roots with nonnegative real parts: λ1 = 0.56714 and λ2 = 0.0. To construct the controller according to the proposed method let us take the weighting matrices as

M = [1 0; 0 1] ,  N = 1 .
The matrix P, which is the solution of the corresponding ARE (3.22), has the form

P = [1 0; 0 1]

and, by (3.40), the closed-loop control is

u⁰(x, y(·)) = [−1 −1] x + [−1 −1] ∫_{−1}^{0} e^{−[P K − A′](s+1)} P Aτ y(s) ds .   (3.45)

Using special functions of the Time-delay System Toolbox [99] one can check that solutions of the closed-loop system tend to zero (see Figure 3.1).
Fig. 3.1. Components x1 and x2 of the closed-loop solution of Example 1 (both tend to zero).
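A rough way to reproduce the behaviour in Figure 3.1 without the Toolbox is to simulate the closed-loop system by the Euler scheme of Chapter 4, approximating the controller's distributed term over the stored history. The sketch below assumes the data of Example 1 and is not the Toolbox code.

```python
# Hedged sketch: Euler simulation of system (3.44) with feedback (3.40).
import numpy as np
from scipy.linalg import expm

A    = np.array([[0.0, 1.0], [0.0, 0.0]])
Atau = np.array([[1.0, 0.0], [0.0, 0.0]])
B    = np.array([[1.0], [1.0]])
P, K, tau = np.eye(2), np.array([[1.0, 1.0], [1.0, 1.0]]), 1.0

dt, T = 1e-3, 15.0
m = int(round(tau / dt))
s_grid = np.linspace(-tau, 0.0, m + 1)
D_grid = np.array([expm(-(P @ K - A.T) * (s + tau)) @ P @ Atau for s in s_grid])

n_steps = int(round(T / dt))
x = np.zeros((n_steps + m + 1, 2))
x[: m + 1] = np.array([1.0, 0.0])            # constant initial history
for n in range(m, n_steps + m):
    hist = x[n - m: n + 1]                   # x(t+s), s in [-tau, 0]
    integral = np.trapz(np.einsum('kij,kj->ki', D_grid, hist), s_grid, axis=0)
    u = -(B.T @ (P @ x[n] + integral))       # scalar control, N = 1
    x[n + 1] = x[n] + dt * (A @ x[n] + Atau @ x[n - m] + (B @ u).ravel())
print("||x(T)|| =", np.linalg.norm(x[-1]))   # expected to be close to 0
```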
3.6.2 Example 2
Consider the system [180]

ẋ = [0 1; 0 0] x + [−0.3 −0.1; −0.2 −0.4] y(−5) + [0; 0.333] u .   (3.46)

The open-loop system has two roots with positive real parts. Let us take the following weight matrices:

M = [1 0; 0 1] ,  N = 1 .
The solution P of the corresponding ARE (3.22) has the form

P = [2.6469 3.0030; 3.0030 7.9486]

and the corresponding closed-loop control is

u⁰(x, y(·)) = [−1 −2.6469] x + [0 −0.333] I ,   (3.47)

where I = ∫_{−5}^{0} e^{−[P K − A′](s+5)} P Aτ y(s) ds.
Using special functions of the Time-delay System Toolbox [99] one can check that solutions of the closed-loop system tend to zero (see Figure 3.2).
3.6.3 Example 3
Consider the system [120]

ẋ = [0 1; 0 0] x + [0.3 0.6; 0.2 0.4] y(−5) + [0; 1] u .   (3.48)
Fig. 3.2. Components x1 and x2 of the closed-loop solution of Example 2 (both tend to zero).
The open-loop system is unstable. Let us take the weighting matrices as

M = [1 0; 0 1] ,  N = 1 .
The matrix P, which is the solution of the corresponding ARE (3.22), has the form

P = [1.7321 1.0000; 1.0000 1.7321]

and the closed-loop control is

u⁰(x, y(·)) = [−1 −1.7321] x + [0 −1] I ,   (3.49)

where I = ∫_{−5}^{0} e^{−[P K − A′](s+5)} P Aτ y(s) ds.
Using special functions of Time-delay System Toolbox
[99] one can check that solutions of the closed-loop systems
tend to zero (see Figure 3.3).
Fig. 3.3. Components x1 and x2 of the closed-loop solution of Example 3 (both tend to zero).
3.6.4 Example 4
Consider the system [180]

ẋ = A x + Aτ y(−0.5) + B u ,   (3.50)

where

A = [0.1 1; 1 0.1] ,  Aτ = [0.1 −0.1; 0.2 −0.2] ,  B = [0.1 0; 0 −0.2] ,

and G(s) ≡ [0 0; 0 0], τ = 0.5.
Let the weight matrices be

M = [1 0; 0 1] ,  N = [2 0; 0 2] ,

and let the matrices Φ0, Φ1(s), Φ2(s, ν) have the form (3.29).
To find the solution P of the corresponding EME (3.28) using the stationary solution method it is necessary to solve on the interval [0, 10] the matrix differential equation

dP(t)/dt = P(t) A + A′ P(t) + e^{−[P(t) K − A′] τ} P(t) Aτ + Aτ′ P(t) e^{−[P(t) K − A′]′ τ} + M − P(t) K P(t)   (3.51)

with the initial condition

P(0) = [0 10; 20 30] .

Each component of the matrix P(t) tends to a constant. The limit matrix

P(10) = [102.2789 87.8829; 87.8829 69.7475]   (3.52)

can be considered as the approximate solution of EME (3.28).
The corresponding closed-loop control is

u⁰(x, y(·)) = [−8.8312 −8.8190; 4.3987 4.4203] x + [−0.0500 −0.0500; 0 0.0500] I ,   (3.53)

where I = ∫_{−0.5}^{0} e^{−[P K − A′](s+0.5)} P Aτ y(s) ds.
Fig. 3.4. Components x1 and x2 of the closed-loop solution of Example 4 (both tend to zero).
Using functions of the Time-delay System Toolbox [99] one can check that solutions of the closed-loop system tend to zero (see Figure 3.4).
Also, using special functions of the Time-delay System Toolbox one can find the optimal value of the cost functional, Ju_opt ≈ 3.2189. For example, the values of the cost functional corresponding to the scaled optimal control are J0.8u_opt ≈ 3.3971 and J1.2u_opt ≈ 3.3312.
3.6.5 Example 5: Wind tunnel model
A linearized model of the high-speed closed-air unit wind tunnel was described in Chapter 1 (see (1.5)).
Let us design for this model the closed-loop control using LQR algorithms. Let us take the weight matrices as
M = [1 0 0; 0 1 0; 0 0 1] ,  N = 1 .

The solution P of the corresponding ARE (3.22) has the form

P = [0.9820 0 0; 0 1.0837 0.0115; 0 0.0115 0.0169] .
Thus the corresponding closed-loop control is

u⁰(x, y(·)) = [0 −0.4142 −0.6101] x + [0 0 −36] I ,

where I = ∫_{−τ}^{0} e^{−[P K − A′](s+τ)} P Aτ y(s) ds.
The corresponding closed-loop system has the form

ẋ = [−0.5092 0 0; 0 0 1.0000; 0 −50.9117 −41.1639] x + [0 0.0596 0; 0 0 0; 0 0 0] y(−τ) + [0 0 0; 0 0 0; 0 0 −1296] I .   (3.54)

Using the Time-delay System Toolbox one can check that solutions of the closed-loop system tend to zero (see Figure 3.5).
Fig. 3.5. Components x1, x2 and x3 of the closed-loop solution of the wind tunnel model (all tend to zero).
3.6.6 Example 6: Combustion stability in liquid propellant rocket motors
A linearized version of the feed system and combustion chamber equations was described in Chapter 1 (see (1.6)).
Let us design for this model the closed-loop control using LQR algorithms. Let us take the weight matrices as
M = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] ,  N = 1 .   (3.55)
Using the function lqdelay we can find the matrices

C = [0.0398 −1.1134 0.2332 −0.1198] ,
D0 = [0 −1 0 0] ,
D1 = [−0.2 0.0398 −1 0; 0 −1.1134 0 1; 0 0.2332 −1 −1; 0 −1.1198 1 0] ,
D2 = [−3.3101 0 4.1376 0; 0.1794 0 −0.2243 0; −0.0180 0 0.0225 0; 0.2386 0 −0.2983 0] .
Thus, to system (1.7) with the weight matrices (3.55) corresponds the LQR control

u⁰(x, y(·)) = [0.0398 −1.1134 0.2332 −0.1198] x + [0 −1 0 0] ∫_{−5}^{0} e^{D1 s} D2 y(s) ds .
The corresponding closed-loop system has the form

ẋ(t) = [0 γ−1 0 0; 0.0398 −1.1134 0.2332 −1.1198; −1 0 −1 1; 0 1 −1 0] x(t) + [−γ 0 1 0; 0 0 0 0; 0 0 0 0; 0 0 0 0] x(t − δ) + [0 0 0 0; 0 −1 0 0; 0 0 0 0; 0 0 0 0] ∫_{−5}^{0} e^{D1 s} D2 y(s) ds .   (3.56)
Using functions of Toolbox one can check that solutions
of the closed-loop systems tend to zero (see Figure 3.6).
Fig. 3.6. Components x1, x2, x3 and x4 of the closed-loop solution of the combustion model (all tend to zero).
Note that for γ = 0.95 and δ = 0.87 one can find solutions of GREs. However, the corresponding controller does
not stabilize the system (see Figure 3.7).
Fig. 3.7. Components x1, x2, x3 and x4 of the closed-loop solution for γ = 0.95 and δ = 0.87; the corresponding controller does not stabilize the system.
Chapter 4
Numerical methods
4.1 Introduction
In this chapter we describe an approach to constructing numerical methods for linear time-varying systems with delays

ẋ = A(t) x + Aτ(t) y(−τ(t)) + ∫_{−τ(t)}^{0} G(t, s) y(s) ds + v(t)   (4.1)

with the initial conditions

x(t0) = x⁰ ,   (4.2)
x(t0 + s) = y⁰(s) ,  −τ ≤ s < 0 .   (4.3)

Here A(t), Aτ(t) are n × n matrices with piece-wise continuous elements, G(t, s) is an n × n matrix with piece-wise continuous elements on R × [−τ, 0], v is a given n-dimensional vector-function, τ(t) : R → [0, τ] is a continuous function, τ is a positive constant; {x⁰, y⁰(·)} ∈ Rn × Q[−τ, 0).
For convenience, we will use the following notation for system (4.1):

ẋ = f(t, x, y(·))   (4.4)

where

f(t, x, y(·)) ≡ A(t) x + Aτ(t) y(−τ(t)) + ∫_{−τ(t)}^{0} G(t, s) y(s) ds + v(t) .
Note, unlike ODE, even for linear DDE there are no
general methods of finding solutions in explicit forms. So
elaboration of numerical algorithms is the only way to find
trajectories of the corresponding systems.
At present various specific numerical methods are constructed for solving specific delay differential equations.
Most investigations are devoted to numerical methods
for systems with discrete delays and Volterra integrodifferential equations.
An exhaustive review of papers published until 1972 on
DDE numerical methods is given in [38]. Consequent development of DDE numerical analysis and the corresponding bibliography is reflected in [63, 8, 9, 10, 11] and the
corresponding chapters of the books [77, 72].
For specific classes of DDE there were elaborated special
codes: [12, 35, 56, 83, 149, 156, 186].
Unfortunately, most of these algorithms are laboriuos
for practical implementation even for simple DDE initial
value problems, because the algorithms are based on complicated schemes of handling the discontinuities of DDE
solutions.
In this chapter we follow the approach [65, 22, 102] to
constructing numerical DDE methods. The approach is
based on the assumption of smoothness of DDE solutions.
The distinguishing feature of the approach is that the
numerical methods for DDE are direct analogies of the corresponding classical numerical methods of ODE theory, i.e.,
if delays disappear, then the methods coincide with ODE
methods.
Numerical Methods
91
Of course, exact (analytical) solutions of DDE have, as
a rule, discontinuities of derivatives which can affect the
numerical algorithms used for their approximate solving.
However
• for a specific DDE, an initial function can be approximated, as a rule, by a sequence of (initial) functions
which generate smooth solutions1 ,
• our numerical experiments showed that described in
the book algorithms are robast with respect to discontinuities of derivatives of DDE solutions.
4.2
Elementary one-step methods
The aim of this section is to demonstrate the basic idea
of the general approach (to constructing numerical methods) on a simple one-step numerical scheme for initial value
problem (4.4) – (4.3).
For the sake of simplicity we consider a uniform (regular) grid tn = t0 + nΔ, n = 0, 1, . . . , N, of the interval
!
θ "
; and suppose that the ratio
[t0 , t0 + θ] here Δ =
N
τ
= m is a positive integer.
Δ
Our aim is to obtain on the interval [t0 , θ] approximations un ∈ Rl , n = 0, 1, . . . , N, to the solution x(t) of the
initial value problem (4.4) – (4.3) at points t0 ,. . .,tN ; that
is
un ≈ x(tn ) , n = 0, 1, . . . , N .
Definition 4.1. A sequence {un }, that approximates
the solution x(t), is called the discrete model2 of system
(4.4).
1 Hence, taking into account continuous dependence of DDE solution on initial
data and the approximate character of numerical procedures, we can assume that
the given initial function generates the smooth solution.
2 Numerical model, approximate model.
92
Systems with Delays
4.2.1
Euler’s method
General scheme
The method is very simple but not practical. However, an
understanding of this method builds the way for the construction of the more practical (but also more complicated)
numerical methods for DDE.
The discrete model
u0 = x0 ,
(4.5)
un+1 = un + Δf (tn , un , utn (·))
(4.6)
is called Euler’s method.
Interpolation
To find at time tn the next approximation un+1 using
Euler’s scheme (4.6) it is necessary to calculate the rightpart f (t, x, y(·)) of system (4.4) on the pre-history
{ui , n − m ≤ i ≤ n}
(4.7)
of the discrete model. Pre-history (4.7) of the discrete
model is a finite set of vectors un−m ,. . .,un , meanwhile the
functionals f in the right part of system (4.4) is defined, in
general case, on functions of H. Hence, to calculate a value
of the functional f on the pre-history of the discrete model
it is necessary to make an interpolation of the approximate
solution un .
Thus under utn (·) in (4.6) it is necessary to understand
a function
utn (·) ≡ {ũ(s) , tn − τ ≤ s < 0 } ,
(4.8)
constructed by the finite set of points (4.7) using an interpolational procedure.
Note, because of the interpolational error, an order of
accuracy of method (4.6) should also depend on interpolational error.
Numerical Methods
93
One can use the simple piece-wise constant interpolation

ũ(t) = { ui for t ∈ [ti, ti+1) ;  y⁰(t − t0) for t ∈ [t0 − τ, t0) }   (4.9)

to construct utn(·).
The method (4.5), (4.6), (4.9) is Euler's method with piece-wise constant interpolation of the discrete pre-history (of the model).
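A compact sketch of this scheme for a scalar equation with one discrete delay is given below (assumed test equation x′(t) = −x(t − 1)); the interpolated pre-history is passed to the right-hand side as a callable.

```python
# Hedged sketch of Euler's method (4.5)-(4.6) with piece-wise constant
# interpolation (4.9) of the discrete pre-history.
import numpy as np

def euler_dde(f, y0, t0, theta, tau, N):
    dt = theta / N
    m = int(round(tau / dt))
    u = np.empty(N + m + 1)
    u[: m + 1] = [y0(t0 + (i - m) * dt) for i in range(m + 1)]  # history samples
    for n in range(m, N + m):
        t_n = t0 + (n - m) * dt
        # piece-wise constant interpolation, s in [-tau, 0)
        pre_history = lambda s: u[n + int(np.floor((s + 1e-12) / dt))]
        u[n + 1] = u[n] + dt * f(t_n, u[n], pre_history)
    return u[m:]

f = lambda t, x, y: -y(-1.0)            # x'(t) = -x(t - 1)
sol = euler_dde(f, y0=lambda t: 1.0, t0=0.0, theta=10.0, tau=1.0, N=10000)
print(sol[-1])                          # approximation of x(10)
```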
Convergence of Euler’s method
Let us investigate convergence of the method.
Definition 4.2. Numerical method
1) converges, if un − x(tn ) → 0 as Δ → 0 for all n =
1, . . . , N;
2) has a convergence order p, if there exists a constant C
such that
un − x(tn ) ≤ CΔp for all n = 1, . . . , N.
Euler’s method (4.6) – (4.9) converges and has the convergence order p = 1.
Theorem 4.1. Let the solution x(t) of the initial value
problem (4.4) – (4.3) be twice continuous differentiable
function. Then Euler’s method (4.6) – (4.9) converges and
has the convergence order p = 1.
The described Euler’s method with the piece-wise constant interpolation is the simplest of converging methods.
To obtain more accurate methods it is necessary to use high
order interpolational procedures and more complicated discrete models. Such methods will be discussed in the next
chapter.
94
Systems with Delays
Now let us consider the realization of Euler's scheme for specific systems with constant, time-varying and distributed delays.
Constant discrete delay
Consider a system with the discrete delay

ẋ(t) = A(t) x(t) + Aτ x(t − τ) .   (4.10)

If the ratio τ/Δ = m is a positive integer, then the numerical model (4.6) has the simple form

un+1 = un + Δ ( A(tn) un + Aτ un−m ) .

Note that in this case an interpolation is not necessary! However, if we use a τ-incommensurable mesh of the time interval, then it is also necessary to make an interpolation to approximate the delay term.
Time-varying discrete delay
Consider a system with time-varying delay
ẋ(t) = A x(t) + Aτ x(t − τ (t))
(4.11)
0 < τ (t) ≤ τ . In this case the corresponding Euler’s scheme
is
un+1 = un + Δ A un + Aτ ũ(tn − τ (tn )) ,
where ũ(t) : [tn−m , tn ] → Rl is an interpolation of the discrete values un−m , . . . , un .
Distributed delay
Consider a system with distributed delay

ẋ(t) = A x(t) + Aτ x(t − τ) + ∫_{−τ}^{0} G(s) x(t + s) ds .   (4.12)

According to Euler's scheme we calculate only the discrete values un−m, ..., un. So, in order to compute the integral it is necessary, similarly to the time-varying case, to construct an interpolational function ũ(t) of the discrete values un−m, ..., un. Then the corresponding numerical model is

un+1 = un + Δ ( A un + Aτ ũ(tn − τ) + ∫_{−τ}^{0} G(s) ũ(tn + s) ds ) .
4.2.2 Implicit methods (extrapolation)
In Euler's method we use the presentation

x(tn+1) = x(tn) + ∫_{tn}^{tn+1} f(t, x(t), xt(·)) dt ,   (4.13)

and approximate the integral by the formula

∫_{tn}^{tn+1} f(t, x(t), xt(·)) dt ≈ Δ f(tn, x(tn), xtn(·)) .

Similarly to the ODE case, it seems reasonable that a more accurate value would be obtained if we were to approximate the integral in (4.13) by the trapezoidal rule

∫_{tn}^{tn+1} f(t, x(t), xt(·)) dt ≈ (Δ/2) [ f(tn, x(tn), xtn(·)) + f(tn+1, x(tn+1), xtn+1(·)) ] ,

which leads to the numerical scheme

un+1 = un + (Δ/2) [ f(tn, un, utn(·)) + f(tn+1, un+1, utn+1(·)) ] .   (4.14)
96
Systems with Delays
Equation (4.14) gives us only the implicit formula for
un+1 (because un+1 is also involved in the right-hand side
of (4.14)), so this scheme is the implicit numerical method.
In order to use the implicit method (4.14) it is necessary to calculate values of the functional f (t, x, y(·)) on
functions
utn+1 (·) = {u(tn+1 + s) , −τ ≤ s < 0} .
(4.15)
In case of discrete delays, i.e. (4.11), we can use an
interpolation ũ(t) : [tn−m , tn ] → Rl in order to calculate
utn+1 (·) = ũ(tn+1 − τ (tn+1 ))
(4.16)
if τ (tn+1 ) ≥ Δ.
However, if τ (tn+1 ) < Δ, then, in order to calculate
(4.16), it is necessary to make an extrapolation of the
pre-history utn (·) on the interval [tn , tn + Δ].
Remark 4.1. In case of distributed delays it is also necessary to make an extrapolation.
This method has accuracy O(Δ2 ) if the second order interpolation is used.
4.2.3 Improved Euler's method
One can modify the implicit method (4.14) in order to obtain an explicit method.
We can predict un+1 by Euler's formula

ûn+1 = un + Δ f(tn, un, utn(·))

and, in order to obtain a more accurate approximation, substitute this value into the right-hand side of (4.14) instead of un+1:

un+1 = un + (Δ/2) [ f(tn, un, utn(·)) + f(tn+1, un + Δ f(tn, un, utn(·)), utn+1(·)) ] .   (4.17)

This explicit scheme is the improved Euler's method. It has accuracy O(Δ²) if second order interpolation is used.
4.2.4 Runge-Kutta-like methods
In this section we describe numerical methods for DDE which are direct generalizations of the classic Runge-Kutta methods of ODE theory. Note that the parameters of these methods are the same as in the ODE case, i.e., if the delays disappear then we obtain the classic Runge-Kutta method for ODE.
Runge-Kutta-like methods of the second order
The Runge-Kutta-like method (of order 2) has the form

un+1 = un + Δ f(tn + a Δ, un + b Δ f(tn, un, utn), utn+aΔ) ,   (4.18)

where the constants a and b are to be selected. For example, if we take a = b = 1/2, then we obtain the midpoint method

un+1 = un + Δ f(tn + Δ/2, un + (Δ/2) f(tn, un, utn), utn+Δ/2(·)) .   (4.19)

We emphasize that the coefficients of the method are the same as in the ODE case.
Runge-Kutta-like method of the fourth order
The Runge-Kutta method of order 4 is classic and one of the most popular numerical methods for ODE, because its rate of convergence is O(Δ⁴) and it is easy to code.
For DDE this method has the following form:

un+1 = un + (Δ/6)(h1 + 2h2 + 2h3 + h4) ,
h1 = f(tn, un, utn(·)) ,
h2 = f(tn + Δ/2, un + (Δ/2) h1, utn+Δ/2(·)) ,
h3 = f(tn + Δ/2, un + (Δ/2) h2, utn+Δ/2(·)) ,
h4 = f(tn + Δ, un + Δ h3, utn+Δ(·)) .

The method has the fourth order of convergence (under an appropriate smoothness of solutions) if we use the pre-history interpolation utn by piece-wise cubic splines and the continued extrapolation utn+Δ/2(·).
2
Numerical Methods
4.3
4.3.1
99
Interpolation and extrapolation of
the model pre-history
Interpolational operators
In this section we describe methods of interpolation and
extrapolation of the pre-history of the discrete model un
using functions composed by polynomials of p-th degree.
Let us consider the same partition of the time interval
[t0 , t0 + θ] as in the previous section. Remember, this partition is uniform only for the sake of simplicity.
Also remember that the pre-history {ui }n of the discrete
model {ui}N
−m at time tn is the set of m + 1 vectors:
{ui}n = {ui ∈ Rl , n − m ≤ i ≤ n} .
This set of vectors defines at time tn the future dynamics
of the discrete model.
Definition 4.3. Interpolational operator I of the discrete model pre-history is a mapping I : {ui}n → u(·) ∈
Q[tn − τ, tn ].
Definition 4.4. We say that an interpolational operator
I has an approximation order p at a solution x(t) if there
exist constants C1 , C2 such that
x(t) − u(t) ≤ C1
max
i≥0,n−Nτ ≤i≤n
ui − xi + C2 Δp (4.20)
for all n = 0, 1, . . . , N and t ∈ [tn − τ, tn ].
Example 4.1. The following mapping uses piece-wise linear interpolation and is an interpolational operator of the second order:

I : {ui}n → u(t) = { [ (t − ti) ui+1 + (ti+1 − t) ui ] / Δ for t ∈ [ti, ti+1] ;  y⁰(t − t0) for t ∈ [t0 − τ, t0) } .   (4.21)
General interpolational operators I can be constructed using splines of degree p. Without loss of generality we can suppose that m/p = k is a natural number; otherwise one can take m divisible by p. Let us divide the interval [tn − τ, tn] = [tn−m, tn] into k subintervals [tn_i, tn_{i−1}], i = 1, ..., k, of length pΔ in such a way that tn_0 = tn, tn_1 = tn−p, .... On every subinterval we construct the interpolational polynomial Lp(t) = Lⁱp(t) from the values un_i−p, un_i−p+1, ..., un_i:

Lⁱp(t) = Σ_{l=0}^{p} un_i−l Π_{j=n_i−p, j≠n_i−l}^{n_i} (t − tj)/(tn_i−l − tj) .   (4.22)

Then we can define the following interpolational operator I (of the discrete pre-history):

I : {ui}n → u(t) = { Lⁱp(t) for tn_i ≤ t < tn_{i−1}, t ≥ t0 ;  y⁰(t − t0) for t ∈ [t0 − τ, t0) } .   (4.23)
Theorem 4.2. Let the solution x(t) of the initial value problem (4.4) – (4.3) be (p + 1)-times continuously differentiable on the interval [t0 − τ, t0 + θ]. Then the interpolational operator (4.23) has approximation order p + 1.
One can use other types of interpolation for DDE numerical methods.
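For reference, the Lagrange form (4.22) on a single subinterval can be evaluated directly; the sketch below uses assumed nodes and values.

```python
# Hedged sketch: evaluating the Lagrange interpolating polynomial (4.22).
import numpy as np

def lagrange_eval(t, nodes, values):
    """Evaluate the polynomial through (nodes[j], values[j]) at the point t."""
    result = 0.0
    for l, (t_l, u_l) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, t_j in enumerate(nodes):
            if j != l:
                basis *= (t - t_j) / (t_l - t_j)
        result += u_l * basis
    return result

nodes = np.array([-0.3, -0.2, -0.1, 0.0])   # p = 3, assumed step 0.1
values = np.sin(nodes)                      # assumed sampled pre-history
print(lagrange_eval(-0.05, nodes, values), np.sin(-0.05))
```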
4.3.2
Extrapolational operators
Some DDE numerical methods require to calculate a prehistory utn +a (·) of the discrete model for a > 0. In this
Numerical Methods 101
case it is necessary to use an extrapolation of the model on
the interval [tn , tn + a].
Definition 4.5. Extrapolational operator Ea ( a > 0 )
of the discrete model pre-history is a mapping E : {ui}n →
u(·) ∈ Q[tn , tn + aΔ].
Definition 4.6. We say that an extrapolational operator Ea has an approximation order p at a solution x(t) if
there exist constants C3 , C4 such that
x(t) − u(t) ≤ C3 max ui − xi + C4 (Δ)p
n−m≤i≤n
for all n = 0, 1, . . . , N − 1, and t ∈ [tn , tn + aΔ].
(4.24)
One of the extrapolation methods, is an extrapolation
by continuity of an interpolational polynomial
E : {ui }n → u(t) = L0p (t), t ∈ [tn , tn + aΔ] ,
(4.25)
over the right side of the point tn ; here L0p (t) is the interpolational polynomial of a degree p constructed by the values
uj at the interval [tn−p , tn ]:
L0p (t)
=
p
l=0
un−l
n
.
t − tj
.
tn−l − tj
j=n−p;j =n−l
Definition 4.7. An extrapolation constructed by:
- spline interpolation on the interval [tn , tn − τ ],
- continuation of the last polynomial on [tn , tn + Δ],
is called an extrapolation by continuation or continued extrapolation.
Theorem 4.3. Let a solution x(t) of initial value problem (4.4) – (4.3) be (p + 1)-times continuous differentiable
on [t0 − τ, t0 + θ]. Then the continued extrapolation operator, corresponding to an interpolational spline of a degree
p, has an approximation order of the degree p + 1.
102
Systems with Delays
4.3.3
Interpolation-Extrapolation operator
In some cases it is convenient to unify interpolational operator and extrapolation operator into the one operator of
interpolation-extrapolation.
Definition 4.8. Interpolation-extrapolation operator
IE of the pre-history of a discrete model is a mapping
IE : {ui}n → u(·) ∈ Q[tn − τ, tn + aΔ] ,
a > 0 is a constant.
Definition 4.9. An interpolation-extrapolation operator IE has an approximation order p at a solution x(t) if
there exist constants C5 , C6 such that
x(t) − u(t) ≤ C5 max ui − xi + C6 (Δ)p
n−m≤i≤n
(4.26)
for all n = 0, 1, . . . , N − 1, and t ∈ [tn − τ, tn + aΔ].
Definition 4.10. An operator IE is consistent if
u(ti ) = ui , i = n − m, . . . , n .
Definition 4.11. An operator IE satisfies the Lipschitz
condition if there exists a constant LI such that for any
(1)
(2)
discrete pre-histories {ui }n and {ui }n
max
[tn −τ ≤t≤tn +aΔ
u(1) (t) − u(2) (t) ≤ LI
(1)
(2)
max ui − ui ,
n−m≤i≤n
"
"
!
!
(1)
(2)
where u(1) (·) = IE {ui }n , u(2) (·) = IE {ui }n .
The methods of interpolation and extrapolation described in this section are consistent and satisfy the Lipschitz condition.
Numerical Methods 103
4.4 Explicit Runge-Kutta-like methods
Let some interpolational operator I and extrapolational operator E be fixed.
An explicit k-stage³ Runge-Kutta-like method (further we use the abbreviation ERK) with the interpolation I and the extrapolation E is the numerical model

u0 = x⁰ ,   (4.27)
un+1 = un + Δ Σ_{i=1}^{k} σi hi(un, utn(·)) ,  n = 0, 1, ..., N − 1 ,   (4.28)
h1(un, utn(·)) = f(tn, un, utn(·)) ,   (4.29)
hi(un, utn(·)) = f( tn + ai Δ, un + Δ Σ_{j=1}^{i−1} bij hj(un, utn(·)), utn+aiΔ(·) ) .   (4.30)

The pre-history of the discrete model is defined as

ut(s) = { y⁰(t + s − t0) for t + s < t0 ;  I({ui}n) for tn − τ ≤ t + s < tn ;  E({ui}n) for tn ≤ t + s ≤ tn + aΔ } ,   (4.31)

a = max_{1≤i≤k} |ai| .

The numbers ai, σi, bij are called the coefficients of the method. We denote σ = max_{1≤i≤k} |σi|, b = max_{1≤i≤k, 1≤j≤k−1} |bij|.
Let us investigate the convergence order (in the sense of Definition 4.2) of ERK-like methods.
³ k is a natural number.
Definition 4.12. Residual ψ(tn ) of ERK-like method
is the function
xn+1 − xn −
ψ(tn ) =
σi hi (xn , xtn (·)) .
Δ
i=1
k
Note that a residual is defined on an exact solution x(t)
and does not depend on an interpolation and an extrapolation.
Definition 4.13. A residual ψ(tn ) has an order p if
there exists a constant C such that ψ(tn ) ≤ CΔp for all
n = 0, 1, . . . , N − 1.
Theorem 4.4. Let numerical method (4.27) – (4.31)
have
1) an approximation order p1 > 0,
2) error of pre-history interpolation of an order p2 > 0,
3) error of pre-history extrapolation of an order p3 > 0.
Then the method converges and has the convergence order
p = min {p1 , p2 , p3 }.
4.5
Approximation orders of ERK-like
methods
For ODE an approximation order of an explicit numerical
Runge-Kutta method is defined using the expansion of an
exact solution and a right part of ODE into the Taylor series.
Example 4.2. It is known that for ODE
ẋ = f (t, x)
the improved Euler method

un+1 = un + (Δ/2) [ f(tn, un) + f(tn + Δ, un + Δ f(tn, un)) ]

has the second approximation order at a sufficiently smooth solution. Consider the procedure of estimating the approximation order of this method. The residual of the method is

ψ(tn) = (xn+1 − xn)/Δ − (1/2) [ f(tn, xn) + f(tn + Δ, xn + Δ f(tn, xn)) ] .

Expanding the exact solution x(t) into Taylor's series we obtain

xn+1 = xn + ẋ(tn) Δ + ẍ(tn) Δ²/2 + O(Δ³) = xn + f(tn, xn) Δ + (Δ²/2) [ ∂f(tn, xn)/∂t + (∂f(tn, xn)/∂x) f(tn, xn) ] + O(Δ³) .

Also we have

f(tn + Δ, xn + Δ f(tn, xn)) = f(tn, xn) + [ ∂f(tn, xn)/∂t + (∂f(tn, xn)/∂x) f(tn, xn) ] Δ + O(Δ²) .

Substituting these formulas into the residual we obtain ψ(tn) = O(Δ²).
For DDE an approximation order of a numerical method
also can be found using expansion of a solution and a right
part of DDE into Taylor’s series. However, in this case it is
necessary to use the techniques of the i–smooth analysis.
We emphasize that coefficients of Taylor’s series expansion of a solution and a right part of DDE are the same as
for ODE. Thus the following proposition is valid.
Theorem 4.5. If an ERK-method for ODE has an approximation order p then an ERK-like method for DDE
106
Systems with Delays
with the same coefficients also has an approximation order p.
This theorem together with Theorem 4.4 (on a convergence order) allow us to construct for DDE analogies of all
known ERK-methods of ODE theory. Of course, in DDE
case it is necessary to use the suitable operators of interpolation and extrapolation.
For example, the improved Euler method for DDE (with
the same coefficients as in Example 4.2) with piece-wise
linear interpolation (4.21) and extrapolation (4.25) has the
second convergence order.
The 4–stage ERK-like method for DDE has the following form
1
un+1 = un + Δ (h1 + 2h2 + 2h3 + h4 ) ,
6
h1 = f (tn , un , utn (·)) ,
h2 = f (tn +
Δ
Δ
, un + h1 , utn + Δ (·)) ,
2
2
2
h3 = f (tn +
Δ
Δ
, un + h2 , utn + Δ (·)) ,
2
2
2
h4 = f (tn + Δ, un + Δh3 , utn +Δ (·)) .
This method has the fourth order of convergence (under
an appropriate smoothness of solutions) if we use the prehistory interpolation by piece-wise cubic splines and the
continued extrapolation.
For an approximation order p ≥ 5 there is no p-stage
ERK-methods; this fact is called the Butcher barriers [72].
Further we describe 6-stage ERK-method of order p = 5 –
the so-called Runge-Kutta-Fehlberg method.
4.6 Automatic step size control
4.6.1 Richardson extrapolation
In the case of DDE the Richardson extrapolation can be obtained in the same way as for ODE. This procedure allows us to derive a practical error estimate of a numerical method.
Consider for the initial value problem (4.4) – (4.3) a numerical method of order p. Fix Δ > 0 and calculate two values u1 and u2 of the corresponding numerical model. Denote x1 = x(t0 + Δ) and x2 = x(t0 + 2Δ); then

ε1 = x1 − u1 = C Δ^{p+1} + O(Δ^{p+2}) ,
ε2 = x2 − u2 = C Δ^{p+1} + C Δ^{p+1}(1 + O(Δ)) + O(Δ^{p+2}) = 2 C Δ^{p+1} + O(Δ^{p+2}) .   (4.32)

The factor 2 arises in ε2 because it consists of the transferred error of the first step and the local error of the second step.
Let w be the value of the numerical model corresponding to one step of the double length 2Δ. Then

x2 − w = C (2Δ)^{p+1} + O(Δ^{p+2}) .   (4.33)

From (4.32) and (4.33) we obtain

ε2 = (u2 − w)/(2^p − 1) + O(Δ^{p+2}) .   (4.34)

Hence the value

û2 = u2 + (u2 − w)/(2^p − 1)

approximates x2 = x(t0 + 2Δ) with order p + 1.
This procedure is called Richardson extrapolation and allows one to elaborate a class of extrapolational methods for ODE, among which the most powerful is, apparently, the Gragg-Bulirsch-Stoer algorithm [72].
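A tiny numerical illustration of the estimate (4.34) (here for the explicit Euler method, p = 1, on the assumed test equation x′ = −x) is given below.

```python
# Hedged sketch: Richardson error estimate (4.34) for explicit Euler, p = 1.
import numpy as np

p, dt, x0 = 1, 0.1, 1.0
step = lambda x, h: x + h * (-x)             # one Euler step for x' = -x

u1 = step(x0, dt)
u2 = step(u1, dt)                            # two steps of size dt
w  = step(x0, 2 * dt)                        # one step of size 2*dt
err_est = (u2 - w) / (2**p - 1)              # estimate (4.34) of x(2dt) - u2
u2_hat = u2 + err_est                        # improved value of order p + 1

exact = np.exp(-2 * dt)
print("true error:", exact - u2, " estimated:", err_est,
      " improved error:", exact - u2_hat)
```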
4.6.2 Automatic step size control
On the basis of estimate (4.34) one can organize a procedure of an automatic step size control that guarantees a
given accuracy tol. Below we describe the corresponding
algorithm using notation err for the error.
Let Δold be an initial value of the step. We calculate two
values u1 and u2 of the discrete model corresponding to this
step, and the value w of the discrete model corresponding
to the double step 2Δold . Calculate the error
err = (1/(2^p − 1)) max_{i=1,...,l} |u_{2,i} − w_i| / d_i,
where the index i denotes the corresponding coordinate of
the vectors, di is a scale factor. If di = 1 then we have an
absolute error, if di = |u2,i| then we have a relative error.
One can use other norms and scales.
From the relations

err = C(2Δold)^{p+1},   tol = C(2Δnew)^{p+1}

we obtain the formula for the new step size

Δnew = (tol/err)^{1/(p+1)} Δold.
Two variants are possible:
1) Δnew < Δold: then we accept the new step size Δnew;
2) Δnew > Δold: then we accept the two previous model values u1 and u2, and to calculate u3 we use Δold, or we may even enlarge it.
For the practical realization of the algorithm for ODE, the following more elaborate rule is usually used:

Δnew = Δold · min( facmax, max( facmin, fac · (tol/err)^{1/(p+1)} ) ).

It allows one to avoid excessively large increases or decreases of the step size. In many programs fac = 0.8 and facmax ∈ [1.5, 5].
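A direct Python transcription of this rule might look as follows; fac and facmax follow the values just quoted, while facmin and the sample numbers are assumptions added only for illustration:

```python
def new_step_size(dt_old, err, tol, p, fac=0.8, facmin=0.2, facmax=5.0):
    """Step size update dt_new = dt_old * min(facmax, max(facmin, fac*(tol/err)**(1/(p+1)))).
    `err` is the estimated error of the last step, `p` the order of the method."""
    if err == 0.0:                   # perfect step: allow the maximum increase
        factor = facmax
    else:
        factor = min(facmax, max(facmin, fac * (tol / err) ** (1.0 / (p + 1))))
    return dt_old * factor

# example: a second-order method whose last step produced err = 3e-4 with tol = 1e-4
print(new_step_size(dt_old=0.05, err=3e-4, tol=1e-4, p=2))
```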
4.6.3 Embedded formulas
In the previous subsection we described the algorithm of a
step size control on the basis of one numerical method for
two different step sizes Δ and 2Δ.
However, to obtain an error estimate and to organize an
automatic step size control procedure one can also use values of two numerical models of different orders with respect
to one step size.
This approach is especially effective if the coefficients ai, bij of the Butcher tableau of the lower-order method coincide with part of the coefficients of the higher-order method, because in this case some of the values already calculated for the lower-order method can be reused in the higher-order method. Such methods are called embedded methods.
A method of order p

un+1 = un + Δ Σ_{i=1}^{k} σi hi(u_{tn}(·))

is considered as the basic method, and a method of order p + 1

ûn+1 = un + Δ Σ_{i=1}^{k} σ̂i hi(u_{tn}(·))

is used for the estimation of the error.
An example of embedded methods is the pair consisting of the improved Euler method and a Runge-Kutta method of the third order:

  0    |
  1    | 1
  1/2  | 1/4   1/4
 ------+------------------
 un+1  | 1/2   1/2   0
 ûn+1  | 1/6   1/6   4/6
This pair is called the Runge-Kutta-Fehlberg method of order 2–3 (RKF 2(3)); for DDE it is necessary to use a second-order interpolation and extrapolation of the discrete model prehistory.
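The following Python sketch carries out one step of this embedded pair for an ODE x' = f(t, x); it returns the improved Euler value, the third-order value and their difference, which serves as the error estimate used for step size control. (For a DDE the stages would also receive the interpolated prehistory, omitted here for brevity; the test call is an arbitrary illustration.)

```python
def rkf23_step(f, t, x, dt):
    """One step of the embedded RKF 2(3) pair given by the tableau above.
    Returns the 2nd-order value, the 3rd-order value and the local error estimate."""
    k1 = f(t, x)
    k2 = f(t + dt, x + dt * k1)
    k3 = f(t + dt / 2, x + dt * (k1 + k2) / 4)
    u_low = x + dt * (k1 + k2) / 2             # improved Euler (order 2)
    u_high = x + dt * (k1 + k2 + 4 * k3) / 6   # third-order formula
    return u_low, u_high, abs(u_high - u_low)  # difference = error estimate for u_low

# illustration on x' = -x
u2, u3, err = rkf23_step(lambda t, x: -x, t=0.0, x=1.0, dt=0.1)
print(u2, u3, err)
```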
More accurate is the Runge-Kutta-Fehlberg method of order 4–5 (RKF 4(5)):

  0     |
  1/4   | 1/4
  3/8   | 3/32         9/32
  12/13 | 1932/2197   −7200/2197    7296/2197
  1     | 439/216     −8            3680/513      −845/4104
  1/2   | −8/27        2           −3544/2565      1859/4104    −11/40
 -------+-----------------------------------------------------------------------
  un+1  | 25/216       0            1408/2565      2197/4104    −1/5      0
  ûn+1  | 16/135       0            6656/12825     28561/56430  −9/50     2/55
This method is used in most software packages for ODE (in the DDE case it is necessary to use an interpolation-extrapolation operator of the fourth order).
Chapter 5
Appendix

5.1 i-Smooth calculus of functionals
In the functional

V[x, y(·)] : Rn × Q[−τ, 0) → R   (5.1)

x is the finite-dimensional variable, so we can calculate the gradient ∂V/∂x (of course, if these derivatives exist).
In this section we describe basic constructions of the invariant derivative of a functional with respect to the functional variable y(·).
5.1.1 Invariant derivative of functionals
In the sequel, for {x, y(·)} ∈ H and Δ > 0 we denote by
EΔ [x, y(·)] the set of functions Y (·) : [−τ, Δ] → Rn such
that:
1. Y (0) = x ,
2. Y (s) = y(s) , −τ ≤ s < 0 ,
3. Y (·) is continuous on [0, Δ] .
That is, EΔ[x, y(·)] is the set of all continuous continuations of {x, y(·)} onto the interval [0, Δ]. We also set E[h] = ∪_{Δ>0} EΔ[h].
For functional (5.1) and a function Y (·) ∈ E[h] we can
construct the function
ψ̂Y (ξ) = V [x, yξ (·)] ,
(5.2)
where yξ (·) = {Y (ξ + s), −τ ≤ s < 0} ∈ Q[−τ, 0) and
ξ ∈ [0, Δ]. Note that function (5.2) and the interval [0, Δ] depend on the choice of Y(·) ∈ E[h].
Definition 5.1. Functional (5.1) has at the point p = {x, y(·)} ∈ Rn × Q[−τ, 0) the invariant derivative (i-derivative) ∂yV[x, y(·)] with respect to y(·), if for any Y(·) ∈ E[x, y(·)] the corresponding function (5.2) has at zero a right-hand derivative dψ̂Y(0)/dξ that is invariant with respect to Y(·) ∈ E[x, y(·)], i.e. the value dψ̂Y(0)/dξ is the same for all Y(·) ∈ E[x, y(·)]. In this case we set

∂yV[p] = dψ̂Y(0)/dξ.
Remark 5.1. The existence of the invariant derivative depends on the local properties of function (5.2) in a right neighborhood of zero, so in Definition 5.1 we can replace the set E[x, y(·)] by EΔ[x, y(·)] for some Δ > 0.
Example 5.1. Consider the functional

V[y(·)] = ∫_{−τ}^{0} β[y(s)] ds,   (5.3)
where β : Rn → R is a continuous function. We emphasize that we calculate the invariant derivative at a point h = {x, y(·)} ∈ Rn × Q[−τ, 0) (containing x), even though functional (5.3) does not depend on x. Let Y(·) be an arbitrary function of E[x, y(·)]; then (5.2) has the form
ψ̂Y(ξ) = V[yξ(·)] = ∫_{−τ}^{0} β[Y(ξ + s)] ds = ∫_{−τ+ξ}^{ξ} β[Y(s)] ds.
Calculating the derivative dψ̂Y(0)/dξ and taking into account that Y(0) = x and Y(−τ) = y(−τ), we obtain

dψ̂Y(0)/dξ = d/dξ [ ∫_{−τ+ξ}^{ξ} β[Y(s)] ds ]|_{ξ=+0} = β[Y(0)] − β[Y(−τ)] = β[x] − β[y(−τ)].
Thus dψ̂Y(0)/dξ = β[x] − β[y(−τ)] is invariant with respect to Y(·) ∈ E[x, y(·)] and depends only on {x, y(·)}. Hence functional (5.3) has at every point h = {x, y(·)} ∈ Rn × Q[−τ, 0) the invariant derivative ∂yV[x, y(·)] = β[x] − β[y(−τ)].
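A small numerical experiment (in Python, with arbitrary test data) illustrates both the value and the invariance: we approximate dψ̂Y(0)/dξ by a one-sided difference for several continuations Y(·) of {x, y(·)} and compare with β[x] − β[y(−τ)]:

```python
import numpy as np
from scipy.integrate import quad

tau = 1.0
x = 2.0                      # finite-dimensional part of the point {x, y(.)}
y = np.sin                   # prehistory y(s) on [-tau, 0) (arbitrary test data)
beta = lambda z: z ** 2      # integrand of functional (5.3)

def psi(xi, slope):
    """psi_Y(xi) for the continuation Y(s) = x + slope*s, s >= 0 (Y has the
    jump y(0-) != x at zero, which is allowed in Q[-tau, 0))."""
    part_history, _ = quad(lambda s: beta(y(s)), xi - tau, 0.0)
    part_future, _ = quad(lambda s: beta(x + slope * s), 0.0, xi)
    return part_history + part_future

eps = 1e-6
analytic = beta(x) - beta(y(-tau))       # beta[x] - beta[y(-tau)]
for slope in (0.0, 3.0, -5.0):           # three different continuations Y(.)
    fd = (psi(eps, slope) - psi(0.0, slope)) / eps
    print(f"slope = {slope:+.1f}: finite difference = {fd:.6f}, analytic = {analytic:.6f}")
```

The finite differences agree for all three slopes, which is exactly the invariance with respect to the choice of Y(·).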
Let us emphasize once more: although functional (5.3) depends only on y(·), its invariant derivative ∂yV[x, y(·)] is defined on pairs {x, y(·)} ∈ H. This means that the "boundary values" of the "test functions" {x, y(·)} play a crucial role in calculating invariant derivatives of regular functionals. For this reason, for example, functional (5.3) does not have invariant derivatives on functions y(·) ∈ L2[−τ, 0), even though it is defined on L2[−τ, 0) (if the integral in the right-hand side of (5.3) is understood as a Lebesgue integral). The point is that functions y(·) ∈ L2[−τ, 0) are defined only up to sets of measure zero, so, generally speaking, the single value β[y(−τ)] is not defined either. However, if a function y(·) ∈ L2[−τ, 0) is continuous from the right at the point s = −τ, then for (5.3) we can calculate at the point {x, y(·)} ∈ Rn × L2[−τ, 0) the invariant derivative ∂yV = β[x] − β[y(−τ)].
Singular functionals (3.7), (3.8) also have invariant
derivatives. However, these derivatives are defined only
for sufficiently smooth functions.
Example 5.2. Suppose that in functional (3.7) the function P[z] is continuously differentiable and that a function y(·) ∈ Q[−τ, 0) has a right-hand derivative at the point s = −τ. Then (3.7) has at y(·) the invariant derivative

∂yV[y(·)] = (∂P[y(−τ)]/∂z) ẏ(−τ).

Indeed, to calculate the invariant derivative we construct the function (5.2):

ψ̂Y(ξ) = V[yξ(·)] = P[y(ξ − τ)],   ξ ∈ [0, Δ).
Obviously, ψ̂Y(ξ) has a right-hand derivative at ξ = 0 only if the function y(s), −τ ≤ s < 0, has a right-hand derivative at the point s = −τ, and in this case

∂yV[y(·)] = dψ̂Y(0)/dξ = (∂P[y(−τ)]/∂z) ẏ(−τ).
Remark 5.2. For calculating the invariant derivative of the singular functional (3.7) we did not use continuations Y(·) ∈ E[x, y(·)] of the function y(·).
In Definition 5.1 we introduced the notion of the invariant derivative with respect to y(·). Now for functional (5.1) we give a general definition of its derivatives with respect to x and y(·).
Let p = {x, y(·)} ∈ Rn × Q[−τ, 0) and Y (·) ∈ E[x, y(·)],
then we can construct the function
ψY (z, ξ) = V [x + z, yξ (·)] ,
(5.4)
z ∈ Rn , ξ ∈ [0, Δ], yξ (·) = {Y (ξ + s), −τ ≤ s < 0}.
Definition 5.2. Functional (5.1) has at the point p = {x, y(·)} ∈ Rn × Q[−τ, 0) the gradient ∂V[p]/∂x and the partial invariant derivative ∂yV[p], if for any Y(·) ∈ E[x, y(·)] the function (5.4) has at zero the gradient ∂ψY(0)/∂z and the right-hand derivative dψY(0)/dξ, invariant with respect to Y(·) ∈ E[x, y(·)]. In this case we set

∂V[p]/∂x = ∂ψY(0)/∂z,   ∂yV[p] = dψY(0)/dξ.
Consider some rules and formulas which allow one to calculate invariant derivatives of various functionals without using the definition. The basic rules of the differential calculus of finite-dimensional functions remain valid for invariant derivatives.
If functionals V [x, y(·)], W [x, y(·)] : Rn × Q[−τ, 0) →
R have at point h = {x, y(·)} ∈ H invariant derivatives
∂y V [x, y(·)] and ∂y W [x, y(·)] then the sum, difference and
the product of these functionals have invariant derivatives
at point h and
∂y( V[x, y(·)] + W[x, y(·)] ) = ∂yV[x, y(·)] + ∂yW[x, y(·)],

∂y( V[x, y(·)] − W[x, y(·)] ) = ∂yV[x, y(·)] − ∂yW[x, y(·)],

∂y( V[x, y(·)] · W[x, y(·)] ) = ∂yV[x, y(·)] · W[x, y(·)] + V[x, y(·)] · ∂yW[x, y(·)].
Moreover, if W[x, y(·)] ≠ 0 then

∂y( V[x, y(·)] / W[x, y(·)] ) = ( ∂yV[x, y(·)] · W[x, y(·)] − V[x, y(·)] · ∂yW[x, y(·)] ) / W²[x, y(·)].

5.1.2 Examples
Two examples of calculating invariant derivatives of functionals defined on Q[−τ, 0) were discussed in the previous subsection. In this subsection we calculate invariant derivatives of more complicated functionals.
Example 5.3. Consider the functional

V[y(·)] = ∫_{−τ}^{0} α( ∫_{ν}^{0} β[y(s)] ds ) dν,   (5.5)

where α : R → R is a continuously differentiable function and β : Rn → R is a continuous function. The integral in the right-hand side of (5.5) does not depend on x, so ∂V[p]/∂x = 0.
In order to calculate the invariant derivative with respect to y(·), let us fix an arbitrary Y(·) ∈ E[x, y(·)] and consider

ψY(ξ) = ∫_{−τ}^{0} α( ∫_{ν}^{0} β[Y(ξ + s)] ds ) dν = ∫_{−τ}^{0} α( ∫_{ν+ξ}^{ξ} β[Y(s)] ds ) dν.
One can calculate

dψY(0)/dξ = d/dξ [ ∫_{−τ}^{0} α( ∫_{ν+ξ}^{ξ} β[Y(s)] ds ) dν ]|_{ξ=+0} =

= β[Y(0)] ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) dν − ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) β[Y(ν)] dν =

= β[Y(0)] ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) dν + ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) d( ∫_{ν}^{0} β[Y(s)] ds ) =

= β[Y(0)] ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) dν + ∫_{−τ}^{0} d[ α( ∫_{ν}^{0} β[Y(s)] ds ) ] =

= β[Y(0)] ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[Y(s)] ds ) dν + α(0) − α( ∫_{−τ}^{0} β[Y(s)] ds ).
Taking into account that Y(0) = x and Y(s) = y(s), −τ ≤ s < 0, we obtain

∂yV[p] = β[x] ∫_{−τ}^{0} α̇( ∫_{ν}^{0} β[y(s)] ds ) dν + α(0) − α( ∫_{−τ}^{0} β[y(s)] ds ).
Example 5.4. Suppose that in the functional

V[x, y(·)] = ∫_{−τ}^{0} ω[x, s, y(s)] ds   (5.6)

ω : Rn × [−τ, 0] × Rn → R is a continuously differentiable function. The corresponding function (5.4) has the form
ψY(z, ξ) = ∫_{−τ}^{0} ω[x + z, s, Y(ξ + s)] ds,
where Y (·) ∈ E[x, y(·)].
Obviously,

∂V[p]/∂x = ∫_{−τ}^{0} ∂ω[x, s, y(s)]/∂x ds

(note that this partial derivative can also be obtained by directly differentiating (5.6) with respect to x).
One can represent the function ψY(z, ξ) as

ψY(z, ξ) = ∫_{−τ+ξ}^{ξ} ω[x + z, s − ξ, Y(s)] ds,
then

dψY(0, 0)/dξ = d/dξ [ ∫_{−τ+ξ}^{ξ} ω[x + z, s − ξ, Y(s)] ds ]|_{ξ=+0} =

= ω[x, 0, Y(0)] − ω[x, −τ, Y(−τ)] − ∫_{−τ}^{0} ∂ω[x, s, Y(s)]/∂s ds,

where ∂ω/∂s is the derivative with respect to the second variable. Taking into account that Y(0) = x and Y(s) = y(s), −τ ≤ s < 0, we obtain the following formula for the invariant derivative:

∂yV[x, y(·)] = ω[x, 0, x] − ω[x, −τ, y(−τ)] − ∫_{−τ}^{0} ∂ω[x, s, y(s)]/∂s ds.
Example 5.5. Consider the functional

V[x, y(·)] = ∫_{−τ}^{0} ∫_{ν}^{0} ω[x, s, y(s)] ds dν,   (5.7)

where ω : Rn × [−τ, 0] × Rn → R is a continuously differentiable function. For this functional the corresponding function (5.4) has the form

ψY(z, ξ) = ∫_{−τ}^{0} ∫_{ν}^{0} ω[x + z, s, Y(ξ + s)] ds dν,

Y(·) ∈ E[x, y(·)].
One can easily calculate

∂V[p]/∂x = ∫_{−τ}^{0} ∫_{ν}^{0} ∂ω[x, s, y(s)]/∂x ds dν.

One can represent the function ψY(z, ξ) as

ψY(z, ξ) = ∫_{−τ}^{0} ∫_{ν+ξ}^{ξ} ω[x + z, s − ξ, Y(s)] ds dν,

then

dψY(0, 0)/dξ = d/dξ [ ∫_{−τ}^{0} ∫_{ν+ξ}^{ξ} ω[x + z, s − ξ, Y(s)] ds dν ]|_{ξ=+0} =

= τ ω[x, 0, Y(0)] − ∫_{−τ}^{0} ω[x, s, Y(s)] ds − ∫_{−τ}^{0} ∫_{ν}^{0} ∂ω[x, s, Y(s)]/∂s ds dν,

where ∂ω/∂s is the derivative with respect to the second variable. Taking into account that Y(0) = x and Y(s) = y(s), −τ ≤ s < 0, we obtain the following formula for the invariant derivative:

∂yV[x, y(·)] = τ ω[x, 0, x] − ∫_{−τ}^{0} ω[x, s, y(s)] ds − ∫_{−τ}^{0} ∫_{ν}^{0} ∂ω[x, s, y(s)]/∂s ds dν.
Example 5.6. Consider the functional

V[x, y(·)] = ∫_{−τ}^{0} [ ( ∫_{ν}^{0} y(s) ds )′ Γ ( ∫_{ν}^{0} y(s) ds ) ] dν,   (5.8)

where Γ is an n × n symmetric constant matrix.
The corresponding function (5.4) has the form

ψY(ξ) = ∫_{−τ}^{0} [ ( ∫_{ν}^{0} Y(s + ξ) ds )′ Γ ( ∫_{ν}^{0} Y(s + ξ) ds ) ] dν,

Y(·) ∈ E[x, y(·)].
One can represent the function ψY(ξ) as

ψY(ξ) = ∫_{−τ}^{0} [ ( ∫_{ν+ξ}^{ξ} Y(s) ds )′ Γ ( ∫_{ν+ξ}^{ξ} Y(s) ds ) ] dν,
then

dψY(0)/dξ = d/dξ [ ∫_{−τ}^{0} ( ∫_{ν+ξ}^{ξ} Y(s) ds )′ Γ ( ∫_{ν+ξ}^{ξ} Y(s) ds ) dν ]|_{ξ=+0} =

= ∫_{−τ}^{0} [ ( Y(0) − Y(ν) )′ Γ ( ∫_{ν}^{0} Y(s) ds ) ] dν + ∫_{−τ}^{0} [ ( ∫_{ν}^{0} Y(s) ds )′ Γ ( Y(0) − Y(ν) ) ] dν,
hence

∂yV[x, y(·)] = ∫_{−τ}^{0} ( x − y(ν) )′ Γ ( ∫_{ν}^{0} y(s) ds ) dν + ∫_{−τ}^{0} ( ∫_{ν}^{0} y(s) ds )′ Γ ( x − y(ν) ) dν =

= 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν − 2 ∫_{−τ}^{0} y′(ν) Γ ( ∫_{ν}^{0} y(s) ds ) dν =

= 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν − 2 ∫_{−τ}^{0} ( ∫_{ν}^{0} y(s) ds )′ Γ y(ν) dν =

= 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν + 2 ∫_{−τ}^{0} ( ∫_{ν}^{0} y(s) ds )′ Γ d[ ∫_{ν}^{0} y(s) ds ] =

= 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν + ∫_{−τ}^{0} d[ ( ∫_{ν}^{0} y(s) ds )′ Γ ( ∫_{ν}^{0} y(s) ds ) ] =

= 2 x′ Γ ∫_{−τ}^{0} ∫_{ν}^{0} y(s) ds dν − ( ∫_{−τ}^{0} y(s) ds )′ Γ ( ∫_{−τ}^{0} y(s) ds ).
Example 5.7. Let us calculate the invariant derivative of the functional

V[x, y(·)] = ∫_{−τ}^{0} ∫_{−τ}^{0} γ[x; s, y(s); u, y(u)] ds du.

For a function Y(·) ∈ E[x, y(·)] we construct

ψY(z, ξ) = ∫_{−τ}^{0} ∫_{−τ}^{0} γ[x + z; s, Y(s + ξ); u, Y(u + ξ)] ds du,
then

dψY(0, 0)/dξ = d/dξ [ ∫_{−τ+ξ}^{ξ} ∫_{−τ+ξ}^{ξ} γ[x; s − ξ, Y(s); u − ξ, Y(u)] ds du ]|_{ξ=+0} =

= ∫_{−τ}^{0} γ[x; 0, Y(0); u, Y(u)] du − ∫_{−τ}^{0} γ[x; −τ, Y(−τ); u, Y(u)] du +

+ ∫_{−τ}^{0} γ[x; s, Y(s); 0, Y(0)] ds − ∫_{−τ}^{0} γ[x; s, Y(s); −τ, Y(−τ)] ds −

− ∫_{−τ}^{0} ∫_{−τ}^{0} ∂γ[x; s, Y(s); u, Y(u)]/∂s ds du − ∫_{−τ}^{0} ∫_{−τ}^{0} ∂γ[x; s, Y(s); u, Y(u)]/∂u ds du,
hence

∂yV[x, y(·)] = ∫_{−τ}^{0} γ[x; 0, x; u, y(u)] du − ∫_{−τ}^{0} γ[x; −τ, y(−τ); u, y(u)] du +

+ ∫_{−τ}^{0} γ[x; s, y(s); 0, x] ds − ∫_{−τ}^{0} γ[x; s, y(s); −τ, y(−τ)] ds −

− ∫_{−τ}^{0} ∫_{−τ}^{0} ∂γ[x; s, y(s); u, y(u)]/∂s ds du − ∫_{−τ}^{0} ∫_{−τ}^{0} ∂γ[x; s, y(s); u, y(u)]/∂u ds du.
Also one can easily check that

∂V[x, y(·)]/∂x = ∫_{−τ}^{0} ∫_{−τ}^{0} ∂γ[x; s, y(s); u, y(u)]/∂x ds du.
5.2 Derivation of generalized Riccati equations

In this section we give the derivation of GREs (3.10) – (3.16).
Let us denote by W[x, y(·)] the optimal value of the cost functional for problem (3.1) – (3.6) at a position {x, y(·)} ∈ H. Let us assume that the functional W[x, y(·)] is invariantly differentiable at this position; then we can construct the function

α(u) = (∂W[x, y(·)]/∂x)′ [ A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds + B u ] + ∂yW[x, y(·)] + Z[x, y(·)] + u′ N u.   (5.9)
The optimal control u∗(x, y(·)) should minimize the function α(u) and, moreover, α(u∗(x, y(·))) = 0. The function α(u) is quadratic with respect to u ∈ Rr, so the value u∗ minimizing α(u) can be found from the relation

∂α(u)/∂u = 0,   (5.10)

because

∂²α(u)/∂u² = 2N > 0.
From (5.10) it follows that

∂α(u)/∂u = 2 N u + B′ ∂W[x, y(·)]/∂x = 0,

hence

u∗(x, y(·)) = −(1/2) N^{−1} B′ ∂W[x, y(·)]/∂x.   (5.11)
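As a computational aside, once W is taken in the quadratic form assumed below (see (5.13) – (5.14)), the feedback (5.11) reduces to u∗ = −N^{−1} B′ ( P x + ∫_{−τ}^{0} D(s) y(s) ds ). The following Python sketch evaluates this expression; the matrices, the kernel D(s) and the prehistory are all hypothetical placeholders, not data from the text:

```python
import numpy as np

# Hypothetical data, only to illustrate evaluating the feedback (5.11)
tau = 1.0
N = np.array([[1.0]])                           # control weight, N > 0
B = np.array([[0.0], [1.0]])
P = np.array([[2.0, 0.3], [0.3, 1.0]])          # symmetric matrix of the quadratic form (5.13)
D = lambda s: np.exp(-(s + tau)) * np.diag([0.5, 0.5])   # placeholder kernel D(s)

x = np.array([1.0, -0.5])                       # current finite-dimensional state
y = lambda s: np.array([np.sin(s), np.cos(s)])  # current prehistory on [-tau, 0)

# integral term  int_{-tau}^{0} D(s) y(s) ds  by the composite trapezoidal rule
grid = np.linspace(-tau, 0.0, 2001)
vals = np.array([D(s) @ y(s) for s in grid])
integral = np.sum(0.5 * (vals[1:] + vals[:-1]), axis=0) * (grid[1] - grid[0])

dWdx = 2.0 * (P @ x + integral)                 # gradient (5.14) of the quadratic form
u_star = -0.5 * np.linalg.solve(N, B.T @ dWdx)  # feedback (5.11)
print(u_star)
```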
Substituting (5.11) into (5.9) we obtain

α(u∗(x, y(·))) = (∂W[x, y(·)]/∂x)′ [ A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds − (1/2) B N^{−1} B′ ∂W[x, y(·)]/∂x ] +

+ ∂yW[x, y(·)] + (1/4) (∂W[x, y(·)]/∂x)′ B N^{−1} B′ ∂W[x, y(·)]/∂x + Z[x, y(·)].   (5.12)
Let us suppose that the optimal value of the cost functional has the quadratic form

W[x, y(·)] = x′ P x + 2 x′ ∫_{−τ}^{0} D(s) y(s) ds + ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) R(s, ν) y(ν) ds dν +

+ ∫_{−τ}^{0} y′(s) F(s) y(s) ds + ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) Π(s) y(s) ds dν.   (5.13)
The gradient and the invariant derivative of this functional are

∂W[x, y(·)]/∂x = 2 P x + 2 ∫_{−τ}^{0} D(s) y(s) ds,   (5.14)
∂yW[x, y(·)] = 2 x′ D(0) x − 2 x′ D(−τ) y(−τ) − 2 x′ ∫_{−τ}^{0} (dD(s)/ds) y(s) ds +

+ x′ ∫_{−τ}^{0} R(0, ν) y(ν) dν − y′(−τ) ∫_{−τ}^{0} R(−τ, ν) y(ν) dν − ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) (∂R(s, ν)/∂s) y(ν) ds dν +

+ [ ∫_{−τ}^{0} y′(s) R(s, 0) ds ] x − [ ∫_{−τ}^{0} y′(s) R(s, −τ) ds ] y(−τ) − ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) (∂R(s, ν)/∂ν) y(ν) ds dν +

+ x′ F(0) x − y′(−τ) F(−τ) y(−τ) − ∫_{−τ}^{0} y′(s) (dF(s)/ds) y(s) ds +

+ τ x′ Π(0) x − ∫_{−τ}^{0} y′(s) Π(s) y(s) ds − ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) (dΠ(s)/ds) y(s) ds dν.   (5.15)
Substituting (5.14) and (5.15) into (5.12) we obtain

α(u∗(x, y(·))) = 2 [ x′ P + ∫_{−τ}^{0} y′(s) D′(s) ds ] [ A x + Aτ y(−τ) + ∫_{−τ}^{0} G(s) y(s) ds − B N^{−1} B′ ( P x + ∫_{−τ}^{0} D(s) y(s) ds ) ] +

+ 2 x′ D(0) x − 2 x′ D(−τ) y(−τ) − 2 x′ ∫_{−τ}^{0} (dD(s)/ds) y(s) ds +

+ x′ ∫_{−τ}^{0} R(0, ν) y(ν) dν − y′(−τ) ∫_{−τ}^{0} R(−τ, ν) y(ν) dν − ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) (∂R(s, ν)/∂s) y(ν) ds dν +

+ [ ∫_{−τ}^{0} y′(s) R(s, 0) ds ] x − [ ∫_{−τ}^{0} y′(s) R(s, −τ) ds ] y(−τ) − ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) (∂R(s, ν)/∂ν) y(ν) ds dν +

+ x′ F(0) x − y′(−τ) F(−τ) y(−τ) − ∫_{−τ}^{0} y′(s) (dF(s)/ds) y(s) ds +

+ τ x′ Π(0) x − ∫_{−τ}^{0} y′(s) Π(s) y(s) ds − ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) (dΠ(s)/ds) y(s) ds dν +

+ [ x′ P + ∫_{−τ}^{0} y′(s) D′(s) ds ] B N^{−1} B′ [ P x + ∫_{−τ}^{0} D(s) y(s) ds ] + Z[x, y(·)].

Collecting the coefficients of the quadratic forms in x, y(−τ) and y(·) (here we use the matrix-algebra facts that for scalar expressions z′ M y = y′ M′ z, that x′ L x = x′ [ (L + L′)/2 ] x for an arbitrary n × n matrix L, and that ∫∫ 2 y′(s) D′(s) G(ν) y(ν) ds dν = ∫∫ y′(s) [ D′(s) G(ν) + G′(s) D(ν) ] y(ν) ds dν), we arrive at

α(u∗(x, y(·))) = 2 x′ [ (1/2) P A + (1/2) A′ P − (1/2) P B N^{−1} B′ P + (1/2) D(0) + (1/2) D′(0) + (1/2) F(0) + (1/2) τ Π(0) ] x +

+ 2 x′ [ P Aτ − D(−τ) ] y(−τ) +

+ 2 x′ ∫_{−τ}^{0} [ P G(s) − P B N^{−1} B′ D(s) − dD(s)/ds + A′ D(s) + (1/2) R(0, s) + (1/2) R′(s, 0) ] y(s) ds +

+ [ ∫_{−τ}^{0} y′(s) ( 2 D′(s) Aτ − R(s, −τ) − R′(−τ, s) ) ds ] y(−τ) +

+ ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) [ D′(s) G(ν) + G′(s) D(ν) − D′(s) B N^{−1} B′ D(ν) − ∂R(s, ν)/∂s − ∂R(s, ν)/∂ν ] y(ν) ds dν −

− ∫_{−τ}^{0} y′(s) [ dF(s)/ds + Π(s) ] y(s) ds − ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) (dΠ(s)/ds) y(s) ds dν −

− y′(−τ) F(−τ) y(−τ) + Z[x, y(·)].
Taking into account that the functional Z[x, y(·)] has the form (3.7), we obtain

α(u∗(x, y(·))) = x′ [ P A + A′ P − P B N^{−1} B′ P + F(0) + D(0) + D′(0) + Φ0 + τ Π(0) ] x +

+ 2 x′ [ P Aτ − D(−τ) ] y(−τ) +

+ 2 x′ ∫_{−τ}^{0} [ P G(s) − P B N^{−1} B′ D(s) − dD(s)/ds + A′ D(s) + R(0, s) + Φ1(s) ] y(s) ds +

+ [ ∫_{−τ}^{0} y′(s) ( 2 D′(s) Aτ − R(s, −τ) − R′(−τ, s) ) ds ] y(−τ) +

+ ∫_{−τ}^{0} ∫_{−τ}^{0} y′(s) [ D′(s) G(ν) + G′(s) D(ν) − D′(s) B N^{−1} B′ D(ν) − ∂R(s, ν)/∂s − ∂R(s, ν)/∂ν + Φ2(s, ν) ] y(ν) ds dν +

+ ∫_{−τ}^{0} y′(s) [ Φ3(s) − dF(s)/ds − Π(s) ] y(s) ds +

+ ∫_{−τ}^{0} ∫_{ν}^{0} y′(s) [ Φ4(s) − dΠ(s)/ds ] y(s) ds dν +

+ y′(−τ) [ Φ5 − F(−τ) ] y(−τ).
Since {x, y(·)} is an arbitrary element of H, the quadratic functional α(u∗(x, y(·))) vanishes identically when its coefficients are equal to zero. Equating these coefficients to zero, we obtain the system of generalized Riccati equations (3.10) – (3.16).
5.3 Explicit solutions of GREs (proofs of theorems)

5.3.1 Proof of Theorem 3.2
Lemma 5.1. Let the n × n matrix P be a solution of the matrix equation

P A + A′ P + M = P K P,   (5.1)

where M is a symmetric n × n matrix. Then the n × n matrices

D(s) = e^{−[P K − A′](s+τ)} P Aτ,   (5.2)

R(s, ν) = Q(s) D(ν) for (s, ν) ∈ Ω1,   R(s, ν) = D′(s) Q′(ν) for (s, ν) ∈ Ω2,   (5.3)

where

Ω1 = { (s, ν) ∈ [−τ, 0] × [−τ, 0] : s − ν < 0 },
Ω2 = { (s, ν) ∈ [−τ, 0] × [−τ, 0] : s − ν > 0 },

and

Q(s) = Aτ′ e^{[P K − A′](s+τ)},   (5.4)
are solutions of the system

dD(s)/ds + [P K − A′] D(s) = 0,   (5.5)

∂R(s, ν)/∂s + ∂R(s, ν)/∂ν = 0,   (5.6)

with the boundary conditions

D(−τ) = P Aτ,   (5.7)

R(−τ, s) = Aτ′ D(s).   (5.8)

Proof.
1) Matrix D(s).
First let us calculate the matrix D(s). The solution of equation (5.5) on the interval [−τ, 0] has the form

D(s) = e^{−[P K − A′](s+τ)} C_D,

and the constant matrix C_D can be found from the boundary condition (5.7):

D(−τ) = C_D = P Aτ.

Thus

D(s) = e^{−[P K − A′](s+τ)} P Aτ.
2) Matrix R(s, ν).
Now let us check that the matrix (5.3) is a solution of equation (5.6).

Region Ω1. In this region

R(s, ν) = Q(s) D(ν).   (5.9)

Substituting (5.9) into (5.6) we have

(dQ(s)/ds) D(ν) + Q(s) (dD(ν)/dν) = 0.

Taking into account (5.5) we can replace dD(ν)/dν by −[P K − A′] D(ν); then we obtain

(dQ(s)/ds) D(ν) − Q(s) [P K − A′] D(ν) = 0,

or

[ dQ(s)/ds − Q(s) [P K − A′] ] D(ν) = 0.

Because D(ν) ≠ 0, Q(s) should be a solution of the following equation:

dQ(s)/ds − Q(s) [P K − A′] = 0.   (5.10)
The solution of this equation on the interval [−τ, 0] has the form

Q(s) = C_Q e^{[P K − A′](s+τ)},

and the constant matrix C_Q can be found from the boundary condition (5.8):

Q(−τ) = C_Q = Aτ′.

Thus in the region Ω1 the matrix R(s, ν) has the form (5.3) with Q(s) given by (5.4).
Region Ω2 . In this region
R(s, ν) = D (s) Q (ν) .
(5.11)
Substituting (5.11) into (5.6) we have
dD (s) dQ (ν)
Q (ν) + D (s)
= 0.
ds
dν
Taking into account (5.5) we can replace
−D (s) P K − A , then we obtain
dD (s)
by
ds
dQ (ν)
= 0.
−D (s) P K − A Q (ν) + D (s)
dν
or
D (s)
dQ (ν) − P K − A Q (ν)
dν
= 0,
Because D (s) = 0 hence Q(s) is the solution of the following equation
dQ (ν) − P K − A Q (ν) = 0 .
dν
(5.12)
One can see equation (5.12) is the same as (5.10) if we it.
So in the region Ω2 matrix R(s, ν) has the form (5.4).
Now let us check the property R(s, ν) = R′(ν, s). One can see that

R′(s, ν) = D′(ν) Q′(s) for (s, ν) ∈ Ω1,   R′(s, ν) = Q(ν) D(s) for (s, ν) ∈ Ω2,

hence, after interchanging s and ν (note that if we interchange s and ν, then Ω1 and Ω2 are also interchanged), we obtain

R′(ν, s) = Q(s) D(ν) for (s, ν) ∈ Ω1,   R′(ν, s) = D′(s) Q′(ν) for (s, ν) ∈ Ω2,

thus the condition R(s, ν) = R′(ν, s) is satisfied.
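A short numerical sanity check of the explicit formula for D(s) is easy to write with scipy.linalg.expm; the matrices below are arbitrary placeholders (in particular P is not a solution of (5.1) here, which does not matter for verifying (5.5) and (5.7)):

```python
import numpy as np
from scipy.linalg import expm

n, tau = 2, 1.0
A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # hypothetical system matrices
Atau = 0.5 * np.eye(n)
K = np.eye(n)                                  # K = B N^{-1} B' in the LQR setting
P = np.array([[2.0, 0.3], [0.3, 1.0]])         # some symmetric matrix

S = P @ K - A.T                                # the matrix [P K - A'] in (5.2), (5.5)
D = lambda s: expm(-S * (s + tau)) @ P @ Atau  # explicit formula (5.2)

# boundary condition (5.7): D(-tau) = P Atau
print(np.allclose(D(-tau), P @ Atau))

# equation (5.5): dD(s)/ds + [P K - A'] D(s) = 0, checked by a central difference
h, s = 1e-6, -0.3
dD = (D(s + h) - D(s - h)) / (2 * h)
print(np.allclose(dD + S @ D(s), np.zeros((n, n)), atol=1e-5))
```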
Proof of Theorem 3.2. Let the matrix P be the solution of the matrix equation (3.22) and let the matrices D(s) and R(s, ν) have the form (3.23) – (3.25). If we choose the weight matrices Φ0, ..., Φ5 as in (3.26), then, substituting these matrices and the matrices P, D(s), R(s, ν) into GREs (3.10) – (3.17), we obtain, using Lemma 5.1, an identity. Hence the matrices P, D(s), R(s, ν) are solutions of GREs (3.10) – (3.17) corresponding to the weight matrices (3.26).
5.3.2 Proof of Theorem 3.3

Lemma 5.2. Let P be a solution of the exponential matrix equation

P A + A′ P + e^{−[P K − A′] τ} P Aτ + Aτ′ P e^{−[P K − A′]′ τ} + M = P K P,   (5.13)

where M is a symmetric n × n matrix. Then the matrix P and the matrices D(s), R(s, ν), defined by (5.2) – (5.4), are solutions of the following system:

P A + A′ P + D(0) + D′(0) + M = P K P,   (5.14)

dD(s)/ds + [P K − A′] D(s) = 0,   (5.15)

∂R(s, ν)/∂s + ∂R(s, ν)/∂ν = 0,   (5.16)
with boundary conditions (5.7) – (5.8).
Proof. The difference between this system and system (5.1) – (5.6) consists only in the presence of the terms D(0) and D′(0) in (5.14); hence the matrices D(s), Q(s) and R(s, ν) have the same form as in Lemma 5.1 (see (5.2) – (5.4)). Substituting

D(0) = e^{−[P K − A′] τ} P Aτ

into (5.14) we obtain the exponential matrix equation (5.13). Thus, solving equation (5.13) we find the matrix P and then, substituting this matrix into (5.2) – (5.4), we obtain D(s), Q(s) and R(s, ν).
Using direct substitution one can check that if the matrix P is a solution of EME (5.13), then the triple P, D(s), R(s, ν) satisfies system (5.14) – (5.16) with the boundary conditions (5.7), (5.8).

Proof of Theorem 3.3. Let the matrix P be the solution of the matrix equation (3.28) and let the matrices D(s) and R(s, ν) have the form (3.23) – (3.25). If we choose the weight matrices Φ0, ..., Φ5 as in (3.29), then, substituting these matrices and the matrices P, D(s), R(s, ν) into GREs (3.10) – (3.17), we obtain, using Lemma 5.2, an identity. Hence the matrices P, D(s), R(s, ν) are solutions of GREs (3.10) – (3.17) corresponding to the weight matrices (3.29).
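When solving the exponential matrix equation numerically (e.g. by the stationary-solution or gradient methods mentioned earlier), it is convenient to have a residual function. The following Python sketch assumes symmetric P and the form of (5.13) as reconstructed above; all matrices are hypothetical test data, not taken from the text:

```python
import numpy as np
from scipy.linalg import expm

def eme_residual(P, A, Atau, K, M, tau):
    """Residual of the exponential matrix equation (5.13):
    P A + A'P + E P Atau + Atau' P E' + M - P K P, where E = exp(-[P K - A'] tau).
    For an exact solution P the residual is the zero matrix."""
    E = expm(-(P @ K - A.T) * tau)
    return P @ A + A.T @ P + E @ P @ Atau + Atau.T @ P @ E.T + M - P @ K @ P

# hypothetical data: the norm of the residual measures how far P is from solving (5.13)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Atau = 0.5 * np.eye(2)
K = np.eye(2)            # K = B N^{-1} B' in the LQR setting
M = np.eye(2)
P = np.eye(2)            # an initial guess, not a solution
print(np.linalg.norm(eme_residual(P, A, Atau, K, M, tau=1.0)))
```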
5.3.3 Proof of Theorem 3.4

The theorem can be proved by direct substitution of the corresponding matrices into the GREs.
5.4 Proof of Theorem 1.1 (solution representation)
Proof. Note that to prove formula (1.24) it is sufficient to show that the derivative of (1.24) is equal to the right-hand side of equation (1.19). Differentiating (1.24) and replacing ∂F[t, ξ]/∂t by the right-hand side of (1.21), we obtain
ẋ(t) = (∂F[t, t0]/∂t) x0 + ∫_{−τ}^{0} (∂F[t, t0 + τ + s]/∂t) Aτ(t0 + τ + s) y0(s) ds +

+ ∫_{−τ}^{0} [ ∫_{−τ}^{s} (∂F[t, t0 + s − ν]/∂t) G(t0 + s − ν, ν) dν ] y0(s) ds + ∫_{t0}^{t} (∂F[t, ρ]/∂t) u(ρ) dρ + u(t).

Replacing every ∂F[t, ξ]/∂t by A(t) F[t, ξ] + Aτ(t) F[t − τ, ξ] + ∫_{−τ}^{0} G(t, η) F[t + η, ξ] dη according to (1.21), and collecting the terms that contain A(t), Aτ(t) and G(t, η), we obtain

ẋ(t) = A(t) { F[t, t0] x0 + ∫_{−τ}^{0} F[t, t0 + τ + s] Aτ(t0 + τ + s) y0(s) ds + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t, t0 + s − ν] G(t0 + s − ν, ν) dν ] y0(s) ds + ∫_{t0}^{t} F[t, ρ] u(ρ) dρ } +

+ Aτ(t) { F[t − τ, t0] x0 + ∫_{−τ}^{0} F[t − τ, t0 + τ + s] Aτ(t0 + τ + s) y0(s) ds + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t − τ, t0 + s − ν] G(t0 + s − ν, ν) dν ] y0(s) ds + ∫_{t0}^{t} F[t − τ, ρ] u(ρ) dρ } +

+ ∫_{−τ}^{0} G(t, η) { F[t + η, t0] x0 + ∫_{−τ}^{0} F[t + η, t0 + τ + s] Aτ(t0 + τ + s) y0(s) ds + ∫_{−τ}^{0} [ ∫_{−τ}^{s} F[t + η, t0 + s − ν] G(t0 + s − ν, ν) dν ] y0(s) ds + ∫_{t0}^{t} F[t + η, ρ] u(ρ) dρ } dη + u(t).

By the representation (1.24), the expressions in the braces are equal to x(t), x(t − τ) and x(t + η), respectively; hence

ẋ(t) = A(t) x(t) + Aτ(t) x(t − τ) + ∫_{−τ}^{0} G(t, η) x(t + η) dη + u(t).
The theorem is proved.
Bibliography
[1] Andreeva, E.A., Kolmanovskii, V.B. and Shaikhet,
L.E. (1992) Control of Systems with Delay. Nauka,
Moscow. (In Russian)
[2] Andreeva, I.Yu. and Sesekin, A.N. The degenerate
linear-quadratic problem for systems with time delay.
Automation and remote control. 1997, no. 7, pp. 43–
54. (In Russian)
[3] Azbelev, N.V. and Rakhmatullina, L.F. (1996) Theory of Linear Abstract Functional Differential Equations and Applications. World Federation Publisher
Company, Atlanta.
[4] Azuma, T., Kondo, T. and Uchida, K. Memory state
feedback control synthesis for linear systems with
time delay via a finite number of linear matrix inequalities. Proc. IFAC Workshop Linear Time Delay
Systems. Grenoble, France, July 1998, pp. 75–80.
[5] Babskii, V.G. and Myshkis, A.D. (1983) Mathematical Models in Biology connected with regard of Delays, Appendix to: J.D. Murray, Lectures on Nonlinear Differential Equations. Models in Biology, Mir,
Moscow, pp. 383–394. (In Russian)
[6] Bahvalov, N.S. (1973) Numerical methods. Nauka,
Moscow. (In Russian)
[7] Baker, C.T.H., Makroglou, A. and Short, E. (1979)
Stability regions for Volterra integro-differential
143
144
Systems with Delays
equations, SIAM J. Numer. Anal. Vol. 16, pp. 890–
910.
[8] Baker, C.T.H., Paul, C.A.H. and Wille, D.R. (1995)
Issues in the numerical solution of evolutionary delay
differential equations, Advances in Comput. Math.
Vol. 3, pp. 171–196.
[9] Baker, C.T.H. (1996) Numerical analysis of Volterra
functional and integral equations — state of the art,
MCCM Tech. rep. No. 292, University of Manchester.
[10] Baker, C.T.H., Bocharov, G.A., Filiz, A., Ford, N.J.,
Paul, C.A.H., Rihan, F.A., Tang, A., Thomas, R.M.,
Tian, H. and Wille, D.R. (1998) Numerical modelling
by retarded functional differential equations, Technical report No. 335, University of Manchester.
[11] Baker, C.T.H., Paul, C.A.H. and Wille, D.R. (1995)
A bibliography on the numerical solution of delay differential equations, Technical report No. 269, University of Manchester.
[12] Baker, C.T.H., Butcher, J.C. and Paul, C.A.H.
(1992) Experience of STRIDE applied to delay differential equations, MCCM Tech. rep. No. 208, University of Manchester.
[13] Banks, H.T. and Kappel, F. (1979) Spline approximation for functional differential equations, J. Diff.
Equat. Vol. 34, pp. 496–522.
[14] Banks, H.T., Rosen, I.G. and Ito, K. A spline based
technique for computing Riccati operators and feedback controls in regulator problems for delay equations. SIAM J. Sci. Stat. Comput. 1984, 5.
[15] Banks, H.T. and Manitius, A. (1974) Application
of Abstract Variational Theory to Hereditary Systems — a survey, IEEE Trans. Automat. Control,
AC-19, no. 5, pp. 524–533.
Bibliography 145
[16] Barbashin, E.A. and Krasovskii, N.N. (1952) On
Global Stability of Motion, Doklady AN SSSR,
Vol. 86, pp. 453–456. (In Russian)
[17] Barbu, V. and Da Prato, G. (1983) Hamilton-Jacobi
Equations in Hilbert Spaces. Pitman, Boston.
[18] Barnea, D.I. (1969) A Method and New Results
for Stability and Instability of Autonomous Functional Differential Equations, SIAM J. Appl. Math.,
Vol. 17, pp. 681–697.
[19] Bellen, A. (1985) Constrained mesh methods for functional differential equations, International Series of Numerical Mathematics, Verlag, Basel, pp. 52–70.
[20] Bellen, A. (1997) Contractivity of continuous RungeKutta methods for delay differential equations, Appl.
Num. Math. Vol. 24, pp. 219–232.
[21] Bellen, A., Guglielmi, N. and Torelli, L. (1997) Asymptotic stability properties of Theta-methods for
the pantograph equation, Appl. Num. Math. Vol. 24,
pp. 279–293.
[22] Bellman, R. and Cooke, K.L. (1963) DifferentialDifference Equation. Acad. Press, New York – London.
[23] Bocharov, G.A., Merchuk, G.I. and Romanyukha,
A.A. (1996) Numerical solution by LMMs of stiff
delay differential systems modelling as immune response, Numerishe Mathematik, Vol. 73, pp. 131–
148.
[24] Brunner, H. (1984) Implicit Runge-Kutta methods of
optimal order for Volterra integro-differential equations, Math. Comp., Vol. 42, pp. 95–109.
[25] Brykalov, S.A. (1989) Nonlinear Boundary Problems
and Steady-states Existence for heating Control Sys-
146
Systems with Delays
tems, Dokl. Akad. Nauk. SSSR Vol. 307, no 1,
pp. 11–14. (In Russian)
[26] Burton, T.A. (1978) Uniform Asymptotic Stability in
Functional Differential Equations, Proc. Amer. Math.
Soc., Vol. 68, no. 2, pp. 195–200.
[27] Burton, T.A. (1985) Stability and Periodic Solutions
of Ordinary and Functional Differential Equations.
Acad. Press, New York.
[28] Burton, T.A. and Hatvani, L. (1989) Stability Theorems for Nonautonomous Functional Differential
Equations by Liapunov Functionals, Tohoku Math.
J., Vol. 41, no. 1, pp. 65–104.
[29] Burton, T.A., Huang, G. and Mahfoud, W.E. (1985)
Liapunov Functionals of Convolution Type, J. Math.
Anal. Appl., Vol. 106, no. 1, pp. 249–272.
[30] Burton, T.A. and Zhang, S. (1986) Unified Boundedness, Periodicity and Stability in Ordinary and
Functional Differential Equations, Annal. Mat. Pur.
Appl., CXLV, pp. 124–158.
[31] Chetaev, N.G. (1990) Stability of Motion. Nauka,
Moscow. (In Russian)
[32] Chukwu, E.N. (1992) Stability and time-optimal control of hereditary systems. Academic Press.
[33] Collatz, L., Meinardus, G. and Wetterling, W. eds
(1983) Differential-difference equations. ISNM 62,
Birkhauser, Basel.
[34] Corduneanu, C. (1973) Integral Equations and Stability of Feedback Systems. Acad. Press, New York –
London.
[35] Corwin, S.P., Sarafyan, D. and Thompson, S. (1997)
DKLAG6: A code based on continuously imbedded
sixth-order Runge-Kutta methods for the solution
Bibliography 147
of state-dependent functional differential equations,
Appl. Num. Math. Vol. 24, pp. 319–330.
[36] Crocco, L. Aspects of combustion stability in liquid
propellant rocket motors, Part I: Fundamentals —
Low frequency instability with monopropellants, J.
Amer. Rocket Soc., Vol. 21, No. 6, pp. 163–178, 1951.
[37] Cruz, M. and Hale, J. (1970) Stability of Functional Differential Equations of Neutral Type, J. Diff.
Equat., Vol. 7, pp. 334–355.
[38] Cryer, C. (1972) Numerical methods for functional
differential equations / In Delay and functional differential equations and their application, Schmitt K.
ed. Acad. Press, New York, pp. 17–101.
[39] Cryer, C. and Tavernini, L. (1972) The numerical
solution of Volterra functional differential equations
by Euler’s method, SIAM J. Numer. Anal. Vol. 9,
pp. 105–129.
[40] Cushing, J.M. (1977) Integro-differential equations
and delay models in population dynamics. Lect. Notes
in Biomath. 20, Springer-Verlag, Berlin.
[41] Dahlquist, G. (1956) Numerical integration of ordinary differential equations, Math. Scand., Vol. 4,
pp. 33–50.
[42] Daletskii, Yu.L. and Krein, M.G. (1970) Stability of
Solutions of Differential Equations in Banach Space.
Nauka, Moscow. (In Russian)
[43] Datko, R. (1985) Remarks Concerning the Asymptotic Stability and Stabilization of Linear Delay Differential Equations, J. Math. Anal. Appl., Vol. 111,
no. 2, pp. 571–581.
[44] Datko, R. The LQR problem for functional differential equations. Proc. American Control Conference.
San Francisco, California, June 1993, pp. 509–511.
148
Systems with Delays
[45] Delfour, M.C. (1986) The Linear-quadratic Optimal
Control Problem with Delays in State and Control
Variables: a State Space Approach, SIAM J. Contr.
Optimiz., Vol. 24, no. 5, pp. 835–883.
[46] Delfour, M.C., McCalla, C. and Mitter, S.K. (1975)
Stability and the Infinite Time Quadratic Cost Problem for Linear Hereditary Differential Systems, SIAM
J. Control, Vol. 13, no. 1, pp. 48–88.
[47] Delfour, M.C. and Manitius, A. (1980) The Structure
Operator F and its Role in the Theory of Retarded
Systems, J. Math. Anal.Appl., Vol. 73, pp. 466–490.
[48] Dolgii, Yu.F. and Kim, A.V. (1991) Lyapunov Functionals Method for After-effect Systems, Diff. Uravn.,
Vol. 27, no. 8, pp. 1313–1318. (In Russian)
[49] Driver, R.D. (1962) Existence and Stability of Solutions of Delay-differential Systems, Arch. Ration.
Mech. Anal., Vol. 10, pp. 401–426.
[50] Driver, R.D. (1977) Ordinary and Delay Differential
Equations, Springer-Verlag, New York.
[51] Dugard, L. and Verriest, E.I. (Eds). (1998) Stability
and control of time-delay systems. Springer–Verlag,
New York – Heidelberg – Berlin.
[52] Eller, D.H., Aggarwal, J.K. and Banks, H.T. Optimal
control of linear time-delay systems. IEEE Trans. Automat. Control. 1969, 14, 678–687.
[53] Elsgol’ts, L.E. (1954) Stability of Solutions of
Differential-difference Equations, Uspekhi Mat. Nauk,
Vol. 9, pp. 95–112. (In Russian)
[54] Elsgol’ts, L.E. and Norkin, S.B. (1971) Introduction
to the Theory of Differential Equations with Deviating Arguments. Nauka, Moscow. (In Russian)
[55] Enright, W.N. and Hayashi, H. (1997) Convergence
analysis of the solution of retarded and neutral delay
Bibliography 149
differential equations by continuous numerical methods, SIAM J. Num. Anal. Vol. 35, pp. 572–585.
[56] Enright, W.N. and Hayashi, H. (1997) A delay differential equation solver based on a continuous RungeKutta method with defect control, Num. Algorithms.
Vol. 16, pp. 349–364.
[57] Feldstein, A., Iserles, A. and Levin, D. (1995) Embedding of delay equations into an infinite-dimensional
ode system, J. Diff. Equat. Vol. 117, pp. 127–150.
[58] Fiagbedzi, Y.A. and Pearson, A.E. (1986) Feedback
stabilization of linear autonomous time lag system,
IEEE Trans. Automat. Control, Vol. 31, pp. 847–855.
[59] Fiagbedzi, Y.A. and Pearson, A.E. Output feedback
stabilization of delay systems via generalization of the
transformation method. Int. J. Control. 1990, 51, no.
4, 801–822.
[60] Fleming, W. and Rishel, R. (1975) Deterministic and
Stochastic Optimal Control. Springer–Verlag, New
York.
[61] Furumochi, T. (1975) On the Convergence Theorem
for Integral Stability in Functional Differential Equations, Tohoku Math. J, . Vol. 27, pp. 461–477.
[62] Gabasov, R.F. and Kirillova, F.M. (1974) Maximum Principle in Optimal Control Theory, Nauka i
Tekhn., Minsk. (In Russian)
[63] Gabasov, R. and Kirillova, F. (1976) The Qualitative
Theory of Optimal Processes. Marcel Dekker, New
York.
[64] Gaishun, I.V. (1972) Asymptotic Stability of a System with Delays, Diff. Uravn., Vol. 8, no. 5,
pp. 906–908. (In Russian)
[65] Gel’fand, I.M. and Shilov, G.E. (1958) Generalized
Functions , Fizmatgiz, Moscow. (In Russian)
150
Systems with Delays
[66] Gibson, J.S. Linear-quadratic optimal control of
hereditary differential systems: infinite dimensional
Riccati equations and numerical approximations.
SIAM J. Control and Optimization. 1983, 21, 95–
135.
[67] Gopalsamy, K. (1992) Stability and Oscillations in
Delay Differential Equations of Population Dynamics. Kluwer Academic Publishers, Dordrecht.
[68] Goreckii, H., Fuksa, S., Grabovskii, P. and Korytowskii, A. (1989) Analysis and synthesis of time delay systems. John Wiley & Sons (PWN), Poland.
[69] Gruber, M. (1969) Path Integrals and Lyapunov
Functionals, IEEE Trans. Automat. Control, AC-14,
no. 5, pp. 465–475.
[70] Haddock, J., Krisztin, T. and Terjeki, J. (1985) Invariance Principles for Autonomous Functional Differential Equations. J. Integral Equations. Vol. 10,
pp. 123–136.
[71] Haddock, J. and Terjeki, J. (1983) LiapunovRazumikhin Functions and an Invariance Principle
for Functional Differential Equations, J. Diff. Equat.,
Vol. 48, no. 1, pp. 95–122.
[72] Hairer, E., Norsett, S. and Wanner, G. (1987) Solving
Ordinary Differential Equations. Nonstiff Problems.
Springer, Berlin.
[73] Halanay, A. (1966) Differential Equations: Stability,
Oscillations, Time-lags. Acad. Press, New York.
[74] Hale, J. and Cruz, M. (1970) Existence, Uniqueness
and Continuous Dependence for Hereditary Systems,
Ann. Mat. Pure Appl., Vol. 85, pp. 63–82.
[75] Hale, J. and Kato, J. (1978) Phase Space for Retarded Equations with Infinite Delay, Funkcial. Ekvac., Vol. 21, pp. 11–41.
Bibliography 151
[76] Hale, J. and Verduyn Lunel, S. (1993) Introduction to
Functional Differential Equations. Springer–Verlag,
New York – Heidelberg – Berlin.
[77] Hall, G. and Watt, Y.M. (eds.) (1976) Modern numerical methods for ordinary differential equations,
Clarendon Press, Oxford.
[78] Hatvani, L. (1988) On the Asymptotic Stability of the
Solutions of Functional Differential Equations, Colloq. Math. Soc. J. Bolyai. Qualitative Theory of Differential Equations, Szeged (Hungary), pp. 227–238.
[79] Hino, Y., Murakami, S. and Naito, T. (1991) Functional Differential Equations with Infinite Delay.
Springer, Berlin.
[80] Infante, E.F. and Castelan, W.B. (1978) A Lyapunov
Functional for a Matrix Difference-differential Equation, J. Diff. Equat., Vol. 29, no. 3, pp. 439–451.
[81] Iserles, A. and Norsett, S.P. (1990) On the theory
of parallel Runge-Kutta methods, IMA J. Numer.
Anal., Vol. 10, pp. 463–488.
[82] Iserles, A. (1994) Numerical analysis of delay differential equations with variable delay, Ann. Numer.
Math., Vol. 1, pp. 133–152.
[83] Jackiewicz, Z. and Lo, E. (1993) The algorithm
SNDDELM for the numerical solution of systems
of neutral delay differential equations. Appendix in:
Y.Kuang, Delay Differential Equations with Applications in Population Dynamics, Academic Press,
Boston.
[84] Kamenskii, G.A. and Skubachevskii, A.L. (1992) Linear Boundary Value Problems for Differential Difference Equations. MAI, Moscow. (In Russian)
[85] Kantorovich, L.V. and Akilov, G.P. (1977) Functional analysis, Nauka, Moscow.
152
Systems with Delays
[86] Kato, J. (1973) On Lyapunov-Razumikhin type Theorems for Functional Differential Equations, Funkcial. Ekvac., Vol. 16, pp. 225–239.
[87] Kato, J. (1980) Liapunov’s Second Method in Functional Differential Equations, Tohoku Math. J.,
Vol. 32, no. 4, pp. 487–497.
[88] Kemper, G.A. (1972) Linear multistep methods for a
class of functional differential equations, Num. Math.
Vol. 19, pp. 361–372.
[89] Kim, A.V. Direct Lyapunov method for systems with
delays. Ural State University Press, Ekaterinburg,
Russia, 1992. (In Russian)
[90] Kim, A.V. (1996) i–Smooth Analysis and Functional
Differential Equations. Russian Acad. Sci. Press
(Ural Branch), Ekaterinburg. (In Russian)
[91] Kim, A.V. (1999) Functional differential equations.
Application of i–smooth calculus. Kluwer Academic
Publishers, The Netherlands.
[92] Kim, A.V. (1994) On the Dynamic Programming
Method for Systems with Delays, Systems Analysis
– Modelling – Simulation, Vol. 15, pp. 1–12.
[93] Kim, A.V. (1995) Dynamic Programming Method
for Systems with Control Delays, Systems Analysis
– Modelling – Simulation, Vol. 18–19, pp. 337–340.
[94] Kim, A.V. (1996) Systems with Delays: New Trends
and Paradigms, Proceedings of the Symposium on
Modelling, Analysis and Simulation. Computational
Engineering in Systems Application (IMACS Multiconference). Symposium on Modelling, Analysis and
Simulation. Lille, France, July 9–12. Vol. 1, pp. 225–
228.
[95] Kim, A.V. and Pimenov, V.G. (1997) Numerical
Methods for Time-delay Systems on the Basis of
Bibliography 153
i-Smooth Analysis, Proceedings of the 15th World
Congress on Scientific Computation, Modelling and
Applied Mathematics. Berlin, August 1997. V. 1:
Computational Mathematics, pp. 193–196.
[96] Kim, A.V. and Pimenov, V.G. (1998) On application of i–smooth analysis to elaboration of numerical
methods for functional differential equations, Transactions of the Institiute of Mathematics and Mechanics Ural Branch RAS, Vol. 5, pp. 104–126. (In
Russian)
[97] Kim, A.V. and Pimenov, V.G. (1998) Multistep numerical methods for functional differential equations,
Mathematics and Computers in Simulation, Vol. 45,
pp. 377–384.
[98] Kim, A.V., Han, S.H., Kwon, W.H. and Pimenov,
V.G. Explicit numerical methods and LQR control algorithms for time-delay systems. Proc. International
Conference on Electrical Engineering. Kyungju, Korea, July 21–25, 1998.
[99] Kim, A.V., Kwon, W.H., Pimenov, V.G., Han, S.H.,
Lozhnikov, A.B. and Onegova, O.V. Time-Delay System Toolbox (for use with MATLAB). Beta Version.
Seoul National University, Seoul, Korea. October,
1999.
[100] Kim, A.V., Kwon, W.H. and Han, S.H. (1999) Explicit solutions of some classes of LQR problems for
systems with delays. Technical report N SNU-EETR-1999-21. School of Electrical Engineering, Seoul
National University, Korea.
[101] Kim, A.V. and Lozhnikov, A.B. (1999) Explicit solutions of finite-time linear quadratic control problems
for systems with delays. Proceedings of 12th CISL
Winter Workshop, February 10–11, 1999. Seoul National University, Korea.
154
Systems with Delays
[102] Kim, A.V. and Pimenov, V.G. Numerical methods for
delay differential equations. Application of i-smooth
calculus. (Lecture Notes in Mathematics, Vol. 44).
Research Institute of Mathematics — Global Analysis Research Center. Seoul National University, Seoul,
Korea, 1999.
[103] Kolmanovskii, V.B. and Koroleva, N.I. (1989) Optimal Control of Some Bilinear Hereditary Systems, Prikl. Mat. Mekh., Vol. 53, pp. 238–243. (In
Russian)
[104] Kolmanovskii, V.B. and Maizenberg, T.L. (1973) Optimal Control of Stochastic Systems with Delays, Autom. Remote Control, no. 1, pp. 47–62. (In Russian)
[105] Kolmanovskii, V.B. and Maizenberg, T.L. Optimal
estimation of system states and problems of control
of systems with delay. Prikl. Mat. Mekh. 1977, 41,
pp. 446–456.
[106] Kolmanovskii, V.B. and Matasov, A.I. Efficient control algorithms for hereditary dynamic systems. 13th
Triennial World Congress. San Francisko, USA.
1996, pp. 403–408.
[107] Kolmanovskii, V.B. and Myshkis, A.D. (1992) Applied Theory of Functional Differential Equations.
Kluwer Academic Publishers, Dordrecht.
[108] Kolmanovskii, V.B. and Nosov, V.R. (1986) Stability of Functional Differential Equations. Academic
Press, New York.
[109] Kolmanovskii, V.B. and Nosov, V.R. (1984) Systems
with Delays of Neutral Type, Autom. Remote Control, no. 1. (In Russian)
[110] Kramer, J.D.R. (1960) On control of Linear Systems
with Time Lags Inform. Control, Vol. 3, no. 4.
Bibliography 155
[111] Krasovskii, N.N. (1959) Some Problems of Stability
of Motion , Gostekhizdat, Moscow. (English transl.:
Stability of Motion, Stanford Univ. Press, 1963.)
[112] Krasovskii, N.N. (1956) On Application of the Second Lyapunov Method to Equations with Time Lags,
Prikl. Mat. Mekh., Vol. 20, no. 3, pp. 315–327. (In
Russian)
[113] Krasovskii, N.N. (1956) On the Asymptotic Stability
of Systems with Delays, Prikl. Mat. Mekh., Vol. 20,
pp. 513–518. (In Russian)
[114] Krasovskii, N.N. (1962) On Analytical Constructing
of an Optimal Regulator for Systems with Time Lag,
Prikl. Mat. Mekh., Vol. 26, pp. 39–51. (In Russian)
[115] Krasovskii, N.N. (1964) Optimal Processes in Systems with Time Lag. Proc. 2nd IFAC Congress,
Basel, 1963. Butterworths, London.
[116] Krasovskii, N.N. and Osipov, Yu.S. (1963) On Stabilization of Control Object with Delays, Izv. AN
SSSR: Tekhn. kibern., no. 6. (In Russian)
[117] Krein, S.G. (1967) Linear Differential Equations in
Banach Space, Fizmatgiz, Moscow. (In Russian)
[118] Krisztin, T. (1990) Stability for Functional Differential Equations and Some Variational Problems, Tohoku Math. J., Vol. 42, no. 3, pp. 407–417.
[119] Kryazhimskii, A.V. (1973) Differential Difference Deviating Game, Izv. AN SSSR: Tekhn. kibern., no. 4,
pp. 71–79. (In Russian)
[120] Kubo, T. and Shimemura, E. Exponential stabilization of systems with time-delay by optimal memoryless feedback. Mathematics and Computers in Simulation. 1998, 45, 319–328.
[121] Kushner, H.J. and Barnea, D.I. (1970) On the Control of a Linear Functional-differential Equation with
156
Systems with Delays
Quadratic Cost, SIAM J. Control, Vol. 8, no. 2,
pp. 257–275.
[122] Kwon, O.B. and Pimenov, V.G. (1998) Implicit
Runge-Kutta-like methods for functional differential
equations, Transactions of the Ural State University,
pp. 68–78. (In Russian)
[123] Kwon, W.H. and Pearson, A.E. (1980) Feedback Stabilization of Linear Systems with Delayed Control,
IEEE Trans. Automat. Control, Vol. 25, pp. 266–
269.
[124] Kwon, W.H., Kim, A.V., Lozhnikov, A.B. and Han,
S.H. LQR problems for systems with delays: explicit solution, algorithms, software. Proc. KoreaJapan joint workshop on Robust and predictive control of time-delay systems. Seoul, Korea, January 27–
28, 1999.
[125] Kwong, R.H. A stability theory for the linearquadratic-gaussian problem for systems with delays
in the state, control and observations. SIAM J. Control and Optimization. 1980, 18, no. 1, 266–269.
[126] Lakshmikantham, V. (1990) Recent Advances in Liapunov Method for Delay Differential Equations, Differential Equations: Stability and Control (Lecture
Notes in Pure and Applied Mathematics, Series/127),
pp. 333–434.
[127] Lakshmikantham, V. and Leela, S. (1969) Differential and Integral Inequalities, V. 2. Acad. Press, New
York.
[128] Laksmikantham, V., Leela, S. and Sivasundaram, S.
(1991) Liapunov Functions on Product Space and
Stability Theory of Delay Differential Equations, J.
Math. Anal. Appl., Vol. 154, pp. 391–402.
Bibliography 157
[129] Lee, E.B. Generalized quadratic optimal controller
for linear hereditary systems. IEEE Trans. Automat.
Control. 1980, 25, 528–531.
[130] Levin, J.J. and Nohel, J. (1964) On Nonlinear Delay
Equation, J. Math. Anal. Appl., Vol. 8, pp. 31–44.
[131] Lyapunov, A.M. (1935) The General Problem of
Motion Stability. ONTI, Moscow – Leningrad. (In
Russian)
[132] Malek-Zavarei, M. and Jamshidi, M. Time-delay systems. Analysis, optimization and applications. NorthHolland, Amsterdam, 1987.
[133] Malkin, I.G. (1966) Stability of Motion. Nauka,
Moscow. (In Russian)
[134] Manitius, A. Feedback controllers for a wind tunnel
model involving a delay: Analytical design and numerical simulation. IEEE Trans. Automat. Control,
1984, 29, no. 12, 1058–1068.
[135] Manitius, A. and Tran, H. Numerical simulation of a
nonlinear feedback controller for a wind tunnel model
involving a time delay, Optimal Control Application
and Methods, Vol. 7, pp. 19–39, 1986.
[136] Markushin, E.M. (1971) Quadratic Functionals for
Systems with Time Lags, Diff. Uravn., Vol. 7, no. 2,
pp. 369–370. (In Russian)
[137] Martynyuk, A.A. Technical stability in dynamics.
Tekhnika, Kiev, 1973. (In Russian)
[138] Meinardus, G. and Nurnberger, G. (eds.) (1985)
Delay Equations, Approximation and Application,
Birkhauser, Basel.
[139] Milshtein, G.N. (1981) Quadratic Lyapunov’s Functionals for Systems with Delays, Diff. Uravn.,
Vol. 17, no. 6, pp. 984–993. (In Russian)
158
Systems with Delays
[140] Milshtein, G.N. (1987) Positive Lyapunov’s Functionals for Linear Systems with Delays, Diff. Uravn.,
Vol. 23, no. 12, pp. 2051–2060. (In Russian)
[141] Mikolajska, Z. (1969) Une Remarque sur des Notes
der Razumichin et Krasovskij sur la Stabilite Asimptotique, Ann. Polon. Math., Vol. 22.1, pp. 69–72.
[142] Moiseev, N.D. About some methods of the technical stability theory. Transactions of Zhukovskii VVI
Academy. 1945, 135. (In Russian)
[143] Myshkis, A.D. (1972) Linear Differential Equations with Delayed Argument, Nauka, Moscow. (In
Russian) (First ed.: 1951; German transl.: Lineare
Differentialgleichungen mit nacheilendem Argument,
VEB Deutsch. Verlag, Berlin, 1955.)
[144] Myshkis, A.D. (1949) General Theory of Differential Equations with Deviating Argument, Usp. Mat.
Nauk, Vol. 4, no. 5, pp. 99–141. (In Russian)
[145] Myshkis, A.D. (1977) On some Problems of the Theory of Differential Equations with Deviating Argument, Usp. Mat. Nauk, Vol. 32, no. 2, pp. 174–202.
(In Russian)
[146] Myshkis, A.D. and Elsgol’ts, L.E. (1967) The Status
and Problems of the Theory of Differential Equations
with Deviating Argument, Usp. Mat. Nauk, Vol. 22,
no. 2, pp. 21–57. (In Russian)
[147] Neves, K.W. (1975) Automatic integration of functional differential equations: An approach, ACM
Trans. Math. Soft., pp. 357–368.
[148] Neves, K.W. (1975) Automatic integration of functional differential equations, Collected Algorithms
from ACM, Alg. 497.
[149] Neves, K.W. and Thompson, S. (1992) Software for
the numerical solution of systems of functional differ-
Bibliography 159
ential equations with state-dependent delays, Appl.
Num. Math. Vol. 9, pp. 385–401.
[150] Oberle, H.J. and Pesch, H.J. (1981) Numerical treatment of delay differential equations by Hermite interpolation Numer. Math., Vol. 37, pp. 235–255.
[151] Oppelstrup, J. (1978) The RKFHB4 method for
delay differential equations by Hermite interpolation, Lect. Notes in Math., Springer-Verlag, Berlin.
Vol. 631, pp. 133–146.
[152] Osipov, Yu.S. (1965) Stabilization of Control Systems
with Delays, Diff. Uravn., Vol. 1, no. 5, pp. 463–
473. (In Russian)
[153] Osipov, Yu.S. (1965) On Stabilization of Nonlinear
Control Systems with Delays in Critical Case, Diff.
Uravn., Vol. 1, no. 7, pp. 908–922. (In Russian)
[154] Osipov, Yu.S. (1965) On Reduction Principle in Critical Cases of Stability of Systems with Time Lags,
Prikl. Mat. Mekh., Vol. 29, no. 5, pp. 810–820. (In
Russian)
[155] Osipov, Yu.S. and Pimenov, V.G. (1978) On Differential Game Theory for Systems with Delays, Prikl.
Mat. Mekh., Vol. 42, no. 6, pp. 969–977. (In
Russian)
[156] Paul, C.A.H. (1995) A User Guide to ARCHI, MCCM
Tech. rep. No. 283, University of Manchester.
[157] Pimenov, V.G. (1987) On a Regulation Problem for
System with Control Delay, Methods of Positional
and Programmed Control, Sverdlovsk, pp. 107–121.
(In Russian)
[158] Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze,
R.V. and Mischenko, E.F. (1962) The Mathematical
Theory of Optimal Processes. Interscience, New York.
160
Systems with Delays
[159] Prasolov, A.V. (1981) On Application of Lyapunov
Functions for Investigating of Instability of Systems with Delays, Vestnik Leningrad Univ., Ser. 1,
pp. 116–118. (In Russian)
[160] Prasolov, A.V. (1988) Tests of Instability for Systems with Delays, Vestnik Leningrad Univ., Ser. 1,
pp. 108–109. (In Russian)
[161] Ragg, B.C. and Stapleton, C.A. (1969) Time Optimal
Control of Second-order Systems with Transport Lag,
Intern. J. Control, Vol. 9 no. 3, pp. 243–257.
[162] Razumikhin, B.S. (1988) Stability of Hereditary Systems. Nauka, Moscow. (In Russian)
[163] Razumikhin, B.S. (1956) On Stability of Systems
with Time Lag, Prikl. Mat. Mekh., Vol. 20, pp. 500–
512. (In Russian)
[164] Repin, Yu.M. (1965) Quadratic Lyapunov Functionals for Systems with Delays, Prikl. Mat. Mekh.,
Vol. 29, pp. 564–566. (In Russian)
[165] Ross, D.W. (1971) Controller Design for Time Lag
Systems via Quadratic Criterion, IEEE Trans. Aut.
Control, Vol. 16, pp. 664–672.
[166] Ross, D.W. and Flugge-Lotz, I. (1969) An Optimal Control Problem for Systems with Differentialdifference Equation Dynamics SIAM J. Control,
Vol. 7, no. 4, pp. 609–623.
[167] Samarskii, A.A. and Gulin, A.V. (1989) Numerical
methods. Nauka, Moscow. (In Russian)
[168] Skeel, R. (1976) Analysis of Fixed-Stepsize Methods
SIAM J. Numer. Anal., Vol. 13, pp. 664–683.
[169] Seifert, G. (1982) On Caratheodory Conditions for
Functional Differential Equations with Infinite Delays, Rocky Mount. J. Math., Vol. 12, no. 4,
pp. 615–619.
Bibliography 161
[170] Shimanov, S.N. (1960) On Instability of the Motion
of Systems with Retardations, Prikl. Mat. Mekh.,
Vol. 24, pp. 55–63. (In Russian)
[171] Shimanov, S.N. (1965) On the Theory of Linear Differential Equations with Retardations, Diff. Uravn.,
Vol. 1, pp. 102–116. (In Russian)
[172] Shimbell, A. Contribution to the mathematical biophysics of the central nervous system with the special
reference to learning. Bull Math. Biophysica, 1950,
no. 12, pp. 241–275.
[173] Shimemura, E., Uchida, K. and Kubo, T. LQ regulator design method for systems with delay based
on spectral decomposition of the hamiltonian. Int. J.
Control. 1988, 47, no. 4, pp. 953–965.
[174] Soliman, M.A. and Ray, W.H. Optimal feedback control for linear-quadratic system having time delay.
Int. J. Control. 1972, 15, no. 4, pp. 609–627.
[175] Soner, H.M. (1988) On the Hamilton-Jacobi Equations in Banach Spaces, J. Optimiz. Appl., Vol. 57,
no. 3, pp. 429–437.
[176] Stetter, H. (1973) Analysis of discretization methods
for ordinary differential equations, Springer-Verlag,
Berlin.
[177] Tavernini, L. (1971) One-step methods for the numerical solution of Volterra functional differential equations, SIAM J. Numer. Anal., Vol. 8, pp. 786–795.
[178] Tavernini, L. (1975) Linear multistep method for the
numerical solution of Volterra functional differential
equations, Appl. Anal. Vol. 1, pp. 169–185.
[179] Uchida, K. and Shimemura, E. Closed-loop properties of the infinite-time linear-quadratic optimal regulator for systems with delay. Int. J. Control. 1986,
43, no. 3, pp. 773–779.
162
Systems with Delays
[180] Uchida, K., Shimemura, E., Kubo, T. and Abe,
N. The linear-quadratic optimal control approach to
feedback control design for systems with delay. Automatica. 1988, 24, no. 6, pp. 773–780.
[181] Uchida, K., Shimemura, E., Kubo, T. and Abe, N..
Optimal regulator for linear systems with delays in
state and control. Spectrum decomposition and prediction approach. Analysis and optimization of systems. Lecture Notes in Control and Information Sciences, Springer-Verlag, 1988, 22, pp. 32–43.
[182] Vinter, R.B. and Kwong, R.H. (1981) The Infinite
Quadratic Control Problem for Linear Systems with
State and Control Delays: An Evolution Equation
Approach, SIAM J. Contr. Optimiz., Vol. 19, no. 1,
pp. 139–153.
[183] Volterra, V. (1931) Theorie Mathematique de la Lutte
poir la Vie. Gauthier–Villars, Paris.
[184] Wen, L.Z. (1982) On the Uniform Asymptotic Stability in Functional Differential Equations, Proc. Amer.
Math. Soc., Vol. 85, no. 4, pp. 533–538.
[185] Wenzhang, H. (1989) Generalization of Liapunov’s
Theorem in a Linear Delay System, J. Math. Anal.
Appl., Vol. 142, no. 1, pp. 83–94.
[186] Wille, D.R. and Baker, C.T.H. (1992) DELSOL —
A numerical code for the solution of systems of delay differential equations, Appl. Num. Math., Vol. 9,
pp. 223–234.
[187] Wolenski, P.R. (1992) Hamilton-Jacobi Theory for
Hereditary Control Problem, Seminar Notes in Functional Analysis and Partial Differential Equations,
1991–1992 (Department of Mathematics at Louisiana
State University).
[188] Yoshizawa, T. (1966) Stability Theory by Liapunov’s
Second Method. Math. Soc. Japan, Tokyo.
[189] Zennaro, M. (1985) On the p–stability of one-step collocation for delay differential equations, International
Series of Numerical Mathematics, Verlag, Basel,
pp. 334–343.
[190] Zennaro, M. (1995) Delay differential equations: theory and numerics, Theory and numerics of ordinary
and partial differential equations, OUP, Oxford, pp.
291–333.
[191] Zhang, S. (1989) Unified Stability Theorems in
RFDE and NFDE, Chin. Sci. Bull., Vol. 34, no. 7,
pp. 543–548.
[192] Zubov, V.I. (1958) On the theory of linear time-invariant
systems with delays, Izvestiya VUZov. Matematika,
no. 6, pp. 86–95. (In Russian)
[193] Zubov, V.I. (1959) Mathematical methods of investigation of
controlled systems, Sudpromgiz, Moscow. (In
Russian)
[194] Zverkin, A.M. (1959) Dependence of the Stability
of Solutions of Linear Differential Equations with
Lag upon the Choice of the Initial Moment, Vestnik
Moskov. Univ. Ser. Mat. Mekh. Astr., Vol. 5, pp. 15–
20. (In Russian)
[195] Zverkin, A.M. (1968) Differential Equations with Deviating Argument, Fifth Summer Math. School, Kiev.
(In Russian)
[196] Zverkina, T.S. (1975) Numerical integration of differential equations with delays, Transactions of the
seminar on the theory of differential equations with
deviating arguments, Vol. IX, pp. 82–86.
Index
automatic step size control, 118
conditional representation, 16
consistent, 112
converse theorem, 62
discrete model, 101
exponential matrix equation, 78
– solution, 83
extrapolation by continuation, 111
extrapolational operator, 111
generalized Riccati equations, 73
– explicit solutions, 77
gradient methods, 84
improved Euler method, 119
initial value problem, 19
interpolation-extrapolation operator, 112
interpolational operator, 109
linear system with delays, 11
– time-invariant, 11
– with discrete delay, 64
– with distributed delay, 66
Lipschitz condition, 112
LQR problem, 70
Lyapunov-Krasovskii quadratic functionals, 46
phase space, 25
piece-wise constant interpolation, 103
residual, 114
Runge-Kutta-Fehlberg method, 120
Runge-Kutta-like method, 113
solution
– asymptotically stable, 40
– exponentially stable, 40
– stable, 40
stationary solution method, 83
Also of Interest
By the same author
i-Smooth Analysis: Theory and Applications, by A.V. Kim, ISBN
9781118998366. A totally new direction in mathematics, this revolutionary new study introduces a new class of invariant derivatives of functions and establishes relations with other derivatives, such as the Sobolev
generalized derivative and the generalized derivative of distribution
theory. DUE OUT IN MAY 2015.
Check out these other titles from Scrivener Publishing
Reverse Osmosis: Design, Processes, and Applications for Engineers 2nd
Edition, by Jane Kucera, ISBN 9781118639740. This is the most comprehensive and up-to-date coverage of the “green” process of reverse osmosis in industrial applications, completely updated in this new edition to
cover all of the processes and equipment necessary to design, operate, and
troubleshoot reverse osmosis systems. DUE OUT IN MAY 2015.
Pavement Asset Management, by Ralph Haas and W. Ronald Hudson, with
Lynne Cowe Falls, ISBN 9781119038702. Written by the founders of the
subject, this is the single must-have volume ever published on pavement
asset management. DUE OUT IN MAY 2015.
Open Ended Problems: A Future Chemical Engineering Approach, by J.
Patrick Abulencia and Louis Theodore, ISBN 9781118946046. Although
the primary market is chemical engineers, the book covers all engineering
areas so those from all disciplines will find this book useful. DUE OUT IN
MARCH 2015.
Fracking, by Michael Holloway and Oliver Rudd, ISBN 9781118496329.
This book explores the history, techniques, and materials used in the
practice of induced hydraulic fracturing, one of today’s hottest topics, for
the production of natural gas, while examining the environmental and
economic impact. NOW AVAILABLE!
Formation Testing: Pressure Transient and Formation Analysis, by Wilson
C. Chin, Yanmin Zhou, Yongren Feng, Qiang Yu, and Lixin Zhao, ISBN
9781118831137. This is the only book available to the reservoir or petroleum engineer covering formation testing algorithms for wireline and
LWD reservoir analysis that are developed for transient pressure, contamination modeling, permeability, and pore pressure prediction. NOW
AVAILABLE!
Electromagnetic Well Logging, by Wilson C. Chin, ISBN 9781118831038.
Mathematically rigorous, computationally fast, and easy to use, this new
approach to electromagnetic well logging does not bear the limitations
of existing methods and gives the reservoir engineer a new dimension to
MWD/LWD interpretation and tool design. NOW AVAILABLE!
Desalination: Water From Water, by Jane Kucera, ISBN 9781118208526.
This is the most comprehensive and up-to-date coverage of the “green”
process of desalination in industrial and municipal applications, covering all of the processes and equipment necessary to design, operate, and
troubleshoot desalination systems. NOW AVAILABLE!
Tidal Power: Harnessing Energy From Water Currents, by Victor Lyatkher,
ISBN 978111720912. Offers a unique and highly technical approach to
tidal power and how it can be harnessed efficiently and cost-effectively,
with less impact on the environment than traditional power plants. NOW
AVAILABLE!
Electrochemical Water Processing, by Ralph Zito, ISBN 9781118098714.
Two of the most important issues facing society today and in the future
will be the global water supply and energy production. This book
addresses both of these important issues with the idea that non-usable
water can be purified through the use of electrical energy, instead of
chemical or mechanical methods. NOW AVAILABLE!
Biofuels Production, Edited by Vikash Babu, Ashish Thapliyal, and
Girijesh Kumar Patel, ISBN 9781118634509. The most comprehensive
and up-to-date treatment of all the possible aspects for biofuels production from biomass or waste material available. NOW AVAILABLE!
Biogas Production, Edited by Ackmez Mudhoo, ISBN 9781118062852.
This volume covers the most cutting-edge pretreatment processes being
used and studied today for the production of biogas during anaerobic
digestion processes using different feedstocks, in the most efficient and
economical methods possible. NOW AVAILABLE!
Bioremediation and Sustainability: Research and Applications, Edited
by Romeela Mohee and Ackmez Mudhoo, ISBN 9781118062845.
Bioremediation and Sustainability is an up-to-date and comprehensive
treatment of research and applications for some of the most important
low-cost, “green,” emerging technologies in chemical and environmental
engineering. NOW AVAILABLE!
Sustainable Energy Pricing, by Gary Zatzman, ISBN 9780470901632.
In this controversial new volume, the author explores a new science of
energy pricing and how it can be done in a way that is sustainable for the
world’s economy and environment. NOW AVAILABLE!
Green Chemistry and Environmental Remediation, Edited by Rashmi
Sanghi and Vandana Singh, ISBN 9780470943083. Presents high-quality
research papers as well as in-depth review articles on the new emerging green face of multidimensional environmental chemistry. NOW
AVAILABLE!
Energy Storage: A New Approach, by Ralph Zito, ISBN 9780470625910.
Exploring the potential of reversible concentration cells, the author of
this groundbreaking volume reveals new technologies to solve the global
crisis of energy storage. NOW AVAILABLE!
Bioremediation of Petroleum and Petroleum Products, by James Speight
and Karuna Arjoon, ISBN 9780470938492. With petroleum-related
spills, explosions, and health issues in the headlines almost every day, the
issue of remediation of petroleum and petroleum products is taking on
increasing importance, for the survival of our environment, our planet,
and our future. This book is the first of its kind to explore this difficult
issue from an engineering and scientific point of view and offer solutions
and reasonable courses of action. NOW AVAILABLE!
WILEY END USER LICENSE AGREEMENT
Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.