Pure and Applied Mathematics Journal
Volume 5, Issue 3, June 2016, Pages: 77-81

Optimal Control and Hamiltonian System

Estomih Shedrack Massawe

Department of Mathematics, College of Natural Sciences, University of Dar es Salaam, Dar es Salaam, Tanzania

Estomih Shedrack Massawe. Optimal Control and Hamiltonian System. Pure and Applied Mathematics Journal. Vol. 5, No. 3, 2016, pp. 77-81. doi: 10.11648/j.pamj.20160503.13

Received: April 16, 2016; Accepted: April 28, 2016; Published: May 10, 2016

Abstract: In this paper, an optimal control problem for Hamiltonian control systems with external variables is formulated and analysed. Necessary and sufficient conditions which lead to Pontryagin’s principle are stated and elaborated. Finally, it is shown how Pontryagin’s principle fits naturally into the theory of Hamiltonian systems. Pontryagin’s maximum principle is considered in detail since it is capable of dealing both with unbounded continuous controls and with bounded controls which are possibly discontinuous.

Keywords: Optimal Control, Hamiltonian Systems, Conditions for Optimality

1. Introduction

Many physical systems governed by differential equations must be controlled in such a way that a given performance index is optimized; large savings in cost can result from even a small improvement in performance. The optimal control problem formulated here is the so-called Bolza problem [1], with the added condition that the control variables lie in a closed set.

In his paper on optimal control of stochastic dynamical systems, [2] established the existence of stochastic optimal controls for a large class of stochastic differential systems with finite memory. [3] developed a feedback control law for dynamical systems described by constrained generalized coordinates. They showed that for certain complex dynamical systems it is more desirable to develop the mathematical model using general coordinates rather than degrees of freedom, which leads to differential-algebraic equations of motion. [4] developed a computational approach to motor control that offers a unifying modelling framework for both dynamic systems and optimal control approaches. Through discussions of several behavioural experiments and some theoretical and robotics studies, they demonstrated how these computational ideas allow both the representation of self-organizing processes and the optimization of movement based on reward criteria. [5] proposed a new mathematical formulation for the problem of optimal traffic assignment in dynamic networks with multiple origins and destinations. [6] studied dynamical-systems-based optimal control of incompressible fluids, proposing a cost functional based on a local dynamical-systems characterization of vortices. The connection between optimal control and Hamiltonian systems, and especially the necessary conditions of optimality, has not yet been studied. In this paper it is intended to focus on the link between optimal control and Hamiltonian systems. Pontryagin’s maximum principle is considered in detail since it is capable of dealing both with unbounded continuous controls and with bounded controls which are possibly discontinuous.

2. Formulation of Optimal Control Problem

We consider the state of a control system described by an -vector  whose evolution is governed by a system of differential equations

(1)

where  is a control function from a closed subset of  and .

Given a compact interval , open sets, , , a set  and functions  and  such that

,

,

,

.

the optimal control problem can be stated as follows:

Minimize  (2)

over all continuous functions  and measurable functions  satisfying

, ,

, .

is called the running cost and  the terminal cost [7].
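Since the paper’s printed formulas did not survive reproduction, the standard Bolza form of [1] is sketched here in assumed textbook notation (state x, control u, dynamics f, running cost L, terminal cost g; these symbol names are the editor’s assumption, not the author’s):

```latex
% Standard Bolza formulation (assumed notation; the paper's own
% symbols were lost in reproduction).
\begin{aligned}
  &\text{minimize } J(x_0, u) \;=\; g\bigl(x(t_1)\bigr)
     \;+\; \int_{t_0}^{t_1} L\bigl(t, x(t), u(t)\bigr)\,dt \\
  &\text{subject to } \dot{x}(t) \;=\; f\bigl(t, x(t), u(t)\bigr),
     \qquad x(t_0) = x_0, \qquad u(t) \in U .
\end{aligned}
```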

Pontryagin’s principle requires the introduction of the Hamiltonian function  given by

(3)

in analogy with the corresponding quantity in classical mechanics, where  plays the role of the generalized momenta.
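In standard textbook notation (assumed here, since the printed symbols were lost: costate p, running cost L, dynamics f), the Hamiltonian of equation (3) takes the form given in [1]:

```latex
% Hamiltonian for the Bolza problem (assumed notation): p is the
% costate (generalized momentum) vector, L the running cost.
H(t, x, u, p) \;=\; L(t, x, u) \;+\; p^{\top} f(t, x, u)
```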

Analogously to the formulation of Hamiltonian systems, the following set of equations holds [8]:

(4)

with boundary conditions

Necessary Conditions for Optimality

In this section we shall state the necessary conditions for optimality which then lead to Pontryagin’s maximum principle.

Theorem

The necessary conditions for  to be an optimal initial condition and optimal control for the optimal control problem stated above are the existence of a nonzero -dimensional vector  with  and an -dimensional vector function  such that for : [1]

(i)   for  and ,

(ii)  ,

(iii)   with ,

(iv)  ,

(v)

(vi)

If  has a continuous partial derivative  then the condition

(vii)

holds for each .

Condition (ii) above can be written as

(5)

This is called Pontryagin’s maximum principle.

The interpretation of this principle is that on the optimal control,  is minimized with respect to the control variables , .
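In assumed notation (optimal pair x*, u*, costate p; symbol names are the editor’s assumption), the pointwise minimization reads:

```latex
% Pointwise minimization of the Hamiltonian along the optimal
% pair (x^*, u^*) with costate p (assumed notation).
H\bigl(t, x^{*}(t), u^{*}(t), p(t)\bigr)
  \;=\; \min_{u \in U} H\bigl(t, x^{*}(t), u, p(t)\bigr),
\qquad \text{a.e. } t \in [t_0, t_1].
```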

For simplicity we shall treat the free terminal point problem. This is a special case of the optimal control problem in which the initial and final times are fixed and there are no conditions on the final state.

We shall restate Pontryagin’s principle so that it fits naturally into our framework of the free terminal point problem.

A necessary condition for optimality of a control  for the free terminal point problem is that

(6)

For each  and , where  is the solution of

with boundary condition

.
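In assumed notation (costate p, terminal cost g; the paper’s own symbols were lost), the costate equation and its boundary condition for the free terminal point problem take the standard form:

```latex
% Costate (adjoint) equation with transversality condition for the
% free terminal point problem (assumed notation).
\dot{p}(t) \;=\; -\,\frac{\partial H}{\partial x}
    \bigl(t, x^{*}(t), u^{*}(t), p(t)\bigr),
\qquad
p(t_1) \;=\; \frac{\partial g}{\partial x}\bigl(x^{*}(t_1)\bigr).
```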

Pontryagin’s principle gives only necessary conditions for optimality; these conditions need not be sufficient. Since each optimal control must be extremal, there may be extremal controls which are not optimal. It is therefore natural to ask for conditions which are both necessary and sufficient for optimality.

Consider a space  of control functions  defined on  with values in . Let the subset  be the set of control functions  such that  for each  and  is a feasible pair for the fixed initial state . The necessary and sufficient condition that a control  be optimal for the free terminal point problem is that for each fixed  we have  for each  such that  is convex and the mapping  is a function on . To fit this to the performance index

(7)

It is assumed that  is a real continuously differentiable function, convex in , and that  is a continuously differentiable convex function of . For simplicity we shall consider a linear system.

Theorem

A necessary and sufficient condition for optimality of a control  for the free terminal point problem with system [1]

(8)

and performance index

is that for

for each  such that  where  is the solution of

,

.

Moreover, if  is strictly convex in  for each fixed , the optimal control  is unique [1].

Proof

Since the corresponding differential equation is linear, the set  of controls such that  and  is a feasible pair consists of all piecewise continuous functions such that . This is a convex set. Let  and  be controls in  and  and  the corresponding solutions of the differential equations with . If , the convexity of  and  implies

(9)

since  is the solution of the differential equation corresponding to . Therefore  is a convex function on . It can be shown that, for each  satisfying  for each ,

(10)

Hence  has a minimum at  [1].

If  is strictly convex, the above inequality is strict. Thus  is a strictly convex function on  and the minimum is unique.
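As a concrete instance of the two-point boundary value problem produced by the maximum principle, consider a scalar linear-quadratic problem (an illustrative example supplied by the editor, not taken from the paper): minimize the integral of x² + u² over [0, 1] subject to ẋ = u, x(0) = 1, with free terminal state. Minimizing the Hamiltonian H = x² + u² + pu over u gives u = −p/2, so the canonical equations are ẋ = −p/2, ṗ = −2x with x(0) = 1 and the transversality condition p(1) = 0. The unknown initial costate p(0) can be found by shooting; the analytic answer is p(0) = 2 tanh 1. A minimal sketch:

```python
import math

# Scalar LQ example (illustrative only): minimize ∫ (x^2 + u^2) dt
# over [0, T] subject to x' = u, x(0) = 1, free x(T).
# Minimizing H = x^2 + u^2 + p*u in u gives u = -p/2, and the
# canonical equations become x' = -p/2, p' = -2x with x(0) = 1
# and the transversality condition p(T) = 0.

T = 1.0

def simulate(p0, n=2000):
    """RK4-integrate the canonical system from t=0 and return p(T)."""
    h = T / n
    x, p = 1.0, p0

    def f(x, p):
        return -p / 2.0, -2.0 * x

    for _ in range(n):
        k1x, k1p = f(x, p)
        k2x, k2p = f(x + h / 2 * k1x, p + h / 2 * k1p)
        k3x, k3p = f(x + h / 2 * k2x, p + h / 2 * k2p)
        k4x, k4p = f(x + h * k3x, p + h * k3p)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return p

def shoot(lo=-10.0, hi=10.0, tol=1e-10):
    """Bisect on the unknown initial costate p(0) until p(T) = 0."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if simulate(lo) * simulate(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

p0 = shoot()
# Compare the shot value with the analytic solution p(0) = 2 tanh(T)
print(p0, 2.0 * math.tanh(T))
```

Strict convexity of the running cost in u guarantees, per the theorem above, that the control recovered this way is the unique optimum.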

3. Connections to Hamiltonian Systems

To fit Pontryagin’s principle into the theory of Hamiltonian systems, a control system

,

will be considered, with the input space  a manifold without boundary and  a smooth function in all of its variables. Under these conditions, Pontryagin’s principle implies the first-order condition

for optimization.

Consider a simple control system  given by

with .  has a natural symplectic form  and  has a symplectic form . The space of external variables  has a symplectic form . Therefore  is a symplectic form . Let  be a smooth function and . These functions define the Hamiltonian

.

This is a generating function of the Lagrangian submanifold  given by the Hamiltonian equations

(11)
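Since the symbols of (11) were lost in reproduction, one common explicit form of such generating-function equations, following the conventions of [8] and [9] (a convention assumed by the editor, not confirmed by the surviving text), is:

```latex
% Lagrangian submanifold generated by H(x, p, u): the external
% (output) variables y are conjugate to the inputs u (assumed
% convention following [8], [9]).
\dot{x} \;=\; \frac{\partial H}{\partial p}(x, p, u), \qquad
\dot{p} \;=\; -\,\frac{\partial H}{\partial x}(x, p, u), \qquad
y \;=\; \frac{\partial H}{\partial u}(x, p, u).
```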

[8] has shown that a Hamiltonian control system is given by

(12)

Comparing equations (11) and (12), it can be concluded that a control system  together with a smooth function  defines a full Hamiltonian system  where  with  such that  and  such that . It is assumed that  is a trivial bundle [9].

Let  and . Let also ,  and the Hamiltonian system  be as defined above. Then if , , the equation  has a local Hamiltonian function [10]. We then obtain locally a Hamiltonian vector field  on . The projections of the solution curves of  onto  form a set of curves which, by Pontryagin’s principle, contain the optimal trajectory . It is noted that ,  implies that also . If we have only rank  of the , then we obtain an immersed Lagrangian submanifold  of . This is similar to the implicit Hamiltonian differential equation . If  is projected onto , then there may be some points in  where the projection does not have maximal rank and thus the solution of the differential equation will not be defined. If  is projected onto , singularities and non-uniqueness of the optimal trajectories may occur [10].

4. Conclusion

In this paper, an optimal control problem for Hamiltonian control systems with external variables has been formulated and analysed. Necessary and sufficient conditions which lead to Pontryagin’s principle were stated. It was shown how Pontryagin’s principle fits into the theory of Hamiltonian systems. Pontryagin’s maximum principle was considered in detail because it is capable of dealing both with unbounded continuous controls and with bounded controls which are possibly discontinuous.

References

1. W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.
2. N. U. Ahmed, Optimal control of stochastic dynamical systems, Information and Control, Vol. 22, No. 1, pp. 13-30, 1973.
3. V. Radisavljevic and H. Baruh, Journal of Dynamic Systems, Measurement, and Control, Vol. 121, No. 4, pp. 594-598, 1999.
4. S. Schaal, P. Mohajerian and A. Ijspeert, Progress in Brain Research, Vol. 165, pp. 425-445, 2007.
5. S. Lafortune, Introduction to Discrete Event Systems, The International Series on Discrete Event Dynamic Systems, 1993.
6. M. H. Huller, K. Kunisch, Y. S., S. Volkwein, International Journal for Numerical Methods in Fluids, 00:1-6, 2000.
7. S. Barnett and R. G. Cameron, Introduction to Mathematical Control Theory, Clarendon Press, Oxford, 1985.
8. E. S. Massawe, Hamiltonian Control Systems, International Journal of Theoretical and Mathematical Physics, Vol. 6, No. 1, pp. 26-30, 2016.
9. A. J. van der Schaft, System Theoretic Descriptions of Physical Systems, Doctoral Thesis, Mathematisch Centrum, Amsterdam, 1984.
10. E. S. Massawe, Hamiltonian Control Systems, Unpublished M.Sc. Thesis, University of Dublin.
