Geometric optimal control and applications to aerospace
Pacific Journal of Mathematics for Industry volume 9, Article number: 8 (2017)
Abstract
This article deals with applications of optimal control to aerospace problems with a focus on modern geometric optimal control tools and numerical continuation techniques. Geometric optimal control is a theory combining optimal control with various concepts of differential geometry. The ultimate objective is to derive optimal synthesis results for general classes of control systems. Continuation or homotopy methods consist in solving a series of parameterized problems, starting from a simple one and arriving, by continuous deformation, at the original problem. They help overcome the difficult initialization issues of the shooting method. The combination of geometric control and homotopy methods improves the traditional techniques of optimal control theory.
A nonacademic example of optimal attitude-trajectory control of (classical and airborne) launch vehicles, treated in detail, illustrates how geometric optimal control can be used to finely analyze the structure of the extremals. This theoretical analysis helps build an efficient numerical solution procedure combining shooting methods and numerical continuation. Chattering is also analyzed and it is shown how to deal with this issue in practice.
Introduction
Generally, a space vehicle is modeled as a solid body. The motion combines the translation of the center of gravity (CoG) defining the trajectory and the body rotation around the CoG defining the attitude. A usual simplification consists in assuming that the translation and the rotation motions are independent, because the attitude time scale is often much shorter than the trajectory time scale, so that the attitude control can be considered as nearly perfect, i.e., instantaneous or with a short response time. With this assumption the trajectory problem (also called the guidance problem) and the attitude problem (also called the control problem) can be addressed separately. This uncoupling of the guidance and the control problem is valid either when the torque commands have a negligible effect on the CoG motion or when the control time scale is much shorter than the guidance time scale. Most space vehicles fall into one of these two categories. The main exceptions are atmospheric maneuvering vehicles such as cruise or anti-ballistic missiles and airborne launchers. Such vehicles have to perform large reorientation maneuvers requiring significant durations. These maneuvers have a significant influence on the CoG motion and they must be accounted for in a realistic trajectory optimization. In these cases, the rotation and the translation motions are coupled, and the commands are then the nozzle or flap deflections, depending on the vehicle control devices. For a propelled launcher, the motion is controlled by the thrust force, which is nearly aligned with the roll axis. We call such exceptional problems attitude-trajectory or coupled problems. We refer readers interested in aerospace missions to Section 2, whose objective is to give a global view on how space missions are translated into optimal control problems.
The purpose of this article is to show how to address optimal control problems in aerospace using modern techniques of geometric optimal control and how to build solution algorithms based on continuation techniques. In particular, we give a brief survey of the chattering phenomenon (also called Fuller phenomenon), and explain how to deal with it by means of numerical continuation. The chattering phenomenon, which appears systematically in aerospace applications (in trajectory optimization [91] and in attitude-trajectory optimization problems [92, 93]), is the situation where the optimal control switches an infinite number of times over a compact time interval.
Geometric optimal control (initiated in the early 1980s, and having widely demonstrated its advantages over the classical theory of the 1960s) and continuation techniques (which are not new, but have been somewhat neglected until recently in optimal control) are powerful approaches for aerospace applications. In this article, the main techniques of optimal control theory, including the Pontryagin Maximum Principle, the first-order and higher order optimality conditions, the associated numerical methods, and the numerical continuation principles, will be recalled. Most mathematical notions presented here are known to many readers and can be skipped on a first reading.
After recalling some applications of geometric control techniques and of continuation to trajectory optimization problems, we present detailed analyses of a nonacademic attitude-trajectory problem that we have studied in recent years. This example deals with a minimum time maneuver of a coupled attitude-trajectory dynamic system. Due to the high nonlinearity of the system and the existence of a chattering phenomenon (see Sections 3.4 and 7 for details), the standard techniques of optimal control do not provide adequate solutions to this problem. Through this example, we will show step by step how to build efficient numerical procedures with the help of theoretical results obtained by applying geometric optimal control techniques. More precisely, we will explain how the geometric control techniques are used to analyze the extremals of the problem and to prove the existence of the chattering phenomenon, and how the numerical continuation methods are used to overcome the chattering and to design the numerical solution method.
Structure of the paper. In Section 2, several optimal control problems stemming from various aerospace missions are systematically introduced as motivation. In Section 3, we provide a brief survey of geometric optimal control, including the use of Lie and Poisson brackets with first and higher order optimality conditions. In Section 4, we recall classical numerical methods for optimal control problems, namely indirect and direct methods. In Section 5, we recall the concept of continuation methods, which help overcome the initialization issue of indirect methods. In Section 6, we briefly describe some applications of geometric optimal control and of continuation to space trajectory optimization problems. In Section 7, we detail a full nonacademic example in aerospace (an attitude-trajectory problem), in order to illustrate how to solve optimal control problems with the help of geometric optimal control theory and continuation methods.
Applications to aerospace problems
Transport in space gives rise to a large range of problems that can be addressed by optimal control and mathematical programming techniques. Three kinds of problems can be distinguished depending on the departure and the arrival point: ascent from the Earth ground to an orbit, reentry from an orbit to the Earth ground (or to another body of the solar system), and transfer from one orbit to another. A space mission is generally composed of successive ascent, transfer and reentry phases, whose features are presented in the following paragraphs.
Ascent missions necessitate huge propellant masses to reach the orbital velocity and deliver large payloads such as telecommunications satellites. Due to the large lift-off mass, only chemical rocket engines are able to deliver the required thrust level. Consumption minimization is the main concern for these missions, whose time of flight is generally about half an hour. Heavy launchers lift off vertically from a fixed ground launch pad, whereas airborne launchers are released horizontally by an airplane, thus benefiting from a higher initial altitude and an initial subsonic velocity. The first part of the trajectory occurs in the Earth atmosphere at increasing speed. The large aerodynamic loads met during the atmospheric flight require flying at near zero angle of attack, so that the atmospheric leg is completely driven by the initial conditions. Due to the large masses of propellants carried on board, the whole flight must be tracked by ground radar stations and stringent safety constraints must be applied regarding the area flown over. Once in vacuum the vehicle attitude is no longer constrained and the thrust direction can be freely chosen. When the orbital velocity is reached the thrust level can be reduced and coast arcs may help save propellant to reach the targeted orbit. Figure 1 gives an overview of the constraints applied to an ascent trajectory.
Reentry missions aim at retrieving either experiment results or space crews. The trajectory is split into a coast arc targeting accurate conditions at the atmospheric entry interface and a gliding atmospheric leg of about half an hour until the landing. The most stringent constraint comes from the convection flux that grows quickly when entering the dense atmosphere layers at hypersonic speeds. A near-horizontal flight is mandatory to achieve a progressive braking at limited thermal flux and load factor levels. The aerodynamic forces are controlled through the vehicle attitude. The angle of attack modulates the force magnitude and the loads applied to the vehicle. The bank angle orientates the lift left or right to follow an adequate descent rate and achieve the required downrange and cross-range until the targeted landing site. The landing may occur vertically in the sea or on the ground, or horizontally on a runway. Depending on the landing options the final braking is achieved by thrusting engines or by parachutes. If necessary the touchdown may also be damped by airbags or legs, for example for delivering scientific payloads on the Mars surface. The reentry is always the final part of a space mission. Take the ISS servicing mission as an example (see Fig. 2): the space shuttle is launched to join the space station in orbit, and after docking with the station it must return to the Earth, the reentry being the final part of the mission.
Orbital missions deal with orbit changes around the Earth and also with interplanetary travels. A major difference with ascent and reentry trajectories is the much larger duration, which ranges from days to months or even years to reach the farthest planets of the solar system. The motion is essentially due to the gravity field of the nearest body and possibly of a second one. The vehicle operational life is limited by its onboard propellant so that all propelled maneuvers must be achieved as economically as possible. Depending on the engine thrust level the maneuvers are modeled either as impulsive velocity changes (impulsive modelling), as short duration boosts (high thrust modelling), or as long duration boosts (low thrust modelling). Low thrust engines are particularly attractive due to their high specific impulse, but they require a high electrical power that cannot be delivered by onboard batteries. The energy is provided by large solar panels and the engine must be cut off when the vehicle enters the Earth shadow. Low thrust orbit raising of telecommunication satellites toward the geostationary orbit at 36000 km thus leads to quite complex optimal control problems, as pictured in Fig. 3. The green arrows represent the thrust direction on the target geostationary orbit, and the red ones represent the thrust direction on the initial orbit.
Other orbital transfer problems are the removal of space debris or the rendezvous for orbit servicing. Interplanetary missions raise other difficulties due to the gravity of several attracting bodies. For missions towards the Lagrange points (see Fig. 4) the detailed analysis of invariant manifolds in the three body problem can provide very inexpensive transfer solutions. In Fig. 4, the five Lagrange points \(L_i\), \(i=1,\cdots,5\), are illustrated in the left subfigure, and some trajectories around the points \(L_1\) and \(L_2\) are plotted in the right subfigure, including an \(L_1\) orbit, an \(L_2\) orbit, and an \(L_1\)-\(L_2\) transfer orbit.
For farther solar system travels successive fly-bys around selected planets allow “free” velocity gains. The resulting combinatorial problem with optional intermediate deep space maneuvers is challenging.
The above non-exhaustive list gives a preview of various space transportation problems. In all cases the mission analysis comprises a simulation task and an optimization task (see Fig. 5). Various formulations and methods are possible regarding these two tasks. Selecting an adequate approach is essential in order to build a satisfying numerical solution process.
The simulation task consists in integrating the dynamics differential equations derived from mechanics laws. The vehicle is generally modeled as a solid body. The motion combines the translation of the center of gravity defining the trajectory and the body rotation around its center of gravity defining the attitude. The main forces and torques originate from the gravity field (always present), from the propulsion system (when switched on) and possibly from the aerodynamic shape when the vehicle evolves in an atmosphere. In many cases a gravity model including the first zonal term due to the Earth flattening is sufficiently accurate at the mission analysis stage. The aerodynamics is generally modeled by the drag and lift components tabulated versus the Mach number and the angle of attack. The atmosphere parameters (density, pressure, temperature) can be represented by an exponential model or tabulated with respect to the altitude. A higher accuracy may be required on some specific occasions, for example to forecast the possible fall-out of dangerous space debris, or to correctly assess low thrust orbital transfers or complex interplanetary space missions. In such cases the dynamical model must be enhanced to account for effects of smaller magnitudes. These enhancements include higher order terms of the gravitational field, accurate atmosphere models depending on the season and the geographic position, extended aerodynamic databases, third body attraction, etc., and also other effects such as the solar wind pressure or the magnetic induced forces.
Complex dynamical models yield more representative results at the expense of larger computation times. For trajectory optimization purposes the simulation models have to make compromises between accuracy and speed. A usual simplification consists in assuming that the translation and the rotation motions are independent. With this assumption the trajectory problem (also called the guidance problem) and the attitude problem (also called the control problem) can be addressed separately. This uncoupling of the guidance and the control problem is valid either when the torque commands have a negligible effect on the CoG motion or when the control time scale is much shorter than the guidance time scale. Most space vehicles fall into one of these two categories. The main exceptions are atmospheric maneuvering vehicles such as cruise or anti-ballistic missiles and airborne launchers.
Such vehicles have to perform large reorientation maneuvers requiring significant durations. These maneuvers have a significant influence on the CoG motion and they must be accounted for in a realistic trajectory optimization.
Another way to speed up the simulation consists in splitting the trajectory into successive sequences using different dynamical models and propagation methods. Ascent or reentry trajectories are thus split into propelled, coast and gliding legs, while interplanetary missions are modeled by patched conics. Each leg is computed with its specific coordinate system and numerical integrator. Usual state vector choices are Cartesian coordinates for ascent trajectories, orbital parameters for orbital transfers, and spherical coordinates for reentry trajectories. The reference frame is usually Galilean for most applications, except for the reentry assessment. In this case an Earth rotating frame is more suited to formulate the landing constraints. The propagation of the dynamics equations may be achieved either by semi-analytical or numerical integrators. Semi-analytical integrators require significant mathematical efforts prior to the implementation and they are specialized to a given modelling. For example averaging techniques are particularly useful for long time-scale problems, such as low thrust transfers or space debris evolution, in order to provide high speed simulations with good differentiability features. On the other hand numerical integrators can be applied very directly to any dynamical problem. An adequate compromise must then be found between a time-step as large as possible and an error tolerance matching the desired accuracy.
The dynamics models consider first nominal features of the vehicle and of its environment in order to build a reference mission profile. Since the real flight conditions are never perfectly known, the analysis must also be extended with model uncertainties, first to assess sufficient margins when designing a future vehicle, then to ensure the required success probability and the flight safety when preparing an operational flight. The desired robustness may be obtained by additional propellant reserves for a launcher, or by reachable landing areas for a reentry glider.
The optimization task consists in finding the vehicle commands and optionally some design parameters in order to fulfill the mission constraints at the best cost. In most cases, the optimization deals only with the path followed by one vehicle. In more complicated cases, the optimization must account for moving targets or other vehicles that may be jettisoned parts of the main vehicle. Examples of such missions are debris removal, orbital rendezvous, interplanetary travel or reusable launchers with recovery of the stages after their separation.
A typical reusable launcher mission is pictured in Fig. 6. The goal is to reach the targeted orbit with the upper stage carrying the payload, while the lower and upper stages must be recovered safely for the next launches. This problem necessitates a multi-branch modelling and a coordinated optimization method.
For preliminary design studies, the vehicle configuration is not defined. The optimization has to deal simultaneously with the vehicle design and the trajectory control. Depending on the problem formulation the optimization variables may thus be functions, reals or integers.
In almost all cases an optimal control problem must be solved to find the vehicle command law along the trajectory. The command aims at changing the magnitude and the direction of the forces applied, namely the thrust and the aerodynamic force. The attitude time scale is often much shorter than the trajectory time scale so that the attitude control can be considered as nearly perfect, i.e., instantaneous or with a short response time. The rotation dynamics is thus not simulated and the command is directly the vehicle attitude. If the rotation and the translation motions are coupled, the 6 degrees of freedom must be simulated. The commands are then the nozzle or flap deflections, depending on the vehicle control devices. The choice of the attitude angles depends on the mission dynamics. For a propelled launcher, the motion is controlled by the thrust force which is nearly aligned with the roll axis. This axis is orientated by inertial pitch and yaw angles. For a gliding reentry vehicle, the motion is controlled by the drag and lift forces. The angle of attack modulates the force magnitude while the bank angle only acts on the lift direction. For orbital maneuvering vehicles, the dynamics is generally formulated using the orbital parameters evolution, e.g., by the Gauss equations, so that attitude angles in the local orbital frame are best suited.
If the trajectory comprises multiple branches or successive flight sequences with dynamics changes and interior point constraints, discontinuities may occur in the optimal command law. This occurs typically at stage separations and engine ignitions or shutdowns. The commutation dates between the flight sequences themselves may be part of the optimized variables, as well as other finite dimension parameters, leading to a hybrid optimal control problem. A further complexity occurs with path constraints relating either to the vehicle design (e.g., dynamic pressure or thermal flux levels), or to the operations (e.g., tracking, safety, lighting). These constraints may be active along some parts of the trajectory, and the junction between constrained and unconstrained arcs may raise theoretical and numerical issues.
The numerical procedures for optimal control problems are usually classified into direct and indirect methods. Direct methods discretize the optimal control problem in order to rewrite it as a large scale nonlinear optimization problem. The process is straightforward and it can be applied in a systematic manner to any optimal control problem. New variables or constraints may be added easily. But achieving an accurate solution requires a careful discretization and the convergence may be difficult due to the large number of variables. On the other hand indirect methods are based on the Pontryagin Maximum Principle, which gives a set of necessary conditions for a local minimum. The problem is reduced to a nonlinear system that is generally solved by a shooting method using a Newton-like algorithm. The convergence is fast and accurate, but the method requires both an adequate starting point and a high integration accuracy. The sensitivity to the initial guess can be lowered by multiple shooting, which breaks the trajectory into several legs linked by interface constraints, at the expense of a larger nonlinear system. The indirect method also requires prior theoretical work for problems with singular solutions or with state constraints. Handling these constraints by penalty methods can avoid numerical issues, but yields less optimal solutions.
In some cases the mission analysis may address discrete variables. Examples of such problems are the removal of space debris by a cleaner vehicle or interplanetary travels with multiple fly-bys. For a debris cleaning mission the successive targets are moving independently of the vehicle, and the propellant required to go from one target to another depends on the rendezvous dates. The optimization aims at selecting the targets and the visiting order in order to minimize the required propellant. The path between two given targets is obtained by solving a time-dependent optimal control problem. The overall problem is thus a combinatorial variant of the well-known Traveling Salesman Problem, with successive embedded optimal control problems.
For an interplanetary mission successive fly-bys around planets are necessary to increase progressively the velocity in the solar system and reach far destinations. Additional propelled maneuvers are necessary either at the fly-by or in the deep space in order to achieve the desired path. An impulsive velocity modelling is considered for these maneuvers in a first stage. If a low thrust engine is used, the maneuver assessment must be refined by solving an embedded optimal control problem. The optimization problem mixes discrete variables (selected planets, number of revolutions between two successive fly-bys, number of propelled maneuvers) and continuous variables (fly-bys dates, maneuver dates, magnitudes and orientations).
In preliminary design studies, the optimization problem addresses simultaneously the vehicle configuration and its command along the trajectory. The goal is usually to find the minimal gross weight vehicle able to achieve the specified mission. The configuration parameters are either continuous or discrete variables. For a propelled vehicle the main design parameters are the number of stages, the number of engines, the thrust level, the propellant type and the propellant masses. For a reentry vehicle the design is driven by the aerodynamic shape, the surface and by the auxiliary braking sub-systems if any. The gross mass minimization is essential for the feasibility of interplanetary missions. An example is given by a Mars lander composed of a heat shield, one or several parachutes, braking engines, airbags and legs. The sub-system designs drive the acceptable load levels and thus the state constraints applied to the entry trajectory. The successive sequences of the descent trajectory are depicted in Fig. 7. Large uncertainties regarding the Mars environment must also be accounted for in order to define a robust vehicle configuration.
Multidisciplinary optimization deals with such problems involving both the vehicle design and the mission scenario. The overall problem is too complex to be addressed directly, and a specific optimization procedure must be devised for each new case. A bi-level approach consists in separating the design and the trajectory optimization. The design problem is generally non differentiable or may present many local minima. It can be addressed in some cases by mixed optimization methods like branch and bound, or more generally by meta-heuristics like simulated annealing, genetic algorithms, particle swarm, etc. None is intrinsically better than another and a specific analysis is needed to formulate the optimization problem in a way suited to the selected method. These algorithms are based partly on a random exploration of the variable space. In order to be successful the exploration strategy has to be customized to the problem specificities. Thousands or millions of trials may be necessary to yield a candidate configuration, based on very simplified performance assessments (e.g., analytical solutions, impulsive velocities, response surface models, etc.). The trajectory problem is then solved for this candidate solution in order to assess the real performance, and if necessary the configuration optimization is iterated with a corrected performance model. Meta-heuristics may also be combined with multi-objective optimization approaches since several criteria have to be balanced at the design stage of a new space vehicle. A typical goal is to build a family of launchers using a common architecture of propelled stages with variants depending on the targeted orbit and payload. In this way the development and manufacturing costs are minimized while the launcher configuration and the launch cost can be customized for each flight.
Geometric optimal control
Geometric optimal control (see, e.g., [1, 74, 85]) combines classical optimal control and geometric methods in system theory, with the goal of achieving optimal synthesis results. More precisely, by combining the knowledge inferred from the Pontryagin Maximum Principle (PMP) with geometric considerations, such as the use of Lie brackets and Lie algebras, of differential geometry on manifolds, and of symplectic geometry and Hamiltonian systems, the aim is to describe in a precise way the structure of optimal trajectories. We refer the reader to [74, 85] for a list of references on geometric tools used in geometric optimal control. The foundations of geometric control can be traced back to Chow's theorem and to [24, 25], where Brunovsky found that it was possible to derive regular synthesis results by using geometric considerations for a large class of control systems. Apart from the main goal of achieving a complete optimal synthesis, geometric control also aims at deriving higher-order optimality conditions in order to better characterize the set of candidate optimal trajectories.
In this section, we formulate the optimal control problem on differentiable manifolds and recall some tools and results from geometric optimal control. More precisely, the Lie derivative is used to define the order of the state constraints, the Lie and Poisson brackets are used to analyze the singular extremals and to derive higher order optimality conditions, and the optimality conditions (order one, two and higher) are used to analyze the chattering extremals (see Section 3.4 for the chattering phenomenon). These results will be applied in Section 7 on a coupled attitude and trajectory optimization problem.
3.1 Optimal control problem
Let M be a smooth manifold of dimension n, let N be a smooth manifold of dimension m, let M 0 and M 1 be two subsets of M, and let U be a subset of N. We consider the general nonlinear optimal control problem (\(\mathcal {P}_{0}\)) of minimizing the cost functional

$$C(t_f, u) = \int_0^{t_f} f^0(x(t), u(t))\, dt + g(t_f, x(t_f))$$
over all possible trajectories, solutions of the control system

$$\dot{x}(t) = f(x(t), u(t)), \qquad\qquad (1)$$
and satisfying the terminal conditions

$$x(0) \in M_0, \qquad x(t_f) \in M_1, \qquad\qquad (2)$$
where the mappings f:M×N→TM, \(f^{0}: M \times N \rightarrow \mathbb {R}\), and \(g: \mathbb {R} \times M \rightarrow \mathbb {R}\) are smooth, and where the controls are bounded measurable functions defined on intervals \([0,t_{f}(u)] \subset \mathbb {R}^{+}\), taking their values in U. The final time t f may be fixed or not. We denote by \(\mathcal {U}\) the set of admissible controls whose corresponding trajectories steer the system from an initial point of M 0 to a final point in M 1.
For each x(0)∈M 0 and \(u \in \mathcal {U}\), we can integrate the system (1) from t=0 to t=t f , and assess the cost C(t f ,u) corresponding to the trajectory x(t)=x(t;x 0,u) for t∈[0,t f ]. Solving the problem (\(\mathcal {P}_{0}\)) consists in finding a pair (x(·),u(·)) minimizing the cost. For convenience, we define the end-point mapping to describe the final point of the trajectory solution of the control system (1).
Definition 1
The end-point mapping \(E: M \times \mathbb {R}^{+} \times \mathcal {U} \rightarrow M\) of the system is defined by

$$E(x_0, t, u) = x(x_0, t, u),$$
where t↦x(x 0,t,u) is the trajectory solution of the control system (1) associated to u such that x(x 0,0,u)=x 0.
Assuming moreover that \(\mathcal {U}\) is endowed with the standard L ∞ topology, the end-point mapping is C 1 on \(\mathcal {U}\), and in terms of the end-point mapping the optimal control problem under consideration can be written as the infinite-dimensional minimization problem

$$\min \left\{ C(t_f, u) \;\mid\; x_0 \in M_0,\ E(x_0, t_f, u) \in M_1,\ u \in \mathcal{U} \right\}.$$
This formulation of the problem will be used when we introduce the Lagrange multipliers rule in Section 3.3.1, in the simpler case where M 0={x 0}, M 1={x 1}, and \(U=\mathbb {R}^{m}\).
If the optimal control problem has a solution, we say that the corresponding control and trajectory are minimizing or optimal. We refer to [32, 84] for existence results in optimal control.
Next, we introduce briefly the concept of Lie derivative, and of Lie and Poisson brackets (used in Section 3.3.3 for higher order optimality conditions). These concepts will be applied in Section 7 to analyze the pull-up maneuver problem.
3.2 Lie derivative, Lie bracket, and Poisson bracket
Let Ω be an open and connected subset of M, and denote by C ∞(Ω) the space of all infinitely continuously differentiable functions on Ω. Let X be a C ∞ vector field on Ω. X can be seen as a first-order differential operator from C ∞(Ω) into C ∞(Ω), taking at every point q∈Ω the directional derivative of a function φ∈C ∞(Ω) in the direction of the vector X(q), i.e.,

$$X : C^{\infty}(\Omega) \rightarrow C^{\infty}(\Omega), \qquad \varphi \mapsto X.\varphi,$$

defined by

$$(X.\varphi)(q) = d\varphi(q)\, X(q).$$

We call (X.φ)(q) the Lie derivative of the function φ along the vector field X, and one generally denotes the operator by L X , i.e.,

$$L_X \varphi = X.\varphi.$$
In general, the order of the state constraints in optimal control problems is defined through Lie derivatives as we will show on the example in Section 7.1.5.
Definition 2
The Lie bracket of two vector fields X and Y defined on a domain Ω is the operator defined by the commutator

$$[X,Y] = X \circ Y - Y \circ X, \qquad \text{i.e.,} \qquad [X,Y]\varphi = X.(Y.\varphi) - Y.(X.\varphi) \quad \text{for } \varphi \in C^{\infty}(\Omega).$$
The Lie bracket actually defines a first-order differential operator, and is thus itself a vector field. In particular, if \(X:\Omega \rightarrow \mathbb {R}^{n}\), z↦X(z), and \(Y:\Omega \rightarrow \mathbb {R}^{n}\), z↦Y(z), are the coordinate expressions of these vector fields, then

$$[X,Y](z) = DY(z)\, X(z) - DX(z)\, Y(z),$$

where DX and DY denote the matrices of the partial derivatives of the vector fields X and Y.
Let X, Y, and Z be three C ∞ vector fields defined on Ω, and let α, β be smooth functions on Ω. The Lie bracket has the following properties:
-
[·,·] is a bilinear operator;
-
[X,Y]=−[Y,X];
-
[X+Y,Z]=[X,Z]+[Y,Z];
-
[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0 (Jacobi identity);
-
[α X,β Y]=α β[X,Y]+α(L X β)Y−β(L Y α)X.
These properties show that the vector fields (as differential operators) form a Lie algebra. A Lie algebra over \(\mathbb {R}\) is a real vector space \(\mathcal {G}\) together with a bilinear operator \([\cdot,\cdot ]: \mathcal {G} \times \mathcal {G} \to \mathcal {G}\) such that for all \(X,Y,Z \in \mathcal {G}\) we have [X,Y]=−[Y,X] and the Jacobi identity [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0.
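These definitions are easy to experiment with symbolically. The following sketch (an illustration of ours, not taken from the references; it assumes the sympy package and arbitrarily chosen vector fields) implements the coordinate formula [X,Y](z)=DY(z)X(z)−DX(z)Y(z) and checks the anticommutativity and Jacobi properties on a toy example:

```python
import sympy as sp

# Symbols for a 3-dimensional state (purely illustrative choice).
x1, x2, x3 = sp.symbols('x1 x2 x3')
z = sp.Matrix([x1, x2, x3])

def lie_bracket(X, Y):
    """Coordinate formula [X, Y](z) = DY(z) X(z) - DX(z) Y(z)."""
    return Y.jacobian(z) * X - X.jacobian(z) * Y

# Three illustrative smooth vector fields.
X = sp.Matrix([x2, x3, 0])
Y = sp.Matrix([0, 0, 1])
Z = sp.Matrix([x1 * x3, 0, x2])

print(lie_bracket(X, Y).T)  # -> Matrix([[0, -1, 0]])

# Anticommutativity and the Jacobi identity.
assert sp.simplify(lie_bracket(X, Y) + lie_bracket(Y, X)) == sp.zeros(3, 1)
jacobi = (lie_bracket(X, lie_bracket(Y, Z))
          + lie_bracket(Y, lie_bracket(Z, X))
          + lie_bracket(Z, lie_bracket(X, Y)))
assert sp.simplify(jacobi) == sp.zeros(3, 1)
```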
Going back to the problem (\(\mathcal {P}_{0}\)), we assume that f(x,u)=f 0(x)+u f 1(x), \(f^{0}(x,u)=1\), and g(t,x)=0, and we define the C 1 function

$$h(x,p) = \langle p, Z(x) \rangle,$$
where p is the adjoint vector and Z is a vector field. The function h is the Hamiltonian lift of the vector field Z. Accordingly, and with a slight abuse of notation, we denote by h(t)=h(x(t),p(t)) the value at time t of h along a given extremal. The derivative of this function is

$$\dot{h}(t) = \langle p(t), [f_0 + u(t) f_1, Z](x(t)) \rangle. \qquad\qquad (3)$$
Let us also recall the concept of the Poisson bracket, which is related to Hamiltonian functions. In the canonical coordinates z=(x,p), given two C 1 functions α 1(x,p) and α 2(x,p), the Poisson bracket takes the form

$$\{\alpha_1, \alpha_2\}(z) = \sum_{i=1}^{n} \left( \frac{\partial \alpha_1}{\partial p_i} \frac{\partial \alpha_2}{\partial x_i} - \frac{\partial \alpha_1}{\partial x_i} \frac{\partial \alpha_2}{\partial p_i} \right). \qquad\qquad (4)$$
According to (3), taking Z=f 1, we have

$$\dot{h}_1(t) = \{h_0, h_1\}(t) = \langle p(t), [f_0, f_1](x(t)) \rangle,$$
where h 0(t)=〈p(t),f 0(x(t))〉 and h 1(t)=〈p(t),f 1(x(t))〉.
For convenience, we adopt the usual notations

$$\mathrm{ad}\, f_0 . f_1 = [f_0, f_1], \qquad \mathrm{ad}^{k+1} f_0 . f_1 = [f_0, \mathrm{ad}^{k} f_0 . f_1],$$

and

$$\mathrm{ad}\, h_0 . h_1 = \{h_0, h_1\}.$$
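The correspondence between the two brackets, namely {h 0,h 1}(x,p)=〈p,[f 0,f 1](x)〉 for Hamiltonian lifts, can also be checked symbolically. The following sketch is our illustration (the fields f 0, f 1 are arbitrary choices, and sympy is assumed):

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x1:4'))   # state (dimension 3, illustrative)
p = sp.Matrix(sp.symbols('p1:4'))   # adjoint

def poisson(a, b):
    """Canonical Poisson bracket {a, b} = <da/dp, db/dx> - <da/dx, db/dp>."""
    da_dx, da_dp = sp.Matrix([a]).jacobian(x), sp.Matrix([a]).jacobian(p)
    db_dx, db_dp = sp.Matrix([b]).jacobian(x), sp.Matrix([b]).jacobian(p)
    return (da_dp * db_dx.T - da_dx * db_dp.T)[0, 0]

def lie_bracket(X, Y):
    return Y.jacobian(x) * X - X.jacobian(x) * Y

# Illustrative drift f0 and control field f1.
f0 = sp.Matrix([x[1], x[2], x[0]**2])
f1 = sp.Matrix([0, x[0], 1])

h0 = (p.T * f0)[0, 0]   # Hamiltonian lift of f0
h1 = (p.T * f1)[0, 0]   # Hamiltonian lift of f1

# {h0, h1}(x, p) = <p, [f0, f1](x)>: the Poisson bracket of the lifts
# is the lift of the Lie bracket.
assert sp.expand(poisson(h0, h1) - (p.T * lie_bracket(f0, f1))[0, 0]) == 0
```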
We will see in Section 3.3 (and also in Section 7) that the Lie brackets and the Poisson brackets are very useful for deriving higher order optimality conditions in simpler form and for calculating the singular controls.
3.3 Optimality conditions
This section gives an overview of necessary optimality conditions.
For the first-order optimality conditions, we recall the Lagrange multipliers method for the optimal control problem without control constraints. Such constraints can be accounted for in the Lagrangian with additional Lagrange multipliers [23]. This method leads to weaker results than the Pontryagin Maximum Principle, which considers needle-like variations accounting directly for the control constraints.
In some cases, the first-order conditions do not provide adequate information on the optimal control, and higher order optimality conditions are needed. Therefore we recall the second and higher order necessary optimality conditions that must be met by any trajectory associated to an optimal control u. These conditions are especially useful for analyzing singular solutions, because the first-order optimality conditions do not provide any information in such cases.
3.3.1 First-order optimality conditions
Lagrange multipliers rule
We consider the simplified problem (\(\mathcal {P}_{0}\)) with \(M=\mathbb {R}^{n}\), M 0={x 0}, M 1={x 1}, and \(U=\mathbb {R}^{m}\). According to the well known Lagrange multipliers rule (and assuming the C 1 regularity of the problem), if the trajectory x(·) associated with a control \(u \in \mathcal {U}\) is optimal, then there exists a nontrivial pair \((\psi,\psi ^{0}) \in \mathbb {R}^{n} \times \mathbb {R} \) such that

$$\psi\, dE(x_0, t_f, u) + \psi^0\, dC(t_f, u) = 0,$$
where dE(·) and dC(·) denote the Fréchet derivatives with respect to u of E(·) and C(·), respectively. Defining the Lagrangian by

$$L_{t_f}(u, \psi, \psi^0) = \psi\, E(x_0, t_f, u) + \psi^0\, C(t_f, u),$$
this first-order necessary condition can be written in the form

$$\frac{\partial L_{t_f}}{\partial u}(u, \psi, \psi^0) = 0.$$
If we define as usual the intrinsic second-order derivative \(Q_{t_{f}}\) of the Lagrangian as the Hessian \(\frac {\partial ^{2} L_{t_{f}}}{\partial ^{2} u}(u,\psi,\psi ^{0})\) restricted to the subspace \(\ker \frac {\partial L_{t_{f}}}{\partial u}\), a second-order necessary condition for optimality is the nonpositivity of \(Q_{t_{f}}\) (with ψ 0≤0), and a second-order sufficient condition for local optimality is the negative definiteness of \(Q_{t_{f}}\).
These results are weaker than those obtained with the PMP. The Lagrange multiplier (ψ,ψ 0) is in fact related to the adjoint vector introduced in the PMP. More precisely, the Lagrange multiplier is unique up to a multiplicative scalar if and only if the trajectory x(·) admits a unique extremal lift up to a multiplicative scalar, and the adjoint vector (p(·),p 0) can be constructed such that (ψ,ψ 0)=(p(t f ),p 0) up to some multiplicative scalar. This relation can be observed from the proof of the PMP. The Lagrange multiplier ψ 0=p 0 is associated with the instantaneous cost. The case p 0=0 is said to be abnormal, which means that there are no neighboring trajectories having the same terminal point (see, e.g., [2, 85]).
Pontryagin maximum principle
The Pontryagin Maximum Principle (PMP, see [70]) for the problem (\(\mathcal {P}_{0}\)) with control constraints and without state constraints is recalled in the following statement.
Theorem 1
If the trajectory x(·), associated to the optimal control u on [0,t f ], is optimal, then it is the projection of an extremal (x(·),p(·),p 0,u(·)) where p 0≤0, and p(·) is an absolutely continuous mapping on [0,t f ] (called adjoint vector), with \(p(t) \in T^{\ast }_{x(t)} M\) and (p(·),p 0)≠0, such that almost everywhere on [0,t f ],

$$\dot{x}(t) = \frac{\partial H}{\partial p}(x(t),p(t),p^0,u(t)), \qquad \dot{p}(t) = -\frac{\partial H}{\partial x}(x(t),p(t),p^0,u(t)), \qquad\qquad (5)$$

where the Hamiltonian is defined by

$$H(x,p,p^0,u) = \langle p, f(x,u) \rangle + p^0 f^0(x,u),$$

and there holds the maximization condition

$$H(x(t),p(t),p^0,u(t)) = \max_{v \in U} H(x(t),p(t),p^0,v) \qquad\qquad (6)$$

almost everywhere on [0,t f ].
If moreover the final time t f is not fixed, then, at the final time,

$$\max_{v \in U} H(x(t_f),p(t_f),p^0,v) = -p^0 \frac{\partial g}{\partial t}(t_f, x(t_f)). \qquad\qquad (7)$$
If M 0 and M 1 (or just one of them) are submanifolds of M locally around x(0)∈M 0 and x(t f )∈M 1, then the adjoint vector satisfies the transversality conditions at both endpoints (or just one of them)

$$p(0) \perp T_{x(0)} M_0, \qquad p(t_f) - p^0 \frac{\partial g}{\partial x}(t_f, x(t_f)) \perp T_{x(t_f)} M_1, \qquad\qquad (8)$$
where T x M 0 (resp., T x M 1) denotes the tangent space to M 0 (resp., M 1) at the point x.
The quadruple (x(·),p(·),p 0,u(·)) is called the extremal lift of x(·). An extremal is said to be normal (resp., abnormal) if p 0<0 (resp., p 0=0). According to the convention chosen in the PMP, we consider p 0≤0. If we adopt the opposite convention p 0≥0, then we have to replace the maximization condition (6) with a minimization condition. When there are no control constraints, abnormal extremals project exactly onto singular trajectories.
The proof of the PMP is based on needle-like variations and uses a conic implicit function theorem (see, e.g., [1, 52, 78]). Since these needle-like variations are of order one, the optimality conditions given by the PMP are first-order necessary conditions. For singular controls, higher order control variations are needed to obtain optimality conditions. A singular control is defined precisely as follows.
Definition 3
Assume that M 0={x 0}. A control u defined on [0,t f ] is said to be singular if and only if the Fréchet differential \(\frac {\partial E}{\partial u}(x_{0},t_{f},u)\) is not of full rank. The trajectory x(·) associated with a singular control u is called a singular trajectory.
In practice the condition \(\frac {\partial ^{2} H}{\partial u^{2}}(x(\cdot),p(\cdot),p^{0},u(\cdot))=0\) (the Hessian of the Hamiltonian is degenerate) is used to characterize singular controls. An extremal (x(·),p(·),p 0,u(·)) is said to be totally singular if this condition is satisfied. This is especially the case when the system is affine in the control (see Section 3.3.3).
The PMP claims that if a trajectory is optimal, then it should be found among projections of extremals joining the initial set to the final target. Nevertheless the projection of a given extremal is not necessarily optimal. This motivates the next section on second-order optimality conditions.
3.3.2 Second-order optimality conditions
The literature on first and/or second-order sufficient conditions is rich for continuous controls (see, e.g., [42, 61, 62, 65, 89]), which is less the case for discontinuous controls (see, e.g., [68]). We recall hereafter the Legendre type conditions expressed with Poisson brackets, to show that geometric optimal control allows a simple expression of the second-order necessary and sufficient conditions (see Theorem 2).
Legendre type conditions
For the optimal control problem (\(\mathcal {P}_{0}\)), we have the following second-order optimality conditions (see, e.g., [1, 10, 16]).
-
If a trajectory x(·), associated to a control u, is optimal on [0,t f ] in L ∞ topology, then the Legendre condition holds along every extremal lift (x(·),p(·),p 0,u(·)) of x(·), that is
$${}\frac{\partial^{2} H}{\partial u^{2}} (x(\cdot),p(\cdot),p^{0},u(\cdot)).(v,v) \leq 0, \quad \forall v\in \mathbb{R}^{m}. $$ -
If the strong Legendre condition holds along the extremal (x(·),p(·),p 0,u(·)), that is, there exists ε 0>0 such that
$${}\frac{\partial^{2} H}{\partial u^{2}} (x(\cdot),p(\cdot),p^{0},u(\cdot)).(v,v) \leq - \epsilon_{0} \|v\|^{2}, \quad \forall v\in \mathbb{R}^{m}, $$ then there exists ε 1>0 such that x(·) is locally optimal in L ∞ topology on [0,ε 1]. If the extremal is moreover normal, i.e., p 0≠0, then x(·) is locally optimal in C 0 topology on [0,ε 1].
The C 0 local optimality and L ∞ local optimality are respectively called strong local optimality and weak local optimality. The Legendre condition is a necessary optimality condition, whereas the strong Legendre condition is a sufficient optimality condition. We say that we are in the regular case whenever the strong Legendre condition holds along the extremal. Under the strong Legendre condition, a standard implicit function argument allows expressing, at least locally, the control u as a function of x and p.
In the totally singular case, the strong Legendre condition is not satisfied and we have the following generalized condition [1, 51].
Theorem 2
(Goh and Generalized Legendre condition)
-
If a trajectory x(·), associated to a piecewise smooth control u and having a totally singular extremal lift (x(·),p(·),p 0,u(·)), is optimal on [0,t f ] in L ∞ topology, then the Goh condition holds along the extremal, that is
$$\left\{\frac{\partial H}{\partial u_{i}},\frac{\partial H}{\partial u_{j}} \right\} = 0, $$where {·,·} denotes the Poisson bracket on T ∗ M. Moreover, the generalized Legendre condition holds along every extremal lift (x(·),p(·),p 0,u(·)) of x(·), that is
$${}\left\{ \!\left\{ H, \frac{\partial H}{\partial u}.v \right\},\frac{\partial H}{\partial u}.v \right\} + \left\{ \frac{\partial^{2} H}{\partial u^{2}}.(\dot{u},v),\frac{\partial H}{\partial u}.v \right\} \leq 0, \ \forall v\in \mathbb{R}^{m}. $$ -
If the Goh condition holds along the extremal lift (x(·),p(·),p 0,u(·)), if the strong generalized Legendre condition holds along the extremal (x(·),p(·),p 0,u(·)), that is, there exists ε 0>0 such that
$$\begin{aligned} &\left\{ \left\{ H, \frac{\partial H}{\partial u}.v \right\},\frac{\partial H}{\partial u}.v \right\}\\ &\qquad+ \left\{ \frac{\partial^{2} H}{\partial u^{2}}.(\dot{u},v),\frac{\partial H}{\partial u}.v \right\} \leq - \epsilon_{0} \|v\|^{2}, \ \forall v\in \mathbb{R}^{m}, \end{aligned} $$and if moreover the mapping \(\frac {\partial f}{\partial u}(x_{0},u(0)):\mathbb {R}^{m} \mapsto T_{x_{0}}M\) is one-to-one, then there exists ε 1>0 such that x(·) is locally optimal in L ∞ topology on [0,ε 1].
As we have seen, the Legendre (or generalized Legendre) condition is a necessary condition, while the strong (or strong generalized Legendre) condition is a sufficient condition. However, these sufficient conditions are not easy to verify in practice. This leads to the next section where we explain how to use the so-called conjugate point along the extremal to determine the time when the extremal is no longer optimal.
Conjugate points
We consider here the simplified problem (\(\mathcal {P}_{0}\)) with \(M=\mathbb {R}^{n}\), M 0={x 0}, M 1={x 1}, and \(U=\mathbb {R}^{m}\). Under the strict Legendre assumption, namely that the Hessian \( \frac {\partial ^{2} H}{\partial u^{2}} (x,p,p^{0},u)\) is negative definite, the quadratic form \(Q_{t_{f}}\) is negative definite if t f >0 is small enough.
Definition 4
The first conjugate time is defined by the infimum of times t>0 such that Q t has a nontrivial kernel. We denote the first conjugate time along x(·) by t c .
The extremals are locally optimal (in L ∞ topology) as long as we do not encounter any conjugate point. Define the exponential mapping

$$\exp_{x_0}(t, p_0) = x(t, x_0, p_0),$$
where the solution of (5) starting from (x 0,p 0) at t=0 is denoted by (x(t,x 0,p 0),p(t,x 0,p 0)). Then, we have the following result (see, e.g., [1, 15] for the proof and more precise results):
The time t c is a conjugate time along x(·) if and only if the mapping \(\exp _{x_{0}}(t_{c},\cdot)\) is not an immersion at p 0, i.e., the differential of the mapping \(\exp _{x_{0}}(t_{c},\cdot)\) is not injective.
Essentially this result states that computing a first conjugate time t c reduces to finding the zero of some determinant along the extremal. In the smooth case (when the control can be expressed as a smooth function of x and p), the survey article [15] also provides algorithms to compute first conjugate times. In the case of bang-bang controls, a conjugate time theory has been developed (see [79] for a brief survey of the approaches), but the computation of conjugate times remains difficult in practice (see, e.g., [60]).
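As an illustration of this determinant test, consider the classical academic problem of minimizing \(\int_0^T (u^2 - x^2)\, dt\) for the scalar system \(\dot{x}=u\) with fixed initial point: the normal extremals satisfy \(\dot{x}=p\), \(\dot{p}=-x\), and the first conjugate time is known to be t c =π. The sketch below (ours, assuming numpy/scipy; in dimension one the determinant reduces to the Jacobi field itself) integrates the extremal together with its variational equation and locates the first sign change:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Academic test case: minimize ∫ (u^2 - x^2) dt for dx/dt = u, x(0) fixed.
# Normal extremals have maximized Hamiltonian H = (p^2 + x^2)/2, hence
# dx/dt = p, dp/dt = -x, and the first conjugate time is t_c = pi.
# The Jacobi field J(t) = ∂x(t; x0, p0)/∂p0 obeys the variational system;
# t_c is the first positive zero of the 'determinant' (the scalar J here).

def rhs(t, y):
    x, p, jx, jp = y      # extremal (x, p) + variational part (jx, jp)
    return [p, -x, jp, -jx]

x0, p0 = 1.0, 0.5
sol = solve_ivp(rhs, (0.0, 5.0), [x0, p0, 0.0, 1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

ts = np.linspace(1e-3, 5.0, 5001)
J = sol.sol(ts)[2]
k = int(np.argmax(J[:-1] * J[1:] < 0.0))   # first sign change of J
print("first conjugate time ~", ts[k])      # ~ 3.1416
```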
When the singular controls are of order one (see Definition 5), the second-order optimality conditions are sufficient for the analysis. For higher order singular controls, higher order optimality conditions are needed; they are recalled in the next section.
3.3.3 Order of singular controls and higher order optimality conditions
In this section we recall briefly the order of singular controls and the higher order optimality conditions. They will be used in Section 7.1 to analyze the example, which exhibits a singular control of order two. It is worth noting that when the singular control is of order 1 (also called minimal order in [16, 34]), these higher order optimality conditions are not required.
To illustrate how to use these conditions, we consider the minimal time control problem on M

$$\begin{aligned} &\min\ t_f, \\ &\dot{x}(t) = f(x(t)) + u_1(t)\, g_1(x(t)) + u_2(t)\, g_2(x(t)), \\ &u = (u_1, u_2), \quad u_1^2 + u_2^2 \leq 1, \\ &x(0) = x_0, \quad x(t_f) \in M_1, \end{aligned} \qquad\qquad (10)$$
where f, g 1 and g 2 are smooth vector fields on M. We assume that M 1 is accessible from x 0, and that there exists a constant \(B_{t_{f}}\) such that for every admissible control u, the corresponding trajectory x u (t) satisfies \(\phantom {\dot {i}\!}\| x_{u}(t) \| \leq B_{t_{f}}\) for all t∈[0,t f ]. Then, according to classical results (see, e.g., [32, 84]), there exists at least one optimal solution (x(·),u(·)), defined on [0,t f ].
Let h 0(x,p)=〈p, f(x)〉, h 1(x,p)=〈p, g 1(x)〉, and h 2(x,p)=〈p,g 2(x)〉. According to the PMP (see Section 3.3.1), the Hamiltonian of the problem (10) is defined by

$$H(x,p,p^0,u) = h_0(x,p) + u_1 h_1(x,p) + u_2 h_2(x,p) + p^0,$$
where p(·) is the adjoint variable, and p 0≤0 is a real number such that (p(·),p 0)≠0. Defining Φ(t)=(h 1(t),h 2(t)), the maximization condition of the PMP yields

$$u(t) = \frac{\Phi(t)}{\|\Phi(t)\|} = \left( \frac{h_1(t)}{\sqrt{h_1(t)^2 + h_2(t)^2}},\ \frac{h_2(t)}{\sqrt{h_1(t)^2 + h_2(t)^2}} \right)$$
almost everywhere on [0,t f ], whenever Φ(t)≠(0,0).
We call Φ (as well as its components) the switching function. We say that an arc (restriction of an extremal to a subinterval I) is regular if ∥Φ(t)∥≠0 along I. Otherwise, the arc is said to be singular.
Following [45], we give below a precise definition of the order of a singular control. The use of Poisson (and Lie) brackets simplifies the formulation of the higher order optimality conditions. This is one of the reasons why geometric optimal control theory is a valuable tool in practice.
Definition 5
The singular control u=(u 1,u 2) defined on a subinterval I⊂[0,t f ] is said to be of order q if
-
1.
the time derivatives of h i , i=1,2, up to order 2q−1 do not depend on u, and
$$\frac{d^{k}}{dt^{k}}(h_{i}) = 0,\quad k=0,1,\cdots,2q-1, $$ -
2.
the 2q-th time derivative of h i , i=1,2, depends on u linearly and
$$\frac{\partial}{\partial u_{i}} \frac{d^{2q}}{dt^{2q}}(h_{i}) \neq 0, \quad \det \left(\frac{\partial}{\partial u} \frac{d^{2q}}{dt^{2q}}\Phi \right) \neq 0,\quad i=1,2, $$along I.
The control u is said to be of intrinsic order q if, in addition, the above relations hold identically, i.e., the vector fields satisfy the corresponding bracket conditions on the whole space, and not only along the extremal.
The condition of a nonzero determinant guarantees that the optimal control can be computed from the 2q-th time derivative of the switching function. Note that this definition requires that the two components of the control have the same order.
We next recall the Goh and generalized Legendre-Clebsch conditions (see [51, 56, 58]). It is worth noting that in [58], the following higher-order necessary conditions hold even when the components of the control u have different orders.
Lemma 1
(higher-order necessary conditions) We assume that a singular control u=(u 1,u 2) defined on I is of order q, that u is optimal and not saturating, i.e., ∥u∥<1. Then the Goh condition

$$\langle p(t), [g_i, \mathrm{ad}^{k} f . g_j](x(t)) \rangle = 0, \qquad i \neq j, \quad k = 0, \cdots, 2q-2,$$
must be satisfied along I. Moreover, the matrix having as (i,j)-th component

$$\langle p(t), [g_i, \mathrm{ad}^{2q-1} f . g_j](x(t)) \rangle$$
is symmetric and negative definite along I (generalized Legendre-Clebsch Condition).
In practice, it happens that the singular controls are often of intrinsic order 2, and that [g 1,g 2]=0, [g 1,[f,g 2]]=0, and [g 2,[f,g 1]]=0. The conditions given in the above definition then yield [g 1,[f,g 1]]=0, [g 2,[f,g 2]]=0, [g 1,ad2 f.g 1]=0, [g 2,ad2 f.g 2]=0, 〈p,[g 1,ad3 f.g 1](x)〉≠0, 〈p,[g 2,ad3 f.g 2](x)〉≠0, and

$$\langle p, [g_1, \mathrm{ad}^3 f . g_2](x) \rangle = \langle p, [g_2, \mathrm{ad}^3 f . g_1](x) \rangle.$$
We have thus the following higher-order necessary conditions, that will be used on the example in Section 7.1.
Corollary 1
We assume that the optimal trajectory x(·) contains a singular arc, defined on the subinterval I of [0,t f ], associated with a non saturating control u=(u 1,u 2) of intrinsic order 2. If the vector fields satisfy [g 1,g 2]=0 and [g i ,[f,g j ]]=0 for i,j=1,2, then the Goh condition

$$\langle p(t), [g_i, \mathrm{ad}^2 f . g_j](x(t)) \rangle = 0, \qquad i \neq j,$$
and the generalized Legendre-Clebsch condition (in short, GLCC)

$$\langle p(t), [g_i, \mathrm{ad}^3 f . g_i](x(t)) \rangle \leq 0, \qquad i = 1, 2,$$
must be satisfied along I. Moreover, we say that the strengthened GLCC is satisfied if we have a strict inequality above, that is, 〈p(t),[g i ,ad3 f.g i ](x(t))〉<0.
In the next section, we recall the chattering phenomenon that may occur in optimal control problems, namely when there exist singular controls of higher order. This phenomenon is actually not rare, as illustrated in [90] by many examples (in astronautics, robotics, economics, etc.). These examples mostly concern single-input systems. The existence of the chattering phenomenon for bi-input control-affine systems is also proved in [93].
3.4 Chattering phenomenon
We call chattering phenomenon (or Fuller's phenomenon) the situation when the optimal control switches an infinite number of times over a compact time interval. It is well known that, if the optimal trajectory involves a singular arc of higher order, then no direct connection with a bang arc is possible and the bang arcs asymptotically joining the singular arc must chatter. In Fig. 8(b), the control is singular over (t 1,t 2), and the control u(t) with t∈(t 1−ε 1,t 1)∪(t 2,t 2+ε 2), ε 1>0, ε 2>0, is chattering. The corresponding optimal trajectory is called a chattering trajectory. In Fig. 8(a), the chattering trajectory “oscillates” around the singular part and finally “gets off” the singular trajectory with an infinite number of switchings.
The chattering phenomenon is illustrated by Fuller's problem (see [44, 63]), which is the optimal control problem

$$\begin{aligned} &\min \int_0^{t_f} x_1(t)^2 \, dt, \\ &\dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = u(t), \quad |u(t)| \leq 1, \\ &x_1(0) = x_1^0, \quad x_2(0) = x_2^0, \quad x_1(t_f) = x_2(t_f) = 0. \end{aligned}$$
We define \( \xi = \left (\frac {\sqrt {33}-1}{24} \right)^{1/2}\) as the unique positive root of the equation ξ 4+ξ 2/12−1/18=0, and we define the sets

$$\Gamma_+ = \left\{ (x_1,x_2) \in \mathbb{R}^2 \;:\; x_1 = \xi x_2^2,\ x_2 < 0 \right\}, \qquad \Gamma_- = \left\{ (x_1,x_2) \in \mathbb{R}^2 \;:\; x_1 = -\xi x_2^2,\ x_2 > 0 \right\}.$$
The optimal synthesis of the Fuller’s problem yields the following feedback control (see [44, 74, 88]).
The control switches from u=1 to u=−1 at points on Γ − and from u=−1 to u=1 at points on Γ +. The corresponding trajectories crossing the switching curves Γ ± transversally are chattering arcs with an infinite number of switchings that accumulate with a geometric progression at the final time t f >0.
The optimal synthesis for the Fuller’s problem is drawn on Fig. 9. The optimal control of the Fuller’s problem, denoted u ∗, contains a countable set of switchings of the form
where \(\left \{ t_{k} \right \}_{k \in \mathbb {N}}\) is a set of switching times that satisfies (t i+2−t i+1)<(t i+1−t i ), \(i \in \mathbb {N}\) and converges to t f <+∞. This means that the chattering arcs contain an infinite number of switchings within a finite time interval t f >0.
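The geometric accumulation of switchings can be observed numerically. The following sketch is our own illustration (assuming numpy); off the switching curves the synthesis above reduces to u=−sign(x 1+ξ x 2|x 2|), which we simulate with a crude fixed-step Euler scheme that, of course, cannot resolve the switchings all the way to the accumulation point:

```python
import numpy as np

# Fuller's problem: min ∫ x1^2 dt with dx1/dt = x2, dx2/dt = u, |u| <= 1.
# Off the switching curves the synthesis above reduces to
# u = -sign(x1 + xi * x2 * |x2|), with xi = sqrt((sqrt(33) - 1) / 24).
xi = np.sqrt((np.sqrt(33.0) - 1.0) / 24.0)

def u_star(x1, x2):
    return -np.sign(x1 + xi * x2 * abs(x2))

# Closed-loop integration with a fixed-step Euler scheme.
x1, x2, t, dt = 1.0, 0.0, 0.0, 1e-5
switches, u_prev = [], u_star(x1, x2)
while t < 3.0:
    u = u_star(x1, x2)
    if u != u_prev and u != 0.0:
        switches.append(t)
        u_prev = u
    x1, x2, t = x1 + dt * x2, x2 + dt * u, t + dt

# Durations between the first switchings decrease roughly geometrically,
# announcing the accumulation point (the scheme cannot resolve the tail).
gaps = np.diff(switches[:9])
print(gaps[1:] / gaps[:-1])
```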
Numerical methods in optimal control
The numerical procedures for optimal control problems are usually classified into direct and indirect methods. Direct methods discretize the optimal control problem in order to rewrite it as a large scale nonlinear optimization problem. The process is straightforward and it can be applied in a systematic manner to any optimal control problem. New variables or constraints may be added easily. But achieving an accurate solution requires a careful discretization and the convergence may be difficult due to the large number of variables. On the other hand indirect methods are based on the Pontryagin Maximum Principle, which gives a set of necessary conditions for a local minimum. The problem is reduced to a nonlinear system that is generally solved by a shooting method using a Newton-like algorithm. The convergence is fast and accurate, but the method requires both an adequate starting point and a high integration accuracy. The sensitivity to the initial guess can be lowered by multiple shooting, which breaks the trajectory into several legs linked by interface constraints, at the expense of a larger nonlinear system. The indirect method also requires prior theoretical work for problems with singular solutions or with state constraints. Handling these constraints by penalty methods can avoid numerical issues, but yields less optimal solutions. The principles of both indirect and direct methods are recalled hereafter.
4.1 Indirect methods
In indirect approaches, the Pontryagin Maximum Principle (first-order necessary condition for optimality) is applied to the optimal control problem in order to express the control as a function of the state and the adjoint. This reduces the problem to a nonlinear system of n equations with n unknowns, generally solved by Newton-like methods. Indirect methods are also called shooting methods. The principles of the simple shooting method and of the multiple shooting method are recalled below. The problem considered in this section is (\(\mathcal {P}_{0}\)).
Simple shooting method Using (6), the optimal control can be expressed as a function of the state and the adjoint variable (x(t),p(t)). Denoting z(t)=(x(t),p(t)), the extremal system (5) can be written under the form \(\dot {z}(t)=F(z(t))\). The initial and final conditions (2), the transversality conditions (8), and the transversality condition on the Hamiltonian (7) can be written under the form R(z(0),z(t f ),t f )=0. We thus get a two-point boundary value problem

$$\dot{z}(t) = F(z(t)), \qquad R(z(0), z(t_f), t_f) = 0.$$
Let z(t,z 0) be the solution of the Cauchy problem

$$\dot{z}(t) = F(z(t)), \qquad z(0) = z_0.$$
Then this two-point boundary value problem amounts to finding a zero of the shooting function, i.e., solving

$$G(z_0, t_f) := R(z_0, z(t_f, z_0), t_f) = 0.$$
This problem can be solved by Newton-like methods or other iterative methods.
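As a minimal illustration of the simple shooting method (our sketch, not taken from the references; it assumes numpy/scipy), consider the energy-minimal double integrator: minimize \(\frac{1}{2}\int_0^1 u^2\, dt\) for \(\dot{x}_1=x_2\), \(\dot{x}_2=u\), from x(0)=(0,0) to x(1)=(1,0), with fixed final time. The PMP gives u=p 2, and the shooting unknown is the initial adjoint p(0), whose exact value is (12,6):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Toy problem: minimize (1/2)∫ u^2 dt for dx1 = x2 dt, dx2 = u dt on [0, 1],
# from x(0) = (0, 0) to x(1) = (1, 0). The PMP Hamiltonian
# H = p1 x2 + p2 u - u^2/2 is maximized at u = p2, with dp1 = 0, dp2 = -p1 dt.

def extremal(t, z):
    x1, x2, p1, p2 = z
    u = p2                          # control from the maximization condition
    return [x2, u, 0.0, -p1]

def shoot(p0):
    """Shooting function: mismatch at t = 1 as a function of p(0)."""
    zf = solve_ivp(extremal, (0.0, 1.0), [0.0, 0.0, p0[0], p0[1]],
                   rtol=1e-10).y[:, -1]
    return [zf[0] - 1.0, zf[1]]

p0_opt = fsolve(shoot, [1.0, 1.0])
print(p0_opt)   # -> [12., 6.], i.e. u(t) = p2(t) = 6 - 12 t (exact solution)
```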
Multiple shooting method The drawback of the single shooting method is the sensitivity of the Cauchy problem to the initial condition z 0. The multiple shooting method aims at a better numerical stability by dividing the interval [0,t f ] into N subintervals [t i ,t i+1] and considering as unknowns the values z i =(x(t i ),p(t i )) at the beginning of each subinterval. The application of the PMP to the optimal control problem yields a multi-point boundary value problem, which consists in finding Z=(p(0),t f ,z i ), i=1,⋯,N−1, such that the differential equation

$$\dot{z}(t) = F(z(t)), \qquad t \in [t_i, t_{i+1}],$$

and the constraints

$$z(t_{i+1}; z_i) = z_{i+1}, \quad i = 0, \cdots, N-2 \ \text{ (matching conditions)}, \qquad R(z(0), z(t_f), t_f) = 0 \ \text{ (boundary conditions)}$$
are satisfied. The nodes of the multiple shooting method may involve the switching times (at which the switching function changes sign), and the junction times (entry, contact, or exit times) with boundary arcs. In this case an a priori knowledge of the solution structure is required.
The multiple shooting method improves the numerical stability at the expense of a larger nonlinear system. An adequate node number must be chosen making a compromise between the system dimension and the convergence domain.
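A multiple shooting version of the same toy problem used above, with N=2 legs and the state-adjoint value at the interior node as additional unknown, illustrates the structure of the enlarged system (matching conditions plus boundary conditions); again a sketch of ours assuming numpy/scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Multiple shooting for the same toy problem, with two legs [0, 0.5], [0.5, 1].
# Unknowns: p(0) (2 values) and the full state-adjoint z at the node t = 0.5
# (4 values); equations: 4 matching conditions + 2 terminal conditions.

def extremal(t, z):
    x1, x2, p1, p2 = z
    return [x2, p2, 0.0, -p1]

def propagate(z0, t0, t1):
    return solve_ivp(extremal, (t0, t1), list(z0), rtol=1e-10).y[:, -1]

def residuals(Z):
    p0, z_node = Z[:2], Z[2:]
    z_half = propagate([0.0, 0.0, p0[0], p0[1]], 0.0, 0.5)
    z_end = propagate(z_node, 0.5, 1.0)
    matching = z_half - z_node              # continuity at the node
    boundary = [z_end[0] - 1.0, z_end[1]]   # x(1) = (1, 0)
    return np.concatenate([matching, boundary])

Z_opt = fsolve(residuals, np.ones(6))
print(Z_opt[:2])   # -> [12., 6.], as with simple shooting
```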
4.2 Direct methods
Direct methods are so called because they address directly the optimal control problem, without using the first-order necessary conditions given by the PMP. By discretizing both the state and the control, the problem is reduced to a nonlinear optimization problem in finite dimension, also called a NonLinear Programming problem (NLP). The discretization may be carried out in many ways, depending on the problem features. As an example we may consider a subdivision 0=t 0<t 1<⋯<t N =t f of the interval [0,t f ]. We discretize the controls such that they are piecewise constant on this subdivision with values in U. Meanwhile the differential equations may be discretized by an explicit Euler method: setting h i =t i+1−t i , we get x i+1=x i +h i f(t i ,x i ,u i ). The cost may be discretized by a quadrature procedure. These discretizations reduce the optimal control problem \(\mathcal {P}_{0}\) to a nonlinear programming problem of the form

$$\min \left\{ C(x_0, \cdots, x_N, u_0, \cdots, u_{N-1}) \;\mid\; x_{i+1} = x_i + h_i f(t_i, x_i, u_i),\ u_i \in U,\ i = 0, \cdots, N-1,\ x_0 \in M_0,\ x_N \in M_1 \right\}.$$
From a more general point of view, a finite dimensional representation of the control and of the state has to be chosen such that the differential equation, the cost, and all constraints can be expressed in a discrete way.
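The following sketch (ours, assuming numpy/scipy) applies this Euler-based direct transcription to the same double integrator toy problem used in Section 4.1, with the defects of the discretized dynamics and the boundary conditions imposed as equality constraints of an NLP solved by SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

# Euler-based direct transcription of the toy problem of Section 4.1:
# decision vector Z = (x1_0..x1_N, x2_0..x2_N, u_0..u_{N-1}).
N = 50
h = 1.0 / N

def unpack(Z):
    return Z[:N + 1], Z[N + 1:2 * N + 2], Z[2 * N + 2:]

def cost(Z):
    return 0.5 * h * np.sum(unpack(Z)[2] ** 2)

def defects(Z):
    x1, x2, u = unpack(Z)
    return np.concatenate([x1[1:] - x1[:-1] - h * x2[:-1],   # x1_{i+1}
                           x2[1:] - x2[:-1] - h * u])        # x2_{i+1}

def boundary(Z):
    x1, x2, _ = unpack(Z)
    return np.array([x1[0], x2[0], x1[-1] - 1.0, x2[-1]])

res = minimize(cost, np.zeros(3 * N + 2), method='SLSQP',
               constraints=[{'type': 'eq', 'fun': defects},
                            {'type': 'eq', 'fun': boundary}])
u = unpack(res.x)[2]
print(u[0], u[-1])   # close to u(0) = 6 and u(1) = -6, up to discretization error
```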
The numerical resolution of a nonlinear programming problem is standard, by gradient methods, penalization, quasi-Newton, dual methods, etc. (see, e.g., [9, 50, 57, 81]). There exist many efficient optimization packages, such as IPOPT (see [86]), MUSCOD-II (see [39]), or the Minpack project (see [66]), which provides many optimization routines.
Alternative variants of direct methods are the collocation methods, the spectral or pseudo-spectral methods, the probabilistic approaches, etc.
Another approach to optimal control problems, which can be considered as a direct method, consists in solving the Hamilton-Jacobi equation satisfied (in the viscosity sense) by the value function, which is of the form
The value function is the optimal cost for the optimal control problem starting from a given point (x,t) (see [77] for some numerical methods).
4.3 Comparison between methods
The main advantages and disadvantages of the direct and indirect methods are summarized in Table 1 (see also, e.g., [84, 85]).
In practice no approach is intrinsically better than the other. The numerical method should be chosen depending on the problem features and on the known properties of the solution structure. These properties are derived by a theoretical analysis using the geometric optimal control theory. When a high accuracy is desired, as is generally the case for aerospace problems, indirect methods should be considered although they require more theoretical insight and may raise numerical difficulties.
Whatever the method chosen, there are many ways to adapt it to a specific problem (see [85]). Even with direct methods, a major issue lies in the initialization procedure. In recent years, numerical continuation has become a powerful tool to overcome this difficulty. The next section recalls some basic mathematical concepts of continuation approaches, with a focus on the numerical implementation of these methods.
Continuation methods
5.1 Existence results and discrete continuation
The basic idea of continuation (also called homotopy) methods is to solve a difficult problem step by step, starting from a simpler problem, by parameter deformation. The theory and practice of continuation methods are well established (see, e.g., [3, 71, 87]). Combined with the shooting problem derived from the PMP, a continuation method consists in deforming the problem into a simpler one (that can be easily solved) and then solving a series of shooting problems step by step to come back to the original problem.
One difficulty of homotopy methods lies in the choice of a sufficiently regular deformation that allows the convergence of the homotopy method. The starting problem should be easy to solve, and the path between this starting problem and the original problem should be easy to model. Another difficulty is to numerically follow the path between the starting problem and the original problem. This path is parametrized by a parameter denoted λ. When the homotopic parameter λ is a real number and when the path is linear3 in λ, the homotopy method is rather called a continuation method.
The choice of the homotopic parameter may require considerable physical insight into the problem. This parameter may be defined either artificially according to some intuition, or naturally by choosing physical parameters of the system, or by a combination of both.
Suppose that we have to solve a system of N nonlinear equations in the N-dimensional variable Z
where \(F: \mathbb {R}^{N} \mapsto \mathbb {R}^{N}\) is a smooth map. We define a deformation
such that
where \(G_{0} : \mathbb {R}^{N} \mapsto \mathbb {R}^{N}\) is a smooth map having known zero points.
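Two classical choices of such a deformation can be sketched in code (the map F and the starting point are toy examples of ours): the convex, linear-in-λ homotopy, and the so-called Newton homotopy, both satisfying G(Z 0,0)=0 by construction.

```python
import numpy as np

def F(Z):
    # toy smooth map whose zero is sought at the end of the path
    return np.array([Z[0] ** 3 - Z[1] - 1.0, Z[1] ** 3 + Z[0]])

Z0 = np.array([1.0, 0.0])          # known zero point of the starting map G_0

def G_convex(Z, lam):
    # convex homotopy: G(., 0) = Z - Z0 (trivial map), G(., 1) = F
    return (1.0 - lam) * (Z - Z0) + lam * F(Z)

def G_newton(Z, lam):
    # Newton homotopy: G(Z0, 0) = 0 by construction, G(., 1) = F
    return F(Z) - (1.0 - lam) * F(Z0)
```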
A zero path is a curve c(s)∈G −1(0) where s represents the arc length. We would like to trace a zero path starting from a point Z 0 such that G(Z 0,0)=0 and ending at a point Z f such that G(Z f ,1)=0.
The first question to address is the existence of zero paths, since the feasibility of the continuation method relies on this assumption. The second question is how to numerically track such zero paths when they exist.
Existence of zero paths
The local existence of the zero paths follows from the implicit function theorem, under some regularity assumptions, as in the following statement (which is the content of [46, Theorem 2.1]).
Theorem 3
(Existence of the zero paths) Let Ω be an open bounded subset of \(\mathbb {R}^{N}\) and let the mapping \(G:\Omega \times [0,1] \mapsto \mathbb {R}^{N}\) be continuously differentiable such that:
-
Given any (Z,λ)∈{(Z,λ)∈Ω×[0,1] ∣ G(Z,λ)=0}, the Jacobian matrix
$$G^{\prime} = \left(\frac{\partial G}{\partial Z_{1}},\cdots,\frac{\partial G}{\partial Z_{N}},\frac{\partial G}{\partial \lambda} \right), $$is of maximum rank N;
-
Given any Z∈{Z∈Ω ∣ G(Z,0)=0}∪{Z∈Ω ∣ G(Z,1)=0}, the Jacobian matrix
$$G^{\prime} = \left(\frac{\partial G}{\partial Z_{1}},\cdots,\frac{\partial G}{\partial Z_{N}}\right) $$is of maximum rank N;
Then {(Z,λ)∈Ω×[0,1] ∣ G(Z,λ)=0} consists of paths, each of which is either a loop in \(\bar {\Omega } \times [0,1]\) or starts from a point of \(\partial \bar {\Omega } \times [0,1]\) and ends at another point of \(\partial \bar {\Omega } \times [0,1]\), where \(\partial \bar {\Omega }\) denotes the boundary of \(\bar {\Omega }\).
This means that the zero path is diffeomorphic to a circle or the real line. The possible paths and impossible paths are shown in Fig. 10 (borrowed from [46, 48]).
Now we provide basic arguments showing the feasibility of the continuation method (see Section 4.1 of [85] for more details).
Consider the simplified optimal control problem \(\mathcal {P}_{0}\) with \(M=\mathbb {R}^{n}\), M 0={x 0}, M 1={x 1} and \(U=\mathbb {R}^{m}\). We assume that the real parameter λ∈[0,1] increases monotonically from 0 to 1. Under these assumptions, we have to solve a family of optimal control problems parameterized by λ, i.e.,
where E is the end-point mapping defined in Definition 1.
We assume moreover that, along the continuation procedure:
-
there are no minimizing abnormal extremals;
-
there are no minimizing singular controls: by Definition 3, a control u is nonsingular when the mapping \({dE}_{x_{0},t_{f},\lambda }(u)\) is surjective;
-
there are no conjugate points (by Definition 4 the quadratic form \(Q_{t_{f}}\) is not degenerate). The absence of conjugate point can be numerically tested (see, e.g., [15]).
We will see that these assumptions are essential for the local feasibility of the continuation methods.
According to the Lagrange multipliers rule, especially the first-order condition (4), if u λ is optimal, then there exists \((\psi _{\lambda },\psi ^{0}_{\lambda }) \in \mathbb {R}^{n} \times \mathbb {R} \backslash \left \{(0,0)\right \}\) such that \(\psi _{\lambda } {dE}_{x_{0},t_{f},\lambda }(u_{\lambda }) + \psi ^{0}_{\lambda } {dC}_{t_{f},\lambda }(u_{\lambda })=0\). Since we have assumed that there are no minimizing abnormal extremals and since \((\psi _{\lambda },\psi ^{0}_{\lambda })\) is defined up to a multiplicative scalar, we can set \(\psi ^{0}_{\lambda }=-1\). Defining the Lagrangian by
we seek (u λ ,ψ λ ) such that
Let \((u_{\bar {\lambda }},\psi _{\bar {\lambda }},\bar {\lambda })\) be a zero of G and assume that G is of class C 1. Then according to Theorem 3, we require the Jacobian of G with respect to (u,ψ) at the point \((u_{\bar {\lambda }},\psi _{\bar {\lambda }},\bar {\lambda })\) to be invertible. More precisely, the Jacobian of G is
where \(Q_{t_{f},\lambda }\) is the Hessian \(\frac {\partial ^{2} L_{t_{f},\lambda }}{\partial ^{2} u}(u,\psi,\psi ^{0})\) restricted to \(\ker \frac {\partial L_{t_{f},\lambda }}{\partial u}\), and \({dE}_{x_{0},t_{f},\lambda }(u)^{\ast }\) is the transpose of \({dE}_{x_{0},t_{f},\lambda }(u)\).
We observe that the matrix (12) is invertible if and only if the linear mapping \({dE}_{x_{0},t_{f},\lambda }(u)\) is surjective and the quadratic form \(Q_{t_{f},\lambda }\) is non-degenerate. These properties correspond to the absence of minimizing singular controls and of conjugate points, which are exactly the assumptions made above for the local feasibility of the continuation procedure.
The implicit function argument above is made on the control. In practice, the continuation procedure is rather carried out on the exponential mapping (see (13)) and consists in tracking a path of initial adjoint vectors p 0,λ . Therefore we parameterize the exponential mapping by λ, and thus problem (11) is to solve
On the one hand, according to the PMP, the optimal control u satisfies the extremal Eqs. 6, and thus u λ =u λ (t,p 0,λ ) is a function of the initial adjoint p 0,λ . On the other hand, the Lagrange multipliers are related to the adjoint vector by p(t f )=ψ, and thus ψ λ =ψ λ (p 0,λ ) is also a function of p 0,λ . Therefore, the shooting function defined by S(p 0,λ)=G(u(p 0),ψ(p 0),λ) has an invertible Jacobian if the matrix (12) is invertible. We conclude then that the assumptions (1)-(3) mentioned above are sufficient to ensure the local feasibility.
Despite this local feasibility, the zero path may not be globally defined for every λ∈[0,1]. The path could cross some singularity or diverge to infinity before reaching λ=1.
The first possibility can be eliminated by assuming (2) and (3) over all the domain Ω and for every λ∈[0,1]. The second possibility is eliminated if the paths remain bounded or equivalently by the properness of the exponential mapping (i.e., the initial adjoint vectors p 0,λ that are computed along the continuation procedure remain bounded uniformly with respect to λ). According to [21, 82], if the exponential mapping is not proper, then there exists an abnormal minimizer. By contraposition, if one assumes the absence of minimizing abnormal extremals, then the required boundedness follows.
For the simplified problem (11), where the controls are unconstrained and the singular trajectories are the projections of abnormal extremals, if there are no minimizing singular trajectory nor conjugate points over Ω, then the continuation procedure (13) is globally feasible on [0,1].
In more general homotopy strategies, the homotopic parameter λ does not necessarily increase monotonically from 0 to 1. There may be turning points (see, e.g., [87]), and it is then preferable to parametrize the zero path by the arc length s. Let c(s)=(Z(s),λ(s)) be the zero path such that G(c(s))=0. A turning point of order one is a point where \(\lambda ^{\prime } (\bar {s})= 0\) and \(\lambda ^{\prime \prime } (\bar {s}) \neq 0\). In [27], the authors indicate that if \(\lambda =\lambda (\bar {s})\) is a turning point of order one, then the corresponding final time t f is a conjugate time, and the point \(E_{x_{0},t_{f},\lambda }(u(x_{0},p_{0},t_{f},\lambda))\) is the associated conjugate point 4. By assuming the absence of conjugate points over Ω for all λ∈[0,1], the possibility of turning points is discarded.
Unfortunately, assuming the absence of singularities is in general too strong, and weaker assumptions do not suffice to conclude on the feasibility of the continuation method. In the literature, there are essentially two approaches to tackle this difficulty. The first one is of local type: one detects the singularities or bifurcations along the zero path (see, e.g., [3]). The second one is of global type and concerns the so-called globally convergent probability-one homotopy method. We refer the readers to [35, 87] for more details.
Numerically tracking the zero paths There exist many numerical algorithms to track a zero path. The simplest one is the so-called discrete continuation or embedding algorithm. The continuation parameter λ is discretized by \(\phantom {\dot {i}\!}0= \lambda ^{0} < \lambda ^{1} < \cdots < \lambda ^{n_{l}} = 1\) and the sequence of problems G(Z,λ i)=0, i=1,⋯,n l , is solved, ending up with a zero point of F(Z). If the increment △λ=λ i+1−λ i is small enough, then the solution Z i associated with λ i, such that G(Z i,λ i)=0, is generally close to the solution of G(Z,λ i+1)=0. The discrete continuation algorithm is detailed in Algorithm 1.
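A minimal sketch of this discrete continuation, reusing the convex homotopy of the previous sketch (fsolve plays the role of the Newton method; the adaptive step reduction of Algorithm 1 is omitted for brevity):

```python
import numpy as np
from scipy.optimize import fsolve

def F(Z):
    return np.array([Z[0] ** 3 - Z[1] - 1.0, Z[1] ** 3 + Z[0]])

Z0 = np.array([1.0, 0.0])

def G(Z, lam):                          # convex homotopy of the earlier sketch
    return (1.0 - lam) * (Z - Z0) + lam * F(Z)

Z = Z0.copy()
for lam in np.linspace(0.0, 1.0, 11):   # 0 = λ^0 < λ^1 < ... < λ^{n_l} = 1
    Z, _, ok, _ = fsolve(G, Z, args=(lam,), full_output=True)
    if ok != 1:
        raise RuntimeError(f"continuation failed at lambda = {lam:.2f}")
print(Z, F(Z))                          # a zero of F, reached at λ = 1
```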
In some cases the parameter λ may be ill suited to parameterize the zero path, causing slow progress or even failure of the discrete continuation. Two enhancements (predictor-corrector methods and piecewise-linear methods) have been proposed in the literature.
5.2 Predictor-corrector (PC) continuation
A natural parameter for the zero curve (Z,λ) is the arc-length denoted s.
The zero curve parameterized by the arc length s is denoted
Differentiating G(Z(s),λ(s))=0 with respect to s, we obtain
where \(J_{G} = \frac {\partial G(Z(s),\lambda (s))}{\partial (Z,\lambda)}\) is the Jacobian, and \(t(J_{G})=\frac {dc(s)}{ds}\) is the tangent vector of the zero path c(s).
If we know a point \((\bar {Z}(s_{i}),\bar {\lambda }(s_{i}))\) of this curve, and assuming that c(s) is not a critical point (i.e., t(J G ) is not null), we can predict a new zero point \((\tilde {Z}(s_{i+1}),\tilde {\lambda }(s_{i+1}))\) by
where h s is the step size on s. As shown in Fig. 11, if the step size h s is sufficiently small, the prediction step yields a point \((\tilde {Z}(s_{i+1}),\tilde {\lambda }(s_{i+1}))\) close to a point \((\bar {Z}(s_{i+1}),\bar {\lambda }(s_{i+1}))\) on the curve, such that \(G(c(s_{i+1}))=G(\bar {Z}(s_{i+1}),\bar {\lambda }(s_{i+1}))=0\). The correction step consists in coming back on the curve using a Newton-like method.
The PC continuation is described by Algorithm 2.
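For reference, here is a minimal sketch of one predictor-corrector step, assuming a smooth homotopy \(G: \mathbb {R}^{N+1} \mapsto \mathbb {R}^{N}\) with Jacobian JG (function and variable names are ours): the tangent t(J G ) is obtained as the kernel direction of the Jacobian, the predictor is an Euler step of length h s along it, and the corrector solves G=0 in the hyperplane orthogonal to the tangent.

```python
import numpy as np
from scipy.optimize import fsolve

def tangent(J):
    # t(J_G): unit vector spanning the kernel of the N x (N+1) Jacobian,
    # read off from the last right-singular vector of the SVD
    _, _, Vt = np.linalg.svd(J)
    t = Vt[-1]
    return t / np.linalg.norm(t)

def pc_step(G, JG, c, hs, t_prev):
    t = tangent(JG(c))
    if np.dot(t, t_prev) < 0:    # keep a consistent orientation along the path
        t = -t
    c_pred = c + hs * t          # predictor: Euler step along the tangent
    # corrector: Newton-type return to the curve, orthogonally to the tangent
    c_new = fsolve(lambda y: np.append(G(y), np.dot(y - c_pred, t)), c_pred)
    return c_new, t

# usage sketch: iterate c, t = pc_step(G, JG, c, hs, t) while c[-1] < 1
```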
When the optimal control problem is regular (in the sense that the strengthened Legendre condition holds) and the homotopic parameter is a scalar, one can use the so-called differential continuation or differential path-following method. This method consists in integrating accurately the tangent vector t(J G ) satisfying (14) (see details in [26]). The correction step is replaced by the mere integration of an ordinary differential equation, with the help of automatic differentiation (see, e.g., [5, 29]).
5.3 Piecewise-linear (PL) continuation
The main advantage of the PL method is that it only needs the zero paths to be continuous (smoothness assumption of G is not necessary). For a detailed description of the PL methods, we refer the readers to [3, 4, 47].
Here we present the basic idea of the PL methods, which are also referred to as simplicial methods. A PL continuation consists in following exactly a piecewise-linear curve \(c_{\mathcal {T}}(s)\) that approximates the zero path c(s)∈G −1(0).
The approximation curve \(c_{\mathcal {T}}(s)\) is a polygonal path relative to an underlying triangulation \(\mathcal {T}\) of \(\mathbb {R}^{N+1}\), which is a subdivision of \(\mathbb {R}^{N+1}\) into (N+1)-simplices. 5
Then, for any map \(G: \mathbb {R}^{N+1} \mapsto \mathbb {R}^{N}\), the piecewise linear approximation \(G_{\mathcal {T}}\) to G relative to the triangulation \(\mathcal {T}\) of \(\mathbb {R}^{N+1}\) is the unique map defined by:
-
\(G_{\mathcal {T}}(v) = G(v)\) for all vertices of \(\mathcal {T}\);
-
for any N+1-simplex \(\sigma = [v_{1},v_{2},\cdots,v_{N+2}] \in \mathcal {T}\), the restriction \(G_{\mathcal {T}} |_{\sigma }\) of \(G_{\mathcal {T}}\) to σ is an affine map.
Consequently a point \(Z = \sum _{i=1}^{N+2} \alpha _{i} v_{i}\) (here α i are barycentric coordinates that satisfy \(\sum _{i=1}^{N+2} \alpha _{i} = 1\) and α i ≥0) in an (N+1)-simplex satisfies
The set \(G_{\mathcal {T}}^{-1} (0)\) contains a polygonal path \(c_{\mathcal {T}} : \mathbb {R} \mapsto \mathbb {R}^{N +1}\) which approximates the path c. Tracking such a path is carried out via PL-steps similar to the steps used in linear programming methods such as the Simplex Method. Figure 12 portrays the basic idea of a PL method.
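As a tiny illustration of this definition, the value of \(G_{\mathcal {T}}\) at a point of a simplex is just the barycentric combination of the vertex values (a sketch with names of ours):

```python
import numpy as np

def G_T_on_simplex(G, vertices, alphas):
    # vertices: the N+2 vertices v_i of an (N+1)-simplex; alphas: the
    # barycentric coordinates of Z, so that G_T(Z) = sum_i alpha_i G(v_i)
    alphas = np.asarray(alphas)
    assert np.isclose(alphas.sum(), 1.0) and np.all(alphas >= 0)
    return sum(a * G(v) for a, v in zip(alphas, vertices))
```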
In aerospace applications, where the continuation procedure is in general differentiable, the PL methods are usually not as efficient as the PC methods or the differential continuation presented above. Nevertheless, when singularities exist in the zero path, the PL method is probably the most efficient one.
In the next section, we recall briefly some successful applications of geometric optimal control techniques and numerical continuation to trajectory problems, including orbital transfer problems and atmospheric reentry problems. Note that attitude problems, namely the controllability problems, have also been treated by geometric control theory, see e.g. [19, 38]. We refer the readers to the book [20] and its references for more details.
Applications to trajectory problems
In this section, we recall applications of the geometric optimal control theory and numerical continuation methods in trajectory problems, namely in orbital transfer problems and in atmospheric reentry problems. The aim of this section is to show that the continuation and the geometric optimal control theory have been applied successfully to trajectory problems. Indeed, they are powerful and efficient tools to combine with traditional optimal control theory.
6.1 Orbital transfer problems
The orbital transfer problem consists in steering the spacecraft from an initial orbit to a final one while minimizing either the duration or the fuel consumption. This problem has been widely studied in the literature, and the solution algorithms involve direct methods as well as indirect methods. The reader is referred to [6] and [31] for a list of methods and references. The dynamics is modelled by the controlled Kepler equations
where r(·) is the position of the spacecraft, μ is the gravitational constant of the planet, T(·)≤T max is the bounded thrust, and m(·) is the mass with β a constant depending on the specific impulse of the engine.
Controllability properties ensuring the feasibility of the problem have been studied in [15, 20], based on the analysis of Lie algebra generated by the vector fields of the system.
The minimum time low thrust transfer is addressed for example in [28]. It is observed that the domain of convergence of the Newton-type method in the shooting problem becomes smaller when the maximal thrust decreases. Therefore, a natural continuation process consists in starting with larger values of the maximal thrust and then decreasing step by step the maximal thrust. In [28], the authors started with the maximal thrust T max =60 N and achieved the continuation up to T max =0.14 N.
The minimum fuel consumption orbit transfer problem has also been widely studied. With the cost functional \(\int _{0}^{t_{f}} \| T(t)\| dt\), the problem is more difficult than minimizing the time, since the optimal control is discontinuous. In [48, 49], the authors propose a continuation on the cost functional, starting from the minimum-energy problem. The cost functional is defined by \(\int _{0}^{t_{f}} \left ((1-\lambda)\| T(t)\|^{2} + \lambda \| T(t)\| \right) dt\), where λ∈[0,1] is the homotopy parameter. When λ=0 (minimum energy), the control derived from the PMP is continuous and the shooting problem is thus easier to solve. The authors prove the existence of a zero path from λ=0 to λ=1. This continuation approach was later applied in [33] to study the L 1-minimization of trajectory optimization problems. Such minimum-fuel low-thrust transfers are very important for deep space exploration, since all the propellant must be carried on board by the satellite. Similar continuation procedures have also been applied to the well-known Goddard problem and to its three-dimensional variants ([12, 14]). The possible singular arcs (along which the norm of the thrust is neither zero nor maximal) have thus been analyzed and numerically computed.
Continuation procedures are also valuable for high-thrust orbital transfer problems. In [31], the authors proposed a continuation procedure starting from a flat Earth model with constant gravity. The variable gravity and the Earth curvature are introduced step by step through homotopy parameters. The theoretical analysis of the flat Earth model shows that the solution structure consists in a single boost followed by a coast arc. This helps solving the starting problem in a direct way, before coming back by continuation to the real round Earth problem. The round Earth solution exhibits a different structure (boost – coast – boost) which appears progressively along the continuation process.
6.2 Atmospheric reentry problem
An atmospheric reentry typically begins at an altitude of 120 km and ends with a landing phase. The final landing phase until the touchdown is generally studied separately, and it is highly dependent on the mission specifications (ground or sea landing, manned or unmanned flight, etc). The so-called atmospheric leg aims at reducing the vehicle energy before the final landing phase. No fuel is used and the braking has to be fully achieved by aerodynamics, while satisfying the state constraints, in particular on the thermal flux. The final conditions specify a target position at a low altitude, typically less than 20 km.
The vehicle is considered as a glider submitted to the gravity and the aerodynamic forces, the control u being the bank angle and possibly the angle of attack. The optimal control problem consists thus in steering the space shuttle from given entry conditions to targeted final conditions while minimizing the total heat and satisfying state constraints on the thermal flux, the normal acceleration, and the dynamic pressure. We refer the readers to [22, 83] for a formulation of this problem. The control u acts on the lift force orientation, changing simultaneously the descent rate and the heading angle.
A practical guidance strategy consists in following the constraint boundaries successively: thermal flux, normal acceleration, and dynamic pressure. This strategy disregards the cost functional and it is therefore not optimal. Applying the Pontryagin Maximum Principle with state constraints is not promising due to the narrow domain of convergence of the shooting method: finding a correct guess for the initial adjoint vector proves quite difficult. Therefore direct methods are generally preferred for this atmospheric reentry problem (see, e.g., [6, 7, 69]).
Here we recall two alternative approaches to address the problem by indirect methods.
The first approach is to analyze the control system using geometric control theory. For example, in [17, 18, 83], a careful analysis of the control system provides a precise description of the optimal trajectory. The resulting problem reduction makes it tractable by the shooting method. More precisely, the control system is rewritten as a single-input control-affine system in dimension three under some reasonable assumptions. Local optimal syntheses are derived from extending existing results in geometric optimal control theory. Based on perturbation arguments, this local nature of the optimal trajectory is then used to provide an approximation of the optimal trajectory for the full problem in dimension six, and finally simple approximation methods are developed to solve the problem.
A second approach is to use the continuation method. For example, in [55], the problem is solved by a shooting method, and a continuation is applied on the maximal value of the thermal flux. It is shown in [11, 54] that under some appropriate assumptions, the change in the structure of the trajectory is regular, i.e., when a constraint becomes active along the continuation, only one boundary arc appears. Nevertheless it is possible that an infinite number of boundary arcs appear (see, e.g., [72]). This phenomenon is possible when the constraint is of order three at least. By using a properly modified continuation procedure, the reentry problem was solved in [55] and the results of [18] were retrieved.
These examples show that geometric optimal control theory can be used to analyze trajectory problems and that numerical continuation can be used to design efficient numerical solution methods. In the next section, we show step by step, on a nonacademic attitude-trajectory problem, how this analysis and the design of the numerical continuation procedure are carried out.
Application to attitude-trajectory optimal control
In this section, the nonacademic attitude-trajectory optimal control problem for a launch vehicle (classical and airborne) is analyzed in detail. Through this example, we illustrate how to analyze the (singular and regular) extremals of the problem with Lie and Poisson brackets, and how to elaborate numerical continuation procedures adapted to the solution structure. Indeed the theoretical analysis reveals the existence of a chattering phenomenon. Being aware of this feature is essential to devise an efficient numerical solution method.
7.1 Geometric analysis and numerical continuations for optimal attitude and trajectory control problem (\(\mathcal {P}_{S}\))
The problem is formulated in terms of dynamics, control, constraints and cost. The Pontryagin Maximum Principle and the geometric optimal control are then applied to analyze the extremals, revealing the existence of the chattering phenomenon.
7.1.1 Formulation of (\(\mathcal {P}_{S}\)) and difficulties
7.1.1.0 Minimum time attitude-trajectory control problem (\(\mathcal {P}_{S}\))
In this section, we formulate an attitude-trajectory minimum time control problem, denoted by (\(\mathcal {P}_{S}\)).
The trajectory of a launch vehicle is controlled by the thrust, which can have only limited deflection angles with respect to the vehicle longitudinal axis. Controlling the thrust direction therefore requires controlling the vehicle attitude. When the attitude dynamics is slow, or when the orientation maneuver is large, this induces a coupling between the attitude motion and the trajectory, as explained in Section 6.
When this coupling is not negligible the dynamics and the state must account simultaneously for the trajectory variables (considering the launch vehicle as a mass point) and the attitude variables (e.g., the Euler angles or the quaternion associated to the body frame).
The objective is then to determine the deflection angle law driving the vehicle from given initial conditions to the desired final attitude and velocity, taking into account the attitude-trajectory coupling.
The typical duration of such reorientation maneuvers is small compared to the overall launch trajectory. We assume therefore that the gravity acceleration is constant and we do not account for the position evolution. The aerodynamical forces (lift and drag) are supposed negligible in the first approach, and they will be introduced later in the system modelling. The dynamics equations in an inertial frame (O,x,y,z) are
where (v x , v y , v z ) represents the velocity, (g x , g y , g z ) represents the gravity acceleration, θ (pitch), ψ (yaw), ϕ (roll) are the Euler angles, a is the ratio of the thrust force to the mass, and b is the ratio of the thrust torque to the transverse inertia of the launcher (a and b are assumed constant). \(u=(u_{1},u_{2}) \in \mathbb {R}^{2}\) is the control input of the system, satisfying \(u_{1}^{2}+u_{2}^{2} \leq 1\). See more details of the model and the problem formulation in [92].
Defining the state vector as x=(v x ,v y ,v z ,θ,ψ,ϕ, ω x ,ω y ), we write the system (16) as the bi-input control-affine system
where the controls u 1 and u 2 satisfy the constraint \(u_{1}^{2}+u_{2}^{2} \leq 1\), and the vector fields f, g 1 and g 2 are defined by
We define the target set (submanifold of \(\mathbb {R}^{8}\))
where θ f , ψ f , ϕ f , ω xf and ω yf are desired final values of the state variables.
The first two conditions in (19) define a final velocity direction parallel to the longitudinal axis of the launcher, or in other terms a zero angle of attack.
The problem (\(\mathcal {P}_{S}\)) consists in steering the bi-input control-affine system (17) from \(x(0)=x_{0} = ({v_{x_{0}}},{v_{y_{0}}},{v_{z_{0}}},\theta _{0},\psi _{0},\phi _{0},{\omega _{x_{0}}},{\omega _{y_{0}}}) \in \mathbb {R}^{8}\) to the final target M 1S in minimum time t f , with controls satisfying the constraint \(u_{1}^{2}+u_{2}^{2} \leq 1\). The fixed initial condition is x(0)=x 0 and the final condition of problem \(\mathcal {P}_{S}\) is
The initial and final conditions are also called terminal conditions.
7.1.1.0 Difficulties
The problem (\(\mathcal {P}_{S}\)) is difficult to solve directly due to the coupling of the attitude and the trajectory. The system is of dimension 8 and its dynamics contains both slow (trajectory) and fast (attitude) components. In fact, the attitude movement is much faster than the trajectory movement. This observation is particularly important for designing an appropriate solution method. The idea is to define a simplified starting problem and then to apply continuation techniques. However, the essential difficulty of this problem is the chattering phenomenon, making the control switch an infinite number of times over a compact time interval. Such a phenomenon typically occurs when trying to connect bang arcs with higher-order singular arcs (see, e.g., [44, 63, 90, 91], or Section 3.4).
In a preliminary step, we limited ourselves to the planar problem, which is a single-input control affine system. This planar problem is close to real flight conditions of a launcher ascent phase. We have used the results of M.I. Zelikin and V.F. Borisov [90, 91] to understand the chattering phenomenon and to prove the local optimality of the chattering extremals. We refer the readers to [93] for details.
In a second step using the Pontryagin Maximum Principle and the geometric optimal control theory (see [1, 74, 85]), we have established an existence result of the chattering phenomenon for a class of bi-input control affine systems and we have applied the result to the problem (\(\mathcal {P}_{S}\)). More precisely, based on Goh and generalized Legendre-Clebsch conditions, we have proved that there exist optimal chattering arcs when connecting the regular arcs with a singular arc of order two.
7.1.2 Geometric analysis for (\(\mathcal {P}_{S}\))
7.1.2.0 Singular arcs and necessary conditions for optimality
The first step to analyze the problem is to apply the PMP (see Theorem 1). Let us consider the system (17), with the vector fields f, g 1 and g 2 defined by (18). According to the PMP, there must exist an absolutely continuous mapping \(p(\cdot)=(p_{v_{x}}(\cdot), p_{v_{y}}(\cdot), p_{v_{z}}(\cdot), p_{\theta }(\cdot), p_{\psi }(\cdot), p_{\phi }(\cdot), p_{\omega _{x}}(\cdot), p_{\omega _{y}}(\cdot))\) defined on [0,t f ], such that \(p(t)\in T^{*}_{x(t)}M\) (cotangent space) for every t∈[0,t f ], and a real number p 0≤0, with (p(·),p 0)≠0, such that \(\dot {x}(t) = \frac {\partial H}{\partial p}(x(t),p(t),p^{0},u(t))\) and almost everywhere on [0,t f ]
The Hamiltonian of the optimal control problem (\(\mathcal {P}_{S}\)) is defined by
with h 0(x,p)=〈p,f(x)〉, h 1(x,p)=〈p,g 1(x)〉, and h 2(x,p)=〈p,g 2(x)〉. With a slight abuse of notation as before, we will denote h i (t)=h i (x(t),p(t)), i=0,1,2.
Note that the abnormal extremals correspond to p 0=0 in the PMP. We suspect the existence of optimal abnormal extremals in (\(\mathcal {P}_{S}\)) for certain (nongeneric) terminal conditions. In the planar version of (\(\mathcal {P}_{S}\)) studied in [93], it is proved that the minimizers are normal (p 0≠0) whenever the optimal control switches at least two times. We expect that the same property still holds here. We are able to prove that the singular extremals of (\(\mathcal {P}_{S}\)) are normal; however, we are not able to establish a clear relationship between the number of switchings and the existence of abnormal minimizers as in [93]. Thus, in our numerical simulations later, we will assume that there is at least one normal extremal for problem (\(\mathcal {P}_{S}\)) and compute it.
The maximization condition of the PMP yields, almost everywhere on [0,t f ],
whenever \(\phantom {\dot {i}\!}\Phi (t)=(h_{1}(t),h_{2}(t)) = (b p_{\omega _{y}}(t),- b p_{\omega _{x}}(t))\neq (0,0)\). The function Φ is of class C 1 and is called (as well as its components) the switching function. The switching manifold Γ is the submanifold of \(\mathbb {R}^{16}\) of codimension two defined by \(\Gamma = \left \{ z=(x,p)\in \mathbb {R}^{16} \mid p_{\omega _{x}} = p_{\omega _{y}} = 0 \right \}\).
The transversality condition \(\phantom {\dot {i}\!}p(t_{f}) \perp T_{x(t_{f})} M_{1}\) yields
where \(T_{x(t_{f})}M_{1}\) is the tangent space to M 1 at the point x(t f ). The final time t f being free and the system being autonomous, we have also h 0(x(t),p(t))+∥Φ(t)∥+p 0=0, ∀t∈[0,t f ].
We say that an arc (restriction of an extremal to a subinterval I) is regular if ∥Φ(t)∥≠0 along I. Otherwise, the arc is said to be singular. An arc that is a concatenation of an infinite number of regular arcs is said to be chattering. The chattering arc is associated with a chattering control that switches an infinite number of times, over a compact time interval. A junction between a regular arc and a singular arc is said to be a singular junction.
We next compute the singular control, since this is important to understand and explain the occurrence of chattering. The usual method for computing singular controls is to differentiate repeatedly the switching function until the control appears explicitly. Note that here we need the notions of Lie bracket and Poisson bracket (see Section 3.2).
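These bracket computations are mechanical and can be checked symbolically. Below is a small sympy sketch on toy vector fields (not those of (\(\mathcal {P}_{S}\))), with the conventions \([f,g] = \frac {\partial g}{\partial x} f - \frac {\partial f}{\partial x} g\) and the Poisson bracket chosen so that \(\{h_{f},h_{g}\} = \langle p, [f,g](x)\rangle \), consistently with the identities used in the text.

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
p = sp.Matrix(sp.symbols('p1 p2 p3'))

def lie_bracket(f, g):
    # [f, g](x) = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

def poisson(a, b):
    # {a, b} = <da/dp, db/dx> - <da/dx, db/dp>
    grad = lambda h, v: sp.Matrix([h]).jacobian(v)
    return sp.expand((grad(a, p) * grad(b, x).T - grad(a, x) * grad(b, p).T)[0])

f = sp.Matrix([x[1], x[2], 0])         # toy drift field
g = sp.Matrix([0, 0, 1])               # toy control field
h0, h1 = (p.T * f)[0], (p.T * g)[0]    # Hamiltonian lifts h = <p, .>
# check the identity {h0, h1} = <p, [f, g]> on this example
assert sp.simplify(poisson(h0, h1) - (p.T * lie_bracket(f, g))[0]) == 0
```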
Assuming that ∥Φ(t)∥=0 for every t∈I, i.e., h 1(t)=h 2(t)=0, and differentiating with respect to t, we get, using the Poisson brackets,
along I. If the singular arc is optimal and the associated singular control is not saturating, then the Goh condition (see [51], see also Theorem 2) {h 1,h 2}=〈p,[g 1,g 2](x)〉=0 must be satisfied along I. Therefore we get that
along I.
Since the vector fields g 1 and g 2 commute, i.e., [g 1,g 2]=0, we get by differentiating again that
Assuming that
along I, we obtain that
so that the control u=(u 1,u 2) is said of order 1. u 1 and u 2 must moreover satisfy the constraint \(u_{1}^{2}+u_{2}^{2} \leq 1\).
However, in problem (\(\mathcal {P}_{S}\)), we have [g 1,[f,g 2]]=0, [g 2,[f,g 1]]=0, and {h 1,{h 0,h 1}}={h 2,{h 0,h 2}}=0 along I, which indicates that the singular control is of higher order. According to the Goh condition (see [51, 58], see also Definition 5 and Theorem 2), we must have {h i ,{h 0,h j }}=0, i,j=1,2, i≠j, and we can go on differentiating. It follows from [g 1,[f,g 1]]=0 and [g 2,[f,g 2]]=0 that
and we get
Using [g 1,g 2]=0 and [g i ,[f,g i ]]=0, i=1,2, it follows that \([g_{k},[g_{i},[f,g_{j}]]]=0\), i,j,k=1,2, and
This is a new constraint along the singular arc. The time derivative of this constraint is equal to zero and therefore does not induce any additional constraint.
The higher-order necessary conditions for optimality (see Definition 5) state that an optimal singular control can only appear explicitly within an even derivative. Therefore we must have
along I. Accordingly, \(h_{i}^{(3)}=0\), i=1,2, gives three additional constraints along the singular arc:
By differentiating the first two constraints with respect to t, we get
Assuming that \(\{h_{i},\mathrm{ad}^{3} h_{0} \cdot h_{i}\}<0\) for i=1,2 (generalized Legendre-Clebsch condition, see Corollary 1), and since
along I for problem (\(\mathcal {P}_{S}\)), the singular control is
The singular control u=(u 1,u 2) is then said of intrinsic order two (see the precise definition in Definition 5).
Let us assume that (x(·),p(·),p 0,u(·)) is a singular arc of (\(\mathcal {P}_{S}\)) along the subinterval I, which is locally optimal in the \(C^{0}\) topology. Then we have u=(u 1,u 2)=(0,0) along I, and u is a singular control of intrinsic order two. Moreover, we can establish (see the proof in [92]) that this singular extremal must be normal, i.e., p 0≠0, and according to Lemma 1, the Generalized Legendre-Clebsch Condition (GLCC) along I takes the form
We define next the singular surface S, which is filled by singular extremals of (\(\mathcal {P}_{S}\)), by
We will see later that the solutions of the problem of order zero (defined in the following Section) lie on this singular surface S.
Finally, the possibility of chattering in problem (\(\mathcal {P}_{S}\)) is demonstrated in [92]. A chattering arc appears when trying to connect a regular arc with an optimal singular arc. More precisely, let u be an optimal control, solution of (\(\mathcal {P}_{S}\)), and assume that u is singular on the sub-interval (t 1,t 2)⊂[0,t f ] and is regular elsewhere. If t 1>0 (resp., if t 2<t f ) then, for every ε>0, the control u switches an infinite number of times over the time interval [t 1−ε,t 1] (resp., on [t 2,t 2+ε]). The condition (22) was required in the proof.
The knowledge of the chattering occurrence is essential for solving the problem (\(\mathcal {P}_{S}\)) in practice. Chattering indeed raises numerical issues that may prevent any convergence, especially when using an indirect (shooting) approach. The occurrence of the chattering phenomenon in (\(\mathcal {P}_{S}\)) explains the failure of the indirect methods for certain terminal conditions (see also the recent paper [30]).
7.1.3 Indirect method and numerical continuation procedure for (\(\mathcal {P}_{S}\))
The principle of the continuation procedure is to start from the known solution of a simpler problem (called hereafter the problem of order zero) in order to initialize an indirect method for the more complicated problem (\(\mathcal {P}_{S}\)). This simple low-dimensional problem will then be embedded in higher dimension, and appropriate continuations will be applied to come back to the initial problem.
The problem of order zero defined below considers only the trajectory dynamics which is much slower than the attitude dynamics. Assuming an instantaneous attitude motion simplifies greatly the problem and provides an analytical solution. It is worth noting that the solution of the problem of order zero is contained in the singular surface S filled by the singular solutions for (\(\mathcal {P}_{S}\)), defined by (23).
7.1.3.0 Auxiliary problems
We define the problem of order zero, denoted by (\(\mathcal {P}_{0}\)) as the “subproblem” of problem (\(\mathcal {P}_{S}\)) reduced to the trajectory dynamics. The control for this problem is directly the vehicle attitude, and the attitude dynamics is not simulated.
Denoting the vehicle longitudinal axis as \(\vec {e}\) and considering it as the control vector (instead of the attitude angles θ, ψ), we formulate the problem as follows:
where \(\vec {w}\) is a given vector that refers to the desired target velocity direction, and \(\vec {g}\) is the gravitational acceleration vector. The solution of this problem is straightforward: the optimal solution of (\(\mathcal {P}_{0}\)) is given by
with
and
We refer the readers to [92] for the detailed calculation.
The Euler angles θ ∗∈(−π,π) and ψ ∗∈(−π/2,π/2) are retrieved from the components of the vector \(\vec {e}^{\ast }\) since \( \vec {e}^{\ast } = \left (\sin \theta ^{\ast } \cos \psi ^{\ast }, -\sin \psi ^{\ast }, \cos \theta ^{\ast } \cos \psi ^{\ast }\right)^{\top } \).
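Since cosψ ∗>0 on (−π/2,π/2), this retrieval is elementary; here is a sketch in code, assuming the expression of \(\vec {e}^{\ast }\) above (function and variable names are ours).

```python
import numpy as np

def euler_from_axis(e):
    # e = (sinθ cosψ, -sinψ, cosθ cosψ), with ψ in (-π/2, π/2), θ in (-π, π)
    ex, ey, ez = e
    psi = -np.arcsin(ey)            # cos(psi) > 0 on this range
    theta = np.arctan2(ex, ez)      # atan2(sinθ cosψ, cosθ cosψ) = θ
    return theta, psi

theta, psi = 0.7, -0.3              # round-trip check
e = np.array([np.sin(theta) * np.cos(psi), -np.sin(psi),
              np.cos(theta) * np.cos(psi)])
print(euler_from_axis(e))           # ~ (0.7, -0.3)
```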
We can check that these optimal angles θ=θ ∗, ψ=ψ ∗ and ϕ=ϕ ∗ (whatever the value of ϕ ∗) satisfy the Eqs. (23), so that the solution of (\(\mathcal {P}_{0}\)) is contained in the singular surface S. The optimal solution of (\(\mathcal {P}_{0}\)) actually corresponds to a singular solution of (\(\mathcal {P}_{S}\)) with the terminal conditions given by
A natural continuation strategy consists in changing continuously these terminal conditions (24)-(26) to come back to the terminal conditions (20) of (\(\mathcal {P}_{S}\)).
Unfortunately the chattering phenomenon may prevent the convergence of the shooting method. When the terminal conditions are in the neighborhood of the singular surface S, the optimal extremals are likely to contain a singular arc and thus chattering arcs causing the failure of the shooting method. In order to overcome the numerical issues we define a regularized problem with a modified cost functional.
The regularized problem (\(\mathcal {P}_{R}\)) consists in minimizing the cost functional
for the bi-input control-affine system (17), under the control constraints −1≤u i ≤1, i=1,2, and with the terminal conditions (20). The constant K>0 is arbitrary. We have replaced the constraint \(u_{1}^{2}+u_{2}^{2}\leq 1\) (i.e., u takes its values in the unit Euclidean disk) with the constraint that u takes its values in the unit Euclidean square. Note that we use the Euclidean square (and not the disk) because we observed that our numerical simulations worked better in this case. This regularized optimal control problem with the cost (27) has continuous extremal controls and it is therefore well suited to a continuation procedure.
The Hamiltonian of problem (\(\mathcal {P}_{R}\)) is
and according to the PMP, the optimal controls are
where the saturation operator is defined by sat(a,w,b)=min(max(w,a),b), i.e., the value w is clipped to the interval [a,b].
An important advantage of considering problem (\(\mathcal {P}_{R}\)) is that, when we embed the solutions of (\(\mathcal {P}_{0}\)) into (\(\mathcal {P}_{R}\)), they are not singular, whereas the solution of (\(\mathcal {P}_{0}\)) is a singular trajectory of the full problem (\(\mathcal {P}_{S}\)); passing directly from (\(\mathcal {P}_{0}\)) to (\(\mathcal {P}_{S}\)) thus causes essential difficulties due to chattering. More precisely, an extremal of (\(\mathcal {P}_{0}\)) can be embedded into (\(\mathcal {P}_{R}\)) by setting
where θ ∗ and ψ ∗ are given by solving problem \(\mathcal {P}_{0}\), with the natural terminal conditions given by (24) and (25)-(26). This solution is not a singular extremal for (\(\mathcal {P}_{R}\)). The extremal equations for (\(\mathcal {P}_{R}\)) are the same as for (\(\mathcal {P}_{S}\)), as well as the transversality conditions.
7.1.3.0 Numerical continuation procedure
The objective is to find the optimal solution of (\(\mathcal {P}_{S}\)), starting from the explicit solution of \(\mathcal {P}_{0}\). We proceed as follows:
-
First, we embed the solution of (\(\mathcal {P}_{0}\)) into (\(\mathcal {P}_{R}\)). For convenience, we still denote by (\(\mathcal {P}_{0}\)) the problem formulated in higher dimension.
-
Then, we pass from (\(\mathcal {P}_{0}\)) to (\(\mathcal {P}_{S}\)) by means of a numerical continuation procedure, involving three continuation parameters. The first two parameters λ 1 and λ 2 are used to pass continuously from the optimal solution of (\(\mathcal {P}_{0}\)) to the optimal solution of the regularized problem (\(\mathcal {P}_{R}\)) with prescribed terminal attitude conditions, for some fixed K>0. The third parameter λ 3 is then used to pass to the optimal solution of (\(\mathcal {P}_{S}\)) (see Fig. 13).
In a first step, we use the continuation parameter λ 1 to act on the initial conditions, according to
where \(\omega _{x}^{\ast } = \omega _{y}^{\ast } =0\), ϕ ∗=0, and θ ∗, ψ ∗ are given by the explicit solution of the problem (\(\mathcal {P}_{0}\)).
Using the transversality condition (21) and the extremal equations, the shooting function \( S_{\lambda _{1}} \) for the λ 1-continuation is of dimension 8 and defined by
where H K (t f ) with p 0=−1 is calculated from (28) and u 1 and u 2 are given by (29). Recall that we have proved that a singular extremal of problem (\(\mathcal {P}_{S}\)) must be normal, and since we are starting to solve the problem from a singular extremal, we can assume that p 0=−1.
Note again that there is no concern in using \(S_{\lambda _{1}}\) as shooting function for (\(\mathcal {P}_{R}\)). This would not be the case for (\(\mathcal {P}_{S}\)): if \(S_{\lambda _{1}}=0\), then, together with ω x (t f )=0 and ω y (t f )=0, the final point (x(t f ),p(t f )) of the extremal would lie on the singular surface S defined by (23), and this would cause the failure of the shooting method. By contrast, for problem (\(\mathcal {P}_{R}\)), even when x(t f )∈S, the shooting problem is smooth and it can still be solved.
The solution of (\(\mathcal {P}_{0}\)) is a solution of (\(\mathcal {P}_{R}\)) for λ 1=0, corresponding to the terminal conditions (24)-(25) (the other states at t f being free). By continuation, we vary λ 1 from 0 to 1, yielding the solution of (\(\mathcal {P}_{R}\)), for λ 1=1. The final state of the corresponding extremal gives some unconstrained Euler angles denoted by θ e =θ(t f ), ψ e =ψ(t f ), ϕ e =ϕ(t f ), ω xe =ω x (t f ) and ω ye =ω y (t f ).
In a second step, we use the continuation parameter λ 2 to act on the final conditions, in order to make them pass from the values θ e , ψ e , ϕ e , ω xe and ω ye , to the desired target values θ f , ψ f , ϕ f , ω xf and ω yf . The shooting function \( S_{\lambda _{2}} \) for the λ 2-continuation is still of dimension 8 and defined by
Solving this problem by varying λ 2 from 0 to 1, we obtain the solution of (\(\mathcal {P}_{R}\)), with the terminal condition (20).
Finally, in order to compute the solution of (\(\mathcal {P}_{S}\)), we use the continuation parameter λ 3 to pass from (\(\mathcal {P}_{R}\)) to (\(\mathcal {P}_{S}\)). We introduce the parameter λ 3 into the cost functional (27) and the Hamiltonian H K as follows:
According to the PMP, the extremal controls of this problem are given by u i =sat(−1,u ie ,1), i=1,2, where
The shooting function \(S_{\lambda _{3}}\) is defined similarly to \(S_{\lambda _{2}}\), replacing H K (t f ) with H K (t f ,λ 3). The solution of (\(\mathcal {P}_{S}\)) is then obtained by varying λ 3 continuously from 0 to 1.
This last continuation procedure fails in case of chattering, and thus it cannot be successful for arbitrary terminal conditions. In particular, if chattering occurs then the λ 3-continuation is expected to fail for some value \(\lambda _{3} = \lambda _{3}^{\ast }<1\). In such a case this value of λ 3 corresponds to a sub-optimal solution of (\(\mathcal {P}_{S}\)), which is practically valuable since it satisfies the terminal conditions with a continuous control and a reduced (although not minimal) final time. The numerical experiments show that this continuation procedure is very efficient. In most cases, optimal solutions with prescribed terminal conditions can be obtained within a few seconds (without parallel computations).
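The overall strategy can be summarized by a schematic driver chaining the three continuations; as described above, the last successful step is returned when the λ 3-continuation fails. Here solve_shooting is an assumed wrapper around a Newton solver for the shooting function, and the uniform step is a simplification (names and structure are ours).

```python
import numpy as np

def run_continuation(solve_shooting, S, Z, n_steps=20):
    # follow one continuation from 0 to 1; on failure, return the last
    # successful (sub-optimal but feasible) solution and its parameter
    Z_best, lam_best = Z, 0.0
    for lam in np.linspace(0.0, 1.0, n_steps + 1):
        ok, Z_new = solve_shooting(S, Z_best, lam)
        if not ok:
            return Z_best, lam_best
        Z_best, lam_best = Z_new, lam
    return Z_best, 1.0

# chained use (sketch): Z = explicit solution of (P_0) embedded in (P_R)
# Z, _    = run_continuation(solver, S_lambda1, Z)
# Z, _    = run_continuation(solver, S_lambda2, Z)
# Z, lam3 = run_continuation(solver, S_lambda3, Z)   # may stop at λ3* < 1
```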
7.1.4 Direct method
In this section we envision a direct approach for solving (\(\mathcal {P}_{S}\)), with a piecewise constant control over a given time discretization. The solutions obtained with such a method are sub-optimal, especially when the control is chattering (the number of switches being limited by the time step).
Since the initialization of a direct method may also raise some difficulties, we propose the following strategy. The idea is to start from the problem (\(\mathcal {P}_{S}\)) with relaxed terminal requirements, in order to get a first solution, and then to reintroduce step by step the final conditions (20) of (\(\mathcal {P}_{S}\)). We implement this direct approach with the software BOCOP and its batch optimization option (see [13]).
-
Step 1: we solve (\(\mathcal {P}_{S}\)) with the initial condition x(0)=x 0 and the final conditions
$${}\omega_{y}(t_{f})=0,\quad \theta (t_{f}) = \theta_{f},\quad v_{z} (t_{f}) \sin \theta_{f} - v_{x} (t_{f}) \cos \theta_{f} =0. $$These final conditions are those of the planar version of (\(\mathcal {P}_{S}\)) (see [93] for details). This problem is easily solved by a direct method without any initialization care (a constant initial guess for the discretized variables suffices to ensure convergence).
-
Then, in Steps 2, 3, 4 and 5, we add successively (and step by step) the final conditions
$$v_{z} (t_{f}) \sin \psi_{f} + v_{y} (t_{f}) \cos \theta_{f} \cos \psi_{f} =0, $$$$\psi (t_{f})= \psi_{f},\quad \phi (t_{f})=\phi_{f},\quad \omega_{x}(t_{f}) = \omega_{xf}, $$and for each new step we use the solution of the previous one as an initial guess.
At the end of this process, we have obtained the solution of (\(\mathcal {P}_{S}\)).
7.1.5 Comparison of the indirect and direct approaches
So far, in order to compute numerically the solutions of (\(\mathcal {P}_{S}\)), we have implemented two approaches. The indirect approach, combining shooting and numerical continuation, is time-efficient when the solution does not contain any singular arcs.
Depending on the terminal conditions, the optimal solution of (\(\mathcal {P}_{S}\)) may involve a singular arc of order two, and the connection with regular arcs generates chattering. The occurrence of chattering causes the failure of the indirect approach. For such cases, we have proposed two alternatives. The first alternative is based on an indirect approach involving three continuations. The last continuation starting from a regularized problem with smooth controls aims at coming back to the original problem that may be chattering. When chattering appears the continuation fails, but the last successful step provides a valuable smooth solution meeting the terminal conditions.
The second alternative is based on a direct approach, and it also yields a sub-optimal solution having a finite number of switches, this number being limited by the discretization step. In any case, the direct strategy is much more time consuming than the indirect approach, and the resulting control may exhibit many numerical oscillations, as can be observed in Fig. 14. This kind of solution is undesirable in practice.
Note that with both approaches, no a priori knowledge of the solution structure is required (in particular, the number of switches is unknown).
Note also that, since our aim is to show how to apply geometric optimal control techniques and numerical continuation methods, we do not make more detailed comparisons; we refer the interested readers to Section 6.2 of [92]. In fact, in aerospace applications, indirect methods are often preferred because they provide, in general, more precise optimal trajectories. This is especially important in deep-space trajectory planning missions.
As a conclusion on this example (\(\mathcal {P}_{S}\)), we can emphasize that the theoretical analysis has revealed the existence of singular solutions with possible chattering. This led us to introduce a regularized problem in order to overcome this essential difficulty. On the other hand, a continuation procedure has been devised based on the slow-fast rates of the dynamics. This procedure is initialized with the problem of order zero, reduced to the trajectory dynamics.
In the next section, we extend this approach to a more complicated problem (optimal pull-up maneuvers of airborne launch vehicles), in order to further illustrate the potential of continuation methods in aerospace applications.
7.2 Extension to optimal pull-up maneuver problem (\(\mathcal {P}_{A}\))
Since the first successful flight of the Pegasus vehicle in April 1990, airborne launch vehicles have been a potentially interesting option for small and medium-sized space transportation systems. The mobility and the deployment capability of airborne launch vehicles provide increased performance and reduced velocity requirements, thanks to the non-zero initial velocity and altitude. Airborne launch vehicles consist of a carrier aircraft (see left subfigure of Fig. 15) and a rocket-powered launch vehicle (see right subfigure of Fig. 15). The launch vehicle is released almost horizontally from the carrier aircraft and its engine is ignited a few seconds later, once the carrier aircraft has moved away. The flight begins with a pull-up maneuver [75, 76] targeting the optimal flight path angle for the subsequent ascent at zero angle of attack. The kinematic conditions for the Pegasus vehicle are recalled hereafter [8, 36, 67, 73]. The release takes place horizontally at an altitude of 12.65 km. The first stage is ignited at an altitude of 12.54 km and a velocity of 236.8 m/s (Mach 0.8). The pull-up maneuver targets a flight path angle of 13.8° at the end of the first stage flight. The load factor is limited to 2.5 g and the dynamic pressure is limited to 47.6 kPa.
The pull-up maneuver consists in an attitude maneuver such that the flight path angle increases up to its targeted value, while satisfying the state constraints on the load factor and the dynamic pressure. In this section, we address the minimum time-energy pull-up maneuver problem for airborne launch vehicles with a focus on the numerical solution method.
The model of the control system is more complex than (16) due to the aerodynamic forces, which depend on the flight conditions (atmospheric density depending on the altitude, vehicle angle of attack):
where (r x , r y , r z ) is the position, m is the mass, (L x , L y , L z ) is the lift force, and (D x , D y , D z ) is the drag force.
Defining the state variable x=(r x ,r y ,r z ,v x ,v y ,v z ,θ,ψ,ϕ,ω x ,ω y ), we write the system (30) as a bi-input control-affine system
where the controls u 1 and u 2 satisfy the constraint \(u_{1}^{2}+u_{2}^{2} \leq 1\), and the smooth vector fields \(\hat {f}\), \(\hat {g}_{1}\) and \(\hat {g}_{2}\) are defined by
The initial state is fixed \(\phantom {\dot {i}\!} x_{0} = (r_{x0},r_{y0},r_{z0},{v_{x_{0}}},{v_{y_{0}}},{v_{z_{0}}},\theta _{0},\) \(\phantom {\dot {i}\!}\psi _{0},\phi _{0},{\omega _{x_{0}}},{\omega _{y_{0}}}) \in \mathbb {R}^{11} \), and the target set is defined by (submanifold of \(\mathbb {R}^{11}\))
The optimal pull-up maneuver problem (\(\mathcal {P}_{A}\)) consists in steering the bi-input control-affine system (31) from
to a point belonging to the final target M 1, i.e.,
while minimizing the cost functional
with controls satisfying the constraint \(u_{1}^{2}+u_{2}^{2} \leq 1\), and with the state satisfying constraints on the lateral load factor and the dynamic pressure due to aerodynamic forces
where ρ is the air density, S is the reference surface of the launcher, and C N is the lift coefficient, approximated by \(C_{N} = C_{N0} + C_{N\alpha }\, \alpha \) with given constants \(C_{N0}\) and \(C_{N\alpha }\); α is the angle of attack given by
and |v| is the modulus of the velocity \(|v|=\sqrt {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}\). Compared to (\(\mathcal {P}_{S}\)), a significant additional difficulty comes from the state constraints.
Hard constraint formulation Recall that a state constraint c(x)≤0 is of order m if \(\hat {g}_{i}.c=\hat {g}_{i} \hat {f}.c=\cdots =\hat {g}_{i} \hat {f}^{m-2}.c=0\) and \(\hat {g}_{i} \hat {f}^{m-1}.c\neq 0\), i=1,2. Here we use the notation of Lie derivatives, see Section 3.2. A boundary arc is an arc (not reduced to a point) satisfying the system
and the control along the boundary arc is a feedback control obtained by solving
After calculations, we find that the constraint on the load factor \(\bar {n}\) is of order 2 and the constraint on the dynamic pressure \(\bar {q}\) is of order 3.
According to the maximum principle with state constraints (see, e.g., [53]), there exists a nontrivial triple of Lagrange multipliers (p,p 0,η), with p 0≤0, p∈B V(0,t f )11 and η=(η 1,η 2)∈B V(0,t f )2, where B V(0,t f ) is the set of functions of bounded variation over [0,t f ], such that almost everywhere on [0,t f ]
where the Hamiltonian of the problem is
and we have the maximization condition
for almost every t. In addition, we have d η i ≥0 and \(\int _{0}^{t_{f}} c_{i}(x) \, d\eta _{i} = 0\) for i=1,2.
Along a boundary arc, we must have \(h_{i} = \langle p,\hat {g}_{i}(x) \rangle = 0\), i=1,2. Assuming that only the first constraint (which is of order 2) is active along this boundary arc, and differentiating twice the switching functions h i , i=1,2, we have \(d^{2} h_{i} = \langle p, \text {ad}^{2}\hat {f}.\hat {g}_{i} (x) \rangle dt^{2} - d\eta _{1} \cdot (\text {ad}\hat {f}.\hat {g}_{i}).c_{1} dt\). Moreover, at an entry point occurring at t=τ, we have \({dh}_{i}(\tau ^{+})={dh}_{i}(\tau ^{-})- d\eta _{1} \cdot (\text {ad}\hat {f}.\hat {g}_{i}).c_{1} =0\), which yields d η 1. A similar result is obtained at an exit point.
The main drawback of this formulation is that the adjoint vector p is no longer absolutely continuous: a jump d η may occur at the entry or at the exit point of a boundary arc, which significantly complicates the numerical solution.
An alternative approach to address the dynamic pressure state constraint, used in [37, 41], is to design a feedback law that reduces the commanded throttle based on an error signal. According to [41], this approach works well when the trajectory does not violate the maximal dynamic pressure constraint too much, but it may cause instability if the constraint is violated significantly. In any case the derived solutions are suboptimal.
Another alternative is the penalty function method (also called soft constraint method). The soft constraint consists in introducing a penalty function to discard solutions entering the constrained region [40, 64, 84]. For the problem (\(\mathcal {P}_{A}\)), this soft constraint method is well suited in view of a continuation procedure starting from an unconstrained solution. This initial solution generally violates significantly the state constraint. The continuation procedure aims at reducing progressively the infeasibility.
Soft constraint formulation The problem (\(\mathcal {P}_{A}\)) is recast as an unconstrained optimal control problem by adding a penalty function to the cost functional defined by (34). The penalized cost is
where the penalty function P(·) for the state constraints is defined by
The constraint violation is managed by tuning the parameter \(K_p\). For convenience we still denote this unconstrained problem by (\(\mathcal {P}_{A}\)) and we apply the PMP.
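Before doing so, a minimal numerical sketch of the penalized cost may help fix ideas. Since the display equations are not reproduced above, the sketch assumes a quadratic exterior penalty \(P(x)=K_p\sum_i \max(0,c_i(x))^2\) and a running cost of minimum-time type plus the regularization \(K\|u\|^2\) used below; these exact forms are our assumption, not the paper's displayed formulas.

```python
import numpy as np

def penalty(c, K_p):
    """Quadratic exterior penalty, assuming P = K_p * sum_i max(0, c_i)^2.

    c collects the constraint values c_1(x) = nbar(x) - nbar_max and
    c_2(x) = qbar(x) - qbar_max (both nonpositive when feasible)."""
    c = np.asarray(c, dtype=float)
    return K_p * np.sum(np.maximum(0.0, c) ** 2)

def running_cost(u, c, K, K_p):
    """Penalized running cost: 1 (minimum time) + K|u|^2 + P."""
    u = np.asarray(u, dtype=float)
    return 1.0 + K * (u @ u) + penalty(c, K_p)

# Example: u = (0.3, 0.1), load factor exceeding its bound by 0.5 g,
# dynamic pressure feasible with a wide margin:
# running_cost((0.3, 0.1), (0.5 * 9.81, -1.0e3), K=100.0, K_p=0.1)
```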
Application of the PMP The Hamiltonian is now given by
The adjoint equation is
where we have set \(p =(p_{r_{x}},p_{r_{y}},p_{r_{z}},p_{v_{x}},p_{v_{y}},p_{v_{z}},p_{\theta },p_{\psi },p_{\phi },p_{\omega _{x}},p_{\omega _{y}})\). Let \(h=(h_1,h_2)\) be the switching function and let
The maximization condition of the PMP gives
The transversality condition \(p(t_{f}) \perp T_{x(t_{f})} M_{1}\), where \(T_{x(t_{f})}M_{1}\) is the tangent space to \(M_1\) at the point \(x(t_f)\), yields the additional conditions
and
The final time t f being free and the system being autonomous, we have in addition that
almost everywhere on \([0,t_f]\). As previously, we can assume \(p^0=-1\).
The optimal control given by (36) is regular unless K=0 and ∥h(t)∥=0, in which case it becomes singular. As before, the term \(K \int _{0}^{t_{f}} \|u(t)\|^{2} dt\) in the cost functional (34) is used to avoid chattering [44, 63, 72, 90, 91], and the exact minimum time solution can be approached by decreasing the value of K≥0 step by step, until the shooting method possibly fails due to chattering.
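This step-by-step decrease of K can be organized as in the following sketch, where `solve_shooting` is a placeholder for a Newton solve of the shooting problem and the geometric decrease factor is our own choice:

```python
def decrease_regularization(solve_shooting, z0, K0=1.0e3, K_min=1.0e-3, factor=0.5):
    """Approach the minimum time solution by decreasing K geometrically.

    solve_shooting(z, K) is assumed to return (z_new, converged); the loop
    keeps the last converged solution and stops when the shooting method
    fails (typically because chattering reappears) or K is small enough."""
    z, K = z0, K0
    while K > K_min:
        z_try, converged = solve_shooting(z, K * factor)
        if not converged:      # chattering: keep the last regularized solution
            break
        z, K = z_try, K * factor
    return z, K
```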
Solution algorithm and comparison with ( \(\boldsymbol{\mathcal {P}}_{S}\) ) We aim at extending the continuation strategy developed for (\(\mathcal {P}_{S}\)) in order to address (\(\mathcal {P}_{A}\)). Comparing (\(\mathcal {P}_{A}\)) with (\(\mathcal {P}_{S}\)), we see that in (\(\mathcal {P}_{A}\)):
- (a) the position of the launcher is added to the state vector;
- (b) the gravity acceleration \(\vec {g}\) depends on the position, and the aerodynamic forces (the lift \(\vec {L}\) and the drag \(\vec {D}\)) are taken into account;
- (c) the cost functional is penalized by the state constraint violations.
Regarding point (a), we need to embed the solution of (\(\mathcal {P}_{0}\)) into a higher-dimensional problem in which the adjoint of the position \(\vec {p}_{r} = (p_{rx},p_{ry},p_{rz})^{\top }\) is zero. More precisely, consider the following problem, denoted by (\(\mathcal {P}_{0}^{H}\)), in which both the position and the velocity are considered
The solution of (\(\mathcal {P}_{0}^{H}\)) is retrieved from the solution of (\(\mathcal {P}_{0}\)) completed by the new state components,
and the optimal control is
with
and
We use this solution as the initialization of the continuation procedure for solving (\(\mathcal {P}_{A}\)).
Point (b) can be addressed with a new continuation parameter \(\lambda_4\), introducing simultaneously the variable gravity acceleration, the aerodynamic forces, and the atmospheric density ρ (exponential model) as follows:
and
where \(R_E = 6378137\) m is the radius of the Earth, \(h_s = 7143\) m, \(\rho_0 = 1.225\) kg/m\(^3\), and \(g_x\), \(g_y\), \(g_z\) are given by
with
and
The parameter \(\lambda_4\) acts only on the dynamics. When the PMP is applied, \(\lambda_4\) appears explicitly in the adjoint equations, but not in the shooting function.
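One plausible convex \(\lambda_4\)-blending of the environment models is sketched below; the exact blending formulas are in display equations not reproduced above, and the constant-gravity level `g0` and its direction along the first (vertical) axis are our assumptions:

```python
import numpy as np

R_E = 6378137.0      # Earth radius (m), as in the text
h_s = 7143.0         # atmosphere scale height (m), as in the text
rho0 = 1.225         # sea-level density (kg/m^3), as in the text
g0 = 9.80665         # constant-gravity level (m/s^2): our assumption

def blended_environment(r, lam4):
    """Sketch of a lambda_4-blending between the simple model (constant
    gravity, no atmosphere, lam4 = 0) and the full model (central gravity,
    exponential atmosphere, lam4 = 1). r is the position vector measured
    from the Earth's center."""
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    h = rn - R_E                              # altitude above the spherical Earth
    rho = lam4 * rho0 * np.exp(-h / h_s)      # exponential model, off at lam4 = 0
    g_full = -(g0 * R_E**2 / rn**3) * r       # central inverse-square gravity
    g_flat = np.array([-g0, 0.0, 0.0])        # constant gravity, first axis vertical
    return rho, (1.0 - lam4) * g_flat + lam4 * g_full
```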
Finally, regarding point (c), the penalty parameter \(K_p\) in the cost functional (27) has to be large enough to produce a feasible solution. Unfortunately, too large a value of \(K_p\) may generate ill-conditioning and raise numerical difficulties. In order to obtain an adequate value of \(K_p\), a simple strategy [43, 80] consists in starting with a rather small value \(K_p = K_{p0}\) and solving a series of problems with increasing \(K_p\). The process is stopped as soon as ∥c(x(t))∥<ε c for every t∈[0,t f ], for some given tolerance ε c >0.
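A sketch of this penalty tuning loop follows; the stopping test is the one stated above, while the geometric schedule (`factor`) and the interface of `solve` are our choices:

```python
def tune_penalty(solve, z, K_p0=0.1, K_p_max=100.0, eps_c=1.0e-3, factor=2.0):
    """Increase K_p until max_t ||c(x(t))|| < eps_c.

    solve(z, K_p) is assumed to return the updated shooting unknowns
    together with the maximal constraint violation along the resulting
    trajectory."""
    K_p = K_p0
    while K_p <= K_p_max:
        z, max_violation = solve(z, K_p)
        if max_violation < eps_c:      # trajectory feasible up to the tolerance
            break
        K_p *= factor                  # geometric increase of the penalty
    return z, K_p
```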
For convenience, we define the exo-atmospheric pull-up maneuver problem (\(\mathcal {P}_{A}^{exo}\)) as (\(\mathcal {P}_{A}\)) without state constraints and without aerodynamic forces and the unconstrained pull-up maneuver problem (\(\mathcal {P}_{A}^{unc}\)) as (\(\mathcal {P}_{A}\)) without state constraints.
We proceed as follows:
- First, we embed the solution of (\(\mathcal {P}_{0}\)) into the higher-dimensional problem (\(\mathcal {P}_{A}\)); the resulting problem is denoted by (\(\mathcal {P}_{0}^{H}\)).
- Then, we pass from (\(\mathcal {P}_{0}^{H}\)) to (\(\mathcal {P}_{A}\)) by a numerical continuation procedure involving four continuation parameters: two parameters \(\lambda_1\) and \(\lambda_2\) introduce the terminal conditions (32)-(33) into (\(\mathcal {P}_{A}^{exo}\)); \(\lambda_4\) introduces the variable gravity acceleration and the aerodynamic forces in (\(\mathcal {P}_{A}^{unc}\)); \(\lambda_5\) introduces the soft constraints in (\(\mathcal {P}_{A}\)).
The overall continuation procedure is depicted in Fig. 16. The final step of the procedure is to increase \(\lambda_3\) (or, equivalently, to decrease K) in order to minimize the maneuver duration.
More precisely, we have to solve the following problem with continuation parameters \(\lambda_i\), applied in the order i=1,2,4,5,3,
subject to the dynamics
and with initial conditions
and final conditions
The attitude angles \(\theta_e\), \(\psi_e\), \(\phi_e\), \(\omega_{xe}\), and \(\omega_{ye}\) are those obtained at the end of the first continuation on \(\lambda_1\), and \(\theta^*\), \(\psi^*\) are the explicit solutions of (\(\mathcal {P}_{0}^{H}\)).
These successive continuations are implemented using the PC continuation combined with the multiple shooting method. Additional enhancements regarding the choice of the inertial frame and the handling of the Euler angle singularities help improve the overall robustness of the solution process.
Multiple shooting The unknowns of this shooting problem are \(p(0) \in \mathbb {R}^{11}\), \(t_{f} \in \mathbb {R}\), and \(z_{i}=(x_{i},p_{i}) \in \mathbb {R}^{22}\), i=1,⋯,N−1, where the \(z_i\) are the node points of the multiple shooting method (see Section 4.1). We set \(Z=(p(0),t_f,z_i)\), and let \(E=(\theta,\psi,\phi)\), \(\omega=(\omega_x,\omega_y)\), \(p_r=(p_{rx},p_{ry},p_{rz})\), \(p_E=(p_\theta,p_\psi,p_\phi)\), and \(p_\omega=(p_{\omega_x},p_{\omega_y})\). Then the shooting function with the continuation parameter \(\lambda_1\) is given by
where the Hamiltonian is given by
The shooting function with the continuation parameter λ 2 is
and the shooting functions \(G_{\lambda _{4}} \) and \(G_{\lambda _{5}} \) are identical to \(G_{\lambda _{2}} \).
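The assembly of such a multiple shooting residual can be sketched generically as follows. This is a schematic structure, not the exact \(G_{\lambda_i}\): `flow` and `boundary` are placeholders for the extremal flow of the state-costate system and for the terminal/transversality conditions of the current continuation step.

```python
import numpy as np

def shooting_residual(Z, x0, flow, boundary, N):
    """Generic multiple-shooting residual for an 11-dimensional state.

    Z packs (p(0), t_f, z_1, ..., z_{N-1}) with p(0) in R^11 and z_i in
    R^22; flow(z, t0, t1) integrates the extremal system over [t0, t1];
    boundary(z_f, t_f) returns the terminal residuals."""
    p0, tf = Z[:11], Z[11]
    nodes = Z[12:].reshape(N - 1, 22) if N > 1 else np.empty((0, 22))
    z = np.concatenate([x0, p0])              # initial state and costate guess
    times = np.linspace(0.0, tf, N + 1)       # N arcs, N - 1 interior nodes
    res = []
    for i in range(N):
        z_end = flow(z, times[i], times[i + 1])   # integrate one arc
        if i < N - 1:
            res.append(z_end - nodes[i])          # matching condition at node i
            z = nodes[i]                          # restart from the node unknown
        else:
            res.append(boundary(z_end, tf))       # terminal conditions
    return np.concatenate(res)
```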
PC continuation The predictor-corrector continuation requires the calculation of the Jacobian matrix \(J_G\) (see Section 5.2), which is computationally expensive. In order to speed up the process, an approximation is used, based on the assumption that there is no conjugate point. According to [27], the first turning point of \(\lambda (\bar {s})\) (where \(\frac {d \lambda }{ds}(\bar {s})=0\) and \(\frac {d^{2} \lambda }{ds^{2}}(\bar {s}) \neq 0 \)) corresponds to a conjugate point (the first point where extremals lose local optimality). If we assume the absence of conjugate points, there is no turning point for λ(s), and λ increases monotonically along the zero path. Knowing three zeros \((Z_{i-2},\lambda_{i-2})\), \((Z_{i-1},\lambda_{i-1})\) and \((Z_i,\lambda_i)\), and letting \(s_1=\|(Z_{i-1},\lambda_{i-1})-(Z_{i-2},\lambda_{i-2})\|\), \(s_2=\|(Z_i,\lambda_i)-(Z_{i-2},\lambda_{i-2})\|\), \(s_3=\|(Z_i,\lambda_i)-(Z_{i-1},\lambda_{i-1})\|\), we can approximate the tangent vector \(t(J_G)\) by
When the step length \(h_s\) is small enough, this approximation yields a predicted point (15) very close to the true zero.
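Since the display formula for the approximate tangent is not reproduced above, the sketch below uses one standard three-point backward-difference reconstruction consistent with the distances \(s_1\), \(s_2\), \(s_3\) just defined (the paper's exact formula may differ):

```python
import numpy as np

def approx_tangent(Ya, Yb, Yc):
    """Approximate unit tangent to the zero path at the latest zero Yc,
    where Y = (Z, lambda) and Ya, Yb, Yc are the last three computed
    zeros, oldest first (assumed distinct). This is the derivative at
    arclength s2 of the quadratic fit through the arclengths 0, s1, s2."""
    s1 = np.linalg.norm(Yb - Ya)
    s2 = np.linalg.norm(Yc - Ya)      # for nearly collinear zeros, s2 - s1 = s3
    t = (Ya * (s2 - s1) / (s1 * s2)
         - Yb * s2 / (s1 * (s2 - s1))
         + Yc * (2 * s2 - s1) / (s2 * (s2 - s1)))
    return t / np.linalg.norm(t)      # unit tangent for the predictor step (15)
```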
Change of frame Changing the inertial reference frame can improve the problem conditioning and enhance the numerical solution process. The new frame \(S_{R}^{\prime }\) is defined from the initial frame \(S_R\) by two successive rotations of angles \((\beta_1,\beta_2)\). The problem (\(\mathcal {P}_{A}\)) becomes numerically easier to solve when the new reference frame \(S_{R}^{\prime }\) is adapted to the terminal conditions. However, we do not know a priori which reference frame is best suited. We propose to choose the reference frame associated to \((\beta_1,\beta_2)\) such that \(\psi ^{\prime }_{f}=-\psi ^{\prime }_{0}\) with \(| \psi ^{\prime }_{f} |+| \psi ^{\prime }_{0} |\) minimal (the prime denotes variables expressed in \(S_{R}^{\prime }\)). This choice centers the terminal values of the yaw angle around zero, so that the solution can be expected to remain far from the Euler angle singularities occurring when ψ→π/2+k π.
This frame rotation defines a nonlinear state transformation, which acts as a preconditioner. Numerical experiments show that it actually enhances the robustness of the algorithm. The reader is referred to [93] for more details on the change of frame.
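A minimal sketch of the frame construction is given below; the paper only specifies the two angles \((\beta_1,\beta_2)\), so the choice of rotation axes (third axis, then second axis) is our assumption:

```python
import numpy as np

def rotation_to_new_frame(beta1, beta2):
    """Rotation matrix from S_R to S_R' as two successive elementary
    rotations (axes assumed: third, then second)."""
    c1, s1 = np.cos(beta1), np.sin(beta1)
    c2, s2 = np.cos(beta2), np.sin(beta2)
    R3 = np.array([[c1, -s1, 0.0], [s1, c1, 0.0], [0.0, 0.0, 1.0]])
    R2 = np.array([[c2, 0.0, s2], [0.0, 1.0, 0.0], [-s2, 0.0, c2]])
    return R2 @ R3

# Vectors transform as v' = R v; the terminal yaw angles are recomputed in
# S_R' and (beta1, beta2) is selected so that psi'_f = -psi'_0 with
# |psi'_f| + |psi'_0| minimal.
```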
Singularities of Euler angles The above frame change is not sufficient to avoid the Euler angle singularities in all cases. Smoothing the vector fields at these singular configurations is another enhancement improving the overall robustness. The state and costate equations are smoothed as follows. Assuming first that \(\dot {\theta }\) is bounded, we have ω x sinϕ+ω y cosϕ→0 when ψ→π/2+k π. Since
and \(\dot {\theta }/\dot {\phi } \to 1\) as ψ→π/2+k π, we can smooth the state equations by setting \(\dot {\theta } = \dot {\phi } = 0\) when ψ→π/2+k π. Assuming then that \(- \frac {p_{\theta }+p_{\phi } \sin \psi }{\cos \psi } \to A < \infty \) as ψ→π/2+k π, and taking the first-order derivatives of the numerator and denominator,
we obtain A=0. We can smooth the costate equations by setting \(\dot {p}_{\theta } = 0\), \(\dot {p}_{\phi } = 0\), \(\dot {p}_{\psi } = a \sin \theta\, p_{v_x} + a \cos \theta\, p_{v_z}\), \(\dot {p}_{\omega _{x}} = -p_{\psi } \cos \phi \), \(\dot {p}_{\omega _{y}} = p_{\psi } \sin \phi \). Summing up, near the points ψ=π/2+k π, the attitude equations in systems (31) and (35) become
Equations (38) are used close to the singularities.
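A guarded evaluation of the attitude kinematics can then be sketched as follows. The regular formulas below are our reconstruction, chosen to be consistent with the limits stated above (\(\dot\theta/\dot\phi\to 1\) and \(\omega_x\sin\phi+\omega_y\cos\phi\to 0\)); they should be checked against system (31):

```python
import numpy as np

def attitude_kinematics(psi, phi, wx, wy, eps=1.0e-6):
    """Euler-angle kinematics with the smoothing of (38) near
    psi = pi/2 + k*pi: there we enforce theta_dot = phi_dot = 0."""
    s = wx * np.sin(phi) + wy * np.cos(phi)
    psi_dot = wx * np.cos(phi) - wy * np.sin(phi)
    if abs(np.cos(psi)) < eps:        # smoothed equations (38)
        return 0.0, psi_dot, 0.0
    theta_dot = s / np.cos(psi)       # regular formulas (our reconstruction)
    phi_dot = s * np.tan(psi)         # note theta_dot/phi_dot = 1/sin(psi) -> 1
    return theta_dot, psi_dot, phi_dot
```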
Algorithm We describe the whole numerical strategy for solving (\(\mathcal {P}_{A}\)) in the following algorithm.
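The listing itself (Algorithm 3) being a displayed float, a schematic rendering of the strategy it implements, as described above, could look as follows; `shooting` and the step-size schedule are placeholders, and the per-stage iteration budget matches the limit of 200 simulations used in the experiments below:

```python
def solve_PA(shooting, z_init):
    """Schematic structure of the successive continuations (our rendering,
    not the paper's exact Algorithm 3). shooting(step, z, lam) is assumed
    to solve the shooting problem of the named step at parameter lam."""
    z = z_init                                      # explicit solution of (P_0^H)
    for step in ("lambda1", "lambda2", "lambda4", "lambda5"):
        z = run_continuation(shooting, step, z)     # terminal conditions, then
                                                    # environment, then soft constraints
    return run_continuation(shooting, "lambda3", z) # finally decrease K (min time)

def run_continuation(shooting, step, z, n_max=200):
    """Naive PC-style driver: push lam from 0 to 1 with an adaptive step."""
    lam, dlam, it = 0.0, 0.1, 0
    while lam < 1.0 and it < n_max:
        try:
            z = shooting(step, z, min(1.0, lam + dlam))
            lam = min(1.0, lam + dlam)
            dlam *= 1.5                             # success: lengthen the step
        except RuntimeError:                        # Newton failed: shorten the step
            dlam *= 0.5
        it += 1
    return z
```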
7.3 Numerical results of solving (\(\mathcal {P}_{A}\))
Algorithm 3 is first applied to a pull-up maneuver of an airborne launch vehicle just after its release from the carrier. We present some statistical results showing the robustness of our algorithm. A second example considers the three-dimensional reorientation maneuver of a launch vehicle upper stage after a stage separation.
7.3.1 Pull-up maneuvers of an airborne launch vehicle (ALV)
We consider a pull-up maneuver of an airborne launch vehicle close to the Pegasus configuration: a=15.8, b=0.2, S=14 m\(^2\), \(C_{x0}=0.06\), \(C_{x\alpha}=0\), \(C_{z0}=0\), and \(C_{z\alpha}=4.7\). Let \(\bar {n}_{max}=2.2\, g\) and \(\bar {q}_{max}=47\) kPa. The initial conditions (32) correspond to the engine ignition just after the release.
The final conditions (33) correspond to the beginning of the atmospheric ascent flight at zero angle of attack.
Such pull-up maneuvers are generally planar (ψ f =0°). Here we set ψ f =10° in order to show that the algorithm can also deal efficiently with non-planar pull-up maneuvers.
The multiple shooting method is applied with three node points. The components of the state variable x and of the control u are plotted in Figs. 17 and 18, the components of the adjoint variable p are plotted in Fig. 19, and the time histories of the load factor \(\bar {n}\) and of the dynamic pressure \(\bar {q}\) are plotted in Fig. 20. The position components are given in the geographic local frame, with the vertical along the first axis (denoted x, not to be confused with the state vector). The first control component \(u_1\) lies mainly in the trajectory plane and acts mainly on the pitch angle.
We observe in Fig. 20 a boundary arc on the load factor constraint near the maximal level \(\bar {n}_{max}=2.2\, g\). This corresponds in Fig. 19 to the switching function \(h(t)=b\,(p_{\omega_y},-p_{\omega_x})\) being close to zero. Comparing Figs. 18 and 19, we see that the control follows the shape of the switching function. On the other hand, the state constraint on the dynamic pressure is never active.
We also observe in Fig. 19 a steeper variation of \(p_\theta(t)\) at t=5.86 s. The penalty function P(x) becomes positive at this time and adds terms to the adjoint differential equation.
Running this example requires 24.6 s to compute the optimal solution (CPU: Intel(R) Core(TM) i5-2500 at 3.30 GHz; memory: 3.8 GiB; compiler: gcc 4.8.4, Ubuntu 14.04 LTS). The number of nodes for the multiple shooting has been set to 3 from experiments: passing to four nodes increases the computing time to 31.2 s without obvious robustness benefit.
We next present some statistical results obtained with the same computer settings.
Statistical results
(\(\mathcal {P}_{A}\)) is solved for various terminal conditions. The initial and final conditions are swept in the ranges given in Table 2. The last cell of the table indicates that the initial angle of attack is bounded by 10 degrees in order to exclude unrealistic cases.
For each variable, we choose a discretization step and we solve all possible combinations resulting from this discretization (factorial experiment). The total number of cases is 1701. All cases are run with the penalty parameter varying from K p0=0.1 to K p1=100 during the third continuation. For each continuation stage the number of simulations is limited to 200.
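Such a factorial sweep can be generated as in the following sketch; the variable names and the angle-of-attack filter key are illustrative:

```python
import itertools

def factorial_cases(grid, alpha0_max=10.0):
    """Enumerate the factorial experiment over the discretized terminal
    conditions of Table 2. grid maps each swept variable to its list of
    values; "alpha0" (initial angle of attack, degrees) is our assumed
    key for the bound excluding unrealistic cases."""
    names = sorted(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        case = dict(zip(names, values))
        if abs(case.get("alpha0", 0.0)) <= alpha0_max:
            yield case
```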
The 1701 cases are run for different settings of the number of nodes (N=0 or N=2) and of the regularization parameter (K=800 or K=1000).
The statistical results are reported in Tables 3, 4 and 5.
Tables 3-4 show the results of the multiple shooting with 2 nodes, for different values of the regularization parameter K. The algorithm appears fairly robust with respect to the terminal conditions. The choice of the regularization parameter K affects the results: (i) the success rate increases (resp. decreases) in the non-planar case (resp. planar case) when K increases from K=800 to K=1000; (ii) in terms of execution time, in both cases a result is obtained faster in the planar case than in the non-planar case, and most of the time is devoted to dealing with the state constraints during the last continuation.
This suggests that for each specific problem (defined by the launcher configuration and the terminal conditions) a systematic experiment should be carried out to find the best value of K. For example, we have tested the planar cases with different values of K; the success rate and the execution time are plotted with respect to K in Fig. 21.
We see that the value of K should be neither too large nor too small. From Tables 3, 4 and 5, we also observe that the \(\lambda_2\)-continuation causes most failures in the non-planar case. The success rate could possibly be improved by adapting the value of K.
Tables 3 and 5 compare the multiple shooting (N=2) and the single shooting (N=0) methods. The multiple shooting method clearly improves the robustness of the algorithm, without significant increase of the execution time.
Figure 22 plots the success rate and the execution time as functions of the number of nodes. The test case is the planar maneuver with the regularization parameter K set to 5.5×10\(^3\). The success rate does not increase monotonically with the number of node points, and the execution time does not change significantly for N less than 6. When N≥6, the success rate decreases quickly, and it vanishes at N=7. When the number of unknowns of the shooting method becomes too large, the domain of convergence of a Newton-type method shrinks, which eventually leads to a lower success rate.
7.3.2 Reorientation maneuver of a launch vehicle
Along multi-burn ascent trajectories, the controls (Euler angles) exhibit jumps at the stage separations (see for example [59, Figure 3]). In this case, a reorientation maneuver is necessary to follow the optimal thrust direction. For this reason, we also apply the above algorithm to the maneuver problem of the upper stages of launch vehicles.
In contrast to the pull-up maneuvers of an airborne launch vehicle, these reorientation maneuvers are in general three-dimensional and of lower magnitude. They occur at high altitude (typically above 50 km, since a sufficiently low dynamic pressure is required to ensure separation safety) and at high velocity (since the first stage has already delivered a large velocity increment).
The maneuver occurs in vacuum so that no state constraints apply. Finding the minimum time maneuver corresponds to solving the problem (\(\mathcal {P}_{S}\)).
In the example, we set the system parameters in (31) to a=20, b=0.2, which approximate an Ariane-like launcher. The initial conditions (32) are
and the final conditions (33) are
The multiple shooting method is applied with four node points. Figures 23 and 24 report the components of the state and control variables. We observe that, for t∈[32,145] s, the control is almost null and the attitude angles take the solution values of the zero-order problem (\(\mathcal {P}_{0}^{H}\)): θ=151.5°≈θ\(^*\)=151.57°, ψ=8.6°≈ψ\(^*\)=8.85°. The regularization term \(K \int _{0}^{t_{f}} \|u\|^{2} dt\) in the cost functional yields a continuous control, plotted in Fig. 24, and avoids chattering. For this application case the regularization parameter moves from \((1-\lambda_3)K=8\times 10^4\) (\(\lambda_3=0\)) down to \((1-\lambda_3)K=240\) (\(\lambda_3=0.997\)), and the computing time is about 110 s.
The maneuver duration \(t_f\) is about 175 s, due to the large direction change required on the velocity. During a real flight the velocity direction change is much smaller and the maneuver takes at most a few seconds. Our purpose in presenting this “unrealistic” case is rather to show that the proposed algorithm is robust over a large range of system configurations and terminal conditions.
Conclusion
The aim of this article was to show how to apply techniques of geometric optimal control and numerical continuation to aerospace problems. Some classical techniques of optimal control have been recalled, including the Pontryagin Maximum Principle, first- and second-order optimality conditions, and conjugate time theory. Techniques of geometric optimal control have then been recalled, such as higher-order optimality conditions and singular controls.
A quite difficult problem has been treated in detail to illustrate how to design an efficient solution method with the help of geometric optimal control tools and continuation methods. Some applications in space trajectory optimization have also been recalled.
Though geometric optimal control and numerical continuation provide a nice way to design efficient approaches for many aerospace applications, the answer to “how to select a reasonably simple problem for the continuation procedure” for general optimal control problems remains open. A deep understanding of the system dynamics is necessary to devise a simple problem that is “physically” sufficiently close to the original problem, while being numerically suited to initiate a continuation procedure.
In practice, many problems remain difficult due to the complexity of real-life models. In general, a compromise should be found between the complexity of the model under consideration and the choice of an adapted numerical method.
As illustrated by the example of airborne launch vehicles, many state and/or control constraints should also be considered in a real-life problem, and such constraints make the problem much more difficult. For the airborne launch problem, a penalization method combined with the previous geometric analysis proves satisfactory, but this approach has to be customized to the specific problem under consideration. A challenging task is then to combine an adapted numerical approach with a thorough geometric analysis in order to get more information on the optimal synthesis. We refer the reader to [85] for a summary of open challenges in aerospace applications.
Endnotes
1 Given any x∈M, \(T^{\ast }_{x} M\) is the cotangent space to M at x.
2 If the final time \(t_f\) is fixed, then \(\bar {x}(\cdot)\) is said to be locally optimal in \(L^\infty\) topology (resp. in \(C^0\) topology) if it is optimal in a neighborhood of u in \(L^\infty\) topology (resp. in a neighborhood of \(\bar {x}(\cdot)\) in \(C^0\) topology).
If the final time t f is not fixed, then a trajectory \(\bar {x}(\cdot)\) is said to be locally optimal in L ∞ topology if, for every neighborhood V of u in L ∞([0,t f +ε],U), for every real number η so that |η|≤ε, for every control v∈V satisfying E(x 0,t f +η,v)=E(x 0,t f ,u) there holds C(t f +η,v)≥C(t f ,u). Moreover, a trajectory \(\bar {x}(\cdot)\) is said to be locally optimal in C 0 topology if, for every neighborhood W of \(\bar {x}(\cdot)\) in M, for every real number η so that |η|≤ε, for every trajectory x(·), associated to a control v∈V on [0,t f +η], contained in W, and satisfying \(x(0) = \bar {x}(0) = x_{0}\), \(x(t_{f}+\eta) = \bar {x}(t_{f})\), there holds C(t f +η,v)≥C(t f ,u).
3 meaning that in some coordinates, for λ∈[0,1], the path consists in a convex combination of the simpler problem and of the original problem
4 There, the end-point mapping has been implemented with the exponential mapping \(E_{x_{0},t_{f},\lambda }(u)=\exp _{x_{0},\lambda }(t_{f},p_{0})\) with initial condition (x(0),p(0))=(x 0,p 0).
5 Let \(v_{1}, \cdots, v_{j+1} \in \mathbb {R}^{N+1}\), j≤N+1, be affinely independent points, i.e., such that \(v_k-v_1\), k=2,⋯,j+1, are linearly independent. A j-simplex in \(\mathbb {R}^{N+1}\) is defined as the convex hull of the set \(\{v_1,\cdots,v_{j+1}\}\). The convex hull of any subset \(\{w_1,\cdots,w_{r+1}\} \subset \{v_1,\cdots,v_{j+1}\}\) is an r-face.
References
Agrachev, A.A, Sachkov, Y.L: Control theory from the geometric viewpoint. Encyclopedia of Mathematical Sciences, 87. Control Theory and Optimization, II. Springer-Verlag, Berlin (2004).
Agrachev, A., Sarychev, A.: On abnormal extremals for Lagrange variational problems. J. Math. Syst. Estim. Control. 1(8), 87–118 (1998).
Allgower, E., Georg, K.: Numerical continuation methods, Vol. 13. Springer-Verlag, Berlin (1990).
Allgower, E., Georg, K.: Piecewise linear methods for nonlinear equations and optimization. J Comput. Appl. Math. 124(1), 245–261 (2000).
Bischof, C., Carle, A., Khademi, P., Mauer, A.: Adifor 2.0: Automatic differentiation of Fortran 77 programs. IEEE Comput. Sci. Eng. 3, 18–32 (1996).
Betts, J.T: Survey of numerical methods for trajectory optimization. J. Guid. Control Dyn. 21, 193–207 (1998).
Betts, J.T: Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, 2nd edn. In: Advances in Design and Control, SIAM, Philadelphia (2010).
Bérend, N., Bonnans, F., Haddou, M., Laurent-Varin, J., Talbot, C.: An interior-point approach to trajectory optimization. J. Guid. Control. Dyn. 30(5), 1228–1238 (2007).
Bock, H.G, Plitt, K.J: A multiple shooting algorithm for direct solution of optimal control problem. In: Proceedings 9th IFAC World Congress Budapest, pp. 243–247. Pergamon Press, Budapest, Hungary (1984).
Bolza, O.: Calculus of variations. Chelsea Publishing Co., New York (1973).
Bonnans, J.F, Hermant, A.: Stability and sensitivity analysis for optimal control problems with a first-order state constraint and application to continuation methods. ESAIM Control Optim. Calc. Var. 4(14), 825–863 (2008).
Bonnans, F., Laurent-Varin, J., Martinon, P., Trélat, E.: Numerical study of optimal trajectories with singular arcs for an Ariane 5 launcher. J. Guid. Control Dyn. 1(32), 51–55 (2009).
Bonnans, F., Martinon, P., Grélard, V.: Bocop - A collection of examples (2012). http://bocop.org/.
Bonnans, F., Martinon, P., Trélat, E.: Singular arcs in the generalized Goddard’s problem. J. Optim. Theory Appl. 2(139), 439–461 (2008).
Bonnard, B., Caillau, J.B, Trélat, E.: Second order optimality conditions in the smooth case and applications in optimal control, ESAIM Control. Optimisation Calc. Var. 13, 207–236 (2007).
Bonnard, B., Chyba, M.: The role of singular trajectories in control theory. Springer Verlag, New York (2003).
Bonnard, B., Faubourg, L., Launay, G., Trélat, E.: Optimal control with state constraints and the space shuttle re-entry problem. J. Dyn. Control. Syst. 2(9), 155–199 (2003).
Bonnard, B., Faubourg, L., Trélat, E.: Optimal control of the atmospheric arc of a space shuttle and numerical simulations by multiple-shooting techniques. Math. Models Methods Appl. Sci. 1(15), 109–140 (2005).
Bonnard, B., Faubourg, L., Trélat, E.: Mécanique céleste et contrôle des véhicules spatiaux. (French) [Celestial mechanics and the control of space vehicles] Mathématiques & Applications (Berlin) [Mathematics & Applications], 51, xiv+276 (2006).
Bonnard, B., Faubourg, L., Trélat, E.: Mécanique Céleste et Contrôle de Systèmes Spatiaux. In: Math. & Appl.,. Springer, Berlin (2006). XIV.
Bonnard, B., Trélat, E.: On the role of abnormal minimizers in SR-geometry. Ann. Fac. Sci. Toulouse (6). 10(3), 405–491 (2001).
Bonnard, B., Trélat, E.: Une approche géométrique du contrôle optimal de l’arc atmosphérique de la navette spatiale. ESAIM Control. Optim. Calc. Var. 7, 179–222 (2002).
Bryson, A.E, Ho, Y.: Applied optimal control: optimization, estimation and control. CRC Press, USA (1975).
Brunovský, P.: Every normal linear system has a regular time-optimal synthesis. Mathematica Slovaca. 28(1), 81–100 (1978).
Brunovský, P.: Existence of regular synthesis for general problems. J. Differ. Equ. 38, 317–343 (1980).
Caillau, J.B, Cots, O., Gergaud, J.: Differential continuation for regular optimal control problems. Optim. Meth. Softw. 2(27), 177–196 (2012).
Caillau, J.B, Daoud, B.: Minimum time control of the restricted three-body problem. SIAM J. Control. Optim. 50(6), 3178–3202 (2012).
Caillau, J.B, Gergaud, J., Noailles, J.: 3D geosynchronous transfer of a satellite: continuation on the thrust. J. Optim. Theory Appl. 3(118), 541–565 (2003).
Caillau, J.B, Noailles, J.: Continuous optimal control sensitivity analysis with AD, Automatic Differentiation: From Simulation to Optimization. Springer, Berlin (2002).
Caponigro, M., Piccoli, B., Rossi, F., Trélat, E.: Sparse Jurdjevic–Quinn stabilization of dissipative systems. To appear in Automatica; preprint hal-01397843 (2016).
Cerf, M., Haberkorn, T., Trélat, E.: Continuation from a flat to a round Earth model in the coplanar orbit transfer problem. Optim. Control. Appl. Meth. 33(6), 654–675 (2012).
Cesari, L.: Optimization – Theory and Applications. Problems with Ordinary Differential Equations, Applications of Mathematics, Vol. 17. Verlag, Springer (1983).
Chen, Z., Caillau, J.B, Chitour, Y.: \(L^1\)-minimization for mechanical systems. SIAM J. Control. Optim. 53(4) (2016).
Chitour, Y., Jean, F., Trélat, E.: Singular trajectories of control-affine systems. SIAM J. Control. Optim. 47, 1078–1095 (2008).
Chow, S.N, Mallet-Paret, J., Yorke, J.A: Finding zeros of maps: homotopy methods that are constructive with probability one. Math. Comput. 32, 887–899 (1978).
Clegern, J.B, Ostrander, M.J: Pegasus upgrades - A continuing study into an air-breathing alternative. In: 31st AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, San Diego (1995).
Corvin, M.A: Ascent guidance for a Winged Boost Vehicle, NASA CR-172083 (1988).
Crouch, P.: Spacecraft attitude control and stabilization: Applications of geometric control theory to rigid body models. IEEE Trans. Autom. Control. 4(29), 321–331 (1984).
Diehl, M., Leineweber, D., Schäfer, A.: MUSCOD-II Users' Manual. IWR-Preprint 2001-25, Universität Heidelberg (2001).
Denham, W.F, Bryson, A.E: Optimal programming problems with inequality constraints. 2. Solution by steepest-ascent. AIAA J. 1(2), 25–34 (1964).
Dukeman, G., Calise, A.J: Enhancements to an atmospheric ascent guidance algorithm. AIAA Pap. 5638, 1–7 (2003).
Dunn, J.C: Second-order optimality conditions in sets of L ∞ functions with range in a polyhedron. SIAM J. Control. Optim. 33, 1603–1635 (1995).
Frangos, C., Snyman, J.A: The application of parameter optimisation techniques to linear optimal control system design. Automatica. 28(1), 153–157 (1992).
Fuller, A.T: Study of an Optimum Non-Linear Control System. Int. J. Electron. 15(1), 63–71 (1963).
Gabasov, R., Kirillova, F.M: High order necessary conditions for optimality. SIAM J Control. 10(1), 127–168 (1972).
Garcia, C.B, Zangwill, W.I: An approach to homotopy and degree theory. Math. Oper. Res. 4(4), 390–405 (1979).
Gergaud, J.: Résolution numérique de problèmes de commande optimale à solution Bang-Bang par des méthodes homotopiques simpliciales. Ph.D. Thesis, ENSEEIHT. Institut National Polytechnique de Toulouse, France (1989).
Gergaud, J., Haberkorn, T., Martinon, P.: Low thrust minimum fuel orbital transfer: an homotopic approach. J. Guid. Control. Dyn. 27(6), 1046–1060 (2004).
Gergaud, J., Haberkorn, T.: Homotopy method for minimum consumption orbit transfer problem. ESAIM Control. Optim. Calc. Var. 2(12), 294–310 (2006).
Gerdts, M.: Optimal Control of ODEs and DAEs. De Gruyter, Berlin, 458 (2012).
Goh, B.S: Necessary conditions for singular extremals involving multiple control variables. SIAM J. Control. 4(4), 716–731 (1966).
Haberkorn, T., Trélat, E.: Convergence results for smooth regularizations of hybrid nonlinear optimal control problems. SIAM J. Control. Optim. 94, 1498–1522 (2011).
Hartl, R.F, Sethi, S.P, Vickson, R.G: A survey of the maximum principles for optimal control problems with state constraints. SIAM Rev. 37(2), 181–218 (1995).
Hermant, A.: Homotopy algorithm for optimal control problems with a second-order state constraint. Appl. Math. Optim. 1(61), 85–127 (2010).
Hermant, A.: Optimal control of the atmospheric reentry of a space shuttle by an homotopy method. Opt. Cont. Appl. Methods. 32, 627–646 (2011).
Kelley, H.J, Kopp, R.E, Moyer, H.G, Gardner, H.: Singular extremals. In: Leitmann, G. (ed.)Topics in Optimization, pp. 63–101. Academic Press, New York (1967).
Kirches, C.: A Numerical Method for Nonlinear Robust Optimal Control with Implicit Discretization. Thesis at University of Heidelberg (2006).
Krener, A.J: The high order maximal principle and its application to singular extremals. SIAM J. Control. Optim. 15, 256–293 (1977).
Lu, P., Forbes, S., Baldwin, M.: A versatile powered guidance algorithm. In: AIAA Guidance, Navigation, and Control Conference, p. 4843, San Diego (2012).
Maurer, H., Büskens, C., Kim, J.HR, Kaya, C.Y: Optimization methods for the verification of second-order sufficient conditions for bang-bang controls. Optim. Control. Appl. Methods26, 129–156 (2005).
Maurer, H., Oberle, H.J: Second order sufficient conditions for optimal control problems with free final time: The Riccati approach. SIAM J. Control. Optim. 41, 380–403 (2002).
Maurer, H., Pickenhain, S.: Second-order sufficient conditions for optimal control problems with mixed control-state constraints. J. Optim. Theory Appl. 86, 649–667 (1995).
Marchal, C.: Chattering arcs and chattering controls. J. Optim. Theory Appl. 11, 441–468 (1973).
Markopoulos, N., Calise, A.: Near-optimal, asymptotic tracking in control problems involving state-variable inequality constraints. In: AIAA Guidance, Navigation and Control Conference, Monterey, California (1993).
Milyutin, A.A, Osmolovskii, N.P: Calculus of Variations and Optimal Control. Transl. Math. Monogr. 180, 159–172 (1999).
Moré, J., Sorensen, D., Hillstrom, K., Garbow, B.: The MINPACK project. In: Cowell, W. (ed.) Sources and Development of Mathematical Software. Prentice-Hall, Englewood Cliffs (1984).
Mosier, M.R, Harris, G.N, Whitmeyer, C.: Pegasus air-launched space booster payload interfaces and processing procedures for small optical payloads. In: International Society for Optics and Photonics, pp. 177–192, Orlando’91, Orlando (1991).
Osmolovskii, N.P, Lempio, F.: Transformation of quadratic forms to perfect squares for broken extremal. Set-Valued Anal. 10, 209–232 (2002).
Pesch, H.J: A practical guide to the solution of real-life optimal control problems. Control Cybern. 23(1/2), 7–60 (1994).
Pontryagin, L.S: The Mathematical Theory of Optimal Processes. Wiley-Interscience, New York (1962).
Rheinboldt, W.C: Numerical continuation methods: a perspective. J. Comput. Appl. Math. 1(124), 229–244 (2000).
Robbins, H.M: Optimality of intermediate-thrust arcs of rocket trajectories. AIAA J. 6(3), 1094–1098 (1965).
Roble, N.R, Petters, D.P, Fisherkeller, K.J: Further exploration of an airbreathing Pegasus alternative. In: Joint Propulsion Conference and Exhibit, Monterey (1993).
Schättler, H., Ledzewicz, U.: Geometric optimal control: theory, methods and examples, vol. 38. Springer Science & Business Media, New York (2012).
Sarigul-Klijn, N., Noel, C., Sarigul-Klijn, M.: Air launching earth-to-orbit vehicles: Delta V gains from launch conditions and vehicle aerodynamics. AIAA. 872, 1–9 (2004).
Sarigul-Klijn, N., Sarigul-Klijn, M., Noel, C.: Air-launching earth to orbit: effects of launch conditions and vehicle aerodynamics. J. Spacecr. Rocket. 3(42), 569–575 (2005).
Sethian, J.A: Level set methods and fast marching methods: evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science. Cambridge university press, Cambridge (1999).
Silva, C., Trélat, E.: Smooth regularization of bang-bang optimal control problems. IEEE Trans. Autom. Control. 11(55), 2488–2499 (2010).
Silva, C.J, Trélat, E.: Asymptotic approach on conjugate points for minimal time bang-bang controls. Syst. Control Lett. 11(59), 720–733 (2010).
Snyman, J.A, Stander, N., Roux, W.J: A dynamic penalty function method for the solution of structural optimization problems. Appl. Math. Model. 18(8), 453–460 (1994).
Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis. Translated from the German by R. Bartels, W. Gautschi and C. Witzgall. 2nd edn. Texts in Applied Mathematics, Vol. 12. Springer-Verlag, New York (1993).
Trélat, E.: Some properties of the value function and its level sets for affine control systems with quadratic cost. J. Dyn. Control. Syst. 6(4), 511–541 (2000).
Trélat, E.: Optimal control of a space shuttle, and numerical simulations. Dynamical systems and differential equations (Wilmington, NC, 2002). Discrete Contin. Dyn. Syst. suppl, 842–851 (2003).
Trélat, E.: Contrôle optimal : théorie & applications. Vuibert, Paris (2005).
Trélat, E.: Optimal control and applications to aerospace: some results and challenges. J. Optim. Theory Appl. 3(154), 713–758 (2012).
Wächter, A., Biegler, L.T: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106, 25–57 (2006). https://projects.coin-or.org/Ipopt.
Watson, L.T: Theory of globally convergent probability-one homotopies for nonlinear programming. SIAM J. Optim. 3(11), 761–780 (2001).
Wonham, W.M: Note on a problem in optimal non-linear control. J. Electron. Control. 15, 59–62 (1963).
Zeidan, V.: The Riccati equation for optimal control problems with mixed state-control constraints: Necessity and sufficiency. SIAM J. Control. Optim. 32, 1297–1321 (1994).
Zelikin, M.I, Borisov, V.F: Theory of Chattering Control, with Applications to Astronautics, Robotics, Economics and Engineering, Chapter 2. Birkhäuser, Boston (1994).
Zelikin, M.I, Borisov, V.F: Optimal chattering feedback control. J. Math. Sci. 114(3), 1227–1344 (2003).
Zhu, J., Trélat, E., Cerf, M.: Minimum time control of the rocket attitude reorientation associated with orbit dynamics. SIAM J. Control. Optim. 1(54), 391–422 (2016).
Zhu, J., Trélat, E., Cerf, M.: Planar tilting maneuver of a spacecraft: singular arcs in the minimum time problem and chattering. Discrete Contin. Dynam. Syst. Ser. B 21(4), 1347–1388 (2016).
Acknowledgment
The second author acknowledges the support by FA9550-14-1-0214 of the EOARD-AFOSR.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhu, J., Trélat, E. & Cerf, M. Geometric optimal control and applications to aerospace. Pac. J. Math. Ind. 9, 8 (2017). https://doi.org/10.1186/s40736-017-0033-4