Time Optimal Linear Control
August 20, 2023 | Jean-Baptiste Bouvier
During my PhD I came across the problem of determining the minimal time for a linear system to reach a given state. This is a straightforward optimal control problem, studied and solved several decades ago. However, I was surprised to find that there is no closed-form solution for this minimal time, despite the apparently simple framework.
Framework
Consider the following time-invariant linear system with unit bounded inputs: \begin{equation}\label{eq: dynamics} \dot x(t) = Ax(t) + Bu(t), \quad u(t) \in \mathcal{U}, \quad x(0) = x_0 \in \mathbb{R}^n, \end{equation} where the set of admissible inputs is $\mathcal{U} := [-1, 1]^m$, and matrices $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are constant.
Our objective is to determine the minimal time necessary to reach the origin $x = 0 \in \mathbb{R}^n$ from the initial state $x_0$ under dynamics \eqref{eq: dynamics}. More specifically, we want to calculate \begin{equation}\label{eq: T_N^*} T_N^*(x_0) := \underset{u\, \in\, \mathcal{F}(\mathcal{U})}{\inf} \big\{ T \geq 0 : x(T) = 0\ \text{for dynamics \eqref{eq: dynamics}} \big\}, \end{equation} where $\mathcal{F}(\mathcal{U})$ is the set of functions of time taking values in $\mathcal{U}$.
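As a sanity check on this definition, here is a minimal numerical sketch (my own illustration, not taken from any of the works discussed below) that estimates $T_N^*(x_0)$ for a double integrator. It discretizes the input with a zero-order hold, tests via a feasibility linear program whether the origin is exactly reachable within a horizon $T$, and bisects on $T$. The system matrices, grid size, and tolerance are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import linprog

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x1' = x2, x2' = u
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
n, m = B.shape

def origin_reachable(T, N=200):
    """Is x(T) = 0 achievable from x0 with piecewise-constant |u| <= 1?"""
    dt = T / N
    Ad = expm(A * dt)                                     # exact drift over one step
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * dt)
    Bd = M[:n, n:]                                        # zero-order-hold input matrix
    # x(T) = Ad^N x0 + sum_k Ad^(N-1-k) Bd u_k  -->  stack the inputs into one LP.
    G = np.hstack([np.linalg.matrix_power(Ad, N - 1 - k) @ Bd for k in range(N)])
    target = -np.linalg.matrix_power(Ad, N) @ x0
    # Feasibility LP: find u with G u = target and -1 <= u <= 1 (zero objective).
    res = linprog(c=np.zeros(N * m), A_eq=G, b_eq=target,
                  bounds=[(-1.0, 1.0)] * (N * m), method="highs")
    return res.success

# Bisection on the smallest horizon T for which the origin is reachable.
lo, hi = 0.0, 10.0
while hi - lo > 1e-3:
    mid = 0.5 * (lo + hi)
    if origin_reachable(mid):
        hi = mid
    else:
        lo = mid
print(f"estimated minimal time: {hi:.3f}")   # time-optimal value is 2 for this x0
```

Since piecewise-constant inputs form a subset of $\mathcal{F}(\mathcal{U})$, this estimate can only overestimate the true infimum; refining the grid tightens it.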
We will first review how this problem was historically solved before discussing some recent works on this topic.
History
Determining the minimal time to the origin was of particular interest during the 1950's, as discussed in the introduction of (LaSalle 1959). That work establishes that $T_N^*$ exists and that the optimal input $u^*$ achieving the infimum in \eqref{eq: T_N^*} is unique and bang-bang for normal and Hurwitz linear systems. The notion of normality relates to the structure of the reachable set and directly implies the uniqueness of time-optimal inputs (Rechtschaffen 1979).
However, the problem of synthesizing $u^*$ in a general setting was not solved until (Neustadt 1960), which derives an iterative method converging to the optimal control. Shortly after, (Ho 1962) establishes a method of successive approximations of $u^*$ designed for engineers in need of a practical procedure. Instead of directly solving the minimal time problem \eqref{eq: T_N^*}, (Ho 1962) recursively solves the easier problem of minimizing the norm of the final state at a given time. The smallest of these times at which the norm of the final state vanishes is the minimal stabilizing time $T_N^*$. The same year, (Eaton 1962) derives yet another algorithm for computing $u^*$, this one based on geometric intuition and hyperplanes.
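To make that idea concrete, here is a toy sketch in the same spirit (it is not Ho's actual successive-approximation scheme): for each fixed horizon $T$, the smallest achievable norm of the final state is computed as a bounded least-squares problem over the same zero-order-hold discretization as in the Framework sketch, and the first horizon at which this norm vanishes approximates $T_N^*$. The double-integrator matrices and the scanning grid are again illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import lsq_linear

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # same double integrator as above
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
n, m = B.shape

def min_final_norm(T, N=200):
    """Smallest achievable ||x(T)|| over piecewise-constant inputs |u| <= 1."""
    dt = T / N
    Ad = expm(A * dt)
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * dt)
    Bd = M[:n, n:]
    G = np.hstack([np.linalg.matrix_power(Ad, N - 1 - k) @ Bd for k in range(N)])
    b = -np.linalg.matrix_power(Ad, N) @ x0
    res = lsq_linear(G, b, bounds=(-1.0, 1.0))   # min ||G u - b|| s.t. |u| <= 1
    return np.linalg.norm(G @ res.x - b)

# Increase the horizon until the minimal final-state norm (numerically) vanishes.
for T in np.arange(0.2, 5.0, 0.2):
    if min_final_norm(T) < 1e-6:
        print(f"first horizon with x(T) = 0: T = {T:.1f}")   # expect about 2
        break
```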
Then, the work (Neustadt 1963) extends the bang-bang principle of (LaSalle 1959) to systems with nonlinear control but linear internal dynamics. More specifically, (Neustadt 1963) establishes the compactness of the reachable set of the dynamics $\dot x(t) = A(t)x(t) + \varphi(u,t)$ when the input set $\mathcal{U} \ni u(t)$ is compact, and $A$ and $\varphi$ are continuous.
In the following years, (Babunashvili 1964) derives another algorithm relying on gradient descent to compute $u^*$ for semilinear systems of the form $\dot x(t) = A(t)x(t) + B(t)u(t) + f(t)$. Most of the results cited so far required the normality of the dynamics to ensure the uniqueness of the optimal solution. That is where the work (Fujisawa et al. 1967) comes into play, since its algorithm to synthesize $u^*$ does not need normality.
The profusion of numerical methods to determine $u^*$ and $T_N^*$ is in fact due to the absence of a closed-form analytical description of these optimal quantities, as stated in the review (Athans 1966). The structure of the optimal input is known to be bang-bang, but the optimal switching times can only be described as the solutions of an optimization problem and hence have no closed-form expression.
Current State
Most of the algorithms discussed above are not very efficient computationally. These gradient-based iterative methods are very sensitive to the initial guess and usually exhibit poor convergence properties. To address this issue, more recent work has studied the single-input case and determined the optimal sequence of switching times of the bang-bang input $u^*$ (Grognard et al. 2003). This method yields an algorithm with much better convergence properties than the older approaches.
Research also continues on the theoretical front of time-optimal control of linear systems, as witnessed by (Romano et al. 2020), which studies the time-optimal transfer between two arbitrary non-zero states, a case that had not been solved previously.
Quick Summary
- The minimal time problem for linear systems with bounded inputs gained a lot of interest in the 1950's and 60's.
- Despite its apparent simplicity, this problem has no closed-form solution.
References
- M. Athans, The status of optimal control theory and applications for deterministic systems, IEEE Transactions on Automatic Control, p. 580-596, 1966.
- T. G. Babunashvili, The synthesis of linear optimal systems, Journal of the Society for Industrial and Applied Mathematics, Series A: Control, p. 261-265, SIAM, 1964.
- J. Eaton, An iterative solution to time-optimal control, Journal of Mathematical Analysis and Applications, p. 329-344, Academic Press, 1962.
- T. Fujisawa and Y. Yasuda, An iterative procedure for solving the time-optimal regulator problem, SIAM Journal on Control, p. 501-512, 1967.
- F. Grognard and R. Sepulchre, Computation of time-optimal switchings for linear systems with complex poles, 2003 European Control Conference (ECC), p. 2190-2195, 2003.
- Y. Ho, A successive approximation technique for optimal control systems subject to input saturation, Journal of Basic Engineering, p. 33-37, 1962.
- J. LaSalle, Time Optimal Control Systems, Proceedings of the National Academy of Sciences of the United States of America, p. 573-577, 1959.
- L. Neustadt, Synthesizing time optimal control systems, Journal of Mathematical Analysis and Applications, p. 484-493, Elsevier, 1960.
- L. Neustadt, The Existence of Optimal Controls in the Absence of Convexity Conditions, Journal of Mathematical Analysis and Applications, p. 110-117, 1963.
- E. Rechtschaffen, Unique winning policies for linear differential pursuit games, Journal of Optimization Theory and Applications, p. 629-658, Springer, 1979.
- M. Romano and F. Curti, Time-optimal control of linear time invariant systems between two arbitrary states, Automatica, p. 109151, Elsevier, 2020.