\mathcal{KL} and Lyapunov Approaches for Discrete-time Peak Computation Problems

Assalé Adjé
Laboratoire de Modélisation Pluridisciplinaire et Simulations (LAMPS)
Université de Perpignan Via Domitia
France
Abstract

In this paper, we propose a method to solve discrete-time peak computation problems (DPCPs for short). DPCPs are optimization problems that consist of maximizing a function over the reachable values set of a discrete-time dynamical system. The optimal value of a DPCP can be rewritten as the supremum of a sequence of optimal values of static optimization problems. Previous results provide general techniques for computing the supremum of a real sequence from a well-chosen pair consisting of a function strictly increasing and continuous on [0,1] and a scalar in (0,1). In this paper, we exploit the specific structure of the optimal value of the DPCP to construct such a pair from classical tools of stability theory: \mathcal{KL} certificates and Lyapunov functions.

1 Introduction

In this paper, we propose to solve discrete-time peak computation problems in finite-dimensional state spaces. A discrete-time peak computation problem can be viewed as a particular maximization problem in which the decision variable is constrained to belong to the reachable values set of a given discrete-time dynamical system. The objective function of this optimization problem is a function of the states. This particular maximization problem can model various situations in engineering, economics, physics, or biology. For example, in population dynamics, for a predator-prey system, one may be interested in the maximal size of each species, even if the system asymptotically oscillates between several equilibria. This problem boils down to maximizing each coordinate of the state variable separately over the system modeling the population dynamics.

Besides its importance in analyzing critical concrete situations, the literature on discrete-time peak computation problems in the communities of dynamical systems analysis and optimization is quite thin. Discrete-time peak computation problems appear for linear systems, where the objective function is the Euclidean norm, in [APS18]. This problem also appears in the analysis of stable linear systems with general quadratic objective functions in [Adj21] and in [Adj25a]. A more constrained version of the problem appears in [AG24], where the state variable is additionally constrained to live in a given set. Discrete-time peak computation problems have a continuous-time counterpart (see e.g., [MHS20]). Note that in [MHS20], the authors address the problem of verifying safety properties (see, e.g., [BYG17] for verification problems on discrete-time systems) for dynamical systems and formulate the safety verification problem as a continuous-time peak computation problem. Safety verification problems aim to prove that the trajectories of a system are contained in a safe set. A safety verification problem can be translated into a peak computation problem when the safe set is a sublevel set of a sufficiently regular function. In this case, the objective function of the peak computation problem is the function associated with the sublevel set. A safety analysis from the peak computation point of view is also proposed for switched systems in [Adj17] and for program analysis in [AGM15]. For the safety analysis of dynamical systems, one can think that a reachability analysis (see e.g., [RKML06] and references therein), equivalent to a feasibility analysis in our context, is sufficient to prove some properties of the dynamical system. However, maximality in our context is synonymous with criticality.
The optimal value in this situation can be viewed as the value that penalizes the system the most with respect to some criterion and characterizes the extreme conditions that the system could reach.

In [MHS20], the authors are interested in computing an upper bound of the peak optimal value in the continuous-time case. The techniques used are based on Sums-Of-Squares (SOS) and Linear Matrix Inequalities (LMI) formulations (the interested reader can consult [Las15] on these subjects). The same goal motivates the authors of [APS18]. They limit their study to linear systems and norm objective functions, for which they directly use semidefinite programming (see [BTN01] for further details on the subject). The purpose of the current paper is to develop a method for computing the exact value of a discrete-time peak computation problem. This purpose is shared by the authors of [AG24]. However, the latter work cannot be fairly compared with ours because of the additional constraint; moreover, the systems studied there are linear or piecewise linear, and the objective functions are linear.

The goal of the current paper is to provide a theoretical generalization of the works in [Adj21] and [Adj25a]. In [Adj21] and in [Adj25a], the resolution of discrete-time peak computation problems is restricted to stable affine systems and quadratic objective functions. In this paper, we consider wider classes of stable systems and general objective functions. The techniques developed in this paper combine the method proposed in [Adj25b] with the Lyapunov approach of [Adj25a]. In [Adj25b], the method aims to find the maximal term of a real sequence using homeomorphisms of convergent positive geometric sequences. In [Adj25a], homeomorphisms and geometric sequences are constructed from quadratic Lyapunov functions. In this paper, we also propose a construction of homeomorphisms and geometric sequences from \mathcal{KL} bounds. The use of such classical tools of dynamical systems stability theory (the interested reader can consult [Ela05, Chap. 4] for discrete-time stability notions) to solve an optimization problem may appear unnatural. It is, however, a consequence of a simple observation: the existence of a maximizer for a real sequence depends on its asymptotic behavior. More precisely, the existence of a maximizer relies on the finiteness of the limit superior and on the existence of a term strictly greater than the limit superior. Hence, tools for studying the asymptotic behavior of the system help establish the existence of a peak and characterize it. Although general stability properties can be difficult to obtain, asymptotic stability properties can be easily presented and understood from \mathcal{KL} bounds (e.g., [ATP10] or [NT04]), that is, the norms of the states are globally bounded from above by a \mathcal{KL} function. However, verifying that the \mathcal{KL} function is an upper bound on the norms of the states must be done for all integers, which can be difficult.
In contrast, Lyapunov functions are functional certificates for system stability. Some results guarantee the existence of Lyapunov functions: a system is globally asymptotically stable if and only if there exists a Lyapunov function (see, e.g., [Kha02]). Being a Lyapunov function is a timeless property that depends only on the dynamics. In a few advantageous situations, Lyapunov functions can be computed using numerical optimization solvers (see the survey [GH15]).

The paper is organized as follows. Section 2 presents the discrete-time peak computation problems addressed in this paper. In Section 2, we also present a running example that illustrates the techniques developed in this paper. Section 3 recalls the useful results obtained in [Adj25b]. In Section 3, we also provide basic definitions of comparison functions. Then, in Section 4, we develop an approach based on \mathcal{KL} bounds. In Section 5, we develop the approach using general Lyapunov functions. In these two sections, we explain how to use \mathcal{KL} and Lyapunov functions to solve a discrete-time peak computation problem. We also apply these techniques to the running example presented in Section 2. Section 6 concludes the paper and discusses future research.

Notations: \mathbb{R} stands for the set of reals, \mathbb{R}_{+} the set of nonnegative reals, \mathbb{R}^{*} the set of nonzero reals and \mathbb{R}_{+}^{*} the set of strictly positive reals. The vector space of vectors of d reals is denoted by \mathbb{R}^{d}. The set of natural integers is denoted by \mathbb{N} whereas \mathbb{N}^{*} denotes the set of nonzero natural integers. The norm \|\cdot\| stands for the Euclidean norm. Finally, \overline{B}(0,\alpha) denotes the closed ball of radius \alpha centered at zero, i.e., \overline{B}(0,\alpha):=\{x\in\mathbb{R}^{d}:\|x\|\leq\alpha\}.

2 Problem formulation

2.1 Problem formulation

Let us consider a discrete-time dynamical system (X^{\mathrm{in}},T) on \mathbb{R}^{d} where

  • X^{\mathrm{in}} is a nonempty subset of \mathbb{R}^{d} of initial conditions;

  • T is a nonzero self-map on \mathbb{R}^{d} that updates the state variable.

The system is autonomous; it evolves without any control, disturbance, or external input. In this paper, we are interested in the reachable values that maximize a given function. Then, given \varphi:\mathbb{R}^{d}\to\mathbb{R}, we define the discrete-time peak computation problem \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) as follows:

\begin{array}{ll}\operatorname*{Max}_{x,k}&\varphi(T^{k}(x))\\ \text{s.t.}&x\in X^{\mathrm{in}},\ k\in\mathbb{N}\end{array} (DPCP)

where T^{k} is the k-fold composition of T if k is nonzero, and the identity if k=0.

Like any optimization problem, solving a problem of the form (DPCP) requires computing

  • its optimal value, i.e., \sup\{\varphi(T^{k}(x)):x\in X^{\mathrm{in}},\ k\in\mathbb{N}\};

  • its optimal solutions, i.e., pairs (\overline{x},\overline{k})\in X^{\mathrm{in}}\times\mathbb{N} such that

    \varphi(T^{\overline{k}}(\overline{x}))=\sup\{\varphi(T^{k}(x)):x\in X^{\mathrm{in}},\ k\in\mathbb{N}\}.

An optimal solution (x,k) of \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) is thus composed of an initial condition x leading to a peak and the associated date k of peak realization.

Without loss of generality, we can assume that \varphi(0)=0. Indeed, for the problem \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi), the optimal value is equal to the sum of the optimal value of \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi-\varphi(0)) and \varphi(0). Moreover, the optimal solutions of \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) and \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi-\varphi(0)) are equal.

The optimal value of \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) can be described using a sequence of optimal values. Indeed, we can introduce a static classical optimization problem where k\in\mathbb{N} is fixed:

\begin{array}{ll}\operatorname*{Max}_{x}&\varphi(T^{k}(x))\\ \text{s.t.}&x\in X^{\mathrm{in}}\end{array} (P_{k})

Then, we introduce the sequence \nu=(\nu_{k})_{k\in\mathbb{N}} defined, for all k\in\mathbb{N}, by:

\nu_{k}:=\sup\{\varphi(T^{k}(x)):x\in X^{\mathrm{in}}\} (1)

and its supremum:

\nu_{\rm opt}:=\sup\{\nu_{k}:k\in\mathbb{N}\}. (2)

From these notations, \nu_{\rm opt} is equal to the optimal value of \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) whereas \nu_{k} is the optimal value of the static optimization problem (P_{k}). Solving \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) boils down to computing \nu_{\rm opt}, an integer \overline{k} such that \nu_{\overline{k}}=\nu_{\rm opt} and an optimal solution x\in X^{\mathrm{in}} of (P_{\overline{k}}). We concentrate our efforts on the computation of the integer \overline{k}, as the computation of an optimal solution x\in X^{\mathrm{in}} of (P_{\overline{k}}) relies on an optimization solver (when a suitable one is available). The approach proposed in this paper consists in computing a stopping integer for \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi).

Definition 1 (Stopping integer).

An integer K\in\mathbb{N} is a stopping integer for \operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) if, for the sequence \nu defined in Equation (1):

\nu_{\rm opt}=\max\{\nu_{k}:k=0,\ldots,K\}.

If there is no ambiguity, we will simply say that K is a stopping integer for \nu.

Obviously, any stopping integer K for \nu is greater than or equal to \overline{k}, and \overline{k} is the smallest stopping integer for \nu. Following [Adj25b], if the set \{j\in\mathbb{N}:\nu_{j}>\limsup_{n\to+\infty}\nu_{n}\} is nonempty, a stopping integer for \nu can be deduced from the formula:

\min_{k\in\mathbb{N},\ \nu_{k}>h(0)}\dfrac{\ln(h^{-1}(\nu_{k}))}{\ln(\beta)} (3)

where (h,\beta) is such that

  1. h:\mathbb{R}\to\mathbb{R} is strictly increasing and continuous on [0,1];

  2. \beta\in(0,1);

  3. for all k\in\mathbb{N}, \nu_{k}\leq h(\beta^{k});

  4. there exists k\in\mathbb{N} such that \nu_{k}>h(0).
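As a concrete illustration of formula (3), the sketch below evaluates the stopping-integer bound for a toy sequence dominated by an affine h; the sequence \nu and the pair (h,\beta) are illustrative assumptions, not taken from the paper.

```python
import math

# Hedged sketch of formula (3): h(s) = a*s + c is strictly increasing and
# continuous on [0, 1], beta lies in (0, 1), and the toy sequence nu
# satisfies nu_k <= h(beta**k) for every k (all values are assumptions).
a, c, beta = 2.0, 0.0, 0.5
h = lambda s: a * s + c
h_inv = lambda y: (y - c) / a            # inverse of h on [0, 1]
nu = lambda k: 1.0 if k == 0 else 1.5 * beta**k

# Formula (3): minimize ln(h^{-1}(nu_k)) / ln(beta) over k with nu_k > h(0).
candidates = [math.log(h_inv(nu(k))) / math.log(beta)
              for k in range(50) if nu(k) > h(0)]
stopping = math.floor(min(candidates))   # a stopping integer for nu
```

Here the minimum is attained at k = 0 and yields the stopping integer 1; maximizing nu over k = 0, ..., 1 indeed recovers its supremum 1.0.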

In this paper, we extend the techniques developed in [Adj25a] for stable affine systems and quadratic objective functions to nonlinear stable systems and nonquadratic objective functions. In [Adj25a], we construct (h,\beta) from quadratic Lyapunov functions. In this paper, we propose to construct (h,\beta) from any \mathcal{KL} stability certificate and any Lyapunov function associated with the system (X^{\mathrm{in}},T). The interested reader can consult [Kel15] and references therein for complementary readings about those tools from stability theory.

2.2 Practical limitations

From a theoretical perspective, the static optimization problems (P_{k}) are classical optimization problems; consequently, their study and resolution are relegated to optimization theory. For example, when \varphi\circ T^{k} is concave and X^{\mathrm{in}} is convex, the problem (P_{k}) is said to be convex, and numerous methods exist (see e.g., [BTN01, Ber15]); when \varphi\circ T^{k} is polynomial and X^{\mathrm{in}} is a basic semialgebraic set (see e.g., [AL11, Las15]), the problem (P_{k}) falls into the class of polynomial optimization problems.

In practice, the most critical challenge for the computation of \nu_{k} lies before the optimization procedure. Indeed, when T is nonlinear, it is difficult to obtain a closed form for T^{k}. For example, for logistic sequences, closed forms are only known in particular cases (see e.g., [GRR10]). Some specific difference equations or systems require intensive efforts to obtain closed forms (see e.g., [Els12, HR18, Ste14] and references therein). Alternatively, we could use an exact representation of T^{k}(X^{\mathrm{in}}) when X^{\mathrm{in}} is an infinite set. However, its computation is limited in practice. For example, in [MHL15], the authors approximate the polynomial image of a basic semialgebraic set using a sequence of semidefinite programs. The authors obtain the convergence, in the L^{1} sense, of the sequence of approximations to the polynomial image of the set.

Fortunately, when T is nonlinear, some situations make the computation of T^{k}(X^{\mathrm{in}}) "easier" in practice: for example, when X^{\mathrm{in}} is a finite set or when T is conjugate to a linear dynamics. Conjugate means that there exist a bijection B and a linear map, identified with its matrix A, such that for all x\in\mathbb{R}^{d}, T(x)=B^{-1}(AB(x)). (In the literature, the term topologically conjugate is employed, where the bijection and its inverse are required to be continuous; in our case, continuity is not mandatory from an optimization perspective.) In this case, we simply have, for all x\in\mathbb{R}^{d} and all k\in\mathbb{N}, T^{k}(x)=B^{-1}(A^{k}B(x)).
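The conjugacy identity can be illustrated with a small sketch; the maps below (a diagonal matrix A and the component-wise cube for B) are toy assumptions, not from the paper.

```python
import numpy as np

# Sketch of the conjugacy identity: if T(x) = B^{-1}(A B(x)) then
# T^k(x) = B^{-1}(A^k B(x)), so iterating T reduces to one matrix power.
A = np.diag([0.5, 0.25])             # linear part (illustrative assumption)
B = lambda x: x**3                   # component-wise cube: a bijection on R^2
B_inv = lambda y: np.cbrt(y)         # real cube root, valid for negatives

def T(x):
    return B_inv(A @ B(x))

def T_power(x, k):
    """Closed form for T^k via the conjugacy."""
    return B_inv(np.linalg.matrix_power(A, k) @ B(x))

x0 = np.array([1.0, -2.0])
x = x0.copy()
for _ in range(5):                   # five explicit applications of T
    x = T(x)
# x now agrees (up to rounding) with the single closed-form evaluation T_power(x0, 5)
```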

In this paper, the theoretical part does not require any finiteness assumption on T or X^{\mathrm{in}}; we only require the finiteness of \nu_{k}. In our numerical illustrations, to concentrate on the novelty and avoid entering too deep into optimization tools, we only consider finite initial condition sets X^{\mathrm{in}}. The consideration of more general initial condition sets in practice will be addressed in future works.

The developments made in this paper require real-valued sequences; hence, the only assumption made in this paper is as follows:

Assumption 1.

For all k\in\mathbb{N}, \nu_{k}=\sup\left\{\varphi(T^{k}(x)):x\in X^{\mathrm{in}}\right\} is finite.

A simple situation where Assumption 1 holds is when X^{\mathrm{in}} is a bounded set, T maps bounded sets to bounded sets, and \varphi is upper semicontinuous.

2.3 Running example

Throughout this paper, we experiment with our main tools on a nonlinear system found in [LHK14a]. The dynamics is defined as:

\mathsf{H}:\mathbb{R}^{2}\ni x\mapsto\dfrac{1}{8}\begin{pmatrix}\|x\|^{2}-1&-1\\ 1&\|x\|^{2}-1\end{pmatrix}x (4)

To write a discrete-time peak computation problem, we also need an initial conditions set and an objective function. Before discussing the choice of the initial conditions set and the objective functions, it should be noted that starting outside a specific ball makes the norm of the state variable blow up. First, by a simple calculation, we obtain, for all x\in\mathbb{R}^{2}:

\|\mathsf{H}(x)\|^{2}=\dfrac{1}{64}\|x\|^{2}\left(1+(\|x\|^{2}-1)^{2}\right).

Then, by introducing the following two functions:

\mathfrak{f}:\mathbb{R}_{+}\ni s\mapsto\dfrac{1}{64}s(1+(s-1)^{2})\ \text{ and }\ \mathfrak{g}:\mathbb{R}_{+}\ni s\mapsto\left\{\begin{array}{ll}\mathfrak{f}(s)/s&\text{if }s>0\\ 1/32&\text{if }s=0\end{array}\right. (5)

we get \|\mathsf{H}(x)\|^{2}=\mathfrak{f}(\|x\|^{2}) for all x\in\mathbb{R}^{2}. Now, as \|\mathsf{H}(x)\|^{2}/\|x\|^{2}=\mathfrak{g}(\|x\|^{2}) for all nonzero x\in\mathbb{R}^{2}, and since the assertion (\mathfrak{g}(s)<1 and s>0) is equivalent to s\in(0,\sqrt{63}+1), we conclude that \|\mathsf{H}(x)\|<\|x\| if and only if \|x\|^{2}\in(0,\sqrt{63}+1). Hence, we introduce:

\overline{\rho}:=\sqrt{63}+1\simeq 8.9373. (6)

Finally, we have, for all r\in(0,\overline{\rho}), \mathsf{H}(\overline{B}(0,\sqrt{r}))\subseteq\overline{B}(0,\sqrt{r}).
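These computations can be sanity-checked numerically; the sketch below verifies the norm identity and the contraction/expansion dichotomy around \overline{\rho} on random samples (the sampling range is an arbitrary choice).

```python
import math
import random

# Numerical sanity check (a sketch) for the running example: it verifies
# ||H(x)||^2 = f(||x||^2) with f(s) = s*(1 + (s-1)^2)/64, and the
# contraction/expansion dichotomy around rho_bar = sqrt(63) + 1.

def H(x1, x2):
    s = x1**2 + x2**2
    return ((s - 1) * x1 - x2) / 8.0, (x1 + (s - 1) * x2) / 8.0

f = lambda s: s * (1 + (s - 1)**2) / 64.0
rho_bar = math.sqrt(63) + 1          # ~ 8.9373

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-4, 4), random.uniform(-4, 4)
    s = x1**2 + x2**2
    y1, y2 = H(x1, x2)
    ns = y1**2 + y2**2
    assert math.isclose(ns, f(s), rel_tol=1e-9)   # ||H(x)||^2 = f(||x||^2)
    if 0 < s < rho_bar:
        assert ns < s                              # contraction inside
    elif s > rho_bar:
        assert ns > s                              # expansion outside
```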

In future examples, an initial conditions set will be a subset \mathsf{X} of \mathbb{R}^{2} satisfying

\sup\{\|x\|:x\in\mathsf{X}\}\in(0,\sqrt{\overline{\rho}}). (7)

The exact definition of \mathsf{X} will depend on the statements that we want to highlight.

An objective function is also required. In the examples, we will analyze the behavior of each coordinate separately; thus, the objective functions are the coordinate functions:

\pi_{1}:\mathbb{R}^{2}\ni x=(x_{1},x_{2})^{\intercal}\mapsto x_{1}\ \text{ and }\ \pi_{2}:\mathbb{R}^{2}\ni x=(x_{1},x_{2})^{\intercal}\mapsto x_{2}. (8)

For an initial condition set \mathsf{X} satisfying the condition (7), we will define two discrete-time peak computation problems \operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{1}) and \operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{2}). The optimal value of \operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{1}) is written as

\omega_{\mathsf{X},{\rm opt}}^{1}:=\sup_{k\in\mathbb{N}}\omega_{\mathsf{X},k}^{1}\ \text{ where }\ \omega_{\mathsf{X},k}^{1}:=\sup_{x\in\mathsf{X}}\pi_{1}(\mathsf{H}^{k}(x)). (9)

The optimal value of \operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{2}) is written similarly:

\omega_{\mathsf{X},{\rm opt}}^{2}:=\sup_{k\in\mathbb{N}}\omega_{\mathsf{X},k}^{2}\ \text{ where }\ \omega_{\mathsf{X},k}^{2}:=\sup_{x\in\mathsf{X}}\pi_{2}(\mathsf{H}^{k}(x)). (10)

The condition (7) for \mathsf{X} ensures that for all k\in\mathbb{N} and all x\in\mathsf{X}, \pi_{1}(\mathsf{H}^{k}(x))\leq\sup\{\|x\|:x\in\mathsf{X}\} and \pi_{2}(\mathsf{H}^{k}(x))\leq\sup\{\|x\|:x\in\mathsf{X}\}. This guarantees that Assumption 1 holds for the sequences

\omega_{\mathsf{X}}^{1}:=(\omega_{\mathsf{X},k}^{1})_{k\in\mathbb{N}}\ \text{ and }\ \omega_{\mathsf{X}}^{2}:=(\omega_{\mathsf{X},k}^{2})_{k\in\mathbb{N}}. (11)
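When \mathsf{X} is a finite set satisfying (7), the suprema defining \omega_{\mathsf{X},k}^{1} and \omega_{\mathsf{X},k}^{2} reduce to maxima over finitely many iterates of \mathsf{H}. A minimal sketch, with a hypothetical three-point initial set:

```python
# Sketch: for a finite initial set X (chosen here for illustration, with
# all squared norms below rho_bar), the static optima omega^1_k, omega^2_k
# of (9)-(10) are maxima over the k-th iterates of H.

def H(p):
    x1, x2 = p
    s = x1 * x1 + x2 * x2
    return (((s - 1) * x1 - x2) / 8.0, (x1 + (s - 1) * x2) / 8.0)

X = [(1.0, 0.5), (-0.8, 1.2), (0.3, -1.5)]   # hypothetical finite X

def omega(X, K):
    """Return the lists (omega^1_k)_{k<=K} and (omega^2_k)_{k<=K}."""
    pts, w1, w2 = list(X), [], []
    for _ in range(K + 1):
        w1.append(max(p[0] for p in pts))    # max of first coordinates
        w2.append(max(p[1] for p in pts))    # max of second coordinates
        pts = [H(p) for p in pts]            # advance every point one step
    return w1, w2

w1, w2 = omega(X, 10)
```

Since every initial point lies strictly inside the contraction ball, the iterates shrink in norm and the peaks are attained within the first few terms of w1 and w2.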

3 Preliminary results

Before considering the main results of this paper, we recall some useful theoretical results. First, in Subsection 3.1, we provide details about the formula (3) obtained in [Adj25b]. Second, in Subsection 3.2, we introduce useful material about comparison functions, which constitute the underlying tool behind \mathcal{KL} and Lyapunov stability.

3.1 Results on the supremum of a real sequence

The developments made in [Adj25b] concern general real sequences; in this subsection, we consider u\in\mathbb{R}^{\mathbb{N}} for which we aim to compute:

\sup_{n\in\mathbb{N}}u_{n}\ \text{ and }\ k\in\mathbb{N}\text{ s.t. }u_{k}=\sup_{n\in\mathbb{N}}u_{n} (12)

3.1.1 Notations and basic results

We introduce the subset of \mathbb{R}^{\mathbb{N}} consisting of sequences bounded from above:

\Lambda:=\{u=(u_{0},u_{1},\ldots)\in\mathbb{R}^{\mathbb{N}}:\sup_{n\in\mathbb{N}}u_{n}<+\infty\}.

For an element of \Lambda, we are interested in computing the supremum of its terms.

Proposition 1.

u\in\Lambda\iff\limsup_{n\to+\infty}u_{n}\in\mathbb{R}\cup\{-\infty\}.

For u\in\Lambda, we define \operatorname{Argmax}(u):=\{k\in\mathbb{N}:u_{k}=\sup_{n\in\mathbb{N}}u_{n}\} and we introduce, for u\in\Lambda, the two sets of ranks:

{\mathrm{R}}_{u}^{s}:=\left\{k\in\mathbb{N}:u_{k}>\limsup_{n\to+\infty}u_{n}\right\}\ \text{ and }\ \Delta^{s}_{u}:=\left\{k\in\mathbb{N}:\max_{0\leq j\leq k}u_{j}>\sup_{j>k}u_{j}\right\} (13)

and we define

\mathbf{K}^{s}_{u}:=\inf\Delta^{s}_{u}. (14)

We recall that the infimum of the empty set is equal to +\infty. Hence, \mathbf{K}^{s}_{u}<+\infty if and only if \Delta^{s}_{u}\neq\emptyset. Furthermore, if \Delta^{s}_{u} is nonempty, then \mathbf{K}^{s}_{u}\in\Delta^{s}_{u}.

Proposition 2.

Let u\in\Lambda. The following assertions hold:

  1. \Delta^{s}_{u}\neq\emptyset\iff{\mathrm{R}}_{u}^{s}\neq\emptyset;

  2. \Delta^{s}_{u}=\emptyset\iff\sup_{k\in\mathbb{N}}u_{k}=\limsup_{n\to+\infty}u_{n};

  3. If \Delta^{s}_{u}\neq\emptyset then \mathbf{K}^{s}_{u}=\max\operatorname{Argmax}(u).

3.1.2 A formula based on pairs of strictly increasing continuous functions and convergent geometric sequences

In this subsection, we present how to construct the formula (3). All details can be found in [Adj25b]. The key tool to compute the supremum of a sequence u\in\Lambda is the existence of a particular sequence (h(\beta^{k}))_{k\in\mathbb{N}} that is an upper bound on u. The particularity comes from the fact that h is strictly increasing and continuous on [0,1] and \beta\in(0,1). We then introduce the following set of real functions:

\Omega([0,1]):=\{h:\mathbb{R}\to\mathbb{R}\ \text{strictly increasing and continuous on }[0,1]\}.

It should be noted that a function that is strictly increasing and continuous on [0,1] can be defined on any set containing [0,1] and then extended as we want on the entire real line. Therefore, sometimes, we will partially define elements of \Omega([0,1]) on a set larger than [0,1] but not on the entire real line.

We also introduce, for u\in\mathbb{R}^{\mathbb{N}}, the following sets:

\Gamma(u):=\{(h,\beta)\in\Omega([0,1])\times(0,1):u_{k}\leq h(\beta^{k}),\ \forall\,k\in\mathbb{N}\}

and

\Gamma_{\rm fun}(u):=\{h\in\Omega([0,1]):\exists\,\beta\in(0,1)\text{ s.t. }(h,\beta)\in\Gamma(u)\}.

Finally, we introduce, for u\in\mathbb{R}^{\mathbb{N}} and h\in\Gamma_{\rm fun}(u):

\Gamma_{\rm sc}(u,h):=\{\beta\in(0,1):(h,\beta)\in\Gamma(u)\}.

As all elements of \Gamma_{\rm fun}(u) are strictly increasing and continuous on [0,1], they have an inverse on [0,1]. For the sake of simplicity, we will simply denote by h^{-1} the inverse of h\in\Omega([0,1]). Moreover, the fact that the elements of \Gamma_{\rm fun}(u) are strictly increasing and continuous allows for simple properties.

Proposition 3.

Let u\in\Lambda. Then:

  1. For all h\in\Gamma_{\rm fun}(u), we have \limsup_{k\to+\infty}u_{k}\leq h(0);

  2. For all k\in\mathbb{N}^{*}, u_{k}<h(1) and u_{0}\leq h(1);

  3. If there exists h\in\Gamma_{\rm fun}(u) such that u_{0}=h(1) then u_{0}=\max_{k\in\mathbb{N}}u_{k}.

We can also fully characterize the set \Lambda from the nonemptiness of \Gamma(u).

Proposition 4.

u\in\Lambda if and only if \Gamma(u)\neq\emptyset.

In the formula (3), the natural logarithm is defined when 0<h^{-1}(\nu_{k}), which is equivalent to h(0)<\nu_{k}. Then, we must ensure that the function h\in\Gamma_{\rm fun}(\nu) satisfies h(0)<\nu_{k} for some k\in\mathbb{N}. This motivates the notion of usefulness.

Definition 2 (Useful strictly increasing continuous functions).

Let u\in\Lambda such that {\mathrm{R}}_{u}^{s}\neq\emptyset. A function h\in\Gamma_{\rm fun}(u) is said to be useful for u if the set

\mathcal{S}(u,h):=\{k\in\mathbb{N}:u_{k}>h(0)\} (15)

is nonempty. By extension, (h,\beta)\in\Gamma(u) is useful for u if h is useful for u.

From the first statement of Proposition 3, if h is useful for u, there exists k\in\mathbb{N} such that u_{k}>h(0)\geq\limsup_{n\to+\infty}u_{n}, i.e., {\mathrm{R}}_{u}^{s}\neq\emptyset.

The following result shows that there exists an optimal useful affine function for u when {\mathrm{R}}_{u}^{s}\neq\emptyset.

Theorem 1 (Existence of a useful optimal affine function).

Let u\in\Lambda such that {\mathrm{R}}_{u}^{s}\neq\emptyset. Then, there exist a>0, b\in(0,1) and c\in\mathbb{R} such that:

  1. \{k\in\mathbb{N}:u_{k}>c\}\neq\emptyset;

  2. u_{k}\leq ab^{k}+c for all k\in\mathbb{N};

  3. u_{\mathbf{K}^{s}_{u}}=ab^{\mathbf{K}^{s}_{u}}+c.

Corollary 1.

Let u\in\Lambda such that {\mathrm{R}}_{u}^{s}\neq\emptyset. Then, there exists a pair (h,\beta)\in\Gamma(u) useful for u such that u_{\mathbf{K}^{s}_{u}}=h(\beta^{\mathbf{K}^{s}_{u}}).

Proposition 5.

Let u\in\Lambda. Then, {\mathrm{R}}_{u}^{s}\neq\emptyset if and only if there exists h\in\Gamma_{\rm fun}(u) useful for u.

Theorem 1 provides a theoretical result: having at hand an optimal function h\in\Gamma_{\rm fun}(u) useful for u is very unlikely. However, if we have any function h\in\Gamma_{\rm fun}(u), we can produce a stopping integer for u, which is, in fact, an upper bound of \mathbf{K}^{s}_{u}. To compute it, for u\in\mathbb{R}^{\mathbb{N}}, we introduce the function \mathfrak{F}_{u}:\mathbb{N}\times\Omega([0,1])\times(0,1)\to\mathbb{R}\cup\{+\infty\} defined for all k\in\mathbb{N}, h\in\Omega([0,1]) and \beta\in(0,1) by:

\mathfrak{F}_{u}(k,h,\beta):=\left\{\begin{array}{ll}\dfrac{\ln(h^{-1}(u_{k}))}{\ln(\beta)}&\text{ if }(h,\beta)\in\Gamma(u)\text{ and }k\in\mathcal{S}(u,h)\\ +\infty&\text{ otherwise}\end{array}\right. (16)
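The definition (16) translates directly into code. In the sketch below, the membership check for \Gamma(u) is only performed over a finite horizon, which is a practical shortcut rather than part of the definition, and the pair (h,\beta) and the sequence u are toy assumptions.

```python
import math

# Sketch of F_u from (16). The membership test for Gamma(u) is
# approximated over a finite horizon; a sound implementation would
# certify u_j <= h(beta**j) for all j analytically.

def F_u(u, k, h, h_inv, beta, horizon=1000):
    if any(u(j) > h(beta**j) for j in range(horizon)):
        return math.inf                  # (h, beta) not in Gamma(u)
    if u(k) <= h(0):
        return math.inf                  # k not in S(u, h)
    return math.log(h_inv(u(k))) / math.log(beta)

# Toy data (illustrative assumptions): h affine, u dominated by h(0.5**k).
h = lambda s: 2.0 * s
h_inv = lambda y: y / 2.0
u = lambda k: 1.0 if k == 0 else 1.5 * 0.5**k
val = F_u(u, 0, h, h_inv, 0.5)           # finite, equals 1.0
```

With \beta = 0.4 instead of 0.5, the domination u_k \leq h(\beta^k) fails and the same call returns +\infty, as (16) prescribes.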

3.1.3 Properties of \mathfrak{F}_{u} and the computation of the supremum of a sequence

The function defined in (16) has useful properties for computing a stopping integer for u, as shown in Theorem 2. To obtain the results of Theorem 2, we need auxiliary results.

Proposition 6.

Let us take a pair (h,\beta)\in\Gamma(u) useful for u and let k\in\mathcal{S}(u,h). The following statements are true:

  1. \mathfrak{F}_{u}(k,h,\beta) is well-defined and strictly positive if u_{0}<h(1), and null if k=0 and u_{0}=h(1);

  2. \lfloor\mathfrak{F}_{u}(k,h,\beta)\rfloor+1=\min\{j\in\mathbb{N}:h(\beta^{j})<u_{k}\}=\min\{j\in\mathbb{N}:\beta^{j}<h^{-1}(u_{k})\};

  3. \mathfrak{F}_{u}(k,h,\beta)\geq k;

  4. For all j\in\mathbb{N} such that j>\mathfrak{F}_{u}(k,h,\beta), u_{j}<u_{k};

  5. Let j\in\mathcal{S}(u,h). If u_{j}\leq u_{k} then \mathfrak{F}_{u}(k,h,\beta)\leq\mathfrak{F}_{u}(j,h,\beta).

Proposition 7.

Let u\in\Lambda such that {\mathrm{R}}_{u}^{s}\neq\emptyset. The following properties hold:

  1. Let h\in\Gamma_{\rm fun}(u) and k\in\mathcal{S}(u,h). Let \beta,\beta^{\prime}\in\Gamma_{\rm sc}(u,h) such that \beta\leq\beta^{\prime}. Then \mathfrak{F}_{u}(k,h,\beta)\leq\mathfrak{F}_{u}(k,h,\beta^{\prime}).

  2. Let g,h\in\Gamma_{\rm fun}(u). Let k\in\mathcal{S}(u,\min\{g,h\}). Let \beta\in\Gamma_{\rm sc}(u,\min\{g,h\}). Then \mathfrak{F}_{u}(k,\min\{g,h\},\beta)\leq\min\{\mathfrak{F}_{u}(k,g,\beta),\mathfrak{F}_{u}(k,h,\beta)\}.

The first statement of Proposition 7 can be read as follows: we must choose the smallest possible \beta\in\Gamma_{\rm sc}(u,h) to reduce the approximation gap. The second statement has a similar meaning in a functional sense: if we have several functions in \Gamma_{\rm fun}(u), we must consider their pointwise minimum to reduce the overapproximation gap.

Proposition 8.

Let u\in\Lambda and (h,\beta)\in\Gamma(u). Then, for all j\in\mathbb{N}, \mathbf{K}^{s}_{u}\leq\lfloor\mathfrak{F}_{u}(j,h,\beta)\rfloor. Moreover, if h is useful for u, we have:

\min_{k\in\mathcal{S}(u,h)}\mathfrak{F}_{u}(k,h,\beta)=\mathfrak{F}_{u}(\mathbf{K}^{s}_{u},h,\beta).

We can go further than the results of Proposition 8 to obtain a solution to problem (12).

Theorem 2 (Solution for (12)).

Let u\in\Lambda with {\mathrm{R}}_{u}^{s}\neq\emptyset. Let (h,\beta)\in\Gamma(u) be useful for u and let k\in\mathcal{S}(u,h). Then:

\max_{n\in\mathbb{N}}u_{n}=\max\{u_{k}:k=0,\ldots,\lfloor\mathfrak{F}_{u}(\mathbf{K}^{s}_{u},h,\beta)\rfloor\}.

Theorem 3 (Formula optimality).

For all u\in\mathbb{R}^{\mathbb{N}}, we have the following formula:

\mathbf{K}^{s}_{u}=\inf_{(h,\beta)\in\Gamma(u)}\inf_{k\in\mathcal{S}(u,h)}\mathfrak{F}_{u}(k,h,\beta)=\inf_{(h,\beta)\in\Gamma(u)}\inf_{k\in\mathcal{S}(u,h)}\dfrac{\ln(h^{-1}(u_{k}))}{\ln(\beta)}

and the \inf can be replaced by \min when {\mathrm{R}}_{u}^{s}\neq\emptyset.

We conclude this subsection with a summary in the form of an algorithm (Algorithm 1) to solve the problem (12).

Input: u ∈ Λ with R_u^s ≠ ∅ and (h, β) ∈ Γ(u) useful
Output: u_opt and K_u = min Argmax(u)
begin
    k = 0; K = +∞; u_max = −∞; k_max = 0
    while k ≤ K do
        if k ∈ 𝒮(u, h) then
            if u_max < u_k then
                K = ⌊𝔉_u(k, h, β)⌋
                u_max = u_k
                k_max = k
        k = k + 1
    u_opt = u_max
    K_u = k_max
end
Algorithm 1 Resolution of the problem described in (12) with a useful (h, β) ∈ Γ(u)

Algorithm 1 terminates since hh is assumed to be useful, i.e., 𝒮(u,h)\mathcal{S}(u,h)\neq\emptyset, which makes 𝔉u(k,h,β)\mathfrak{F}_{u}(k,h,\beta) finite for all k𝒮(u,h)k\in\mathcal{S}(u,h). Note also that the stopping bound 𝔉u(k,h,β)\mathfrak{F}_{u}(k,h,\beta) improves (decreases) each time the variable umaxu_{\rm max} is updated. Corollary 1 ensures that a useful function hh exists.
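To make the procedure concrete, here is a minimal Python sketch of Algorithm 1, under the specialization used later in the paper: h(y) = M√y (so h⁻¹(s) = (s/M)²), β = e⁻¹, and 𝒮(u, h) = {k : u_k > 0}. The sequence u below is a hypothetical example satisfying u_k ≤ h(β^k); none of these numerical choices come from the paper.

```python
import math

def solve_peak(u, h_inv, beta, in_S):
    """Algorithm 1: returns (u_opt, k_max) for a sequence u and a useful
    pair (h, beta), where F_u(k, h, beta) = ln(h^{-1}(u_k)) / ln(beta).
    Termination assumes (h, beta) is useful, i.e., S(u, h) is nonempty."""
    k, K = 0, math.inf
    u_max, k_max = -math.inf, 0
    while k <= K:
        if in_S(u(k)) and u_max < u(k):
            # update the stopping integer and the current maximum
            K = math.floor(math.log(h_inv(u(k))) / math.log(beta))
            u_max, k_max = u(k), k
        k += 1
    return u_max, k_max

# Hypothetical sequence bounded by h(e^{-k}) = M e^{-k/2}:
M = 2.0
u = lambda k: M * math.exp(-k / 2) * math.cos(k - 3)

u_opt, k_max = solve_peak(u,
                          h_inv=lambda s: (s / M) ** 2,
                          beta=math.exp(-1),
                          in_S=lambda uk: uk > 0)
```

Here the algorithm stops after a few terms and returns the peak u₃ = 2e^{-3/2}, even though the sequence is infinite.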

Returning to the discrete-time peak computation problem (DPCP), to compute a stopping integer for ν\nu (defined in (1)), we need to construct a pair (h,β)(h,\beta) useful for ν\nu.

Problem 1.

Construct a pair (h,β)Γ(ν)(h,\beta)\in\mathbb{\Gamma}(\nu) useful for ν\nu when Rνs{\mathrm{R}}_{\nu}^{s}\neq\emptyset.

The next sections of the paper focus on classical tools from dynamical systems stability theory, such as 𝒦\mathcal{K}\mathcal{L} certificates and Lyapunov functions. To properly define these tools, we need to introduce the concept of comparison functions.

3.2 Elements of comparison functions

The global asymptotic stability of discrete-time systems can be studied by means of comparison functions (see e.g., [GGLW14]). For a comprehensive literature review on comparison functions, interested readers can refer to  [Kel14]. In our context, as suggested in Subsection 3.1, the problem (DPCP) can be studied by analyzing the limit superior of the sequence ν\nu defined in (1). Comparison functions will be used to study its limit superior.

Definition 3 (𝒦\mathcal{K}/𝒦\mathcal{K}_{\infty} functions classes).

A function α:[0,a)+\alpha:[0,a)\mapsto\mathbb{R}_{+} for some a>0a>0 belongs to 𝒦\mathcal{K} if α\alpha is strictly increasing and continuous on [0,a)[0,a), and α(0)=0\alpha(0)=0. If moreover, a=+a=+\infty and limx+α(x)=+\lim_{x\to+\infty}\alpha(x)=+\infty, α\alpha is said to belong to 𝒦\mathcal{K}_{\infty}.

Definition 4 (\mathcal{L} functions class).

A function σ:++\sigma:\mathbb{R}_{+}\to\mathbb{R}_{+} belongs to \mathcal{L} if it is continuous, decreasing, and limx+σ(x)=0\lim_{x\to+\infty}\sigma(x)=0.

Definition 5 (𝒦\mathcal{K}\mathcal{L} functions class).

A function γ:+×++\gamma:\mathbb{R}_{+}\times\mathbb{R}_{+}\mapsto\mathbb{R}_{+} belongs to 𝒦\mathcal{K}\mathcal{L} if for all t+t\in\mathbb{R}_{+}, sγ(s,t)s\mapsto\gamma(s,t) belongs to 𝒦\mathcal{K} and if for all s+s\in\mathbb{R}_{+}, tγ(s,t)t\mapsto\gamma(s,t) belongs to \mathcal{L}.
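For instance, γ(s, t) = s e^{−t} belongs to 𝒦ℒ: each section s ↦ γ(s, t) is a 𝒦 function and each section t ↦ γ(s, t) is an ℒ function. The following sketch checks these monotonicity properties on sample grids (a numerical sanity check of this particular example, not a proof):

```python
import math

gamma = lambda s, t: s * math.exp(-t)

s_grid = [i / 10 for i in range(1, 50)]
t_grid = [j / 10 for j in range(50)]

# s -> gamma(s, t): strictly increasing, continuous, null at s = 0 (class K)
assert all(gamma(0.0, t) == 0.0 for t in t_grid)
assert all(gamma(s1, t) < gamma(s2, t)
           for t in t_grid for s1, s2 in zip(s_grid, s_grid[1:]))

# t -> gamma(s, t): decreasing with limit 0 at +infinity (class L)
assert all(gamma(s, t1) > gamma(s, t2)
           for s in s_grid for t1, t2 in zip(t_grid, t_grid[1:]))
assert gamma(1.0, 50.0) < 1e-20
```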

The class 𝒦\mathcal{K} is included in Ω([0,1])\Omega([0,1]) but 𝒦\mathcal{K} functions are strictly positive except at 0, where they are null. Hence, h𝒦Γfun(ν)h\in\mathcal{K}\cap\mathbb{\Gamma}_{\rm fun}(\nu) is useful for ν\nu if at least one term νk\nu_{k} is strictly positive. Thus, in this subsection, Assumption 2 below must be satisfied.

Assumption 2.

There exist kk\in\mathbb{N} and xXinx\in X^{\mathrm{in}} satisfying φ(Tk(x))>0\varphi(T^{k}(x))>0.

Let us consider the set of positive definite functions on a subset XdX\subseteq\mathbb{R}^{d} containing 0:

PD(X):={f:X+:f(x)=0x=0},\operatorname{PD}(X):=\{f:X\mapsto\mathbb{R}_{+}:f(x)=0\iff x=0\},

that is, the set of nonnegative functions on XX that vanish at 0 and nowhere else.

We also need useful results linking positive definite functions and comparison functions. A proof can be found in [Kha02] (extensions can be found in [Adj24]).

Proposition 9 (Lemma 4.3/Appendix C.4 [Kha02]).

Let XdX\subseteq\mathbb{R}^{d} be a set containing 0 in its interior. Let ff be a continuous function in PD(X)\operatorname{PD}(X). Let r>0r>0 be such that B¯(0,r)X\overline{B}(0,r)\subseteq X. Then, for all xB¯(0,r)x\in\overline{B}(0,r), we have

α¯f(x)f(x)α¯f(x)\underline{\alpha}_{f}(\|x\|)\leq f(x)\leq\overline{\alpha}^{f}(\|x\|) (17)

where:

α¯f:[0,r)sinfsxrf(x) and α¯f:[0,r)ssupxsf(x)\underline{\alpha}_{f}:[0,r)\ni s\mapsto\inf_{s\leq\|x\|\leq r}f(x)\quad\text{ and }\quad\overline{\alpha}^{f}:[0,r)\ni s\mapsto\sup_{\|x\|\leq s}f(x) (18)

and α¯f,α¯f𝒦\underline{\alpha}_{f},\overline{\alpha}^{f}\in\mathcal{K}.

If, moreover, X=dX=\mathbb{R}^{d} and f(x)f(x) tends to ++\infty as x\|x\| tends to ++\infty, then α¯f,α¯f𝒦\underline{\alpha}_{f},\overline{\alpha}^{f}\in\mathcal{K}_{\infty} and the inequality (17) holds for all xdx\in\mathbb{R}^{d}.

Even if \|\cdot\| represents the Euclidean norm, the result holds for any norm on d\mathbb{R}^{d}, since all norms on d\mathbb{R}^{d} are equivalent.
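The bounds (18) can be approximated on a grid. Below is a sketch in dimension d = 1 for the hypothetical positive definite function f(x) = |x|(1 + cos²x) on X = [−r, r]; the grid minimum and maximum stand in for the inf and sup in (18), and the sandwich inequality (17) is checked at grid points:

```python
import math

r = 2.0
f = lambda x: abs(x) * (1 + math.cos(x) ** 2)    # continuous, in PD([-r, r])

xs = [i * r / 1000 for i in range(-1000, 1001)]  # grid on the ball B(0, r)

def alpha_lower(s):
    """underline alpha_f(s) = inf over {s <= |x| <= r} of f(x) (grid version)"""
    return min(f(x) for x in xs if s <= abs(x) <= r)

def alpha_upper(s):
    """overline alpha^f(s) = sup over {|x| <= s} of f(x) (grid version)"""
    return max(f(x) for x in xs if abs(x) <= s)

# sandwich inequality (17): alpha_lower(|x|) <= f(x) <= alpha_upper(|x|)
for x in xs[::50]:
    s = abs(x)
    assert alpha_lower(s) - 1e-9 <= f(x) <= alpha_upper(s) + 1e-9
```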

4 Solving problem 1 from 𝒦\mathcal{K}\mathcal{L} bounds

In this section, we return to the resolution of DPCP(Xin,T,φ)\operatorname{DPCP}(X^{\mathrm{in}},T,\varphi) defined in (DPCP). We recall that the data are the following:

  • an initial conditions set (nonempty) XindX^{\mathrm{in}}\subseteq\mathbb{R}^{d};

  • a nonzero map T:ddT:\mathbb{R}^{d}\mapsto\mathbb{R}^{d};

  • a function φ:d\varphi:\mathbb{R}^{d}\mapsto\mathbb{R} such that φ(0)=0\varphi(0)=0.

We recall that we want to compute νopt=sup{νk:k}\nu_{\rm opt}=\sup\{\nu_{k}:k\in\mathbb{N}\} where νk=sup{φ(Tk(x)):xXin}\nu_{k}=\sup\{\varphi(T^{k}(x)):x\in X^{\mathrm{in}}\}. To use the results of Section 3 with the sequence ν=(νk)k\nu=(\nu_{k})_{k\in\mathbb{N}}, we suppose that Assumption 1 holds.

To simplify the notation of the supremum of an objective function f:df:\mathbb{R}^{d}\mapsto\mathbb{R} over a nonempty set XX, we introduce the following new notation:

fX¯:=sup{f(x):xX}\overline{f_{X}}:=\sup\{f(x):x\in X\}

and the set of functions that are bounded from above on XinX^{\mathrm{in}}:

Fin(Xin):={f:d:fXin¯<+}\operatorname{Fin}\left(X^{\mathrm{in}}\right):=\{f:\mathbb{R}^{d}\mapsto\mathbb{R}:\overline{f_{X^{\mathrm{in}}}}<+\infty\}

With these notations, νk=(φTk)Xin¯\nu_{k}=\overline{(\varphi\circ T^{k})_{X^{\mathrm{in}}}}, and Assumption 1 amounts to requiring that φTkFin(Xin)\varphi\circ T^{k}\in\operatorname{Fin}\left(X^{\mathrm{in}}\right) for all kk\in\mathbb{N}.

In Subsection 4.1, we show how to solve Problem 1 from a 𝒦\mathcal{K}\mathcal{L} upper bound of the images of the reachable values of the dynamical system (Xin,T)(X^{\mathrm{in}},T). Then, with some additional assumptions, we show that we can also solve Problem 1 from the classical 𝒦\mathcal{K}\mathcal{L} upper bound. Classical means that the upper bound is a 𝒦\mathcal{K}\mathcal{L} upper bound of the norm of the reachable values. The existence of such an upper bound is equivalent to the global asymptotic stability of the system. In Subsection 4.2, we apply the theoretical results obtained in Subsection 4.1 to the running example introduced in Subsection 2.3.

4.1 Theoretical constructions from 𝒦\mathcal{K}\mathcal{L} functions

We propose to combine 𝒦\mathcal{K}\mathcal{L} upper bounds with the celebrated result obtained by Sontag (recalled in Lemma 1) to construct a function hΓfun(ν)h\in\mathbb{\Gamma}_{\rm fun}(\nu) such that e1Γsc(ν,h)e^{-1}\in\mathbb{\Gamma}_{\rm sc}(\nu,h).

Lemma 1 (Proposition 7 of [Son98]).

For all γ𝒦\gamma\in\mathcal{K}\mathcal{L}, there exist θ1,θ2𝒦\theta_{1},\theta_{2}\in\mathcal{K}_{\infty} such that for all s,t+s,t\in\mathbb{R}_{+}, γ(s,t)θ1(θ2(s)et)\gamma(s,t)\leq\theta_{1}(\theta_{2}(s)e^{-t}).
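Sontag's result is not constructive in general, but for simple 𝒦ℒ functions the pair (θ₁, θ₂) can be guessed. For instance, for γ(s, t) = s e^{−t/2} (the bound that arises in the running example of Subsection 4.2), taking θ₁(y) = √y and θ₂(s) = s², both in 𝒦∞, gives θ₁(θ₂(s)e^{−t}) = √(s²e^{−t}) = s e^{−t/2} = γ(s, t), so the bound even holds with equality. A quick numerical check of this identity:

```python
import math

gamma  = lambda s, t: s * math.exp(-t / 2)   # a KL function
theta1 = lambda y: math.sqrt(y)              # K_infinity
theta2 = lambda s: s ** 2                    # K_infinity

for s in [0.0, 0.1, 0.5, 1.3, 4.0]:
    for t in [0.0, 0.5, 1.0, 7.0]:
        # Sontag's bound gamma(s,t) <= theta1(theta2(s) e^{-t}), here with equality
        assert abs(gamma(s, t) - theta1(theta2(s) * math.exp(-t))) < 1e-12
```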

In Subsection 3.1, we saw the importance of studying the limit superior of a real sequence to compute its maximal term. In general, this study is difficult. Here, we can exploit the particular structure of the sequence ν\nu and the fact that its terms originate from the reachable values of a dynamical system. The limit superior of ν\nu can be determined from the 𝒦\mathcal{K}\mathcal{L} upper bounds. Moreover, this functional approach provides natural functions hΓfun(ν)h\in\mathbb{\Gamma}_{\rm fun}(\nu).

Theorem 4 (Relaxed 𝒦\mathcal{K}\mathcal{L} construction).

Assume that there exist γ𝒦\gamma\in\mathcal{K}\mathcal{L} and a nonnegative function ψFin(Xin)\psi\in\operatorname{Fin}\left(X^{\mathrm{in}}\right) with ψXin¯>0\overline{\psi_{X^{\mathrm{in}}}}>0 such that, for all xXinx\in X^{\mathrm{in}} and all kk\in\mathbb{N},

φ(Tk(x))γ(ψ(x),k).\varphi(T^{k}(x))\leq\gamma(\psi(x),k)\kern 5.0pt. (19)

Then:

  1. 1.

    lim supn+νn0\limsup_{n\to+\infty}\nu_{n}\leq 0 and if, moreover, Assumption 2 holds then Rνs{\mathrm{R}}_{\nu}^{s}\neq\emptyset;

  2. 2.

    there exists h𝒦h\in\mathcal{K}_{\infty} s.t. (h,e1)Γ(ν)(h,e^{-1})\in\mathbb{\Gamma}(\nu) and, if, moreover, Assumption 2 holds then hh is useful for ν\nu.

Proof.

1. For all kk, the function sγ(s,k)s\mapsto\gamma(s,k) is strictly increasing, hence for all xXinx\in X^{\mathrm{in}} and all kk\in\mathbb{N}, γ(ψ(x),k)γ(ψXin¯,k)\gamma(\psi(x),k)\leq\gamma(\overline{\psi_{X^{\mathrm{in}}}},k). Taking the supremum over XinX^{\mathrm{in}}, we obtain νkγ(ψXin¯,k)\nu_{k}\leq\gamma(\overline{\psi_{X^{\mathrm{in}}}},k). We conclude by taking the lim sup\limsup as kk tends to ++\infty and recalling that γ(ψXin¯,k)\gamma(\overline{\psi_{X^{\mathrm{in}}}},k) tends to 0 (since tγ(s,t)t\mapsto\gamma(s,t)\in\mathcal{L} for all s+s\in\mathbb{R}_{+}). Finally, under Assumption 2, there exists kk\in\mathbb{N} such that νk>0lim supk+νk\nu_{k}>0\geq\limsup_{k\to+\infty}\nu_{k}.

2. This is a direct consequence of Sontag’s result (Lemma 1 here). Indeed, there exist θ1,θ2𝒦\theta_{1},\theta_{2}\in\mathcal{K}_{\infty} such that for all xXinx\in X^{\mathrm{in}} and for all kk\in\mathbb{N}, φ(Tk(x))γ(ψ(x),k)θ1(θ2(ψ(x))ek)\varphi(T^{k}(x))\leq\gamma(\psi(x),k)\leq\theta_{1}(\theta_{2}(\psi(x))e^{-k}). Then νkh(ek):=supxXinθ1(θ2(ψ(x))ek)\nu_{k}\leq h(e^{-k}):=\sup_{x\in X^{\mathrm{in}}}\theta_{1}(\theta_{2}(\psi(x))e^{-k}). Then, as θ1\theta_{1} and θ2\theta_{2} are increasing and continuous, hh can also be written +yθ1(θ2(ψXin¯)y)\mathbb{R}_{+}\ni y\mapsto\theta_{1}(\theta_{2}(\overline{\psi_{X^{\mathrm{in}}}})y). The function hh is thus the composition of θ1\theta_{1} and the linear function yθ2(ψXin¯)yy\mapsto\theta_{2}(\overline{\psi_{X^{\mathrm{in}}}})y, which both belong to 𝒦\mathcal{K}_{\infty} (the latter because θ2(ψXin¯)>0\theta_{2}(\overline{\psi_{X^{\mathrm{in}}}})>0, as ψXin¯>0\overline{\psi_{X^{\mathrm{in}}}}>0 and θ2𝒦\theta_{2}\in\mathcal{K}_{\infty}); hence hh belongs to 𝒦\mathcal{K}_{\infty}. Finally, under Assumption 2, there exists kk\in\mathbb{N} such that νk>0=h(0)\nu_{k}>0=h(0) and so hh is useful in this case.

Classically, i.e., when φ=\varphi=\|\cdot\|, the inequality φ(Tk(x))γ(ψ(x),k)\varphi(T^{k}(x))\leq\gamma(\psi(x),k) (for ψ=\psi=\|\cdot\|) is true for all xdx\in\mathbb{R}^{d} and all kk\in\mathbb{N} if and only if the system xk+1=T(xk),x0dx_{k+1}=T(x_{k}),x_{0}\in\mathbb{R}^{d} is uniformly globally asymptotically stable (UGAS) (see for example [NT04, Prop. 1] or [ATP10, Th. 2]). This implies that xkx_{k} tends to 0 as kk tends to ++\infty. Thus, with a general function φ\varphi, we obtain one-sided information on the asymptotic behavior of the dynamics. However, from Proposition 9, when φ\varphi is continuous and XinX^{\mathrm{in}} is bounded, the classical 𝒦\mathcal{K}\mathcal{L} certificate of stability is sufficient to construct a function hΓfun(ν)h\in\mathbb{\Gamma}_{\rm fun}(\nu) such that e1Γsc(ν,h)e^{-1}\in\mathbb{\Gamma}_{\rm sc}(\nu,h).

Theorem 5 (Classical 𝒦\mathcal{K}\mathcal{L} construction).

Suppose that φ\varphi is continuous and XinX^{\mathrm{in}} is bounded and not reduced to {0}\{0\}. Assume that there exists γ𝒦\gamma\in\mathcal{K}\mathcal{L} satisfying for all xXinx\in X^{\mathrm{in}} and for all kk\in\mathbb{N}:

Tk(x)γ(x,k).\|T^{k}(x)\|\leq\gamma(\|x\|,k)\kern 5.0pt. (20)

Then:

  1. 1.

    lim supn+νn0\limsup_{n\to+\infty}\nu_{n}\leq 0 and if, moreover, Assumption 2 holds then Rνs{\mathrm{R}}_{\nu}^{s}\neq\emptyset;

  2. 2.

    there exists h𝒦h\in\mathcal{K}_{\infty} s.t. (h,e1)Γ(ν)(h,e^{-1})\in\mathbb{\Gamma}(\nu) and, if, moreover, Assumption 2 holds then hh is useful for ν\nu.

Proof.

It suffices to obtain an inequality of the form of (19) from the inequality (20) and then to use Theorem 4.

Since φ\varphi is continuous, the function φ¯:dxmax{x,φ(x)}\overline{\varphi}:\mathbb{R}^{d}\ni x\mapsto\max\{\|x\|,\varphi(x)\} is continuous, belongs to PD(d)\operatorname{PD}(\mathbb{R}^{d}) (as φ(0)=0\varphi(0)=0) and tends to ++\infty as x\|x\| tends to ++\infty. Hence, from Proposition 9, we have φφ¯α¯φ¯\varphi\leq\overline{\varphi}\leq\overline{\alpha}^{\overline{\varphi}}\circ\|\cdot\| with α¯φ¯𝒦\overline{\alpha}^{\overline{\varphi}}\in\mathcal{K}_{\infty}. Then, from inequality (20), we obtain φ(Tk(x))α¯φ¯(Tk(x))α¯φ¯(γ(x,k))\varphi(T^{k}(x))\leq\overline{\alpha}^{\overline{\varphi}}(\|T^{k}(x)\|)\leq\overline{\alpha}^{\overline{\varphi}}(\gamma(\|x\|,k)) for all xXinx\in X^{\mathrm{in}}. Thus the inequality (19) holds with γ:=α¯φ¯γ𝒦\gamma:=\overline{\alpha}^{\overline{\varphi}}\circ\gamma\in\mathcal{K}\mathcal{L} and ψ:=Fin(Xin)\psi:=\|\cdot\|\in\operatorname{Fin}\left(X^{\mathrm{in}}\right), which satisfies ψXin¯>0\overline{\psi_{X^{\mathrm{in}}}}>0 (as XinX^{\mathrm{in}} is bounded and not reduced to {0}\{0\}). We conclude from Theorem 4.

Remark 1.

The inequality (20) forces TT to be zero at the origin. This also implies that all sequences (Tk(x))k(T^{k}(x))_{k} starting at xXinx\in X^{\mathrm{in}} converge to 0. The inequality (19) does not imply such properties for TT.

The approach using the functional upper bound detailed in (19) relies on Sontag’s result, which is not completely constructive. Therefore, in practice, it would be complicated to construct a function hΓfun(ν)h\in\mathbb{\Gamma}_{\rm fun}(\nu) from the γ\gamma and ψ\psi appearing in (19) for general systems. Fortunately, we can develop the latter approach to solve, for well-chosen initial condition sets 𝖷\mathsf{X}, the problems DPCP(𝖷,𝖧,πi)\operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{i}) presented in Subsection 2.3.

4.2 Application to the running example of Subsection 2.3

First, we recall that we want to compute, for a subset 𝖷\mathsf{X} satisfying the condition (7), ω𝖷,opt1\omega_{\mathsf{X},{\rm opt}}^{1} and ω𝖷,opt2\omega_{\mathsf{X},{\rm opt}}^{2}, which are respectively the supremum of the sequences ω𝖷1\omega_{\mathsf{X}}^{1} and ω𝖷2\omega_{\mathsf{X}}^{2} defined in (9) and in (10). We have to construct for all i{1,2}i\in\{1,2\}, a function hΓfun(ω𝖷i)h\in\mathbb{\Gamma}_{\rm fun}(\omega_{\mathsf{X}}^{i}) such that e1Γsc(ω𝖷i,h)e^{-1}\in\mathbb{\Gamma}_{\rm sc}(\omega_{\mathsf{X}}^{i},h). Actually, we will see that we can choose the same function hh for ω𝖷1\omega_{\mathsf{X}}^{1} and ω𝖷2\omega_{\mathsf{X}}^{2} and we can construct this function from a 𝒦\mathcal{K}\mathcal{L} upper bound. From this hh, we can define 𝔉ω𝖷i\mathfrak{F}_{\omega_{\mathsf{X}}^{i}} for all i{1,2}i\in\{1,2\}. Next, we explicitly define two finite initial condition sets, 𝖷\mathsf{X}. For each, we define two vectors with numerical values. Then, we apply Algorithm 1 to compute the supremum of ω𝖷1\omega_{\mathsf{X}}^{1} and ω𝖷2\omega_{\mathsf{X}}^{2} for the two finite initial condition sets.

4.2.1 Theoretical developments of the running example using a 𝒦\mathcal{K}\mathcal{L} upper bound function.

We observe, since:

𝔤(s)<e1 and s>00<s<64e11+1=:ρ¯,\mathfrak{g}(s)<e^{-1}\text{ and }s>0\iff 0<s<\sqrt{64e^{-1}-1}+1=:\underline{\rho},

that 𝖧(x)2<e1x2\|\mathsf{H}(x)\|^{2}<e^{-1}\|x\|^{2} if and only if 0<x2<ρ¯0<\|x\|^{2}<\underline{\rho}. Let r(0,ρ¯)r\in(0,\underline{\rho}) and {0}𝖷B¯(0,r)\{0\}\neq\mathsf{X}\subseteq\overline{B}(0,\sqrt{r}). Then, for i{1,2}i\in\{1,2\}, for all x𝖷x\in\mathsf{X} and all kk\in\mathbb{N}:

πi(𝖧k(x))𝖧k(x)2ek/2x.\pi_{i}(\mathsf{H}^{k}(x))\leq\sqrt{\|\mathsf{H}^{k}(x)\|^{2}}\leq e^{-k/2}\|x\|\kern 5.0pt.

Then we have πi(𝖧k(x))γ(ψ(x),k)\pi_{i}(\mathsf{H}^{k}(x))\leq\gamma(\psi(x),k) for all x𝖷x\in\mathsf{X} and all kk\in\mathbb{N}, where:

γ(s,t):=set and ψ=.\gamma(s,t):=s\sqrt{e^{-t}}\text{ and }\psi=\|\cdot\|\kern 5.0pt.

The function γ\gamma is clearly in 𝒦\mathcal{K}\mathcal{L} and ψ\psi is clearly nonnegative. As 𝖷\mathsf{X} is not reduced to {0}\{0\}, ψ𝖷¯\overline{\psi_{\mathsf{X}}} is strictly positive.

Assumption 2 holds for both π1\pi_{1} and π2\pi_{2}. Indeed, the existence of a 𝒦\mathcal{K}\mathcal{L} bound implies that for all x𝖷x\in\mathsf{X}, 𝖧k(x)\|\mathsf{H}^{k}(x)\| converges to 0. Moreover, if xx is a nonzero vector such that 𝖧k(x)\|\mathsf{H}^{k}(x)\| converges to 0, then for each i{1,2}i\in\{1,2\}, πi(𝖧k(x))\pi_{i}(\mathsf{H}^{k}(x)) is strictly positive for some integer kk; a proof is provided in the appendix.

As mentioned previously, in general, the functions θ1,θ2\theta_{1},\theta_{2} of Sontag’s result (Lemma 1 here) are not known explicitly. As we have obtained an inequality of the form (4) where e1e^{-1} appears explicitly, we have identified θ1,θ2\theta_{1},\theta_{2} and we can directly deduce a function h𝖷iΓfun(ω𝖷i)h_{\mathsf{X}}^{i}\in\mathbb{\Gamma}_{\rm fun}(\omega_{\mathsf{X}}^{i}) such that e1Γsc(ω𝖷i,h𝖷i)e^{-1}\in\mathbb{\Gamma}_{\rm sc}(\omega_{\mathsf{X}}^{i},h_{\mathsf{X}}^{i}). The same function can be chosen for i=1i=1 and i=2i=2 and we simply write h𝖷h_{\mathsf{X}} defined as follows:

h𝖷:+s𝖷¯s.h_{\mathsf{X}}:\mathbb{R}_{+}\ni s\mapsto\overline{\|\cdot\|_{\mathsf{X}}}\sqrt{s}\kern 5.0pt. (21)

As h𝖷(0)=0h_{\mathsf{X}}(0)=0, we have for i=1,2i=1,2, 𝒮(ω𝖷i,h𝖷)={k:ω𝖷,ki>0}\mathcal{S}(\omega_{\mathsf{X}}^{i},h_{\mathsf{X}})=\{k\in\mathbb{N}:\omega_{\mathsf{X},k}^{i}>0\}. It is straightforward to see that

h𝖷1:+s(s𝖷¯)2.h_{\mathsf{X}}^{-1}:\mathbb{R}_{+}\ni s\mapsto\left(\dfrac{s}{\overline{\|\cdot\|_{\mathsf{X}}}}\right)^{2}\kern 5.0pt.

Now, since for all i{1,2}i\in\{1,2\} we have at our disposal an element of Γ(ω𝖷i)\mathbb{\Gamma}(\omega_{\mathsf{X}}^{i}), we can solve DPCP(𝖷,𝖧,πi)\operatorname{DPCP}(\mathsf{X},\mathsf{H},\pi_{i}) using 𝔉ω𝖷i\mathfrak{F}_{\omega_{\mathsf{X}}^{i}} and Algorithm 1. A simple calculation leads to the following formula, for all i{1,2}i\in\{1,2\} and all kk such that ω𝖷,ki>0\omega_{\mathsf{X},k}^{i}>0:

\mathfrak{F}_{\omega_{\mathsf{X}}^{i}}(k,h_{\mathsf{X}},e^{-1})=\dfrac{\ln(h_{\mathsf{X}}^{-1}(\omega_{\mathsf{X},k}^{i}))}{\ln(e^{-1})}=2\ln(\overline{\|\cdot\|_{\mathsf{X}}})-2\ln(\omega_{\mathsf{X},k}^{i})\kern 5.0pt.

4.2.2 Numerical applications.

As discussed earlier, we consider two different finite initial condition sets:

  1. 1.

    𝖷a:={x1,x2}\mathsf{X}_{a}:=\{x^{1},x^{2}\} where x1:=(1.3,0.3)x^{1}:=(-1.3,-0.3)^{\intercal} and x2:=(1.1,0.8)x^{2}:=(-1.1,-0.8)^{\intercal};

  2. 2.

    𝖷b:={y1,y2}\mathsf{X}_{b}:=\{y^{1},y^{2}\} where y1:=(2.3,0.013)y^{1}:=(-2.3,0.013)^{\intercal} and y2:=(0.7,2.29)y^{2}:=(0.7,-2.29)^{\intercal}.

To simplify the notations, we write for all i{1,2}i\in\{1,2\}, ωai=(ωa,ki)k\omega_{a}^{i}=(\omega_{a,k}^{i})_{k\in\mathbb{N}} and ωbi=(ωb,ki)k\omega_{b}^{i}=(\omega_{b,k}^{i})_{k\in\mathbb{N}} instead of ω𝖷ai\omega_{\mathsf{X}_{a}}^{i} and ω𝖷bi\omega_{\mathsf{X}_{b}}^{i}. Recall that, for all i{a,b}i\in\{a,b\}, j{1,2}j\in\{1,2\} and kk:

ωi,kj:=supx𝖷iπj(𝖧k(x)).\omega_{i,k}^{j}:=\sup_{x\in\mathsf{X}_{i}}\pi_{j}(\mathsf{H}^{k}(x)).

As 𝖷a\mathsf{X}_{a} and 𝖷b\mathsf{X}_{b} are now explicit, we can refine the definition of the function h𝖷h_{\mathsf{X}}. We write hah_{a} for the function associated with 𝖷a\mathsf{X}_{a} and hbh_{b} for the one associated with 𝖷b\mathsf{X}_{b}. Thus, as

(𝖷a¯)2=max{x12,x22}=1.85 and (𝖷b¯)2=max{y12,y22}=5.7341(\overline{\|\cdot\|_{\mathsf{X}_{a}}})^{2}=\max\{\|x^{1}\|^{2},\|x^{2}\|^{2}\}=1.85\text{ and }(\overline{\|\cdot\|_{\mathsf{X}_{b}}})^{2}=\max\{\|y^{1}\|^{2},\|y^{2}\|^{2}\}=5.7341

we get:

ha1:ss21.85 and hb1:ss25.7341h_{a}^{-1}:s\mapsto\dfrac{s^{2}}{1.85}\text{ and }h_{b}^{-1}:s\mapsto\dfrac{s^{2}}{5.7341}

and we have, for all j{1,2}j\in\{1,2\} and all kk such that ωa,kj>0\omega_{a,k}^{j}>0 :

𝔉ωaj(k,ha,e1)=ln(1.85)2ln(max{πj(𝖧k(x1)),πj(𝖧k(x2))})\mathfrak{F}_{\omega_{a}^{j}}(k,h_{a},e^{-1})=\ln(1.85)-2\ln\left(\max\{\pi_{j}(\mathsf{H}^{k}(x^{1})),\pi_{j}(\mathsf{H}^{k}(x^{2}))\}\right)

and for all j{1,2}j\in\{1,2\} and all kk such that ωb,kj>0\omega_{b,k}^{j}>0:

𝔉ωbj(k,hb,e1)=ln(5.7341)2ln(max{πj(𝖧k(y1)),πj(𝖧k(y2))})\mathfrak{F}_{\omega_{b}^{j}}(k,h_{b},e^{-1})=\ln(5.7341)-2\ln\left(\max\{\pi_{j}(\mathsf{H}^{k}(y^{1})),\pi_{j}(\mathsf{H}^{k}(y^{2}))\}\right)

Then, we are able to apply Algorithm 1 to compute ωi,optj\omega_{i,{\rm opt}}^{j} and the maximal integer solution ki,maxjk_{i,{\rm max}}^{j} for i{a,b}i\in\{a,b\} and j{1,2}j\in\{1,2\}.

The case 𝖷a={x1,x2}\mathsf{X}_{a}=\{x^{1},x^{2}\} where x1=(1.3,0.3)x^{1}=(-1.3,-0.3)^{\intercal} and x2=(1.1,0.8)x^{2}=(-1.1,-0.8)^{\intercal}

Let us start with the initial conditions set 𝖷a\mathsf{X}_{a}.

To determine the number of images required for the search of the maximum, we need to compute 𝔉ωaj(k,ha,e1)\mathfrak{F}_{\omega_{a}^{j}}(k,h_{a},e^{-1}). However, this is possible only when ωa,kj>0\omega_{a,k}^{j}>0. So we have to wait for the first integer kk such that max{πj(𝖧k(x1)),πj(𝖧k(x2))}\max\{\pi_{j}(\mathsf{H}^{k}(x^{1})),\pi_{j}(\mathsf{H}^{k}(x^{2}))\} is strictly positive. The coordinates of x1x^{1} and x2x^{2} are negative, so we move on and compute:

𝖧(x1)=(357/4000767/4000) and 𝖧(x2)=(27/160089/400)\mathsf{H}(x^{1})=\begin{pmatrix}-357/4000\\ -767/4000\end{pmatrix}\text{ and }\mathsf{H}(x^{2})=\begin{pmatrix}-27/1600\\ -89/400\end{pmatrix}

The vectors still have negative coordinates, and we compute

𝖧2(x1)=(0.034630.01174) and 𝖧2(x2)(0.029820.02432).\mathsf{H}^{2}(x^{1})=\begin{pmatrix}0.03463\\ 0.01174\end{pmatrix}\text{ and }\mathsf{H}^{2}(x^{2})\simeq\begin{pmatrix}0.02982\\ 0.02432\end{pmatrix}.

The vectors 𝖧2(x1)\mathsf{H}^{2}(x^{1}) and 𝖧2(x2)\mathsf{H}^{2}(x^{2}) have strictly positive coordinates, and we can compute our stopping integers:

\mathfrak{F}_{\omega_{a}^{1}}(2,h_{a},e^{-1})=\ln(1.85)-2\ln\left(\max\{\pi_{1}(\mathsf{H}^{2}(x^{1})),\pi_{1}(\mathsf{H}^{2}(x^{2}))\}\right)\approx 7.3415
and
\mathfrak{F}_{\omega_{a}^{2}}(2,h_{a},e^{-1})=\ln(1.85)-2\ln\left(\max\{\pi_{2}(\mathsf{H}^{2}(x^{1})),\pi_{2}(\mathsf{H}^{2}(x^{2}))\}\right)\approx 8.0482

This means that the stopping integer KK in Algorithm 1 is set to 7 for ωa1\omega_{a}^{1} and to 8 for ωa2\omega_{a}^{2} and

ωa,opt1=max{ωa,k1:k{0,1,,7}} and ωa,opt2=max{ωa,k2:k{0,1,,8}}.\omega_{a,{\rm opt}}^{1}=\max\{\omega_{a,k}^{1}:k\in\{0,1,\ldots,7\}\}\text{ and }\omega_{a,{\rm opt}}^{2}=\max\{\omega_{a,k}^{2}:k\in\{0,1,\ldots,8\}\}.

Following Algorithm 1, we compute max{πj(𝖧k(x1)),πj(𝖧k(x2))}\max\{\pi_{j}(\mathsf{H}^{k}(x^{1})),\pi_{j}(\mathsf{H}^{k}(x^{2}))\} until kk reaches the stopping integer. As, during the process, we have not found values greater than ωa,2j\omega_{a,2}^{j}, we conclude that ωa,optj=ωa,2j\omega_{a,{\rm opt}}^{j}=\omega_{a,2}^{j} and ka,maxj=2k_{a,{\rm max}}^{j}=2 for all j{1,2}j\in\{1,2\}.
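The stopping values above can be reproduced from the computed iterates (the coordinates of 𝖧²(x¹) and 𝖧²(x²) are taken from the text, rounded to five decimals; only the formula 𝔉 = ln(1.85) − 2 ln ω is implemented here):

```python
import math

M2 = 1.85                      # squared norm bound over X_a
H2_x1 = (0.03463, 0.01174)     # H^2(x^1), rounded values from the text
H2_x2 = (0.02982, 0.02432)     # H^2(x^2), rounded values from the text

def F(omega_k, M2):
    """F = ln(M^2) - 2 ln(omega_k), i.e., ln(h^{-1}(omega_k)) / ln(e^{-1})."""
    return math.log(M2) - 2 * math.log(omega_k)

F1 = F(max(H2_x1[0], H2_x2[0]), M2)   # objective pi_1 at k = 2
F2 = F(max(H2_x1[1], H2_x2[1]), M2)   # objective pi_2 at k = 2

assert abs(F1 - 7.3415) < 2e-3 and math.floor(F1) == 7
assert abs(F2 - 8.0482) < 2e-3 and math.floor(F2) == 8
```

The small tolerance accounts for the rounding of the iterates.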

The case 𝖷b={y1,y2}\mathsf{X}_{b}=\{y^{1},y^{2}\} where y1=(2.3,0.013)y^{1}=(-2.3,0.013)^{\intercal} and y2=(0.7,2.29)y^{2}=(0.7,-2.29)^{\intercal}

Let us now consider the initial conditions set 𝖷b\mathsf{X}_{b}.

Contrary to the case 𝖷a\mathsf{X}_{a}, as y12=ωb,01=0.7y_{1}^{2}=\omega_{b,0}^{1}=0.7 and y21=ωb,02=0.013y_{2}^{1}=\omega_{b,0}^{2}=0.013, we can compute stopping integers at k=0k=0:

𝔉ωb1(0,hb,e1)=ln(5.7341)2ln(0.7)2.4598 and 𝔉ωb2(0,hb,e1)=ln(5.7341)2ln(0.013)10.432.\begin{array}[]{cc}&\mathfrak{F}_{\omega_{b}^{1}}(0,h_{b},e^{-1})=\ln(5.7341)-2\ln(0.7)\approx 2.4598\\ \text{ and }&\mathfrak{F}_{\omega_{b}^{2}}(0,h_{b},e^{-1})=\ln(5.7341)-2\ln(0.013)\approx 10.432\kern 5.0pt.\end{array}

This leads to the stopping integer KK in Algorithm 1 being set to 2 for ωb1{\omega_{b}^{1}} and 10 for ωb2{\omega_{b}^{2}}. Hence

ωb,opt1=max{ωb,01,ωb,11,ωb,21} and ωb,opt2=max{ωb,k2:k{0,1,,10}}.\omega_{b,{\rm opt}}^{1}=\max\{\omega_{b,0}^{1},\omega_{b,1}^{1},\omega_{b,2}^{1}\}\text{ and }\omega_{b,{\rm opt}}^{2}=\max\{\omega_{b,k}^{2}:k\in\{0,1,\ldots,10\}\}.

Now, we compute the images of y1y^{1} and y2y^{2} by 𝖧\mathsf{H}:

𝖧(y1)(1.235050.28053) and 𝖧(y2)(0.700481.26764).\mathsf{H}(y^{1})\simeq\begin{pmatrix}-1.23505\\ -0.28053\end{pmatrix}\text{ and }\mathsf{H}(y^{2})\simeq\begin{pmatrix}0.70048\\ -1.26764\end{pmatrix}.

As π1(𝖧(y2))>0.7\pi_{1}(\mathsf{H}(y^{2}))>0.7, we can update the stopping integer for ωb1{\omega_{b}^{1}} and:

\mathfrak{F}_{\omega_{b}^{1}}(1,h_{b},e^{-1})=\ln(5.7341)-2\ln(\pi_{1}(\mathsf{H}(y^{2})))\approx 2.4584.

Following the fifth statement of Proposition 6, this is slightly smaller than 𝔉ωb1(0,hb,e1)\mathfrak{F}_{\omega_{b}^{1}}(0,h_{b},e^{-1}) but its integer part is still equal to 2 for ωb1{\omega_{b}^{1}}. The second coordinates of 𝖧(y1)\mathsf{H}(y^{1}) and 𝖧(y2)\mathsf{H}(y^{2}) are negative, so we cannot update the stopping integer for ωb2{\omega_{b}^{2}}. Next, we have

𝖧2(y1)(0.058190.17556) and 𝖧2(y2)(0.254560.08636).\mathsf{H}^{2}(y^{1})\simeq\begin{pmatrix}-0.05819\\ -0.17556\end{pmatrix}\text{ and }\mathsf{H}^{2}(y^{2})\approx\begin{pmatrix}0.25456\\ -0.08636\end{pmatrix}.

As π1(𝖧2(y2))\pi_{1}(\mathsf{H}^{2}(y^{2})) is strictly less than π1(𝖧(y2))\pi_{1}(\mathsf{H}(y^{2})) and the stopping integer for ωb1\omega_{b}^{1} is equal to 2, it follows that ωb,opt1=π1(𝖧(y2))\omega_{b,{\rm opt}}^{1}=\pi_{1}(\mathsf{H}(y^{2})) and kb,max1=1k_{b,{\rm max}}^{1}=1. Since π2(𝖧2(y1))\pi_{2}(\mathsf{H}^{2}(y^{1})) and π2(𝖧2(y2))\pi_{2}(\mathsf{H}^{2}(y^{2})) are negative, nothing changes for ωb2{\omega_{b}^{2}}. We continue to iterate for ωb2\omega_{b}^{2} until k=10k=10. We have

𝖧3(y1)(0.028970.01392) and 𝖧3(y2)(0.018730.04183)\mathsf{H}^{3}(y^{1})\approx\begin{pmatrix}0.02897\\ 0.01392\end{pmatrix}\text{ and }\mathsf{H}^{3}(y^{2})\approx\begin{pmatrix}-0.01873\\ 0.04183\end{pmatrix}

thus ωb,32=π2(𝖧3(y2))>0.013\omega_{b,3}^{2}=\pi_{2}(\mathsf{H}^{3}(y^{2}))>0.013, which was the maximal previous value. We then update the value of the stopping integer KK of Algorithm 1 for ωb2\omega_{b}^{2} and obtain

𝔉ωb2(3,hb,e1)=ln(5.7341)2ln(π2(𝖧3(y2)))8.0945.\mathfrak{F}_{\omega_{b}^{2}}(3,h_{b},e^{-1})=\ln(5.7341)-2\ln(\pi_{2}(\mathsf{H}^{3}(y^{2})))\approx 8.0945.

The new stopping integer KK is equal to 8 for ωb2\omega_{b}^{2}. The next values are smaller than π2(𝖧3(y2))\pi_{2}(\mathsf{H}^{3}(y^{2})), which implies that ωb,opt2=π2(𝖧3(y2))\omega_{b,{\rm opt}}^{2}=\pi_{2}(\mathsf{H}^{3}(y^{2})) and kb,max2=3k_{b,{\rm max}}^{2}=3.
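As a sanity check, the four stopping values computed for 𝖷_b can be reproduced from the formula 𝔉 = ln(5.7341) − 2 ln ω (the ω values are coordinates read off the iterates in the text, rounded):

```python
import math

M2 = 5.7341                                   # squared norm bound over X_b
F = lambda omega_k: math.log(M2) - 2 * math.log(omega_k)

assert abs(F(0.7)     - 2.4598) < 2e-3 and math.floor(F(0.7))     == 2
assert abs(F(0.013)   - 10.432) < 2e-3 and math.floor(F(0.013))   == 10
# update for omega_b^1 at k = 1, using pi_1(H(y^2)) = 0.70048
assert abs(F(0.70048) - 2.4584) < 2e-3 and math.floor(F(0.70048)) == 2
# update for omega_b^2 at k = 3, using pi_2(H^3(y^2)) = 0.04183
assert abs(F(0.04183) - 8.0945) < 2e-3 and math.floor(F(0.04183)) == 8
```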

5 Solving Problem 1 from Lyapunov functions

Classically, Lyapunov functions are used as certificates of stability (see [Ela05] or [KP01]). They provide a constructive approach to proving the stability of a dynamical system, and converse Lyapunov theorems show that stability notions are equivalent to the existence of Lyapunov functions [Kha02]. For some linear, polynomial, or piecewise polynomial systems, Lyapunov functions can be computed using numerical optimization solvers based on linear or semidefinite programming (see, e.g., [GH15] and references therein).

Classical quadratic Lyapunov functions solve Problem 1 when TT is affine and stable, and φ\varphi is quadratic (see [Adj25a]). In this section, we prove that this approach can be generalized to all discrete-time systems for which a Lyapunov function exists.

5.1 Definition and useful facts

We begin by recalling the definition of classical Lyapunov functions for discrete-time systems. We briefly present the equivalent definitions of the decrease condition required to be a Lyapunov function. For topological notions, we recall that \displaystyle{\|\cdot\|} is the Euclidean norm. However, the context of finite-dimensional space makes the results independent of the choice of the norm.

For F:ddF:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}, we denote by Sta(F)\operatorname{Sta}(F) the set of closed subsets of d\mathbb{R}^{d} containing 0, not reduced to {0}\{0\}, and stable under FF, i.e.,

Sta(F):={𝒟d:𝒟 closed , 0𝒟{0},F(𝒟)𝒟}\operatorname{Sta}(F):=\{\mathcal{D}\subseteq\mathbb{R}^{d}:\mathcal{D}\text{ closed },\ 0\in\mathcal{D}\neq\{0\},\ F(\mathcal{D})\subseteq\mathcal{D}\}

Following Lazar [Laz06], we define local Lyapunov functions on stable sets. We slightly relax the notion of stable sets, classically called positively or forward invariant sets. Indeed, classically, positively invariant sets are our stable sets with an additional condition: 0 belongs to the interior of the set. This condition is not needed for the stability problems considered in this paper.

Definition 6 (Classical Lyapunov functions).

A function V:dV:\mathbb{R}^{d}\to\mathbb{R} is said to be a (local) Lyapunov function for F:ddF:\mathbb{R}^{d}\mapsto\mathbb{R}^{d} on XSta(F)X\in\operatorname{Sta}(F) if and only if:

  1. 1.

    There exist α1,α2𝒦\alpha_{1},\alpha_{2}\in\mathcal{K} such that, for all xXx\in X:

    α1(x)V(x)α2(x)\alpha_{1}(\|x\|)\leq V(x)\leq\alpha_{2}(\|x\|)
  2. 2.

    There exists λ(0,1)\lambda\in(0,1) such that for all xXx\in X:

    V(F(x))λV(x)V(F(x))\leq\lambda V(x)
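As a toy illustration of Definition 6 (a hypothetical example, not from the paper), take the linear map F(x) = Ax with A = diag(1/2, 1/4) and V(x) = ‖x‖². Then V(F(x)) = x₁²/4 + x₂²/16 ≤ (1/4)V(x), so λ = 1/4 works, and α₁(s) = s² and α₂(s) = s² sandwich V. A numerical check on random points:

```python
import math, random

A = ((0.5, 0.0), (0.0, 0.25))                  # hypothetical stable linear map
F = lambda x: (A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1])
V = lambda x: x[0]**2 + x[1]**2                # candidate Lyapunov function
lam = 0.25                                     # decrease rate in (0, 1)

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    n = math.hypot(x[0], x[1])
    # statement 1: alpha_1(||x||) <= V(x) <= alpha_2(||x||) with alpha_i(s) = s^2
    assert n**2 - 1e-12 <= V(x) <= n**2 + 1e-12
    # statement 2: V(F(x)) <= lambda V(x)
    assert V(F(x)) <= lam * V(x) + 1e-12
```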
Example 1.

In [LHK14a], the authors propose the following function:

𝖵:2xmax{x2,e𝖧(x)2}\mathsf{V}:\mathbb{R}^{2}\ni x\mapsto\max\{\|x\|^{2},e\|\mathsf{H}(x)\|^{2}\} (22)

as a (local) Lyapunov function for the dynamics 𝖧\mathsf{H} of our running example presented in Subsection 2.3.

Next, we prove that 𝖵\mathsf{V} is a Lyapunov function for 𝖧\mathsf{H} on any B¯(0,r)\overline{B}(0,\sqrt{r}) for which r<ρ¯r<\overline{\rho}. In this example, we prove the first statement of Definition 6. Let rr be in (0,ρ¯)(0,\overline{\rho}). Thus, from the discussion of Subsection 2.3, we have 𝖧(x)2<x2\|\mathsf{H}(x)\|^{2}<\|x\|^{2} for all nonzero xx in B¯(0,r)\overline{B}(0,\sqrt{r}). Then, for all xx in B¯(0,r)\overline{B}(0,\sqrt{r}), x2𝖵(x)max{x2,ex2}=ex2\|x\|^{2}\leq\mathsf{V}(x)\leq\max\{\|x\|^{2},e\|x\|^{2}\}=e\|x\|^{2}. We conclude that there exist α1,α2𝒦\alpha_{1},\alpha_{2}\in\mathcal{K}_{\infty} (α1:+ss2\alpha_{1}:\mathbb{R}_{+}\ni s\mapsto s^{2} and α2:+ses2\alpha_{2}:\mathbb{R}_{+}\ni s\mapsto es^{2}) such that α1(x)𝖵(x)α2(x)\alpha_{1}(\|x\|)\leq\mathsf{V}(x)\leq\alpha_{2}(\|x\|).

We will prove the second statement by computing the local Lyapunov ratio operator of 𝖧\mathsf{H}.

First, due to the equivalence of norms on d\mathbb{R}^{d} and since any element of 𝒦\mathcal{K} is increasing, if the first statement holds for \|\cdot\|, then it holds for all norms on d\mathbb{R}^{d}. Second, combining the results and discussions found in [GGLW14, Remark 5], [LHK14b, Def. 5, Th.4] and [Kel14, Lemma 25, Corollary 1], the condition that there exists λ(0,1)\lambda\in(0,1) such that VFλVV\circ F\leq\lambda V can be equivalently replaced by one of the following statements:

  1. 1.

    there exists α:++\alpha:\mathbb{R}_{+}\mapsto\mathbb{R}_{+} positive definite and continuous such that VFVαV\circ F\leq V-\alpha\circ\|\cdot\|;

  2. 2.

    there exists ρ:++\rho:\mathbb{R}_{+}\mapsto\mathbb{R}_{+} positive definite, continuous and such that Idρ\mathrm{Id}-\rho is positive definite satisfying VFρVV\circ F\leq\rho\circ V.

First, we remark that, from the first statement of Definition 6, VV is positive definite on any closed ball whose radius belongs to the domain of α1\alpha_{1}. If we require VV to be continuous and XX to contain 0 in its interior, then, from Proposition 9, the first statement of Definition 6 is actually equivalent to VPD(X)V\in\operatorname{PD}(X). Second, if there exists a classical Lyapunov function for FF, then FF must be null at zero (actually, if F(e)=eF(e)=e for a nonzero vector ee, we could replace VV by V(e)V(\cdot-e) and the lower and upper bounds on V(e)V(\cdot-e) by α1(e)\alpha_{1}(\|\cdot-e\|) and α2(e)\alpha_{2}(\|\cdot-e\|), respectively). Finally, from the second statement of Definition 6, we can define a type of norm operator relative to a Lyapunov function. We propose a general definition of local positive definite ratio operators and then specify the results for local Lyapunov functions.

Definition 7 (Local positive definite ratio operators).

Let F:ddF:\mathbb{R}^{d}\to\mathbb{R}^{d} such that F(0)=0F(0)=0 and XSta(F)X\in\operatorname{Sta}(F). The local positive definite ratio operator of FF on XX is the following map:

PD(X)WNFX(W):=supxXx0W(F(x))W(x)\operatorname{PD}(X)\ni W\mapsto N_{F}^{X}(W):=\sup_{\begin{subarray}{c}x\in X\\ x\neq 0\end{subarray}}\dfrac{W(F(x))}{W(x)} (23)

Note that the local positive definite ratio operator NFXN_{F}^{X} is always nonnegative and can take infinite values.
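As a side illustration (outside the formal development), the supremum in (23) can be estimated numerically by evaluating the ratio on a finite sample of X\{0}; any such estimate is a lower bound on N_F^X(W). The sketch below, in Python, assumes nothing beyond Definition 7.

```python
import math

def ratio_operator(F, W, points):
    """Estimate N_F^X(W) = sup over nonzero x in X of W(F(x)) / W(x)
    on a finite sample of X \\ {0}; the result is a lower bound on the sup."""
    best = 0.0
    for x in points:
        wx = W(x)
        if wx > 0:  # W is positive definite on X, hence wx > 0 off the origin
            best = max(best, W(F(x)) / wx)
    return best

# Sanity check: for the contraction F(x) = x/2 and W(x) = ||x||^2,
# the ratio equals 1/4 at every nonzero point, so the estimate is 1/4.
F = lambda x: tuple(xi / 2 for xi in x)
W = lambda x: sum(xi * xi for xi in x)
sample = [(math.cos(k / 10), math.sin(k / 10)) for k in range(1, 63)]
estimate = ratio_operator(F, W, sample)
```

On a linear contraction the estimate is exact because the ratio is constant; in general, a finite sample only under-approximates the supremum.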

In Proposition 10, we present some useful properties of the local positive definite ratio operator NFXN_{F}^{X}.

Proposition 10.

Let F:ddF:\mathbb{R}^{d}\to\mathbb{R}^{d} such that F(0)=0F(0)=0 and XSta(F)X\in\operatorname{Sta}(F). Then, for all WPD(X)W\in\operatorname{PD}(X):

  1. 1.

    NIdX(W)=1N_{\mathrm{Id}}^{X}(W)=1 where Id:dxx\mathrm{Id}:\mathbb{R}^{d}\ni x\mapsto x;

  2. 2.

    for all xX\{0}x\in X\backslash\{0\}, W(F(x))NFX(W)W(x)W(F(x))\leq N_{F}^{X}(W)W(x);

  3. 3.

    for all kk\in\mathbb{N}^{*}, NFkX(W)(NFX(W))kN_{F^{k}}^{X}(W)\leq(N_{F}^{X}(W))^{k};

  4. 4.

    there exists λ(0,1)\lambda\in(0,1) such that for all xXx\in X, W(F(x))λW(x)W(F(x))\leq\lambda W(x) if and only if NFX(W)(0,1)N_{F}^{X}(W)\in(0,1).

Proof.

1. This is a straightforward application of the definitions.

2. The result clearly holds if NFX(W)N_{F}^{X}(W) is equal to ++\infty. Then, if NFX(W)N_{F}^{X}(W) is finite, directly applying the definition of NFX(W)N_{F}^{X}(W) together with the positivity of WW leads to the result.

3. The result holds when NFX(W)N_{F}^{X}(W) is equal to ++\infty. Now, suppose that NFX(W)N_{F}^{X}(W) is finite. If NFX(W)=0N_{F}^{X}(W)=0, then W(F(x))=0W(F(x))=0 for all xX\{0}x\in X\backslash\{0\}. It follows that F(x)=0F(x)=0 for all xXx\in X. Finally, Fk(x)=0F^{k}(x)=0 for all kk\in\mathbb{N}^{*} and xXx\in X. Then NFkX(W)=0N_{F^{k}}^{X}(W)=0 and the inequality holds. If NFkX(W)=0N_{F^{k}}^{X}(W)=0 for some kk\in\mathbb{N}^{*}, the inequality obviously holds. We thus suppose that NFX(W)N_{F}^{X}(W) and NFkX(W)N_{F^{k}}^{X}(W) for all kk\in\mathbb{N}^{*} are strictly positive. We now apply induction. The result obviously holds for k=1k=1. Assume that it holds for some kk\in\mathbb{N}^{*}. Let xX\{0}x\in X\backslash\{0\} be such that W(Fk+1(x))>0W(F^{k+1}(x))>0. It follows that W(Fk(x))>0W(F^{k}(x))>0. Then W(Fk+1(x))/W(x)=(W(Fk+1(x))/W(Fk(x)))×(W(Fk(x))/W(x))W(F^{k+1}(x))/W(x)=(W(F^{k+1}(x))/W(F^{k}(x)))\times(W(F^{k}(x))/W(x)). As XSta(F)X\in\operatorname{Sta}(F), Fk+1(x)F^{k+1}(x) and Fk(x)F^{k}(x) belong to XX and are not equal to 0. Finally, W(Fk+1(x))/W(x)NFX(W)×NFkX(W)(NFX(W))k+1W(F^{k+1}(x))/W(x)\leq N_{F}^{X}(W)\times N_{F^{k}}^{X}(W)\leq(N_{F}^{X}(W))^{k+1} using the induction hypothesis. We obtain the inequality by taking the supremum over X\{0}X\backslash\{0\}.

4. It follows readily from the definition. ∎

The fourth statement of Proposition 10 is not difficult to prove. However, it is worth noting that if there exists λ(0,1)\lambda\in(0,1) such that for all xXx\in X, W(F(x))λW(x)W(F(x))\leq\lambda W(x), then NFX(W)λN_{F}^{X}(W)\leq\lambda. For our applications, this latter inequality is important. Indeed, NFX(W)N_{F}^{X}(W) will be used as an element βΓsc(h,ν)\beta\in\mathbb{\Gamma}_{\rm sc}(h,\nu) for a function hh to be determined. Since λ\lambda also belongs to Γsc(h,ν)\mathbb{\Gamma}_{\rm sc}(h,\nu), following Proposition 7, we will have 𝔉ν(k,h,NFX(W))𝔉ν(k,h,λ)\mathfrak{F}_{\nu}(k,h,N_{F}^{X}(W))\leq\mathfrak{F}_{\nu}(k,h,\lambda). This will be highlighted in Paragraph 5.3.2.

Corollary 2.

Let FF be a self-map on d\mathbb{R}^{d}. Suppose that FF admits a local Lyapunov function VV on XSta(F)X\in\operatorname{Sta}(F). Then:

  1. 1.

    NFX(V)(0,1)N_{F}^{X}(V)\in(0,1);

  2. 2.

    for all xXx\in X, V(F(x))NFX(V)V(x)V(F(x))\leq N_{F}^{X}(V)V(x);

  3. 3.

    for all kk\in\mathbb{N}, NFkX(V)(NFX(V))kN_{F^{k}}^{X}(V)\leq(N_{F}^{X}(V))^{k}.

Proof.

The first statement follows from the fourth statement of Proposition 10. The second statement is a reformulation of the second statement of Proposition 10, for which the inequality holds for x=0x=0 as NFX(V)N_{F}^{X}(V) is finite. The finiteness of NFX(V)N_{F}^{X}(V) also extends the third statement of Proposition 10.

Example 2 (The local positive definite ratio operator of 𝖧\mathsf{H} at 𝖵\mathsf{V} on closed balls).

Now, let us consider r(0,ρ¯)r\in(0,\overline{\rho}). To go further into our computations, we must compute the local positive definite ratio operator on B¯(0,r)\overline{B}(0,\sqrt{r}) of 𝖧\mathsf{H} at the Lyapunov candidate 𝖵\mathsf{V} defined in (22). For the sake of simplicity, we write N𝖧r(𝖵)N_{\mathsf{H}}^{r}(\mathsf{V}) instead of N𝖧B¯(0,r)(𝖵)N_{\mathsf{H}}^{\overline{B}(0,\sqrt{r})}(\mathsf{V}).

First, we suppose that rρ¯r\leq\underline{\rho}. It follows that 𝖵(x)=x2\mathsf{V}(x)=\|x\|^{2}. Moreover, as r<ρ¯r<\overline{\rho}, we also have 𝖧(x)2<x2r\|\mathsf{H}(x)\|^{2}<\|x\|^{2}\leq r and so, by assumption, 𝖧(x)2ρ¯\|\mathsf{H}(x)\|^{2}\leq\underline{\rho} and 𝖵(𝖧(x))=𝖧(x)2\mathsf{V}(\mathsf{H}(x))=\|\mathsf{H}(x)\|^{2}. Finally, in the case where rρ¯r\leq\underline{\rho}, we have

N𝖧r(𝖵)=supx2rx0𝖵(𝖧(x))𝖵(x)=supx2rx0𝖧(x)2x2=supx2rx01+(x21)264=supx2rx0𝔤(x2)N_{\mathsf{H}}^{r}(\mathsf{V})=\sup_{\begin{subarray}{c}\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)}=\sup_{\begin{subarray}{c}\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\dfrac{\|\mathsf{H}(x)\|^{2}}{\|x\|^{2}}=\sup_{\begin{subarray}{c}\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\dfrac{1+(\|x\|^{2}-1)^{2}}{64}=\sup_{\begin{subarray}{c}\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\mathfrak{g}(\|x\|^{2})

The function 𝔤\mathfrak{g} is strictly decreasing on [0,1][0,1] and strictly increasing on [1,+)[1,+\infty) and for all 0<s<20<s<2, 𝔤(s)<1/32=𝔤(0)=𝔤(2)\mathfrak{g}(s)<1/32=\mathfrak{g}(0)=\mathfrak{g}(2), hence for all rρ¯r\leq\underline{\rho}:

N𝖧r(𝖵)={132 if r(0,2]1+(r1)264 if r(2,ρ¯]N_{\mathsf{H}}^{r}(\mathsf{V})=\left\{\begin{array}[]{lr}\dfrac{1}{32}&\text{ if }r\in(0,2]\\ \\ \dfrac{1+(r-1)^{2}}{64}&\text{ if }r\in(2,\underline{\rho}]\end{array}\right.

When r=ρ¯r=\underline{\rho}, we get the equality N𝖧r(𝖵)=𝔤(ρ¯)=e1N_{\mathsf{H}}^{r}(\mathsf{V})=\mathfrak{g}(\underline{\rho})=e^{-1}. Furthermore, since 𝔤\mathfrak{g} is increasing on (2,ρ¯](2,\underline{\rho}], we have N𝖧r(𝖵)𝔤(ρ¯)=e1N_{\mathsf{H}}^{r}(\mathsf{V})\leq\mathfrak{g}(\underline{\rho})=e^{-1} for all r(2,ρ¯]r\in(2,\underline{\rho}].
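The two branches obtained so far can be cross-checked numerically: a brute-force maximization of 𝔤(s)=(1+(s−1)²)/64 over a fine grid of (0,r] reproduces the closed form (a sanity sketch only; grid maximization under-approximates the supremum by a term of order 1/n).

```python
# g is the function from the preceding computation: g(s) = (1 + (s - 1)^2) / 64.
g = lambda s: (1 + (s - 1) ** 2) / 64

def sup_g(r, n=200_000):
    # approximate sup of g over (0, r] on a uniform grid with n points
    return max(g(r * k / n) for k in range(1, n + 1))

for r in (1.0, 1.85, 2.0, 3.0, 5.0):
    closed_form = 1 / 32 if r <= 2 else (1 + (r - 1) ** 2) / 64
    assert abs(sup_g(r) - closed_form) < 1e-5
```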

Now, let us consider r(ρ¯,ρ¯)r\in(\underline{\rho},\overline{\rho}) and x2x\in\mathbb{R}^{2} such that x2(ρ¯,r]\|x\|^{2}\in(\underline{\rho},r]. Then we have 𝖵(x)=e𝖧(x)2\mathsf{V}(x)=e\|\mathsf{H}(x)\|^{2}; moreover, 𝖵(𝖧(x))=e𝖧2(x)2\mathsf{V}(\mathsf{H}(x))=e\|\mathsf{H}^{2}(x)\|^{2} if 𝖧(x)2ρ¯\|\mathsf{H}(x)\|^{2}\geq\underline{\rho} and 𝖵(𝖧(x))=𝖧(x)2\mathsf{V}(\mathsf{H}(x))=\|\mathsf{H}(x)\|^{2} if 𝖧(x)2<ρ¯\|\mathsf{H}(x)\|^{2}<\underline{\rho}. Then:

N𝖧r(𝖵)=supx2rx0𝖵(𝖧(x))𝖵(x)=max{supx2ρ¯𝖵(𝖧(x))𝖵(x),supρ¯<x2rx0𝖵(𝖧(x))𝖵(x)}=max{1e,supρ¯<x2r𝖧(x)2ρ¯𝖵(𝖧(x))𝖵(x),supρ¯<x2r𝖧(x)2>ρ¯𝖵(𝖧(x))𝖵(x)}=max{1e,supρ¯<x2r𝖧(x)2ρ¯𝖧(x)2e𝖧(x)2,supρ¯<x2r𝖧(x)2>ρ¯e𝖧2(x)2e𝖧(x)2}=max{1e,supρ¯<x2r𝖧(x)2>ρ¯164(1+(𝖧(x)21)2)}=max{1e,supρ¯<x2r𝖧(x)2>ρ¯𝔤(𝔣(x2))}\begin{array}[]{ll}N_{\mathsf{H}}^{r}(\mathsf{V})=\sup_{\begin{subarray}{c}\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)}&=\max\left\{\sup_{\begin{subarray}{c}\|x\|^{2}\leq\underline{\rho}\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ x\neq 0\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)}\right\}\\ &=\max\left\{\dfrac{1}{e},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}\leq\underline{\rho}\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}>\underline{\rho}\end{subarray}}\dfrac{\mathsf{V}(\mathsf{H}(x))}{\mathsf{V}(x)}\right\}\\ &=\max\left\{\dfrac{1}{e},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}\leq\underline{\rho}\end{subarray}}\dfrac{\|\mathsf{H}(x)\|^{2}}{e\|\mathsf{H}(x)\|^{2}},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}>\underline{\rho}\end{subarray}}\dfrac{e\|\mathsf{H}^{2}(x)\|^{2}}{e\|\mathsf{H}(x)\|^{2}}\right\}\\ &=\max\left\{\dfrac{1}{e},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}>\underline{\rho}\end{subarray}}\dfrac{1}{64}\left(1+(\|\mathsf{H}(x)\|^{2}-1)^{2}\right)\right\}\\ &=\max\left\{\dfrac{1}{e},\sup_{\begin{subarray}{c}\underline{\rho}<\|x\|^{2}\leq r\\ \|\mathsf{H}(x)\|^{2}>\underline{\rho}\end{subarray}}\mathfrak{g}(\mathfrak{f}(\|x\|^{2}))\right\}\end{array}

Let us introduce the set

Sr:={s(ρ¯,r]:𝔣(s)>ρ¯}.S_{r}:=\{s\in(\underline{\rho},r]:\mathfrak{f}(s)>\underline{\rho}\}. (24)

We are thus interested in the supremum of 𝔤(𝔣(x2))\mathfrak{g}(\mathfrak{f}(\|x\|^{2})) for all x2Sr\|x\|^{2}\in S_{r}. As 𝔣\mathfrak{f} is strictly increasing on +\mathbb{R}_{+}, the degree-3 polynomial function 𝔣()ρ¯\mathfrak{f}(\cdot)-\underline{\rho} has a unique real root, denoted by β\beta^{*}. As 𝔣(ρ¯)=ρ¯/e<ρ¯\mathfrak{f}(\underline{\rho})=\underline{\rho}/e<\underline{\rho}, we have β>ρ¯\beta^{*}>\underline{\rho}. We do not require the exact value of β\beta^{*}. In fact, SrS_{r} is nonempty if and only if 𝔣(r)>ρ¯\mathfrak{f}(r)>\underline{\rho}. Indeed, if there exists sSrs\in S_{r}, then, as 𝔣\mathfrak{f} is strictly increasing, 𝔣(r)𝔣(s)>ρ¯\mathfrak{f}(r)\geq\mathfrak{f}(s)>\underline{\rho} and rSrr\in S_{r}. Conversely, it is obvious that SrS_{r} is nonempty if rSrr\in S_{r}. Note that if SrS_{r} is empty, the supremum over it is -\infty, making N𝖧r(𝖵)=e1N_{\mathsf{H}}^{r}(\mathsf{V})=e^{-1} for all r(ρ¯,β]r\in(\underline{\rho},\beta^{*}]. Suppose that SrS_{r} is nonempty. As 𝔣(ρ¯)=ρ¯/e>1\mathfrak{f}(\underline{\rho})=\underline{\rho}/e>1, 𝔤𝔣\mathfrak{g}\circ\mathfrak{f} is strictly increasing on SrS_{r} and for all sSrs\in S_{r}:

𝔤(𝔣(s))𝔤(𝔣(r))=164(1+(164r(1+(r1)2)1)2)<𝔤(𝔣(ρ¯))=1.\mathfrak{g}(\mathfrak{f}(s))\leq\mathfrak{g}(\mathfrak{f}(r))=\dfrac{1}{64}\left(1+\left(\dfrac{1}{64}r(1+(r-1)^{2})-1\right)^{2}\right)<\mathfrak{g}(\mathfrak{f}(\overline{\rho}))=1.

Finally, we obtain for all r<ρ¯r<\overline{\rho}:

N𝖧r(𝖵)={𝔤(𝔣(r)) if r>ρ¯ and 𝔣(r)>ρ¯e1 if r>ρ¯ and 𝔣(r)ρ¯1+(r1)264 if r(2,ρ¯]132 if r(0,2]N_{\mathsf{H}}^{r}(\mathsf{V})=\left\{\begin{array}[]{lr}\mathfrak{g}(\mathfrak{f}(r))&\text{ if }r>\underline{\rho}\text{ and }\mathfrak{f}(r)>\underline{\rho}\\ e^{-1}&\text{ if }r>\underline{\rho}\text{ and }\mathfrak{f}(r)\leq\underline{\rho}\\ \dfrac{1+(r-1)^{2}}{64}&\text{ if }r\in(2,\underline{\rho}]\\ \dfrac{1}{32}&\text{ if }r\in(0,2]\\ \end{array}\right. (25)

We conclude that 𝖵\mathsf{V} satisfies for all r<ρ¯r<\overline{\rho}:

  1. 1.

for all xB¯(0,r)x\in\overline{B}(0,\sqrt{r}), x2𝖵(x)ex2\|x\|^{2}\leq\mathsf{V}(x)\leq e\|x\|^{2};

  2. 2.

there exists λ(0,1)\lambda\in(0,1) such that for all xB¯(0,r)x\in\overline{B}(0,\sqrt{r}), 𝖵(𝖧(x))λ𝖵(x)\mathsf{V}(\mathsf{H}(x))\leq\lambda\mathsf{V}(x);

and 𝖵\mathsf{V} is a Lyapunov function for 𝖧\mathsf{H} on all B¯(0,r)\overline{B}(0,\sqrt{r}) where r<ρ¯r<\overline{\rho}.
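The full piecewise expression (25) can also be verified by brute force. The sketch below assumes the closed form 𝖧(x) = ((‖x‖²−1)x₁ − x₂, x₁ + (‖x‖²−1)x₂)/8 for the running map of Subsection 2.3; this closed form is inferred from the numerical values reported in this section (for instance it gives 𝖧((2.2,−2)) = (2.406,−1.685)) and should be treated as an assumption here. Since the ratio 𝖵(𝖧(x))/𝖵(x) only depends on ‖x‖², sampling along a half-axis suffices.

```python
import math

def H(x):
    # assumed closed form of the running map of Subsection 2.3 (see the lead-in)
    s = x[0] ** 2 + x[1] ** 2
    return (((s - 1) * x[0] - x[1]) / 8, (x[0] + (s - 1) * x[1]) / 8)

sq = lambda x: x[0] ** 2 + x[1] ** 2
V = lambda x: max(sq(x), math.e * sq(H(x)))      # the Lyapunov candidate (22)
f = lambda s: s * (1 + (s - 1) ** 2) / 64        # ||H(x)||^2 as a function of s = ||x||^2
g = lambda s: (1 + (s - 1) ** 2) / 64

RHO_LO = 5.7481                                  # approximate underline-rho: g(RHO_LO) = 1/e

def N_sampled(r, n=20_000):
    # the ratio V(H(x))/V(x) only depends on ||x||^2, so x = (sqrt(s), 0) suffices
    return max(V(H((math.sqrt(r * k / n), 0.0))) / V((math.sqrt(r * k / n), 0.0))
               for k in range(1, n + 1))

def N_closed(r):
    # the piecewise formula (25); for r > RHO_LO, max(1/e, g(f(r))) selects the
    # correct branch because g(f(r)) > 1/e exactly when f(r) > RHO_LO
    if r <= 2:
        return 1 / 32
    if r <= RHO_LO:
        return (1 + (r - 1) ** 2) / 64
    return max(math.exp(-1), g(f(r)))

for r in (1.85, 5.0, 6.5, 8.84, 8.9):
    assert abs(N_sampled(r) - N_closed(r)) < 1e-3
```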

5.2 Theoretical constructions from Lyapunov functions

In this subsection, we explain how to construct a solution of Problem 1 from a Lyapunov function. We recall that a solution of Problem 1 is a pair (h,β)Γ(ν)(h,\beta)\in\mathbb{\Gamma}(\nu) (with ν\nu defined in (1)), which is useful when Rνs{\mathrm{R}}_{\nu}^{s}\neq\emptyset. First, we establish a direct result derived from Theorem 4 without any continuity condition on φ\varphi.

We formulate a simple lemma asserting that the supremum of a local Lyapunov function is finite over nonempty bounded sets contained in a suitable stable set. The result would be evident if the local Lyapunov function were (upper semi)continuous, but continuity is not mandatory in Definition 6.

Lemma 2.

Let F:ddF:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}. Assume that FF admits a local Lyapunov function VV on XSta(F)X\in\operatorname{Sta}(F). Then, for all nonempty bounded sets BXB\subseteq X, VB¯<+\overline{V_{B}}<+\infty.

Proof.

Let FF be a self-map on d\mathbb{R}^{d} and VV be a local Lyapunov function on XSta(F)X\in\operatorname{Sta}(F). Let BB be a nonempty bounded set contained in XX. By definition, there exists α𝒦\alpha\in\mathcal{K} such that V(x)α(x)V(x)\leq\alpha(\|x\|) for all xXx\in X. Then, as α\alpha is continuous and strictly increasing, VB¯supxBα(x)=α(supxBx)=α(maxxB¯x)=α(y¯)\overline{V_{B}}\leq\sup_{x\in B}\alpha(\|x\|)=\alpha(\sup_{x\in B}\|x\|)=\alpha(\max_{x\in\overline{B}}\|x\|)=\alpha(\|\overline{y}\|) for some y¯X\overline{y}\in X, since B¯X\overline{B}\subseteq X as XX is closed. We conclude that α\alpha is defined at B¯\overline{\|\cdot\|_{B}} and thus VB¯<+\overline{V_{B}}<+\infty.

Remark 2.

Lemma 2 justifies requiring the elements of Sta(F)\operatorname{Sta}(F) to be closed. As a matter of fact, the definition of a Lyapunov function may allow functions such as α:[0,c)x(cx)1c1\alpha:[0,c)\ni x\mapsto(c-x)^{-1}-c^{-1} as functions α1,α2𝒦\alpha_{1},\alpha_{2}\in\mathcal{K}. If such a function is defined on the set {x:xX}\{\|x\|:x\in X\} where XX is a closed set, then for all xXx\in X, x<c\|x\|<c. This implies that XX is compact, and then supxXx=x¯<c\sup_{x\in X}\|x\|=\|\overline{x}\|<c for some x¯X\overline{x}\in X. In contrast, let Y={xd:x<c}Y=\{x\in\mathbb{R}^{d}:\|x\|<c\} and suppose that α\alpha is defined on the set {y:yY}\{\|y\|:y\in Y\}. As supyYy=c\sup_{y\in Y}\|y\|=c, α\alpha is not defined at supyYy\sup_{y\in Y}\|y\|.

Theorem 6 (Direct Lyapunov construction).

Suppose that XinX^{\mathrm{in}} is bounded and not reduced to {0}\{0\}. Suppose that there exists a Lyapunov function VV for TT on 𝒟Sta(T)\mathcal{D}\in\operatorname{Sta}(T) such that Xin𝒟X^{\mathrm{in}}\subseteq\mathcal{D}. If

φ(x)V(x) for all x𝒟,\varphi(x)\leq V(x)\text{ for all }x\in\mathcal{D}, (26)

then:

  1. 1.

    lim supn+νn0\limsup_{n\to+\infty}\nu_{n}\leq 0 and if, moreover, Assumption 2 holds then Rνs{\mathrm{R}}_{\nu}^{s}\neq\emptyset;

  2. 2.

    the function:

    h:+ssVXin¯h:\mathbb{R}_{+}\ni s\mapsto s\overline{V_{X^{\mathrm{in}}}}

belongs to Γfun(ν)\mathbb{\Gamma}_{\rm fun}(\nu) and NT𝒟(V)Γsc(ν,h)N_{T}^{\mathcal{D}}(V)\in\mathbb{\Gamma}_{\rm sc}(\nu,h); if, moreover, Assumption 2 holds, then hh is useful for ν\nu.

Proof.

Let xXinx\in X^{\mathrm{in}} and kk\in\mathbb{N}. As 𝒟Sta(T)\mathcal{D}\in\operatorname{Sta}(T) and Xin𝒟X^{\mathrm{in}}\subseteq\mathcal{D}, Tk(x)𝒟T^{k}(x)\in\mathcal{D} and, from (26), φ(Tk(x))V(Tk(x))\varphi(T^{k}(x))\leq V(T^{k}(x)). Moreover, as VV is a Lyapunov function, from Lemma 2, VXin¯\overline{V_{X^{\mathrm{in}}}} is finite and we have φ(Tk(x))NTk𝒟(V)V(x)(NT𝒟(V))kVXin¯=h((NT𝒟(V))k)\varphi(T^{k}(x))\leq N_{T^{k}}^{\mathcal{D}}(V)V(x)\leq(N_{T}^{\mathcal{D}}(V))^{k}\overline{V_{X^{\mathrm{in}}}}=h((N_{T}^{\mathcal{D}}(V))^{k}). The inequality involving νk\nu_{k} follows by taking the supremum over XinX^{\mathrm{in}}. As VV is a Lyapunov function on 𝒟\mathcal{D}, NT𝒟(V)(0,1)N_{T}^{\mathcal{D}}(V)\in(0,1). This implies the first statement, as (NT𝒟(V))kVXin¯(N_{T}^{\mathcal{D}}(V))^{k}\overline{V_{X^{\mathrm{in}}}} tends to 0. As Xin{0}X^{\mathrm{in}}\neq\{0\}, we have VXin¯>0\overline{V_{X^{\mathrm{in}}}}>0 and, from Lemma 2, as XinX^{\mathrm{in}} is bounded, VXin¯<+\overline{V_{X^{\mathrm{in}}}}<+\infty. The second statement follows.
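To make the proof mechanics concrete, the sketch below instantiates Theorem 6 on the running example of Subsection 5.3 with φ = ‖·‖², which satisfies (26) since ‖x‖² ≤ 𝖵(x), and Xin = {(2.2,−2)}. The closed form used for 𝖧 is an assumption inferred from the numerical values reported in Subsection 5.3; the value β ≈ 0.9248 of the ratio operator is the one computed there.

```python
import math

def H(x):
    # assumed closed form of the running map (see the lead-in)
    s = x[0] ** 2 + x[1] ** 2
    return (((s - 1) * x[0] - x[1]) / 8, (x[0] + (s - 1) * x[1]) / 8)

sq = lambda x: x[0] ** 2 + x[1] ** 2
V = lambda x: max(sq(x), math.e * sq(H(x)))   # Lyapunov candidate (22)

x0 = (2.2, -2.0)
V_bar = V(x0)          # sup of V over X_in = {x0}
beta = 0.9248          # N_H^r(V) for r = ||x0||^2 = 8.84 (Subsection 5.3.3)

# Theorem 6 with phi = ||.||^2: phi(H^k(x0)) <= h(beta**k) = beta**k * V_bar.
x, ok = x0, True
for k in range(30):
    ok = ok and sq(x) <= beta ** k * V_bar + 1e-12
    x = H(x)
assert ok
```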

Remark 3.

Actually, the function γ:+×+(s,t)sexp(tln(NT𝒟(V)))\gamma:\mathbb{R}_{+}\times\mathbb{R}_{+}\ni(s,t)\mapsto s\exp(t\ln(N_{T}^{\mathcal{D}}(V))) is a 𝒦\mathcal{K}\mathcal{L} function and, as XinX^{\mathrm{in}} is bounded and not reduced to {0}\{0\}, VV belongs to Fin(Xin)\operatorname{Fin}\left(X^{\mathrm{in}}\right) and verifies VXin¯>0\overline{V_{X^{\mathrm{in}}}}>0. Theorem 4 thus applies. As we do not rely on Sontag's lemma, we have an explicit description of the element (h,β)Γ(ν)(h,\beta)\in\mathbb{\Gamma}(\nu).

If we assume that φ\varphi is continuous, the inequality (26) is no longer required. To recover a link between φ\varphi and a Lyapunov function, it suffices to use Proposition 9.

Theorem 7 (Continuous Lyapunov construction).

Assume that XinX^{\mathrm{in}} is bounded and not reduced to {0}\{0\}. Suppose that φ\varphi is continuous and there exists a Lyapunov function VV for TT on 𝒟Sta(T)\mathcal{D}\in\operatorname{Sta}(T) such that Xin𝒟X^{\mathrm{in}}\subseteq\mathcal{D}. Finally, suppose that there exists αV𝒦\alpha_{V}\in\mathcal{K} satisfying αV(x)V(x)\alpha_{V}(\|x\|)\leq V(x) for all x𝒟x\in\mathcal{D} and αV(s)>VXin¯\alpha_{V}(s)>\overline{V_{X^{\mathrm{in}}}} for some s(0,+)s\in(0,+\infty). Let α𝒦\alpha\in\mathcal{K}_{\infty} verifying φ(x)α(x)\varphi(x)\leq\alpha(\|x\|) for all xdx\in\mathbb{R}^{d}. We define:

h:+sα(αV1(sVXin¯))h:\mathbb{R}_{+}\ni s\mapsto\alpha\left(\alpha_{V}^{-1}(s\overline{V_{X^{\mathrm{in}}}})\right)

Then (h,NT𝒟(V))Γ(ν)(h,N_{T}^{\mathcal{D}}(V))\in\mathbb{\Gamma}(\nu). If Assumption 2 holds, then hh is useful for ν\nu.

Proof.

Let xXinx\in X^{\mathrm{in}} and kk\in\mathbb{N}. As Xin𝒟X^{\mathrm{in}}\subseteq\mathcal{D} and 𝒟Sta(T)\mathcal{D}\in\operatorname{Sta}(T), Tk(x)𝒟T^{k}(x)\in\mathcal{D} and thus αV(Tk(x))V(Tk(x))\alpha_{V}(\|T^{k}(x)\|)\leq V(T^{k}(x)). As VV is Lyapunov, we then have αV(Tk(x))V(Tk(x))NTk𝒟(V)V(x)NT𝒟(V)kV(x)\alpha_{V}(\|T^{k}(x)\|)\leq V(T^{k}(x))\leq N_{T^{k}}^{\mathcal{D}}(V)V(x)\leq N_{T}^{\mathcal{D}}(V)^{k}V(x). As xXinx\in X^{\mathrm{in}}, it follows that αV(Tk(x))NT𝒟(V)kVXin¯\alpha_{V}(\|T^{k}(x)\|)\leq N_{T}^{\mathcal{D}}(V)^{k}\overline{V_{X^{\mathrm{in}}}}. By assumption, VXin¯<αV(s)\overline{V_{X^{\mathrm{in}}}}<\alpha_{V}(s) for some s(0,+)s\in(0,+\infty). We also have NT𝒟(V)k(0,1)N_{T}^{\mathcal{D}}(V)^{k}\in(0,1). Then, αV1(NT𝒟(V)kVXin¯)\alpha_{V}^{-1}(N_{T}^{\mathcal{D}}(V)^{k}\overline{V_{X^{\mathrm{in}}}}) exists. Since φα\varphi\leq\alpha\circ\|\cdot\| on d\mathbb{R}^{d} with α\alpha increasing, the inequality φ(Tk(x))α(Tk(x))α(αV1(NT𝒟(V)kVXin¯))\varphi(T^{k}(x))\leq\alpha(\|T^{k}(x)\|)\leq\alpha(\alpha_{V}^{-1}(N_{T}^{\mathcal{D}}(V)^{k}\overline{V_{X^{\mathrm{in}}}})) holds. Taking the supremum over XinX^{\mathrm{in}} leads to (h,NT𝒟(V))Γ(ν)(h,N_{T}^{\mathcal{D}}(V))\in\mathbb{\Gamma}(\nu). As α,αV𝒦\alpha,\alpha_{V}\in\mathcal{K}, h(0)=0h(0)=0 and thus hh is useful if Assumption 2 holds.

Remark 4.

If we impose the stronger condition αV(s)>V𝒟¯\alpha_{V}(s)>\overline{V_{\mathcal{D}}} for some s(0,+)s\in(0,+\infty) (instead of αV(s)>VXin¯\alpha_{V}(s)>\overline{V_{X^{\mathrm{in}}}}), we can directly use Theorem 6. Indeed, if this stronger condition holds, we can deduce that φ~(x):=αV(α1(φ(x)))V(x)\tilde{\varphi}(x):=\alpha_{V}(\alpha^{-1}(\varphi(x)))\leq V(x) for all x𝒟x\in\mathcal{D}.

5.3 Numerical applications to our running example of Subsection 2.3

We propose to illustrate the theoretical results obtained in Subsection 5.2 on the running example presented in Subsection 2.3. First, we make the function hΓ(ω𝖷i)h\in\mathbb{\Gamma}(\omega_{\mathsf{X}}^{i}) explicit. Then, we compare the results obtained from the Lyapunov approach with those obtained with 𝒦\mathcal{K}\mathcal{L} upper bounds. Finally, we complete the numerical experiments with new data not treated with the 𝒦\mathcal{K}\mathcal{L} approach.

5.3.1 Theoretical developments of the running example using 𝖵\mathsf{V}

We can return to the discrete-time peak computation problems presented in Subsection 2.3. Recall that the objective functions are the coordinate functions π1\pi_{1} and π2\pi_{2}. It is evident that φ(x)x=αφ(x)\varphi(x)\leq\|x\|=\alpha_{\varphi}(\|x\|) for all x2x\in\mathbb{R}^{2} where αφ:+ss\alpha_{\varphi}:\mathbb{R}_{+}\ni s\mapsto s. Recall that 𝖵\mathsf{V} defined in (22) is proved in Example 2 to be a Lyapunov function for 𝖧\mathsf{H} on all B¯(0,r)\overline{B}(0,\sqrt{r}) where r<ρ¯r<\overline{\rho}. It follows that α𝖵(x)=x2𝖵(x)\alpha_{\mathsf{V}}(\|x\|)=\|x\|^{2}\leq\mathsf{V}(x) for all x2x\in\mathbb{R}^{2} where α𝖵:+ss2\alpha_{\mathsf{V}}:\mathbb{R}_{+}\ni s\mapsto s^{2}.

Now let r<ρ¯r<\overline{\rho}. Suppose that 𝖷\mathsf{X} is such that 𝖷B¯(0,r)\mathsf{X}\subseteq\overline{B}(0,\sqrt{r}) and there exists x𝖷x\in\mathsf{X} with x2=r\|x\|^{2}=r. As the computation depends on rr, we write 𝖵𝖷¯r=supx𝖷𝖵(x)\overline{\mathsf{V}_{\mathsf{X}}}_{r}=\sup_{x\in\mathsf{X}}\mathsf{V}(x). Then, by definition of 𝖵\mathsf{V}, 𝖵𝖷¯r=supx𝖷max{x2,e𝖧(x)2}\overline{\mathsf{V}_{\mathsf{X}}}_{r}=\sup_{x\in\mathsf{X}}\max\{\|x\|^{2},e\|\mathsf{H}(x)\|^{2}\}. By assumption on 𝖷\mathsf{X}, 𝖵𝖷¯r=max{r,e𝔣(r)}\overline{\mathsf{V}_{\mathsf{X}}}_{r}=\max\{r,e\mathfrak{f}(r)\}. Now, following Theorem 7,

h𝖵,r:+ss𝖵𝖷¯rh_{\mathsf{V},r}:\mathbb{R}^{+}\ni s\mapsto\sqrt{s\overline{\mathsf{V}_{\mathsf{X}}}_{r}} (27)

satisfies (h𝖵,r,N𝖧r(𝖵))Γ(ω𝖷i)(h_{\mathsf{V},r},N_{\mathsf{H}}^{r}(\mathsf{V}))\in\mathbb{\Gamma}(\omega_{\mathsf{X}}^{i}) for all i{1,2}i\in\{1,2\}. This means that for all kk\in\mathbb{N}

ω𝖷,k1=supx𝖷π1(𝖧k(x))(N𝖧r(𝖵))k𝖵𝖷¯r and ω𝖷,k2=supx𝖷π2(𝖧k(x))(N𝖧r(𝖵))k𝖵𝖷¯r.\omega_{\mathsf{X},k}^{1}=\sup_{x\in\mathsf{X}}\pi_{1}(\mathsf{H}^{k}(x))\leq\sqrt{(N_{\mathsf{H}}^{r}(\mathsf{V}))^{k}\overline{\mathsf{V}_{\mathsf{X}}}_{r}}\text{ and }\omega_{\mathsf{X},k}^{2}=\sup_{x\in\mathsf{X}}\pi_{2}(\mathsf{H}^{k}(x))\leq\sqrt{(N_{\mathsf{H}}^{r}(\mathsf{V}))^{k}\overline{\mathsf{V}_{\mathsf{X}}}_{r}}\kern 5.0pt.

Finally, it is easy to see that

h𝖵,r1:+ss2/𝖵𝖷¯r.h_{\mathsf{V},r}^{-1}:\mathbb{R}_{+}\ni s\mapsto s^{2}/\overline{\mathsf{V}_{\mathsf{X}}}_{r}.

Since h𝖵,r(0)=0h_{\mathsf{V},r}(0)=0, we only apply h𝖵,r1h_{\mathsf{V},r}^{-1} to positive terms ω𝖷,k1\omega_{\mathsf{X},k}^{1} and ω𝖷,k2\omega_{\mathsf{X},k}^{2}.
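For later computations, note that every evaluation of 𝔉 below follows the same pattern: given the pair (h,β) and a positive term ν_k, one solves h(β^t)=ν_k for t, i.e. t = ln(h⁻¹(ν_k))/ln β. This reading of 𝔉 is an assumption consistent with the displayed values; the exact definition is given earlier in the paper. A minimal sketch:

```python
import math

def stopping_bound(nu_k, h_inv, beta):
    # the real t solving h(beta**t) = nu_k, namely t = ln(h_inv(nu_k)) / ln(beta)
    return math.log(h_inv(nu_k)) / math.log(beta)

# Example in the X_a setting discussed next: h^{-1}(s) = s^2 / 1.85, beta = 1/32,
# and a hypothetical positive term nu_k = 0.0346:
h_inv = lambda s: s * s / 1.85
t = stopping_bound(0.0346, h_inv, 1 / 32)   # about 2.12
```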

5.3.2 Comparison with 𝒦\mathcal{K}\mathcal{L} upper bounds results

In Subsection 4.2.2, we considered the case where 𝖷B¯(0,r)\mathsf{X}\subseteq\overline{B}(0,\sqrt{r}) with rρ¯r\leq\underline{\rho}. We recall that x2<ρ¯\|x\|^{2}<\underline{\rho} is equivalent to 𝖧(x)2<e1x2\|\mathsf{H}(x)\|^{2}<e^{-1}\|x\|^{2}, which is the same as e𝖧(x)2<x2e\|\mathsf{H}(x)\|^{2}<\|x\|^{2}. Then, let r<ρ¯r<\underline{\rho} and 𝖷B¯(0,r)\mathsf{X}\subseteq\overline{B}(0,\sqrt{r}). We have for all x𝖷x\in\mathsf{X}, 𝖵(x)=x2\mathsf{V}(x)=\|x\|^{2} and thus 𝖷¯=𝖵𝖷¯r\overline{\|\cdot\|_{\mathsf{X}}}=\overline{\mathsf{V}_{\mathsf{X}}}_{r}. We conclude that h𝖵,rh_{\mathsf{V},r} defined in (27) and hrh_{r} defined in (21) are equal. Moreover, in Subsection 4.2.2, we chose (hr,e1)Γ(ω𝖷i)(h_{r},e^{-1})\in\mathbb{\Gamma}(\omega_{\mathsf{X}}^{i}) whereas we choose here (h𝖵,r,N𝖧r(𝖵))Γ(ω𝖷i)(h_{\mathsf{V},r},N_{\mathsf{H}}^{r}(\mathsf{V}))\in\mathbb{\Gamma}(\omega_{\mathsf{X}}^{i}). We proved in Example 2 that, in the case where r<ρ¯r<\underline{\rho}, N𝖧r(𝖵)<e1N_{\mathsf{H}}^{r}(\mathsf{V})<e^{-1}. From the first statement of Proposition 7, we have for all kk\in\mathbb{N} such that ω𝖷,ki>0\omega_{\mathsf{X},k}^{i}>0, 𝔉ω(k,h𝖵,r,N𝖧r(𝖵))𝔉ω(k,hr,e1)\mathfrak{F}_{\omega}(k,h_{\mathsf{V},r},N_{\mathsf{H}}^{r}(\mathsf{V}))\leq\mathfrak{F}_{\omega}(k,h_{r},e^{-1}).

For example, we go back to the set 𝖷a={x1,x2}\mathsf{X}_{a}=\{x^{1},x^{2}\} where x1=(1.3,0.3)x^{1}=(-1.3,-0.3)^{\intercal} and x2=(1.1,0.8)x^{2}=(-1.1,-0.8)^{\intercal}. We have 𝖷aB¯(0,1.85)\mathsf{X}_{a}\subseteq\overline{B}(0,\sqrt{1.85}) with x22=1.85<ρ¯\|x^{2}\|^{2}=1.85<\underline{\rho}. Then hr1=h𝖵,r1:+ss2/1.85h_{r}^{-1}=h_{\mathsf{V},r}^{-1}:\mathbb{R}_{+}\ni s\mapsto s^{2}/1.85. Moreover, from Example 2, N𝖧1.85(𝖵)=1/32<e1N_{\mathsf{H}}^{1.85}(\mathsf{V})=1/32<e^{-1} and

𝔉ωa1(2,h𝖵,r,N𝖧1.85(𝖵))=ln(1.85)2ln(max{π1(𝖧2(x1)),π1(𝖧2(x2))})ln(32)2.1183<𝔉ωa1(2,hr,e1)7.3415.\begin{array}[]{lcl}\mathfrak{F}_{\omega_{a}^{1}}(2,h_{\mathsf{V},r},N_{\mathsf{H}}^{1.85}(\mathsf{V}))&=&\dfrac{\ln(1.85)-2\ln\left(\max\{\pi_{1}(\mathsf{H}^{2}(x^{1})),\pi_{1}(\mathsf{H}^{2}(x^{2}))\}\right)}{\ln(32)}\approx 2.1183\\ &<&\mathfrak{F}_{\omega_{a}^{1}}(2,h_{r},e^{-1})\approx 7.3415\kern 5.0pt.\end{array}

This directly provides the stopping integer equal to 2 and saves computations for ωa1\omega_{a}^{1}. We also have

𝔉ωa2(2,h𝖵,r,N𝖧1.85(𝖵))=ln(1.85)2ln(max{π2(𝖧2(x1)),π2(𝖧2(x2))})ln(32)2.3222<𝔉ωa2(2,hr,e1)8.0482.\begin{array}[]{lcl}\mathfrak{F}_{\omega_{a}^{2}}(2,h_{\mathsf{V},r},N_{\mathsf{H}}^{1.85}(\mathsf{V}))&=&\dfrac{\ln(1.85)-2\ln\left(\max\{\pi_{2}(\mathsf{H}^{2}(x^{1})),\pi_{2}(\mathsf{H}^{2}(x^{2}))\}\right)}{\ln(32)}\approx 2.3222\\ &<&\mathfrak{F}_{\omega_{a}^{2}}(2,h_{r},e^{-1})\approx 8.0482\kern 5.0pt.\end{array}

Again, we save computations with respect to 𝔉ωa2(2,hr,e1)\mathfrak{F}_{\omega_{a}^{2}}(2,h_{r},e^{-1}).
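These two evaluations are easy to reproduce. The sketch below assumes the closed form 𝖧(x) = ((‖x‖²−1)x₁ − x₂, x₁ + (‖x‖²−1)x₂)/8 for the running map, inferred from the values of this section (for instance it gives 𝖧((−1.1,−0.8)) = (−0.016875,−0.2225)):

```python
import math

def H(x):
    # assumed closed form of the running map (see the lead-in)
    s = x[0] ** 2 + x[1] ** 2
    return (((s - 1) * x[0] - x[1]) / 8, (x[0] + (s - 1) * x[1]) / 8)

def iterate(x, k):
    for _ in range(k):
        x = H(x)
    return x

x1, x2 = (-1.3, -0.3), (-1.1, -0.8)
log32 = math.log(32)

F1 = (math.log(1.85) - 2 * math.log(max(iterate(x1, 2)[0], iterate(x2, 2)[0]))) / log32
F2 = (math.log(1.85) - 2 * math.log(max(iterate(x1, 2)[1], iterate(x2, 2)[1]))) / log32
# F1 and F2 reproduce the two displayed values (about 2.1183 and 2.3222)
```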

The case 𝖷b={y1,y2}\mathsf{X}_{b}=\{y^{1},y^{2}\} where y1=(2.3,0.013)y^{1}=(-2.3,-0.013)^{\intercal} and y2=(0.7,2.29)y^{2}=(0.7,-2.29)^{\intercal} is less significant, as 5.7341 is close to ρ¯5.7481\underline{\rho}\approx 5.7481. We have 𝖷bB¯(0,5.7341)\mathsf{X}_{b}\subseteq\overline{B}(0,\sqrt{5.7341}) with y22=5.7341<ρ¯\|y^{2}\|^{2}=5.7341<\underline{\rho}. Then hr1=h𝖵,r1:+ss2/5.7341h_{r}^{-1}=h_{\mathsf{V},r}^{-1}:\mathbb{R}_{+}\ni s\mapsto s^{2}/5.7341. Moreover, from Example 2, N𝖧5.7341(𝖵)=1+(5.73411)2640.36581<e10.36788N_{\mathsf{H}}^{5.7341}(\mathsf{V})=\dfrac{1+(5.7341-1)^{2}}{64}\approx 0.36581<e^{-1}\approx 0.36788 and:

𝔉ωb1(1,h𝖵,r,N𝖧5.7341(𝖵))=2ln(π1(𝖧(y2)))ln(5.7341)ln(N𝖧5.7341(𝖵))2.44459<𝔉ωb1(1,hr,e1)2.4584.\mathfrak{F}_{\omega_{b}^{1}}(1,h_{\mathsf{V},r},N_{\mathsf{H}}^{5.7341}(\mathsf{V}))=\dfrac{2\ln(\pi_{1}(\mathsf{H}(y^{2})))-\ln(5.7341)}{\ln(N_{\mathsf{H}}^{5.7341}(\mathsf{V}))}\approx 2.44459<\mathfrak{F}_{\omega_{b}^{1}}(1,h_{r},e^{-1})\approx 2.4584.

We also have

𝔉ωb2(3,h𝖵,r,N𝖧5.7341(𝖵))=2ln(π2(𝖧3(y1)))ln(5.7341)ln(N𝖧5.7341(𝖵))8.04905<𝔉ωb2(3,hr,e1)8.0945.\mathfrak{F}_{\omega_{b}^{2}}(3,h_{\mathsf{V},r},N_{\mathsf{H}}^{5.7341}(\mathsf{V}))=\dfrac{2\ln(\pi_{2}(\mathsf{H}^{3}(y^{1})))-\ln(5.7341)}{\ln(N_{\mathsf{H}}^{5.7341}(\mathsf{V}))}\approx 8.04905<\mathfrak{F}_{\omega_{b}^{2}}(3,h_{r},e^{-1})\approx 8.0945.

5.3.3 Numerical applications

In Subsection 4.2.2, we were limited to 𝖷B¯(0,r)\mathsf{X}\subseteq\overline{B}(0,\sqrt{r}) where r<ρ¯r<\underline{\rho}. Considering such a radius rr, we obtain an upper bound of the form φ(𝖧k(z))ek/2z\varphi(\mathsf{H}^{k}(z))\leq e^{-k/2}\|z\|. Now, we can consider a greater radius rr. We cannot exceed ρ¯\overline{\rho} since the norm explodes starting with vectors of norm greater than ρ¯\sqrt{\overline{\rho}}. As in Subsection 4.2.2, we consider two initial condition sets: the first is a singleton for which the norm of the vector is greater than ρ¯\sqrt{\underline{\rho}}. This highlights the difference from the cases developed in Subsection 4.2.2. In the second case, we start with two vectors with strictly negative coordinates whose norms are greater than ρ¯\sqrt{\underline{\rho}}. Let us define

  1. 1.

    𝖷c={z0}\mathsf{X}_{c}=\{z^{0}\} where z0=(2.2,2)z^{0}=(2.2,-2)^{\intercal};

  2. 2.

    𝖷d={z1,z2}\mathsf{X}_{d}=\{z^{1},z^{2}\} where z1=(2.3,1.9)z^{1}=(-2.3,-1.9)^{\intercal} and z2=(2.5,1.5)z^{2}=(-2.5,-1.5)^{\intercal}.

Again to simplify the notations, we write for all i{1,2}i\in\{1,2\}, ωci=(ωc,ki)k\omega_{c}^{i}=(\omega_{c,k}^{i})_{k\in\mathbb{N}} and ωdi=(ωd,ki)k\omega_{d}^{i}=(\omega_{d,k}^{i})_{k\in\mathbb{N}} instead of ω𝖷ci\omega_{\mathsf{X}_{c}}^{i} and ω𝖷di\omega_{\mathsf{X}_{d}}^{i}. Recall that, for all i{c,d}i\in\{c,d\}, j{1,2}j\in\{1,2\} and kk\in\mathbb{N}:

ωi,kj:=supx𝖷iπj(𝖧k(x)).\omega_{i,k}^{j}:=\sup_{x\in\mathsf{X}_{i}}\pi_{j}(\mathsf{H}^{k}(x)).

We can refine the definition of h𝖵,rh_{\mathsf{V},r} and N𝖧r(𝖵)N_{\mathsf{H}}^{r}(\mathsf{V}). We write hch_{c} for the function associated with 𝖷c\mathsf{X}_{c} and hdh_{d} for the one associated with 𝖷d\mathsf{X}_{d}. First, the smallest rr satisfying 𝖷cB¯(0,r)\mathsf{X}_{c}\subseteq\overline{B}(0,\sqrt{r}) is equal to rc:=z02=8.84>ρ¯r^{c}:=\|z^{0}\|^{2}=8.84>\underline{\rho}. The smallest rr satisfying 𝖷dB¯(0,r)\mathsf{X}_{d}\subseteq\overline{B}(0,\sqrt{r}) is equal to rd=max{z12,z22}=8.9>ρ¯r^{d}=\max\{\|z^{1}\|^{2},\|z^{2}\|^{2}\}=8.9>\underline{\rho}. In both cases, the radius exceeds ρ¯\underline{\rho}, then 𝖵𝖷c¯=e𝖧(z0)223.4535\overline{\mathsf{V}_{\mathsf{X}_{c}}}=e\|\mathsf{H}(z^{0})\|^{2}\approx 23.4535 and 𝖵𝖷d¯=emax{𝖧(z1)2,𝖧(z2)2}=e𝖧(z1)223.9697\overline{\mathsf{V}_{\mathsf{X}_{d}}}=e\max\{\|\mathsf{H}(z^{1})\|^{2},\|\mathsf{H}(z^{2})\|^{2}\}=e\|\mathsf{H}(z^{1})\|^{2}\approx 23.9697. To know the exact value of N𝖧rc(𝖵)N_{\mathsf{H}}^{r^{c}}(\mathsf{V}) and N𝖧rd(𝖵)N_{\mathsf{H}}^{r^{d}}(\mathsf{V}), we have to check whether 𝔣(rc)>ρ¯\mathfrak{f}(r^{c})>\underline{\rho} and 𝔣(rd)>ρ¯\mathfrak{f}(r^{d})>\underline{\rho}. Since 𝔣(rc)8.6281>ρ¯\mathfrak{f}(r^{c})\approx 8.6281>\underline{\rho} and 𝔣(rd)8.81795>ρ¯\mathfrak{f}(r^{d})\approx 8.81795>\underline{\rho}, we conclude that N𝖧rc(𝖵)=𝔤(𝔣(rc))=𝔤(𝔣(8.84))0.9248N_{\mathsf{H}}^{r^{c}}(\mathsf{V})=\mathfrak{g}(\mathfrak{f}(r^{c}))=\mathfrak{g}(\mathfrak{f}(8.84))\approx 0.9248 and N𝖧rd(𝖵)=𝔤(𝔣(rd))=𝔤(𝔣(8.9))0.9706N_{\mathsf{H}}^{r^{d}}(\mathsf{V})=\mathfrak{g}(\mathfrak{f}(r^{d}))=\mathfrak{g}(\mathfrak{f}(8.9))\approx 0.9706. Thus, we get:

hc1:ss2e𝖧(z0)2 and hd1:ss2e𝖧(z1)2h_{c}^{-1}:s\mapsto\dfrac{s^{2}}{e\|\mathsf{H}(z^{0})\|^{2}}\text{ and }h_{d}^{-1}:s\mapsto\dfrac{s^{2}}{e\|\mathsf{H}(z^{1})\|^{2}}

and we have, for all j{1,2}j\in\{1,2\} and all kk such that ωc,kj>0\omega_{c,k}^{j}>0 :

𝔉ωcj(k,hc,N𝖧8.84(𝖵))=2ln(πj(𝖧k(z0)))2ln(𝖧(z0))1ln(𝔤(𝔣(8.84)))\mathfrak{F}_{\omega_{c}^{j}}(k,h_{c},N_{\mathsf{H}}^{8.84}(\mathsf{V}))=\dfrac{2\ln(\pi_{j}(\mathsf{H}^{k}(z^{0})))-2\ln(\|\mathsf{H}(z^{0})\|)-1}{\ln(\mathfrak{g}(\mathfrak{f}(8.84)))}

and for all j{1,2}j\in\{1,2\} and all kk such that ωd,kj>0\omega_{d,k}^{j}>0:

𝔉ωdj(k,hd,N𝖧8.9(𝖵))=2ln(max{πj(𝖧k(z1)),πj(𝖧k(z2))})2ln(𝖧(z1))1ln(𝔤(𝔣(8.9)))\mathfrak{F}_{\omega_{d}^{j}}(k,h_{d},N_{\mathsf{H}}^{8.9}(\mathsf{V}))=\dfrac{2\ln(\max\{\pi_{j}(\mathsf{H}^{k}(z^{1})),\pi_{j}(\mathsf{H}^{k}(z^{2}))\})-2\ln(\|\mathsf{H}(z^{1})\|)-1}{\ln(\mathfrak{g}(\mathfrak{f}(8.9)))}

Then, we are able to apply Algorithm 1 to compute ωi,optj\omega_{i,{\rm opt}}^{j} and the maximal integer solution ki,maxjk_{i,{\rm max}}^{j} for i{c,d}i\in\{c,d\} and j{1,2}j\in\{1,2\}.

The case 𝖷c={z0}\mathsf{X}_{c}=\{z^{0}\} where z0=(2.2,2)z^{0}=(2.2,-2)^{\intercal}

As π1(z0)>0\pi_{1}(z^{0})>0, we can directly compute a first stopping integer for ωc1\omega_{c}^{1}:

𝔉ωc1(0,hc,N𝖧8.84(𝖵))=2ln(2.2)2ln(𝖧(z0))1ln(𝔤(𝔣(8.84)))20.187\mathfrak{F}_{\omega_{c}^{1}}(0,h_{c},N_{\mathsf{H}}^{8.84}(\mathsf{V}))=\dfrac{2\ln(2.2)-2\ln(\|\mathsf{H}(z^{0})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.84))\right)}\simeq 20.187

This means that we must compare the first 20 terms of ωc1\omega_{c}^{1}. For ωc2\omega_{c}^{2}, we cannot compute a stopping integer because π2(z0)0\pi_{2}(z^{0})\leq 0.

Then, we compute 𝖧(z0)\mathsf{H}(z^{0}) (actually already computed since we need this value to compute 𝔉ωc1\mathfrak{F}_{\omega_{c}^{1}}). Since

𝖧(z0)=(2.4061.685)\mathsf{H}(z^{0})=\begin{pmatrix}2.406\\ -1.685\end{pmatrix}

we can update the stopping integer for ωc1\omega_{c}^{1} and we have:

𝔉ωc1(1,hc,N𝖧8.84(𝖵))=2ln(2.406)2ln(𝖧(z0))1ln(𝔤(𝔣(8.84)))17.897\mathfrak{F}_{\omega_{c}^{1}}(1,h_{c},N_{\mathsf{H}}^{8.84}(\mathsf{V}))=\dfrac{2\ln(2.406)-2\ln(\|\mathsf{H}(z^{0})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.84))\right)}\simeq 17.897

This means that we finally compare the first 17 terms of ωc1\omega_{c}^{1}. The second coordinate is still negative; thus, we cannot compute a stopping integer. Now, as

𝖧2(z0)(2.504761.30591)\mathsf{H}^{2}(z^{0})\approx\begin{pmatrix}2.50476\\ -1.30591\end{pmatrix}

we have

𝔉ωc1(2,hc,N𝖧8.84(𝖵))=2ln(π1(𝖧2(z0)))2ln(𝖧(z0))1ln(𝔤(𝔣(8.84)))16.867\mathfrak{F}_{\omega_{c}^{1}}(2,h_{c},N_{\mathsf{H}}^{8.84}(\mathsf{V}))=\dfrac{2\ln(\pi_{1}(\mathsf{H}^{2}(z^{0})))-2\ln(\|\mathsf{H}(z^{0})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.84))\right)}\simeq 16.867

Then, comparing the first 16 terms of $\omega_{c}^{1}$ leads to the conclusion that $\omega_{c,\rm opt}^{1}=\pi_{1}(\mathsf{H}^{2}(z^{0}))$ and $\operatorname{Argmax}(\omega_{c}^{1})=\{2\}$.

For $\omega_{c}^{2}$, we have to wait until $k=5$ to obtain $\pi_{2}(\mathsf{H}^{k}(z^{0}))>0$. Then, we have:

\mathsf{H}^{5}(z^{0})\approx\begin{pmatrix}0.37921\\ 0.15155\end{pmatrix}

and

\mathfrak{F}_{\omega_{c}^{2}}(5,h_{c},N_{\mathsf{H}}^{8.84}(\mathsf{V}))=\dfrac{2\ln(\pi_{2}(\mathsf{H}^{5}(z^{0})))-2\ln(\|\mathsf{H}(z^{0})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.84))\right)}\simeq 88.6294

We must compare the first 88 terms of $\omega_{c}^{2}$ to be guaranteed to pass the maximizer rank. However, the terms $\omega_{c,k}^{2}$ after $k=5$ are strictly smaller than $\pi_{2}(\mathsf{H}^{5}(z^{0}))$ (even the positive ones), and we conclude that $\omega_{c,\rm opt}^{2}=\pi_{2}(\mathsf{H}^{5}(z^{0}))$ and $\operatorname{Argmax}(\omega_{c}^{2})=\{5\}$.

The case $\mathsf{X}_{d}=\{z^{1},z^{2}\}$ where $z^{1}=(-2.3,-1.9)^{\intercal}$ and $z^{2}=(-2.5,-1.5)^{\intercal}$

We write $r^{d}=\max\{\|z^{1}\|^{2},\|z^{2}\|^{2}\}=8.9>\underline{\rho}$.
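As a quick numerical check of this value (plain arithmetic, nothing paper-specific beyond the two initial points):

```python
# Squared Euclidean norms of the two initial points of X_d.
z1 = (-2.3, -1.9)
z2 = (-2.5, -1.5)
n1 = z1[0]**2 + z1[1]**2   # ||z1||^2 = 8.9
n2 = z2[0]**2 + z2[1]**2   # ||z2||^2 = 8.5
rd = max(n1, n2)           # r^d = 8.9, attained by z1
```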

As both vectors have negative coordinates, we must wait for the first integers $k_{1}$, $k_{2}$ such that $\omega_{d,k_{1}}^{1}>0$ and $\omega_{d,k_{2}}^{2}>0$. We have $\min\{k\in\mathbb{N}:\pi_{1}(\mathsf{H}^{k}(z^{1}))>0\}=6$, $\min\{k\in\mathbb{N}:\pi_{2}(\mathsf{H}^{k}(z^{1}))>0\}=7$, $\min\{k\in\mathbb{N}:\pi_{1}(\mathsf{H}^{k}(z^{2}))>0\}=4$ and $\min\{k\in\mathbb{N}:\pi_{2}(\mathsf{H}^{k}(z^{2}))>0\}=5$. As $\pi_{1}(\mathsf{H}^{4}(z^{2}))\approx 0.08955$ and $\pi_{2}(\mathsf{H}^{5}(z^{2}))\approx 0.0309$, we can compute:

\mathfrak{F}_{\omega_{d}^{1}}(4,h_{d},N_{\mathsf{H}}^{8.9}(\mathsf{V}))=\dfrac{2\ln(\pi_{1}(\mathsf{H}^{4}(z^{2})))-2\ln(\|\mathsf{H}(z^{1})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.9))\right)}\simeq 268.47

and

\mathfrak{F}_{\omega_{d}^{2}}(5,h_{d},N_{\mathsf{H}}^{8.9}(\mathsf{V}))=\dfrac{2\ln(\pi_{2}(\mathsf{H}^{5}(z^{2})))-2\ln(\|\mathsf{H}(z^{1})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.9))\right)}\simeq 339.85

meaning that $\omega_{d,{\rm opt}}^{1}=\max\{\omega_{d,k}^{1}:k=0,\ldots,268\}$ and $\omega_{d,{\rm opt}}^{2}=\max\{\omega_{d,k}^{2}:k=0,\ldots,339\}$.

Then, we compute the next images of $z^{1}$ and $z^{2}$ by $\mathsf{H}$. We find that $\pi_{1}(\mathsf{H}^{6}(z^{1}))\approx 0.1512>\pi_{1}(\mathsf{H}^{4}(z^{2}))$, and thus we update the stopping integer for $\omega_{d}^{1}$ and find:

\mathfrak{F}_{\omega_{d}^{1}}(6,h_{d},N_{\mathsf{H}}^{8.9}(\mathsf{V}))=\dfrac{2\ln(\pi_{1}(\mathsf{H}^{6}(z^{1})))-2\ln(\|\mathsf{H}(z^{1})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.9))\right)}\simeq 233.34.

The new stopping integer for $\omega_{d}^{1}$ is set to 233. The next values of $\omega_{d,k}^{1}$ are smaller than $\pi_{1}(\mathsf{H}^{6}(z^{1}))$, and thus we conclude that $\omega_{d,\rm opt}^{1}=\pi_{1}(\mathsf{H}^{6}(z^{1}))$ and $\operatorname{Argmax}(\omega_{d}^{1})=\{6\}$.

The same situation holds for $\omega_{d}^{2}$. Indeed, $\pi_{2}(\mathsf{H}^{7}(z^{1}))\approx 0.0435835>\pi_{2}(\mathsf{H}^{5}(z^{2}))$ and we have

\mathfrak{F}_{\omega_{d}^{2}}(7,h_{d},N_{\mathsf{H}}^{8.9}(\mathsf{V}))=\dfrac{2\ln(\pi_{2}(\mathsf{H}^{7}(z^{1})))-2\ln(\|\mathsf{H}(z^{1})\|)-1}{\ln\left(\mathfrak{g}(\mathfrak{f}(8.9))\right)}\simeq 316.78

The new stopping integer for $\omega_{d}^{2}$ is set to 316. The next values of $\omega_{d,k}^{2}$ are smaller than $\pi_{2}(\mathsf{H}^{7}(z^{1}))$, and thus we conclude that $\omega_{d,\rm opt}^{2}=\pi_{2}(\mathsf{H}^{7}(z^{1}))$ and $\operatorname{Argmax}(\omega_{d}^{2})=\{7\}$.
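The update scheme run throughout this section can be sketched generically: keep the current best term, recompute the stopping bound whenever the best improves, and stop once the index passes the bound. The sketch below is not the paper's Algorithm 1; the map (a contractive rotation), the envelope constant, and all names are toy stand-ins chosen only so that a geometric bound $\omega_k \leq c\,q^{k}$ holds.

```python
import math

def argmax_with_stopping(step, objective, x0, c, q):
    """Maximize k -> objective(x_k) along x_{k+1} = step(x_k), assuming
    the geometric envelope objective(x_k) <= c * q**k with 0 < q < 1.
    Illustrative sketch of the stopping-integer strategy."""
    best_val, best_k, x, k = objective(x0), 0, x0, 0

    def bound(m):
        # Smallest K with c*q**K <= m: beyond K no term can beat m.
        return math.inf if m <= 0 else math.ceil(math.log(m / c) / math.log(q))

    K = bound(best_val)
    while k < K:
        x = step(x)
        k += 1
        v = objective(x)
        if v > best_val:
            best_val, best_k = v, k
            K = bound(best_val)  # a better term shrinks the horizon
    return best_k, best_val

# Toy system: contraction factor 0.95 composed with a rotation by pi/7;
# the objective is the first coordinate, as for omega^1 above.
a, r = 0.95, math.pi / 7
def step(x):
    return (a * (math.cos(r) * x[0] - math.sin(r) * x[1]),
            a * (math.sin(r) * x[0] + math.cos(r) * x[1]))
def obj(x):
    return x[0]

x0 = (-1.0, -1.0)
c = math.hypot(*x0)  # obj(x_k) <= ||x_k|| = c * a**k
k_star, v_star = argmax_with_stopping(step, obj, x0, c, a)
```

As in the worked example, the first coordinate starts negative (no finite bound is available), turns positive after a few steps, and each improvement of the best term tightens the horizon until the scan terminates.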

6 Concluding remarks

In this paper, we solved specific discrete-time peak computation problems. In general, a discrete-time peak computation problem consists in searching, for a given objective function, for the reachable value of a given discrete-time system that maximizes the objective function. The optimal value of this maximization problem can be viewed as the supremum of a sequence of optimal values. The solution proposed in this paper is thus based on previous results allowing us to compute the maximum of a real sequence. The computational method requires a particular sequence greater than the analyzed sequence. This particular sequence is the image of a positive convergent geometric sequence under a strictly increasing continuous function on $[0,1]$. In this paper, we use the structure of the problem (i.e., the dynamical system) to construct the strictly increasing continuous function and the positive convergent geometric sequence. The construction of these elements requires the stability of the discrete-time system, as they are built from $\mathcal{K}\mathcal{L}$ certificates and Lyapunov functions. We illustrate the techniques on an example from the literature.

Three main questions remain open and are left for future work. First, how can we solve a discrete-time peak computation problem when the underlying system diverges? Second, for nonlinear systems, how can discrete-time peak computation problems be handled with an infinite but bounded set of initial conditions? Finally, how can the number of superfluous computations be reduced?

References

  • [Adj17] A. Adjé. Proving properties on PWA systems using copositive and semidefinite programming. In Numerical Software Verification: 9th International Workshop, NSV 2016, Toronto, ON, Canada, July 17-18, 2016, Revised Selected Papers 9, pages 15–30. Springer, 2017.
  • [Adj21] A. Adjé. Quadratic maximization of reachable values of affine systems with diagonalizable matrix. J. Optim. Theory Appl., 189(1):136–163, 2021.
  • [Adj24] A. Adjé. A parametric optimization point-of-view of comparison functions. arXiv preprint arXiv:2408.14440, 2024.
  • [Adj25a] A. Adjé. Quadratic maximization of reachable values of stable discrete-time affine systems. To appear in Optimization, pages 1–52, 2025.
  • [Adj25b] A. Adjé. On the maximization of real sequences, 2025.
  • [AG24] A. A. Ahmadi and O. Günlük. Robust-to-dynamics optimization. Mathematics of Operations Research, 2024.
  • [AGM15] A. Adjé, P.-L. Garoche, and V. Magron. Property-based polynomial invariant generation using sums-of-squares optimization. In Static Analysis: 22nd International Symposium, SAS 2015, Saint-Malo, France, September 9-11, 2015, Proceedings 22, pages 235–251. Springer, 2015.
  • [AL11] M. F. Anjos and J.B Lasserre. Handbook on semidefinite, conic and polynomial optimization, volume 166. Springer Science & Business Media, 2011.
  • [APS18] U. M. Ahiyevich, S. E. Parsegov, and P. S. Shcherbakov. Upper bounds on peaks in discrete-time linear systems. Automation and Remote Control, 79:1976–1988, 2018.
  • [ATP10] A. R. Teel, D. Nešić, A. Loría, and E. Panteley. Summability characterizations of uniform exponential and asymptotic stability of sets for difference inclusions. Journal of Difference Equations and Applications, 16(2-3):173–194, 2010.
  • [Ber15] D. Bertsekas. Convex optimization algorithms. Athena Scientific, 2015.
  • [BTN01] A. Ben-Tal and A. Nemirovski. Lectures on modern convex optimization: analysis, algorithms, and engineering applications. SIAM, 2001.
  • [BYG17] C. Belta, B. Yordanov, and E. A. Gol. Formal methods for discrete-time dynamical systems, volume 89. Springer, 2017.
  • [Ela05] S. Elaydi. An introduction to difference equations. Springer, 2005.
  • [Els12] E.M. Elsayed. Solutions of rational difference systems of order two. Mathematical and Computer Modelling, 55(3):378–384, 2012.
  • [GGLW14] R. Geiselhart, R. H. Gielen, M. Lazar, and F. R. Wirth. An alternative converse Lyapunov theorem for discrete-time systems. Systems & Control Letters, 70:49–59, 2014.
  • [GH15] P. Giesl and S. Hafstein. Review on computational methods for Lyapunov functions. Discrete and Continuous Dynamical Systems-B, 20(8):2291–2331, 2015.
  • [GRR10] M. R. Gutiérrez, M. A. Reyes, and H. C. Rosu. A note on Verhulst's logistic equation and related logistic maps. Journal of Physics A: Mathematical and Theoretical, 43(20):205204, 2010.
  • [HR18] Y. Halim and J. F. Rabago. On the solutions of a second-order difference equation in terms of generalized Padovan sequences. Mathematica Slovaca, 68, 2018.
  • [Kel14] C. M. Kellett. A Compendium of comparison function results. Math. Control. Signals Syst., 26(3):339–374, 2014.
  • [Kel15] C. M. Kellett. Classical converse theorems in Lyapunov's second method. Discrete and Continuous Dynamical Systems - B, 20(8):2333–2360, 2015.
  • [Kha02] H.K. Khalil. Nonlinear systems. Pearson Education. Prentice Hall, 3rd edition, 2002.
  • [KP01] W. G. Kelley and A. C. Peterson. Difference equations: an introduction with applications. Academic press, 2001.
  • [Las15] J.-B. Lasserre. An introduction to polynomial and semi-algebraic optimization. Cambridge Texts in Applied Mathematics. Cambridge University Press, 2015.
  • [Laz06] M. Lazar. Model predictive control of hybrid systems: stability and robustness. Phd thesis, T.U. Eindhoven, Sept 2006.
  • [LHK14a] H. Li, S. Hafstein, and C. M. Kellett. Computation of Lyapunov functions for discrete-time systems using the Yoshizawa construction. In 53rd IEEE Conference on Decision and Control, pages 5512–5517. IEEE, 2014.
  • [LHK14b] H. Li, S. Hafstein, and C. M. Kellett. Computation of Lyapunov functions for discrete-time systems using the Yoshizawa construction. In 53rd IEEE Conference on Decision and Control, pages 5512–5517, 2014.
  • [MHL15] Victor Magron, Didier Henrion, and Jean-Bernard Lasserre. Semidefinite approximations of projections and polynomial images of semialgebraic sets. SIAM Journal on Optimization, 25(4):2143–2164, 2015.
  • [MHS20] J. Miller, D. Henrion, and M. Sznaier. Peak estimation recovery and safety analysis. IEEE Control Systems Letters, 5(6):1982–1987, 2020.
  • [NT04] D. Nešić and A. R. Teel. Matrosov theorem for parameterized families of discrete-time systems. Automatica, 40(6):1025–1034, 2004.
  • [RKML06] S. V. Rakovic, E. C. Kerrigan, D. Q. Mayne, and J. Lygeros. Reachability analysis of discrete-time systems with disturbances. IEEE Transactions on Automatic Control, 51(4):546–561, 2006.
  • [Son98] E. D. Sontag. Comments on integral variants of ISS. Systems & Control Letters, 34(1):93–100, 1998.
  • [Ste14] S. Stević. Representation of solutions of bilinear difference equations in terms of generalized Fibonacci sequences. Electron. J. Qual. Theory Differ. Equ., 67(1):15, 2014.

Appendix

Lemma 3.

Let us consider the map $\mathsf{H}$ in (4). Let $x\in\mathbb{R}^{2}$ be a nonzero vector such that $\|\mathsf{H}^{k}(x)\|$ tends to 0 as $k$ tends to $+\infty$. Then there exist $k_{1},k_{2}\in\mathbb{N}$ such that $\pi_{1}(\mathsf{H}^{k_{1}}(x))>0$ and $\pi_{2}(\mathsf{H}^{k_{2}}(x))>0$.

Proof.

Actually, it suffices to prove that for all $x$ in the open unit ball, there exist $k_{1},k_{2}\in\mathbb{N}$ such that $\pi_{1}(\mathsf{H}^{k_{1}}(x))>0$ and $\pi_{2}(\mathsf{H}^{k_{2}}(x))>0$. Indeed, if $\|\mathsf{H}^{k}(x)\|$ converges to 0, then $\mathsf{H}^{k}(x)$ eventually enters the open unit ball. The worst situation is when $\mathsf{H}^{k}(x)$ has only negative coordinates before entering the open unit ball.

Therefore, let us consider a nonzero $z\in\mathbb{R}^{2}$ such that $\|z\|<1$. First, we consider $\pi_{1}$. If $z_{1}>0$, the result obviously holds. Therefore, we suppose that $z_{1}\leq 0$. If $z_{2}\leq 0$ (with $z_{2}\neq 0$ if $z_{1}=0$), as $\|z\|<1$, we have $\pi_{1}(\mathsf{H}(z))>0$. Now, suppose that $z_{2}>0$. Since $z_{1}<0$, we have $\pi_{2}(\mathsf{H}(z))<0$. Recall that for all $y\in\overline{B}(0,\sqrt{r})$ where $r<\overline{\rho}$, we have $\|\mathsf{H}(y)\|^{2}<\|y\|^{2}$. If $\pi_{1}(\mathsf{H}(z))>0$, the result follows. Otherwise, $\pi_{1}(\mathsf{H}(z))\leq 0$, and we get $\pi_{1}(\mathsf{H}^{2}(z))>0$ since $\|\mathsf{H}(z)\|<1$, $\pi_{1}(\mathsf{H}(z))\leq 0$ and $\pi_{2}(\mathsf{H}(z))<0$.

Now, we prove the result for $\pi_{2}$. The result trivially holds if $z_{2}>0$. So, we suppose that $z_{2}\leq 0$. If $z_{1}\geq 0$ ($z_{1}\neq 0$ if $z_{2}=0$), then, as $\|z\|<1$, we get $\pi_{2}(\mathsf{H}(z))>0$. Now, assume that $z_{1}<0$. This leads to $\pi_{1}(\mathsf{H}(z))>0$. If $\pi_{2}(\mathsf{H}(z))>0$, the result holds. Assuming that $\pi_{2}(\mathsf{H}(z))<0$, then from the facts that $\|\mathsf{H}(z)\|<\|z\|<1$, $\pi_{1}(\mathsf{H}(z))>0$ and $\pi_{2}(\mathsf{H}(z))<0$, we get $\pi_{2}(\mathsf{H}^{2}(z))>0$.