Cyber Risk Management and Mitigation Under Controlled Stochastic SIS Model
Abstract
In this paper, we formulate cyber risk management and mitigation as a stochastic optimal control problem under a stochastic Susceptible-Infected-Susceptible (SIS) epidemic model. To capture the dynamics and interplay of management and mitigation strategies, we introduce two stochastic controls: (i) a proactive risk management control to reduce external cyber attacks and internal contagion effects, and (ii) a reactive mitigation control to accelerate system recovery from cyber infection. The interplay between these controls is modeled by minimizing the expected discounted running costs, which balance proactive management expenses against reactive mitigation expenditures. We derive the associated Hamilton-Jacobi-Bellman (HJB) equation and characterize the value function as its unique viscosity solution. For numerical implementation, we propose a Policy Improvement Algorithm (PIA) and prove its convergence via Backward Stochastic Differential Equations (BSDEs). Finally, we present numerical results through a benchmark example, suboptimal control analysis, sensitivity analysis, and comparative statics.
Keywords: Cyber risk modeling, stochastic SIS model, stochastic control, policy improvement algorithm
1 Introduction
Cybersecurity has emerged as a critical concern in the industrial, financial, insurance, and governmental sectors due to the increasing digitization and interconnectedness of various systems. Currently, there is no unanimous definition of cyber risk. Widely accepted definitions include, but are not limited to, the following: cyber risks are operational risks that may result in a potential violation of the confidentiality, availability, or integrity of information systems (Cebula and Young, 2010); cyber risk is a financial risk that is associated with network and computer incidents and leads to the failure of information systems (Böhme and Kataria, 2006; Böhme et al., 2010).
In academia, the study of cyber risk has attracted the attention of researchers across many fields. For example, researchers in computer science have long been aware of the importance of security in cyberspace and have made many contributions concerning cyber risk detection (Moore et al., 2006; Garcia-Teodoro et al., 2009; Cárdenas et al., 2011; Liu et al., 2016), security breach prediction (Zhan et al., 2015; Bakdash et al., 2018), and computer system enhancement (Jang-Jaccard and Nepal, 2014). In control systems and automation, researchers are more interested in the design of various optimal (deterministic) controls against denial-of-service and/or false data injection types of cyber attacks on cyber-physical systems; see, e.g., Amin et al. (2009); Pasqualetti et al. (2013); Fawzi et al. (2014); Wakaiki et al. (2019); Liu et al. (2025) and the associated references. In the field of business and corporate finance, studies focus on investigating cyber risk under the framework of enterprise risk management (Stoneburner et al., 2002; Gordon et al., 2003; Öğüt et al., 2011; Paté-Cornell et al., 2018). In addition, in insurance and actuarial science, researchers are more interested in modeling cyber risk in terms of its frequency, severity, and dependence with a range of statistical and stochastic techniques; see, for example, Herath and Herath (2011); Mukhopadhyay et al. (2013); Eling and Loperfido (2017); Eling and Jung (2018); Xu and Hua (2019); Dou et al. (2020); Malavasi et al. (2022) and references therein. For a recent comprehensive cross-disciplinary review of the modeling and management of cyber risk, one can refer to He et al. (2024), and for a review of data availability associated with cyber risk, one can refer to Cremer et al. (2022).
Traditional cyber risk models (especially in the insurance and actuarial science community) are adept at capturing statistical properties and temporal trends of losses, yet they often fail to account for the propagation of cyber threats. By studying how cyber risks spread in a closed system, one can identify critical factors that determine the scale of contagion and the magnitude of the losses. This deeper understanding enhances the design of risk management and mitigation strategies. The parallels between cyber risk and disease spread make epidemiology a natural framework for analyzing cyber risk, given that both phenomena involve contagion, interdependent exposure, and dynamic evolution over time.
Inspired by methods from genetic epidemiology, Gil et al. (2014) introduced a statistical framework to assess the susceptibility of individual nodes in a network. Their approach treats the services running on a host as the defining risk factor for cyber threat exposure, drawing an analogy to genetic penetrance models. This conceptual borrowing allows for a structured analysis of how certain network configurations increase vulnerability. In addition, Liu et al. (2016) developed an innovative compartmental model for malware propagation by adapting epidemiological concepts to cybersecurity, where computers are recognized as heterogeneous nodes with different protection levels in the network. The model categorizes nodes into three distinct states: weakly protected susceptible (W-nodes), strongly protected susceptible (S-nodes), and infected (I-nodes). The authors discussed the malware-free equilibrium in terms of the “basic reproduction number” (an important metric in epidemiological models). These findings align with earlier work by Mishra and Pandey (2014), who employed a susceptible-exposed-infectious-susceptible-with-vaccination model to analyze worm propagation. More recently, Fahrenwaldt et al. (2018) model the spread of cyber infections with an interacting Markov chain and claims with a marked point process, where the spread process is a pure Poisson jump process whose transitions are described by the susceptible-infected-susceptible (SIS) epidemic model. The dependence among different nodes (i.e., firms, computers, or devices) is modeled by an undirected network. Through a simulation study, the authors demonstrate that network topology plays a crucial role in determining insurance prices and designing effective risk management strategies. Xu and Hua (2019) employ both Markov and non-Markov processes within an SIS network model. Their Markov formulation incorporates dual infection pathways, where Poisson processes are used to capture both internal network transmission and external threats. This formulation yields valuable dynamic upper bounds for infection probabilities and stationary probability estimates. The empirical validation in the study reveals the critical influence of recovery rates on insurance premium calculations. Later, Antonio et al. (2021) extend the Markov-based SIS model by incorporating network clustering coefficients, such that the model can capture the local network clustering that inhibits epidemic spread through modified transition probabilities. Using the N-intertwined mean-field approximation, they derive dynamic infection probability bounds that improve premium accuracy when applied to both synthetic and real-world networks. Hillairet and Lopez (2021) describe the spread of a cyber attack at a global level with a susceptible-infected-recovered (SIR) model and approximate the cumulative number of individuals in each group with a Gaussian process. Furthermore, Hillairet et al. (2022) propose a multi-group epidemic model to assess the impact of large-scale cyber attacks on insurance portfolios, which captures the interdependencies between actors and can be calibrated with limited data, enabling efficient scenario analysis of cyber events. For other relevant studies in the recent literature, the readers may refer to He et al. (2024) and references therein.
Recent studies have significantly advanced the modeling of cyber risk propagation through epidemiological approaches, in particular, using the SIS model and its extensions with network-dependent interactions. While these models effectively capture contagion dynamics, dependence structures, and loss distributions, they remain descriptive rather than prescriptive. To be specific, many studies focus on predicting the spread of contagion and estimating losses, but fail to incorporate active intervention or control strategies to manage or mitigate cyber risk in real time. Hence, a critical limitation of existing cyber risk modeling with epidemic models is the absence of formal control mechanisms for identifying optimal risk management and mitigation strategies. This gap mirrors recent developments in the biological epidemic control literature, where later works incorporate various optimal (stochastic) controls (such as vaccination) into deterministic or stochastic SI(R)S models; see, for example, Boukanjime et al. (2021); Tran and Yin (2021); Barnett et al. (2023); Sonveaux and Winkin (2023); Federico et al. (2024); Chen et al. (2025).
Therefore, in this paper, we address such a gap by formalizing the cyber risk management and mitigation problem as a stochastic optimal control problem under a stochastic Susceptible-Infected-Susceptible (SIS) model. The objective of this paper is not to develop a sophisticated stochastic model that can fit perfectly to any existing cyber risk and cybersecurity data, but, rather, to provide a theoretical stochastic control framework for studying the cyber risk management and mitigation problems. The contributions of our research are summarized as follows:
•
Cyber risk has emerged as a critical threat to modern enterprises, governments, and financial systems, with attacks exhibiting potential contagion across various sectors. There is a clear trend in the recent literature of employing stochastic processes, especially stochastic epidemic models, to capture the various contagion mechanisms of cyber risk. However, no studies have investigated (optimal) risk management and risk mitigation strategies under such a stochastic contagion model. We bridge this gap in the literature by formulating the contagion dynamics of cyber risks in a closed system as a controlled stochastic SIS model. Two distinct control variables are considered: one is a proactive control mechanism governing risk management, which reduces or prevents external cyber attacks and internal contagion effects through measures such as firewall upgrades, isolating infected systems, and disconnecting breached servers; the other is a reactive control mechanism associated with risk mitigation, which refers to the set of immediate tactical actions taken to recover the affected systems, for example, removing malware, closing backdoors, or resetting compromised credentials. Our work is the first to integrate these dual controls into a unified stochastic optimal control framework for cyber risk, providing a rigorous mathematical foundation for dynamic decision-making in cyber risk management and mitigation. Such a dual control framework captures, to some extent, the trade-off between risk prevention/management and risk mitigation, which is a key challenge in practice, especially when facing resource constraints.
•
We characterize the value function (the expected discounted running costs) as the unique viscosity solution to the Hamilton-Jacobi-Bellman (HJB) equation derived from our stochastic control formulation for cyber risk management and mitigation. By applying viscosity theory, we establish the existence and uniqueness of the solution of the HJB equation under mild regularity assumptions. This analytical framework ensures numerical tractability, as the uniqueness property guarantees well-posedness for iterative computational methods via the dynamic programming principle. Moreover, our approach naturally accommodates extensions to more complex settings, including jump-diffusion processes and generalized cost structures, broadening its applicability beyond the current setup.
•
For the numerical implementation, we develop a Policy Improvement Algorithm (PIA) grounded in the Bellman-Howard policy iteration framework for such an infinite-time horizon stochastic cyber risk control problem. The algorithm demonstrates superior computational efficiency compared to alternative numerical methods, such as Markov chain approximation with value iterations, and typically converges within a few iterative steps. In addition, we rigorously establish the algorithm’s convergence (at an exponential rate) and its stability under error perturbations during the iteration by mapping the value function at each step to the unique solution of an (iteratively defined) infinite-horizon Backward Stochastic Differential Equation (BSDE). Given the inherent challenge of our underlying stochastic control problem (i.e., an infinite-time horizon problem with unknown boundary conditions), these results constitute a nontrivial extension of finite-horizon frameworks, such as that of Kerimkulov et al. (2020). We further emphasize that our algorithm and the BSDE-based convergence and stability analysis can be applied seamlessly to stochastic control problems with a random horizon or to optimal stopping problems. Finally, while the focus of this paper is on a one-dimensional diffusion process (specifically, the stochastic SIS model), all algorithmic and theoretical results extend directly to multi-dimensional controlled diffusion processes, enabling the application of our results to more general compartmental models, for example, the Susceptible-Infectious-Recovered (SIR) model, the Susceptible-Infectious-Recovered-Vaccinated (SIRV) model, etc.
The remainder of the paper is organized as follows. Section 2 presents our controlled stochastic SIS model featuring dual control variables for cyber risk management and mitigation. We reformulate the problem as an optimal stochastic control problem under a (drift-controlled) diffusion model, and derive fundamental properties of the value function. Section 3 establishes our core theoretical contribution, where we prove that the value function can be characterized as the unique viscosity solution to the associated Hamilton-Jacobi-Bellman equation. In Section 4, we propose a numerical method, namely the Policy Improvement Algorithm, together with convergence results based on the theory of infinite-time horizon BSDEs. We also provide various numerical results on the optimal cyber risk management and mitigation strategies, including comprehensive sensitivity analysis and comparative statics. The conclusion is given in Section 5. Finally, some technical proofs are collected in the Appendices for completeness.
2 Stochastic SIS model and mathematical formulation
We start with a standard Susceptible-Infected-Susceptible (SIS) model, which can be characterized by two state variables, namely the number of susceptible nodes (for example, terminal computers or servers in a network) , and the number of cyber-infected nodes . Assume the total number of nodes in the system at time is . As is usual in studying the classical deterministic SIS model in the epidemiological literature, we normalize the system size to one, so that the susceptible and infected nodes and in the system are replaced by and , respectively. We further introduce the stochastic version of the SIS model, see, e.g., Tran and Yin (2021); Barnett et al. (2023), and the dynamics of the state variables are expressed as
(2.1) |
where we introduce white noise shocks , which is a parameter perturbation of with volatility . Here, is the rate at which susceptible nodes (devices/users) are infected by external cyber threats (e.g., malicious emails, denial-of-service attacks); denotes the rate at which the infected nodes propagate malware to susceptible nodes; and is the baseline (unassisted) rate at which compromised nodes recover and become functional and threat-free again, which may be due to baseline cybersecurity measures such as antivirus software or basic IT policies. For simplicity, we assume that all the abovementioned model parameters in (2.1) are constants.
Now, we extend (2.1) to a controlled stochastic SIS model with two control variables. On one hand, we allow the above SIS model to be controlled under cyber risk management protocols through proactive protection measures (see, e.g., Barnett et al., (2023) for a similar control variable in an epidemic study). Let be the fraction of nodes (both susceptible and cyber-infected) at time in the system under protection with certain measures, which can reduce the effectiveness of both the external cyber threats to susceptible nodes and propagation from cyber-infected nodes to susceptible nodes. Note that it is possible to introduce an extra parameter and replace by in (2.1), which captures the incomplete effectiveness of the protection measure. We set and ignore this parameter in our study; for a detailed discussion, one can consult, for example, Barnett et al., (2023). On the other hand, we further replace in (2.1) by , where is a controlled recovery rate at time , which can be interpreted as the enhanced recovery rate due to certain reactive interventions to the cyber-infected nodes.
We assume that the pair of cyber risk management and mitigation control actions takes values in a nonempty set in . Then, the dynamics of the controlled stochastic SIS model can be described by the following system of stochastic differential equations (SDEs):
(2.2) |
Note that the control is applied to both susceptible nodes and infected nodes; hence, a quadratic form appears in the second term of each equation in (2.2).
According to a similar analysis in Theorem 2.1 of Tran and Yin, (2021), we can formulate the above controlled SIS model into a one-dimensional controlled diffusion process. To be specific, for any initial value satisfying and , the system of SDEs (2.2) has a unique strong global solution , and almost surely for any . Therefore, in the following, we focus on the dynamics of the fraction of cyber-infected nodes in the system,
We shall provide a rigorous proof of the above assertion in Proposition 2.1 below. To proceed with our analysis under simplified notations, we reformulate the above cyber risk management and mitigation problem as a stochastic control problem for a general diffusion process given in (2.3) below.
Let be a filtered probability space satisfying the usual condition, and on which a one–dimensional Brownian motion is defined. Let denote the fraction of cyber-infected nodes in the system at time , which is driven by the following controlled diffusion process:
(2.3) |
where , (with a slight abuse of notation, we use the volatility parameter in (2.2) as the function here in (2.3) for notational simplicity), and and are model parameters in the stochastic SIS system (2.2). The control processes are progressively measurable, where is the control action space; we may simply set , and denotes the set . Furthermore, throughout the paper, we let and denote the probability measure and the expectation operator when , respectively.
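To fix ideas, the following sketch simulates one discretized path of the infected fraction by the Euler-Maruyama scheme. It is a minimal illustration only: the drift below is an assumed stand-in for the controlled SIS drift in (2.2)-(2.3) (chosen so that a management control equal to zero corresponds to maximal prevention, in line with the numerical results of Section 4), the diffusion coefficient is assumed to be of the form σx(1-x), and all parameter values and function names are hypothetical.

```python
# Euler-Maruyama sketch of the one-dimensional controlled diffusion (2.3).
# The drift/diffusion below are illustrative stand-ins (not the paper's exact equations):
# drift = external attacks + internal contagion (both damped by the management control u,
# with u = 0 meaning maximal prevention) minus baseline-plus-enhanced recovery;
# diffusion = sigma * x * (1 - x).
import numpy as np

def drift(x, u, nu, alpha=0.3, beta=0.8, gamma=0.2):
    s = 1.0 - x                                   # susceptible fraction
    return alpha * u * s + beta * (u ** 2) * s * x - (gamma + nu) * x

def diffusion(x, sigma=0.1):
    return sigma * x * (1.0 - x)                  # vanishes at both endpoints of [0, 1]

def simulate_path(x0, u, nu, T=20.0, dt=1e-3, seed=0):
    """Simulate the infected fraction under constant controls (u, nu)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(scale=np.sqrt(dt))
        x[k + 1] = x[k] + drift(x[k], u, nu) * dt + diffusion(x[k]) * dw
        # The exact solution stays in (0, 1) (Proposition 2.1); the Euler scheme may not,
        # so we clip the discretized path for safety.
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)
    return x

if __name__ == "__main__":
    path = simulate_path(x0=0.05, u=0.5, nu=0.1)
    print("terminal infected fraction:", path[-1])
```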
Assumption 2.1
The mappings and are continuous in , and the former is uniformly in the control . There exists a constant such that for any , and all we have
(2.4) |
Moreover, for all and , it also holds that
and
for some .
Assumption 2.1 holds obviously in our stochastic SIS model. To be specific, consider any , one has
and
Hence, by letting , one arrives at (2.4). Moreover, take any and , we have
and
for some .
Now, we are ready to define the cost functional associated with our cyber risk management and mitigation problem (2.3) as
(2.5) |
where is the discounting factor and is a running cost function. In addition, without loss of generality, we restrict our study to the set of admissible controls given below. Let denote all progressively measurable random processes taking values in , and define the set of admissible control processes by
Then, the value function is defined as
(2.6) |
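For a fixed admissible control, the cost functional (2.5) (and hence, by minimizing over controls, the value function (2.6)) can be approximated by plain Monte Carlo after truncating the infinite horizon at a time T for which the discount factor e^{-δT} is negligible. The sketch below is illustrative only: the drift, diffusion, and running cost passed in the demo are hypothetical placeholders rather than the calibrated model of Section 4.

```python
# Monte Carlo sketch of the discounted cost functional (2.5) under a fixed constant control.
# Truncate the infinite horizon at T, simulate paths of (2.3), and average the discounted
# running costs. All model callables are user-supplied; the demo lambdas are assumptions.
import numpy as np

def estimate_cost(x0, u, nu, drift, diffusion, running_cost,
                  delta=0.1, T=60.0, dt=1e-2, n_paths=2000, seed=1):
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.full(n_paths, float(x0))
    acc = np.zeros(n_paths)
    disc = 1.0
    for _ in range(n_steps):
        acc += disc * running_cost(x, u, nu) * dt          # e^{-delta*t} f(X_t, u, nu) dt
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = np.clip(x + drift(x, u, nu) * dt + diffusion(x) * dw, 0.0, 1.0)
        disc *= np.exp(-delta * dt)
    return acc.mean()

if __name__ == "__main__":
    # Hypothetical coefficients for illustration only.
    drift = lambda x, u, nu: 0.3 * u * (1 - x) + 0.8 * u**2 * (1 - x) * x - (0.2 + nu) * x
    diffusion = lambda x: 0.1 * x * (1 - x)
    cost = lambda x, u, nu: 1.0 + 5.0 * x + 2.0 * (1 - u) + 3.0 * nu
    print(estimate_cost(0.2, u=0.5, nu=0.1,
                        drift=drift, diffusion=diffusion, running_cost=cost))
```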
To show the well-posedness of the controlled diffusion process (2.3), we prove the following proposition.
Proposition 2.1
For any control and initial value , the controlled SDE (2.3) has a unique global positive solution for all such that
Proof. The proof is similar to the proof of Theorem 3.1 in Gray et al., (2011). Let be the explosion time of the driven by (2.3). Since the drift and diffusion coefficients are locally Lipschitz, there exists a unique local solution for . For any initial value , consider a sufficiently large such that . Then, for each , define the first exit time
with the convention that . Obviously, is increasing in , hence we let , and almost surely. Then, the rest is to show almost surely. We prove the statement by the method of contradiction. Assume that there exists and such that
Then, there is a sufficiently large such that for all . We define an auxiliary function for ; then, for any and , an application of Dynkin’s formula gives,
where is the infinitesimal generator of for any fixed and given as
With some simple algebra, one has
where . Take a finite modification of random process by defining
where for every . Then almost surely, but is bounded by everywhere. By Tonelli’s theorem applied to the nonnegative process , we have
where the last inequality follows from the boundedness of on all by the definition of the admissible control set . In particular, the joint measurability of and w.r.t. product -algebra , together with the bounds
ensure the validity of applying Tonelli’s theorem.
Then, with the help of Grönwall’s inequality, one has
(2.7) |
where we applied the fact that equals either or . Then, by sending in (2.7), one arrives at the contradiction , which completes the proof.
Assumption 2.2
There exists a constant , and such that for any pair of controls , we have
for all . Additionally, let , the cost function is bounded by a constant such that
Remark 2.1
The typical form of the cost function we shall investigate later in this paper can be expressed as
(2.8) |
where is the underlying marginal cost of running the system, is the marginal cost generated from infected nodes, and are the marginal costs associated with the cyber risk management control for susceptible nodes and cyber-infected nodes, respectively. We shall assume that , which means that the costs associated with management for infected nodes are, in general, higher than those for functional and threat-free nodes. Finally, is the marginal cost of the cyber risk mitigation control. Note that Assumption 2.2 is satisfied for the class of cost functions defined in (2.8).
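Written with placeholder symbols (the paper's own coefficient names in (2.8) are not reproduced here, and the way the management intensity enters is left as an unspecified function m(·)), the cost structure described in this remark is, schematically,

```latex
% Schematic structure of the running cost (2.8); all symbols below are placeholders.
f(i, u, \nu) \;=\; c_0 \;+\; c_1\, i
  \;+\; \big( c_S\,(1-i) \;+\; c_I\, i \big)\, m(u)
  \;+\; c_\nu\, \nu, \qquad c_I \ge c_S,
```

where i denotes the cyber-infected ratio, c_0 the baseline running cost, c_1 the marginal cost generated from infected nodes, c_S and c_I the management cost coefficients for susceptible and infected nodes, m(u) the management effort implied by the protection control, and c_ν the marginal cost of the mitigation control.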
To proceed, we show some important properties of the value function in the following proposition. Throughout the paper, we assume that Assumptions 2.1 and 2.2 hold.
Proposition 2.2
(i)
If the running cost function is nondecreasing in for each pair of admissible controls, then is nondecreasing in for all .
(ii)
For any , there exists a constant such that
(2.9)
Proof. See Appendix A.
3 Hamilton-Jacobi-Bellman equation and the viscosity solution
In this section, we first state the dynamic programming equation associated with our stochastic control problem and derive (heuristically) the corresponding Hamilton-Jacobi-Bellman (HJB) equation. Let be any -stopping time; then, for any , we have
(3.1) |
The proof of (3.1) for controlled diffusion processes follows the classical arguments in the literature, see, for example, Theorem 3.1.6 of Krylov (1980), where the main point is the continuity of the value function, which is indeed the case in our study.
If the value function is sufficiently smooth, then by following the standard arguments with the help of the dynamic programming principle and Itô’s formula, we obtain the following HJB equation,
(3.2) |
Note that the heuristic arguments verifying that the value function is a classical solution to the HJB equation (3.2) assume a priori that is twice continuously differentiable on , which is not the case here, where we only have Lipschitz continuity in . Hence, we adopt the concept of the viscosity solution introduced in Crandall and Lions (1983) and characterize the optimal value function as the unique viscosity solution to the HJB equation (3.2).
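In generic notation, writing b and σ for the drift and diffusion coefficients of (2.3), f for the running cost, δ for the discount rate, U for the control set, and 𝒜 for the set of admissible controls, the dynamic programming principle (3.1) and the stationary HJB equation (3.2) take the standard schematic forms below; the symbols here are placeholders for those used in the paper.

```latex
% Schematic rendering only; notation is generic, not taken verbatim from the paper.
V(x) \;=\; \inf_{(u,\nu)\in\mathcal{A}}\;
      \mathbb{E}_x\!\left[\int_0^{\tau} e^{-\delta t} f(X_t,u_t,\nu_t)\,\mathrm{d}t
      \;+\; e^{-\delta \tau}\, V(X_\tau)\right],
      \qquad \text{(dynamic programming principle)}

\delta V(x) \;=\; \inf_{(u,\nu)\in U}
      \Big\{\, b(x,u,\nu)\, V'(x) \;+\; \tfrac{1}{2}\,\sigma(x)^2\, V''(x) \;+\; f(x,u,\nu) \Big\}.
      \qquad \text{(stationary HJB equation)}
```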
Definition 3.1
Let be a locally Lipschitz continuous function.
We first provide the result regarding the existence of the viscosity solution for the HJB equation (3.2) on .
Proposition 3.1
The value function is a viscosity solution of (3.2) on .
Proof. See Appendix B.
To prove the uniqueness, we introduce the following alternative definition of viscosity solution (for second-order differential equations), see, for example, Yong and Zhou (1999). For any function and , the so-called second-order superdifferential of at is defined as
and the second-order subdifferential of at is defined as
In addition, we let
and rewrite (3.2) as
(3.3) |
Definition 3.2
If is both a viscosity supersolution and a viscosity subsolution of (3.3) at , then it is a viscosity solution at .
Proposition 3.2
Proof. See Appendix C.
Proposition 3.3
Let be any increasing and Lipschitz continuous viscosity supersolution of (3.2), then for all .
Proof. The proof follows the standard arguments of using Itô’s formula with a density argument dealing with the nonsmoothness of any viscosity supersolution of (3.2), see, for example, Nguyen-Ngoc and Yor, (2004); Azcue and Muler, (2005). We omit the details here.
By Proposition 3.3, for a sufficiently large closed interval , we can characterize the value function as the viscosity solution of (3.2) with the smallest value on the boundary in the class of increasing and Lipschitz continuous viscosity solutions of (3.2). Let denote the set of all increasing and Lipschitz continuous functions that are viscosity solutions of (3.2) on , then, we characterize the value function as
(3.4) |
Proposition 3.4
Proof. Consider another increasing and Lipschitz continuous viscosity solution of (3.2) on satisfying (3.4). On one hand, is a viscosity supersolution, and by Proposition 3.3, we have on . On the other hand, is also an increasing and Lipschitz continuous viscosity subsolution of (3.2) with the fact that for (since ), then by the comparison principle given in Proposition 3.2, we have on , whenever is considered as a viscosity supersolution and as a viscosity subsolution. Therefore, we obtain on and complete the proof.
4 Numerical algorithm and examples
4.1 Policy improvement algorithm
Obtaining a closed-form solution to (3.2) is seldom feasible; hence, in this paper, we apply a policy improvement algorithm, namely the Bellman–Howard policy improvement/iteration algorithm (see Algorithm 1 below), to solve the cyber risk management and mitigation problem numerically. Iterative algorithms for solving optimal control problems trace their origins to Bellman’s pioneering work (Bellman, 1955, 1966), which introduced value iteration methods for finite space-time problems and established their convergence properties. Howard (1960) later developed the policy improvement algorithm in the context of discrete space-time Markov decision processes (MDPs).
The proof of convergence is of paramount importance when using policy improvement algorithms. Among the earliest convergence analyses for policy iteration in MDPs is the work of Puterman and Brumelle (1979), which employed an abstract function-space framework applicable to both discrete and continuous settings. A key insight from their study is that policy iteration can be interpreted as a form of Newton’s method, inheriting similar convergence properties, i.e., whenever initialized near the true solution, the algorithm achieves quadratic convergence. Later, Puterman (1981) provided similar results on the convergence of the policy iteration algorithm for controlled diffusion processes. Further extensions were made by Santos and Rust (2004), who examined discrete-time problems with continuous state and control spaces. Their work generalizes the results of Puterman and Brumelle (1979), demonstrating global convergence while retaining a local quadratic convergence rate under standard conditions and a superlinear convergence rate under broader assumptions. More recently, in a setting of continuous space-time controlled diffusion processes, Kerimkulov et al. (2020) established a global rate of convergence and the stability of the Bellman–Howard policy iteration algorithm with the help of techniques from Backward Stochastic Differential Equations (BSDEs). Therefore, we follow the main steps in Kerimkulov et al. (2020) when proving the convergence of our policy iteration algorithm; the main theorem is given in Theorem 4.1 below. Note that the study in Kerimkulov et al. (2020) focuses on finite-horizon stochastic control problems; a brief discussion of the infinite-horizon counterpart can be found in Remark 4.3 of Kerimkulov et al. (2020), which suggests that the convergence result still holds for a sufficiently large discount factor , although no formal proof is provided.
Without loss of generality, we consider a sufficiently large closed interval . Further, we let the running cost function be in the form of (2.8),
where is the baseline marginal costs associated with the management of the system; is the (extra) marginal running costs incurred by cyber-infected nodes in the system; and denote the marginal costs associated with proactive risk management for cyber-infected nodes and susceptible (functional and threat-free) nodes, respectively; denotes the marginal costs generated by reactive risk mitigation (intervention) to enhance the recovery rate in the system. The Policy Improvement Algorithm (PIA) for solving (3.2) on is given in Algorithm 1.
(4.1) |
We remark that, to solve the Bellman-type ODE (4.1) during each iteration step of Algorithm 1, we need to impose boundary conditions, which are not available in our problem. Hence, at each iteration step, we impose an approximate Dirichlet condition at the left boundary () and a Neumann condition at the right boundary (), both obtained from conventional Monte Carlo simulation of the performance function (2.5) under the current control strategy (extrapolated for all ). The effectiveness of Algorithm 1 has been validated on an example of an infinite-horizon stochastic control problem whose analytical solution is available in Asmussen and Taksar (1997).
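The sketch below shows one possible finite-difference realization of Algorithm 1: policy evaluation solves the linear Bellman-type ODE for the current feedback policy with a Dirichlet value on the left and a Neumann slope on the right, and policy improvement minimizes the drift-plus-cost term pointwise over a finite control grid. All model callables, grids, and default numbers are illustrative assumptions rather than the paper's exact specification.

```python
# Schematic Policy Improvement Algorithm (Bellman-Howard iteration) for the stationary
# problem  delta*w = inf_{(u,nu)} { b(x,u,nu) w'(x) + 0.5*sig(x)^2 w''(x) + f(x,u,nu) }.
# Central finite differences on [xl, xr]; Dirichlet value at xl and Neumann slope at xr
# (which the paper estimates by Monte Carlo). All model callables are placeholders.
import numpy as np

def policy_evaluation(policy, x, b, sig, f, delta, left_value, right_slope):
    """Solve the linear ODE delta*w = b*w' + 0.5*sig^2*w'' + f for a fixed feedback policy."""
    n, h = len(x), x[1] - x[0]
    A, rhs = np.zeros((n, n)), np.zeros(n)
    A[0, 0], rhs[0] = 1.0, left_value                               # Dirichlet at the left
    A[-1, -1], A[-1, -2], rhs[-1] = 1.0 / h, -1.0 / h, right_slope  # Neumann at the right
    for i in range(1, n - 1):
        u, nu = policy[i]
        bi, si2 = b(x[i], u, nu), sig(x[i]) ** 2
        A[i, i - 1] = si2 / (2 * h ** 2) - bi / (2 * h)
        A[i, i] = -si2 / h ** 2 - delta
        A[i, i + 1] = si2 / (2 * h ** 2) + bi / (2 * h)
        rhs[i] = -f(x[i], u, nu)
    return np.linalg.solve(A, rhs)

def policy_improvement(w, x, b, f, controls):
    """Update the policy by minimizing b*w' + f pointwise over a finite control grid."""
    dw = np.gradient(w, x)
    return [min(controls, key=lambda c, i=i: b(x[i], c[0], c[1]) * dw[i] + f(x[i], c[0], c[1]))
            for i in range(len(x))]

def policy_iteration(x, b, sig, f, delta, left_value, right_slope, controls,
                     max_iter=50, tol=1e-6):
    policy, w_old = [controls[0]] * len(x), np.zeros(len(x))        # crude initial guess
    for _ in range(max_iter):
        w = policy_evaluation(policy, x, b, sig, f, delta, left_value, right_slope)
        if np.max(np.abs(w - w_old)) < tol:
            return w, policy
        policy, w_old = policy_improvement(w, x, b, f, controls), w
    return w_old, policy
```

In practice one would refine this with an upwind discretization of the first-derivative term and a finer control grid, but the alternation of policy evaluation and pointwise minimization mirrors the structure of Algorithm 1.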
For completeness, we present the convergence theorem of the Algorithm 1 in Theorem 4.1, together with the policy improvement theorem (see Theorem 4.2) and algorithm stability under perturbations to the solution of the Bellman-type ODE (4.1) (see Theorem 4.3).
Theorem 4.1
Assume Assumptions 2.1, 2.2, D.1, and D.2 hold. Let be the (viscosity) solution to the HJB equation (3.2) on , and let be the sequence of smooth approximations generated by Algorithm 1. Then there exists and the initial guess , such that for all there is a constant ( is a given discounting rate) satisfying
(4.2) |
The proof of Theorem 4.1 follows a similar argument to Theorem 4.1 in Kerimkulov et al., (2020). To be specific, at the th iteration, the solution of the Bellman-type ODE (4.1) can be characterized as the solution to a corresponding BSDE . By the uniqueness property of the infinite-horizon BSDE, the equivalence between and is thus established. In addition, the solution to the HJB equation (3.2) can also be represented by a BSDE, which follows directly from Lemma D.5, where the key ingredient is the comparison principle for BSDEs. Finally, applying the contraction property from Lemma D.4, the desired convergence result follows. For a more detailed explanation of the underlying idea of Algorithm 1, we refer the reader to Kerimkulov et al., (2020). Due to the complexity, we postpone the detailed proof together with some preliminary assumptions and lemmas to Appendix D at the end of the paper.
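In generic notation, the BSDE representation invoked here can be sketched as follows: if (u_n(·), ν_n(·)) denotes the feedback policy produced at iteration n (hypothetical symbols) and X the corresponding controlled diffusion started at x, then the policy-evaluation step corresponds, schematically, to an infinite-horizon BSDE of the form

```latex
% Schematic infinite-horizon BSDE for one policy-evaluation step (generic notation).
\mathrm{d}Y_t \;=\; \big(\,\delta\, Y_t \;-\; f\!\left(X_t,\, u_n(X_t),\, \nu_n(X_t)\right)\big)\,\mathrm{d}t
              \;+\; Z_t\,\mathrm{d}W_t, \qquad t \ge 0,

% with (Y, Z) required to lie in a suitable bounded / square-integrable class, so that
Y_t \;=\; \mathbb{E}\!\left[\left.\int_t^{\infty} e^{-\delta(s-t)}
           f\!\left(X_s,\, u_n(X_s),\, \nu_n(X_s)\right)\mathrm{d}s \,\right|\, \mathcal{F}_t\right].
```

In particular, Y_0 recovers the value of the n-th iterate at the starting point x, and the comparison and contraction arguments of Appendix D are then phrased at the level of these BSDEs.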
Moreover, the following theorem establishes the monotone improvement property of the policy improvement algorithm.
Theorem 4.2
Proof. The proof follows a similar argument as the proof of Theorem 5.1 of Kerimkulov et al., (2020) together with the comparison principle of infinite-horizon BSDE given in Theorem D.3.
To finish this subsection, we provide a stability property of the policy improvement algorithm under perturbations to the solution of the Bellman-type ODE (4.1) (note that the perturbations come both from the fact that Eq. (4.1) is only solved approximately and from our approximated boundary conditions). Hence, updating (with the first-order condition) the controls and at each iteration step in Algorithm 1 is essentially performed only with the approximate solution, which can accumulate errors over the iterations.
Let be a set of parameters that determines the accuracy of the solution to the ODE (4.1). Let be the policy at iteration obtained from an approximate solution to the ODE (4.1), let be the solution of
(4.3) |
with true boundary conditions. And let be the approximate solution to
where and denote the approximate values of and the left derivative of at respectively. Then, the policy function (see the definition of this function in Assumption D.1) for the next iteration step is given by
Theorem 4.3
Let Assumptions 2.1, 2.2, D.1, and D.2 hold. Let be the approximation sequence given by Algorithm 1. Let be the approximation sequence given by (4.3). Let and be the optimal control process for (3.2) and the associated controlled diffusion started from , respectively. Assume that is uniformly bounded. Then there exist and , such that for all , there exists a constant with
where is the -norm under probability measure , see Appendix D.
4.2 Benchmark example
The following Example 4.1 presents a benchmark example in which Algorithm 1 is applied to solve the cyber risk management and mitigation problem numerically. It is important to note that the parameter should be small (which is not unreasonable) to ensure the convergence of the PIA. Additionally, the initial guess for is set to zero. These choices are made to help obtain a smooth solution of the severely stiff ODE. Moreover, a combination of a Dirichlet condition at the left boundary and a Neumann condition at the right boundary of the computing interval is used to solve the stiff ODE (4.1); these boundary values are estimated by the conventional Monte Carlo method.
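As a rough illustration of how the boundary data for (4.1) can be produced, the snippet below estimates the Dirichlet value at the left endpoint and the Neumann slope at the right endpoint by plain Monte Carlo, with the current feedback policy frozen at the boundary point (a crude stand-in for the extrapolation described above). The model callables are again hypothetical placeholders, and the slope is approximated by a one-sided difference of two Monte Carlo estimates.

```python
# Monte Carlo sketch of approximate boundary data for the Bellman-type ODE (4.1):
# a Dirichlet value at the left endpoint and a Neumann slope at the right endpoint,
# computed under the current feedback policy frozen at the boundary point.
import numpy as np

def mc_value(x0, policy, drift, diffusion, running_cost,
             delta=0.1, T=60.0, dt=1e-2, n_paths=2000, seed=2):
    """Truncated-horizon Monte Carlo estimate of the discounted cost started from x0."""
    rng = np.random.default_rng(seed)
    u, nu = policy(x0)                      # freeze the policy at the boundary point
    x = np.full(n_paths, float(x0))
    acc, disc = np.zeros(n_paths), 1.0
    for _ in range(int(T / dt)):
        acc += disc * running_cost(x, u, nu) * dt
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = np.clip(x + drift(x, u, nu) * dt + diffusion(x) * dw, 0.0, 1.0)
        disc *= np.exp(-delta * dt)
    return acc.mean()

def boundary_data(xl, xr, policy, drift, diffusion, running_cost, h=1e-2, **kw):
    """Dirichlet value at xl and a one-sided difference estimate of the slope at xr."""
    left_value = mc_value(xl, policy, drift, diffusion, running_cost, **kw)
    right_slope = (mc_value(xr, policy, drift, diffusion, running_cost, **kw)
                   - mc_value(xr - h, policy, drift, diffusion, running_cost, **kw)) / h
    return left_value, right_slope
```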
Example 4.1
We first provide a benchmark example for the cyber risk control problem (2.6). Let . Furthermore, we set the discount factor . Additionally, let the external cyber attacks rate be , internal contagion rate , (unassisted) recovery rate , and the diffusion coefficient . Finally, we set , , and .
[Figure 1: (a) the value function, (b) the optimal controls, and (c) the convergence of errors for the benchmark example.]
The results of the optimal strategy and value function are given in Figures 1(b) and 1(a), respectively; we also show the convergence of the errors (in terms of the normalized -norm of the difference between two successive value functions), which demonstrates the computational efficiency of the algorithm, see Figure 1(c). The algorithm converges with an error less than within eight steps starting from the initial guess and . To better understand the behavior of the optimal control , we observe that remains at zero (strong proactive control) when the ratio of cyber-infected nodes is considerably small, while decays quickly from its maximum. This pattern indicates that when the current system has a small ratio of cyber-infected nodes, applying the risk mitigation control to enhance the recovery rate is less effective than implementing the proactive management control to prevent internal contagion and external cyber attacks. Consequently, the optimal strategy prioritizes risk management over risk mitigation at the outset.
Furthermore, Figure 1(b) shows that both the optimal controls (cyber risk management) and (cyber risk mitigation) decrease as the ratio of cyber-infected nodes increases. In particular, we have the following observations:
•
When the cyber-infected ratio is low (i.e., ), both controls are set high, which reflects a strong incentive for the decision maker to invest in cyber risk prevention and mitigation in an early stage.
•
As the cyber-infected ratio rises (), both controls decline. This suggests that once the system is already heavily compromised, additional investments in prevention or mitigation have diminishing impact on reducing risk, especially for the proactive risk management control . Hence, the decision maker may want to prioritize resource allocation toward risk mitigation strategies.
The above observations align with the intuition of cyber risk control under limited resources: it is most effective to intervene early, when cyber-infected ratios are still small; while intervention becomes less valuable (and less cost-effective) when the system is already in a state of widespread infection.
4.3 Suboptimal control analysis
Example 4.2
In this example, we provide a numerical analysis when we fix either the risk management control at zero () or the risk mitigation control at zero (). The resulting optimal strategies (of the single remaining control) and value functions are given in Figure 4.2.
[Figure 2: (a) the optimal risk management control when mitigation is removed, (b) the optimal risk mitigation control when management is removed, and (c)-(d) the corresponding value functions.]
When removing the reactive cyber risk mitigation control (i.e., ), the optimal proactive risk management control becomes noticeably stronger (see Figure 2(a)) compared to the benchmark example, which compensates for the absence of reactive mitigation controls. The value function remains nearly unchanged at small , but rises substantially (reaching about versus the benchmark level of ) when the cyber-infected ratio is close to one, see Figure 2(c). This suggests that as cyber-infection level increases, the marginal effectiveness of risk management control drops off, and proactive risk management alone cannot effectively substitute for reactive mitigation control in a highly infected system.
On the other hand, when we remove the risk management control (i.e., ), the optimal reactive control decreases relative to the benchmark (Figure 2(b)), reflecting the limited effectiveness of mitigation when proactive control from a risk management perspective is unavailable. In this case, the value function rises overall (Figure 2(d)), shifting from a range between 20 and 30 under the benchmark example to a range between 75 and 80. This shows that relying solely on reactive mitigation strategies results in higher expected costs, confirming that mitigation control cannot fully substitute for proactive management.
4.4 Sensitivity analysis
In this subsection, we numerically analyze the distinct roles of the proactive control (risk management) and the reactive control (risk mitigation) in shaping the optimal value function. In particular, we deviate from the optimal strategy in the benchmark example by a series of small changes (uniform in ) in and , respectively. The results are plotted in Figures 3(a)–3(d).
[Figure 3: value functions under small positive and negative perturbations of the optimal controls, panels (a)-(d).]
We conclude this subsection with the following observations:
•
Proactive risk management control (): According to Figure 3(a), increasing produces a substantial and nearly uniform increase in the value function (higher expected discounted costs) across all initial states of the cyber-infected level, confirming the consistent effectiveness of proactive measures in managing cyber risks in the system. Conversely, decreasing has limited effects on the magnitude of the value function (see Figure 3(b)). Note that, since for small values of , a further decrease of cannot be applied in this case; hence, we do not observe any changes in the corresponding value function there. However, when the cyber-infected ratio is high, the inability to sustain strong proactive control leads to a noticeable increase in the value function (though not comparable to the corresponding scenario when increasing ).
•
Reactive risk mitigation control (): Both Figures 3(c) and 3(d) show that perturbations on the value of exert their strongest influence when the system is in a status with a high cyber-infected ratio. Such an observation indicates that the reactive mitigation control is most valuable once the system contains a large number of infected nodes. In addition, unlike the risk management control, we can observe an obvious non-uniform change in the value function with respect to . In particular, when we add positive perturbations to (i.e., adding redundant mitigation controls), the increase in the value function (i.e., expected discounted costs) is moderate and consistent. However, adding negative perturbations to , which refers to insufficient mitigation controls, can distort the shape of the value function and sharply increase expected discounted costs. One may also conjecture (see Figure 3(d)) that there exists a critical “threshold level” for cyber risk mitigation control, such that insufficient reactive mitigation control below the “level” can cause tremendous losses. We leave this interesting observation for future research.
•
Overall conclusion: The sensitivity analysis together with the suboptimal control analysis in Example 4.2 highlight an “asymmetry” between the two types of control strategies. Proactive risk management provides consistent and broad benefits, and can partially substitute for the absence of mitigation controls. By contrast, reactive mitigation is valuable only when the system is heavily compromised with a high ratio of cyber-infected nodes, and cannot substitute for missing proactive risk management control. In practice, this implies that effective cyber risk control strategies require front-loaded investment in proactive defense, with reactive mitigation serving as a complementary safeguard against severe system breakdown rather than a stand-alone strategy.
Remark 4.1
One may notice that each “value function” obtained (by solving the corresponding ODE under a perturbed control) in the above sensitivity analysis (Figures 3(a)–3(d)) is not necessarily the objective function given in (2.5) under the perturbed control.
In this remark, we assume that the solution to the Bellman ODE is twice continuously differentiable. While the numerical solution is not necessarily twice continuously differentiable, it approximates such a solution under standard regularity assumptions and sufficient discretization accuracy. If we fix the control at each perturbation above, such as and , then the control space is reduced to a singleton. As a consequence, one can apply Itô’s formula to the discounted process between 0 and with a sequence of stopping times ,
where denotes , and note that the stochastic integral is a local martingale. Sending to infinity, then
holds by the dominated convergence theorem. By using the fact that is a solution to the ODE stated above and then sending to infinity, we observe that
coincides with the definition of the objective function .
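Spelled out in generic notation (with w the solution of the fixed-control ODE, L^{u,ν} the generator of (2.3), τ_n the localizing stopping times, and J the cost functional (2.5); these symbols are placeholders for those used in the paper), the Itô step sketched in this remark reads schematically as

```latex
% Schematic Feynman-Kac verification under a fixed (perturbed) feedback control; generic notation.
e^{-\delta (t\wedge\tau_n)}\, w\!\left(X_{t\wedge\tau_n}\right)
  \;=\; w(x)
  \;+\; \int_0^{t\wedge\tau_n} e^{-\delta s}
        \big(\mathcal{L}^{u,\nu} w \;-\; \delta\, w\big)\!\left(X_s\right)\mathrm{d}s
  \;+\; \int_0^{t\wedge\tau_n} e^{-\delta s}\, \sigma\!\left(X_s\right)\, w'\!\left(X_s\right)\mathrm{d}W_s.

% Taking expectations (the stochastic integral is a local martingale), using the fixed-control
% ODE  delta*w = L^{u,nu} w + f(., u, nu),  and letting n and then t tend to infinity
% (dominated convergence) yields
w(x) \;=\; \mathbb{E}_x\!\left[\int_0^{\infty} e^{-\delta s}\,
            f\!\left(X_s, u_s, \nu_s\right)\mathrm{d}s\right] \;=\; J(x; u, \nu).
```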
4.5 Comparative statics
In this final subsection, we perform a comparative statics analysis across all model parameters, including , , , and the marginal cost parameters () in the cost function given in (2.8). Note that it is not necessary to investigate the parameter , which contributes additively to the mitigation control in numerical results.
Comparative statics for and . We first compare different values of the external cyber attack rate and the internal contagion rate , keeping the other parameters unchanged from Example 4.1. The resulting optimal strategies are given in Figures 4(a), 4(c) (for ) and 4(b), 4(d) (for ), respectively.
[Figure 4: optimal controls under different values of the external cyber attack rate (panels (a) and (c)) and the internal contagion rate (panels (b) and (d)).]
Figure 4.4 shows that an increase in both (the rate of external cyber attacks) and (the rate of internal cyber risk propagation) requires stronger risk management and mitigation controls, with the optimal proactive management moving closer to and the optimal reactive mitigation rising. This reflects that it is optimal to simultaneously reinforce both preventive measures and reactive responses when cyber risks escalate. The impact of is more pronounced than that of , leading to sharper adjustments in both controls. From a cyber risk perspective, this distinction is natural: a surge in external attacks compels the decision maker to rapidly escalate defensive risk management measures and amplify mitigation controls. In contrast, internal risk propagation dynamics among susceptible and infected nodes induce a more gradual, though less pronounced, reinforcement of the two controls. Such a result thus demonstrates the necessity of preferentially allocating resources to the management of external cyber attacks.
Comparative statics for σ. We compare different values of the volatility parameter in the stochastic system, assuming the other parameters remain the same as in Example 4.1. Figure 4.5 illustrates the impact of increasing the volatility parameter on the optimal control . As rises from to , , and , both the proactive management and the reactive mitigation weaken slightly. Intuitively, higher volatility increases the uncertainty in the evolution of the cyber-infected ratio in the system, making aggressive interventions less effective. The small magnitude of the change (compared with the cases of changing and ) indicates that the control policy is primarily driven by the system’s drift dynamics, with stochastic fluctuations playing a secondary role.
[Figure 5: optimal controls under different values of the volatility parameter.]
Comparative statics for and . We further compare different values of the marginal costs associated with cyber risk management for cyber-infected nodes and susceptible nodes, respectively. We plot the resulting optimal strategies in Figure 4.6.
[Figure 6: optimal risk management and mitigation controls under different values of the marginal management cost parameters, panels (a)-(d).]
It is reasonable to observe that when the marginal costs of the proactive management control associated with either cyber-infected nodes or susceptible nodes increase, the optimal proactive management control weakens. On the other hand, the optimal reactive mitigation control becomes stronger when the marginal costs of management control incurred by cyber-infected nodes () increase, since the costs associated with the mitigation control become relatively lower. However, when the marginal costs () increase, we observe a decreasing trend (rather uniform across all levels of the infection ratio) in the optimal mitigation control ; in fact, such a counter-intuitive result is not unreasonable: with a higher value of , risk management control becomes more expensive when the system has a large number of susceptible nodes; the decision maker may then choose to reduce (moderately) the mitigation control, which essentially reduces the transition intensity from the cyber-infected state to the susceptible state. In addition, the optimal reactive mitigation control is more sensitive to changes in when the cyber-infected ratio in the system is low, but it is more sensitive to changes in when the system is heavily compromised. Furthermore, one can observe that the optimal risk management control , as a function of the cyber-infected ratio , exhibits a concave shape (see, e.g., Figure 6(a)), and the concavity diminishes when decreases. Such a phenomenon may be rooted in the interaction (or trade-off) between the marginal costs and . When is (sufficiently) larger than , that is, when the costs associated with cyber-infected nodes under management control are negligible compared to the baseline management costs of all cyber-infected nodes, the optimal management control strategy is “neutral” to the cyber-infected ratio, hence resulting in a linear form. However, when is sufficiently larger than , the optimal management control, as a function of the cyber-infected ratio, becomes a concave function. This means that, to minimize the total expected discounted costs, the decision maker might want to reduce the strength of the proactive management control aggressively when the cyber-infected ratio is at a moderate level, with this tendency declining as the cyber-infected ratio approaches one (hence, a concave form). One can observe a similar result when changing the value of and keeping fixed, see Figures 7(a) and 7(c).
Comparative statics for and . We change the value of marginal costs () incurred by cyber-infected nodes in the system, or the marginal costs () associated with reactive risk mitigation control, keeping other parameters unchanged in Example 4.1.
[Figure 7: optimal risk management and mitigation controls under different values of the marginal cost incurred by cyber-infected nodes and the marginal mitigation cost, panels (a)-(d).]
When the marginal costs increase, Figures 7(a) and 7(c) show a result for the optimal management control similar to (but in the opposite direction of) what we observed in Figures 6(a) and 6(c), where a stronger prevention measure is achieved by expanding the zero-valued plateau (representing maximal prevention) over a larger range of cyber-infected ratios. This expansion occurs in an almost uniform, additive manner: as rises, the switching threshold at which departs from zero shifts rightward by roughly the same increment. In addition, for a small value of (e.g., ), the optimal proactive management control , as a function of the cyber-infected ratio , exhibits a concave property, and the concavity diminishes when increases and eventually exceeds the value of . However, the optimal reactive mitigation adjusts in a significantly different way compared with what we obtained in Figure 6(c), especially when the cyber-infected ratio () is close to one. To be specific, when the system is heavily compromised, for a large value of (compared to ), it is optimal to increase the reactive mitigation control to reduce the number of infected nodes so that the expected discounted costs can be reduced significantly.
On the other hand, Figures 7(b) and 7(d) show that increasing the mitigation cost from to , , and leads to a systematic weakening of the mitigation strategy , while the management strategy is strengthened by expanding its zero-valued plateau (maximal prevention) so that strong prevention is applied earlier and more widely. The movement pattern of is similar to that observed in Figures 7(a) and 7(c), in the sense that the adjustment is roughly uniform across the state space, but the direction is opposite: a higher value of lifts proportionally, whereas a higher mitigation cost pushes downward. This observation highlights that when the operating costs associated with cyber-infected nodes become more costly, it is optimal to reinforce both risk management and mitigation strategies. However, when mitigation becomes expensive, the optimal strategy reallocates effort towards risk management control with a reduced reliance on expensive reactive mitigation controls.
5 Conclusion and future outlook
In this paper, we model cyber risk management and mitigation as a stochastic optimal control problem within a stochastic Susceptible-Infected-Susceptible (SIS) epidemic framework. We introduce two dynamic controls to capture real-time risk management and mitigation strategies: 1) a proactive control () that reduces external cyber attacks and internal contagion effects; 2) a reactive control () that speeds up the recovery of infected nodes. We formulate this as a dual stochastic control problem governed by a general diffusion process. Theoretically, we establish the well-posedness of the controlled SIS model under these dual controls and prove that the associated value function is the unique increasing and Lipschitz-continuous viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation derived from the control problem.
For numerical implementation, we propose a Policy Improvement Algorithm (PIA) and demonstrate its convergence using Backward Stochastic Differential Equations (BSDEs). Our convergence result extends existing finite-horizon analyses to the infinite-horizon case. Then, we present a benchmark example that illustrates the optimal risk management and mitigation strategy, along with the corresponding value function, for a given set of model parameters. We further examine suboptimal performance and sensitivity by: 1) removing one control entirely in the benchmark scenario; 2) introducing small perturbations to each optimal control; 3) conducting a comprehensive comparative statics analysis across all model parameters. The sensitivity and suboptimal control analyses reveal a fundamental asymmetry between the two control strategies: proactive risk management control demonstrates consistent system-wide benefits and exhibits partial substitutability for reactive mitigation when the latter is absent, whereas reactive risk mitigation control shows value only during high-infection scenarios and cannot compensate for missing proactive measures. Furthermore, some interesting observations are drawn from the comparative statics. First, the asymmetric impact of external attack frequency versus internal contagion rates on the optimal control strategies underscores a possible critical policy implication: effective cyber defense requires prioritizing resource allocation toward external threat management. Second, the optimal control strategy, particularly proactive risk management, exhibits significantly different behavioral patterns depending on the current infection ratio; this variation stems from the interaction between the operational costs of maintaining all infected nodes in the system and the marginal costs of implementing risk management controls on these compromised nodes.
Finally, we remark that our work lays a foundation for several natural extensions in the field. One direction is incorporating jump processes to model sudden, large-scale cyber attacks or system failures, which could better capture extreme events beyond the diffusion approximation. Another extension involves regime-switching dynamics, where the network environment or external threat landscape changes over time, influencing both infection propagation and optimal control strategies. Further research may also explore multi-layered or networked SIS models with heterogeneous nodes, time delays, and partial observation, enabling more realistic and granular cyber risk management strategies. These extensions could provide a richer theoretical framework and more practical insights for robust cyber defense policies under uncertainty and complex operational conditions. We leave them for future research.
Statements and Declarations
No competing interests.
Acknowledgments
Zhuo Jin and Hailiang Yang were supported by the National Natural Science Foundation of China Grant [Grant 12471452]. Ran Xu was supported by the National Natural Science Foundation of China [Grants 12201506 and 12371468].
Appendix A Proof of Proposition 2.2
(i) To prove the assertion, we first show that the cost functional defined in (2.5) is non-decreasing in if is non-decreasing in for each pair of admissible controls. This can be proved using a density argument and Itô’s formula. To be specific, let us first establish Yamada and Watanabe’s comparison principle for Itô diffusions (see, e.g., Karatzas and Shreve (1991)). The diffusion term is locally Lipschitz, and we further observe that by simply taking . There exists a strictly decreasing sequence with and for every . To see this, one explicitly has , which gives satisfying the required properties for each . Moreover, we take a nonnegative continuous function dominated by , such that , so that we obtain a normalized function, continuous in , taking the form of
For example, one can take . Notice also that a property
holds for some real number . Next, assign the function
which is even and twice continuously differentiable:
by the fundamental theorem of calculus, and
where we observe that is at least a non-decreasing sequence for each , hence allowing us to apply the monotone convergence theorem. Consider two random processes (namely the solutions to the corresponding controlled SDE (2.3)) and for different initial data , each of which has continuous trajectories for every individual . Now, apply Itô’s formula to the random process :
(A.1) |
where the stochastic integral vanishes due to the fact that for each ; the second inequality holds by the established properties of the function and ; and the last line in (A.1) follows further from the Lipschitz continuity of . Sending on both sides of (A.1) and applying the Lebesgue dominated convergence theorem yields
As a consequence, the desired comparison principle follows by Grönwall’s inequality, i.e.
Secondly, assume that . Then it follows immediately that there exists a set of positive measure such that
by the non-decreasing property of the cost functional. This contradicts the result we just obtained.
Thirdly, we claim the following comparison principle for the two objective functions: if under each control . To show this, observe that
by Assumption 2.2. This, together with the joint measurability of the process on the product -field for every and , ensures the applicability of Fubini’s theorem. Therefore, we can interchange the order of the two integrals
Hence, the non-decreasing property of follows readily. To be specific, we consider , and for any , let be an -optimal control for such that . Then, since is non-decreasing in for any given admissible control, we have
and by sending , we complete the proof.
(ii) We first show that a result similar to (2.9) holds for the cost functional under any given admissible control. For fixed , take any and a time ; we have
(A.2)
where denotes the controlled process under fixed control with initial value . Next, we estimate the following two terms
and
Therefore, we can apply the moment boundedness property of the controlled SDE within : for any , there exists a constant such that
and
which can be seen, for example, in Corollary 2.5.12 of Krylov, (1980). Thus, combining these estimates with equation (A.2) and applying Minkowski’s inequality, we obtain
Then, without loss of generality, we consider such that . Take an -optimal control such that . Thus, we obtain the following inequality
The desired result is achieved by sending to zero. Furthermore, is also continuous uniformly in ; the Lipschitz and uniform continuity again follow from .
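For completeness, the moment estimates cited from (Krylov, 1980, Corollary 2.5.12) are of the following standard form under Lipschitz/linear-growth coefficients; the notation $X^{x,u}$ for the controlled process started at $x$ under control $u$ is a generic label for this sketch:
\[
\mathbb{E}\Big[\sup_{0\le s\le T}\big|X^{x,u}_s\big|^{p}\Big]\le C\,(1+|x|^{p}),
\qquad
\mathbb{E}\big[\,\big|X^{x,u}_s-x\big|^{p}\,\big]\le C\,(1+|x|^{p})\,s^{p/2},\quad 0\le s\le T,
\]
for any $p\ge 1$ and $T>0$, with a constant $C=C(p,T)$ independent of the admissible control; the second bound is the type of estimate used to control the terms appearing in (A.2) as the time increment tends to zero.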
Appendix B Proof of Proposition 3.1
(i) Viscosity subsolution. We consider any test function with such that for any given , we just need to show that
where
is the infinitesimal generator of the controlled diffusion given by (2.3) under a fixed pair of controls . Fix any , consider the control and the corresponding controlled diffusion . For a sufficiently small such that , consider the stopping time . Then, by applying Itô’s formula and the dynamic programming principle, we have
(B.1)
Now, we assume that for any given . By the continuity of the function , there exists a such that for all . Then, letting , we have
which contradicts (B.1) if we choose as above. Hence, by the arbitrariness of the control, we must have
(ii) Viscosity supersolution. For any , let be any test function such that attains a maximum value of zero at , we show that
We prove this by contradiction. Assume that ; then, by the continuity of the function , uniformly in the control, there exist and such that
Then, for any fixed control , let be the controlled process with . We define the first exit time of the interval as . Then, by applying Itô’s formula, we have
Then, by the arbitrariness of control , the dynamic programming principle and the fact that for all , we obtain that , which is a contradiction since .
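For orientation, the inequalities verified in (i) and (ii) are the standard viscosity subsolution and supersolution inequalities for the HJB equation of a one-dimensional controlled diffusion. In generic notation (with drift $b$, diffusion $\sigma$, running cost $\ell$, discount rate $\delta$ and action set $U$ standing in schematically for the coefficients of (2.3) and the running cost of the objective; sign conventions as in Pham (2009), which may differ slightly from the paper’s), they read:
\[
\mathcal{L}^{u}\varphi(x)\;=\;b(x,u)\,\varphi'(x)\;+\;\tfrac12\,\sigma^{2}(x,u)\,\varphi''(x),
\]
\[
\delta\varphi(x_0)\;-\;\inf_{u\in U}\Big\{\mathcal{L}^{u}\varphi(x_0)+\ell(x_0,u)\Big\}\;\le\;0
\quad\text{at a local maximum } x_0 \text{ of } V-\varphi\ \ (\text{subsolution}),
\]
\[
\delta\varphi(x_0)\;-\;\inf_{u\in U}\Big\{\mathcal{L}^{u}\varphi(x_0)+\ell(x_0,u)\Big\}\;\ge\;0
\quad\text{at a local minimum } x_0 \text{ of } V-\varphi\ \ (\text{supersolution}).
\]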
Appendix C Proof of Proposition 3.4
We prove the comparison principle by the usual contradiction argument (see, e.g., Touzi, (2012); Albrecher et al., (2022)). Assume that
and . Since both and are Lipschitz continuous on , there exists a constant such that
To proceed, let us consider the following set
For all , define two auxiliary functions,
Let , and . Then, we directly have , hence
Note that, by using the monotonicity and Lipschitz continuity of and and the fact that on , we can show that there exists a sufficiently large such that for all , is an interior point of (see, for example, (Wang et al., 2024, Appendix B)).
Then by the following inequality,
we arrive at
Therefore, by considering a sequence as such that , we have
which yields .
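The penalization behind the auxiliary functions above is the standard doubling-of-variables device of Crandall et al. (1992); the symbols $\Phi_n$, $(x_n,y_n)$, $\hat x$ below are generic labels for this sketch and may differ from the paper’s notation. One maximizes, over the compact set under consideration,
\[
\Phi_n(x,y)\;=\;V_1(x)\;-\;V_2(y)\;-\;\frac{n}{2}\,|x-y|^{2},
\]
and if $(x_n,y_n)$ denotes a maximizer, then (possibly along a subsequence)
\[
n\,|x_n-y_n|^{2}\;\longrightarrow\;0,\qquad x_n,\;y_n\;\longrightarrow\;\hat x,\qquad
\Phi_n(x_n,y_n)\;\longrightarrow\;\sup\,(V_1-V_2)\;>\;0
\]
under the contradiction hypothesis, which is the type of limit statement obtained in the display preceding this remark.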
Then, we construct two twice continuously differentiable functions
which are essentially test functions for the subsolution and supersolution of (3.2) at the points and , respectively. To simplify the proof here, we first assume that and are twice continuously differentiable at and , respectively; one can resort to a more general theorem to obtain a similar result when they are not twice continuously differentiable at these points (see, e.g., Crandall et al., (1992)). Since reaches a local maximum at , which is an interior point of , we have
Therefore, we arrive at
In addition, the Hessian matrix is negative semi-definite,
(C.1)
Let , and
we can rewrite (C.1) as
Then, according to (Crandall et al., 1992, Theorem 3.2), for any , there exists such that
(C.2)
and and , hence, we have
(C.3)
In addition, by noting that , we obtain from (C.2) that
Therefore, we can derive from (C.3),
(C.4)
Hence, by (C.4), we obtain
which is a contradiction. This completes the proof.
Appendix D Infinite-Horizon BSDE and Convergence of PIA
D.1 Assumptions and notations
Fix a filtered probability space and let be a one-dimensional Wiener process on this space. To keep the paper self-contained, we introduce the following notation:
(i) We first introduce the spaces that are involved in the later analysis: For any , let be the set of all -valued -adapted processes such that
Let be the set of all -valued -adapted continuous processes such that
Finally, let be the set of -valued -measurable random variables such that , where is any -stopping time taking values in .
Then, we define the space
(D.1)
with the norm
for any . Obviously, with the norm is a Banach space.
(ii) For a constant and a predictable process , we introduce the -norm:
(iii) For an adapted process such that , we define
(iv) For any continuous local martingale , let denote the quadratic variation process and let
denote the Doléans–Dade exponential of given the initial data (recalled explicitly below).
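For completeness, recall the standard explicit form: for a continuous local martingale $M$ with $M_0=m$ and quadratic variation $\langle M\rangle$, the Doléans–Dade exponential is
\[
\mathcal{E}(M)_t\;=\;\exp\!\Big(M_t-m-\tfrac12\,\langle M\rangle_t\Big),\qquad t\ge 0,
\]
which is the unique solution of $d\mathcal{E}(M)_t=\mathcal{E}(M)_t\,dM_t$ with $\mathcal{E}(M)_0=1$, and is a true martingale on $[0,T]$ whenever Novikov’s condition $\mathbb{E}\big[\exp\big(\tfrac12\langle M\rangle_T\big)\big]<\infty$ holds.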
Assumption D.1
For each fixed , we assume the function
is measurable.
For notational simplicity, we write , where is the action space of our cyber risk control problem (2.5), and , and , which are the drift term and cost function in (3.2), . One can refer to Kerimkulov et al., (2020) for a short discussion on the validity of this measurability assumption.
Assumption D.2
There are constants such that the following hold:
(i) For , ,
and for all , , we have
(ii) For all , and , we have that
(iii) For , ,
D.2 Some preliminary lemmas
The following lemma is a straightforward consequence of Girsanov’s theorem applied with the random process , which is bounded (by Assumption D.2). This result will be helpful in constructing a contraction mapping under a new measure on the probability space when proving Theorem 4.1.
Lemma D.1
Let , , and be the unique solution to the SDE (2.3), started from time with initial data , and controlled by the optimal control process (with a slight abuse of notation, we use to denote the control process as well). Then, is equivalent to the probability measure , and the process
is a -Wiener process.
Proof. See Theorem 6.8.8 of Krylov, (2002).
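As a schematic reminder of the change of measure used here (the process $\theta$ below is a generic bounded, progressively measurable integrand standing in for the drift adjustment induced by the control; it is not claimed to be the paper’s exact expression): under the new measure defined by
\[
\frac{d\mathbb{Q}}{d\mathbb{P}}\Big|_{\mathcal{F}_t}
\;=\;\mathcal{E}\Big(\textstyle\int_0^{\cdot}\theta_s\,dW_s\Big)_t
\;=\;\exp\!\Big(\int_0^{t}\theta_s\,dW_s-\tfrac12\int_0^{t}\theta_s^{2}\,ds\Big),
\]
Girsanov’s theorem asserts that
\[
\widetilde W_t\;=\;W_t-\int_0^{t}\theta_s\,ds
\]
is a $\mathbb{Q}$-Wiener process; boundedness of $\theta$ (which Assumption D.2 provides in our setting) guarantees Novikov’s condition, so the density process is a true martingale and $\mathbb{Q}$ is equivalent to $\mathbb{P}$ on each $\mathcal{F}_t$.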
Assumption D.3
For all , for is progressively measurable. Moreover, there exist constants such that , and for any , ,
(D.2)
and
(D.3)
The following lemma establishes the existence and uniqueness of the solution to an infinite-horizon BSDE associated with our infinite-horizon stochastic control problem.
Lemma D.2
Proof. See, for example, (Yong and Zhou, 1999, Theorem 7.3.6).
Moreover, the comparison principle for infinite-horizon BSDEs is also well established; see, e.g., (Hamadène et al., 1999, Theorem 2.2).
Lemma D.3
Let be the solution to the following BSDE
where and satisfy Assumption D.3 (with removed) for , respectively. If, in addition, , and for all , then for all .
Now, we are ready to present the following Lemma D.4, which plays a key role in the proof of Theorem 4.1. Lemma D.4 follows a similar idea to Lemma A.5 of Kerimkulov et al., (2020); see also Lemma 3.2 of Fuhrman and Tessitore, (2004).
Lemma D.4
Let be a measurable function that satisfies Assumption D.3. Fix . Let be the unique solution to (D.4) for any given . Moreover, assume that for the following condition is satisfied:
Then there is and such that for any we have
(D.6)
where is the unique solution to (D.4) corresponding to for , respectively. Furthermore, one can choose sufficiently large and sufficiently small such that , and the above results hold as well.
Proof. Denote and . Then, apply Itô’s formula to :
By taking the expectation of the equation above, we get
(D.7)
By the Lipschitz property of the generator and by Young’s inequality, we observe that, for any ,
(D.8)
where the last inequality holds by choosing . Then, taking sufficiently large such that is sufficiently small, we have
and subtracting on both sides of (D.7) together with (D.2), we have
(D.9)
Dividing by on both sides of (D.9), we have
where the first inequality holds due to , and the second holds by setting . If one further chooses large enough such that , then one gets , which completes the proof.
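The elementary estimate used in (D.8) is Young’s inequality in the weighted form (with $\varepsilon>0$ a free parameter, and $\bar Y$, $\bar Z$ generic labels for the differences of the two solutions, which may differ from the paper’s notation):
\[
2ab\;\le\;\frac{a^{2}}{\varepsilon}\;+\;\varepsilon\,b^{2},\qquad a,b\in\mathbb{R},\ \varepsilon>0,
\]
so that a Lipschitz cross term of the type $2L\,|\bar Y_s|\,|\bar Z_s|$ can be bounded by $\varepsilon^{-1}L^{2}|\bar Y_s|^{2}+\varepsilon\,|\bar Z_s|^{2}$; taking $\varepsilon$ small allows the $|\bar Z_s|^{2}$ contribution to be absorbed into the quadratic term produced by Itô’s formula, which is precisely the role of the parameter choices described above.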
Lemma D.5
Proof. We change the measure back to and rewrite the BSDE (D.10) as
By the definition of the infinite-horizon BSDE, is a -adapted process for . Obviously, is measurable w.r.t. the trivial -field , hence deterministic. The continuity of for follows from inequality (D.2) combined with the continuity of and in . Let us now prove that is a viscosity solution to the HJB equation (3.2). We only show the supersolution case; the subsolution property follows from the same idea. As we have shown that is deterministic, consider the discounted process with . Taking to be a test function such that attains its (local) minimum value of zero at any , we shall show that the supersolution inequality holds. We prove the assertion by contradiction. Assume that
Since is smooth enough and is continuous, there exists , such that for any , , we have and
(D.11)
Let for any , and consider the pair of stopped processes
which solves the following finite-horizon BSDE
where . On the other hand, consider another pair of stopped processes
Applying Itô’s formula to , we obtain
where . Then, since for all , we have . In addition, by (D.11), one has
Hence, by Lemma D.3 (with a further argument using the strict comparison principle; see, e.g., (Pham, 2009, Theorem 6.2.2)), we have , which is a contradiction. This establishes the supersolution inequality.
D.3 Proof of Theorem 4.1
Let be the smooth solution to the Bellman-type ODE (4.1), and recall the updated control at the th iteration
Define as the solution to the SDE (2.3) started from and controlled by the optimal control policy . Applying Itô’s formula to , we have
(D.12)
where the second equality holds since is a solution to the ODE (4.1). Let us define, for ,
Hence, we can write (D.12) as
(D.13)
Let be a discounting rate, and define
(D.14)
Then, we can rewrite the above BSDE (D.13) as
Moreover, one observes that
since the value function is finite and we have verified that almost surely for any (see Proposition 2.1). Hence, we have
(D.15)
By Lemma D.1, we change the measure to , and (D.15) can be rewritten as
(D.16)
Similarly, consider the following BSDE with and replaced by the value function of our stochastic control problem (2.6) in (D.16):
(D.17)
Note that for , we have , and by Assumption D.2,
where we have applied the fact that there is a constant such that (given the derivative exists), by the property of the value function given in Proposition 2.2. Hence, one may choose large enough such that Assumption D.3 holds for . Then, by applying Lemmas D.4 and D.5, there is ,
(D.18)
for any , where and denote the expectation and -norm under the measure . In addition, noting that the solutions and are the value function and approximation sequence at the th iteration in Algorithm 1 (with a recursive argument; see, e.g., Kerimkulov et al., (2020)), we have
This completes the proof.
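Purely as an illustrative numerical sketch of the policy improvement loop analysed above (this is not the paper’s Algorithm 1; the drift b, diffusion sigma, running cost ell, discount rate delta, grids, and the crude reflecting boundary treatment below are placeholder assumptions to be replaced by the model specification in (2.3) and the objective (2.6)):

```python
import numpy as np

# --- Placeholder model ingredients (illustrative only) ---
delta = 0.1                                  # discount rate
xs = np.linspace(0.01, 0.99, 99)             # state grid (e.g., infected fraction)
h = xs[1] - xs[0]
actions = np.linspace(0.0, 1.0, 21)          # discretized control grid

def b(x, u):      # drift (placeholder)
    return 0.8 * (1.0 - u) * x * (1.0 - x) - 0.5 * x

def sigma(x, u):  # diffusion coefficient (placeholder)
    return 0.2 * x * (1.0 - x)

def ell(x, u):    # running cost (placeholder)
    return x + 0.5 * u ** 2

def generator_matrix(policy):
    """Upwind finite-difference discretization of L^a - delta*I on the grid."""
    n = len(xs)
    A = np.zeros((n, n))
    for i, x in enumerate(xs):
        u = policy[i]
        drift, diff2 = b(x, u), 0.5 * sigma(x, u) ** 2
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)   # crude reflecting boundaries
        # second-order term
        A[i, lo] += diff2 / h ** 2
        A[i, hi] += diff2 / h ** 2
        A[i, i] -= 2.0 * diff2 / h ** 2
        # upwind first-order term
        if drift >= 0:
            A[i, hi] += drift / h
            A[i, i] -= drift / h
        else:
            A[i, i] += drift / h
            A[i, lo] -= drift / h
        A[i, i] -= delta
    return A

def policy_iteration(max_iter=50, tol=1e-8):
    policy = np.zeros_like(xs)
    v = np.zeros_like(xs)
    for _ in range(max_iter):
        # policy evaluation: solve (delta*I - L^a) v = ell^a
        A = generator_matrix(policy)
        rhs = np.array([ell(x, u) for x, u in zip(xs, policy)])
        v_new = np.linalg.solve(-A, rhs)
        # policy improvement: pointwise minimization of the Hamiltonian
        new_policy = policy.copy()
        for i, x in enumerate(xs):
            lo, hi = max(i - 1, 0), min(i + 1, len(xs) - 1)
            dv = (v_new[hi] - v_new[lo]) / ((hi - lo) * h)
            d2v = (v_new[hi] - 2 * v_new[i] + v_new[lo]) / h ** 2
            vals = [b(x, u) * dv + 0.5 * sigma(x, u) ** 2 * d2v + ell(x, u) for u in actions]
            new_policy[i] = actions[int(np.argmin(vals))]
        if np.max(np.abs(v_new - v)) < tol and np.all(new_policy == policy):
            v, policy = v_new, new_policy
            break
        v, policy = v_new, new_policy
    return v, policy

v, policy = policy_iteration()
```

Each pass solves the linear policy-evaluation system and then improves the policy by pointwise minimization of the Hamiltonian, mirroring the value/control alternation whose convergence is established in Theorem 4.1.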
References
- Albrecher et al., (2022) Albrecher, H., Azcue, P., and Muler, N. (2022). Optimal ratcheting of dividends in a Brownian risk model. SIAM Journal on Financial Mathematics, 13(3):657–701.
- Amin et al., (2009) Amin, S., Cárdenas, A. A., and Sastry, S. S. (2009). Safe and secure networked control systems under denial-of-service attacks. In International workshop on hybrid systems: computation and control, pages 31–45. Springer.
- Antonio et al., (2021) Antonio, Y., Indratno, S. W., and Saputro, S. W. (2021). Pricing of cyber insurance premiums using a Markov-based dynamic model with clustering structure. PLoS One, 16(10):e0258867.
- Asmussen and Taksar, (1997) Asmussen, S. and Taksar, M. (1997). Controlled diffusion models for optimal dividend pay-out. Insurance: Mathematics and Economics, 20(1):1–15.
- Azcue and Muler, (2005) Azcue, P. and Muler, N. (2005). Optimal reinsurance and dividend distribution policies in the Cramér-Lundberg model. Mathematical Finance: An International Journal of Mathematics, Statistics and Financial Economics, 15(2):261–308.
- Bakdash et al., (2018) Bakdash, J. Z., Hutchinson, S., Zaroukian, E. G., Marusich, L. R., Thirumuruganathan, S., Sample, C., Hoffman, B., and Das, G. (2018). Malware in the future? Forecasting of analyst detection of cyber events. Journal of Cybersecurity, 4(1):tyy007.
- Barnett et al., (2023) Barnett, M., Buchak, G., and Yannelis, C. (2023). Epidemic responses under uncertainty. Proceedings of the National Academy of Sciences, 120(2):e2208111120.
- Bellman, (1955) Bellman, R. (1955). Functional equations in the theory of dynamic programming. v. positivity and quasi-linearity. Proceedings of the National Academy of Sciences, 41(10):743–746.
- Bellman, (1966) Bellman, R. (1966). Dynamic programming. Science, 153(3731):34–37.
- Böhme and Kataria, (2006) Böhme, R. and Kataria, G. (2006). Models and measures for correlation in cyber-insurance. In WEIS, volume 2(1), page 3.
- Böhme et al., (2010) Böhme, R., Schwartz, G., et al. (2010). Modeling cyber-insurance: towards a unifying framework. In WEIS.
- Boukanjime et al., (2021) Boukanjime, B., El-Fatini, M., Laaribi, A., Taki, R., and Wang, K. (2021). A Markovian regime-switching stochastic hybrid time-delayed epidemic model with vaccination. Automatica, 133:109881.
- Cárdenas et al., (2011) Cárdenas, A. A., Amin, S., Lin, Z.-S., Huang, Y.-L., Huang, C.-Y., and Sastry, S. (2011). Attacks against process control systems: risk assessment, detection, and response. In Proceedings of the 6th ACM symposium on information, computer and communications security, pages 355–366.
- Cebula and Young, (2010) Cebula, J. L. and Young, L. R. (2010). A taxonomy of operational cyber security risks. Technical report, Carnegie Mellon University. CMU/SEI-2010-TN-028.
- Chen et al., (2025) Chen, J., Feng, K., Freddi, L., Goreac, D., and Li, J. (2025). Optimality of vaccination for prevalence-constrained SIRS epidemics. Applied Mathematics & Optimization, 91(1):1–26.
- Crandall et al., (1992) Crandall, M. G., Ishii, H., and Lions, P.-L. (1992). User’s guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1):1–67.
- Crandall and Lions, (1983) Crandall, M. G. and Lions, P.-L. (1983). Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1–42.
- Cremer et al., (2022) Cremer, F., Sheehan, B., Fortmann, M., Kia, A. N., Mullins, M., Murphy, F., and Materne, S. (2022). Cyber risk and cybersecurity: a systematic review of data availability. The Geneva Papers on Risk and Insurance - Issues and Practice, 47(3):698.
- Dou et al., (2020) Dou, W., Tang, W., Wu, X., Qi, L., Xu, X., Zhang, X., and Hu, C. (2020). An insurance theory based optimal cyber-insurance contract against moral hazard. Information Sciences, 527:576–589.
- Eling and Jung, (2018) Eling, M. and Jung, K. (2018). Copula approaches for modeling cross-sectional dependence of data breach losses. Insurance: Mathematics and Economics, 82:167–180.
- Eling and Loperfido, (2017) Eling, M. and Loperfido, N. (2017). Data breaches: Goodness of fit, pricing, and risk measurement. Insurance: Mathematics and Economics, 75:126–136.
- Fahrenwaldt et al., (2018) Fahrenwaldt, M. A., Weber, S., and Weske, K. (2018). Pricing of cyber insurance contracts in a network model. ASTIN Bulletin: The Journal of the IAA, 48(3):1175–1218.
- Fawzi et al., (2014) Fawzi, H., Tabuada, P., and Diggavi, S. (2014). Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Transactions on Automatic Control, 59(6):1454–1467.
- Federico et al., (2024) Federico, S., Ferrari, G., and Torrente, M.-L. (2024). Optimal vaccination in a SIRS epidemic model. Economic Theory, 77(1):49–74.
- Fuhrman and Tessitore, (2004) Fuhrman, M. and Tessitore, G. (2004). Infinite horizon backward stochastic differential equations and elliptic equations in Hilbert spaces. The Annals of Probability, 32(1B):607 – 660.
- Garcia-Teodoro et al., (2009) Garcia-Teodoro, P., Diaz-Verdejo, J., Maciá-Fernández, G., and Vázquez, E. (2009). Anomaly-based network intrusion detection: Techniques, systems and challenges. Computers & Security, 28(1-2):18–28.
- Gil et al., (2014) Gil, S., Kott, A., and Barabási, A.-L. (2014). A genetic epidemiology approach to cyber-security. Scientific Reports, 4(1):5659.
- Gordon et al., (2003) Gordon, L. A., Loeb, M. P., and Sohail, T. (2003). A framework for using insurance for cyber-risk management. Communications of the ACM, 46(3):81–85.
- Gray et al., (2011) Gray, A., Greenhalgh, D., Hu, L., Mao, X., and Pan, J. (2011). A stochastic differential equation SIS epidemic model. SIAM Journal on Applied Mathematics, 71(3):876–902.
- Hamadène et al., (1999) Hamadène, S., Lepeltier, J.-P., and Wu, Z. (1999). Infinite horizon reflected backward stochastic differential equations and applications in mixed control and game problems. Probability and Mathematical Statistics, 19(2):211–234.
- He et al., (2024) He, R., Jin, Z., and Li, J. S.-H. (2024). Modeling and management of cyber risk: a cross-disciplinary review. Annals of Actuarial Science, 18(2):270–309.
- Herath and Herath, (2011) Herath, H. and Herath, T. (2011). Copula-based actuarial model for pricing cyber-insurance policies. Insurance markets and companies: analyses and actuarial computations, 2(1):7–20.
- Hillairet and Lopez, (2021) Hillairet, C. and Lopez, O. (2021). Propagation of cyber incidents in an insurance portfolio: counting processes combined with compartmental epidemiological models. Scandinavian Actuarial Journal, 2021(8):671–694.
- Hillairet et al., (2022) Hillairet, C., Lopez, O., d’Oultremont, L., and Spoorenberg, B. (2022). Cyber-contagion model with network structure applied to insurance. Insurance: Mathematics and Economics, 107:88–101.
- Howard, (1960) Howard, R. A. (1960). Dynamic programming and Markov processes. John Wiley.
- Jang-Jaccard and Nepal, (2014) Jang-Jaccard, J. and Nepal, S. (2014). A survey of emerging threats in cybersecurity. Journal of Computer and System Sciences, 80(5):973–993.
- Karatzas and Shreve, (1991) Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus, volume 113 of Graduate Texts in Mathematics. Springer, New York, NY, 2 edition.
- Kerimkulov et al., (2020) Kerimkulov, B., Siska, D., and Szpruch, L. (2020). Exponential convergence and stability of Howard’s policy improvement algorithm for controlled diffusions. SIAM Journal on Control and Optimization, 58(3):1314–1340.
- Krylov, (1980) Krylov, N. V. (1980). Controlled diffusion processes. Springer.
- Krylov, (2002) Krylov, N. V. (2002). Introduction to the Theory of Random Processes. American Mathematical Society, Providence, Rhode Island.
- Liu et al., (2025) Liu, W., Li, L., Sun, J., Deng, F., Wang, G., and Chen, J. (2025). Data-driven control against false data injection attacks. Automatica, 179:112399.
- Liu et al., (2016) Liu, Y., Dong, M., Ota, K., and Liu, A. (2016). Activetrust: Secure and trustable routing in wireless sensor networks. IEEE Transactions on Information Forensics and Security, 11(9):2013–2027.
- Malavasi et al., (2022) Malavasi, M., Peters, G. W., Shevchenko, P. V., Trück, S., Jang, J., and Sofronov, G. (2022). Cyber risk frequency, severity and insurance viability. Insurance: Mathematics and Economics, 106:90–114.
- Mishra and Pandey, (2014) Mishra, B. K. and Pandey, S. K. (2014). Dynamic model of worm propagation in computer network. Applied Mathematical Modelling, 38(7-8):2173–2179.
- Moore et al., (2006) Moore, D., Shannon, C., Brown, D. J., Voelker, G. M., and Savage, S. (2006). Inferring internet denial-of-service activity. ACM Transactions on Computer Systems (TOCS), 24(2):115–139.
- Mukhopadhyay et al., (2013) Mukhopadhyay, A., Chatterjee, S., Saha, D., Mahanti, A., and Sadhukhan, S. K. (2013). Cyber-risk decision models: To insure it or not? Decision Support Systems, 56:11–26.
- Nguyen-Ngoc and Yor, (2004) Nguyen-Ngoc, L. and Yor, M. (2004). Some martingales associated to reflected Lévy processes. In Séminaire de Probabilités XXXVIII, pages 42–69. Springer.
- Öğüt et al., (2011) Öğüt, H., Raghunathan, S., and Menon, N. (2011). Cyber security risk management: Public policy implications of correlated risk, imperfect ability to prove loss, and observability of self-protection. Risk Analysis: An International Journal, 31(3):497–512.
- Pasqualetti et al., (2013) Pasqualetti, F., Dörfler, F., and Bullo, F. (2013). Attack detection and identification in cyber-physical systems. IEEE Transactions on Automatic Control, 58(11):2715–2729.
- Paté-Cornell et al., (2018) Paté-Cornell, M.-E., Kuypers, M., Smith, M., and Keller, P. (2018). Cyber risk management for critical infrastructure: a risk analysis model and three case studies. Risk Analysis, 38(2):226–241.
- Pham, (2009) Pham, H. (2009). Continuous-time stochastic control and optimization with financial applications, volume 61. Springer Science & Business Media.
- Puterman, (1981) Puterman, M. (1981). On the convergence of policy iteration for controlled diffusions. Journal of Optimization Theory and Applications, 33(1):137–144.
- Puterman and Brumelle, (1979) Puterman, M. L. and Brumelle, S. L. (1979). On the convergence of policy iteration in stationary dynamic programming. Mathematics of Operations Research, 4(1):60–69.
- Santos and Rust, (2004) Santos, M. S. and Rust, J. (2004). Convergence properties of policy iteration. SIAM Journal on Control and Optimization, 42(6):2094–2115.
- Sonveaux and Winkin, (2023) Sonveaux, C. and Winkin, J. J. (2023). State feedback control law design for an age-dependent SIR model. Automatica, 158:111297.
- Stoneburner et al., (2002) Stoneburner, G., Goguen, A., Feringa, A., et al. (2002). Risk management guide for information technology systems. NIST Special Publication, 800(30):800–30.
- Touzi, (2012) Touzi, N. (2012). Optimal stochastic control, stochastic target problems, and backward SDE, volume 29. Springer Science & Business Media.
- Tran and Yin, (2021) Tran, K. and Yin, G. (2021). Optimal control and numerical methods for hybrid stochastic SIS models. Nonlinear Analysis: Hybrid Systems, 41:101051.
- Wakaiki et al., (2019) Wakaiki, M., Cetinkaya, A., and Ishii, H. (2019). Stabilization of networked control systems under DoS attacks and output quantization. IEEE Transactions on Automatic Control, 65(8):3560–3575.
- Wang et al., (2024) Wang, W., Xu, R., and Yan, K. (2024). Optimal ratcheting of dividends with capital injection. Mathematics of Operations Research.
- Xu and Hua, (2019) Xu, M. and Hua, L. (2019). Cybersecurity insurance: Modeling and pricing. North American Actuarial Journal, 23(2):220–249.
- Yong and Zhou, (1999) Yong, J. and Zhou, X. Y. (1999). Stochastic controls: Hamiltonian systems and HJB equations, volume 43. Springer Science & Business Media.
- Zhan et al., (2015) Zhan, Z., Xu, M., and Xu, S. (2015). Predicting cyber attack rates with extreme values. IEEE Transactions on Information Forensics and Security, 10(8):1666–1677.