
Integrating Risk-Averse and Constrained Reinforcement Learning for Robust Decision-Making in High-Stakes Scenarios. (2024). Habib, Muhammad Salman ; Omair, Muhammad ; Ramzan, Muhammad Babar ; Ahmad, Moiz.
In: Mathematics.
RePEc:gam:jmathe:v:12:y:2024:i:13:p:1954-:d:1420914.


Cited: 0

Citations received by this document

Cites: 61

References cited by this document

Cocites: 23

Documents which have cited the same bibliography

Coauthors: 0

Authors who have written about the same topic

Citations

Citations received by this document

    This document has not been cited yet.

References

References cited by this document

  1. Ahmadi, M.; Rosolia, U.; Ingham, M.; Murray, R.; Ames, A. Constrained Risk-Averse Markov Decision Processes. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020.
  2. Altman, E. Constrained Markov Decision Processes; Routledge: London, UK, 1999.
  3. Bakker, H.; Dunke, F.; Nickel, S. A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 2020, 96, 102080. [CrossRef]

  4. Basso, R.; Kulcsár, B.; Sanchez-Diaz, I.; Qu, X. Dynamic stochastic electric vehicle routing with safe reinforcement learning. Transp. Res. Part E Logist. Transp. Rev. 2022, 157, 102496. [CrossRef]

  5. Boda, K.; Filar, J.A. Time Consistent Dynamic Risk Measures. Math. Methods Oper. Res. 2006, 63, 169–186. [CrossRef]

  6. Boland, N.; Christiansen, J.; Dandurand, B.; Eberhard, A.; Oliveira, F. A parallelizable augmented Lagrangian method applied to large-scale non-convex-constrained optimization problems. Math. Program. 2019, 175, 503–536. [CrossRef]
  7. Borkar, V.S. A convex analytic approach to Markov decision processes. Probab. Theory Relat. Fields 1988, 78, 583–602. [CrossRef]
  8. Borkar, V.S. An actor-critic algorithm for constrained Markov decision processes. Syst. Control Lett. 2005, 54, 207–213. [CrossRef]
  9. Chen, X.; Karimi, B.; Zhao, W.; Li, P. On the Convergence of Decentralized Adaptive Gradient Methods. arXiv 2021, arXiv:2109.03194. Available online: https://ui.adsabs.harvard.edu/abs/2021arXiv210903194C (accessed on 26 May 2024).
  10. Chow, Y.; Ghavamzadeh, M.; Janson, L.; Pavone, M. Risk-constrained reinforcement learning with percentile risk criteria. J. Mach. Learn. Res. 2017, 18, 6070–6120.
  11. Chow, Y.; Nachum, O.; Duenez-Guzman, E.; Ghavamzadeh, M. A Lyapunov-based approach to safe reinforcement learning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montréal, QC, Canada, 2–8 December 2018.
  12. Coache, A.; Jaimungal, S.; Cartea, Á. Conditionally Elicitable Dynamic Risk Measures for Deep Reinforcement Learning. SSRN Electron. J. 2023, 14, 1249–1289. [CrossRef]

  13. Collins, A.G.E. Reinforcement learning: Bringing together computation and cognition. Curr. Opin. Behav. Sci. 2019, 29, 63–68. [CrossRef]
  14. Dalal, G.; Dvijotham, K.; Vecerík, M.; Hester, T.; Paduraru, C.; Tassa, Y.J.A. Safe Exploration in Continuous Action Spaces. arXiv 2018, arXiv:1801.08757.
  15. Demizu, T.; Fukazawa, Y.; Morita, H. Inventory management of new products in retailers using model-based deep reinforcement learning. Expert Syst. Appl. 2023, 229, 120256. [CrossRef]
  16. Ding, S.; Wang, J.; Du, Y.; Shi, Y. Reduced Policy Optimization for Continuous Control with Hard Constraints. arXiv 2023, arXiv:2310.09574.
  17. Dinh Thai, H.; Nguyen Van, H.; Diep, N.N.; Ekram, H.; Dusit, N. Markov Decision Process and Reinforcement Learning. In Deep Reinforcement Learning for Wireless Communications and Networking: Theory, Applications and Implementation; Wiley-IEEE Press: Hoboken, NJ, USA, 2023; pp. 25–36.
  18. Dowd, K.; Cotter, J. Spectral Risk Measures and the Choice of Risk Aversion Function. arXiv 2011, arXiv:1103.5668.

  19. Dowson, O.; Kapelevich, L. SDDP.jl: A Julia Package for Stochastic Dual Dynamic Programming. INFORMS J. Comput. 2021, 33, 27–33. [CrossRef]
  20. Escudero, L.F.; Garín, M.A.; Monge, J.F.; Unzueta, A. On preparedness resource allocation planning for natural disaster relief under endogenous uncertainty with time-consistent risk-averse management. Comput. Oper. Res. 2018, 98, 84–102. [CrossRef]
  21. Gillies, A.W. Some Aspects of Analysis and Probability. Phys. Bull. 1959, 10, 65. [CrossRef]
  22. Gu, S.; Yang, L.; Du, Y.; Chen, G.; Walter, F.; Wang, J.; Yang, Y.; Knoll, A. A Review of Safe Reinforcement Learning: Methods, Theory and Applications. arXiv 2022, arXiv:2205.10330.
  23. Habib, M.S. Robust Optimization for Post-Disaster Debris Management in Humanitarian Supply Chain: A Sustainable Recovery Approach. Ph.D. Thesis, Hanyang University, Seoul, Republic of Korea, 2018.
  24. Habib, M.S.; Maqsood, M.H.; Ahmed, N.; Tayyab, M.; Omair, M. A multi-objective robust possibilistic programming approach for sustainable disaster waste management under disruptions and uncertainties. Int. J. Disaster Risk Reduct. 2022, 75, 102967. [CrossRef]
  25. Habib, M.S.; Sarkar, B. A multi-objective approach to sustainable disaster waste management. In Proceedings of the International Conference on Industrial Engineering and Operations Management, Paris, France, 26–27 July 2018; pp. 1072–1083.
  26. Hildebrandt, F.D.; Thomas, B.W.; Ulmer, M.W. Opportunities for reinforcement learning in stochastic dynamic vehicle routing. Comput. Oper. Res. 2023, 150, 106071. [CrossRef]
  27. Hussain, A.; Masood, T.; Munir, H.; Habib, M.S.; Farooq, M.U. Developing resilience in disaster relief operations management through lean transformation. Prod. Plan. Control 2023, 34, 1475–1496. [CrossRef]
  28. Kamyabniya, A.; Sauré, A.; Salman, F.S.; Bénichou, N.; Patrick, J. Optimization models for disaster response operations: A literature review. OR Spectr. 2024, 46, 1–47. [CrossRef]
  29. Lee, J.; Lee, K.; Moon, I. A reinforcement learning approach for multi-fleet aircraft recovery under airline disruption. Appl. Soft Comput. 2022, 129, 109556. [CrossRef]
  30. Li, J.; Fridovich-Keil, D.; Sojoudi, S.; Tomlin, C.J. Augmented Lagrangian Method for Instantaneously Constrained Reinforcement Learning Problems. In Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 14–17 December 2021; pp. 2982–2989.
  31. Liu, K.; Yang, L.; Zhao, Y.; Zhang, Z.-H. Multi-period stochastic programming for relief delivery considering evolving transportation network and temporary facility relocation/closure. Transp. Res. Part E Logist. Transp. Rev. 2023, 180, 103357. [CrossRef]
  32. Liu, P.; Zhang, Y.; Bao, F.; Yao, X.; Zhang, C. Multi-type data fusion framework based on deep reinforcement learning for algorithmic trading. Appl. Intell. 2023, 53, 1683–1706. [CrossRef]
  33. Lockwood, P.L.; Klein-Flügge, M.C. Computational modelling of social cognition and behaviour—A reinforcement learning primer. Soc. Cogn. Affect. Neurosci. 2020, 16, 761–771. [CrossRef] [PubMed]
  34. Morillo, J.L.; Zéphyr, L.; Pérez, J.F.; Lindsay Anderson, C.; Cadena, Á. Risk-averse stochastic dual dynamic programming approach for the operation of a hydro-dominated power system in the presence of wind uncertainty. Int. J. Electr. Power Energy Syst. 2020, 115, 105469. [CrossRef]
  35. Nguyen, N.D.; Nguyen, T.T.; Vamplew, P.; Dazeley, R.; Nahavandi, S. A Prioritized objective actor-critic method for deep reinforcement learning. Neural Comput. Appl. 2021, 33, 10335–10349. [CrossRef]
  36. Paternain, S.; Chamon, L.F.O.; Calvo-Fullana, M.; Ribeiro, A. Constrained reinforcement learning has zero duality gap. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Curran Associates Inc.: New York, NY, USA, 2019; p. 679.
  37. Peng, X.B.; Abbeel, P.; Levine, S.; Panne, M.V.D. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph. 2018, 37, 143. [CrossRef]
  38. Rao, J.J.; Ravulapati, K.K.; Das, T.K. A simulation-based approach to study stochastic inventory-planning games. Int. J. Syst. Sci. 2003, 34, 717–730. [CrossRef]
  39. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1997. (In English)
  40. Rodríguez-Espíndola, O. Two-stage stochastic formulation for relief operations with multiple agencies in simultaneous disasters. OR Spectr. 2023, 45, 477–523. [CrossRef]
  41. Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; Moritz, P. Trust Region Policy Optimization. In Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, Lille, France, 6–11 July 2015. Available online: https://proceedings.mlr.press/v37/schulman15.html (accessed on 26 May 2024).
  42. Shapiro, A.; Tekaya, W.; da Costa, J.P.; Soares, M.P. Risk neutral and risk averse Stochastic Dual Dynamic Programming method. Eur. J. Oper. Res. 2013, 224, 375–391. [CrossRef]

  43. Shavandi, A.; Khedmati, M. A multi-agent deep reinforcement learning framework for algorithmic trading in financial markets. Expert Syst. Appl. 2022, 208, 118124. [CrossRef]
  44. Shi, T.; Xu, C.; Dong, W.; Zhou, H.; Bokhari, A.; Klemeš, J.J.; Han, N. Research on energy management of hydrogen electric coupling system based on deep reinforcement learning. Energy 2023, 282, 128174. [CrossRef]

  45. Tamar, A.; Castro, D.D.; Mannor, S. Policy gradients with variance related risk criteria. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, UK, 26 June–1 July 2012.
  46. Tamar, A.; Mannor, S. Variance Adjusted Actor Critic Algorithms. arXiv 2013, arXiv:1310.3697.
  47. Van Wassenhove, L.N. Humanitarian aid logistics: Supply chain management in high gear. J. Oper. Res. Soc. 2006, 57, 475–489. [CrossRef]

  48. Venkatasatish, R.; Dhanamjayulu, C. Reinforcement learning based energy management systems and hydrogen refuelling stations for fuel cell electric vehicles: An overview. Int. J. Hydrogen Energy 2022, 47, 27646–27670. [CrossRef]
  49. Wang, D.; Yang, K.; Yang, L. Risk-averse two-stage distributionally robust optimisation for logistics planning in disaster relief management. Int. J. Prod. Res. 2023, 61, 668–691. [CrossRef]

  50. Wang, K.; Long, C.; Ong, D.J.; Zhang, J.; Yuan, X.M. Single-Site Perishable Inventory Management Under Uncertainties: A Deep Reinforcement Learning Approach. IEEE Trans. Knowl. Data Eng. 2023, 35, 10807–10813. [CrossRef]
  51. Wang, Y.; Zhan, S.S.; Jiao, R.; Wang, Z.; Jin, W.; Yang, Z.; Wang, Z.; Huang, C.; Zhu, Q. Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research, Honolulu, HI, USA, 23–29 July 2023; pp. 36593–36604. Available online: https://proceedings.mlr.press/v202/wang23as.html (accessed on 26 May 2024).
  52. Wang, Z.; Shi, X.; Ma, C.; Wu, L.; Wu, J. CCPO: Conservatively Constrained Policy Optimization Using State Augmentation; IOS Press: Amsterdam, The Netherlands, 2023.
  53. Waubert de Puiseau, C.; Meyes, R.; Meisen, T. On reliability of reinforcement learning based production scheduling systems: A comparative survey. J. Intell. Manuf. 2022, 33, 911–927. [CrossRef]

  54. Yang, Q.; Simão, T.D.; Tindemans, S.H.; Spaan, M.T.J. Safety-constrained reinforcement learning with a distributional safety critic. Mach. Learn. 2023, 112, 859–887. [CrossRef]
  55. Yin, X.; Büyüktahtakın, İ.E. Risk-averse multi-stage stochastic programming to optimizing vaccine allocation and treatment logistics for effective epidemic response. IISE Trans. Healthc. Syst. Eng. 2022, 12, 52–74. [CrossRef]
  56. Yu, G.; Liu, A.; Sun, H. Risk-averse flexible policy on ambulance allocation in humanitarian operations under uncertainty. Int. J. Prod. Res. 2021, 59, 2588–2610. [CrossRef]

  57. Yu, L.; Yang, H.; Miao, L.; Zhang, C. Rollout algorithms for resource allocation in humanitarian logistics. IISE Trans. 2019, 51, 887–909. [CrossRef]

  58. Yu, L.; Zhang, C.; Jiang, J.; Yang, H.; Shang, H. Reinforcement learning approach for resource allocation in humanitarian logistics. Expert Syst. Appl. 2021, 173, 114663. [CrossRef]
  59. Zabihi, Z.; Moghadam, A.M.E.; Rezvani, M.H. Reinforcement Learning Methods for Computing Offloading: A Systematic Review. ACM Comput. Surv. 2023, 56, 17. [CrossRef]
  60. Zhang, L.; Shen, L.; Yang, L.; Chen, S.; Wang, X.; Yuan, B.; Tao, D. Penalized Proximal Policy Optimization for Safe Reinforcement Learning. arXiv 2022, arXiv:2205.11814, 3719–3725.
  61. Zhuang, X.; Zhang, Y.; Han, L.; Jiang, J.; Hu, L.; Wu, S. Two-stage stochastic programming with robust constraints for the logistics network post-disruption response strategy optimization. Front. Eng. Manag. 2023, 10, 67–81. [CrossRef]

Cocites

Documents in RePEc which have cited the same bibliography

  1. Improving forest decision-making through complex system representation: A viability theory perspective. (2025). Domec, Jean-Christophe ; Labarre, Clémence ; Loustau, Denis ; Bingham, Logan ; Bödeker, Kai ; Andrés-Domenech, Pablo.
    In: Forest Policy and Economics.
    RePEc:eee:forpol:v:170:y:2025:i:c:s1389934124002387.


  2. A study of asset and liability management applied to Brazilian pension funds. (2025). , João ; Falcão, Rodrigo ; Bernardino, Wilton ; Alves, José Jonas ; de Souza, Filipe Costa ; Ospina, Raydonal.
    In: European Journal of Operational Research.
    RePEc:eee:ejores:v:322:y:2025:i:3:p:1059-1076.


  3. Integrating Risk-Averse and Constrained Reinforcement Learning for Robust Decision-Making in High-Stakes Scenarios. (2024). Habib, Muhammad Salman ; Omair, Muhammad ; Ramzan, Muhammad Babar ; Ahmad, Moiz.
    In: Mathematics.
    RePEc:gam:jmathe:v:12:y:2024:i:13:p:1954-:d:1420914.


  4. Renewable energy system sizing with power generation and storage functions accounting for its optimized activity on multiple electricity markets. (2024). Szirbik, Nick B ; Luning, Egbert A ; Jayawardhana, Bayu ; Saltik, Bahadir M ; Vakis, Antonis I ; Bechlenberg, Alva.
    In: Applied Energy.
    RePEc:eee:appene:v:360:y:2024:i:c:s0306261924001259.


  5. An alternative approach to address uncertainty in hub location. (2023). Nickel, Stefan ; Alumur, Sibel A ; Janschekowitz, Marc ; Taherkhani, Gita.
    In: OR Spectrum: Quantitative Approaches in Management.
    RePEc:spr:orspec:v:45:y:2023:i:2:d:10.1007_s00291-023-00706-2.


  6. Heterogeneous Multi-resource Planning and Allocation Under Stochastic Demand. (2023). Keskinocak, Pinar ; Singh, Mohit ; Baxter, Arden.
    In: INFORMS Journal on Computing.
    RePEc:inm:orijoc:v:35:y:2023:i:5:p:929-951.


  7. Re-evaluating portfolio diversification and design using cryptocurrencies: Are decentralized cryptocurrencies enough?. (2023). Bakry, Walid ; Vo, Xuan Vinh ; Al-Mohamad, Somar ; Prasad, Mason ; Khaki, Audil.
    In: Research in International Business and Finance.
    RePEc:eee:riibaf:v:64:y:2023:i:c:s0275531922002094.


  8. Design of a sales plan in a hybrid contractual and non-contractual context in a setting of limited capacity: A robust approach. (2023). Carravilla, Maria Antonia ; Oliveira, Jose Fernando ; Pereira, Daniel Filipe.
    In: International Journal of Production Economics.
    RePEc:eee:proeco:v:260:y:2023:i:c:s0925527323000993.


  9. Knowledge percolation threshold and optimization strategies of the combinatorial network for complex innovation in the digital economy. (2023). , Xi ; Li, Shengliang ; Yu, Lean ; Zhao, Jianyu.
    In: Omega.
    RePEc:eee:jomega:v:120:y:2023:i:c:s0305048323000774.


  10. Optimal energy and reserve scheduling in a renewable-dominant power system. (2023). Jiao, Zihao ; Zhang, Yuli ; Ran, Lun.
    In: Omega.
    RePEc:eee:jomega:v:118:y:2023:i:c:s0305048323000142.


  11. Real-time resource allocation in the emergency department: A case study. (2023). Aringhieri, Roberto ; Duma, Davide.
    In: Omega.
    RePEc:eee:jomega:v:117:y:2023:i:c:s0305048323000105.


  12. An integrative framework for coordination of damage assessment, road restoration, and relief distribution in disasters. (2023). Rezapour, Shabnam ; Farzaneh, Mohammad Amin ; Amini, Hadi M ; Baghaian, Atefe.
    In: Omega.
    RePEc:eee:jomega:v:115:y:2023:i:c:s0305048322001554.


  13. Product–service system negotiation in aircraft lease contracts with option of disagreement. (2023). Godoy, Sergio ; Pascual, Rodrigo ; Jackson, Canek ; Cawley, Alejandro Mac.
    In: Journal of Air Transport Management.
    RePEc:eee:jaitra:v:107:y:2023:i:c:s0969699722001624.


  14. Dynamic scheduling of patients in emergency departments. (2023). Kuo, Yong-Hong ; de Queiroz, Thiago Alves ; Iori, Manuel ; Kramer, Arthur.
    In: European Journal of Operational Research.
    RePEc:eee:ejores:v:310:y:2023:i:1:p:100-116.


  15. Robust planning of sorting operations in express delivery systems. (2023). Khir, Reem ; Toriello, Alejandro ; Erera, Alan.
    In: European Journal of Operational Research.
    RePEc:eee:ejores:v:306:y:2023:i:2:p:615-631.


  16. Operational research and artificial intelligence methods in banking. (2023). Zhang, Wenke ; Doumpos, Michalis ; Platanakis, Emmanouil ; Gounopoulos, Dimitrios ; Zopounidis, Constantin.
    In: European Journal of Operational Research.
    RePEc:eee:ejores:v:306:y:2023:i:1:p:1-16.


  17. Plant-wide byproduct gas distribution under uncertainty in iron and steel industry via quantile forecasting and robust optimization. (2023). Jiang, Sheng-Long ; Wang, Meihong ; David, I.
    In: Applied Energy.
    RePEc:eee:appene:v:350:y:2023:i:c:s0306261923009674.


  18. Planning pharmaceutical manufacturing networks in the light of uncertain production approval times. (2022). Blossey, Gregor ; Hahn, Gerd J ; Koberstein, Achim.
    In: International Journal of Production Economics.
    RePEc:eee:proeco:v:244:y:2022:i:c:s0925527321003194.


  19. A bilevel framework for decision-making under uncertainty with contextual information. (2022). Pineda, S ; Muoz, M A ; Morales, J M.
    In: Omega.
    RePEc:eee:jomega:v:108:y:2022:i:c:s0305048321001845.


  20. Influenza vaccine supply chain coordination under uncertain supply and demand. (2022). Zhao, Qiuhong ; Lin, QI ; Lev, Benjamin.
    In: European Journal of Operational Research.
    RePEc:eee:ejores:v:297:y:2022:i:3:p:930-948.


  21. A multistage stochastic program for the design and management of flexible infrastructure networks. (2021). Torres-Rincón, Samuel ; Sánchez-Silva, Mauricio ; Bastidas-Arteaga, Emilio.
    In: Reliability Engineering and System Safety.
    RePEc:eee:reensy:v:210:y:2021:i:c:s0951832021001046.


  22. Managing supply risk: Robust procurement strategy for capacity improvement. (2021). Li, YI ; Shou, Biying.
    In: Omega.
    RePEc:eee:jomega:v:102:y:2021:i:c:s0305048320307064.


  23. The stochastic container relocation problem with flexible service policies. (2020). Feng, Yuanjun ; Zeng, Qingcheng ; Li, Dong ; Song, Dong-Ping.
    In: Transportation Research Part B: Methodological.
    RePEc:eee:transb:v:141:y:2020:i:c:p:116-163.


Coauthors

Authors registered in RePEc who have written about the same topic

Report date: 2025-09-23 07:01:34

CitEc is a RePEc service, providing citation data for Economics since 2001. Last updated August 3, 2024. Contact: José Manuel Barrueco.