Partially Observable Markov Decision Process (POMDP). The 'pomdp-solve' program solves problems formulated as partially observable Markov decision processes (POMDPs). All of its algorithms use the basic dynamic programming approach, solving one stage at a time and working backwards in time. It handles finite-horizon problems with or without discounting. Given a discount factor less than 1.0, it can stop solving once the answer is within a tolerable range of the infinite-horizon answer, and several different stopping conditions are available. Alternatively, you can solve a finite-horizon problem for some fixed horizon length. The code implements a number of POMDP solution algorithms.
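The backward-in-time dynamic programming described above can be sketched as exact value iteration over alpha vectors: each stage projects the previous stage's vectors through every action and observation, then cross-sums one projected vector per observation. The following is a minimal, self-contained illustration on a made-up two-state, tiger-style problem (the model numbers are assumptions for the example); it is not pomdp-solve's own code, which adds pruning and several algorithmic variants.

```python
from itertools import product

# Hypothetical tiger-style POMDP: states 0/1 = tiger-left/tiger-right,
# actions 0/1 = listen/open-left, observations 0/1 = hear-left/hear-right.
S, A, Z = 2, 2, 2
GAMMA = 1.0  # undiscounted finite horizon

# T[a][s][s']: listening leaves the state alone; opening resets it to uniform.
T = [
    [[1.0, 0.0], [0.0, 1.0]],       # listen
    [[0.5, 0.5], [0.5, 0.5]],       # open-left
]
# O[a][s'][z]: listening is 85% accurate; after opening, observations are noise.
O = [
    [[0.85, 0.15], [0.15, 0.85]],   # listen
    [[0.5, 0.5], [0.5, 0.5]],       # open-left
]
# R[a][s]: listening costs 1; opening the left door is bad if the tiger is there.
R = [
    [-1.0, -1.0],                   # listen
    [-100.0, 10.0],                 # open-left
]

def backup(gamma_set):
    """One exact DP stage: alpha vectors for horizon t from those for t-1."""
    new = []
    for a in range(A):
        # Project each old vector through action a and each observation z.
        proj = [[tuple(sum(T[a][s][sp] * O[a][sp][z] * alpha[sp]
                           for sp in range(S)) for s in range(S))
                 for alpha in gamma_set]
                for z in range(Z)]
        # Cross-sum: pick one projected vector per observation.
        for choice in product(*proj):
            new.append(tuple(R[a][s] + GAMMA * sum(v[s] for v in choice)
                             for s in range(S)))
    return new

def value(belief, gamma_set):
    """Value of a belief = max over alpha vectors of the dot product."""
    return max(sum(b * v for b, v in zip(belief, alpha)) for alpha in gamma_set)

# Horizon 1: alpha vectors are just the immediate reward vectors.
gamma_set = [tuple(R[a]) for a in range(A)]
gamma_set = backup(gamma_set)           # one backup -> horizon 2

print(value([0.5, 0.5], gamma_set))     # approx. -2.0: listen at both stages
```

Without pruning, the number of vectors grows as |A| * |Γ|^|Z| per stage (here 2 * 2² = 8), which is why practical solvers prune dominated vectors at every backup.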

References in zbMATH (referenced in 24 articles)

Showing results 1 to 20 of 24, sorted by year (citations).


  1. Carr, Steven; Jansen, Nils; Topcu, Ufuk: Task-aware verifiable RNN-based policies for partially observable Markov decision processes (2021)
  2. Češka, Milan; Hensel, Christian; Junges, Sebastian; Katoen, Joost-Pieter: Counterexample-guided inductive synthesis for probabilistic systems (2021)
  3. Dulac-Arnold, Gabriel; Levine, Nir; Mankowitz, Daniel J.; Li, Jerry; Paduraru, Cosmin; Gowal, Sven; Hester, Todd: Challenges of real-world reinforcement learning: definitions, benchmarks and analysis (2021)
  4. Nakao, Hideaki; Jiang, Ruiwei; Shen, Siqian: Distributionally robust partially observable Markov decision process with moment-based ambiguity (2021)
  5. Treszkai, Laszlo; Belle, Vaishak: A correctness result for synthesizing plans with loops in stochastic domains (2020)
  6. Kıvanç, İpek; Özgür-Ünlüakın, Demet: An effective maintenance policy for a multi-component dynamic system using factored POMDPs (2019)
  7. Norman, Gethin; Parker, David; Zou, Xueyi: Verification and control of partially observable probabilistic systems (2017)
  8. Chatterjee, Krishnendu; Chmelík, Martin; Tracol, Mathieu: What is decidable about partially observable Markov decision processes with ω-regular objectives (2016)
  9. Seo, Junyeong; Sung, Youngchul; Lee, Gilwon; Kim, Donggun: Training beam sequence design for millimeter-wave MIMO systems: a POMDP framework (2016)
  10. Ayer, Turgay: Inverse optimization for assessing emerging technologies in breast cancer screening (2015)
  11. Chatterjee, Krishnendu; Chmelík, Martin: POMDPs under probabilistic semantics (2015)
  12. Norman, Gethin; Parker, David; Zou, Xueyi: Verification and control of partially observable probabilistic real-time systems (2015)
  13. Gouberman, Alexander; Siegle, Markus: Markov reward models and Markov decision processes in discrete and continuous time: performance evaluation and optimization (2014)
  14. Li, Yanjie; Yin, Baoqun; Xi, Hongsheng: Finding optimal memoryless policies of POMDPs under the expected average reward criterion (2011)
  15. Eker, Barış; Akın, H. Levent: Using evolution strategies to solve DEC-POMDP problems (2010)
  16. Zhang, Hao: Partially observable Markov decision processes: a geometric technique and analysis (2010)
  17. Busemeyer, Jerome R.; Pleskac, Timothy J.: Theoretical tools for understanding and aiding dynamic decision making (2009)
  18. Baier, Christel; Bertrand, Nathalie; Größer, Marcus: On decision problems for probabilistic Büchi automata (2008)
  19. Fernández, Joaquín L.; Sanz, Rafael; Simmons, Reid G.; Diéguez, Amador R.: Heuristic anytime approaches to stochastic decision processes (2006)
  20. Roy, N.; Gordon, G.; Thrun, S.: Finding approximate POMDP solutions through belief compression (2005)


Further publications can be found at: http://www.pomdp.org/pomdp/papers/index.shtml