This document describes research on optimizing probabilistic argumentation strategies using Markov processes. It introduces argumentation problems with probabilistic strategies (APS) and formalizes them using probabilistic finite state machines. It then describes how an APS can be transformed into a mixed-observability Markov decision process (MOMDP), which allows a strategy to be optimized even when the initial state and the opponent's private state are only partially observable. Algorithms for solving MOMDPs, such as MO-IP and MO-SARSOP, are discussed. Several optimizations of the resulting MOMDP are also presented, including removing irrelevant arguments, inferring attacks, and removing dominated arguments, both with and without dependencies on the initial state.
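To give a flavor of one of the optimizations mentioned above, the following is a minimal sketch of pruning irrelevant arguments from an abstract argumentation framework before building the MOMDP. It is not the paper's algorithm; it assumes (hypothetically) that an argument is relevant only if it can influence a designated goal argument through a chain of attacks, and discards everything else by backward reachability over the attack relation. All names (`prune_irrelevant`, `attacks`, `goal`) are illustrative.

```python
from collections import deque

def prune_irrelevant(arguments, attacks, goal):
    """Keep only arguments that can influence `goal` via a chain of attacks.

    `arguments` is a set of argument labels; `attacks` is a set of
    (attacker, target) pairs. This is plain backward reachability,
    assumed here as a stand-in for the irrelevant-argument removal
    described in the text.
    """
    # Reverse adjacency: for each argument, the set of its attackers.
    attackers_of = {a: set() for a in arguments}
    for src, dst in attacks:
        attackers_of[dst].add(src)

    # Walk backwards from the goal through attack edges.
    relevant = {goal}
    frontier = deque([goal])
    while frontier:
        node = frontier.popleft()
        for att in attackers_of[node]:
            if att not in relevant:
                relevant.add(att)
                frontier.append(att)

    # Keep only attacks between surviving arguments.
    pruned = {(s, d) for s, d in attacks if s in relevant and d in relevant}
    return relevant, pruned
```

Shrinking the argument and attack sets this way reduces the state space of the MOMDP that is built from the APS, which matters because MOMDP solvers scale poorly with state-space size.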