This document discusses hierarchical reinforcement learning and several related concepts:
1) Hierarchical reinforcement learning uses temporal or state abstraction to decompose a reinforcement learning problem into smaller subproblems, in order to learn faster, require fewer value-function updates, and incorporate prior knowledge.
2) Key hierarchical RL methods discussed include options, feudal Q-learning, MaxQ, and hierarchical learning in subsumption architectures. Options define temporally extended actions and allow the problem to be modeled as a semi-Markov decision process (SMDP); a sketch of SMDP Q-learning with options follows this list.
3) Feudal Q-learning uses a multi-layered hierarchy of managers and sub-managers with reward hiding and information hiding. MaxQ decomposes the value function hierarchically into subtask values and completion values (see the second sketch after this list). Hierarchical learning in subsumption architectures focuses on learning the individual behavior modules of a layered control architecture separately.
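
To make the option construct concrete, here is a minimal sketch in Python. It is only an illustration under common textbook assumptions (discrete, hashable states and tabular Q-values); the names Option and smdp_q_update are hypothetical, not from the source. An option bundles an initiation set, an internal policy, and a termination condition, and the SMDP Q-learning backup discounts by gamma**k when the option ran for k primitive steps.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Iterable, Set, Tuple

State = Hashable

@dataclass
class Option:
    # A temporally extended action: where it can start, how it acts, when it stops.
    initiation: Set[State]                # I: states where the option may be invoked
    policy: Callable[[State], int]        # pi(s): primitive action while the option runs
    terminates: Callable[[State], bool]   # beta(s): True when the option ends in s

def smdp_q_update(Q: Dict[Tuple[State, int], float],
                  s: State, o: int, discounted_reward: float, k: int,
                  s_next: State, option_ids: Iterable[int],
                  alpha: float = 0.1, gamma: float = 0.9) -> None:
    # One SMDP Q-learning backup after option o ran for k primitive steps,
    # accumulating sum_{t<k} gamma^t * r_t as discounted_reward and ending in s_next.
    best_next = max(Q.get((s_next, o2), 0.0) for o2 in option_ids)
    q_old = Q.get((s, o), 0.0)
    Q[(s, o)] = q_old + alpha * (discounted_reward + gamma ** k * best_next - q_old)
```

The gamma**k factor is what distinguishes this from an ordinary one-step Q-learning backup: an option's whole k-step execution is credited in a single update, which is one source of the speedups described above.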
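
The MaxQ value decomposition can likewise be sketched compactly. In this hedged illustration (the table names V and C and the task graph `children` are assumptions for the example, not the source's notation), the value of a composite task splits into the value of the chosen subtask plus a completion term for the parent task.

```python
from typing import Dict, Hashable, List, Tuple

State = Hashable
Task = str

def maxq_value(i: Task, s: State,
               V: Dict[Tuple[Task, State], float],
               C: Dict[Tuple[Task, State, Task], float],
               children: Dict[Task, List[Task]]) -> float:
    # Recursively evaluate V(i, s) under the MaxQ decomposition:
    #   Q(i, s, a) = V(a, s) + C(i, s, a)   and   V(i, s) = max_a Q(i, s, a),
    # where C(i, s, a) is the expected discounted value of completing
    # task i after subtask a finishes.
    if not children.get(i):                      # primitive action: stored expected reward
        return V.get((i, s), 0.0)
    return max(maxq_value(a, s, V, C, children) + C.get((i, s, a), 0.0)
               for a in children[i])
```

Because each C table is local to one parent task, subtask values can be learned once and reused wherever the subtask appears in the hierarchy, which is the sense in which MaxQ decomposes the value function.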