This document discusses bandit-based Monte Carlo planning, focusing on Monte Carlo Tree Search (MCTS) and its application to the game of Go. It surveys approaches to decision-making in games, weighs the advantages and limitations of MCTS and Upper Confidence Tree (UCT) methods, and explores extensions for handling infinite action spaces and for integrating expert knowledge. It also reviews the progress that computational methods have brought to computer Go, situating these results within ongoing work on intelligent agents.
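As background for the UCT discussion, UCT is commonly understood as MCTS with the UCB1 bandit rule applied at each tree node; a standard formulation (assumed here for illustration, since the summary above does not state the formula) is:

\[
a^{*} = \arg\max_{a} \left( \bar{X}_{a} + c \sqrt{\frac{\ln N}{n_{a}}} \right)
\]

where \(\bar{X}_{a}\) is the empirical mean reward of child action \(a\), \(n_{a}\) is its visit count, \(N\) is the visit count of the parent node, and \(c > 0\) is an exploration constant balancing exploitation of high-value moves against exploration of rarely tried ones.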