Dr. Surya Prakash discusses Monte Carlo methods as model-free reinforcement learning techniques that use random sampling from experience to handle settings where no model of the environment is available. The presentation covers policy evaluation, the estimation of state and action values, and the exploring-starts assumption, which ensures every state-action pair is selected infinitely often. It also addresses how Monte Carlo prediction is extended to control, using ε-greedy policies to maintain exploration.
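To make the ideas in the summary concrete, here is a minimal sketch of every-visit Monte Carlo control with an ε-greedy policy on a toy episodic chain environment. The environment (a five-state chain with a goal on the right and a trap on the left), the hyperparameters, and all function names are illustrative assumptions, not from the presentation itself.

```python
import random
from collections import defaultdict

# Hypothetical toy MDP: states 0..4, start at 2; 0 (trap) and 4 (goal) are
# terminal. Reaching the goal yields reward +1, everything else 0.
ACTIONS = (-1, 1)  # move left, move right
START, TRAP, GOAL = 2, 0, 4

def generate_episode(Q, epsilon, rng):
    """Roll out one episode following an epsilon-greedy policy over Q."""
    state, trajectory = START, []
    while state not in (TRAP, GOAL):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)            # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        next_state = state + action
        reward = 1.0 if next_state == GOAL else 0.0
        trajectory.append((state, action, reward))
        state = next_state
    return trajectory

def mc_control(episodes=5000, epsilon=0.1, gamma=0.9, seed=0):
    """Every-visit Monte Carlo control: estimate Q from sampled returns."""
    rng = random.Random(seed)
    Q = defaultdict(float)       # action-value estimates
    counts = defaultdict(int)    # visit counts for incremental averaging
    for _ in range(episodes):
        G = 0.0
        # Walk the episode backwards, accumulating the discounted return.
        for state, action, reward in reversed(generate_episode(Q, epsilon, rng)):
            G = reward + gamma * G
            counts[(state, action)] += 1
            # Incremental mean: nudge Q toward the observed return G.
            Q[(state, action)] += (G - Q[(state, action)]) / counts[(state, action)]
    return Q

if __name__ == "__main__":
    Q = mc_control()
    greedy_action = max(ACTIONS, key=lambda a: Q[(START, a)])
    print(greedy_action)
```

Because the policy stays ε-greedy, every action keeps a nonzero selection probability, which plays the same exploratory role that the exploring-starts assumption serves in the talk; after enough episodes the greedy action at the start state should point toward the goal.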