This document provides an overview of local search algorithms. It discusses how local search works by iteratively improving a single current state rather than exploring the entire state space. Key aspects covered include representing problems as states, defining neighbor states and objective functions, and the problem of getting stuck in local optima. It covers techniques such as hill climbing and gradient descent, along with simulated annealing and random restarts for escaping local optima. Local search is memory efficient and can find good solutions in large state spaces, though it offers no guarantee of optimality. Algorithm design considerations such as state representation, neighborhood definition, and constraints are discussed. Pseudocode outlines for basic local search, tabu search, and simulated annealing wrappers are also provided.
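As a minimal sketch of the basic loop described above (assuming a `neighbors` function that enumerates neighbor states and an `objective` function to maximize; the names `hill_climb` and `random_restart` are illustrative, not taken from the document), greedy hill climbing with random restarts might look like this:

```python
import random


def hill_climb(initial_state, neighbors, objective, max_iters=1000):
    """Greedy local search: repeatedly move to the best neighbor until
    no neighbor improves the objective (a local optimum)."""
    current = initial_state
    for _ in range(max_iters):
        candidates = neighbors(current)
        if not candidates:
            break
        best = max(candidates, key=objective)
        if objective(best) <= objective(current):
            return current  # stuck at a local optimum
        current = best
    return current


def random_restart(make_initial, neighbors, objective, restarts=20):
    """Run hill climbing from several random starting states and keep
    the best local optimum found."""
    results = (hill_climb(make_initial(), neighbors, objective)
               for _ in range(restarts))
    return max(results, key=objective)


if __name__ == "__main__":
    # Toy example: maximize f(x) = -(x - 7)^2 over integers,
    # with neighbors x - 1 and x + 1.
    f = lambda x: -(x - 7) ** 2
    nbrs = lambda x: [x - 1, x + 1]
    best = random_restart(lambda: random.randint(-100, 100), nbrs, f)
    print(best, f(best))  # expected: 7 0
```

The sketch keeps only the current state in memory, which illustrates the memory efficiency noted above; the random restarts are one way to escape local optima, while simulated annealing or tabu search would modify the acceptance rule instead.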