Bayesian learning views hypotheses as intermediaries between the data and the predictions: predictions are made by averaging over all hypotheses, weighted by their posterior probabilities given the data. Belief networks can represent learning problems whose structure is known or unknown and whose variables are fully or partially observable. Belief networks use localized representations, whereas neural networks use distributed representations. Reinforcement learning uses rewards to learn a successful agent function; Q-learning, for example, learns an action-value function rather than a model of the environment. An active learning agent must consider which actions to take, what their outcomes may be, and how those outcomes affect the rewards it receives. Genetic algorithms evolve a population of individuals toward successful solutions, as measured by a fitness function. Explanation-based learning speeds up a program by caching and reusing the results of prior computations.
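As a concrete illustration of the Q-learning idea mentioned above (a minimal sketch, not an example from the text), the code below learns an action-value function on a hypothetical four-state corridor: the agent starts at state 0, and reaching state 3 yields a reward of 1. The environment, state count, and learning parameters are all illustrative assumptions.

```python
import random

# Hypothetical 4-state corridor: states 0..3, actions 0 (left) and 1 (right).
# Reaching state 3 yields reward +1 and ends the episode.
N_STATES, ACTIONS = 4, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # illustrative parameter choices

def step(state, action):
    """Deterministic transition: action 1 moves right, action 0 moves left."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(100):            # cap episode length
            if done:
                break
            # epsilon-greedy action selection, breaking ties at random
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                best = max(q[(s, b)] for b in ACTIONS)
                a = rng.choice([b for b in ACTIONS if q[(s, b)] == best])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # greedy policy should move right in every non-terminal state
```

Note that the update rule uses only observed transitions and rewards; the agent never builds an explicit model of `step`, which is what makes Q-learning model-free.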