This document discusses radial basis function (RBF) networks. It begins by introducing the basic structure of RBF networks: an input layer, a single hidden layer that applies a nonlinear transformation to the input using radial basis functions, and an output layer that computes a linear combination of the hidden activations. The document then discusses Cover's theorem on the separability of patterns, which states that a complex pattern-classification problem cast nonlinearly into a high-dimensional space is more likely to be linearly separable than in a low-dimensional space. Several key concepts are introduced along the way, including dichotomies, φ-separable functions, and the use of hidden functions to map input patterns into a hidden feature space.
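As a concrete statement of the separability claim, Cover's counting argument gives the probability that a randomly chosen dichotomy of N patterns in general position is separable in a feature space of dimension m₁ (this is the standard counting form of the theorem; the symbols N and m₁ are introduced here for illustration):

$$
P(N, m_1) = \left(\frac{1}{2}\right)^{N-1} \sum_{m=0}^{m_1 - 1} \binom{N-1}{m}
$$

For fixed N, this probability increases toward 1 as the hidden-space dimension m₁ grows, which is the quantitative content behind the claim that nonlinear mappings to higher-dimensional spaces make linear separability more likely.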
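The two-stage structure (nonlinear hidden layer, linear output layer) can be illustrated with a minimal NumPy sketch. This is not the document's own implementation: it assumes Gaussian basis functions with a single shared width, and the names `rbf_forward`, `centers`, and `width` are illustrative.

```python
import numpy as np

def rbf_forward(X, centers, width, weights, bias):
    # Squared Euclidean distance from every input to every center,
    # shape (n_samples, n_hidden).
    dists_sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Hidden layer: Gaussian RBF activations
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2)).
    phi = np.exp(-dists_sq / (2.0 * width ** 2))
    # Output layer: purely linear combination of the hidden activations.
    return phi @ weights + bias

# Usage sketch: because the output layer is linear, the output weights
# solve an ordinary least-squares problem once centers and width are fixed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, :1]) + X[:, 1:] ** 2                      # a simple nonlinear target
centers = X[rng.choice(len(X), size=20, replace=False)]   # centers drawn from the data
width = 1.0
Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
             / (2.0 * width ** 2))
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = rbf_forward(X, centers, width, weights, np.zeros(1))
```

The least-squares step reflects the design choice the structure implies: all the nonlinearity lives in the hidden layer, so fitting the output layer reduces to linear regression on the hidden features.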