Backpropagation is a common supervised learning technique for training artificial neural networks. It computes the gradient of the network's error with respect to each weight, so the weights can be adjusted to reduce that error using methods such as stochastic gradient descent. Training alternates forward and backward passes: the forward pass computes the network's output and its error, and the backward pass propagates the error signal back through the layers via the chain rule, yielding a weight update for each connection in proportion to its contribution to the output error. While powerful, backpropagation has limitations such as slow convergence and susceptibility to getting stuck in local minima.
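
The sketch below makes the forward/backward structure concrete with a minimal two-layer network trained by stochastic gradient descent on the XOR toy problem; the layer sizes, sigmoid activations, squared-error loss, and learning rate are illustrative assumptions rather than details taken from the text above.

```python
# Minimal sketch of backpropagation for a two-layer network (illustrative only;
# architecture, hyperparameters, and the XOR data are assumptions, not from the text).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for the input->hidden and hidden->output layers.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate for stochastic gradient descent

for epoch in range(5000):
    for i in rng.permutation(len(X)):       # stochastic: one example at a time
        x, t = X[i:i+1], y[i:i+1]

        # Forward pass: compute activations layer by layer.
        h = sigmoid(x @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the error signal using the chain rule.
        # Gradient of the squared error with respect to the output pre-activation.
        delta_out = (out - t) * out * (1 - out)
        # Gradient with respect to the hidden pre-activation, via the output-layer weights.
        delta_h = (delta_out @ W2.T) * h * (1 - h)

        # Adjust each weight in proportion to its contribution to the output error.
        W2 -= lr * h.T @ delta_out
        b2 -= lr * delta_out.sum(axis=0)
        W1 -= lr * x.T @ delta_h
        b1 -= lr * delta_h.sum(axis=0)

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(3))  # approaches [0, 1, 1, 0]
```

After training, the printed outputs approach the XOR targets, illustrating how the per-connection updates computed in the backward pass gradually reduce the output error.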