Support vector machines (SVMs) are supervised machine learning models that can perform both classification and regression. They work by finding a separating hyperplane (decision boundary) that maximizes the margin between classes of data. For nonlinear relationships, kernel functions implicitly map the data into a higher-dimensional space where a linear boundary can be found. Key hyperparameters include C, which controls the strength of regularization; gamma, which sets the reach of each training point's influence in the Gaussian RBF kernel; and epsilon, which defines the width of the error-insensitive tube in support vector regression.
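As a minimal sketch of how these hyperparameters appear in practice, the following example uses scikit-learn's SVC and SVR (an assumption; the original text does not name a library). The synthetic datasets, train/test split, and specific values of C, gamma, and epsilon are illustrative only, not recommended settings.

```python
from sklearn.datasets import make_moons, make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

# --- Classification: an RBF kernel handles the nonlinear "moons" boundary ---
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls regularization (smaller C = wider margin, more tolerance for
# misclassified points); gamma controls how far a single training point's
# influence reaches in the RBF kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# --- Regression: epsilon sets the width of the insensitive tube ---
Xr, yr = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
Xr_train, Xr_test, yr_train, yr_test = train_test_split(Xr, yr, random_state=0)

# Errors smaller than epsilon incur no loss, so epsilon trades off the number
# of support vectors against sensitivity to small deviations.
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
reg.fit(Xr_train, yr_train)
print("regression R^2:", reg.score(Xr_test, yr_test))
```

Scaling the features before fitting matters here because both the RBF kernel and the margin are distance-based, so unscaled features can dominate the solution.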