This document provides an introduction to principal component analysis (PCA) for feature extraction. It describes PCA as a method for projecting high-dimensional data onto a lower-dimensional subspace while retaining as much of the data's variance as possible. The key steps are: calculate the mean of each column, center the data by subtracting those means, compute the covariance matrix of the centered data, perform an eigendecomposition of the covariance matrix to obtain eigenvalues and eigenvectors, and project the data onto the subspace spanned by the eigenvectors (principal components) with the largest eigenvalues. Implementing PCA with scikit-learn's PCA class is also briefly discussed.
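The steps above can be sketched in NumPy. This is a minimal illustration, not the document's own code: the function name `pca`, the two-column sample array, and the choice of one retained component are assumptions made for the example. Note that scikit-learn's PCA class computes an equivalent projection via singular value decomposition, so its output may differ from this sketch by the sign of a component.

```python
import numpy as np

def pca(X, n_components):
    # Step 1-2: compute the per-column mean and center the data.
    mean = X.mean(axis=0)
    Xc = X - mean
    # Step 3: covariance matrix of the centered data (features in columns).
    cov = np.cov(Xc, rowvar=False)
    # Step 4: eigendecomposition; eigh is appropriate since cov is symmetric.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Step 5: keep the eigenvectors with the largest eigenvalues
    # and project the centered data onto them.
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    return Xc @ components

# Hypothetical 10x2 data set, reduced to one principal component.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
              [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])
Z = pca(X, n_components=1)
print(Z.shape)  # (10, 1)
```

The variance of the projected data equals the largest eigenvalue of the covariance matrix, which is why selecting components by eigenvalue retains the most variance.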