This document presents a method for desensitizing data using Ridge Discriminant Component Analysis (RDCA) to protect privacy in machine learning applications. RDCA is used to derive signal and noise subspaces with respect to a privacy label. Data are then projected onto the privacy noise subspace to generate desensitized data with reduced discriminative power for the privacy label. Experiments on activity recognition, face recognition, and digit recognition datasets show that accuracy on the privacy label is reduced to random-guess levels, while utility accuracy drops by only 5-7% on average. This confirms that RDCA desensitization effectively protects privacy with only a small loss in utility.
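To make the projection step concrete, below is a minimal sketch of the desensitization pipeline, assuming a standard formulation of ridge-regularized discriminant analysis: the discriminant directions come from the eigenproblem on (Sw + ridge*I)^{-1} Sb, the top directions form the privacy signal subspace, and the remaining directions form the noise subspace onto which the data are projected. The function name rdca_desensitize and all parameter choices are hypothetical; the paper's exact RDCA formulation may differ.

```python
import numpy as np

def rdca_desensitize(X, y_priv, ridge=1e-3, signal_dim=None):
    """Sketch: project data onto the privacy-noise subspace (assumed RDCA form).

    X          -- (n_samples, n_features) data matrix
    y_priv     -- privacy labels, one per sample
    ridge      -- ridge term added to the within-class scatter
    signal_dim -- number of discriminant (signal) directions to discard;
                  defaults to (#privacy classes - 1), the LDA rank bound
    """
    classes = np.unique(y_priv)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y_priv == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)

    # Ridge-regularized discriminant eigenproblem: eigenvectors of
    # (Sw + ridge*I)^{-1} Sb, sorted by discriminative power (eigenvalue).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + ridge * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    evecs = evecs[:, order].real

    k = signal_dim if signal_dim is not None else len(classes) - 1
    noise_basis = evecs[:, k:]  # directions carrying little privacy signal
    # Orthonormalize the noise basis and project centered data onto its span.
    Q, _ = np.linalg.qr(noise_basis)
    return (X - mu) @ Q @ Q.T + mu
```

Under this sketch, a downstream classifier trained on the returned data should perform near chance on the privacy label (its discriminant directions were projected out) while utility-relevant structure in the remaining subspace is largely preserved.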