This document proposes using facial expression analysis to predict pairwise music preferences from short music clips. It describes a user study in which participants' facial expressions were recorded while they listened to pairs of songs, and machine learning models developed to predict the preferred song of each pair from the extracted facial features. The models predicted preferences with reasonable accuracy, drawing on facial signals such as contempt, joy, sadness, and overall valence. Future work includes evaluating this approach in a recommendation setting.
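To make the prediction task concrete, the sketch below shows one minimal way pairwise preference could be derived from per-clip facial-expression features. This is an illustrative assumption, not the paper's actual model: the feature names, the weighted-score formulation, and the weight values are all hypothetical.

```python
# Hypothetical sketch (NOT the study's actual model): predict which of two
# clips a listener preferred from per-clip facial-expression features.
# Feature names and weights below are illustrative assumptions.

def score(features, weights):
    """Weighted sum of facial-expression features for one clip."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def predict_preference(clip_a, clip_b, weights):
    """Return 'A' if clip_a's facial response scores higher, else 'B'."""
    return "A" if score(clip_a, weights) >= score(clip_b, weights) else "B"

# Illustrative weights: positive expressions raise the score, negative ones lower it.
WEIGHTS = {"joy": 1.0, "valence": 0.8, "sadness": -0.6, "contempt": -1.0}

clip_a = {"joy": 0.7, "valence": 0.5, "sadness": 0.1, "contempt": 0.0}
clip_b = {"joy": 0.2, "valence": 0.1, "sadness": 0.4, "contempt": 0.3}

print(predict_preference(clip_a, clip_b, WEIGHTS))  # prints "A"
```

In practice the weights would be learned from the recorded study data (e.g. by a classifier over feature differences) rather than set by hand, but the pairwise comparison structure stays the same.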