This document summarizes research on emotion and theme recognition in music using the MTG-Jamendo dataset. It reviews prior work on music emotion recognition and theme classification, then describes the MTG-Jamendo dataset and the emotions-and-themes-in-music task for automatic music tagging. Baseline results from a VGG-ish CNN model are presented alongside the top submissions, which used techniques such as attention mechanisms and data augmentation. Overall, the task remains challenging; future work may focus on improving dataset balance and exploring additional metadata.
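To make the tagging setup concrete, the sketch below shows a minimal VGG-style CNN that maps a log-mel spectrogram to independent per-tag probabilities, as is typical for multi-label music auto-tagging. The layer sizes, input dimensions, and tag count here are illustrative assumptions, not the exact baseline architecture; PyTorch is used only as one possible framework.

```python
import torch
import torch.nn as nn

class VGGishTagger(nn.Module):
    """Illustrative VGG-style CNN for multi-label music tagging.

    Hypothetical layer sizes; the actual baseline may differ.
    """

    def __init__(self, n_tags: int = 56):  # assumed number of mood/theme tags
        super().__init__()
        # Stacked small conv + pooling blocks, in the VGG spirit
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over frequency and time
        )
        self.classifier = nn.Linear(128, n_tags)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, n_frames) log-mel spectrogram
        h = self.features(mel).flatten(1)
        # Sigmoid (not softmax): each tag is an independent binary label
        return torch.sigmoid(self.classifier(h))

model = VGGishTagger()
probs = model(torch.randn(2, 1, 96, 128))  # batch of 2 dummy spectrograms
```

Training such a model would typically use a binary cross-entropy loss per tag, since a track can carry several moods and themes at once.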