The document describes a method for automatically estimating emotion in music using deep long short-term memory recurrent neural networks (LSTM-RNNs). It outlines the feature sets extracted from the audio and describes the training procedure, which uses multitask learning over arousal and valence together with pretraining by denoising autoencoders. Results on the official test set demonstrate the effectiveness of the proposed approach.
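To make the multitask idea concrete, the following is a minimal NumPy sketch of a single LSTM layer whose shared hidden state feeds two task-specific linear heads, one for arousal and one for valence. All dimensions, weights, and function names here are illustrative assumptions; this is not the authors' actual architecture, feature set, or training code (no pretraining or learning is shown).

```python
import numpy as np

# Hypothetical sizes: the real feature dimensionality and layer widths
# are not given in the summary above.
n_in, n_hid = 10, 8
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per LSTM gate: input (i), forget (f), output (o),
# and candidate cell state (g). Each maps [x_t; h_{t-1}] to n_hid units.
W = {g: rng.standard_normal((n_hid, n_in + n_hid)) * 0.1 for g in "ifog"}
b = {g: np.zeros(n_hid) for g in "ifog"}

# Multitask learning: two task-specific regression heads share the same
# recurrent representation h_t.
W_arousal = rng.standard_normal((1, n_hid)) * 0.1
W_valence = rng.standard_normal((1, n_hid)) * 0.1

def lstm_step(x, h, c):
    """One LSTM time step over the concatenated input [x; h]."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"])
    f = sigmoid(W["f"] @ z + b["f"])
    o = sigmoid(W["o"] @ z + b["o"])
    g = np.tanh(W["g"] @ z + b["g"])
    c_new = f * c + i * g          # cell state update
    h_new = o * np.tanh(c_new)     # hidden state (shared representation)
    return h_new, c_new

def predict(frames):
    """Run a sequence of feature frames; emit (arousal, valence) per frame."""
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    out = []
    for x in frames:
        h, c = lstm_step(x, h, c)
        out.append((float(W_arousal @ h), float(W_valence @ h)))
    return out

frames = rng.standard_normal((5, n_in))  # 5 frames of toy audio features
preds = predict(frames)
print(len(preds))  # one (arousal, valence) pair per frame
```

In a multitask setup like the one summarized above, both heads would be trained jointly, so gradients from the arousal and valence losses shape the same recurrent weights.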