This document discusses the auralisation of deep convolutional neural networks (CNNs) as a way to better understand what they learn, particularly for CNNs trained on spectrograms in music information retrieval. The authors propose reconstructing audio signals from deconvolved spectrograms, so that researchers can listen to, and thereby interpret, the features each convolutional layer has learned. Experiments on genre classification demonstrate the approach, showing that the auralised features offer more intuitive insight into how the CNNs operate.
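Because the deconvolved feature maps live in the magnitude-spectrogram domain, turning them back into listenable audio requires a phase estimate. The sketch below is a minimal illustration of this step, not the authors' exact pipeline: it assumes a hypothetical `deconv_magnitude` array (the deconvolved spectrogram) and borrows the phase of the original signal via librosa; Griffin-Lim (`librosa.griffinlim`) would be an alternative when the original phase is unavailable.

```python
import numpy as np
import librosa

def auralise(deconv_magnitude, original_audio, n_fft=1024, hop_length=512):
    """Reconstruct a waveform from a deconvolved magnitude spectrogram.

    deconv_magnitude : magnitude spectrogram of a deconvolved feature map,
        shaped like the STFT of original_audio (1 + n_fft // 2, n_frames).
    original_audio   : the input signal, used here only as a phase source.
    """
    # STFT of the original signal supplies the phase estimate.
    original_stft = librosa.stft(original_audio, n_fft=n_fft, hop_length=hop_length)
    phase = np.angle(original_stft)
    # Combine the deconvolved magnitude with the borrowed phase.
    complex_spec = deconv_magnitude * np.exp(1j * phase)
    # Inverse STFT back to a time-domain signal that can be played back.
    return librosa.istft(complex_spec, hop_length=hop_length)
```

With a waveform in hand, each deconvolved feature map can be played back and compared across layers, which is what makes the listening-based interpretation possible.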