AURALISATION OF DEEP CONVOLUTIONAL NEURAL NETWORKS:
LISTENING TO LEARNED FEATURES
Keunwoo Choi, Queen Mary University of London, keunwoo.choi@qmul.ac.uk
Jeonghee Kim, Naver Labs, South Korea, jeonghee.kim@navercorp.com
George Fazekas, Queen Mary University of London, g.e.fazekas@qmul.ac.uk
Mark Sandler, Queen Mary University of London, m.sandler@qmul.ac.uk
ABSTRACT
Deep learning has been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. Deep convolutional neural networks (CNNs), one of the most popular deep learning approaches, have also been used for these tasks. However, their learning and prediction processes are little understood, particularly when they are applied to spectrograms. We introduce auralisation of CNNs to help understand their underlying mechanism.
1. INTRODUCTION
In the field of computer vision, deep learning approaches became the de facto standard after convolutional neural networks (CNNs) showed break-through results in the ImageNet competition in 2012 [3]. CNNs rapidly became popular even though the reasons for their success were not completely understood.

One effective way to understand and explain CNNs was introduced in [5], where features at deeper levels are visualised by a method called deconvolution. By deconvolving and un-pooling the layers, one can see which parts of the input image each filter responds to.
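The sketch below illustrates the two operations this method relies on: un-pooling with recorded argmax positions ("switches") and convolution with the flipped (transposed) filter. It is a minimal NumPy illustration of the general idea from [5], not the implementation used in this paper; all function names and shapes are illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d

def max_pool_with_switches(x, size=2):
    """Max-pooling that also records the argmax positions (the 'switches')."""
    h, w = x.shape[0] // size, x.shape[1] // size
    pooled = np.zeros((h, w))
    switches = np.zeros((h, w, 2), dtype=int)
    for i in range(h):
        for j in range(w):
            patch = x[i * size:(i + 1) * size, j * size:(j + 1) * size]
            r, c = np.unravel_index(np.argmax(patch), patch.shape)
            pooled[i, j] = patch[r, c]
            switches[i, j] = (i * size + r, j * size + c)
    return pooled, switches

def unpool(pooled, switches, out_shape):
    """Place each pooled activation back at its recorded position; zeros elsewhere."""
    out = np.zeros(out_shape)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            r, c = switches[i, j]
            out[r, c] = pooled[i, j]
    return out

def deconv(feature_map, kernel):
    """Approximate 'deconvolution' as convolution with the flipped kernel."""
    return convolve2d(feature_map, kernel[::-1, ::-1], mode='same')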
However, spectrograms have not been analysed with this approach, and it is not obvious what can be learned by deconvolving them. The information gained from 'seeing' a part of a spectrogram can be extended by auralising the convolutional filters.
In this paper, we introduce the procedure and results of deconvolving spectrograms. Furthermore, we propose auralisation of filters by extending deconvolution to time-domain reconstruction. Section 2 explains the background of CNNs and deconvolution. The proposed auralisation method is introduced in Section 3, and the results are discussed in Section 4.
2. BACKGROUND
The majority of research on CNNs for audio signals uses a 2D time-frequency representation as the input data, treating it as an image. Various types of representation have been used, including the short-time Fourier transform (STFT), mel-spectrogram and constant-Q transform (CQT). In [4], for example, an 80-by-15 (mel-band-by-frame) mel-spectrogram is used with 7-by-3 and 3-by-3 convolutions for onset detection, and in [2] a CQT is used with 5-by-25 and 5-by-13 convolutions for chord recognition.

Figure 1. Deconvolution of CNNs trained for image classification. The filters' responses and the corresponding parts of the input images are shown on the left and right, respectively, for each layer. Image courtesy of [5].

© Keunwoo Choi, Jeonghee Kim, George Fazekas, Mark Sandler. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Keunwoo Choi, Jeonghee Kim, George Fazekas, Mark Sandler. "Auralisation of Deep Convolutional Neural Networks: Listening to Learned Features", 16th International Society for Music Information Retrieval Conference, 2015.
Visualisation of CNNs was introduced in [5], which showed how high-level features (postures/objects) are built up from low-level features (lines/curves), as illustrated in Figure 1. Visualisation of CNNs helps not only in understanding the process inside the black-box model, but also in choosing the hyper-parameters of the networks. For example, redundancy or deficiency in the capacity of the networks, which is bounded by hyper-parameters such as the number of layers and filters, can be judged by inspecting the learned filters. This information is valuable because fine-tuning hyper-parameters is among the most crucial factors in designing CNNs, yet it is a computationally expensive process.
3. AURALISATION OF FILTERS
The spectrograms used in CNNs can also be deconvolved. Unlike with visual images, however, a deconvolved spectrogram does not generally lend itself to an intuitive explanation. This is because, first, seeing a spectrogram does not necessarily provide clear intuition comparable to observing an image. Second, detecting the edges of a spectrogram, which is known to happen at the first layer of CNNs, removes components of the spectrogram.

This paper has been supported by EPSRC Grant EP/L019981/1, Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption.
Figure 2. Filters at the first layer of CNNs trained for genre classification: initial filters (left) and learned filters (right).
To solve this problem, we propose to reconstruct the audio signal using a process we call auralisation. This requires an additional stage that inverse-transforms a deconvolved spectrogram. The phase information is provided by the phase of the original STFT, following the generic approach of spectrogram-based sound source separation algorithms. The STFT is therefore the recommended representation, as it allows a time-domain signal to be obtained easily.
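As a minimal sketch of this reconstruction stage, the snippet below combines a deconvolved magnitude spectrogram with the phase of the original STFT and inverse-transforms it. It assumes librosa for the STFT/ISTFT and the 512-point, 50%-hop setting used later in the experiments; the function names and the network-specific deconvolution step are placeholders.

import numpy as np
import librosa

def auralise(deconvolved_magnitude, original_stft, hop_length=256):
    # Reuse the original phase, as in spectrogram-based source separation.
    phase = np.angle(original_stft)
    complex_spec = deconvolved_magnitude * np.exp(1j * phase)
    return librosa.istft(complex_spec, hop_length=hop_length)

# Usage sketch (the deconvolution itself is network-specific and omitted):
# y, sr = librosa.load('clip.wav', sr=None)
# S = librosa.stft(y, n_fft=512, hop_length=256)
# deconvolved = deconvolve_one_filter(np.abs(S))  # hypothetical helper
# audio = auralise(deconvolved, S)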
4. EXPERIMENTS AND DISCUSSION
We implemented a CNN-based genre classification algorithm using a dataset obtained from Naver Music (http://music.naver.com), a Korean music streaming service. Three genres (ballade, dance, and hiphop) were classified using 8,000 songs in total. Ten 4-second clips were extracted from each song, generating 80,000 data samples; each clip was transformed by a 512-point windowed STFT with a 50% hop size. 6,600/700/700 songs were designated as the training/validation/test sets, respectively. As a performance reference, a CNN with 5 layers, 3-by-3 filters, and max-pooling with size and stride of 2 was used. This system showed 75% accuracy with a loss of 0.62.
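The clip extraction and STFT settings above can be summarised with a short sketch. It assumes librosa, a Hann window and the as-loaded sample rate; none of these details beyond the 512-point frame and 50% hop are stated in the paper.

import numpy as np
import librosa

def clip_to_spectrogram(y, sr, offset_sec, clip_sec=4.0, n_fft=512):
    # Cut a 4-second clip and return its magnitude spectrogram
    # (512-point windowed FFT, 50% hop).
    start = int(offset_sec * sr)
    clip = y[start:start + int(clip_sec * sr)]
    S = librosa.stft(clip, n_fft=n_fft, hop_length=n_fft // 2, window='hann')
    return np.abs(S)

# Ten clips per song, e.g. at evenly spaced offsets (offsets are assumed):
# y, sr = librosa.load('song.mp3', sr=None)
# spectrograms = [clip_to_spectrogram(y, sr, t) for t in offsets]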
4.1 Visualisation of First-layer Filters
A 4-layer CNN was built with a larger filter size at the first layer. Since max-pooling is involved in the deeper layers, filters at those layers can only be illustrated when input data is given. To obtain a more intuitive visualisation, large convolutional filters were used, as in [1]. This system showed 76% accuracy.

Figure 2 shows the visualisation of the first layer, which consists of 12-by-12 filters. The filters are initialised with uniformly distributed random values, which resemble white noise, as shown on the left. After training, some of the filters develop patterns that can detect certain shapes. In image classification tasks, edge detectors with various orientations are usually observed, since the outlines of objects are a useful cue for the task. Here, however, several vertical edge detectors are observed, as shown on the right. This may be because the networks learn to extract not only edges but also more complex patterns from spectrograms.
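The tiled view in Figure 2 can be produced with a few lines of matplotlib; the sketch below assumes the first-layer weights are available as an array of shape (n_filters, 12, 12), which is an assumption about the trained model rather than something specified in the paper.

import numpy as np
import matplotlib.pyplot as plt

def plot_first_layer_filters(weights, n_cols=8):
    # Tile first-layer filters, shape (n_filters, 12, 12), into one figure.
    n_filters = weights.shape[0]
    n_rows = int(np.ceil(n_filters / n_cols))
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols, n_rows))
    for ax in axes.flat:
        ax.axis('off')
    for ax, w in zip(axes.flat, weights):
        ax.imshow(w, cmap='gray', interpolation='nearest')
    plt.tight_layout()
    plt.show()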
4.2 Auralisation of Filters
Figure 3 shows the original and deconvolved signals and spectrograms of Bach's English Suite No. 1: Prelude, deconvolved from the 7th feature at the 1st layer. Listening to the deconvolved signal reveals that this feature behaves like an onset detector. This can also be explained by visualising the filter: a vertical edge detector can work as a crude onset detector when applied to a spectrogram, since rapid changes along the time axis pass through the edge detector while sustained parts are filtered out.

Figure 3. Original and deconvolved signals and spectrograms with ground-truth onsets annotated (on the left).
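This behaviour can be reproduced with a generic, hand-crafted vertical-edge kernel rather than the learned one; the toy sketch below only illustrates why such a filter emphasises onsets, and the kernel values are illustrative.

import numpy as np
from scipy.signal import convolve2d

# A generic vertical-edge kernel: responds to changes along the time axis
# (columns of the spectrogram) and ignores sustained, constant regions.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0]])

def crude_onset_strength(magnitude_spectrogram):
    # Half-wave rectify the filter response and sum over frequency bins,
    # giving a rough per-frame onset strength envelope.
    response = convolve2d(magnitude_spectrogram, vertical_edge, mode='same')
    return np.maximum(response, 0.0).sum(axis=0)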
A different behaviour was observed at deeper layers, where higher-level information can be expressed in the deconvolved spectrograms. For example, one of the features at the deepest layer was mostly deactivated by hiphop music. However, not all features can be easily interpreted by listening to the signals.
5. CONCLUSIONS
We introduce auralisation of CNNs, an extension of CNN visualisation. It is achieved by inverse-transforming a deconvolved spectrogram to obtain a time-domain audio signal. Listening to the audio signal enables researchers to understand the behaviour of CNNs when they are applied to spectrograms. Further research will include a more in-depth interpretation of the learned features.
6. REFERENCES
[1] Luiz Gustavo Hafemann. An analysis of deep neural networks for texture classification. 2014.

[2] Eric J. Humphrey and Juan P. Bello. Rethinking automatic chord recognition with convolutional neural networks. In Machine Learning and Applications (ICMLA), International Conference on. IEEE, 2012.

[3] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

[4] Jan Schlüter and Sebastian Böck. Improved musical onset detection with convolutional neural networks. In International Conference on Acoustics, Speech and Signal Processing. IEEE, 2014.

[5] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014.
The results are demonstrated during the LBD session and online at the author's SoundCloud: https://soundcloud.com/kchoi-research. Example code for the whole deconvolution procedure is publicly available at https://github.com/gnuchoi/CNNauralisation.