Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal
K. Drossos (1), R. Kotsakis (2), G. Kalliris (2), A. Floros (1)
(1) Digital Audio Processing & Applications Group, Audiovisual Signal Processing Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece
(2) Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle University of Thessaloniki, Thessaloniki, Greece
Agenda
1. Introduction
Everyday life
Sound and emotions
2. Objectives
3. Experimental Sequence
Experimental Procedure’s Layout
Sound corpus & emotional model
Processing of Sounds
Machine learning tests
4. Results
Feature evaluation
Classification
5. Discussion & Conclusions
6. Future work
Everyday life

Stimuli
Many stimuli, e.g.
Visual
Aural
Structured form of sound
General sound
Interaction with stimuli
Reactions
Emotions felt
Sound and emotions

Music and emotions
Music
Structured form of sound
Primarily used to mimic and extend voice characteristics
Enhancing the conveyance of emotion(s)
Related to emotions through (but not only) MIR and MER
Music Information Retrieval
Music Emotion Recognition
Sound and emotions

Music and emotions (cont'd)
Research results & Applications
Accuracy results up to 85%
In large music databases
As opposed to typical "Artist/Genre/Year" classification
Categorisation according to emotion
Retrieval based on emotion
Preliminary applications for synthesis of music based on emotion
Sound and emotions

From music to sound events
Sound events
Non-structured/general form of sound
Carry additional information regarding:
Source attributes
Environment attributes
Sound-producing mechanism attributes
Present almost any time, if not always, in everyday life
Sound and emotions

From music to sound events (cont'd)
Sound events
Used in many applications:
Audio interactions/interfaces
Video games
Soundscapes
Artificial acoustic environments
Trigger reactions
Elicit emotions
Objectives of current study
Data
Aural stimuli can elicit emotions
Music is a structured form of sound
Sound events are not
Music's rhythm is arousing
Research question
Is sound events' rhythm arousing too?
Experimental Procedure's Layout
Layout
Emotional model selection
Annotated sound corpus collection
Processing of sounds
Pre-processing
Feature extraction
Machine learning tests
Sound corpus & emotional model
General Characteristics
Sound Corpus
International Affective Digitized Sounds (IADS)
167 sounds
Duration: 6 seconds
Variable sampling frequency
Emotional model
Continuous model
Sounds annotated using Arousal-Valence-Dominance
Self-Assessment Manikin (SAM) used
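With SAM arousal ratings available for every IADS sound, the two-class arousal split used later (24 vs. 143 sounds) is easy to reproduce in outline. A minimal sketch, assuming the split is made by thresholding arousal ratings; the ratings below are randomly generated stand-ins, not the real IADS values.

```python
import numpy as np

# Stand-in arousal ratings on SAM's 1-9 scale for a 167-sound corpus;
# the real IADS ratings are not reproduced here.
rng = np.random.default_rng(1)
arousal = rng.uniform(1.0, 9.0, size=167)

# The deck reports a 24 / 143 split. One plausible construction is a
# rating cut-off: take the 24 highest-arousal sounds as class A.
threshold = np.sort(arousal)[-24]
class_a = arousal >= threshold        # high-arousal class
class_b = ~class_a                    # low-arousal class
print(class_a.sum(), class_b.sum())   # 24 143
```

The cut-off value itself is purely illustrative; only the class sizes come from the deck.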
Processing of Sounds
Pre-processing
General procedure
Normalisation
Segmentation / Windowing
Clustering into two classes
Class A: 24 members
Class B: 143 members
Segmentation/Windowing
Different window lengths
Ranging from 0.8 to 2.0 seconds, in 0.2-second increments
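The segmentation step can be sketched as follows; non-overlapping windows and the sampling rate are assumptions, since the deck does not state a hop size.

```python
import numpy as np

def segment(signal, sr, win_s):
    """Split a mono signal into non-overlapping windows of win_s seconds,
    discarding the trailing partial window."""
    n = int(round(win_s * sr))
    n_full = len(signal) // n
    return signal[:n_full * n].reshape(n_full, n)

sr = 22050                        # assumed sampling rate
clip = np.zeros(6 * sr)           # a 6-second, IADS-length clip
for win_s in np.arange(0.8, 2.01, 0.2):   # 0.8 s to 2.0 s in 0.2 s steps
    segs = segment(clip, sr, win_s)
    print(f"{win_s:.1f} s -> {segs.shape[0]} windows of {segs.shape[1]} samples")
```

Each window length yields a separate segmented corpus, which is what allows the later comparison of feature rankings and classification accuracy across lengths.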
Processing of Sounds
Feature Extraction
Extracted features
Only rhythm-related features
Extracted for each segment
Statistical measures applied
A total set of 26 features

Extracted Features    Statistical Measures
Beat spectrum         Mean
Onsets                Standard deviation
Tempo                 Gradient
Fluctuation           Kurtosis
Event density         Skewness
Pulse clarity
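The five statistical measures in the table can be applied to any per-segment feature trajectory. A minimal NumPy/SciPy sketch; `traj` stands for a hypothetical trajectory of one rhythm feature (e.g. onset strength) across a segment, and "gradient" is taken as the least-squares slope, which is an assumption.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def segment_stats(traj):
    """Apply the five statistical measures from the table to one
    feature trajectory; gradient is the least-squares slope."""
    traj = np.asarray(traj, dtype=float)
    slope = np.polyfit(np.arange(traj.size), traj, 1)[0]
    return {
        "mean": float(np.mean(traj)),
        "std": float(np.std(traj)),
        "gradient": float(slope),
        "kurtosis": float(kurtosis(traj)),
        "skewness": float(skew(traj)),
    }

stats = segment_stats([0.1, 0.3, 0.2, 0.6, 0.5, 0.9])
```

Combining six rhythm descriptors with such measures, and dropping some combinations, is consistent with the 26-feature set reported above.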
Machine learning tests
Feature Evaluation
Objectives
Most valuable features for arousal detection
Dependencies between selected features and different window lengths
Algorithms used
InfoGainAttributeEval
SVMAttributeEval
WEKA software
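WEKA's InfoGainAttributeEval ranks attributes by their information gain with respect to the class. A rough Python analogue, using scikit-learn's `mutual_info_classif` as a stand-in for the WEKA evaluator (not the original tooling); the data here are synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=200)             # binary arousal class
X = np.column_stack([
    y + 0.05 * rng.standard_normal(200),     # informative feature
    rng.standard_normal(200),                # pure-noise feature
])

scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]           # best feature first
print(ranking)                               # feature 0 should rank top
```

Running such a ranking once per window length is what exposes the dependency between selected features and window length.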
Machine learning tests
Classification
Objectives
Arousal classification based on rhythm
Dependencies between different window lengths and classification results
Algorithms used
Artificial neural networks
Logistic regression
K Nearest Neighbors
WEKA software
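The deck's classifiers were run in WEKA; purely as an illustration, the same three model families can be compared with cross-validation in scikit-learn. The data, labels, and hyper-parameters below are synthetic assumptions, not the study's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 26))           # 26 rhythm features, as in the deck
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(int)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Artificial neural network": MLPClassifier(hidden_layer_sizes=(10,),
                                               max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.1%}")
```

Repeating this loop for each window-length corpus yields the per-length accuracy comparison reported in the results.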
Feature Evaluation
General results
Two groups of features formed, 13 features in each group
An upper and a lower group
Within-group ranking not always the same
Upper and lower group membership constant across all window lengths
Feature Evaluation
Upper group
Beatspectrum std
Event density std
Onsets gradient
Fluctuation kurtosis
Beatspectrum gradient
Pulse clarity std
Fluctuation mean
Fluctuation std
Fluctuation skewness
Onsets skewness
Pulse clarity kurtosis
Event density kurtosis
Onsets mean
Classification
General results
Relatively low correlation between features
Variations across different window lengths
Accuracy results
Highest accuracy score: 88.37%
Window length: 1.0 second
Algorithm utilised: Logistic regression
Lowest accuracy score: 71.26%
Window length: 1.4 seconds
Algorithm utilised: Artificial neural network
Feature evaluation
Features
Most informative features:
Rhythm's periodicity in the auditory channel (fluctuation)
Onsets
Event density
Beat spectrum
Pulse clarity
Independent of semantic content
Classification results
Algorithms related
LR's minimum score was 81.44%
KNN's minimum score was 82.05%
Results related
Sound events' rhythm affects arousal
Thus, sound's rhythm in general affects arousal
Future work
Features & Dimensions
Other features related to arousal
Connection of features with valence
Thank you!

Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal

  • 1. Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal K. Drossos 1 R. Kotsakis 2 G. Kalliris 2 A. Floros 1 1Digital Audio Processing & Applications Group, Audiovisual Signal Processing Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece 2Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle University of Thessaloniki, Thessaloniki, Greece
  • 2. Agenda 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Future work
  • 4. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 9. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Structured form of sound General sound Interaction with stimuli Reactions Emotions felt
  • 11. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions Music and emotions Music Structured form of sound Primarily used to mimic and extend voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not only) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 17. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music databases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 23. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions From music to sound events Sound events Non-structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Present almost any time, if not always, in everyday life
  • 29. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 36. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Objectives of current study Data Aural stimuli can elicit emotions Music is a structured form of sound Sound events are not Music’s rhythm is arousing Research question Is sound events’ rhythm arousing too?
  • 41. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Experimental Procedure’s Layout Layout Emotional model selection Annotated sound corpus collection Processing of sounds Pre-processing Feature extraction Machine learning tests
  • 47. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound corpus & emotional model General Characteristics Sound Corpus International Affective Digital Sounds (IADS) 167 sounds Duration: 6 secs Variable sampling frequency Emotional model Continuous model Sounds annotated using Arousal-Valence-Dominance Self Assessment Manikin (S.A.M.) used
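The continuous emotional model above can be sketched in code: each IADS sound carries Arousal-Valence-Dominance ratings collected with the S.A.M. scale. A minimal illustration follows; the sound names and rating values are invented, not taken from the corpus.

```python
# Hypothetical sketch of IADS-style annotations in the continuous
# Arousal-Valence-Dominance model (S.A.M. ratings on a 1-9 scale).
# All names and numbers below are invented for illustration.
corpus = {
    "sound_001": {"arousal": 7.9, "valence": 2.1, "dominance": 4.0},
    "sound_002": {"arousal": 3.2, "valence": 6.5, "dominance": 5.5},
    "sound_003": {"arousal": 8.1, "valence": 3.0, "dominance": 3.8},
}

# Rank sounds by annotated arousal, e.g. to inspect the high-arousal end
ranked = sorted(corpus, key=lambda s: corpus[s]["arousal"], reverse=True)
print(ranked)  # → ['sound_003', 'sound_001', 'sound_002']
```

Keeping the three dimensions separate lets later steps, such as the arousal-based clustering, use only the arousal axis while leaving valence and dominance available for future analysis.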
  • 53. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Processing of Sounds Pre-processing General procedure Normalisation Segmentation / Windowing Clustering in two classes Class A: 24 members Class B: 143 members Segmentation/Windowing Different window lengths Ranging from 0.8 to 2.0 seconds, with a 0.2-second increment
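As a rough illustration of the pre-processing above, the sketch below peak-normalises a 6-second signal and segments it at each of the stated window lengths (0.8 s to 2.0 s in 0.2 s steps). The non-overlapping hop is an assumption; the slides do not state a hop size.

```python
import numpy as np

def normalise(x):
    """Peak-normalise a mono signal to [-1, 1]."""
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

def segment(x, sr, win_s):
    """Split a signal into non-overlapping windows of win_s seconds
    (assumption: the hop equals the window length)."""
    n = int(win_s * sr)
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]

sr = 44100
x = normalise(np.random.randn(6 * sr))        # a 6-second IADS-like clip
window_lengths = np.arange(0.8, 2.01, 0.2)    # 0.8 s ... 2.0 s, 0.2 s step
segments = {round(w, 1): segment(x, sr, w) for w in window_lengths}
print(len(segments[1.0]))  # 6 one-second segments from a 6-second sound
```

Each window length yields its own segment set, so the later feature-evaluation and classification stages can be repeated per length to study the dependency the slides mention.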
  • 59. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Processing of Sounds Feature Extraction Extracted features Only rhythm-related features For each segment Statistical measures A set of 26 features in total
  • 63. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Processing of Sounds Feature Extraction Extracted features Only rhythm-related features For each segment Statistical measures Extracted features: Beat spectrum, Onsets, Tempo, Fluctuation, Event density, Pulse clarity. Statistical measures: Mean, Standard deviation, Gradient, Kurtosis, Skewness
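The statistical measures applied to each rhythm feature can be sketched as follows. The toy trajectory stands in for a per-segment feature (e.g. pulse clarity over frames); the slides do not detail the extraction toolbox, so only the statistics and the event-density definition (onsets per second) are shown, and the interpretation of "gradient" as the mean first difference is an assumption.

```python
import numpy as np

def stats(v):
    """The five statistical measures from the slides applied to a feature
    trajectory: mean, standard deviation, gradient (assumed here to be the
    mean first difference), kurtosis and skewness (excess-kurtosis form)."""
    v = np.asarray(v, dtype=float)
    m, s = v.mean(), v.std()
    grad = np.diff(v).mean() if len(v) > 1 else 0.0
    z = (v - m) / s if s > 0 else np.zeros_like(v)
    return {"mean": m, "std": s, "gradient": grad,
            "kurtosis": (z ** 4).mean() - 3.0, "skewness": (z ** 3).mean()}

def event_density(onset_times, duration_s):
    """Onsets per second, one of the extracted rhythm features."""
    return len(onset_times) / duration_s

# Toy per-frame trajectory standing in for one rhythm feature of a segment
traj = [0.2, 0.4, 0.5, 0.7, 0.6, 0.8]
print(stats(traj))
print(event_density([0.5, 1.1, 1.9, 3.0], 6.0))  # 4 onsets over 6 s
```

Applying such measures across the extracted rhythm descriptors yields the fixed-size feature vector (26 values in the paper) that the machine-learning stage consumes.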
  • 64. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Machine learning tests Feature Evaluation Objectives Most valuable features for arousal detection Dependencies between selected features and different window lengths Algorithms used InfoGainAttributeEval SVMAttributeEval WEKA software
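WEKA's InfoGainAttributeEval ranks attributes by their information gain with respect to the class. A simplified NumPy version is sketched below, using a single median split per attribute (WEKA actually discretises numeric attributes more finely); the synthetic features are invented, but the class sizes (24 vs 143) match the slides.

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain(x, y, threshold):
    """Information gain of splitting labels y by feature x at a threshold --
    the criterion behind InfoGainAttributeEval, simplified to one split."""
    mask = x <= threshold
    h = entropy(y)
    for part in (y[mask], y[~mask]):
        if len(part):
            h -= len(part) / len(y) * entropy(part)
    return h

rng = np.random.default_rng(0)
y = np.array([0] * 24 + [1] * 143)                 # class sizes from the slides
informative = y + 0.1 * rng.standard_normal(167)   # tracks the class
noise = rng.standard_normal(167)                   # unrelated to the class
for name, x in [("informative", informative), ("noise", noise)]:
    print(name, round(info_gain(x, y, np.median(x)), 3))
```

Ranking all 26 features by such a score, per window length, is what produces the upper and lower feature groups reported in the results.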
  • 67. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Machine learning tests Classification Objectives Arousal classification based on rhythm Dependencies between different window lengths and classification results Algorithms used Artificial neural networks Logistic regression K Nearest Neighbors WEKA software
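One of the three classifiers, K Nearest Neighbors, is simple enough to sketch directly in NumPy as a stand-in for WEKA's implementation (the slides do not give its parameters, so k and the distance metric are assumptions). The feature vectors are synthetic; only the class sizes (24 vs 143) and dimensionality (26) follow the slides.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbours classifier: Euclidean distance,
    majority vote among the k closest training points."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

rng = np.random.default_rng(1)
# Toy 26-dimensional rhythm-feature vectors for two arousal classes
X0 = rng.normal(0.0, 1.0, (24, 26))   # one cluster of 24 sounds
X1 = rng.normal(2.0, 1.0, (143, 26))  # the other cluster of 143 sounds
X = np.vstack([X0, X1])
y = np.array([0] * 24 + [1] * 143)
test = np.vstack([rng.normal(0.0, 1.0, (5, 26)), rng.normal(2.0, 1.0, (5, 26))])
pred = knn_predict(X, y, test, k=5)
print(pred)
```

Running such a classifier once per window length (and likewise for the neural network and logistic regression) is what yields the per-length accuracy comparison reported in the results.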
  • 71. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Feature evaluation Feature Evaluation General results Two groups of features formed, 13 features in each group An upper and a lower group Inner-group rank not always the same Upper and lower groups constant for all window lengths
  • 75. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Feature evaluation Feature Evaluation Upper group: Beatspectrum std, Event density std, Onsets gradient, Fluctuation kurtosis, Beatspectrum gradient, Pulse clarity std, Fluctuation mean, Fluctuation std, Fluctuation skewness, Onsets skewness, Pulse clarity kurtosis, Event density kurtosis, Onsets mean
  • 81. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Classification Classification General results Relatively low correlation among features Variations regarding different window lengths Accuracy results Highest accuracy score: 88.37% Window length: 1.0 second Algorithm utilised: Logistic regression Lowest accuracy score: 71.26% Window length: 1.4 seconds Algorithm utilised: Artificial neural network
  • 83. Feature evaluation
    Most informative features:
    - Rhythm periodicity in the auditory channel (fluctuation)
    - Onsets
    - Event density
    - Beat spectrum
    - Pulse clarity
    These features are independent of semantic content.
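Of the features listed, event density is commonly defined as the number of onsets per second. A minimal numpy/scipy sketch under that assumption follows; the envelope, peak-height threshold, and minimum peak separation are illustrative choices, not the extractor actually used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def event_density(envelope, sr_env, min_separation=0.05):
    """Estimate onsets per second from an amplitude envelope.

    envelope:       1-D amplitude envelope of the sound
    sr_env:         envelope sampling rate in Hz
    min_separation: minimum spacing between onsets, in seconds
    """
    dist = max(1, int(min_separation * sr_env))
    peaks, _ = find_peaks(envelope, height=0.1, distance=dist)
    duration = len(envelope) / sr_env
    return len(peaks) / duration

# Toy envelope: 2 seconds at 100 Hz, with impulses every ~0.33 s.
sr = 100
env = np.zeros(2 * sr)
env[::33] = 1.0
print(event_density(env, sr))  # 3.0 (6 interior peaks over 2 s)
```

A dense burst of onsets raises this value, which is consistent with event density being informative for arousal.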
  • 89. Classification results
    Algorithm-related:
    - LR minimum score: 81.44%
    - KNN minimum score: 82.05%
    Results-related:
    - A sound event's rhythm affects arousal
    - Thus, sound rhythm in general affects arousal
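The LR-versus-KNN comparison above can be sketched with scikit-learn. The feature matrix here is synthetic (two well-separated arousal classes with four hypothetical rhythm descriptors each), so the printed accuracies only illustrate the evaluation procedure, not the study's reported scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical rhythm-feature matrix: 150 low-arousal and 150 high-arousal
# windows, each described by 4 features (e.g. event density, pulse clarity).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.3, 0.1, (150, 4)),
               rng.normal(0.7, 0.1, (150, 4))])
y = np.repeat([0, 1], 150)

# 10-fold cross-validated accuracy for both classifiers.
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: {acc:.3f}")
</imports>```

Running the same cross-validation protocol for both algorithms is what makes their minimum scores (81.44% vs 82.05%) directly comparable.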
  • 94. Future work
    Features & dimensions:
    - Other features related to arousal
    - Connection of features with valence