DEEP LEARNING MODELS
FOR
QUESTION ANSWERING
Sujit Pal & Abhishek Sharma
Elsevier Search Guild Question Answering Workshop
October 5-6, 2016
About Us
Sujit Pal
Technology Research Director
Elsevier Labs
Abhishek Sharma
Organizer, DLE Meetup
and
Software Engineer, Salesforce
How we started
• Watching the Udacity “Deep Learning” videos taught by Vincent Vanhoucke.
• Watching the “Deep Learning for Natural Language Processing” (CS 224d) videos taught by Richard Socher.
• Thinking that it might be interesting to do “something” around Question Answering and Deep Learning.
What we knew
• Machine Learning
• Deep Learning
• Question Answering
• Natural Language Processing
• TensorFlow
• Keras
• gensim
• Python
• Search (Lucene/Solr/ES)
Identify Scope
Question Answering Pipeline
Research
The Kaggle Allen AI Science Challenge had just ended… and the #4-ranked entry used Deep Learning in its solution.
(Slides 10–15 showed screenshots of the papers surveyed; see the Editor's Notes at the end of this deck.)
Building Blocks
Most DL networks (including Question Answering models) are composed of these basic building blocks:
• Fully Connected Network
• Word Embedding
• Convolutional Neural Network
• Recurrent Neural Network
Fully Connected Network
Credit: neuralnetworksanddeeplearning.com
• The workhorse architecture of Deep Learning.
• The number of layers and the number of units per layer are increased for more complex models.
• Used for all kinds of problem spaces.
Word Embedding
Credit: Sebastian Ruder // sebastianruder.com
• Projects a sparse one-hot vector representation onto a denser, lower-dimensional space.
• An unsupervised technique.
• The embedding space exhibits distributional semantics.
• Has almost completely replaced traditional distributional features in NLP (Deep Learning and non-Deep Learning alike).
• Examples: Word2Vec (CBOW and Skip-gram), GloVe.
Convolutional Neural Network
Credit: deeplearning.net
• Alternates convolution and pooling operations to extract relevant features.
• Mainly used in image recognition; exploits the geometry of the image.
• A 1D variant (convolution and pooling) is used for text.
• Exploits word neighborhoods and extracts the “meaning” of sentences or paragraphs.
Recurrent Neural Network
Credit: Andrej Karpathy // karpathy.github.io
• Works with sequence input (such as text and audio).
• Exploits the temporal nature of the data.
• Many variations are possible.
• The basic RNN suffers from the vanishing gradient problem, which is addressed by Long Short-Term Memory (LSTM) RNNs.
• The Gated Recurrent Unit (GRU) is another variant, with a simpler structure and often comparable or better performance than the LSTM.
Start with bAbI
bAbI Dataset
• Synthetic dataset (1,000, 10k, … records).
• Composed of actors, places, things, actions, etc.
• Released by Facebook Research.
The 20 task types:
• Single supporting fact
• Two supporting facts
• Three supporting facts
• Two argument relations
• Three argument relations
• Yes/No questions
• Counting
• Lists/sets
• Simple Negation
• Indefinite Knowledge
• Basic Coreference
• Conjunction
• Compound Coreference
• Time Reasoning
• Basic Deduction
• Basic Induction
• Positional Reasoning
• Size Reasoning
• Path Finding
• Agent’s Motivations
bAbI Format (task 1)
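The original slide showed a screenshot of the raw data. For reference, the task 1 (“single supporting fact”) files consist of numbered statement lines, with each question line carrying a tab-separated question, answer, and supporting-fact line number:

    1 Mary moved to the bathroom.
    2 John went to the hallway.
    3 Where is Mary?	bathroom	1
    4 Daniel went back to the hallway.
    5 Sandra moved to the garden.
    6 Where is Daniel?	hallway	4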
bAbI LSTM
• Implementation based on the paper: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks.
• Test accuracy reported in the paper: 50%.
• Test accuracy achieved by our implementation: 56%.
• Code: in the talk repo (github.com/sujitpal/dl-models-for-qa).
bAbI MemNN
• Implementation based on the paper: End-to-End Memory Networks.
• Test accuracy reported in the paper: 99%.
• Test accuracy achieved by our implementation: 42%.
• Code: in the talk repo (github.com/sujitpal/dl-models-for-qa).
Back to Kaggle
Data Format
• 2,000 multiple-choice 8th Grade Science questions, each with 4 candidate answers and the correct answer labeled.
• 2,000 questions without the correct answer label.
• Each question = 1 positive + 3 negative examples.
• No “story” (supporting passage) here, unlike bAbI.
QA-LSTM
• Implementation based on the paper: LSTM-based Deep Learning Models for Non-factoid Answer Selection.
• Test accuracy reported in the paper: 64.3% (InsuranceQA dataset).
• Test accuracy achieved by our implementation: 56.93% (unidirectional) and 57% (bidirectional).
• Code: unidirectional and bidirectional versions in the talk repo.
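A minimal sketch of this architecture in the spirit of the talk's Keras code (tf.keras syntax; the vocabulary size, sequence length, and hidden size below are assumptions, not the talk's actual values — see the repo for the real implementation):

    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
    from tensorflow.keras.models import Model

    vocab_size, seq_len, embed_dim, hidden = 10000, 40, 300, 128  # assumed values

    emb = Embedding(vocab_size, embed_dim)    # shared question/answer embedding
    q_in = Input(shape=(seq_len,), name="question")
    a_in = Input(shape=(seq_len,), name="answer")
    q_vec = LSTM(hidden)(emb(q_in))   # wrap LSTM in Bidirectional(...) for the bi variant
    a_vec = LSTM(hidden)(emb(a_in))

    merged = Concatenate()([q_vec, a_vec])
    pred = Dense(2, activation="softmax")(merged)   # correct vs. incorrect answer

    model = Model([q_in, a_in], pred)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])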
Our Embedding Approach
• Three approaches to embedding:
  • Generate it from the data.
  • Use an external model for lookup.
  • Initialize with an external model, then fine-tune.
• Not enough question data to generate a good embedding.
• Used the pre-trained Google News word2vec model (trained on a corpus of roughly 100 billion words).
• The model has 3 million word vectors of dimension 300.
• Used gensim to read the model (sketch below).
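Loading the pre-trained model with gensim looks roughly like this (gensim 4.x KeyedVectors API; the file path is an assumption):

    from gensim.models import KeyedVectors

    # 3 million word vectors, 300 dimensions each.
    w2v = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    print(w2v["question"].shape)              # -> (300,)
    print(w2v.most_similar("science", topn=3))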
QA-LSTM-CNN
• An additional CNN layer for more effective summarization.
• Test accuracy reported in the paper: 62.2% (InsuranceQA dataset).
• Test accuracy achieved by our implementation: 56.3% (unidirectional); did not try bidirectional.
• Code: in the talk repo.
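A hedged sketch of the extra summarization step: a 1D convolution plus max-pooling over the LSTM's per-timestep outputs (tf.keras; filter count and kernel size are assumptions):

    from tensorflow.keras.layers import Input, Embedding, LSTM, Conv1D, GlobalMaxPooling1D
    from tensorflow.keras.models import Model

    vocab_size, seq_len, embed_dim, hidden = 10000, 40, 300, 128  # assumed values

    inp = Input(shape=(seq_len,))
    x = Embedding(vocab_size, embed_dim)(inp)
    x = LSTM(hidden, return_sequences=True)(x)        # keep every timestep for the CNN
    x = Conv1D(filters=100, kernel_size=3, activation="tanh")(x)
    vec = GlobalMaxPooling1D()(x)                     # one summary vector per sequence
    encoder = Model(inp, vec)                         # applied to both question and answer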
Incorporating Attention
• The vanishing gradient problem is addressed by LSTMs, but it still shows up in long-range Q+A contexts.
• Addressed using attention models:
  • Based on visual models of human attention.
  • They allow the network to focus on certain words in the question at “high resolution” and the rest at “low resolution”.
  • Similar to the advice given for comprehension tests: read the questions first, then scan the passage for question keywords.
• Implemented here as a dot product of the question and answer vectors, or of the question and story vectors.
QA-LSTM + Attention
• An attention vector from the question and answer is combined with the question.
• Test accuracy reported in the paper: 68.4% (InsuranceQA dataset).
• Test accuracy achieved by our implementation: 62.93% (unidirectional), 60.43% (bidirectional).
• Code: unidirectional and bidirectional versions in the talk repo.
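A minimal sketch of the dot-product attention described above (tf.keras; all shapes and sizes are assumptions):

    from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense, Dot,
                                         Activation, Concatenate)
    from tensorflow.keras.models import Model

    vocab_size, seq_len, embed_dim, hidden = 10000, 40, 300, 128  # assumed values

    emb = Embedding(vocab_size, embed_dim)
    q_in = Input(shape=(seq_len,), name="question")
    a_in = Input(shape=(seq_len,), name="answer")

    q_vec = LSTM(hidden)(emb(q_in))                          # (batch, hidden)
    a_seq = LSTM(hidden, return_sequences=True)(emb(a_in))   # (batch, seq_len, hidden)

    # Dot product of the question vector with each answer timestep, softmax over
    # timesteps, then a weighted sum of the answer states.
    scores = Dot(axes=(1, 2))([q_vec, a_seq])                # (batch, seq_len)
    weights = Activation("softmax")(scores)
    a_att = Dot(axes=(1, 1))([weights, a_seq])               # (batch, hidden)

    merged = Concatenate()([q_vec, a_att])
    pred = Dense(2, activation="softmax")(merged)
    model = Model([q_in, a_in], pred)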
Incorporating External Knowledge
• Contestants were allowed/advised to use external sources such as ConceptNet, CK-12 books, Quizlets, Flashcards from StudyStack, etc.
• Significant crawling/scraping and parsing effort involved.
• The 4th-place winner (tambietm) provides a parsed download of the StudyStack Flashcards on his Google Drive.
• Flashcard “story” = question || answer (the question text concatenated with the answer text).
Using Story Embedding
• Built a Word2Vec model using words from the flashcards (sketch below).
• Approximately 500k flashcards, 8,000 unique words.
• Provides a smaller, more focused embedding space.
• Good performance boost over the default Word2Vec embedding:

Model                                 | Default Embedding | Story Embedding
QA-LSTM with Attention                | 62.93             | 76.27
QA-LSTM Bidirectional with Attention  | 60.43             | 76.27
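Training the focused Word2Vec model is a short script in gensim; a sketch with a toy stand-in corpus (gensim 4.x API):

    from gensim.models import Word2Vec

    # Toy stand-in for the tokenized flashcard corpus (the real one had ~500k cards).
    flashcard_sentences = [
        ["photosynthesis", "converts", "light", "energy", "into", "chemical", "energy"],
        ["mitochondria", "are", "the", "powerhouse", "of", "the", "cell"],
    ]
    model = Word2Vec(flashcard_sentences, vector_size=300, window=5,
                     min_count=1, workers=4)
    model.wv.save_word2vec_format("flashcard-vectors.bin", binary=True)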
Relating Story to Question
• Replicates the bAbI setup: (story, question, answer).
• Only a subset of the flashcards relate to a given question.
• Used traditional IR methods to generate flashcard stories for each question (sketch below).
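A sketch of one simple IR approach: rank flashcards against the question by TF-IDF cosine similarity and join the top hits into a story (a scikit-learn stand-in for the Lucene/Solr/ES retrieval the deck alludes to; the corpus below is hypothetical):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    flashcards = [
        "photosynthesis is the process plants use to convert light energy",
        "the mitochondria is the powerhouse of the cell",
        "water boils at 100 degrees celsius at sea level",
    ]
    question = "which process converts light energy into chemical energy"

    vec = TfidfVectorizer(stop_words="english")
    card_matrix = vec.fit_transform(flashcards)
    scores = cosine_similarity(vec.transform([question]), card_matrix).ravel()
    top_k = scores.argsort()[::-1][:2]              # indices of the best-matching cards
    story = " ".join(flashcards[i] for i in top_k)  # the question's flashcard "story"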
QA-LSTM + Story
• Story and question are combined to create a Fact vector.
• Question and answer are combined to create an Attention vector.
• The Fact and Attention vectors are concatenated.
• Test accuracy achieved by our implementation: 70.47% (unidirectional), 61.77% (bidirectional).
• Code: unidirectional and bidirectional versions in the talk repo.
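A hedged sketch of the three-input model (tf.keras; the slide does not spell out how the vectors are “combined”, so dot-product attention is used for both combinations here as an assumption):

    from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense, Dot,
                                         Activation, Concatenate)
    from tensorflow.keras.models import Model

    vocab_size, seq_len, embed_dim, hidden = 10000, 40, 300, 128  # assumed values

    emb = Embedding(vocab_size, embed_dim)
    s_in = Input(shape=(seq_len,), name="story")
    q_in = Input(shape=(seq_len,), name="question")
    a_in = Input(shape=(seq_len,), name="answer")

    q_vec = LSTM(hidden)(emb(q_in))                          # question summary vector
    s_seq = LSTM(hidden, return_sequences=True)(emb(s_in))   # per-timestep story states
    a_seq = LSTM(hidden, return_sequences=True)(emb(a_in))   # per-timestep answer states

    def attend(query, states):
        # Dot-product attention of a query vector over a sequence of states.
        weights = Activation("softmax")(Dot(axes=(1, 2))([query, states]))
        return Dot(axes=(1, 1))([weights, states])

    fact = attend(q_vec, s_seq)        # story + question  -> Fact vector
    attention = attend(q_vec, a_seq)   # question + answer -> Attention vector

    merged = Concatenate()([fact, attention])
    pred = Dense(2, activation="softmax")(merged)
    model = Model([s_in, q_in, a_in], pred)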
Results

Model Specifications                                   | Test Accuracy (%)
QA-LSTM (Baseline)                                     | 56.93
QA-LSTM Bidirectional                                  | 57.0
QA-LSTM + CNN                                          | 55.7
QA-LSTM with Attention                                 | 62.93
QA-LSTM Bidirectional with Attention                   | 60.43
QA-LSTM with Attention + Custom Embedding              | 76.27 *
QA-LSTM Bidirectional w/Attention + Custom Embedding   | 76.27 *
QA-LSTM + Attention + Story Facts                      | 70.47
QA-LSTM Bidirectional + Attention + Story Facts        | 61.77
Model Deployment
• Our models predict whether an answer is correct vs. incorrect.
• The task is to choose the correct answer from the candidate answers.
• Re-instantiate the trained model with the Softmax layer removed.
• Run a batch of (story, question, answer) triples, one per candidate answer.
• Select the best-scoring answer as the correct answer.
Deploying Model - Example
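The original slide showed a worked screenshot. A sketch of the scoring loop it illustrates (the encode helper is hypothetical; this simplified version ranks candidates by the predicted probability of the “correct” class rather than by the pre-Softmax score):

    import numpy as np

    def choose_answer(model, encode, story, question, candidates):
        # Build one (story, question, answer) row per candidate answer.
        s = np.stack([encode(story) for _ in candidates])
        q = np.stack([encode(question) for _ in candidates])
        a = np.stack([encode(c) for c in candidates])
        scores = model.predict([s, q, a])[:, 1]    # probability of "correct"
        return candidates[int(np.argmax(scores))]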
Future Work
• Would like to implement Dynamic Memory Network (DMN) and Hierarchical Attention Based Convolutional Neural Network (HABCNN) models against this data.
• Would like to submit to Kaggle to see how I did, once Keras Issue 3927 is resolved.
• Would like to try out these models against the Stanford Question Answering Dataset (SQuAD), based on Wikipedia articles.
• Would like to investigate Question Generation from text in order to generate training sets for Elsevier corpora.
Administrivia
• Code for this talk: https://github.com/sujitpal/dl-models-for-qa
• Contact me for questions and suggestions: sujit.pal@elsevier.com
Closing Thoughts
• Deep Learning is rapidly becoming a general-purpose solution for nearly all learning problems.
• Information Retrieval approaches are still more successful at Question Answering than Deep Learning, but there are many efforts by Deep Learning researchers to change that.
Thank you.
Editor's Notes

  • #11: Proposing the bAbI dataset.
  • #12: Recurrent attention model over large external memory.
  • #13: Closest to our use case, this is what our solution to the Kaggle comp is based on.
  • #14: Socher’s dynamic memory models.
  • #15: DMN, bAbI, Socher again
  • #16: MCTest Benchmark, HABCNN
  • #17: [1] bAbI dataset, [2] representation size (number of words), [3] MCTest Benchmark, Hierarchical Attention Based CNN (HABCNN), [4] DMN, Richard Socher [5]