NATURAL LANGUAGE
INFERENCE
Presented to:
Dr. Sven Naumann
Presented by:
Muhammad Ahmed
Syed Bukhari
Contents:
◦ Introduction
◦ Task Labels
◦ Approaches to NLI
◦ Evaluation and Data sets
◦ Distributional and Vector Space Models
◦ Current State-of-the-Art Approaches
◦ Challenges and Future Directions
◦ Conclusion
Introduction
◦ NLI systems aim to bridge the gap between human language and machine understanding
by processing and interpreting natural language inputs.
◦ NLI is the task of determining the logical relationship between two pieces of text: the premise
and the hypothesis.
◦ The goal is to determine whether the meaning of the hypothesis can be inferred from the
premise.
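Framed as code, NLI is a function from a premise-hypothesis pair to one of a small set of labels. A schematic sketch of that contract (a stub, not any particular system):

```python
from typing import Literal

Label = Literal["entailment", "contradiction", "neutral"]

def infer(premise: str, hypothesis: str) -> Label:
    """Decide whether the hypothesis follows from, contradicts,
    or is unrelated to the premise. A real system would call a
    trained model here; this stub always answers 'neutral'."""
    return "neutral"

print(infer("The cat is on the mat.", "The mat is under the cat."))
```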
Task Labels
◦ Task labels in Natural Language Inference (NLI) are the categories used to classify the
relationship between the premise and hypothesis pairs in an NLI dataset. These labels
indicate the nature of the relationship between the two sentences and are crucial
for training, evaluating, and analyzing NLI models. Common task labels in NLI include:
◦ Entailment:
◦ The hypothesis can be inferred or logically entailed by the premise. It implies that the
information in the hypothesis is supported or logically follows from the information in the
premise.
◦ Contradiction:
◦ The hypothesis contradicts or is logically inconsistent with the premise: the premise and
the hypothesis cannot both be true.
◦ Neutral:
◦ There is no logical relationship or strong inference between the premise and the
hypothesis. It implies that the hypothesis neither entails nor contradicts the premise.
Examples
Entailment
Premise: "The cat is on the mat.
" Hypothesis: "The mat is under the cat."
Task Label: Entailment
Contradiction
Premise: "The sun rises in the east."
Hypothesis: "The sun sets in the west.“
Task Label: Contradiction
Neutral
Premise: "She is wearing a blue dress.“
Hypothesis: "The sky is clear.“
Task Label: Neutral
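These three pairs can be written down the way most NLI corpora store them, as premise-hypothesis-label records. A minimal sketch (the field names follow the SNLI convention):

```python
# NLI examples as premise-hypothesis-label records,
# following the field naming used by corpora such as SNLI.
examples = [
    {"premise": "The cat is on the mat.",
     "hypothesis": "The mat is under the cat.",
     "label": "entailment"},
    {"premise": "The sun rises in the east.",
     "hypothesis": "The sun sets in the west.",
     "label": "contradiction"},
    {"premise": "She is wearing a blue dress.",
     "hypothesis": "The sky is clear.",
     "label": "neutral"},
]

for ex in examples:
    print(f"{ex['label']:>13}: {ex['premise']} / {ex['hypothesis']}")
```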
Continued...
◦ Partial Entailment:
◦ The hypothesis partially entails or is partially supported by the premise. It indicates that
some but not all of the information in the hypothesis can be inferred from the premise.
◦ Partial Contradiction:
◦ The hypothesis partially contradicts or is partially inconsistent with the premise. It
suggests that some but not all of the information in the hypothesis contradicts the
information in the premise.
Example:
Partial Entailment
Premise: "The cat is sitting on the mat.“
Hypothesis: "There is a cat on the floor."
Task Label: Partial Entailment
Partial Contradiction
Premise: "The weather is sunny.“
Hypothesis: "It might rain later.“
Task Label: Partial Contradiction
Approaches to NLI
◦ Rule-based Systems
Predefined grammatical rules and patterns to interpret and generate natural language.
◦ Template Filling
Templates contained predefined slots that were filled with extracted information from user
queries.
◦ Finite-State Automata
Provided a simple and efficient way to handle dialogue interactions but struggled with
handling context and ambiguity.
◦ Slot-Filling Approaches
Involved identifying specific slots or parameters in user queries and mapping them to
corresponding actions (a toy sketch follows this list).
◦ Command-based Interfaces
Mapped a fixed set of explicit user commands to actions; such early systems paved the way
for more sophisticated techniques and approaches in modern NLI systems.
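To make the slot-filling style concrete, here is a toy sketch (a hypothetical illustration, not any historical system) that uses regular expressions to extract slot values from a flight query:

```python
import re

# Toy slot-filling grammar: each slot is a named regex group.
# A hypothetical illustration of the approach, not a real system.
PATTERN = re.compile(
    r"(?:flight|fly)\s+from\s+(?P<origin>\w+)\s+to\s+(?P<destination>\w+)"
    r"(?:\s+on\s+(?P<date>\w+))?",
    re.IGNORECASE,
)

def fill_slots(query: str) -> dict:
    """Return the slot values matched in the query, or an empty dict."""
    match = PATTERN.search(query)
    return match.groupdict() if match else {}

print(fill_slots("Book a flight from Berlin to Trier on Monday"))
# {'origin': 'Berlin', 'destination': 'Trier', 'date': 'Monday'}
```

The same pattern-matching idea underlies the rule-based and template-filling systems above; their main weakness, as noted, is brittleness in the face of context and ambiguity.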
Evaluation and Data sets
Evaluation methods and data sets provide benchmarks to measure the performance and
effectiveness of NLI models. Here's an overview of evaluation methods and commonly used
data sets in NLI.
Evaluation Methods
1. Accuracy Metrics
2. Precision, Recall, and F1 Score (see the sketch after this list)
3. Error Analysis
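For example, assuming scikit-learn is installed, these metrics can be computed from gold and predicted labels (the labels below are made up purely for illustration):

```python
# Accuracy plus per-class precision/recall/F1 for 3-way NLI,
# computed on made-up gold and predicted labels.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

gold = ["entailment", "contradiction", "neutral", "entailment", "neutral"]
pred = ["entailment", "neutral", "neutral", "entailment", "contradiction"]

print("accuracy:", accuracy_score(gold, pred))
labels = ["entailment", "contradiction", "neutral"]
precision, recall, f1, _ = precision_recall_fscore_support(
    gold, pred, labels=labels, zero_division=0)
for label, p, r, f in zip(labels, precision, recall, f1):
    print(f"{label:>13}: P={p:.2f} R={r:.2f} F1={f:.2f}")
```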
Data Sets
1. ATIS (The Airline Travel Information System)
2. SNIPS (Spoken Language Understanding in Intelligent Personal Assistants)
3. NLU-Evaluation-Corpora
4. MultiWOZ
5. SQuAD (Stanford Question Answering Dataset)
6. Custom Data Sets
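Of the corpora cited in the references, the SNLI corpus (Bowman et al., 2015) is the standard large-scale NLI benchmark. Assuming the Hugging Face datasets library, it can be loaded like this (a sketch; the dataset id and label encoding are those published on the Hub):

```python
# Loading the SNLI corpus (Bowman et al., 2015);
# assumes `pip install datasets`.
from datasets import load_dataset

snli = load_dataset("snli", split="train")
example = snli[0]
print(example["premise"])
print(example["hypothesis"])
# Label encoding: 0 = entailment, 1 = neutral, 2 = contradiction
# (-1 marks examples without a gold label).
print(example["label"])
```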
Current State-of-the-Art Approaches
In the field of Natural Language Inference (NLI), several state-of-the-art approaches
have emerged in recent years, achieving remarkable performance on benchmark
datasets. Here are some notable approaches:
1. BERT (Bidirectional Encoder Representations from Transformers)
2. RoBERTa (Robustly Optimized BERT Pretraining Approach)
3. ALBERT (A Lite BERT)
4. ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements
Accurately)
5. DeBERTa (Decoding-enhanced BERT with Disentangled Attention)
6. T5 (Text-to-Text Transfer Transformer)
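As a usage sketch, an off-the-shelf checkpoint such as roberta-large-mnli (RoBERTa fine-tuned on MultiNLI, published on the Hugging Face Hub) classifies a premise-hypothesis pair in a few lines; this assumes transformers and torch are installed:

```python
# 3-way NLI with a RoBERTa checkpoint fine-tuned on MultiNLI.
# Downloads the model weights on first run.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Premise and hypothesis are encoded together as one sequence pair.
inputs = tokenizer("The cat is on the mat.", "The mat is under the cat.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps class indices to CONTRADICTION / NEUTRAL / ENTAILMENT.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```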
Challenges and Future Directions
Challenges:
◦ Ambiguous Language.
◦ Cross-Domain Generalization.
◦ Handling Negation and Uncertainty.
◦ Incorporating World Knowledge.
◦ Lack of Diversity in Training Data.
Future Directions:
◦ Explainability and Interpretability.
◦ Multimodal NLI.
◦ Advances in Pre-training Techniques.
◦ Benchmarking and Evaluation.
Conclusion
◦ Natural Language Inference is an important task that drives the development of models that can
genuinely understand the dependencies between sentences.
◦ In machine learning today, transformers are ubiquitous, and such models are applicable to many
applied tasks beyond NLI. Many transformer-based models are benchmarked on NLI tasks to
demonstrate their performance gains over previous architectures.
References
◦ Bowman, S. R., Angeli, G., Potts, C., & Manning, C. D. (2015). A large annotated corpus for learning natural
language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language
Processing (EMNLP) (pp. 632-642).
◦ Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised learning of universal
sentence representations from natural language inference data. In Proceedings of the 2017 Conference on
Empirical Methods in Natural Language Processing (EMNLP) (pp. 670-680).
◦ Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers
for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) (pp. 4171-4186).
◦ Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized
BERT pretraining approach. arXiv preprint arXiv:1907.11692.