Contrastive Learning with
Adversarial Perturbations for
Conditional Text Generation
Seanie Lee1*, Dong Bok Lee1*, Sung Ju Hwang1,2
KAIST1, Daejeon, South Korea
AITRICS2, Seoul, South Korea
Pretrained Language Model
Pretraining a language model on a large corpus and then finetuning it for a target task still requires a large amount of labeled data.
Conditional Text Generation
Conditional text generation is the task of generating a target sequence from a given source sequence. It is typically tackled with an encoder-decoder architecture.
[Diagram: an encoder-decoder model; the encoder embeds the source "the blue house" and the decoder generates the target "la maison bleue <eos>" token by token.]
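To make the encoder-decoder setup concrete, here is a minimal sketch of conditional generation with a pretrained seq2seq model from the HuggingFace transformers library. The checkpoint name and the task prefix are illustrative assumptions, not taken from the slides:

```python
# Minimal sketch of conditional text generation with an encoder-decoder
# model (illustrative only; "t5-small" is an assumed checkpoint).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The encoder embeds the source sequence; the decoder generates the
# target sequence token by token, conditioned on the encoder output.
source = "translate English to French: the blue house"
inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```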
Exposure Bias
Seq2seq models trained with teacher forcing often suffer from the exposure bias problem, which hurts generalization to unseen inputs.
[Diagram: at inference the decoder conditions on its own previous outputs; for the source "the blue house" it predicts "le maison bleue <eos>" while the ground truth is "la maison bleue", so the first-step error propagates.]
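The mismatch is easy to see in code: during training the decoder consumes the shifted ground truth (teacher forcing), while at inference it consumes its own previous predictions. A minimal sketch reusing the tokenizer and model from the previous example:

```python
import torch

# Teacher forcing (training): labels are shifted internally to form the
# decoder inputs, so the model never conditions on its own mistakes.
src = tokenizer("translate English to French: the blue house",
                return_tensors="pt")
tgt = tokenizer("la maison bleue", return_tensors="pt")
loss = model(**src, labels=tgt.input_ids).loss
loss.backward()

# Free-running decoding (inference): each step conditions on the model's
# own outputs, so an early error (e.g. "le" instead of "la") propagates.
with torch.no_grad():
    pred_ids = model.generate(**src, max_length=20)
```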
Contrastive Learning Framework
We propose to contrast the ground-truth source-target pair against negative pairs to learn a better representation of the target sentence.
[Diagram: the encoder-decoder embeds the source sentence and the ground-truth target "He wasn’t in great shape", while a randomly sampled sentence such as "I cannot do that." serves as a negative example.]
However, randomly sampled negative examples are easily discriminated by the pretrained language model, and mining meaningful negative examples requires a large batch size.
Contrastive Learning with Adversarial Perturbation
We propose to use adversarial perturbations to generate an “imposter” that is close to the ground truth in embedding space but semantically different.
[Diagram: on the embedding manifold, a small perturbation of the target "He wasn’t in great shape" yields the imposter "He was was in good shape." (nearby in embedding space, semantically different), while a larger perturbation yields the distant target "He wasn’t in good shape." (far away, semantically similar).]
Contrastive Learning with Adversarial Perturbation
Conversely, we generate a “distant target” that is far from the source sentence in embedding space but semantically similar.
[Same diagram as the previous slide, now highlighting the distant-target perturbation.]
Contrastive Learning with Adversarial Perturbation
We push the imposter as well as the negative examples away from the source, and pull the distant target and the ground-truth target toward the source.
[Diagram: the source representation is pulled toward the target and the distant target (maximize similarity) and pushed away from the imposter and the other negatives (minimize similarity).]
Contrastive Learning Objective
Given a pair of source and target sentences 𝒙(𝒊), 𝒚(𝒊), we randomly sample 𝒚(𝒋) with 𝑖 ≠ 𝑗 and use these samples as the set of negative examples 𝑆.
As in SimCLR, we maximize the cosine similarity between the source and the target, and minimize it between the source and the negative examples.
𝒙(𝒊): Nu era într-o formă prea bună. (source)
𝒚(𝒊): He wasn’t in great shape. (ground-truth target)
𝒚(𝒋): But I cannot do it anymore. (negative)
𝒚(𝒌): By mid-July, it was 40 percent. (negative)
Chen et al. "A simple framework for contrastive learning of visual representations." ICML 2020.
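A minimal PyTorch sketch of this SimCLR-style objective, assuming z_src and z_tgt are pooled sentence representations of the sources and targets in a batch; the temperature value is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_src, z_tgt, tau=0.1):
    """SimCLR-style InfoNCE loss over pooled sentence embeddings.

    z_src, z_tgt: [batch, dim]. Row i of z_tgt is the ground-truth
    target for row i of z_src; the other rows act as in-batch negatives.
    """
    z_src = F.normalize(z_src, dim=-1)
    z_tgt = F.normalize(z_tgt, dim=-1)
    sim = z_src @ z_tgt.t() / tau          # pairwise cosine similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    # Maximize the diagonal (positive pairs) and minimize the
    # off-diagonal entries (negative pairs) via cross-entropy.
    return F.cross_entropy(sim, labels)
```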
Generation of Imposter
We add a small perturbation to the hidden representation of the target sentence to generate the imposter, using the linear approximation of Goodfellow et al. (2015).
[Diagram: the encoder-decoder encodes the source "Nu era într-o formă prea bună." and the target "<bos> He wasn’t in great shape."; the pooled target representation is perturbed to minimize the conditional likelihood, yielding the imposter "He was was in great shape."]
Goodfellow et al. "Explaining and Harnessing Adversarial Examples." ICLR 2015.
[Slide shows the perturbation objective and its linear approximation.]
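A hedged sketch of how the linear approximation could be implemented: one FGSM-style gradient step on the target’s hidden representation in the direction that lowers the conditional log-likelihood. The function names, the ε value, and the global-norm normalization are our assumptions, not the authors’ code:

```python
import torch

def make_imposter(hidden_tgt, log_likelihood_fn, eps=1.0):
    """Imposter via a small adversarial perturbation (hedged sketch).

    hidden_tgt: target hidden states with requires_grad=True.
    log_likelihood_fn: maps hidden states to the scalar log p(y|x).
    """
    ll = log_likelihood_fn(hidden_tgt)
    (grad,) = torch.autograd.grad(ll, hidden_tgt)
    # Linear approximation (Goodfellow et al., 2015): step against the
    # likelihood gradient, scaled to a small norm so the imposter stays
    # close to the ground truth in embedding space.
    delta = -eps * grad / (grad.norm() + 1e-12)
    return (hidden_tgt + delta).detach()
```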
Generation of Distant Target
We add a large perturbation to the target embedding so that it moves far away from the source sentence in embedding space while preserving the semantics of the target sentence.
[Diagram: the pooled source representation of "Nu era într-o formă prea bună." and the pooled target representation of "<bos> He wasn’t in great shape." are pushed apart (maximize distance), while a semantic-preservation term keeps the target meaning, yielding the distant target "He wasn’t in good shape."]
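One plausible implementation under the same assumptions: a larger perturbation that increases the distance from the pooled source representation while a likelihood term keeps the target’s semantics. The objective weighting, ε, and the pooling are our guesses, not the paper’s exact formulation:

```python
import torch

def make_distant_target(hidden_tgt, z_src, pool, log_likelihood_fn,
                        eps=5.0, lam=1.0):
    """Distant target via a large perturbation (hedged sketch).

    z_src: pooled source representation (detached).
    pool: pooling function over hidden states.
    """
    z_tgt = pool(hidden_tgt)
    # Move away from the source (maximize distance) while keeping
    # log p(y|x) high (semantic preservation).
    objective = (z_tgt - z_src).pow(2).sum() \
        + lam * log_likelihood_fn(hidden_tgt)
    (grad,) = torch.autograd.grad(objective, hidden_tgt)
    delta = eps * grad / (grad.norm() + 1e-12)  # larger step than the imposter's
    return (hidden_tgt + delta).detach()
```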
Learning Objective – (1)
We add the imposter to the set of negative examples 𝑆 and use the distant target as another positive example for the source sentence in contrastive learning.
Learning Objective – (2)
We jointly maximize the following objectives with stochastic gradient ascent.
[Slide shows the equations: the conditional log-likelihood objective combined with the contrastive objective.]
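A hedged sketch of how the terms could combine in training code, extending contrastive_loss above: the imposter enters as an extra negative, the distant target as an extra positive, and the MLE term is added on top. The weights and exact normalization are assumptions:

```python
import torch
import torch.nn.functional as F

def claps_style_loss(z_src, z_tgt, z_imp, z_dist, mle_loss, tau=0.1):
    """Combined objective sketch (not the authors' exact formulation)."""
    z_src, z_tgt, z_imp, z_dist = (
        F.normalize(z, dim=-1) for z in (z_src, z_tgt, z_imp, z_dist))
    b = z_src.size(0)
    labels = torch.arange(b, device=z_src.device)

    sim_tgt = z_src @ z_tgt.t() / tau                      # in-batch pairs
    sim_imp = (z_src * z_imp).sum(-1, keepdim=True) / tau  # own imposter
    logits = torch.cat([sim_tgt, sim_imp], dim=1)          # [b, b+1]
    loss_tgt = F.cross_entropy(logits, labels)   # ground truth as positive

    # Second positive: swap the diagonal (ground-truth target) for the
    # distant target, contrasted against the same negatives.
    sim_dist = (z_src * z_dist).sum(-1) / tau
    logits_dist = logits.clone()
    logits_dist[labels, labels] = sim_dist
    loss_dist = F.cross_entropy(logits_dist, labels)

    return mle_loss + loss_tgt + loss_dist
```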
Experimental Setup – (1)
1) Tasks and Evaluation Metrics
• Neural Machine Translation: BLEU score
• Question Generation: BLEU score, F1/EM
• Text Summarization: ROUGE score
2) Data
• WMT’16 RO-EN
• SQuAD
• XSum
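For reference, these metrics can be computed with standard tooling, e.g. sacrebleu for BLEU and the rouge_score package for ROUGE; the package choice and the toy sentences are ours, not from the slides:

```python
import sacrebleu
from rouge_score import rouge_scorer

hyps = ["He wasn't in great shape."]
refs = [["He was not in great shape."]]   # one reference stream
print(f"BLEU: {sacrebleu.corpus_bleu(hyps, refs).score:.2f}")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score("He was not in great shape.",   # reference
                      "He wasn't in great shape.")    # prediction
print(f"ROUGE-L F1: {scores['rougeL'].fmeasure:.3f}")
```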
Experimental Setup – (2)
3) Baselines
• T5-MLE:
The T5 model trained with maximum likelihood estimation.
• T5-𝛼-MLE [Caccia 2020]:
The T5 model trained with MLE, but decoding the target sequence with temperature scaling 𝛼 in the softmax.
• T5-MLE-contrastive:
Naïve contrastive learning combined with MLE.
[Caccia 2020] Caccia et al., Language GANs Falling Short, ICLR 2020
Experimental Setup – (3)
3) Baselines (continued)
• T5-SSMBA [Ng 2020]:
Generates additional training examples by corrupting target sentences and reconstructing them with a masked language model.
• T5-WordDropout Contrastive [Yang 2019]:
Generates negative examples by removing the most frequent word from the target sentence.
• T5-R3F [Aghajanyan 2021]:
Adds Gaussian noise to the input embeddings and enforces a consistency loss.
[Ng 2020] Ng et al., SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness, EMNLP 2020
[Yang 2019] Yang et al., Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach, ACL 2019
[Aghajanyan 2021] Aghajanyan et al., Better Fine-tuning by Reducing Representational Collapse, ICLR 2021
Experimental Result – (1)
Machine Translation – WMT’16 RO-EN

Method                        BLEU
T5-MLE                        32.43
T5-𝛼-MLE                      32.14
T5-MLE-contrastive            32.03
T5-SSMBA                      32.81
T5-WordDropout Contrastive    32.44
T5-CLAPS (Ours)               33.96
Experimental Result – (2)
Question Generation – SQuAD

Method                        BLEU    F1      EM
T5-MLE                        21.00   67.64   55.91
T5-𝛼-MLE                      20.50   68.04   56.30
T5-MLE-contrastive            20.91   67.32   55.25
T5-SSMBA                      21.07   68.47   56.37
T5-WordDropout Contrastive    21.19   68.16   56.41
T5-CLAPS (Ours)               21.55   69.01   57.06
Experimental Result – (3)
Text Summarization – XSum

Method                        ROUGE-1   ROUGE-2   ROUGE-L
T5-MLE                        36.10     14.72     29.16
T5-𝛼-MLE                      36.68     15.10     29.72
T5-MLE-contrastive            36.34     14.81     29.41
T5-SSMBA                      36.58     14.81     29.79
T5-WordDropout Contrastive    36.88     15.11     29.68
T5-CLAPS (Ours)               37.89     15.78     30.59
Visualization of Sentence Embedding
The model learns to push the imposter away from the target sentence and to pull the distant target toward the source sentence.
[Slide shows a 2D visualization of the learned sentence embeddings.]
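A minimal sketch of how such a plot could be produced, assuming the pooled embeddings are stacked in a matrix with role labels; t-SNE is our assumed choice of projection, and the random matrix stands in for real embeddings:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

z = np.random.randn(40, 512)   # stand-in for pooled sentence embeddings
roles = ["source", "target", "distant target", "imposter"] * 10

z2d = TSNE(n_components=2, perplexity=5).fit_transform(z)
for role in sorted(set(roles)):
    idx = [i for i, r in enumerate(roles) if r == role]
    plt.scatter(z2d[idx, 0], z2d[idx, 1], label=role)
plt.legend()
plt.show()
```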
Conclusion
• We propose a contrastive learning framework for conditional sequence generation that mitigates the exposure bias problem.
• With adversarial perturbations, we generate negative and positive pairs that are harder for the model to distinguish from the ground-truth pair.
• Our method outperforms the T5 baselines across machine translation, question generation, and summarization tasks.
Future work
• For future work, we will improve the quality of the imposter, which often contains grammatical errors.
• Generating the imposter and the distant target still requires a large amount of labeled data; we need to improve sample efficiency.
Thank you
