This document discusses the application of deep neural networks, specifically RNN-based sequence-to-sequence models, to modeling conversational discourse while capturing long-range relationships across multiple utterances. It presents experimental findings showing improved performance and discourse coherence when additional preceding context is supplied to the model. The study highlights the potential of neural discourse models to generate more coherent conversational output and suggests directions for extending these models to other domains.
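To make the setup concrete, below is a minimal sketch of a context-conditioned RNN encoder-decoder of the kind the summary describes: earlier utterances are concatenated with the current one so the encoder state carries longer-range conversational context into the decoder. This is not the paper's exact architecture; the use of PyTorch, a GRU, the class and variable names, and all hyperparameters here are illustrative assumptions.

```python
# Sketch (assumed PyTorch implementation, not the authors' model): an RNN
# encoder-decoder where previous utterances are prepended to the current
# one so generation is conditioned on additional dialogue context.
import torch
import torch.nn as nn

class ContextSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, context_ids, response_ids):
        # context_ids: prior utterances concatenated with the current turn,
        # shape (batch, src_len); response_ids: target reply, (batch, tgt_len)
        _, hidden = self.encoder(self.embed(context_ids))
        dec_out, _ = self.decoder(self.embed(response_ids), hidden)
        return self.out(dec_out)  # per-step logits over the vocabulary

# Usage with dummy token ids: two context sequences (e.g. several previous
# turns joined into one encoder input) and two reply prefixes.
model = ContextSeq2Seq(vocab_size=10000)
src = torch.randint(0, 10000, (2, 40))
tgt = torch.randint(0, 10000, (2, 12))
logits = model(src, tgt)  # shape (2, 12, 10000)
```

Supplying more previous turns simply lengthens the encoder input, which is one straightforward way to realize the "additional previous context" the experiments vary.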