This document proposes detecting actionable items in meetings using a convolutional deep structured semantic model (CDSSM). It motivates the task of identifying actions that participants could perform during or after a meeting based on the discussion. The proposed approach adapts a CDSSM trained on a human-machine corpus to the domain of human-human meetings by continuing training on meeting data and by adapting the action embeddings. Experiments evaluate action detection on meeting transcripts, comparing a model trained only on the out-of-domain human-machine corpus against the adapted models.
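To make the described pipeline more concrete, the sketch below shows one way a CDSSM-style encoder and the adaptation step (continued training on meeting data while updating action embeddings) could look. It is a minimal sketch, not the authors' implementation: the letter-trigram hashing size, layer dimensions, smoothing factor, and all names (`CdssmEncoder`, `relevance`, `adapt_to_meetings`) are illustrative assumptions.

```python
# Minimal CDSSM-style sketch (assumed hyperparameters, not the paper's setup):
# utterances and actions are mapped into a shared semantic space, scored by
# cosine similarity, and the pre-trained model is fine-tuned on meeting data
# while the action embeddings are adapted jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

TRIGRAM_VOCAB = 5000   # assumed size of the letter-trigram hashing space
CONV_DIM = 300         # assumed convolutional feature dimension
SEM_DIM = 128          # assumed semantic (embedding) dimension
GAMMA = 10.0           # assumed softmax smoothing factor on cosine scores


class CdssmEncoder(nn.Module):
    """Maps a sequence of letter-trigram count vectors to a fixed semantic vector."""

    def __init__(self):
        super().__init__()
        # Convolution over a sliding window of trigram count vectors.
        self.conv = nn.Conv1d(TRIGRAM_VOCAB, CONV_DIM, kernel_size=3, padding=1)
        self.semantic = nn.Linear(CONV_DIM, SEM_DIM)

    def forward(self, x):
        # x: (batch, seq_len, TRIGRAM_VOCAB) -> (batch, TRIGRAM_VOCAB, seq_len)
        h = torch.tanh(self.conv(x.transpose(1, 2)))
        h = h.max(dim=2).values           # max pooling over the sequence
        return torch.tanh(self.semantic(h))


def relevance(utterance_vecs, action_vecs):
    """Cosine similarity between each utterance and each candidate action."""
    return F.cosine_similarity(
        utterance_vecs.unsqueeze(1), action_vecs.unsqueeze(0), dim=-1
    )


def adapt_to_meetings(encoder, action_embeddings, meeting_batches, epochs=3, lr=1e-4):
    """Continue training on meeting data; action embeddings are updated jointly."""
    params = list(encoder.parameters()) + [action_embeddings]
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for utterances, action_ids in meeting_batches:
            scores = GAMMA * relevance(encoder(utterances), action_embeddings)
            loss = F.cross_entropy(scores, action_ids)  # softmax over candidate actions
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    # Toy run with random data just to show the shapes involved.
    encoder = CdssmEncoder()
    actions = nn.Parameter(torch.randn(12, SEM_DIM))  # 12 hypothetical action types
    batch = [(torch.rand(4, 20, TRIGRAM_VOCAB), torch.randint(0, 12, (4,)))]
    adapt_to_meetings(encoder, actions, batch, epochs=1)
    print(relevance(encoder(batch[0][0]), actions).shape)  # torch.Size([4, 12])
```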