This document explores the potential of enhancing academic writing by fine-tuning a large language model (LLM) such as GPT-3 on an author's own previously published work; the authors call the resulting systems AUTOGEN models. They train three AUTOGEN models, one on the individual works of each of three authors, plus one combined model trained on all three corpora. They present examples showing that AUTOGEN models can outperform the base GPT-3 model in format, style, quality, and novel idea generation, especially on topics familiar from the training data, though the models have more mixed success in developing arguments. The authors discuss ethical opportunities, such as increased productivity, as well as concerns, including reduced diversity, privacy issues, and potential plagiarism if the models are misused. They also raise open questions around credit, blame, and ownership for co-written text.