This document provides a critical review of research on the effects of computer-generated feedback, known as automated writing evaluation (AWE), on the quality of students' writing. The review finds that while AWE feedback has a modest positive effect on the quality of the texts students produce with AWE programs, there is little evidence that these gains transfer to broader improvements in writing proficiency. It identifies several factors that limit conclusions about AWE's effectiveness: the paucity of research, mixed findings, heterogeneous participants and study designs, and methodological shortcomings. The review calls for more classroom-integrated research emphasizing how AWE can support writing instruction.