This document summarizes a research paper that proposes an automatic grading system for short English-language answers. The system generates alternative model answers by substituting synonyms, then evaluates each student answer by comparing it against the model answers with three algorithms: Common Words, Longest Common Subsequence, and Semantic Distance. Tested on 40 questions answered by three students, the system achieved an 82% correlation with human grading, outperforming other state-of-the-art systems.
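To make the comparison step concrete, here is a minimal sketch of two of the three similarity measures the paper names: a Common Words overlap score and a word-level Longest Common Subsequence score. This is an illustrative reconstruction, not the paper's actual implementation; the function names, the normalization by model-answer length, and the whitespace tokenization are all assumptions.

```python
def common_words_score(student: str, model: str) -> float:
    """Fraction of the model answer's distinct words that appear
    in the student answer (assumed normalization; hypothetical helper)."""
    s_words = set(student.lower().split())
    m_words = set(model.lower().split())
    if not m_words:
        return 0.0
    return len(s_words & m_words) / len(m_words)


def lcs_length(a, b) -> int:
    """Classic dynamic-programming LCS over two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            if x == y:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def lcs_score(student: str, model: str) -> float:
    """LCS over word sequences, normalized by model length
    (assumed normalization; rewards preserved word order)."""
    a = student.lower().split()
    b = model.lower().split()
    if not b:
        return 0.0
    return lcs_length(a, b) / len(b)


if __name__ == "__main__":
    model = "photosynthesis converts light energy into chemical energy"
    student = "photosynthesis converts light into chemical energy"
    print(common_words_score(student, model))
    print(lcs_score(student, model))
```

Unlike the bag-of-words overlap, the LCS score also credits word order, which is one plausible reason the paper combines several measures rather than relying on a single one. The third measure, Semantic Distance, would additionally require a lexical resource such as WordNet and is omitted here.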