The document discusses challenges in the machine translation (MT) industry, particularly the post-editing of outputs that are often neither fluent nor accurate. It evaluates automatic MT evaluation metrics, such as BLEU and METEOR, for filtering translations suitable for post-editing, and presents a hybrid evaluation metric that combines the strengths of both. The findings suggest that this hybrid metric significantly improves the selection of translations requiring minimal modification, ultimately saving time and cost in the post-editing process.
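The summary does not give the exact formulation of the hybrid metric, but the idea of combining a BLEU-style precision score with a METEOR-style recall-weighted score can be sketched roughly as follows. All function names, the simplified metric implementations, and the interpolation weight are illustrative assumptions, not the document's actual method:

```python
from collections import Counter
import math

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision, as used inside BLEU."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = [tuple(reference[i:i + n]) for i in range(len(reference) - n + 1)]
    if not cand:
        return 0.0
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    return clipped / len(cand)

def bleu_like(candidate, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions with a brevity penalty
    (single-reference, sentence-level simplification of BLEU)."""
    precisions = [ngram_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_avg)

def meteor_like(candidate, reference, alpha=0.9):
    """Unigram F-mean weighted toward recall, echoing METEOR's F-score
    (exact-match only; no stemming, synonymy, or fragmentation penalty)."""
    matches = sum((Counter(candidate) & Counter(reference)).values())
    if matches == 0:
        return 0.0
    p = matches / len(candidate)
    r = matches / len(reference)
    return p * r / (alpha * r + (1 - alpha) * p)

def hybrid_score(candidate, reference, weight=0.5):
    """Hypothetical hybrid: linear interpolation of the two metrics.
    Translations above a threshold could be routed to post-editing."""
    return (weight * bleu_like(candidate, reference)
            + (1 - weight) * meteor_like(candidate, reference))

# Example: a partial candidate scores between the disjoint and perfect cases.
ref = "the cat sat on the mat".split()
print(hybrid_score(ref, ref))                    # perfect match -> 1.0
print(hybrid_score("the cat sat".split(), ref))  # partial match, between 0 and 1
```

A real pipeline would compare each MT output against one or more references, rank outputs by the hybrid score, and send only those above an empirically tuned threshold to human post-editors.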