This paper proposes a neural machine translation model that extends earlier encoder-decoder work with an attention mechanism, allowing the decoder to perform a soft search over the source sentence for the parts most relevant to predicting each target word, without requiring explicit hard segmentation. Because the bidirectional RNN encoder no longer has to compress the entire input into a single fixed-length vector, the attention-based model outperforms earlier encoder-decoder models on English-to-French translation in BLEU score, and the advantage is most pronounced on long sentences. While promising, the model could still be improved to better handle rare or unknown words.
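
To make the soft-search mechanism concrete, below is a minimal NumPy sketch of the additive attention the paper describes: an alignment score between the previous decoder state and each encoder annotation, a softmax over source positions, and a weighted sum giving the context vector. The parameter names (`W_a`, `U_a`, `v_a`) follow the paper's notation, but the function signature, shapes, and random inputs in the usage example are illustrative assumptions, not the authors' code.

```python
import numpy as np

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """Additive (Bahdanau-style) attention; an illustrative sketch.

    s_prev : (n,)    previous decoder hidden state s_{i-1}
    H      : (T, 2n) encoder annotations h_1..h_T (2n: bidirectional RNN)
    W_a    : (m, n)  projects the decoder state
    U_a    : (m, 2n) projects each annotation
    v_a    : (m,)    scoring vector
    """
    # Alignment scores e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j), one per source position
    scores = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a   # (T,)
    # Softmax over source positions gives attention weights alpha_ij
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                             # (T,)
    # Context c_i is the expectation of the annotations under alpha
    return weights @ H, weights                          # (2n,), (T,)

# Hypothetical usage with random parameters, for shape-checking only
rng = np.random.default_rng(0)
n, m, T = 4, 8, 5
H = rng.standard_normal((T, 2 * n))
context, alpha = additive_attention(
    rng.standard_normal(n), H,
    rng.standard_normal((m, n)),
    rng.standard_normal((m, 2 * n)),
    rng.standard_normal(m),
)
```

Because the context vector is recomputed for every target word, the decoder attends to different source positions at each step rather than relying on one fixed summary of the input.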