This document discusses dynamic hand gesture recognition for human-computer interaction. It proposes a camera worn by hearing-impaired users to capture hand gestures, which are then processed on a computer with image-processing techniques that recognize each gesture and map it to speech output. The goal is a prototype that automatically recognizes sign language gestures in real time and translates them to voice. The document reviews several previous works on sign language and gesture recognition, along with their limitations, to motivate the proposed approach.
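The capture-recognize-speak pipeline described above can be sketched in code. The snippet below is purely illustrative and is not the method from the reviewed works: it assumes a frame has already been binarized into a hand mask (in a real system this would come from camera capture and skin-colour or depth segmentation), uses a toy bounding-box heuristic in place of a trained gesture classifier, and returns a phrase string where a real prototype would drive a text-to-speech engine. All function names, thresholds, and gesture labels are hypothetical.

```python
# Illustrative sketch of the pipeline: capture -> segment -> classify -> speak.
# Every gesture label and rule here is a hypothetical placeholder.

def segment_hand(frame):
    """Return coordinates of 'hand' pixels (here: any nonzero value).
    A real system would use skin-colour or depth segmentation on camera frames."""
    return [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v]

def classify_gesture(pixels):
    """Toy classifier: bounding-box aspect ratio of the hand region.
    A wide region stands in for an open palm, a tall one for a raised finger."""
    if not pixels:
        return "none"
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return "open_palm" if width >= height else "raised_finger"

# Hypothetical gesture-to-phrase mapping; a real prototype would pass
# the phrase to a text-to-speech engine instead of returning it.
PHRASES = {"open_palm": "hello", "raised_finger": "yes", "none": ""}

def frame_to_speech(frame):
    return PHRASES[classify_gesture(segment_hand(frame))]

if __name__ == "__main__":
    wide = [[0, 0, 0, 0],
            [1, 1, 1, 1],
            [0, 0, 0, 0]]
    tall = [[0, 1, 0],
            [0, 1, 0],
            [0, 1, 0]]
    print(frame_to_speech(wide))  # wide region -> "hello"
    print(frame_to_speech(tall))  # tall region -> "yes"
```

The point of the sketch is the staged structure, which mirrors the proposal: each stage (segmentation, classification, speech mapping) can be replaced independently with a stronger component without changing the overall flow.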