This document presents a novel approach to visual lip reading aimed at enhancing human-computer interaction and assisting hearing-impaired individuals. It describes a visual speech recognition system that combines face detection, mouth region localization, and several visual feature extraction methods with Hidden Markov Models (HMMs) to recognize spoken words from lip movements alone. Experimental results from multiple participants demonstrate the effectiveness and accuracy of the proposed system in recognizing a set of predefined words.
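
To make the described pipeline concrete, the sketch below outlines one possible end-to-end flow: detect the face, crop an approximate mouth region, extract per-frame visual features, and score the resulting sequence against one HMM per word. The specific choices here (a Haar-cascade face detector, a lower-face heuristic for the mouth region, 2-D DCT coefficients as features, and hmmlearn's GaussianHMM) are illustrative assumptions for this sketch, not the exact methods used in the paper.

```python
# Minimal sketch of a visual speech recognition pipeline of the kind
# described above. Face detector, mouth-region heuristic, DCT features,
# and GaussianHMM classifier are all assumptions, not the paper's setup.
import cv2
import numpy as np
from hmmlearn import hmm

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(gray_frame):
    """Detect the largest face and return its lower third as an
    approximate mouth region (heuristic assumption)."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return gray_frame[y + 2 * h // 3: y + h, x: x + w]

def frame_features(roi, size=(32, 32), n_coeffs=36):
    """Resize the mouth ROI and keep the top-left block of 2-D DCT
    coefficients as a compact per-frame feature vector."""
    roi = cv2.resize(roi, size).astype(np.float32)
    coeffs = cv2.dct(roi)
    k = int(np.sqrt(n_coeffs))
    return coeffs[:k, :k].flatten()

def video_features(path):
    """Return a (frames x features) array for one utterance video."""
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = mouth_roi(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if roi is not None:
            feats.append(frame_features(roi))
    cap.release()
    return np.array(feats)

def train_word_models(training_data, n_states=5):
    """Fit one Gaussian HMM per word.
    training_data: dict mapping word -> list of feature arrays."""
    models = {}
    for word, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[word] = model
    return models

def recognize(models, features):
    """Return the word whose HMM assigns the highest log-likelihood
    to the observed feature sequence."""
    return max(models, key=lambda w: models[w].score(features))
```

In use, one would call `train_word_models` on labeled utterance videos for the predefined vocabulary and `recognize` on a new utterance; the per-word HMMs capture the temporal dynamics of the lip movements while the frame features capture mouth appearance.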