This paper presents a gesture recognition system for isolated-word sign language that aids communication for deaf and hearing-impaired communities. The system uses a two-level approach: static extraction of key points from the first frame and dynamic accumulation of key-point trajectories over the remaining frames. It achieves a 94.3% recognition rate while addressing signer-independence challenges, and its vision-based design avoids the device and environmental constraints that limit existing sensor-based techniques.
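The two-level feature scheme can be sketched as follows. This is a minimal illustration under assumed conventions (2-D key points of shape `(T, K, 2)`, with the dynamic level summarized as per-key-point path length); the paper's actual feature definitions may differ.

```python
import numpy as np

def two_level_features(keypoints_seq):
    """Hypothetical sketch of the two-level scheme.

    keypoints_seq: array of shape (T, K, 2) -- K 2-D key points over T frames.
    Returns (static, dynamic):
      static  -- key-point coordinates from the first frame (level 1),
      dynamic -- accumulated trajectory length of each key point (level 2).
    """
    kp = np.asarray(keypoints_seq, dtype=float)
    # Level 1: static key points taken from the first frame only.
    static = kp[0].ravel()
    # Level 2: accumulate frame-to-frame displacements into a
    # per-key-point trajectory length.
    steps = np.linalg.norm(np.diff(kp, axis=0), axis=2)  # (T-1, K)
    dynamic = steps.sum(axis=0)                          # (K,)
    return static, dynamic

# Usage: one key point moving from (0, 0) to (3, 4) over two frames.
seq = [[[0.0, 0.0]], [[3.0, 4.0]]]
static, dynamic = two_level_features(seq)
print(static)   # first-frame coordinates
print(dynamic)  # accumulated path length: 5.0
```

A downstream classifier would typically concatenate the static and dynamic vectors into one descriptor per sign.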