The project focuses on developing a computer application for real-time sign language detection using convolutional neural networks (CNNs) to convert hand gestures into text. The system aims to bridge communication gaps for individuals with hearing impairments by providing accurate translations of sign language gestures. The methodology covers dataset creation, image pre-processing, CNN model construction, and live classification; future improvements for higher accuracy are also suggested.
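To make the pipeline concrete, the sketch below illustrates two of the stages named above: pre-processing a captured frame and the convolution operation that a CNN layer applies to detect local patterns such as hand edges. This is a minimal NumPy illustration under stated assumptions, not the project's actual code; the function names `preprocess` and `conv2d`, the frame size, and the edge kernel are all hypothetical.

```python
import numpy as np

def preprocess(frame):
    """Scale uint8 pixel values to [0, 1]; assumes the frame is
    already a grayscale crop of the hand region (an assumption)."""
    return frame.astype(np.float32) / 255.0

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation -- the core operation a CNN
    layer performs to extract local features from the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 8x8 frame: right half bright, mimicking a hand edge.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[:, 4:] = 255

img = preprocess(frame)
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])  # responds to vertical edges

features = conv2d(img, edge_kernel)
print(features.shape)  # -> (6, 6): the feature map a CNN layer would pass on
```

In a trained CNN these kernels are learned from the gesture dataset rather than hand-crafted, and stacked layers with pooling and a final softmax produce the text label for each sign.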