This document presents research on a deep convolutional neural network, referred to as Model E, for hand sign language recognition, aimed at assisting communication for deaf and speech-impaired individuals. Using a hand-sign dataset from Kaggle, the study shows that Model E achieves an accuracy of 96.82%, outperforming the AlexNet model, and examines how different convolutional filter sizes affect the architecture's performance. The broader goal of the research is to improve human-computer interaction through effective hand gesture recognition using image processing and machine learning techniques.
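The source does not specify Model E's exact architecture, but the effect of filter size that the study explores can be illustrated with standard convolution arithmetic: for a given input size, larger filters shrink the spatial output more (absent padding), which changes the feature-map sizes and parameter counts downstream. A minimal sketch of that relationship (the function name and the 64×64 input are illustrative assumptions, not taken from the paper):

```python
def conv_output_size(in_size, kernel_size, stride=1, padding=0):
    # Standard convolution output-size formula:
    # out = floor((in - k + 2p) / s) + 1
    return (in_size - kernel_size + 2 * padding) // stride + 1

# Hypothetical 64x64 input, stride 1, no padding:
# a 3x3 filter yields 62x62 feature maps, a 5x5 filter yields 60x60,
# and a 7x7 filter yields 58x58.
for k in (3, 5, 7):
    print(k, conv_output_size(64, k))
```

This is only a sketch of why filter size matters architecturally; the paper's actual comparison measures the resulting recognition accuracy on the Kaggle dataset.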