This paper presents a system for translating static American Sign Language (ASL) gestures into text using computer vision techniques, specifically employing the Generic Fourier Descriptor (GFD) for feature extraction and K-Nearest Neighbour (KNN) for classification. The proposed method achieved an accuracy of approximately 86% on stored images and 69% on real-time webcam data, demonstrating its potential for gesture recognition. Because the system does not require users to wear gloves or markers, it remains user-friendly and adaptable to real-time applications.
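To illustrate the kind of pipeline the paper describes, the following is a minimal sketch (not the authors' code) of a simplified Generic Fourier Descriptor computed on a binary hand-shape mask, followed by K-Nearest Neighbour classification with scikit-learn. All parameter values (number of radial and angular frequencies, k = 3) and the toy shapes are illustrative assumptions, not figures taken from the paper.

```python
# Sketch: simplified GFD feature extraction + KNN classification.
# Assumes the hand has already been segmented into a binary mask.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def gfd(mask, radial_freq=4, angular_freq=9, n_radius=64, n_theta=128):
    """Simplified Generic Fourier Descriptor of a binary shape mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(radial_freq * angular_freq)
    # Centre the polar grid on the shape centroid (translation invariance).
    cy, cx = ys.mean(), xs.mean()
    max_r = np.sqrt(((ys - cy) ** 2 + (xs - cx) ** 2).max()) + 1e-9
    # Resample the mask onto a (radius, angle) grid.
    r = np.linspace(0, max_r, n_radius)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    sy = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, mask.shape[0] - 1)
    sx = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, mask.shape[1] - 1)
    polar = mask[sy, sx].astype(float)
    # 2-D FFT of the polar image; keep low-frequency magnitudes only.
    spectrum = np.abs(np.fft.fft2(polar))[:radial_freq, :angular_freq]
    # Normalise by the DC term for scale invariance; rotation becomes a
    # circular shift along the angle axis, removed by taking magnitudes.
    dc = spectrum[0, 0] if spectrum[0, 0] > 0 else 1.0
    return (spectrum / dc).ravel()

# Toy usage: synthetic masks standing in for segmented hand images.
def toy_mask(label, dy=0, dx=0):
    yy, xx = np.mgrid[:64, :64]
    cy, cx = 32 + dy, 32 + dx
    if label == 0:   # circle
        return ((yy - cy) ** 2 + (xx - cx) ** 2 <= 14 ** 2).astype(np.uint8)
    if label == 1:   # square
        return ((np.abs(yy - cy) <= 14) & (np.abs(xx - cx) <= 14)).astype(np.uint8)
    return ((np.abs(yy - cy) <= 6) & (np.abs(xx - cx) <= 22)).astype(np.uint8)  # bar

rng = np.random.default_rng(0)
X = np.array([gfd(toy_mask(lbl, *rng.integers(-4, 5, size=2)))
              for lbl in (0, 1, 2) for _ in range(5)])
y = np.repeat([0, 1, 2], 5)

knn = KNeighborsClassifier(n_neighbors=3)   # k=3 is an illustrative choice
knn.fit(X, y)
print(knn.predict([gfd(toy_mask(1))]))      # expected: [1]
```

In a full system, `toy_mask` would be replaced by the segmented hand region from stored images or webcam frames, and the labels would correspond to the static ASL signs.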