This document proposes a novel method for dynamic gesture recognition of Indian Sign Language (ISL) using computer vision. It extracts both global and local features from input video captured by a Microsoft Kinect sensor. For global feature extraction, it applies the Axis of Least Inertia (ALI) to represent hand motion; for local feature extraction, it applies Principal Component Analysis (PCA) to represent finger movements. The extracted features are used to recognize ISL gestures in applications that support communication for people with hearing and speech impairments.
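To make the global feature concrete, the sketch below shows one common way to compute an axis of least inertia for a 2D point set (for example, hand-centroid positions tracked over a gesture): the ALI is the line through the centroid that minimises the sum of squared perpendicular distances, which is the dominant eigenvector of the points' covariance matrix. This is a minimal illustration under those assumptions, not the paper's implementation; the function name, the example trajectory, and the use of the axis orientation as a feature are all hypothetical.

```python
import numpy as np

def axis_of_least_inertia(points):
    """Return the centroid and unit direction of the axis of least inertia
    for a set of 2D points (e.g. hand-centroid positions over a gesture).

    The ALI passes through the centroid and minimises the sum of squared
    perpendicular distances, i.e. it lies along the dominant eigenvector
    of the points' second-moment (covariance) matrix.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)        # 2x2 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    direction = eigvecs[:, -1]                    # axis with the largest spread
    return centroid, direction

# Hypothetical example: a roughly diagonal hand trajectory
trajectory = [(10, 12), (14, 17), (19, 21), (25, 28), (30, 33)]
centroid, direction = axis_of_least_inertia(trajectory)
orientation_deg = np.degrees(np.arctan2(direction[1], direction[0]))
print(centroid, direction, orientation_deg)
```

The same eigendecomposition underlies PCA, which is why the local (finger-movement) features can be obtained with an analogous projection of higher-dimensional hand-shape measurements onto their leading principal components.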