The document describes a method for 3D gesture recognition using a Leap Motion controller. Data was collected from over 100 users performing 12 predefined gestures, totaling 1.5 GB and 9,600 gesture instances. Each gesture was converted into a "motion image" by projecting its 3D point trajectory onto 2D planes and mapping the locations to pixels, yielding a fixed-size representation regardless of gesture duration. Deep belief nets and convolutional neural networks were then used to extract features from these images and classify them. Future work includes incorporating hidden Markov models to segment continuous gesture streams and exploring recurrent neural networks.
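To make the "motion image" idea concrete, here is a minimal sketch of how such a representation could be built. It assumes axis-aligned projections onto the xy, xz, and yz planes, a 32x32 resolution, per-axis normalization, and temporal order encoded as pixel intensity; the source does not specify these details, so they are illustrative choices, not the paper's exact method.

```python
import numpy as np

def motion_images(points, size=32):
    """Convert a variable-length 3D point sequence into three fixed-size
    2D "motion images" by projecting onto the xy, xz, and yz planes.

    points : (N, 3) array of 3D positions (e.g., fingertip samples).
    size   : side length of each square output image (assumed; the
             source does not state the resolution used).
    Returns a (3, size, size) float array with values in [0, 1].
    """
    pts = np.asarray(points, dtype=float)
    # Normalize each axis into [0, 1] so the representation is
    # position- and scale-invariant across users (an assumption).
    mins = pts.min(axis=0)
    span = np.maximum(pts.max(axis=0) - mins, 1e-9)
    pts = (pts - mins) / span

    images = np.zeros((3, size, size))
    planes = [(0, 1), (0, 2), (1, 2)]  # xy, xz, yz projections
    for img, (a, b) in zip(images, planes):
        # Map normalized coordinates to pixel indices.
        u = np.minimum((pts[:, a] * size).astype(int), size - 1)
        v = np.minimum((pts[:, b] * size).astype(int), size - 1)
        # Encode temporal order as intensity (assumed: later samples
        # are brighter) so the direction of motion survives projection.
        t = np.linspace(0.2, 1.0, len(pts))
        img[v, u] = np.maximum(img[v, u], t)
    return images

# Example: a 200-sample spiral gesture.
theta = np.linspace(0, 2 * np.pi, 200)
traj = np.stack([np.cos(theta), np.sin(theta), 0.1 * theta], axis=1)
imgs = motion_images(traj)  # shape (3, 32, 32), suitable as CNN input
```

The three projected channels could then be stacked and fed to a CNN much like an RGB image, which is one plausible way the fixed-size property described above enables standard image classifiers.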