This document describes research on using convolutional neural networks (CNNs) to control a quadcopter in two tasks: obstacle avoidance and hand-gesture command. For obstacle avoidance, a 15-layer CNN was trained on images of obstacles in different positions, achieving a mean accuracy of 75%. For gesture command, transfer learning was applied to the pre-trained AlexNet model: the last two layers were replaced and the network fine-tuned on images of hand gestures, reaching 98% accuracy. The results demonstrate the potential of CNNs for real-time visual processing and autonomous quadcopter control.