This document describes a project to build an autonomous robot capable of mapping indoor environments. The robot combines an omni-directional platform with a Microsoft Kinect sensor for depth and color imaging. Software was developed to process the Kinect's point-cloud data into textured 3D meshes. The goal is for the robot to autonomously determine the optimal areas to map and to generate a complete 2D model of its surroundings. The project aims to provide an affordable platform for indoor-mapping research built on the open-source Robot Operating System (ROS).
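The core of the point-cloud processing is back-projecting the Kinect's depth image into 3D points via the pinhole camera model. The sketch below illustrates that step only; the intrinsic parameters are typical factory values for a Kinect v1 640x480 depth camera, not this project's actual calibration, and the function name is hypothetical.

```python
import numpy as np

# Assumed Kinect-v1-style intrinsics (typical values, not a real calibration).
FX, FY = 525.0, 525.0      # focal lengths in pixels
CX, CY = 319.5, 239.5      # principal point (image center)

def depth_to_point_cloud(depth_m, fx=FX, fy=FY, cx=CX, cy=CY):
    """Back-project a depth image (in metres) to an N x 3 point cloud.

    Uses the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Pixels with zero depth (no Kinect return) are dropped.
    """
    v, u = np.indices(depth_m.shape)          # pixel row/column grids
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]           # keep only valid returns

# Example: a single valid pixel at the image center, 2 m away,
# maps to a point near (0, 0, 2) in the camera frame.
depth = np.zeros((480, 640))
depth[240, 320] = 2.0
cloud = depth_to_point_cloud(depth)
```

In a full pipeline, clouds like this would be accumulated across robot poses and then meshed and textured; in a ROS setup the same data typically arrives as `sensor_msgs/PointCloud2` messages rather than raw depth images.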