This document describes how a feature-based Markov Decision Process (MDP) formulation and policy iteration can be used to develop an algorithm that learns to play Tetris well. Tetris is formulated as an MDP whose states are defined by the wall configuration and the current piece to be placed. An approximate value function is defined as a weighted combination of features of the game state, such as the column heights. Policy iteration is then used to iteratively update the weight vector of this approximation and thereby learn a strong policy. Simulation results show that the learning algorithm achieves much higher Tetris scores than a heuristic algorithm.
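The sketch below illustrates the main ingredients described above: a feature vector built from column heights, a linear approximate value function, a one-step-lookahead (greedy) placement rule, and a least-squares policy-evaluation step that refits the weight vector from simulated transitions. The helper names (`features`, `value`, `greedy_placement`, `evaluate_policy_lstd0`), the exact feature set, and the LSTD-style evaluation are illustrative assumptions, not necessarily the document's precise features or update rule.

```python
import numpy as np

def features(board):
    """Feature vector for a Tetris wall: per-column heights plus a bias term.

    `board` is a 2D 0/1 array (rows x columns, row 0 at the top).
    Using only column heights is an assumption; richer feature sets often
    add height differences and hole counts.
    """
    rows, cols = board.shape
    heights = np.zeros(cols)
    for c in range(cols):
        filled = np.nonzero(board[:, c])[0]
        heights[c] = rows - filled[0] if filled.size else 0
    return np.concatenate([heights, [1.0]])  # constant bias feature

def value(board, w):
    """Approximate value: a linear combination of the state features."""
    return float(w @ features(board))

def greedy_placement(candidates, rewards, w):
    """One-step lookahead: choose the successor wall maximizing reward + value.

    `candidates` is a list of resulting boards (one per legal placement of the
    current piece) and `rewards` holds the immediate scores (e.g. lines cleared).
    """
    scores = [r + value(b, w) for b, r in zip(candidates, rewards)]
    return int(np.argmax(scores))

def evaluate_policy_lstd0(transitions, w_dim, gamma=0.99):
    """Least-squares evaluation of the current policy from simulated
    (board, reward, next_board) transitions; a stand-in for the policy
    evaluation step, returning an updated weight vector."""
    A = np.zeros((w_dim, w_dim))
    b = np.zeros(w_dim)
    for board, reward, next_board in transitions:
        phi, phi_next = features(board), features(next_board)
        A += np.outer(phi, phi - gamma * phi_next)
        b += reward * phi
    # small ridge term keeps the solve well conditioned early on
    return np.linalg.solve(A + 1e-6 * np.eye(w_dim), b)
```

In a full policy-iteration loop, one would alternate between simulating games with `greedy_placement` under the current weights and refitting the weights with `evaluate_policy_lstd0`, repeating until the weight vector (and hence the policy) stops improving.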