This paper investigates interpretability challenges in generative AI models, GPT in particular, using sensitivity analysis to rank the importance of input words via attention weights and the Kullback-Leibler (KL) divergence. The aim is to improve understanding of transformer models and thereby address accountability and transparency in AI outputs. The paper also examines the practical implications and ethical considerations of deploying generative AI models across applications.
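To make the KL-based ranking idea concrete, the sketch below (an illustrative assumption, not the paper's actual implementation) scores each word in a prompt by how much GPT-2's next-token distribution shifts when that word is removed. The model choice (`gpt2`), the ablation-by-deletion scheme, and helper names such as `rank_word_importance` are all hypothetical choices made for this example.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical sensitivity-analysis sketch: a word is "important" if
# removing it causes a large KL divergence in the next-token distribution.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_dist(text: str) -> torch.Tensor:
    """Probability distribution over the next token given `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits at the final position
    return F.softmax(logits, dim=-1)

def rank_word_importance(prompt: str) -> list[tuple[str, float]]:
    """Rank words by the output shift caused by deleting each one."""
    words = prompt.split()
    base = next_token_dist(prompt)
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        perturbed = next_token_dist(ablated)
        # F.kl_div(log_q, p) computes KL(p || q): here, how far the
        # ablated distribution drifts from the original one.
        kl = F.kl_div(perturbed.log(), base, reduction="sum").item()
        scores.append((word, kl))
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(rank_word_importance("The quick brown fox jumps over the lazy dog"))
```

A complementary attention-based ranking, as the abstract mentions, could aggregate each input token's attention weights across heads and layers; the KL approach above probes the model's output behavior directly rather than its internal weights.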