This document presents a model-agnostic code simplification technique for pre-trained language models (PLMs) aimed at improving the efficiency of code intelligence. It discusses challenges PLMs face, such as high computational cost and input-token limits, and reports empirical studies on how removing different categories of code tokens affects model performance. The proposed SlimCode method ranks tokens by importance and removes the least important ones first, shrinking model inputs for downstream software engineering tasks.
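To make the idea concrete, the following is a minimal, hypothetical sketch of priority-based token removal. The token categories, their ordering, and the toy lexer are illustrative assumptions, not the actual SlimCode rules: tokens are dropped category by category (least important first) until the input fits a given token budget.

```python
import re

# Hypothetical removal order (an assumption for illustration; the real
# SlimCode importance ranking may differ): comments are dropped first,
# then punctuation symbols, then keywords, and identifiers last.
REMOVAL_PRIORITY = ["comment", "symbol", "keyword", "identifier"]

def tokenize(code: str):
    """Split source code into (category, text) pairs with a toy lexer."""
    pattern = r"(?P<comment>#.*)|(?P<identifier>[A-Za-z_]\w*)|(?P<symbol>[^\sA-Za-z_]+)"
    keywords = {"def", "return", "if", "else", "for", "while"}
    tokens = []
    for m in re.finditer(pattern, code):
        kind, text = m.lastgroup, m.group()
        if kind == "identifier" and text in keywords:
            kind = "keyword"  # reclassify reserved words
        tokens.append((kind, text))
    return tokens

def simplify(code: str, budget: int) -> str:
    """Drop whole token categories in priority order until the
    sequence fits within `budget` tokens, then truncate if needed."""
    tokens = tokenize(code)
    for category in REMOVAL_PRIORITY:
        if len(tokens) <= budget:
            break
        tokens = [t for t in tokens if t[0] != category]
    return " ".join(text for _, text in tokens[:budget])

code = "def add(a, b):  # sum two numbers\n    return a + b"
print(simplify(code, budget=6))  # comments and symbols removed first
```

The key design point this sketch mirrors is that removal is rule-based and model-agnostic: no gradient or attention information from the PLM is needed, so the same simplified input can be fed to any downstream model.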