The document presents neighbor consistency regularization (NCR), a learning approach designed to improve model performance on datasets with noisy labels by enforcing that examples with similar feature representations receive similar predictions. The method combines a supervised classification loss with an NCR term that encourages each example's prediction to agree with the predictions of its nearest neighbors in feature space, mitigating the impact of incorrectly labeled examples. Experiments across several datasets demonstrate the effectiveness of NCR and its robustness relative to existing methods.
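The combined objective can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's exact formulation: it mixes a cross-entropy term with a KL term that pulls each example's predicted distribution toward a similarity-weighted average of its k nearest neighbors' predictions (the function name `ncr_loss`, the mixing weight `alpha`, and the cosine-similarity weighting scheme are all illustrative choices).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ncr_loss(features, logits, labels, alpha=0.5, k=2, eps=1e-8):
    """Hypothetical sketch of a neighbor-consistency objective:
    (1 - alpha) * cross-entropy + alpha * mean KL divergence between each
    prediction and the similarity-weighted average of its k nearest
    neighbors' predictions."""
    n = features.shape[0]
    # Cosine similarity between L2-normalized feature vectors.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # exclude each example from its own neighborhood

    probs = softmax(logits)
    # Standard supervised cross-entropy on the (possibly noisy) labels.
    ce = -np.log(probs[np.arange(n), labels] + eps).mean()

    kl_total = 0.0
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]           # indices of the k most similar examples
        w = np.maximum(sim[i][nbrs], 0.0)
        w = w / (w.sum() + eps)                  # normalized similarity weights
        target = (w[:, None] * probs[nbrs]).sum(axis=0)
        # KL(p_i || neighbor-weighted target): penalize predictions that
        # disagree with similar examples.
        kl_total += np.sum(probs[i] * (np.log(probs[i] + eps) - np.log(target + eps)))

    return (1 - alpha) * ce + alpha * kl_total / n

# Usage on toy data: with alpha=0 the term reduces to plain cross-entropy.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))
logits = rng.normal(size=(6, 3))
labels = np.array([0, 1, 2, 0, 1, 2])
loss = ncr_loss(features, logits, labels, alpha=0.5, k=2)
```

Because the neighbor target is itself a valid probability distribution (the weights sum to one), the KL term is non-negative, so incorrectly labeled examples surrounded by correctly predicted neighbors incur a consistency penalty that counteracts the noisy supervised signal.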