This document presents a method called meta-dropout, which learns to perturb latent features in an input-dependent manner to enhance generalization in few-shot learning. Trained within a meta-learning framework, the approach yields better decision boundaries and outperforms existing regularizers in both clean and adversarial settings. The findings demonstrate that meta-dropout can improve model performance across various few-shot tasks while also strengthening resistance to adversarial attacks.
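
To make the core idea concrete, the sketch below shows one possible form of an input-dependent perturbation of latent features in PyTorch. The `InputDependentDropout` class, the `noise_net` generator, and the multiplicative-noise parameterization are illustrative assumptions rather than the paper's implementation, and the meta-learning loop that would train the noise generator across tasks is omitted.

```python
# Illustrative sketch only (not the authors' code): an input-dependent
# noise layer. The noise generator "noise_net" and the softplus-based
# multiplicative form are assumptions made for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputDependentDropout(nn.Module):
    """Perturbs latent features with noise whose scale depends on the input."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # Small network mapping features to per-dimension noise scales.
        self.noise_net = nn.Linear(feature_dim, feature_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return h  # no perturbation at evaluation time
        # Input-dependent scale for the noise distribution.
        scale = F.softplus(self.noise_net(h))
        # Sample multiplicative noise and perturb the latent features.
        eps = torch.randn_like(h)
        return h * (1.0 + scale * eps)

# Usage: insert between layers of a few-shot learner; in the meta-learning
# setting, the noise parameters would be learned across tasks (e.g., in a
# MAML-style outer loop) rather than on a single dataset.
layer = InputDependentDropout(feature_dim=64)
layer.train()
features = torch.randn(8, 64)
perturbed = layer(features)
```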