The U.S. Army is addressing the threat of data poisoning in facial recognition systems, in which adversaries subtly alter training data to undermine AI algorithms. The Army is funding research to develop defensive software that can detect and mitigate these backdoor attacks, which could cause AI models to misclassify inputs or fail outright. The project faces challenges related to the size and quality of the data used to train the AI, which must balance the risk of adversarial tampering against the need for diverse, representative datasets: the broader the sources of training data, the larger the surface an attacker can poison.
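
To illustrate how a backdoor of this kind can be planted, the minimal sketch below shows one common poisoning pattern described in the research literature: stamp a small pixel trigger onto a fraction of training images and flip their labels, so a model trained on the data learns to associate the trigger with the attacker's chosen class. This is not the Army's actual method or any specific tool from the funded project; the `poison_dataset` helper, the trigger patch, and all parameters are hypothetical.

```python
import numpy as np

TRIGGER_VALUE = 255  # hypothetical trigger: a solid white patch
PATCH_SIZE = 4       # illustrative patch size in pixels (assumption)

def poison_dataset(images, labels, target_label, poison_fraction=0.05, seed=0):
    """Return poisoned copies of (images, labels) plus the poisoned indices.

    images: uint8 array of shape (N, H, W) or (N, H, W, C)
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Pick a small random subset of the training set to poison.
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp the trigger into the bottom-right corner of each chosen image.
    images[idx, -PATCH_SIZE:, -PATCH_SIZE:] = TRIGGER_VALUE

    # Relabel the poisoned samples so training ties trigger -> target_label.
    labels[idx] = target_label
    return images, labels, idx
```

Because the model behaves normally on clean inputs and misbehaves only when the trigger appears, the attack is hard to catch with ordinary accuracy testing; defensive software of the kind the Army is funding instead has to inspect the data or the model itself, for example by flagging training samples whose internal feature activations cluster anomalously for their labeled class.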