This document discusses the vulnerability of pattern classification systems in adversarial settings and proposes a framework for evaluating the security of such classifiers at design time. The authors highlight three open issues: analyzing the vulnerabilities of classification algorithms, developing methods to empirically assess classifier security, and devising novel design methods that harden classifiers against potential attacks. The proposed framework aims to improve classifier security by proactively anticipating and simulating potential adversarial attacks before deployment.
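To make the idea of design-time security evaluation concrete, here is a minimal sketch in Python: a classifier is trained as usual, a simple evasion attack is simulated by perturbing malicious test samples toward the benign region, and accuracy is compared before and after the attack. The dataset, the linear attack model, and all names here are illustrative assumptions for this sketch, not the authors' exact method.

```python
# Hypothetical design-time security check: train, simulate an evasion
# attack, and measure the accuracy drop. Assumptions: a linear model and
# a fixed-step perturbation against its weight vector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
clean_acc = clf.score(X_test, y_test)

# Simulated evasion: shift each "malicious" (class 1) test sample a fixed
# step along -w, lowering its decision score toward the benign side.
w = clf.coef_[0]
step = 0.5 * w / np.linalg.norm(w)
X_adv = X_test.copy()
X_adv[y_test == 1] -= step

attacked_acc = clf.score(X_adv, y_test)
print(f"clean accuracy: {clean_acc:.2f}")
print(f"under attack:   {attacked_acc:.2f}")
```

Comparing the two accuracy figures quantifies how much performance the classifier would lose under this attack model, which is the kind of what-if analysis the framework advocates performing during design rather than after deployment.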