Improving AI trustworthiness is not a one-time or one-sided task, but a continuous, collaborative process involving multiple steps and actors. Designers must align the AI system with user and stakeholder needs, values, and goals, as well as ethical and legal standards. Developers must build it from high-quality data, algorithms, and tools, and test it for accuracy, robustness, and reliability. At deployment, the system must be integrated into the user and stakeholder environment, context, and expectations, and monitored for performance, feedback, and improvement. Finally, it must be evaluated for its outcomes, impacts, and risks, and audited for transparency, explainability, accountability, fairness, privacy, and security.
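To make the monitoring and auditing steps concrete, here is a minimal Python sketch of a post-deployment check. The function name `audit_batch`, the demographic-parity metric, and the alert thresholds are illustrative assumptions, not a prescribed implementation; a production system would track many more metrics over time.

```python
import numpy as np

def audit_batch(y_true, y_pred, group):
    """Compute accuracy and a demographic-parity gap for one batch of predictions."""
    accuracy = np.mean(y_true == y_pred)
    # Demographic parity difference: the gap in positive-prediction
    # rates between the best- and worst-treated groups.
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    dp_gap = max(rates) - min(rates)
    return accuracy, dp_gap

# Hypothetical batch of labels, predictions, and group memberships.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])

accuracy, dp_gap = audit_batch(y_true, y_pred, group)
if accuracy < 0.9 or dp_gap > 0.2:  # thresholds are illustrative assumptions
    print(f"Review needed: accuracy={accuracy:.2f}, parity gap={dp_gap:.2f}")
```

Running such a check on every batch of live predictions turns the evaluation and auditing steps above from a one-off exercise into the continuous feedback loop the process requires.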