Leading With Trust: When Responsible AI and Security Collide

Leading With Trust: When Responsible AI and Security Collide

This month's newsletter is written by Sacha Faust , CISO at Grammarly

Ensuring the AI we adopt aligns with our values is essential as AI becomes integral to how businesses operate, communicate, and compete. As I step into the CISO role at Grammarly, I deeply appreciate that responsible AI is becoming a boardroom priority, shaping product decisions and impacting trust. I believe this isn’t just an ethical consideration—it’s a security imperative.

Responsible AI is a product mindset

Here’s where many organizations go wrong: They simply treat responsible AI as a compliance requirement—when in reality, it should be a product principle. We’ve shared in previous newsletter issues how responsible AI practices lead to.

As business leaders reframe responsible AI as a product design mindset, I encourage them to ask the following questions:

  • Does our AI reflect our company’s values in how it behaves?
  • Are we enabling employees to use AI in ways that align with our culture and morals?
  • Are we proactively identifying where AI could create risks, from bias to data leakage?

Responsible AI isn’t just about doing the right thing—it’s about designing systems that work better for your customers and teams. The end goal isn’t just fairness or transparency. It’s better outcomes for everyone they touch.

Secure the AI supply chain

Adopting AI isn’t plug and play. Behind every model you deploy is a supply chain of data, training processes, dependencies, and third-party components. Each component carries a potential risk, and that chain is only as strong as its weakest link.

To secure this AI supply chain, organizations must:

  • Trace where AI models originate and how they’re trained
  • Evaluate third-party vendors for alignment with responsible AI standards
  • Assert control over key components like moderation, data governance, and model deployment

A single point of failure—a biased model, a leaky vendor, or a poorly governed dataset—can undermine your brand’s integrity and expose your customers to risk. Taking ownership of your AI infrastructure is security. Responsible adoption requires as much diligence as secure deployment.

Blur the lines: Responsible AI is security

One of the most important insights I’ve gained throughout my career: The boundaries between responsible AI and AI security are not clearly drawn, nor should they be.

At Grammarly, we define responsible AI as creating and utilizing artificial intelligence in a manner that is mindful, morally sound, and aligned with human values. Security and responsible AI are deeply intertwined. Security ensures that systems behave as intended. Responsible AI ensures they behave as they should. Both are essential.

To integrate responsible AI and security:

  • Conduct cross-functional product and feature reviews before launch
  • Stress-test AI features in real-world, adversarial environments
  • Establish continuous monitoring and feedback loops to detect unintended consequences

Consider incentivizing external oversight. Bug bounty programs, red teaming, and third-party audits aren’t just for traditional software; they’re standard practice for AI systems, too.

Certify your commitment

Trust is quickly becoming a differentiator in the AI space. As scrutiny increases, forward-looking companies will seek ways to demonstrate their reliability and governance.

That’s why you’ll see certifications like ISO/IEC 42001:2023—the first international AI Management System standard—gaining traction. Grammarly believes such frameworks are more than paperwork, and we’re extremely proud to have recently achieved this certification. Such frameworks are a way to:

  • Signal to customers and partners that AI governance is a priority
  • Align internal teams around clear, responsible standards
  • Stay ahead of evolving regulatory expectations

What’s coming next: Customization vs. control

As we look to the future, business leaders should prepare for a surge of hyper-customized, context-aware AI systems across various domains. These systems will go beyond productivity: They will make decisions, solve problems, and even act on behalf of users.

It’s an exciting shift. But it also demands new leadership questions:

  • Are our policies ready to govern personalized AI behavior?
  • Do our safeguards evolve as user context and system autonomy grow?
  • How do we weigh the benefits of powerful AI against the risks of losing control?

It is up to leadership to find the right balance between freedom and oversight. You don’t need to be a technologist to lead on responsible AI. However, you do need to ask the right questions, set clear expectations, and create an environment where ethical and secure innovation can thrive.


Article content

AI in Action: 3 Questions With Michael Roseman, COO, OneSource Virtual

What’s your overall vision for AI in your organization over the next few years?

Our vision at OneSource Virtual centers on making AI as ubiquitous, accessible, and intuitive as Grammarly has made communication assistance. As a trusted HR and finance solutions provider, we’re creating an environment where AI tools are seamlessly integrated into workflows—available whenever and wherever our teams need them.

We’re putting powerful tools in the hands of our functional leaders, including those without technical backgrounds. A great example is our recent Build-A-Bot workshop, where about 80 business leaders with no prior AI development experience created nearly 30 functioning agents in just 2.5 hours—many of which were near-production quality.

This transformation is reshaping our entire development lifecycle while maintaining our core values of data security and reliable results:

  • Ideation: AI enables non-technical stakeholders to express ideas with greater specificity and detail.
  • Rapid prototyping: We can move quickly from concept to prototype at minimal cost.
  • Accelerated development: Once validated, we build and deploy solutions with more speed and confidence.

Just as Grammarly enhances rather than replaces writers, we see AI as fundamentally augmentative, enhancing creativity and productivity while staying true to our customer partnership model.

What does responsible AI mean to your team and your ways of operating?

Responsible AI at OneSource Virtual means deploying technology that enhances human potential while maintaining strong ethical boundaries and data protection. We aim to make powerful tools accessible without compromising on trust or accountability.

At OSV, responsible AI includes:

  • Ethical and fair systems: We’re committed to avoiding bias and treating all stakeholders equitably.
  • Transparency and explainability: Stakeholders should understand how AI systems work and how decisions are made.
  • Data security and privacy: We apply the same rigorous protections to AI that we do to our customers’ HR and financial data.
  • Accessibility and inclusivity: AI tools should be usable by employees regardless of role or technical experience.
  • Human-centered design: AI is meant to augment—not replace—human judgment in critical decisions.

Our Build-A-Bot workshop was a great example: business leaders created AI agents tailored to their teams’ needs, reinforcing our belief in human agency and inclusive design.

What have been your biggest surprises—both positive and negative—since implementing AI in your company’s operations?

Positive surprises:

  • Innovation acceleration: The pace of AI advancement has unlocked capabilities we once thought were out of reach.
  • Ubiquitous integration:: AI is becoming a natural part of how we work—just like Grammarly’s presence across writing platforms.
  • Team enthusiasm: Our employees have embraced AI with real excitement, exploring new ways to apply it.
  • Enhanced partnership: AI has allowed us to deliver more proactive insights and value to our customers.

Challenges:

  • Tool fragmentation: There’s no one-size-fits-all solution. Integrating multiple AI tools has increased complexity.
  • Document structure: We underestimated the work required to reformat content for AI processing.
  • Integration overhead: Managing multiple evolving tools has created technical debt.
  • Balancing innovation with reliability: We’ve worked to maintain trusted service levels while experimenting with emerging technologies.


Industry News

  • 💭 Responsible AI is still top of mind for leaders. VentureBeat recaps the Stanford Institute for Human-Centered AI’s 2025 AI Index Report, which found 64% of business leaders lean toward a safety-first approach to AI innovation.
  • 🤝 CIOs and privacy chiefs are increasingly working together to mitigate AI risk. CIO Dive covers learnings from IAPP’s Global Privacy Summit, where leaders emphasized the importance of companies adding more conversations between privacy and IT teams during the AI development and procurement processes.
  • 📈 Companies are seeing success with letting their employees safely explore AI. Fortune examines how one company hosted an “AI day” and gives monthly stipends to test out different AI tools and share what’s working and what’s not.

Responsible AI at Grammarly

  • 🧠 Grammarly on AI Agents: What’s Working and What’s Not. Grammarly CEO Shishir Mehrotra joined a live video summit, “AI Agents: What’s Working and What’s Not,” hosted by The Information, to discuss Grammarly’s evolution, the future of work, and the importance of delivering ROI by building AI that solves real user needs and works alongside them.
  • 🔦 Make Use Of spotlights Grammarly Authorship in a feature story, underscoring the necessity for process-tracking tools to serve as an alternative to overreliance on AI detection. While “many professors and teachers are wary of students using AI tools to write papers ... AI checkers aren’t foolproof.” That’s why we developed Authorship—to give both educators and students confidence that they’re engaging with AI responsibly and submitting their authentic work.
  • 📢 “Grammarly was one of the first agentic tools to become widely used,” writes Newsweek in a story highlighting Grammarly’s vision of an AI superhighway that brings agents right to where the user is working. As a trusted writing assistant for 16 years, Grammarly is uniquely positioned to connect the dots across applications and systems to provide smarter, more personalized support to users.



Oleksandra Ivanchenko

Senior Product Designer | UX/UI Design @ DataArt

2mo

Love this

Like
Reply
Dr. Kasili Mutambo, Ph.D.

Policy Researcher and Institutional Consultant

3mo

Thanks for sharing

Fanny Quintela

Sales Leader fueling the wave of connected devices and technology advancement for government

3mo

I love Grammarly!! You are awesome! Thanks for your help everyday.

Melissa Kuenzi

Impact through content.

3mo

"They simply treat responsible AI as a compliance requirement—when in reality, it should be a product principle" YES 👏 👏 👏 .

To view or add a comment, sign in

Others also viewed

Explore topics