Integrate GenAI Into CX Without Compromising Trust

GenAI is everywhere, and with it comes a corresponding feeling that you are falling behind, that you must integrate GenAI into your customer experience right away.

In today's edition of the #CX Patterns podcast and newsletter, I'm talking with David Truog about how to channel that energy productively - how to integrate AI into CX the right way.

The right way to integrate AI enhances trust.

Focus On Trust When Integrating AI

The widespread experimentation with GenAI, and the fast evolution of both the technology and the models built on it, means that now is a fantastic time to imagine new interaction models and new ways of solving customers' needs.

David stresses that all of this experimentation needs to happen within a frame of first doing no harm to customers' trust in our brands. We want each use case to be additive to our experiences, to show that we have been thoughtful in design and delivery, and to open our customers' minds to the possibility that the next use case we offer is also one they'll be interested to try.

Your primary goal is to have the current use case build belief in the next one.

Start With Employee Use Cases

Large language models are probabilistic and fallible; they do not always give the right answer. This is why David stresses the value of starting with employee use cases when integrating GenAI. Employees can be a line of defense against giving a customer a wrong answer, but they must be enabled to spot wrong answers and empowered to override the models when necessary.

Early employee use cases create a great test bed for learning how to best design the interaction between model and user: how do we get employees, and then customers, to pay enough attention to the model's output, to trust but verify, so they don't accept a wrong answer without double-checking?

Because while LLMs are fallible, so are humans. We give wrong answers all the time. And yet, as David points out, we've built enormous organizations full of fallible employees and made it work - by checking answers, and by building in layers of redundancy to catch mistakes. For now, the same is required with LLMs.

Employees are an ideal early user group for fallible, imperfect models.

Consider What GenAI Is Best Suited For

LLMs feel magical to us because they aren't giving prescribed answers; they are generating new, original responses. This means that for use cases where you need a consistent, correct answer every time, GenAI may not be the tech you're looking for.

David and I discuss this in the episode: companies will better serve their employees and their customers by focusing on use cases that match GenAI's strengths, and then creating positive guardrails around those use cases that help the models and their human interaction partners get to better outcomes.

Create positive guardrails that help users and models move into greener pastures.
