Boards and the AI Reflex: Why Curiosity Beats Caution
Board Evolution: A Tech Leadership Interview Series
"If you're not using these tools every day, you can't add value in the conversation."
That's Stephane Chatonsky's frank assessment—and he's speaking from experience.
With board experience spanning startups to established companies, Stephane brings a refreshing perspective on how boards should approach AI. Having worked with Stephane while he was the chairman of the Prospection board, I've always appreciated his willingness to experiment with new technology and share his genuine enthusiasm for what works.
Start With Use, Not Theory
Stephane's biggest concern? Most board members don't actually use AI tools in their daily work.
"I would be a big advocate to encourage members to use it themselves daily. Unless they use it—I don't want to hear from a board member who isn't using ChatGPT every day. Forget about advising or adding value to a management team. It's like saying 'I'm going to advise you on cars but I've never driven one.'"
The starting point, he believes, is particularly challenging:
"Board members are generally older than the team. They come from the late stage of their career, so they're already starting at a disadvantage in terms of their skills."
But this isn't about technical expertise—it's about what Stephane calls "building your AI reflex."
"You need to build your AI reflex. It's got to be always front of mind. It's got to be a reflex: 'Of course you're going to do that.' Experiment with different tools and do it from a personal basis. That's really more important than any specific training."
What Boards Can Actually Do
While acknowledging that different boards have different levels of involvement, Stephane sees five areas where they can make a meaningful difference:
1. Identify quick wins "They've got a broad view on the business. They can help the management identify and spot areas where you can have immediate impact."
2. Encourage upskilling and partnering "Do we need to recruit? Do we upskill the team? How do we explore partnering? How do you collaborate with consultants, AI experts, or other companies? The board has a tendency to be outward-looking, so that's where they can be really helpful."
3. Push for metrics "Often the board, because there's a tendency to be quite financially driven, can encourage the team to say 'How do we measure ROI of whatever we do?' That's not stepping into the management role—it's actually driving towards making sure we measure success."
4. Be an outside perspective "Boards can challenge internal thinking from their external vantage point."
5. Address governance appropriately "Of course there's the risk, ethics, and governance piece."
Separate Societal Risk from Business Risk
Stephane offers an important distinction regarding AI risk management:
"There's this whole idea of existential risk from AI to the human species. Right now, I'm reading 'The Coming Wave,' which Bill Gates called the best book on AI. Most of it is around existential risk and how to manage it. Some board members would actually embrace that idea quickly: 'It's a new technology with existential risk, therefore we need to be really careful.'"
But he pushes back against this approach:
"I don't think that's the right approach. You might think at a societal level that it's a big danger, but as a board member looking to improve the business, being overly constraining and careful isn't going to drive results."
His view is pragmatic:
"On one side, you're a citizen and need to be concerned about that. On the other side, you need to put your citizen hat aside. You're a board member looking to improve profit, get AI into the P&L. You need to go really hard on it, even if societally you're concerned. You almost need to be schizophrenic. I'm a board member, a CEO—I need to go hard on this because if I don't, others will. I'll lose money and won't be as competitive."
Build for Agility, Not Prediction
Rather than trying to predict AI's future development or impacts, Stephane advocates for a more adaptable approach to risk:
"Just after the GFC, I went to a breakfast in Sydney where Lloyd Blankfein, then the global CEO of Goldman Sachs, was speaking. He was asked about his approach to risk management, and what he said was very striking."
"You could invest a lot of resources trying to predict the future, but your chance of failing at prediction is very high. The better thing you can do is be agile and build an organisation that is highly flexible and reactive. Whatever happens, you'll react very fast and adjust."
This rings especially true for AI:
"That applies to AI even more because there is so much uncertainty. Nobody knows what's going to happen. In an ideal world, the board would focus on getting the organisation to be agile and fast. I think that's a good risk management approach."
Start Small, Not with Grand Transformations
When it comes to adoption strategies, Stephane favours practical experimentation over grandiose plans:
"I think it's more a daily thing. That's why using it daily, finding the quick wins—this is more the way to go than having big transformation programs. You're not going to build commitment to a big transformational program unless people individually are using it and seeing quick wins."
He's sceptical of the default enterprise approach:
"You can hire McKinsey, but what they're going to say is 'Let's transform ourselves to be an AI company.' This is not going to create movement. People will react more emotionally to quick wins."
His recommendation is focused:
"Don't be shy to go quite deep in something specific. There are some really good tools everywhere—for recruitment, for example. Use that to change the recruitment function of your business. Get some quick wins there."
Rethinking Talent for an AI World
Stephane is particularly concerned about how organisations will find and develop talent for an AI-driven future:
"I'm concerned about businesses that recruit people who come from education. I'm quite concerned about how education is adapting itself to AI. I see it because I'm teaching at university, and they're very slow organisations—slow to change. Are they adjusting the type of knowledge they provide to students and how they teach to the emergence of AI?"
This creates questions boards should be asking:
"There are some big questions around what type of people we want to recruit going forward. What type of people can operate well in this environment, and what kind of skill set do they need? It might be the case that technical skills matter less than humanities. The board challenging management on what type of people will thrive in this new environment, and what kind of skills they need—that has to be important."
Key Takeaways from My Conversation with Stephane
On Personal Experience: "You need to build your AI reflex. It's got to be a reflex: 'Of course you're going to do that.'"
On Balancing Risk: "You almost need to be schizophrenic. Watch for the societal risks, but as a board member, I need to go hard on this because if I don't, others will."
On Organisational Agility: "The better thing you can do is build an organisation that is highly flexible and reactive. Whatever happens, you'll react very fast."
On Implementation Strategy: "Start small. It's not like how you're going to present yourself to the market. Take a function and just get some wins there."
On Future Talent: "There are big questions around what kind of people will thrive in this new environment. At the end of the day, business is all around the people side."
Actionable Next Steps
For Boards:
Start using AI tools yourselves—daily—before advising management
Focus on identifying quick wins with immediate, measurable impact
Ask whether the organisation has the right mix of build, buy, or partner strategies
Prioritise agility over rigid risk management frameworks
Challenge management on talent acquisition and development strategies
For Executive Teams:
Identify functional areas where AI can make an immediate difference
Build momentum through showcasing small, successful implementations
Measure ROI on AI initiatives with clear metrics
Focus on developing adaptable, fast-moving decision processes
Rethink what skills and capabilities will matter most in the future
For Both:
Separate societal concerns from business imperatives
Develop hands-on experience with AI tools, not just theoretical knowledge
Focus on being responsive and adaptive rather than predictive
Build a culture of experimentation rather than perfection
Consider how education and training need to evolve for an AI world
Final Thought
Stephane's perspective is refreshingly pragmatic: boards don't need to become AI experts, but they do need to use the tools, understand their impact, and focus on building organisations that can adapt quickly to changes.
As he puts it:
"You don't need to predict the future. You need to be ready to adapt to it."