Vol.31 "AI Tsunami: Riding the Waves of Change"
Volume Summary TL;DR (by Microsoft Copilot)
Stience Time
(Not a typo - if you know you know. Magic or stience, which side are you on? - IYKYK)
Contrary to popular belief, a tsunami isn't just a single gigantic wave that instantaneously wipes out a community. It's actually a series of waves, often called a "wave train," that build upon each other with increasing intensity. As these waves approach shallower waters, their speed decreases, but their height and energy increase due to the interaction with the seafloor and coastline.
Another interesting fact is that tsunami waves can travel at speeds exceeding 500 mph (800 km/h) in deep ocean waters, similar to the speed of a jet plane. However, as they near the coast, their speed drops significantly, while the impact intensifies. In the deep ocean, these waves often go unnoticed on the surface, barely a ripple, making them a hidden but powerful force.
While I love a good tsunami and "stiency" story, that's not what this article is about. This volume is about the 'wave of change' that all organizations around the world are now facing with artificial intelligence (AI).
Life in the Zone
For the better part of my life, I've lived in one danger zone or another. I spent quite a bit of time in Southern California, earthquake territory, with a few 'not so fond' memories seared into my brain: holding on for dear life when the Earth started shaking, watching buildings sway back and forth, and literally using my body as a human shield over my infant son while pictures and glass were breaking all around us. Not joking.
Totally not cool and yet so totally cool at the same time. Nature is amazing and awe inspiring.
I've been living in the Pacific Northwest for quite a few years. Although this area is not typically known for earthquakes, small ones are a daily reality, and the dangers are even greater. I literally live on an earthquake fault in the shadow of Mount Rainier and other non-dormant volcanoes. It's also a subduction zone.
Here I go again, I said this wasn't going to be about science, I just can't help myself sometimes 🤓
True to the title of this volume, I also live adjacent to a tsunami zone. Contained in this article are actual pictures of the signs in my neighborhood. I apologize in advance for the picture quality, some were taken through car windows while driving, and yet you'll still get the point. You'll also notice that photography may not be one of my top skills. 📷
Entering the Tsunami Zone, a.k.a. The Current State
For those who haven't noticed, the world is different now. Once thought to be a fad and hype, AI is actually here to stay. It's impacting every single job in every single role in all reaches of the globe (and if it hasn't reached yours yet, it soon will).
The first signpost ahead says "Entering the Tsunami Zone," a literal signpost yelling at people to be aware, to stop and take notice of where they are. In terms of AI, the landscape is rapidly evolving, and organizations are under immense pressure (that nobody signed up for) to adopt these technologies.
While AI is super cool in so many different ways, many companies are simply being caught flat-footed (unprepared) and are not ready for the current state. This article is focused on AI adoption and the challenges being faced by all organizations, everywhere.
Do you remember the part where I said that often things go unnoticed, and the waves of change are cumulative? Here's proof.
Some of the smaller organizational waves include people security: ensuring that people are trained to handle AI tools securely, safely, and responsibly. Overall training, continuous upskilling, and filling skill gaps to keep pace with AI advancements are crucial. Competition means staying ahead of competitors by leveraging AI for innovation.
As mentioned previously, AI is the great leveler; large organizations that are too slow to react are quickly being outpaced by more nimble organizations embracing AI. Then there's reputation, which involves maintaining a positive public image while deploying AI responsibly and paying attention to the internal reputation among employees. Companies that decide not to roll out AI internally risk losing good people who carry tribal knowledge.
Wave after wave, and we haven't even gotten started yet. The waves of change are real.
Then, of course, there's responsible AI: how do we implement ethical AI practices and avoid biases? What does the infrastructure look like? We have to upgrade our IT infrastructure to support AI workloads. What about access controls? Implementing strict access controls to protect sensitive data is essential.
Preventing data leaks through robust data governance is also crucial. What's the strategy for licensing? How do we ensure compliance with AI software licensing requirements? How much does this stuff actually cost? How do I predict costs when costing is not a simple calculation?
These are just some of the examples, each one representing a shockwave moving undetected through the organizational waters. Until now, that is; now they are being noticed and rising in intensity.
Knowing where I live and accepting the fact that I live there tells me that while I cannot change my physical environment, I can accept that there is a current state that I need to pay attention to (just like organizations and AI).
In the unlikely event that a tsunami hits my home, I have the power and agency to at least prepare for eventualities and possibilities.
Hopefully no one's home when and if it happens, that's also part of my planning. 😁
Being in the Zone
Being in the tsunami hazard zone, I need to plan for risk mitigation. There's no possible way I can make all risk go away, but I can certainly deal with things with eyes wide open. This is especially true when thinking about organizations and artificial intelligence. Recognizing the hazards associated with AI adoption is absolutely crucial.
Proactive planning and risk mitigation strategies can include regulatory compliance, like GDPR and other data protection regulations, ethical considerations, data privacy, quality control, technical debt, and cloud cost optimization.
Again, this is just a random sampling of things that I'm encountering while talking to customers all around the world. My favorite question is, what's the strategy? What's the ambition? Each item in and of itself may not represent a giant shockwave; however, combined, that's a completely different story.
The combination can have serious unintended consequences when left unchecked. My question to the reader is, who is planning all of this risk mitigation and how is it being coordinated?
A big change that is happening is that IT departments are no longer the command-and-control center that they used to be (ah, the good old days - not). The business is now involved in planning and risk mitigation, as they should be. So, who is leading the organization to safety?
The next signpost is literally pointing the way to the evacuation route. I kinda wish I got a better picture, but oh well, I'm sure you get the point. There are literal signposts pointing the way; follow this path. It doesn't mean the path is easy. I've often seen these signs that simply point to a dirt trail up a hillside, but it's still a path, and safety can be on the other side.
Organizational alignment and strategic alignment are absolutely key and non-negotiable to navigating the complexities of the AI landscape. Strategic alignment across the organization, consistent with organizational goals, is mandatory. AI cannot be the goal, nor should it be. Please don't tell me that AI is your goal. If it is, we need to have an intervention.
All organizations need to engage in conscious decision-making, which means making informed decisions at every step of the way—guessing, second-guessing, rechecking, and checking all of it. Governance and oversight are crucial; there has to be an AI governance board, regular audits, and engagement with the business. And, of course, there are different altitudes of maturity. Any of you who have heard me speak on this topic know this is critical. Every department, organizational unit, and business unit will handle AI maturity differently. Pay attention to that.
Some people will run up the hill and knock others over to get to safety. Others will sacrifice themselves to help save others. Then there's a group in the middle that will literally hold people's hands and walk them to safety. The human element of safety and security maps almost one-to-one with navigating the safety of organizational risk. It has to be a team effort.
Leaving the Zone
There are literally signs that indicate people are leaving the hazard zone. You've clawed your way up the hill, and now you are in a place of relative safety. While the water may be lapping at your feet, the immediate hazard no longer presents an imminent threat. Even after mitigating all the risks that organizations face, achieving organizational alignment, and figuring out how to spend or not overspend, organizations must remain vigilant.
There used to be a three to five-year cycle of planning, but now we're down to about two or three months of planning. Once the immediate danger subsides, it's time to look around and see what risks have been mitigated and identify others we don't even know about. Continuous vigilance means maintaining ongoing adaptation, growth, and evolution.
Something as simple as bringing in an AI agent, such as Copilot in Microsoft 365, as a declarative agent could quickly evolve into an actual agent created in Microsoft Copilot Studio or extend to become an agent in Microsoft Foundry. The point is that by continuously evaluating, watching, and focusing on continuous improvement, progress is made.
Leaving the tsunami hazard zone also means paying attention to the competitive landscape and new advancements. Organizations must balance the risk of going too fast or too slow. Going too fast can lead to unnecessary expenses and other pitfalls, while going too slow means being outpaced by the competition and losing people.
And don't even get me started about Shadow AI (again)🤬
The Assembly Area
So, we started on the beach, squarely in the danger zone. We worked on understanding our risks and what it is we're trying to do. We worked on data, security, technical debt, cloud cost optimization, and spending. Now we've reached the new signpost, which is the assembly area.
This, my loyal readers, is the key and most important part of the journey. Here we focus on organizational change and alignment. Achieving long-term success with AI requires organizational change management. To achieve organizational change, there must be a culture of innovation, adaptability, and continuous improvement—easier said than done. There needs to be long-term alignment with initiatives and organizational objectives.
This is another pitfall for companies: looking for quick ROI or quick success does not foster long-term objectives or outcomes. The need for engaged collaboration and integration, facilitating collaboration between IT and citizen developers, is crucial. And if you're not convinced about citizen developers, I am probably one of them. We do exist, and we are the future.
Building a resilient culture, investing in ongoing training, and promoting AI ethical practices are also key. If I've said it once, I've said it a thousand times: 70% of the technology story involves people.
The Big Finish
Although organizations can't possibly mitigate all risks associated with AI, having a comprehensive engagement plan to achieve successful outcomes is critical. Here's a gift for you for reading this far: start with an engagement plan. Ensure that all stakeholders are engaged in this AI journey, from top management and leadership all the way to frontline employees.
A clear communication strategy is essential to keep everyone informed about AI initiatives, progress, and challenges. Do AI out in the open. Have an achievement plan that includes KPIs and measures of success. Clearly define the outcomes and what you're looking for. Experiment, learn, and evolve. Focus on reducing conflicting KPIs and fostering collaboration between different departments. Feedback, feedback, feedback, and build a resilient AI culture. Everyone in the organization needs to be AI literate.
In the past, organizations had a three to five-year strategic plan cycle. But in our new 2025 fast-paced, AI-driven world, companies are finding it challenging to grasp the current state within a very short cycle. This shift underscores the importance of agility and adaptability—topics I have written about for multiple years. While there's a risk of moving too fast and making hasty decisions, there's also a risk in moving too slowly and falling behind. Striking the right balance is key.
I know I've outlined a lot of actionable areas to focus on in this article. The good news is, you're not alone. And you shouldn't feel alone, nor do you have to be alone. There are people like me out there who live in this world every single day, helping companies navigate these complexities.
Maybe I should go back to the road signs for the tsunami evacuation notices and repaint them with the words "AI." Maybe that would get people's attention. The road signs are there for a reason. Metaphorically, they're there for us too in this context. Safety and acceptable risk are relative terms. I, for one, don't want to leave things to chance.
I make my plans, I accept my realities, and I work toward a future. This is no different in terms of AI. Actually, there is one big difference: the biggest risk-mitigation question I see is around ROI. That's just one small piece of a very large, complicated puzzle. It doesn't have to be complicated when looking at things holistically.
In safety terms, I know where I am physically in the world. I know that I'm in the zone. I clearly see my evacuation route. I clearly know when I'm leaving the imminent danger zone, and I clearly know when I'm in the assembly area working. The assembly area is not the end of the story; it doesn't end there. The assembly area is the start.
Think about it this way: the assembly area is at the top of the hill, in the area of greatest relative safety. So, let's start there. That is step number one - organizational alignment and a focus on adoption and change management. Then go down into the zone, do the hard work necessary to make sure there is a path to safety, leave the zone better informed than you found it, adapt, learn, grow, and head back to the top to focus on organizational alignment again.
Welcome to the real cycle of AI innovation - most definitely NOT a spectator sport. Surf's up! 🏄🏄‍♀️🏄
~Kevin J. Bernstein (aka "The Cloud Therapist")
Please submit topics and questions for future volumes (I'm listening)
A quick note to my readers - thank you for your feedback, support, and encouragement. I strive to bring you relevant thought-provoking content. #grateful that you choose to spend your time with me.
I appreciate your commitment to reading these all the way through. I know they can be quite lengthy. My goal is to bring you unique perspectives and things to mentally chew on.
#AI #Innovation #RiskManagement #OrganizationalChange #TechTrends #TechnologyAdoption