Early Lessons in AI Enablement: People, Patterns, and Pitfalls

As I'm successfully closing out my first month in my new role here at Kentico, I thought I would share my thoughts and key takeaways, and present some of my more "off the cuff" LinkedIn posts in a more structured form.

If you are part of an organization at the start of its AI adoption journey, an AI sceptic, or simply someone wishing to read a different take, this might be of interest to you. AI tooling is an area of rapid development, so please don't take what I say as gospel; I would be happy to hear your takes.

It's never a cold start

Even for a position as contemporary as mine, there are always going to be existing initiatives you must pick up, early adopters, staunch deniers, and everything in between. In my case, Kentico had already adopted some AI tools, we have developers who are very early adopters, and teams were already building and shipping AI tools for our platform – like AIRA, the native AI agent in Xperience by Kentico.

Coming into this environment, my role is not just to "introduce AI". My top priority is to introduce cohesion, enable our active users to get more value from the tooling, expand accessibility to new users, and establish ways to measure both the impact and the extent of adoption.

Two tracks, one curve

New tools and processes have always been polarizing, but few have been as controversial as AI. Between job security concerns, ethical concerns, the impact of AI on our thinking ability, low-quality results, and many other reasons, you will end up with a fair number of sceptics and detractors.

On the other side of the spectrum, it's an exciting new technology to make use of: it can automate a lot of the toil, allow you to ship more, and help with adopting new technologies. So it's only natural that there are a lot of avid supporters.

Regardless of what the adoption of a new tool might promise, nobody can escape the J curve. Early adopters will benefit the most, but until the rest of the people, processes, and tooling catch up, you can expect to hit a productivity wall.

This is nothing new. What is new with AI, however, is the scale. There is an incredible number of people interested in AI tools; the closest parallel we have seen in recent history is probably the introduction of cloud computing. There is also an incredible number of promises made by AI tools, but there's a stark difference between the reality and the overall vision.

Hype vs reality: control the narrative before it controls you

The hype is real and it's overwhelming. A lot of us are used to the developer mindset of constant trial and error, documentation deep dives, dealing with junk in our system, etc. That's not necessarily true for everyone in a company. People get promised magic, then they try using the tools, and the magic simply isn't there.

That's where I come in: triaging expectations, collecting complaints and finding potential solutions, focusing initiatives on what's realistic, and reframing AI adoption as exactly what it is, just another tool adoption.

Experiment, create new baselines, add guardrails, review, iterate.

The cloud analogy is very fitting. A groundbreaking tool, but not the best tool for everything. I was there as a developer starting out: moving things to the cloud to save on costs, lower maintenance requirements, sysadmins are dead, uptime is no longer an issue, and we have so much more processing power now! I was also there later in my career when we were migrating projects away from the cloud due to costs, security concerns, processing requirements, etc.

I am expecting AI, at least in its current state, to be the exact same. Everyone is experimenting with use cases, but once the hype settles, there are going to be some very clear applications for AI, and some applications where it should not be used. We are currently on this path, and part of what everyone should be doing is this trial and feedback loop. If AI is not producing the results you're expecting, that's fine. Understand what it's not good for and find areas where it's an effective solution.

One note before I close this section. This is not something I have experienced at Kentico, but it's very prominent outside and it might affect your organization. "AI shame" is widespread, and it can manifest as anything from defensive posturing on LinkedIn to aggressively shutting down people for daring to use AI tools in their work. All I can say is that humanity's superpower is using tools to make better tools. Get over it. You're not going to a library to search for manuscripts, and you're not punching nails into walls with your bare fists.

One size does not fit all

As I was saying, AI tooling is not always going to work for you, at least not in the exact same way that it does for others. For example, a microservice architecture with well-defined interfaces, separated concerns, and strong documentation is favored by most of the AI tools available.

Are we going to rearchitect our whole platform to be microservices? No. Are we going to feed every single line of code we have in our framework into an AI and hope for the best? Also no.

So, what do we do?

  • Even without microservices, well-defined and documented interfaces are extremely valuable to humans and AI agents alike.
  • There are new tools getting released constantly, some handle larger codebases better than others.
  • Experiment and improve how we provide context to our AI tools.
  • Accept that there's no decent solution for this now and focus on areas that have one.

Beyond code: the 80% opportunity

As reported by various sources, including this report by DX, coding is not the majority of what a developer does. Even if a developer's day is 20% coding, why are we all obsessed over that? There's another 80% waiting to be optimized. I would argue there's more time and efficiency to be gained by not focusing on the coding part at all.

Like the Agile Manifesto says: "Give them the environment and support they need and trust them to get the job done." So why is everyone telling developers how they need to write code all of a sudden? If you let the people building the solution make the final decisions, you will get something much more maintainable and of higher quality.

Allow your developers to share their findings and find ways where AI can be used to improve their development experience, but don't dictate what they must be doing. Instead, focus your efforts on finding ways AI can be used to enable your organization to do more, automate more, and increase the quality of your work.

Here are some practical examples:

Utilize voice-to-text

When I was drafting this post, I spoke into a microphone and asked GPT-5 to categorize my thoughts into the different sections of the article. After a meeting, I did the same to create a summary of my takeaways that I could share on Teams. You can do the same with anything from a standup to rubber duck debugging.
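
To make that concrete, here's a minimal sketch of the flow, assuming the OpenAI Python SDK; the model names are placeholders for whatever your organization has licensed, and the file name is made up:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the recorded voice memo.
with open("voice_memo.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask a chat model to organize the rambling notes into named sections.
response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Group the following spoken notes into named article sections."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```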

Enable people to do things they couldn't do before

I know my way around an Excel sheet, but I'm by no means an expert; the average BA would destroy me in speed and quality. Before, I would probably have had to write a quick Python script to analyze the data. Now I can feed it into an agent in Cursor and let it help me perform the analysis. Sure, it might be slower than the Excel master, but it takes significantly less time than doing the same thing on my own used to.
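
For illustration, the kind of throwaway analysis an agent might draft for me looks something like this; it assumes pandas (plus openpyxl for .xlsx files), and the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical export; the file and column names are made up for illustration.
df = pd.read_excel("support_tickets.xlsx")

# The kind of question a BA would answer in minutes with a pivot table:
# ticket volume and average resolution time per category.
summary = (
    df.groupby("category")
      .agg(tickets=("ticket_id", "count"),
           avg_resolution_days=("resolution_days", "mean"))
      .sort_values("tickets", ascending=False)
)
print(summary.head(10))
```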

AI can give you context

One of the things I have struggled with, both as a developer and as a PM, is context. You often get tasks that require you to change an existing part of a system, and the research might take longer than the solution itself. AI agents are great at generating documentation based on the changes you've made. It has never been this "cheap" to document something.

You could do something like this:

  • Finish your feature, generate documentation.
  • Use MCP to push the documentation to the Wiki.
  • Use MCP to update the ticket and/or pull request with this information.
  • Next time someone has to work on the section, they can ask the agent to use the documentation & Git history as context.

How is that for a time save? Even if no code was generated, there are significant gains in the adjacent areas.
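
In practice, the generation step can be as small as a short script, whether an agent runs the equivalent through MCP or you wire it up yourself. Here's a rough sketch under the assumption that the OpenAI Python SDK handles the summarization; the wiki endpoint and payload are hypothetical stand-ins for whatever your wiki or MCP server actually exposes:

```python
import subprocess

import requests
from openai import OpenAI

client = OpenAI()

# 1. Collect the changes made on the feature branch.
diff = subprocess.run(
    ["git", "diff", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

# 2. Ask a model to turn the diff into short, developer-facing documentation.
doc = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Write concise developer documentation for this change."},
        {"role": "user", "content": diff},
    ],
).choices[0].message.content

# 3. Push the result to the team wiki (hypothetical endpoint and payload).
requests.post(
    "https://wiki.example.com/api/pages",
    json={"title": "Feature documentation", "body": doc},
    timeout=30,
)
```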

Your metrics can be your doom

It's fine to start with imperfect metrics, but realize that tracking the wrong thing can be more destructive than not tracking anything at all. Take, for example, Coinbase's recent goal of having 50% of daily code be AI-generated. If you set this as a target, developers will hit it, but it tells you nothing about security, maintainability, efficiency, or purpose. What did you achieve?

I don't have a good answer for this just yet. I will have a separate post outlining my journey, findings and thoughts about measuring AI adoption & impact. What's clear to me is that a lot of the data will come from surveys and feedback sessions, as new tools will impact people's daily work. You can track the adoption, but the impact should be visible in your existing KPIs and metrics.

People want updates

One thing I have heard from everyone I have spoken to since I started is that they want more organized updates. ICs want them, management wants them, the business wants them, everyone does! You need to find the right balance between regular updates that carry real value and AI-related spam.

Naturally, you will have to provide organization-wide updates, discuss the next steps with the various leadership teams, and so on. However, I strongly believe that tooling adoption comes from the ICs and not from the business and leadership, so I started running regular AI Coffee Chats for anyone who is interested in discussing or presenting something related to AI at Kentico.

Don't block progress, guide it

Kentico started strong. AI tools are available, training is available, the people are on board, the platform is becoming smarter, and there are lots of individual workflow and tooling wins. My challenge is to figure out how to bring all of this together without becoming a bottleneck. No two organizations are identical, and, as I found out, no two local AI editors will be the same either. Your challenges will be different.

I would love to hear your thoughts, please stay tuned for more updates, and let me know if you're particularly curious about any details.

Alexandros Koukovistas

Helping teams build with AI | API & Developer Experience | Tech Content Creator

Also, if you'd like to read more about AI at Kentico, please check out our community portal https://community.kentico.com/blog?Page=1&SortBy=publishdate&dxTopics=ai
