Why Generative AI value creation starts with people, not technology
Disclaimer: this article focuses on how knowledge workers across industries and roles can extract more value from general-purpose Generative AI large language models (LLMs)—now roughly 30 months after ChatGPT made its public debut in November 2022 and captured the world's attention. Specialized or domain-specific GenAI models are beyond the scope of this article.
Key Takeaways
AI has delivered trillions of dollars in value over the last 40 years
Four decades of Deterministic and Analytical AI models have already unleashed trillions of dollars in economic gains, even if recent clickbait headlines increasingly suggest most corporations are far behind on capturing value from their AI investments. This is just not true when we consider the full range of AI applications threaded through our economy and society.
Consider the everyday miracle built into your car’s dashboard: your GPS navigation system. Back in the 1980s, trips often started with unfolding paper maps on the hood, hoping road signs matched our scribbles. In the 1990s, standalone GPS units began voicing turn-by-turn prompts. Dashboard screens soon added a moving blue dot, ending the “Did we miss the exit?” panic. Cloud-connected smartphones layered on live traffic, crowd-sourced hazards, and machine-learning ETAs that reroute you before brake lights flare. Today, AI scans years of patterns and whispers: “You need to leave at 8:07 a.m. to arrive on time.” Once you are on the road, it predicts your arrival time to within a minute.
The benefits? AI-powered GPS navigation slashes wrong turns, saves hours, cuts fuel costs, reduces crash risk by eliminating frantic catch-up speeding, and replaces arrival anxiety with confident punctuality. It delivers these tangible benefits every single day to several hundred million drivers and their vehicles worldwide. As importantly, anyone who can enter their destination gets the full value from the technology. Amazing AI in action!
GPS is just one of thousands of advanced AI-enabled software applications that serve all users with minimal differentiation based on user skills. Other notable examples include weather forecasting, ride-sharing apps, and financial transaction fraud detection, as well as personalized search and e-commerce experiences. More recently, the entire field of autonomous vehicles—cars, trucks, drones, aircraft, spacecraft—is powered by traditional AI in which decision-making aims to be 100% perfect. In addition, Analytical AI applications have been built to serve the needs of domain experts with deep skills who use tailored systems to augment their specialized talent. This includes dynamic digital ad pricing, high-speed automated equity trading, and advanced cybersecurity systems.
From Decision-Making to Design-Thinking
In 1997, IBM's Deep Blue stunned the world when it beat world chess champion Garry Kasparov in a televised six-game match. This was the moment when the world at large finally understood how powerful AI had become at decision-making.
In 2022, OpenAI made ChatGPT available to the world, introducing an entirely new era of Generative AI with distinctive benefits that were equally stunning. What were the big leaps that caught the world's imagination?
Generative AI pushed the AI frontier from decision-making to design-thinking. That is the fundamental capability leap, and the examples are inspiring in their diversity. Generative AI LLMs can draft prose; create images, videos and music; write software code; and even design industrial 3-D parts—shrinking the gap between idea and artifact. GenAI can also role-play customer or employee conversations, design conference agendas, pressure-test assumptions, and mine insights from data. The breadth and depth of uses is limited only by your creativity and imagination. Headlines decrying “disappointing AI returns” almost always refer to GenAI, not AI as a whole, and the underlying issue is that the technology is outrunning human skill. Until organizations invest to close that gap, GenAI ROI will stay lopsided.
A parallel frontier comes into focus: extending task automation into the executive suite. Going all the way back to the Industrial Revolution in the late 18th century, executives have consistently used machines to automate or augment the work of their employees—while their own roles stayed untouched. However, GenAI can increasingly handle higher-order executive tasks such as constructing future market scenarios, drafting and reviewing contracts, analyzing budgets and plans, and sketching new product and service ideas, nibbling at executive workflows across multiple domains. We have entered uncharted territory in the transformation of work. The real "known unknown": will executives fully embrace tools that might diminish or disrupt their own careers?
The Hidden Bottleneck: Uneven Authentic Intelligence*
Generative AI output directly reflects the human input on the other side of the screen. Engage a general-purpose LLM possessing GPT-4 level reasoning with sharp, iterative dialogue and it soars; fire off a boring one-liner question and it returns mediocrity. In most firms, that “authentic generative intelligence” is unevenly distributed, making user capabilities—not model quality—the real constraint.
We’ve seen such skill gaps before in the history of software products. In spreadsheets, novices do basic math and statistics while experts build dynamic models so advanced they compete in “Excel e-sports” championships such as the Financial Modeling World Cup (yes, that is a real event). However, we have had 45 years—extending back to the arrival of the first spreadsheet, VisiCalc, in 1979—to build and strengthen that acumen.
Meanwhile, GenAI has compressed decades of capability innovation into months. Part of the reason for the rapid acceleration: we are now using AI software to design, build and test next-generation AI software. ChatGPT landed 30 months ago as a ready-made, massively powerful co-creator and has already raced through several upgrades, joined by LLMs created by Anthropic, Cohere, Google, IBM, Meta, Microsoft, and more.
It’s as if the 2025 edition of Excel—backed by cloud compute and 5G data streams—had dropped in 1981, just two years after the introduction of VisiCalc, alongside a dozen credible rivals. The technology is sprinting forward at light speed; now human capability must catch up just as fast.
[*] There is no standard definition of authentic intelligence, but we might frame it as the holistic set of forces that directly and indirectly shape how individuals think, and we can strengthen it in many ways. Good news: people can embrace a growth mindset to elevate and enhance their authentic intelligence.
Building bridges from Authentic to Artificial Intelligence: Nine interventions that can unlock more value
Prioritize skill building, starting with learning programs
Do not be surprised if you ultimately invest far more money and time in upskilling people than in procuring tokens. Especially when it comes to extracting value from general-purpose LLMs, the cost of tuning, testing and deploying the technology is likely to be a fraction of the end-to-end investment in preparing the workforce to take full advantage of it. That preparation starts with designing and deploying a suite of learning programs tailored to different levels of GenAI experience and expertise—beginner to advanced users.
[1] Re-teach the basics, relentlessly. It turns out that working with GenAI feels anything but intuitive to most knowledge workers. Showcase the wide variety of ways to use GenAI through live workshops and/or immersive online learning programs. Bake quality-control education into these programs early—compliance guardrails, fact-checking, plagiarism scans, deep-fake detection—so users learn to validate as fast as they create.
[2] Instill a conversational, editorial mindset. Nothing in our past software experience encouraged thoughtful, dynamic conversations with the technology; while conversational in form, legacy chatbots do not count because they were designed to answer only simple questions in a narrow lane—and historically with disappointing results. Therefore, establish Rule #1: an LLM is a co-creator, not a search box. Iterate prompts the way you’d passionately debate ideas, insights and initiatives with a colleague. Just as all great writing is 99% editing, the same is true of prompting an LLM. Experiment with teams co-elevating and iterating prompts together (a minimal sketch of this iterative loop follows item [3] below).
[3] Proactively upgrade human generative skills. Compelling ideas, strong prose, sharp arguments, and striking visuals always start in the human mind. Learning programs that build broader and deeper skills in critical thinking, creativity, problem solving, storytelling, and visual literacy will lift prompt quality and evaluation rigor, letting GenAI amplify—not replace—authentic intelligence. As importantly, focus on broad-based capability building to avoid leaving people behind.
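To make the co-creator mindset in item [2] concrete, here is a minimal sketch of an iterative prompting loop in Python. It assumes a hypothetical call_llm helper standing in for whatever enterprise-approved chat endpoint you use; the brief and the editorial passes are illustrative only.

```python
# Minimal sketch: treat the LLM as a co-creator by iterating on a draft,
# not by firing a single one-shot prompt. `call_llm` is a hypothetical
# placeholder; wire it to your organization's approved chat-completion API.

def call_llm(messages: list[dict]) -> str:
    # Placeholder response so the sketch runs end to end without a provider.
    return f"[draft responding to: {messages[-1]['content']}]"

def co_create(brief: str, editorial_passes: list[str]) -> str:
    """Start from a brief, then refine the draft through editorial prompts."""
    messages = [
        {"role": "system", "content": "You are a thoughtful writing partner, not a search engine."},
        {"role": "user", "content": brief},
    ]
    draft = call_llm(messages)
    messages.append({"role": "assistant", "content": draft})

    # Each pass mirrors how a colleague would push back on a draft.
    for note in editorial_passes:
        messages.append({"role": "user", "content": note})
        draft = call_llm(messages)
        messages.append({"role": "assistant", "content": draft})
    return draft

final_draft = co_create(
    brief="Draft a one-page memo proposing a GenAI learning program.",
    editorial_passes=[
        "Challenge the weakest argument and rewrite that section.",
        "Tighten the memo to 300 words and add one concrete success metric.",
        "Rewrite the opening so a skeptical executive keeps reading.",
    ],
)
print(final_draft)
```

The point is the shape of the interaction: several editorial turns against a persistent conversation, exactly the way great writing emerges from successive edits.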
Reward experimentation and peer coaching
While offering a full portfolio of GenAI learning and training programs will prime the pump, skill building comes primarily from intensive front-line experience with the technology, coupled with coaching and mentoring. At the same time, measure what matters. Analytical and discriminative AI needed decades to pay off; GenAI will dramatically shrink that time to economic value—but it’s still a marathon, not a sprint.
[4] Measure learning velocity, not productivity. Far too many companies have focused too narrowly on understanding and measuring productivity benefits. Measure what matters, which for general-purpose LLMs is still trial, adoption and usage. Given that this is still breakthrough technology, it is far more valuable to track and celebrate how quickly people are trying the GenAI tools, and to ask them to codify what they have experienced and learned (a small sketch of such adoption metrics follows item [6] below). User feedback provides insights and stimulates more engagement. Consider internal team competitions to accelerate experimentation and build momentum across the enterprise.
[5] Send your power users on the road to demonstrate compelling success stories. As with any innovation, successful case examples build confidence and spark both replication and imagination. This is perhaps even more important with GenAI because there are thousands of different ways to extract value from the conversations. Getting power users to go out and demo the LLM to their colleagues across the company has high value because it is true peer-to-peer learning that incorporates the company's culture.
[6] Look beyond your company and industry for inspiration. We are still in the top of the first inning with GenAI. With every industry, company and research university driving AI experimentation, there is a lot to be learned by spending time outside your company examining the AI agendas and innovations at other organizations. Do not prejudge the value of talking with people in entirely different industries or roles; they might have novel insights that apply to your workforce.
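As a companion to item [4], here is one small way to compute learning-velocity metrics (weekly active users and first-time users) from a usage log. The event format and sample data are assumptions for illustration; adapt them to whatever telemetry your GenAI platform already captures.

```python
# Minimal sketch of "learning velocity": per-week counts of active users and
# first-time users from a hypothetical usage log of (user_id, session date).
from collections import defaultdict
from datetime import date

events = [
    ("ana", date(2025, 5, 5)), ("ben", date(2025, 5, 6)),
    ("ana", date(2025, 5, 13)), ("cho", date(2025, 5, 14)),
    ("dev", date(2025, 5, 15)), ("ben", date(2025, 5, 20)),
]

def weekly_learning_velocity(events):
    """Return, per ISO week, how many people used the tool and how many tried it for the first time."""
    first_seen = {}              # user_id -> ISO week of first observed session
    active = defaultdict(set)    # ISO week -> set of active user_ids
    for user, day in sorted(events, key=lambda e: e[1]):
        week = day.isocalendar()[:2]   # (year, week number)
        active[week].add(user)
        first_seen.setdefault(user, week)
    return {
        week: {
            "active_users": len(users),
            "new_users": sum(1 for w in first_seen.values() if w == week),
        }
        for week, users in sorted(active.items())
    }

print(weekly_learning_velocity(events))
# Celebrate rising new_users and repeat active_users, not output per hour.
```

Trend lines on these two numbers tell you whether experimentation is spreading, which is the signal that matters at this stage.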
Build Socratic interfaces that ask before they answer
This article's title states that GenAI value creation starts with people, and that's true. However, one aspect of the technology related to that starting point—the user experience—warrants attention because it directly shapes how people behave and, in turn, how much value they extract from GenAI systems.
My expectation is that this will be the next frontier in LLM user experience innovation. In the same way we have been conditioned to approach a search engine as a tool that answers our questions (and does not ask us any), there is work ahead to evolve the GenAI user experience so that questions can readily flow in both directions between human and software—just as real-world conversations unfold. Some LLMs and corporations are in the early stages of embedding this capability, and it is all within reach given how the systems are designed.
[7] Prompt the prompter with a Socratic user experience. In the real world of human conversation, a smart question is just as likely to elicit a savvy question back from the person on the other side of the table. Equip GenAI to clarify intent and assumptions before generating answers to initial questions (a minimal sketch of this pattern follows item [9] below). The more this feels like a natural and provocative human conversation, the better. Net: a Socratic UX pushes novices to sharpen their mindset and thinking, which creates a path to more valuable outcomes.
[8] Consider investing in task-aware Socratic questioning. Drafting a strategy memo requires different queries than writing a performance review. Generating a photo of a filled conference room is different from generating an image for a new car design. The interface should adjust its line of inquiry accordingly, as the sketch after item [9] also illustrates. Provide easy access to domain-specific style guides, glossaries, and relevant knowledge so questions—and answers—reflect enterprise context and content.
[9] Visually differentiate the UX from a search engine to reinforce the two-way conversation. The fact that most LLMs present only a small text-entry box resembling a search engine's query field automatically triggers certain user mindsets and behaviors. The more you can create a UX that invites and encourages conversation, the more likely users are to adjust how they approach the software. At Boeing, we started with the platform name, "Boeing Conversational AI," and then continued to inject guidance to encourage conversational behaviors.
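To illustrate items [7] and [8], here is a minimal sketch of a task-aware Socratic front end: a system prompt instructs the model to ask clarifying questions before drafting anything, and the question set changes with the task. The task labels, questions, and message structure are assumptions for illustration, not any particular vendor's API.

```python
# Minimal sketch of a task-aware Socratic wrapper: the assistant is told to
# ask before it answers, and the clarifying questions depend on the task.

CLARIFIERS = {
    "strategy_memo": [
        "Who is the decision-maker, and what decision should this memo drive?",
        "Which market assumptions are you least sure of?",
    ],
    "performance_review": [
        "Which two or three outcomes do you most want to recognize?",
        "What growth area should the review address, and in what tone?",
    ],
    "image_brief": [
        "Is this a literal depiction or a concept design?",
        "What style references or brand constraints apply?",
    ],
}

SOCRATIC_RULES = (
    "You are a co-creator, not a search box. Before producing any deliverable, "
    "ask the clarifying questions below, wait for answers, and restate the "
    "user's intent and assumptions in one sentence."
)

def build_opening_messages(task: str, request: str) -> list[dict]:
    """Assemble the first turn so the model asks before it answers."""
    questions = CLARIFIERS.get(task, ["What outcome would make this a success?"])
    system = SOCRATIC_RULES + "\nClarifying questions to ask first:\n- " + "\n- ".join(questions)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": request},
    ]

for m in build_opening_messages("strategy_memo", "Draft a memo on entering the drone-inspection market."):
    print(m["role"].upper() + ":\n" + m["content"] + "\n")
```

Visually, the same idea argues for a UX that renders these clarifying questions as a visible first step rather than hiding everything behind a single search-style text box.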
GenAI is an amplifier, not a replacement. It magnifies the clarity, originality, and discipline of the people who wield it. Organizations that raise authentic intelligence across their workforce—and embed Socratic, experimentation-friendly interfaces—will capture the next wave of AI value. Those that ignore the gap will generate more noise than value.