Humans Fall in Love with Solutions. AI Can Help Us Fall in Love with Problems
Why augmenting problem exploration with artificial intelligence may be the biggest yet underused lever for innovation
One of the low-hanging fruits of using artificial intelligence to transform how we work is harnessing its power to help people solve problems faster and, more importantly, more creatively. I have written extensively about this (start here). One of the main epiphanies of recent years, for example, has been how artificial intelligence can critique our ideas, strengthening the idea flow at the end of its funnel, an area where many people fall short, whether for lack of skill or because they tend to avoid critiquing others' work too directly.
Here, I want to turn to the other end of the idea flow: the upstream part of problem-solving and creativity that, as we will see, largely determines the quality of whatever happens downstream. I expand on an earlier article about how AI can help us discover which problems to solve. In this essay, we discuss what comes next, drawing on both current scientific understanding and practitioner experience.
Innovation efforts often jump straight to brainstorming fixes, seduced by the “dopamine hit” of a clever solution. Innovation facilitators, for instance, know how hard it is to keep working teams focused on problem-exploration exercises rather than paying lip service to them before moving on to the "real work". Yet theory and evidence remind us that the quality of the solution space is bounded by the quality of the problem space we first explore. Artificial intelligence now offers a practical, high‑return way to strengthen that front‑end work: accelerating, broadening, and systematizing problem exploration while keeping humans firmly in charge of purpose and judgment.
What the Research Already Tells Us
There is a reasonably extensive corpus of research on this. (To be frank, I would've expected more, but research is seemingly skewed the same way practitioners are—we focus more on solutions than on problems.)
Over the past 10–15 years, scholarly and managerial literature on innovation management has converged on the critical importance of problem-space exploration—the thorough investigation, framing, and (re)formulation of the problem itself—before moving into solution generation. Research on design thinking, creative problem solving, and strategic problem formulation demonstrates that teams and organizations that invest in clarifying and reframing the problem systematically produce more original and higher-impact ideas.
Across design thinking (Liedtka, 2015; Micheli et al., 2018), creative problem‑solving (Abdulla et al., 2020), and strategic management (Nickerson et al., 2012), the message is consistent: a well‑defined problem is half the innovation, or at least a large part of it.
Now let's break the issue down to identify where we can address it. A common framework for disciplined and thorough ideation, design thinking's Double Diamond, highlights two macro phases: Diamond 1, problem exploration and definition, and Diamond 2, solution generation and delivery.
Our focus is on the first diamond, where early framing determines everything that follows. Anecdotal evidence from AI‑assisted ideation projects conducted over the last few years suggests material gains in speed and depth of insight when humans partner with AI during this phase.
A few assumptions grounded in practice can guide us here.
The central hypothesis of this work, supported by early evidence collected using artificial intelligence-assisted ideation technologies and practices, is that AI, used as a cognitive partner in Diamond 1, enables a more comprehensive and insightful definition of the problem space than unaided human work, ultimately yielding solutions that are both more novel and more useful.
Humans still supply strategic intent and critical evaluation; the machine delivers rapid, wide‑angle exploration that would be prohibitively slow or narrow if done manually.
Why the AI-Human Partnership Works
Both humans and machines are bound by their capabilities (knowledge, logic) and incentives (hormones in the human brain, tokens in the machine). Unsurprisingly, human limitations show up as organizational barriers.
AI workflows, such as those in chat interfaces, are tuned to follow our instinct to move quickly to solutions. They also risk so-called "institutional knowledge replication": staying well within the known-knowns instead of venturing further.
They can, however, be reconfigured by asking for more reflection time (and tokens) on the problem. The schemas below, part of an overall process illustrated in a previous article, show exercises that can help with that.
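One such exercise can be scripted as a set of prompt templates that redirect the model from solving to exploring. This is a minimal sketch: the exercise names, prompt wording, and function are illustrative assumptions, not a reference to any specific tool or vendor API.

```python
# Sketch of problem-exploration prompt templates for Diamond 1.
# The exercises and wording below are illustrative assumptions.

EXERCISES = {
    "decompose": "Break the problem below into its distinct sub-problems. "
                 "List each sub-problem with the assumption it rests on.",
    "reframe": "Restate the problem below from the point of view of three "
               "different stakeholders, and note what each framing reveals.",
    "analogies": "Name other domains facing a structurally similar problem "
                 "to the one below, and what their framings suggest here.",
}

def exploration_prompt(exercise: str, problem_statement: str) -> str:
    """Wrap a raw problem statement in a problem-exploration instruction,
    instead of asking the model for solutions right away."""
    if exercise not in EXERCISES:
        raise ValueError(f"Unknown exercise: {exercise!r}")
    return f"{EXERCISES[exercise]}\n\nProblem:\n{problem_statement}"

# Feed the result to any chat model before any solution brainstorming.
prompt = exploration_prompt("reframe", "Customer churn is rising in segment B.")
```

The point of the design is that the exploration instruction, not the user's raw question, sets the model's task, so the "reflection time" is spent on the problem space by construction.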
Consider the following examples, where artificial intelligence can help delve into the problem and take different perspectives. Similar opportunities can be unlocked in science and R&D, among other fields.
Once again, obtaining the best results requires synergy between artificial intelligence and human capabilities. People act as a principled "System 2" (in Daniel Kahneman's terms) to the machines' faster "System 1" thinking. The machines, especially if well configured and using the latest reasoning models, can in turn prevent us from falling into our own System 1 thinking. (More on this here.)
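That pairing can be sketched as a simple two-pass pattern: a fast first reading of the problem, followed by a deliberate critique of its framing before any solutioning. This is a minimal sketch under stated assumptions; the `ask` callable and the prompt wording stand in for any chat-model call and are not a specific vendor's API.

```python
from typing import Callable

def deliberate(problem: str, ask: Callable[[str], str]) -> str:
    """Two-pass sketch: a fast 'System 1' first take, then a slower
    'System 2' pass that questions the framing before solutioning.
    `ask` abstracts a chat-model call (an assumption, not a real API)."""
    first_take = ask(f"Give your immediate reading of this problem:\n{problem}")
    critique = ask(
        "Before proposing any solutions, critique the reading below: which "
        "assumptions are untested, and how else could the problem be framed?\n"
        f"Problem: {problem}\nReading: {first_take}"
    )
    return critique

# Usage with a stub in place of a real model call:
result = deliberate("Support tickets doubled last quarter.",
                    ask=lambda p: f"[model reply to {len(p)} chars]")
```

Because the critique pass always runs, the workflow spends tokens on the problem framing even when the first take looks convincing, which is exactly the bias the essay describes.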
Artificial intelligence, if designed around a user experience that supports, rather than substitutes for, the human, can help here, for instance through:
For more details and practical guidance, please review the previous essays.
Conclusion: Better Questions, Better Innovation
A decade of empirical work underscores a simple truth: the creative ceiling of any innovation effort is set early, when we decide what problem to solve. Humans tend to move to solutions too fast. Artificial intelligence, if instructed appropriately, does not suffer from the same bias or chase the same dopamine hit. On the contrary, it can be incentivized to spend time understanding problems well.
As a result, if used well, with competent humans firmly in the loop, AI now gives organizations a scalable means to deepen that decision. By pairing human strategic judgment with machine‑driven exploration, teams can, among other things:
The result is a richer portfolio of solution avenues and, ultimately, more original and valuable solutions. Companies that cultivate disciplined, AI‑augmented problem framing are not just “doing design thinking faster”; they are upgrading the very substrate of innovation, ensuring they invest in solving the right problems before investing in solving them well.
This essay is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here. Get in touch if you want these capabilities to augment your organization's collective intelligence.