Non-Deterministic AI and the Emergence of Generative AI: A New Frontier

One way to understand non-deterministic AI is by first distinguishing between deterministic and non-deterministic algorithms.

In a deterministic algorithm, a given input always produces the same output through the same sequence of states. Every step is predictable, reproducible, explainable, and tightly controlled.

In contrast, a non-deterministic algorithm may produce different outputs for the same input across different runs. Its behaviour is not bound to a single path but can vary, sometimes substantially, depending on factors such as internal randomness, additional data inputs, or environmental influences.
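A minimal Python sketch of this contrast (the functions are purely illustrative, not taken from any particular system):

```python
import random

def deterministic_sort(values):
    # Same input always yields the same output via the same steps.
    return sorted(values)

def nondeterministic_pick(values, seed=None):
    # Same input can yield different outputs across runs, depending on
    # internal randomness (or, in practice, new data or environmental state).
    rng = random.Random(seed)
    return rng.choice(values)

data = [3, 1, 2]
print(deterministic_sort(data))     # always [1, 2, 3]
print(nondeterministic_pick(data))  # may print 3, 1, or 2 on different runs
```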

From the perspective of the Turing machine model, a non-deterministic AI is considered to perform correctly if, for each input, there exists at least one run that produces the desired result, even if other runs produce incorrect ones. Here, the notion of choice becomes critical. Choice overrides the traditional idea of universal validity; what matters is not that every path is correct, but that at least one successful path exists.

This viewpoint also touches on the philosophical dimensions of the Turing Test — particularly the challenge of determining whether machines can truly think, or whether this line of inquiry should be reconsidered altogether.

For instance, if state q0 with input 0 may lead to either state q1 or state q2 in a non-deterministic machine (whereas in a deterministic machine the same q0 with input 0 would lead to exactly one of them), the acceptance condition is satisfied if there is at least one sequence in which the desired outcome is reached. This flexibility introduces a vital new dimension: the system’s ability to adapt based on new information that may not have existed initially.
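A toy simulation of this acceptance condition, using a finite-control sketch rather than a full Turing machine (the states, symbols, and transitions are illustrative):

```python
# Hypothetical toy machine: from q0 on symbol '0' the relation allows a move
# to either q1 or q2; only q1 is accepting.
TRANSITIONS = {
    ("q0", "0"): {"q1", "q2"},
    ("q1", "1"): {"q1"},
    ("q2", "1"): {"q2"},
}
ACCEPTING = {"q1"}

def accepts(state, symbols):
    """Accept iff at least one branch of choices ends in an accepting state."""
    if not symbols:
        return state in ACCEPTING
    head, rest = symbols[0], symbols[1:]
    next_states = TRANSITIONS.get((state, head), set())
    # Nondeterminism: try every allowed successor; one success is enough.
    return any(accepts(nxt, rest) for nxt in next_states)

print(accepts("q0", "01"))  # True: the q0 -> q1 -> q1 branch accepts
print(accepts("q0", "0"))   # True: q0 -> q1 accepts even though q0 -> q2 does not
```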

Thus, interaction and learning from diverse inputs become defining characteristics of non-deterministic AI. The system does not simply compute; it explores, adapts, and learns.

Viewed through this lens, what might appear as 'incorrect' non-deterministic outputs are not flaws — they are creative enablers. They allow AI systems to generate a range of ideas and solutions, tapping into randomness and diversity to uncover novel possibilities.

This concept parallels what we observe in clinical reasoning, where coherence and correspondence models may lead to different kinds of conclusion validity but not necessarily to medical validity, much like systems biology may reach different endpoints than heuristic discovery approaches.

Coherence-based reasoning: “Since bronchitis involves infection of the airways in the lungs, antibiotics should help.” Correspondence-based reasoning: “Studies of antibiotics for bronchitis fail to show a benefit for most patients.”

Similarly, when considering the P versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer, non-deterministic Turing machines (NTMs) present a useful theoretical foundation.

NTMs may specify more than one possible action at a given configuration, allowing multiple possible computational paths while still fitting within well-defined complexity bounds.

And this is precisely where Generative AI (GenAI) marks a pivotal evolution in non-deterministic AI.

Generative AI: A Novel Application of Non-Deterministic Principles

GenAI — from large language models (LLMs) to advanced image generators — embodies non-determinism at its core. It does not simply produce a single, fixed answer for an input. Instead, it learns, experiments, and generates a broad spectrum of creative responses.

GenAI may be perceived as introducing controlled randomness to expand the horizon of what can be imagined, articulated, or discovered. And this is only the first step of a problem-solving process: we are now using AI to learn how to design solutions, not yet to solve the problems themselves.
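One common mechanism behind this controlled randomness is temperature-scaled sampling over the scores a model assigns to candidate tokens. The sketch below is illustrative only, with made-up scores rather than any particular model's internals:

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    """Temperature-scaled sampling: lower temperature -> more deterministic,
    higher temperature -> more diverse (and more surprising) outputs."""
    scaled = [score / temperature for score in logits.values()]
    max_s = max(scaled)
    weights = [math.exp(s - max_s) for s in scaled]  # numerically stable softmax
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

# Illustrative scores for three candidate next tokens.
logits = {"cat": 2.0, "dog": 1.5, "axolotl": 0.2}
print(sample_token(logits, temperature=0.2))  # almost always "cat"
print(sample_token(logits, temperature=1.5))  # sometimes "dog" or "axolotl"
```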

In this sense, GenAI represents a novel and profound application of non-deterministic AI principles. It harnesses the power of choice, creativity, continuous interaction, and adaptive learning, moving beyond the static nature of deterministic systems. It opens up not just new technical possibilities, but also new ways for humans and machines to co-create, explore, and innovate together.

An important analogy here is the divide and conquer strategy (a short code sketch follows the list):

  • A problem is divided into smaller subproblems.
  • Each subproblem is solved independently or recursively with further division.
  • Solutions to the subproblems are synthesised to address the original, larger problem.
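A classic, purely illustrative instance of this strategy is merge sort; the comments mark the three steps above:

```python
def merge_sort(values):
    if len(values) <= 1:                 # base case: nothing left to divide
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])      # 1-2. divide and solve each half recursively
    right = merge_sort(values[mid:])
    merged, i, j = [], 0, 0              # 3. synthesise the partial solutions
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1, 2]))  # [1, 2, 3, 5, 8]
```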

Imagine this process applied to a new variant of the NTM.

The new variant [1] may be described as an unbounded number of people, each working with their own tape. At first only one person is working, but whenever there is a nondeterministic transition they call another person for each branch, and so on. The difference between deciding a problem and computing a function is that in the first case the caller only observes whether one of the people called accepts or all of them reject, while in the second case they combine the solutions found.
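A loose, tape-free Python sketch of this picture (illustrative names and toy transitions, not the formal construction in [1]):

```python
def compute(state, remaining, transitions, leaf_value, combine):
    """On a nondeterministic transition, one new 'person' is called per branch,
    and (unlike deciding, where a single accepting branch suffices)
    the caller combines the values that all callees return."""
    if not remaining:
        return leaf_value(state)
    branches = transitions.get((state, remaining[0]), [])
    results = [compute(b, remaining[1:], transitions, leaf_value, combine)
               for b in branches]
    return combine(results)

# Toy run: the same branching machine as above, combining by keeping the maximum.
T = {("q0", "0"): ["q1", "q2"], ("q1", "1"): ["q1"], ("q2", "1"): ["q2"]}
print(compute("q0", "01", T,
              leaf_value=lambda s: 1 if s == "q1" else 0,
              combine=max))  # 1
```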

LLMs are simply great at combining solutions. They are not necessarily as good at exploring all hypotheses, hence their hallucinations when the solution model is non-specific, in which case irrelevant people are called, or answer the call anyway.

The Real Opportunity with Generative AI

The real opportunity that GenAI offers is threefold:

  1. Develop, adopt, and deploy NTM-style blackboard architectures (I designed and built one 20 years ago; see [2]): systems that can efficiently decompose complex problems into finite subproblems, each explored creatively given the available evidence (a minimal sketch follows this list).
  2. Tap into the wealth of available data: transforming raw data into information, or structured evidence, that supports diverse and resilient solution paths.
  3. Create safe and nurturing environments for GenAI to grow: environments where GenAI can reason and learn without being intentionally or inadvertently exposed to faulty assumptions and biases that could be mistakenly treated as ground truths. One simple way to achieve this is to create F.A.I.R. (Findable, Accessible, Interoperable, Reusable) datasets.
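To make point 1 a little more concrete, here is a deliberately minimal, hypothetical blackboard sketch (illustrative knowledge sources and names, not the clinical system described in [2]): independent knowledge sources watch a shared blackboard, contribute partial solutions when their trigger conditions hold, and a simple controller loops until no source has anything left to add.

```python
class Blackboard:
    def __init__(self, problem):
        self.data = {"problem": problem, "hypotheses": [], "solution": None}

def propose_hypotheses(bb):
    """Knowledge source 1: decompose the problem into candidate subproblems."""
    if bb.data["problem"] and not bb.data["hypotheses"]:
        # Nondeterministic flavour: several candidate decompositions at once.
        bb.data["hypotheses"] = [f"subproblem-{i}" for i in range(3)]
        return True
    return False

def synthesise_solution(bb):
    """Knowledge source 2: combine the partial results already on the board."""
    if bb.data["hypotheses"] and bb.data["solution"] is None:
        bb.data["solution"] = " + ".join(bb.data["hypotheses"])
        return True
    return False

KNOWLEDGE_SOURCES = [propose_hypotheses, synthesise_solution]

def run(bb):
    progress = True
    while progress:  # keep cycling while at least one source makes progress
        progress = any(ks(bb) for ks in KNOWLEDGE_SOURCES)
    return bb.data["solution"]

print(run(Blackboard("diagnose the case")))
# subproblem-0 + subproblem-1 + subproblem-2
```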

In embracing non-determinism through Generative AI, we are not stepping into chaos — we are stepping into a more expansive and creative computational future. One where the very unpredictability of the machine becomes a partner in innovation, learning, and discovery.

References

[1] Colombo JG, Marchi J. A nondeterministic Turing machine variant to compute functions. Theoretical Computer Science. 2022 Jan 18;902:54-63.

[2] Kalogeropoulos DA, Carson ER, Collinson PO. Towards knowledge-based systems in clinical practice: development of an integrated clinical information and knowledge management support system. Comput Methods Programs Biomed. 2003 Sep;72(1):65-80. doi: 10.1016/s0169-2607(02)00118-9.

Hassan Naqvi

Data Manager at Institute of Global Health and Development (IGHD) The Aga Khan University Hospital (Pakistan)


This really resonates with my experience, Dimitrios. In our health and climate research across urban Karachi and rural Sindh, we’ve been piloting AI models for real-time weather forecasting at the household level, using data from high-precision weather stations and affordable remote sensors, along with drone-based thermal mapping. One thing we’ve learned firsthand: when you're working with unpredictable weather patterns and evolving health risks, rigid, deterministic models often fall short. Generative, adaptive AI feels much closer to how life actually moves: dynamic, messy, and full of surprises. In my view, real progress isn’t just about predicting the future. It’s about building systems that can learn, grow, and adapt alongside it.
