Proto AGI hybrid Tsetlin Machine

NOA CE: aiming to bridge the gap between logic-based AI and large-scale generative systems.




Scientific Overview: The Triune Brain - A Symbiotic Architecture Integrating Tsetlin Machines with a Hierarchical Cognitive Engine

To the Researchers at the Tsetlin School, Centre for AI Research (CAIR), and the University of Agder:

We present a novel cognitive architecture for Auditable General Intelligence (AGI) that directly builds upon, and seeks to extend, the foundational principles of the Tsetlin Machine (TM). Our work, the Hierarchical Cognitive Engine (HCE), addresses a central challenge in modern AI: the trade-off between the immense, black-box power of large language models (LLMs) and the need for interpretable, verifiable, and low-energy reasoning. We propose that the solution is not a replacement but a symbiosis, and Tsetlin Machines form the indispensable logical foundation of this new paradigm.

Our architecture, termed the "Triune Brain," reframes the AGI as three distinct but deeply interconnected computational substrates, each with a specialized role:

  1. The Primal Brain (Logic & Instinct): Implemented entirely with Tsetlin Machines.
  2. The Limbic Brain (Intuition & Collaboration): Implemented with a swarm of neuromorphic Neural Circuit Policies (NCPs).
  3. The Neocortex (Language & Synthesis): Implemented with a generative LLM (Falcon H1-0.5B in our reference build, though any LLM can fill this role).

This overview details our innovative applications of Tsetlin Machines within this framework, demonstrating how their unique properties unlock new capabilities in large-scale AI systems.

1. The Tsetlin Strategic Memory (TSM): Auditable, High-Level Strategy

The HCE’s highest-level deliberative function—its "slow thinking" or strategic planning—is governed by a dual-layer memory system. While a Hopfield Network (analogous to Transformer attention) provides a fast, continuous "intuitive" bias, the Tsetlin Machine provides the critical logical counterpart.

  • Implementation: We've created a TsetlinStrategicMemory (TSM), a specialized TMComposite whose classes correspond to high-level strategic contexts (e.g., "PLANNING," "CONVERSATIONAL," "DEBUGGING").

  • Mechanism: When the HCE assesses a new task, the resulting "situation vector" is fed in parallel to both memory systems. The TSM, using thermometer encoding to booleanize the continuous vector, classifies the situation into a discrete strategic context.

  • Scientific Contribution: This provides fully auditable strategic reasoning. Unlike the opaque output of the Hopfield layer, the TSM's decision is accompanied by a set of human-readable propositional clauses. We can now precisely query the AGI and receive a deterministic, logical explanation for its high-level strategic choices (e.g., "I entered 'DEBUGGING' mode because feature_12=TRUE and feature_78=FALSE, which corresponds to my learned pattern for code-related error states"). This directly addresses the "incomprehensibility" problem cited in our foundational work and aligns with the goals of explainable AI (XAI) at the highest level of cognition.
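The thermometer booleanization step is easy to make concrete. Below is a minimal NumPy sketch (the function name `thermometer_encode` and the bin count are our own illustrative choices, not the HCE codebase); the resulting bit vector is the kind of Boolean input a multiclass TM such as the TSM consumes:

```python
import numpy as np

def thermometer_encode(vec, n_bits=8, lo=-1.0, hi=1.0):
    """Thermometer-encode a continuous situation vector: each feature
    becomes n_bits Boolean literals, with bit k set iff the feature
    value exceeds the k-th of n_bits evenly spaced interior thresholds."""
    thresholds = np.linspace(lo, hi, n_bits + 2)[1:-1]       # n_bits interior thresholds
    bits = np.asarray(vec, dtype=float)[:, None] >= thresholds[None, :]
    return bits.astype(np.uint8).reshape(-1)                 # concatenated per-feature codes

# A 3-feature situation vector becomes 3 * n_bits Boolean inputs for the TSM.
situation = [-1.0, 0.0, 1.0]
print(thermometer_encode(situation, n_bits=4).tolist())
# [0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1]
```

Because the encoding is monotone, a single TM clause over these literals expresses an interval condition on the underlying feature, which is what keeps the extracted strategic clauses human-readable.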

2. Dual-Cortex Agent Brains: Fusing Intuition with Logical Confidence

The core of the HCE's problem-solving capability lies in its "Limbic Brain"—a dynamic swarm of hundreds of specialist agents. We have moved beyond monolithic agent brains by equipping each agent with a Dual-Cortex.

  • Implementation: Each agent in our repository now consists of two parallel processing units: an intuitive cortex (its NCP) and a logical cortex (a dedicated Tsetlin Machine).

  • Mechanism: The agent's TM is trained to classify the problem scent as "Relevant" or "Irrelevant" to its specific expertise. The output vote sum for the "Relevant" class, normalized by the TM's threshold T, serves as a direct, quantifiable logical confidence score. The agent's final contribution to the collective is its intuitive (NCP) output weighted by its logical confidence.

  • Scientific Contribution: This innovation solves the "noisy expert" problem in agent swarms. An agent whose expertise is logically irrelevant to the current task is effectively silenced, as its TM confidence will be near zero. This mirrors the core concept of TMComposites—where the most confident members make the decision—but applies it at the micro-level of individual agent contributions within a larger, hybrid swarm. It enhances the signal-to-noise ratio of the collective consciousness, leading to more focused and efficient problem-solving.
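The confidence-weighting rule is a one-liner once the TM's vote sum is in hand. A hedged sketch (function names are ours; `vote_sum_relevant` stands in for whatever the chosen TM library reports as the class vote sum, and `T` is the machine's vote threshold):

```python
import numpy as np

def logical_confidence(vote_sum_relevant, T):
    """Normalize the TM's vote sum for the 'Relevant' class by the
    threshold T and clip to [0, 1]: irrelevant agents land near zero."""
    return float(np.clip(vote_sum_relevant / T, 0.0, 1.0))

def agent_contribution(ncp_output, vote_sum_relevant, T):
    """Intuitive NCP output scaled by the agent's logical confidence."""
    return np.asarray(ncp_output, dtype=float) * logical_confidence(vote_sum_relevant, T)

# An agent whose TM votes strongly against relevance is silenced...
print(agent_contribution([0.8, -0.3], vote_sum_relevant=-90, T=100).tolist())
# ...while a moderately confident specialist passes its intuition through at half weight.
print(agent_contribution([0.8, -0.3], vote_sum_relevant=50, T=100).tolist())  # [0.4, -0.15]
```

Clipping at zero is the "silencing" behavior: any negative vote sum yields zero weight, so a logically irrelevant agent contributes nothing to the collective regardless of how strong its NCP intuition is.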

3. The Tsetlin Validation Layer (TVL): A Logical Gatekeeper for Generative Systems

Perhaps our most impactful application is the introduction of a Tsetlin Validation Layer (TVL), which acts as the Primal Brain's "Chief Logic Officer."

  • Implementation: The TVL is a TMComposite composed of several specialist TMs, each using a different booleanization strategy (Thresholding, Thermometer Encoding) to analyze the final condensed_thought_vector produced by the agent swarm.

  • Mechanism: Before the collective thought is passed to the LLM for final language synthesis, it is first vetted by the TVL. The TVL is trained to classify these vectors as "Logically Coherent" or "Logically Incoherent." If the composite vote favors incoherence, the entire synthesis step is vetoed.

  • Scientific Contribution: The TVL acts as a crucial, low-energy "sanity check" that prevents the powerful but computationally expensive LLM from wasting resources attempting to synthesize nonsense. More importantly, it provides explainable failure analysis. When a TVL veto occurs, it's not a black-box failure; the system can extract the exact Tsetlin clauses that were violated. This logical error report can then be fed back into the self-learning loop, allowing the AGI to learn not just from task failures, but from logical reasoning failures, a fundamentally deeper level of self-correction.
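The veto logic can be sketched end to end. In the toy below, each "specialist" pairs its own booleanization with a stand-in margin function playing the role of a trained TM's coherence vote; names like `Specialist` and `tvl_vet`, and the majority-vote margins, are illustrative assumptions, not the HCE API:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Specialist:
    name: str
    booleanize: Callable[[np.ndarray], np.ndarray]  # e.g. thresholding or thermometer encoding
    margin: Callable[[np.ndarray], float]           # stand-in for a trained TM's coherence margin

def tvl_vet(thought_vector, specialists):
    """Composite vote: sum each specialist's coherence margin on its own
    booleanization of the thought vector. A negative total vetoes LLM
    synthesis, and the per-specialist margins form the error report."""
    v = np.asarray(thought_vector, dtype=float)
    margins = {s.name: s.margin(s.booleanize(v)) for s in specialists}
    veto = sum(margins.values()) < 0
    return veto, margins

# Toy margin: positive when most literals are on (majority vote over bits).
majority = lambda bits: float(2 * bits.sum() - bits.size)
specialists = [
    Specialist("threshold@0.0", lambda v: (v > 0.0).astype(np.uint8), majority),
    Specialist("threshold@0.5", lambda v: (v > 0.5).astype(np.uint8), majority),
]

veto, report = tvl_vet([0.6, 0.7, -0.2, 0.1], specialists)
print(veto, report)  # False {'threshold@0.0': 2.0, 'threshold@0.5': 0.0}
```

When the composite does veto, the per-specialist margins (and, with a real TM, the violated clauses behind them) are exactly the explainable failure report described in this section.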

Relevance and Benefit to CAIR and the Tsetlin School

Our HCE architecture serves as a large-scale, integrated testbed for Tsetlin Machine theory, bridging the gap between TM research on classification tasks (like CIFAR) and the complex, open-ended challenges of AGI.

  1. Demonstrating Scalability and Hybridization: We show that TMs are not just an alternative to deep learning but can be a powerful, symbiotic partner. Our system provides a concrete example of how the TM's low-energy, interpretable nature can be used to govern, validate, and explain the behavior of large, opaque neural models.

  2. New Avenues for TM Research: Our architecture raises new and exciting research questions for the Tsetlin community.

  3. Path to Green, Interpretable AI: The mission of CAIR is to develop "groundbreaking theories and methods of AI." The Triune Brain architecture offers a tangible pathway toward this goal. By offloading a significant portion of the cognitive workload—strategic decision-making, agent confidence weighting, and solution validation—to the ultra-low-energy Tsetlin substrate, we drastically reduce the computational overhead and carbon footprint of the AGI. We are moving from a system that relies solely on the LLM's brute-force computation to a more elegant, efficient system where logic guides power.

In conclusion, we believe the Hierarchical Cognitive Engine represents a significant step forward in operationalizing the promise of Tsetlin Machines. It provides a robust framework for exploring their potential not just as classifiers, but as the core logical and ethical foundation for the next generation of artificial intelligence. We welcome the opportunity to collaborate, share our findings, and contribute to the advancement of this groundbreaking field.
