On knowledge as a biological phenomenon
The social brain hypothesis holds that the evolution of human intelligence was driven largely by the demands of social life, and is closely tied to the evolution of the human brain and to the origin of language. This article is about how animals evolved to have knowledge, and to express knowledge using symbolic languages. The viewpoint is more psycho-biological than mathematical or philosophical.
Contents: Knowledge. Animal knowledge. Human knowledge. Artificial Intelligence. Semantic interoperability. The fuzzy logic of life. Information. A WKID hierarchy. Autopoiesis.
1 Knowledge in general
"Knowledge is a biological phenomenon" Humberto Maturana
In other words, description is a biological tool. No knower, no knowledge. No describer, no description. No conceiver, no concept. Before life, there were only entities and events out there, which could only be bounded, recognised and described after life began.
The ability to describe reality evolved in organisms because having or forming reasonably accurate models of the world proved useful. Evidently, animals can recognize things, and recognize similarities between things. They recognize features that qualify a thing as a food item, or a member of the same species, or a predator. And they can distinguish between individual things of a type. (How their neuro-chemistry does this is mysterious, but it happens somehow.)
Thinking depends on our ability to relate things (observed or envisaged) to descriptions (in mental or verbal models) that things instantiate. Primitive organisms inherit most (if not all) of the descriptions of things they interact with. Higher animals also create new descriptions from perceptions.
Thinking processes evolved though the abilities of organisms to:
sense a thing (a food item, a friend, an enemy),
remember a thing (by recording a description of its observable features) and
react to a thing appropriately (by associating its description with appropriate actions)
Thinking starts with mapping descriptions to things they represent; and mapping perceived descriptions to remembered descriptions. So you can recognize a particular thing, such as your mother.
Higher intelligence requires the mapping of remembered descriptions to each other. So you can recognize when descriptions share one or more features (“family resemblances” as Wittgenstein put it).
Later, social animals grew able to communicate knowledge to one another, using postures, calls and gestures. And humans grew able to symbolise a concept by encoding it in a verbal language. Linguistic thinking requires the mapping of family resemblances to named categories, classes or types (“food”, “friend”, “enemy”).
In short, no material, physical things were described until biological entities evolved bio-chemical ways to detect, remember and recognise them. Animals recognize sensations of things, remember sensations, and fuzzily recognize similarities and differences between them. Eventually, humankind evolved ways to describe things by using words to name and typify things.
In natural language, the mapping of type names (tokens) to features is loose. Whenever (in law, in science, in system definition) we need to avoid ambiguity in description, we use a controlled vocabulary; we define a token by associating it with a particular set of features, and relate tokens to each other in taxonomic hierarchies and ontological networks.
In short, our knowledge of reality ranges from recognising sensations of things in our environment, through remembering sensations, and fuzzy recognition of similarities and differences between them, to formal descriptions of things, expressed as type definitions in a controlled language.
Is knowledge a human construct? No. Primitive animals, without brains, inherit and develop models of the world around them. As humans, our interest is in knowledge recorded in brains. When we perceive something new, our neurons fire, creating neural pathways in a process called encoding. These new pathways are solidified in long-term memories during sleep. Every time we recall a memory, we reinforce it, and may also modify it.
How do we acquire and develop knowledge? We acquire knowledge by various means, by inheritance, by education, by trial and error, by observing and envisaging reality.
We inherit some knowledge, such as not to walk off a cliff, and what a face looks like. We acquire more knowledge in a variety of ways: by perception, by conditioning, by education, and by logical deduction.
We also acquire knowledge by envisaging things, which lays down memories of those envisagings. And we continually refresh and reorganize memories of what we have observed and envisaged.
In recent decades, we invented computers and Artificial Intelligence (AI) tools that can certainly capture and manipulate knowledge, and perhaps create it.
How do we share knowledge? Animals share knowledge by translating thoughts into a form perceived by others. Any form of matter or energy may be used: a facial expression, a gesture, an alarm call.
And most marvelously, we can express knowledge using that most flexible of tools, verbal language. To share knowledge, we use symbols or tokens that represent individual things (by name) and represent general types of thing. Sharing knowledge helps us to verify, refine and extend our classifications and models of reality.
2 Animal knowledge
This section and those following discuss animal knowledge, human knowledge and artificial intelligence, in the sequence they evolved.
A model is created or used to represent or symbolize selected features of something else, be it observed or envisaged. It only functions as a model when related to what it models.
Animal intelligence grew out of how animals recognize things in their environment by having or creating biochemical models of them. Even a virus has a model of the cell receptors it attaches to.
The processes of the physical universe are described by rules defined by mankind (like Newton's laws of motion) that are fuzzy; meaning things may conform to them less than 100%, but the conformance is good enough to be useful. It is now thought the processes of biology, of life, including intelligence, are also fuzzy.
All thinking animals depend on the ability to model features of things in the world around them. An animal’s mental model of a friend, foe or food item may be fuzzy, fragile and fragmentary, but it must be accurate enough to be useful.
In animals with a central nervous system
sensors detect external things and conditions, then encode information about them in messages sent to
a central processor that receives input messages, then (perhaps after referring to memory) responds by sending output messages to direct
organs and motors that maintain the animal's state or advance its interests.
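The sense-process-respond loop above can be sketched as a toy program. A minimal sketch, assuming nothing about real neuro-chemistry; the stimuli and responses here are invented purely for illustration:

```python
# A toy sketch of the sense-process-respond loop described above.
# The stimuli and responses are invented for illustration.
memory = {"predator": "flee", "food": "approach"}  # learned associations

def respond(stimulus):
    """Central processor: map an input message to an output message,
    consulting memory; unrecognized stimuli get a default response."""
    return memory.get(stimulus, "ignore")

print(respond("predator"))  # flee
print(respond("pebble"))    # ignore
```

The point of the sketch is only the shape of the loop: input message, lookup against remembered associations, output message.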
Animals can not only remember past inputs, but also remember the success or failure of past responses. That is surely the basis of learning, consciousness and other abilities that you might think unique to humans.
Learning? Even an amoeba, with no central nervous system, can learn from experience.
Communication? Many animals communicate using gestures and sounds. Some have complex social lives. Sperm whales are divided into clans that share a language of clicks.
Forethought and intentionality? Have you seen a video of whales creating a wave to knock a seal off an ice floe? That and other evidence suggests some animals not only have aims in mind, but can plan ahead and cooperate to meet them. Link to research.
Consciousness? Many debate what consciousness is. I propose it is an emergent property of a) the ability to create and compare descriptions of the past, present and future, overlaid with b) self-awareness.
Comparing memories of the past, perceptions of the present, and envisagings of the future, helps an animal to make effective decisions about what to do next.
Self-awareness (measured by the mirror test) is a feature of humankind and a few other species.
Like knowledge, consciousness is a biological phenomenon that requires the material existence of biological entities. It did not exist before life forms emerged, and will not exist after the last life forms - and the AI machines built by them - are gone.
Self-awareness? Chimpanzees, elephants and dolphins demonstrate a degree of self-awareness. Link to research.
In the course of biological evolution leading to humankind, all the abilities above emerged and developed by degree. What most clearly sets humans apart?
3 Human knowledge
Our animal understanding of things may be based on remembering sensations of things, and detecting resemblances between them. I gather brains record things many times. There is no "master copy" of a memory to which we refer when needed. Moreover, it seems a brain (even during sleep) continually reorganizes memories, connecting related ideas.
By contrast our human understanding of things is based on crystallizing resemblances into words that typify them, and recalling those verbal definitions.
(This is not entirely unique to humans. Chimpanzees in the Taï National Park of Côte d'Ivoire have proved capable of complex vocalizations far beyond what scientists had expected. Their several hundred "words" sound like grunts and chirps to us.)
It may be a million years ago (nobody knows) that humans began to describe the world in words. As recently as a hundred thousand years ago, there were four human species. Neanderthals surely spoke, but we don't know how fluently.
In Homo sapiens, the ability to communicate evolved in parallel with the brain. Today, what sets us apart is our mastery of using symbolic languages to crystallize fuzzy mental models into words. We are uniquely adept at translating mental models into and out of verbal models.
A mere five thousand years ago, the invention of writing enabled ever more complex ideas to be crystallized, remembered and communicated over distance, and over time. We can persist ideas, share ideas, build on ideas and extend them into complex stories and knowledge structures.
The written word enabled the study, testing and refinement of descriptions, and the development of business, science, and technology. For example, thinkers may observe a heartbeat and create this generic model.
The atrial walls contract and force blood into the ventricles.
The ventricle walls contract and force blood to the lungs and body.
The atria fill with blood (and the cycle begins again).
Before mankind, there was no typifying model of the kind above, there were only un-described beating hearts. Some of the models we build are inaccurate, or impossible to realize. But to survive and thrive, our models must be realistic enough to be useful.
In the evolution of the universe, mathematics emerged and evolved as a modelling tool after humans mastered the ability to express knowledge in words. And developments in mathematics led to computing, the information age and AI.
Since the information age began, countless ways to express knowledge in graphical diagrams (in box-line diagrams that relate the terms of a controlled language) have emerged. Systems of interacting variables are modelled in causal loop diagrams. Systems that process information are modelled in diagrams conforming to the UML or ArchiMate modelling language (best not both in one diagram).
4 Artificial intelligence (AI)
Animals have mental models that enable them to recognise things in their environment, and make decisions. Human intelligence first added the ability to encode internal mental models into external speech, and decode them back again. Then, it added the ability to encode speech in writings that can be persisted, shared, manipulated and extended by others, over millennia.
Some say AI tools mimic how the human brain works. They don't; they only follow rules for analysing and manipulating knowledge expressed in writings of some kind. While AI may reach the same conclusions as HI, that doesn't prove it works the same way.
I have slightly edited the quotes below from this source.
https://ea.rna.nl/2024/02/13/will-sam-altmans-7-trillion-ai-plan-rescue-ai/
"There is a fundamental difference between AI and HI. The Winogrande benchmark test was designed to require insight. It contains problems like “Robert woke up at 9:00am while Samuel woke up at 6:00am, so he had less time to get ready for school.” The question is: Who is ‘he’?”.
"Easy for a human, very hard for attention-based-next-token prediction systems like GPT, because to score high on that one you really need to understand the text. To scale up to human level accuracy, we need more than 3 million billion times more parameters than GPT3. (We do not have these benchmark numbers or sizes for GPT4)."
Human understanding requires "embodied cognition".
"Embodied cognition theories propose mental imagery, particularly automatic simulation varieties, as a core mechanism for deep conceptual processing, rather than language with which semantic memory has been commonly allied." (H.E. Schendan, in Encyclopedia of Human Behavior (Second Edition), 2012)
At one level, the meaning of a term is the association you make between that term (or token) and any definition of it that you learn, or choose, to associate with it.
At another level, terms and definitions composed of words are proxies for the purely biological understandings we associate with them.
"F = M * A" is a rule governing physical motion that we can feel in the seat of our pants.
"Don't touch a hot plate" is a rule we can associate with touching a hot plate.
Generative AI (like GPT) works only at the first level. It can outperform us. It can write a book in 30 minutes. But since it does not understand things the way we do, it can produce ‘hallucinations’. The accuracy of its responses depends on the quality of the data sources it is trained on. Even its correct answers (it seems to me) are approximations or imitations of human understanding, because it lacks the understanding our biochemistry gives us.
Q Can an AI/LLM tool know what we know?
LLM tools process knowledge recorded in words, and analyze word use statistically. But eons before humans crystallized knowledge in words, they remembered things. So, our knowledge is at least partly non-verbal.
An LLM tool cannot inherit the information we do, sense the data we sense, or have the same experiences and conversations. Even if it could copy your synaptic network into a knowledge graph, it could not maintain it in line with the continual reorganization your brain carries out.
Q) Can an AI/LLM tool learn as we do?
"Learning" is a multi-faceted concept. Learning in the sense of remembering - a database does that. Learning in the sense of conditioning - a mouse in a maze does that, using a network of brain cells. How humans learn is massively boosted by our ability to manipulate verbal models of reality, both internally in thinking and externally in messages. And it is our use of words that AI of the LLM kind is able to mimic. That doesn't mean it learns in the way we do.
5 Semantic interoperability
To interoperate with another human successfully, in a variety of domains or situations, we must share some vocabulary. We reach that state through shared experience, formal education and trial and error.
The art of conversation depends on observing responses. When speaking, through verbal, visual and other cues, we detect misunderstandings, and instantly retry or adapt. The art of writing involves taking great pains not to “lose” the other party.
To interoperate with a computer in a structured business use case (such as booking a train ticket), we must know the very limited “controlled vocabulary” its software engineers use to express knowledge at the user interface. The software doesn’t understand - as we do - what it means to catch a train.
To interoperate effectively with generative AI we must not only share enough of the vocabulary used in its data sources, but also be capable of detecting whether a response makes sense or not. The software doesn’t know that.
In all three cases, if we don’t share enough of a vocabulary with the other party, the inter-operation can fail, or mislead us.
Suppose our human-to-software inter-operation does succeed, does that count as "semantic interoperability"? I guess it does, regardless of whether or not software understands what it does or says to us. Successful communication demonstrates semantic interoperability rather than understanding of a human kind.
The rest of the article below addresses other issues that arise in associating meanings (intensional definitions) with terms (tokens).
6 The fuzzy logic of life
Looked at one way, verbal language is only the icing on the cake of animal intelligence. However, the linguistic relativity hypothesis - that symbolic language not only enriched how we humans communicate but also influences how we think - is clearly true.
First, speech enabled people to think in ways that non-human animals cannot. Then, writing enabled people (distributed in space and time) to persist ideas, share them, extend them. This enabled us to build the massively complex systems that our modern lives depend on, such as legal systems, airplanes, and software systems with millions or billions of lines of code.
The use of the written word and pictures massively increased our ability to communicate, to describe and envisage complex systems, and plan actions to meet aims.
Humans use words to describe, characterize, or typify things.
"Isn't it rich? Isn't it queer?" Stephen Sondheim
But our verbally-expressed types or rules can be fuzzy, meaning that things may conform to them to a degree, rather than simply yes or no.
A fuzzy set is a peculiar kind of set. For a classic (Boolean) set, an element either belongs to it or not. For a fuzzy set, an element can belong to it with a degree of membership from 0 to 1 (or 0% to 100%).
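The difference can be sketched in a few lines of Python. This is a minimal illustration: the "tall" membership function and its 150-190 cm ramp are assumptions invented for the example, not standard values.

```python
def classic_membership(x, members):
    """Classic (Boolean) set: an element belongs fully (1.0) or not at all (0.0)."""
    return 1.0 if x in members else 0.0

def tall_membership(height_cm):
    """Fuzzy set 'tall people': membership rises gradually with height.
    The 150-190 cm ramp is an illustrative assumption."""
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40.0  # linear ramp between the two anchors

print(classic_membership(3, {1, 2, 3}))  # 1.0 - crisp membership
print(tall_membership(170))              # 0.5 - partial membership
```

A height of 170 cm belongs to the fuzzy set "tall" to degree 0.5, something a classic set cannot express.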
https://www.mdpi.com/2313-7673/9/2/121
Living cells hinge on molecular conformations. Since conformational distributions are molecular fuzzy sets, it can be inferred that the logic of life is fuzzy, i.e., vague. Although we lack a universally accepted definition of life, the myriad of known life forms shares some features.
Firstly, their chemistry: every living organism is made of at least one cell, which is an open system, confined by a membrane, and made of a plethora of interacting chemical compounds, among which the previously mentioned biopolymers, DNA, RNA, and proteins are the basic ingredients.
Secondly, their cycle: every living being starts its existence after its birth from other living matter and ends with its death, becoming inanimate. In between, it develops because it is capable of self-maintaining, self-reproducing and self-protecting against some intruders and harmful elements.
Thirdly, their computing power: every living being exploits matter and energy to encode, collect, store, process, and send information to pursue its goals. The basic aims common to every living being are those of surviving and reproducing. Their achievement induces living beings to adapt by adjusting their metabolic processes, to acclimate by turning on and off peculiar genes, and to evolve by changing their genome under an ever-changing environment.
How we express knowledge using types
Natural language is messy. Words become meaningful through use. The vocabulary we use in everyday speech is a loose network of fuzzily defined words, whose meanings evolve through use, by trial and error, in human society. Dictionaries help us to be more consistent.
In controlled languages we typify not only atomic or elementary structures and behaviors (actors and activities, entities and events), but also complex types in which simpler types are related. We can relate
quantitative types in mathematical equations of the kind F = M * A.
qualitative types in narrative predicate statements of the kind "wolves <eat> sheep".
predicate statements in any ontology that defines at least some of what is understood in a particular domain of knowledge. And we can represent an ontology in a concept graph.
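Such predicate statements can themselves be represented as data. A toy sketch follows, storing an ontology as subject-predicate-object triples and querying it as a concept graph; the vocabulary is invented for illustration.

```python
# A toy ontology of predicate statements, stored as
# subject-predicate-object triples (the vocabulary is invented).
triples = [
    ("wolf", "is_a", "predator"),
    ("sheep", "is_a", "prey"),
    ("wolf", "eats", "sheep"),
    ("sheep", "eats", "grass"),
]

def objects_of(subject, predicate):
    """Query the concept graph: what does the subject relate to via this predicate?"""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("wolf", "eats"))   # ['sheep']
print(objects_of("sheep", "is_a"))  # ['prey']
```

Real ontology languages (such as RDF) elaborate this same triple structure at scale.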
7 Information
The term information has different meanings in different domains of knowledge. Here, information means a meaningful representation of something. Information is stored and/or communicated.
Nurse points out that a linear sequence is an effective way to store and convey information.
DNA is a linear sequence of chemical nucleotide bases encoded in a four-base chemical language (labeled A, T, C and G).
Speech is a linear sequence of words, sounded in an oral language.
Written text is a linear sequence of words encoded in the letters of a written language.
Software is a linear sequence of instructions encoded in the symbols of a programming language.
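The linear-sequence idea can be illustrated with the first example above. DNA base pairing (A with T, C with G) is standard biochemistry; the code itself is a sketch.

```python
# Information stored in a linear sequence: the standard DNA base-pairing
# rule (A-T, C-G) lets one strand be decoded from the other.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand):
    """Derive the complementary strand from a linear base sequence."""
    return "".join(COMPLEMENT[base] for base in strand)

print(complement_strand("ATCG"))  # TAGC
```

The same decode-one-symbol-at-a-time idea applies, loosely, to reading letters in a written language or symbols in a program.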
However, information is also stored in network structures. It is stored in
the active self-modifying network of synapses in a brain
the cross-referenced structure of The Domesday Book ("a triumph of the written record") or a dictionary.
the network of data types in the data structure of a database
Shannon's theory (often called information theory or communication theory) defines a data communication system. It is not about the meaning of the data.
8 A WKID hierarchy
There are several variations of the WKID hierarchy, in which the terms wisdom, knowledge, information and data are differentiated. This is the one I prefer.
Wisdom: the ability to make effective use of knowledge, especially in new situations. E.g., the ability to persuade some fisher folk to agree quotas and return young fish to the sea.
Knowledge: information that is factual, and accurate or true enough to be useful. E.g., the information that a particular fishing community does actually behave as in "the tragedy of the commons" archetype.
Some of the information we record in words is a lie, mistaken, or a fantasy. But we can only survive and thrive if enough of the information is also knowledge in the sense it is true enough to be useful. One might say objective knowledge is subjective information after it has been recorded and verified, first by testing, then by logical and social confirmation.
Information: the meaning placed or found in some data, when an actor relates it to a phenomenon they observe or envisage. E.g., the meaning you record or read in a diagram of "the tragedy of the commons".
Data: a physical form, into or out of which, information is encoded or decoded by an actor. E.g., a causal loop diagram.
Remember, a thing in the world is only data when it is encoded or decoded by an observer with some meaning - correlated with some concept understood by the observer. Data is only information at the moments that it is encoded or decoded as having a meaning, carrying some information.
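A crude analogy in code: the same bytes are only data until an observer decodes them with a particular scheme in mind. Both decoding schemes below (ASCII and big-endian integers) are standard; the example itself is illustrative.

```python
raw = bytes([72, 105])  # a physical form: just two byte values

# The same data yields different information depending on the
# decoding scheme the observer applies to it.
as_text = raw.decode("ascii")           # decoded as ASCII characters
as_number = int.from_bytes(raw, "big")  # decoded as a big-endian integer

print(as_text)    # Hi
print(as_number)  # 18537
```

Neither reading is "the" meaning of the bytes; each is information only relative to a decoding actor.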
This is such a challenging concept that whatever WKID hierarchy you prefer, in practice, you'll find "concepts" and "knowledge", and "information" and "data", being used as synonyms.
Complex systems commonly translate (or serialize) information from one coded form into another. The human ability to translate information in the brain into and out of speech and writing is remarkable. But animal intelligence did not emerge from analyzing words and numbers; it emerged from biochemical evolution.
9 Aside on autopoiesis
The biologist Maturana said that "autopoiesis" is a property that distinguishes a biological system from others. Which is to say, it is a complex system whose behaviors sustain its own structural state from primitive inputs.
The sociologist Luhmann "de-ontologised" Maturana's term to discuss a property that distinguishes a social system from others. Which is to say, it is a self-perpetuating series of messages about a theme, where the meaning of each message is determined only by the receiver. This seems to me an academic fancy, and a daft idea, quite contrary to the general idea of a stateful system.
10 About articles to follow
Systems thinking is not the best way to address every problem or need.
“Nor do I believe that adopting a "systems approach" [will] produce some superior kind of rationality as compared to other frameworks of thought.” Sociological systems thinker Werner Ulrich, link Sept-October 2017.
The aim of my work on systems thinking is not to save the planet. For both novice and seasoned system thinkers, the aims are to:
Present a concise, accessible overview of established ideas
Add new insights
Expose issues in current systems thinking and practice
Rescue systems thinking from pseudo-science
Help you avoid jingle/jangle fallacies
Jingles - assuming two different things are the same because they have the same name. Jangles - assuming two identical things are different because they are named differently.
Holism is not wholism (article to follow)
The concept of emergent properties or effects is universal. Together, a bicycle and a human can do what neither can do alone. To think holistically about them is to think how the effect of forward motion emerges from their interactions.
But when people speak heatedly, deprecating “reductionism”, confusing science with reductionism, or misinterpreting holism, the result is hot air.
Dividing a whole into parts is the first step in the approach taken by Ashby, Forrester, Meadows, Senge and other systems thinkers. All begin by identifying the parts (variables or stocks) of interest. Only then, can they study how the parts interact to produce an effect.
In the classic hierarchy of sciences (sociology, psychology, biology, chemistry, physics) each science stops at what it regards as an atomic part, unconcerned with its internal structure.
Holism is about understanding how one or more particular effects emerge from interactions between two or more particular things. It does not mean understanding a) the whole of each thing, b) the whole of what they can do in cooperation, or even c) the whole of what is necessary to produce an emergent effect. For sure, nobody fully understands what a human brain must do to ride a bicycle.
We cannot model the whole of any physical entity in any recognized form of system model. A system is a perspective of reality, drawn with particular interests in mind, that is stable enough, for long enough, to be modeled.
“There are limits to what science and the scientific method can achieve. In particular, studying society generally shows how difficult it is to control. This truth may disappoint those who want to build a science to shape society. But the scientific method is not a recipe that can be mechanically applied to all situations.” Friedrich Hayek
However, a system theory can help us build a holistic (joined up) model of a way of behaving that produces some effects of interest, typically effects we want to create or "intervene" to change.
Terminology? (article to follow)
Many systems thinkers use terms such as non-linear, chaotic and complex with no clear meaning. Some draw terms from mathematics, physics or biology, but use them with different meanings. This leads to “jingle-jangle fallacies”.
This work helps readers reduce the terminology torture by outlining the terms, concepts and principles of several systems thinking approaches, relating them, and defining a consistent and coherent vocabulary of nearly 200 terms.
So, these articles can be used as reference work by teachers and students of “systems thinking” and “complexity science” around the world. Some will also interest enterprise and business architects, other business change agents, teachers of system engineering, and users of architecture frameworks and system modeling languages like ArchiMate and UML.
11 Further reading
My day job is teaching classes in enterprise, solution and software architecture, all of which involve the graphical expression of knowledge about the structures and behaviors of systems.
For links to further reading, read "Seven kinds of system".