Extending the Mind with Cognitive Prosthetics?

Andy Clark
School of Philosophy, Psychology
and Language Sciences (PPLS)

University of Edinburgh,
Scotland, UK

andy.clark@ed.ac.uk
With special thanks to: Rob Rupert, Kenneth Aizawa,
Fred Adams, Mark Rowlands, Dave Chalmers, Julian
Kiverstein, Mark Sprevak, Richard Menary, and Mike
Wheeler.
The Extended Mind Debate
Where in physical space lies the machinery of mind and cognition?
What it isn’t: the target is not the mere possibility that the machinery of mind might, perhaps in some alien beings, be smeared across more than the neural economy.
Nor is the claim merely that non-brain activity
impacts the mind.


No-one denies that causal commerce
between mind and world matters, and it
changes what we think.

The contentious claim is that the mechanisms of mind are not all in the head (= the ‘extended mind hypothesis’, Clark and Chalmers (1998)).
It’s as if someone said that your calculator
or currency converter’s MECHANISMS
were not all inside your laptop.

This is true when, e.g., we use a web-based currency converter.

It is not when we use the built-in calculator on the Mac.
TXM: The Main Idea:

 The mechanisms of (your) mind are as free to bleed into the (rest of the) world as the mechanisms of calculation are to bleed into the web.
Q/ Just how crazy is this?
1. The Extended Mind Claim (super-mini-version)


2. Some Objections and Replies


3. Cognitive Extension versus Cognitive Shrinkage
The Extended Mind Claim

For the brain, it doesn’t matter whether information is
stored in the head or in the wider world, just so long as
it knows what kind of information is there and how
to get at it as soon as we (the agent) need to put it to
some practical use.


Brains like ours are already adept
at trading easy access against
expensive internal biological
representation and storage.
Roboticists and psychologists have known this for a while…

Brooks: “The world is its own best model”

O’Regan: “The world as external memory”
That feeling of seeing all the colour and detail in the scenes is probably due to a kind of implicit meta-knowing.


Our brains know that they can usually retrieve more
detailed info when needed, so we feel as if we already
see all the detail.

This is not really a mistake.

For we are poised to access that information just-in-
time for use.
TXM = a cognitive application of the same idea.


Compare: your feeling that you already know what month this is.

This is not due to your constantly rehearsing the answer in your conscious mind

(continually sub-vocalizing “March”, “March”, “March”).
Rather, it is due to your implicit meta-knowing that this is the kind of thing you know, and that (in normal circumstances) you are poised to access that information pretty much at will, as and when needed.
So maybe being ‘ready-stored in the head’ is an
optional extra for dispositional believing (standing
beliefs) too?

Perhaps what matters here too (Clark and Chalmers (1998)) is being poised for easy access….

This yields the case of ‘notebook Otto’….
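To make the ‘poised for easy access’ point concrete, here is a minimal, purely illustrative Python sketch (mine, not from the talk): the calling routine consults a store of standing beliefs through one and the same interface, whether the backing store stands in for Inga-style biological memory or for Otto’s notebook. All names here are invented for illustration.

# Illustrative sketch only: what matters is the access interface,
# not where the record physically lives (head vs. notebook/web).

class BioMemory:
    """Stands in for in-the-head storage of standing beliefs."""
    def __init__(self):
        self._store = {"current_month": "March"}

    def recall(self, key):
        return self._store.get(key)


class Notebook:
    """Stands in for Otto's notebook: external, but reliably poised for use."""
    def __init__(self, pages):
        self._pages = pages

    def recall(self, key):
        return self._pages.get(key)


def answer(agent_store, query):
    # The calling routine neither knows nor cares which store it consults.
    return agent_store.recall(query)


inga = BioMemory()
otto = Notebook({"current_month": "March", "moma_address": "53rd Street"})

print(answer(inga, "current_month"))   # March
print(answer(otto, "current_month"))   # March

The point of the sketch is only that nothing in the calling routine cares where the record lives; what matters is reliable, easy access.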
But TXM is not only about dispositional beliefs…many
of our best mind-extending loops into the world (just
like many of the best loops inside the brain) are much,
much fancier than simple access/retrieval loops…
think of loops like these:
gesturing while you talk (actively looping into the body) – see Clark, “Curing Cognitive Hiccups”, Journal of Philosophy (2008)

scribbling while you think (looping into ‘passive’
external media, but not a case of simple offloading)

working with a highly-practiced software package
(looping into an active semi-intelligent sub-system)…
This complexity is highlighted in a famous exchange
between Richard Feynman (the Nobel laureate physicist)
and the historian Charles Weiner

“ Weiner once remarked casually that [a batch of notes and
sketches] represented “a record of [Feynman’s] day-to-day
work,” and Feynman reacted sharply.

   “I actually did the work on the paper,” he said.

   “Well,” Weiner said, “the work was done in your head,
but the record of it is still here.”

    “No, it’s not a record, not really. It’s working. You have
to work on paper and this is the paper. Okay?” “

Quoted in Genius (Gleick’s biography of Feynman)
It is not that all the thinking happens inside, and the loop out
into symbols on a page is just a kind of convenience or a way
to avoid forgetting.


Rather, the loops to external media form part and parcel of a
complex, integrated, bio-technologically hybrid system
for thinking.


For lots of examples and discussion, see Clark, Supersizing the Mind (Oxford University Press, 2008).
The extended mind story is most convincing, I think,
  when we can by-pass the stage of consciously
  consulting an external or internal information store
  at all….

  Trials (at MIT Media Lab) of so-called ‘memory
  glasses’: aids to recall for people with impaired
  memory or visual recognition skills.




The Memory Glasses: Wearable Computing for Just-in-Time Memory Support Richard W. DeVaul
(MIT thesis, 2004). See also paper in 7th IEEE International Symposium on Wearable Computers
(2003)
= a Terminator-style eye-glass display
The glasses work by matching the current scene
 (a face, for example) to stored information and
 cueing the subject (using the glasses-mounted
 display) with relevant information (a name, a
 relationship).



The Memory Glasses: Wearable Computing for Just-in-Time Memory Support Richard W. DeVaul
(MIT thesis, 2004). See also paper in 7th IEEE International Symposium on Wearable Computers
(2003)
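As a rough illustration of the just-in-time cueing idea, here is a toy Python sketch; it is not DeVaul’s implementation, and the matcher, the display call, and the timings are invented placeholders.

# Toy sketch of a just-in-time cueing loop in the spirit of the memory glasses.
# NOT DeVaul's system; all names and numbers below are invented stand-ins.

KNOWN_FACES = {
    "face_signature_001": {"name": "Alice", "relation": "colleague"},
}

def match_face(signature):
    """Pretend matcher: a real system would compare feature vectors; we just look up a key."""
    return KNOWN_FACES.get(signature)

def cue_wearer(info, covert=False):
    """Present the cue on the glasses-mounted display (overt, or a rapid 'covert' flash)."""
    duration_ms = 30 if covert else 2000   # placeholder timings only
    print(f"[{duration_ms} ms] {info['name']} ({info['relation']})")

def on_new_frame(signature):
    """Called for each camera frame: cue only when the scene matches stored information."""
    info = match_face(signature)
    if info:
        cue_wearer(info, covert=True)

on_new_frame("face_signature_001")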
The cue may be overt (consciously perceived by the
subject) or covert (rapidly flashed and hence
subliminally presented).


In the covert case, functionality is still improved
without any process of conscious awareness of the
cueing on the part of the subject.

Subjects like this a lot better!
It is easy to imagine cases that enhance knowledge rather than merely ‘restore’ it.

Recognizr is a controversial app, purchased by Apple (for over 15 million dollars).

- it makes a 3D face-map on-the-spot from a photo input.
That means other applications can then match
that face to pictures taken from other angles etc
on the web, rapidly identifying the person and
retrieving all kinds of associated information.

Upshot: a body-mounted camera could constantly
generate these 3D face-maps, then get and act
upon a bunch of additional information from a
rapid web-trawl.
Imagine a version where, if the person meets some desired condition (e.g. being a fan of Paris SG) you get a barely-perceptible buzz from a vibrotactile element somewhere on your body.
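Here is a hypothetical end-to-end sketch of such a pipeline, in Python; every function below is an invented stand-in (not a real Recognizr or web API), meant only to show the shape of the loop: camera frame in, face-map built, web profile looked up, buzz fired only when the desired condition holds.

# Hypothetical pipeline sketch: body camera -> 3D face-map -> web look-up ->
# barely perceptible buzz when a desired condition is met.
# Every name below is an invented stand-in, not a real Recognizr or web API.

def build_face_map(photo):
    """Stand-in for on-the-spot 3D face-map construction from a photo."""
    return {"map_id": photo.lower()}   # a real system would return facial geometry

def web_profile_lookup(face_map):
    """Stand-in for a rapid web-trawl matching the map against online photos."""
    fake_web = {
        "photo_of_sam": {"name": "Sam", "psg_fan": True},
        "photo_of_ana": {"name": "Ana", "psg_fan": False},
    }
    return fake_web.get(face_map["map_id"])

def maybe_buzz(profile, condition=lambda p: p.get("psg_fan", False)):
    """Fire the vibrotactile element only when the condition holds."""
    if profile and condition(profile):
        print("bzz (barely perceptible)")

for photo in ["photo_of_sam", "photo_of_ana", "photo_of_stranger"]:
    maybe_buzz(web_profile_lookup(build_face_map(photo)))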
What happens when THAT becomes part of the
suite of robustly available equipment, through
which you encounter the wider world?

Soon you cease to consciously notice the gentle
buzz and simply register what it is telling you.
“just knowing” who is (probably) an SG fan
will then simply become part of how you
experience a new situation.


Your in-the-head cognitive routines will become
geared to the easy availability of the information,
creating a new, co-adapted, cognitive whole.

= ‘cognitive dovetailing’
The operation of a wide variety of such continuously
running programs may be compared to that of your
own (complex, active!) unconscious neural sub-
structures.


You will count as ‘using’ these software entities only
in the same attenuated sense as you ‘use’ your
hippocampus or frontal lobes.

Far better to say that the agent that IS you just is the
larger distributed system.
Speculation:

Such innovations – made increasingly
possible by the combination of web-based
infrastructure and portable technologies
that can learn about the agent as the
agent uses them - will increasingly blur
the boundaries between our own minds
and the technological infrastructures in
which we live, work, and play.
“Google Glasses”, expected to hit the market
within a year, may nudge us in this direction
sooner than we think …
TXM Summary

Portable (or always available/ubiquitous)

Robust

and

Dovetailed (co-adapted)

Augmentations

“PRaDA accessories become you”
Some Worries and Replies

  Adams and Aizawa (2008) find TXM
  ‘outrageous’ and ‘preposterous’ (p.vii).

  Whatever plausibility it has, they suggest, it
  gets by cheating.




Adams, F and Aizawa, K (2008) The Bounds of Cognition
(Blackwell)
First, it relies on a fuzzy, untriangulated notion of ‘cognition’.
We gave no ‘mark of the cognitive’, so how can we tell
where the machinery of cognition lies?



Second, the best candidate for such a mark involves non-
derived contents and they are all said to be found only ‘in the
head’.



Third, there are characteristic properties that the in-the-head
stuff displays that the rest doesn’t, so we can’t (even
bracketing non-derived content) run a functional-sameness
argument here.
So how come anyone is even tempted?

Only thanks (A and A suggest) to:

1. The error of mistaking (mere) causal
coupling for something more profound, more
‘constitutive’.


= rather like mistaking the inputs to a
calculator for part of the machinery that
calculates
and/or 2.

The error of confusing the cognitive process
with the cognitive system

the latter may include (inner and outer) parts
and processes that aid and abet cognition,
without themselves participating in true
cognitive processing.

(= like mistaking the calculator’s casing or
batteries for part of the calculating engine).
Concerning the mark of the cognitive

A and A suggest, as a plausible ‘mark of the cognitive’ the
presence of “non-derived representations governed by
idiosyncratic kinds of processes” (p.10).

The kinds of inscription found in e.g. some online storage fail
to make the grade on both counts.


They involve derived (that is, in some sense humanly
assigned) meanings.

And they do not behave in the same ways as their in-the-
head counterparts (for example, they fail to display various
well-known psychological effects, such as the recency effect
which systematically favors late entries in a list (p.63)).
But notice: non-derived representations (see Clark
(2005) for discussion) are indeed present in any
putative overall cognizing system




Even on the extended view, every extended mind will involve some operations defined over representations whose meanings are non-derived.
So the real question here concerns the acceptability of derived representations or contents as genuine elements in a distributed or hybrid cognitive process that quite clearly involves many non-derived ones too.

I don’t think we have clear intuitions about this

(consider manipulating Venn diagrams in the head).
What about the rest of the clause? “non-derived
representations governed by idiosyncratic
kinds of processes” (p.10).

A and A note that human biological memory systems look to be characterized by certain psychological laws (e.g. primacy, recency, and chunking effects).

But to identify cognitive candidacy by comparison to typical human inner neural processes threatens (see Wheeler (2008)) to be question-begging in the context of this debate.
In any case, we should reject the idea that the
surface psychological laws that happen to
characterize the inner (bio-cognitive) realm in
human agents should in any way define the
cognitive realm itself
Martian bio-memory, even if it didn’t display e.g. the recency and chunking effects found in human neural memory systems, could surely count as an aspect of Martian cognition.
This helps reveal the real role of the Parity Principle (from Clark and Chalmers (1998)).


If, as we confront some task, a part of the
world functions as a process which, were it
to go on in the head, we would have no
hesitation in accepting as part of the
cognitive process, then that part of the
world is (for that time) part of the cognitive
process.
What Parity Isn’t:
PP does NOT require the bio-external elements
to be operating in exactly the ‘same way’ as the
bio-internal elements.

Rather, the Parity Principle is best seen as a
demand that we assess the bio-external
contributions with the same kind of unbiased
vision that we ought to bring to bear on an
alien neural or inner organization.
It is a call not for sameness, but for
sameness of opportunity
Parity Probe =
akin to a ‘veil of metabolic ignorance’

asks what our attitude would be if
currently external means of
information storage and
transformation were found in biology.



= about avoiding a rush to judgment
based on spatial location alone.
PP is a tool that’s meant to help us deploy our pre-theoretic
grip on the cognitive without the distractions of skin and skull.

We surely do have such a grip.

It is only courtesy of such a grip that we can tell that, e.g., the colour or texture of the brain is not (as far as we know) a cognitive-processing-relevant feature.
PP = thus what Mark Sprevak dubs a ‘Fair Play Principle’: it
helps us avoid a rush to judgment based on the spatial location
and/or the processing idiosyncrasies of human wetware.
Indeed, avoiding human wetware chauvinism is
necessary quite close to home, if we are to allow for
e.g. the minds of cats
Suppose cat-brains turn out not to display
some of the signature features of human
memory systems?

Should we conclude that cat-memory is not
real memory?

Adams and Aizawa are alert (p.71-73) to the
worry, but their discussion is revealing…
“These observations suggest a complication in the
evaluation of the hypothesis of extended cognition.
They suggest that we cannot refute the hypothesis of
extended cognition simply on the grounds that the
combination of brain, body, and environment does not
form a conglomerate that is like a normal human
cognitive processor. The combination could have
some general, non-human, kind of cognition…that is
related to human cognition in only a “family
resemblance” kind of way.” (p.72).
But in this passage ‘like a normal human
cognitive processor’ already seems to mean
‘like  a    normal    human     in-the-head
mechanism’.

This makes the response look question-
begging.

For the challenge that the theorist of extended cognition means to raise is precisely a challenge to this very identification.
What about the putative "coupling
/constitution fallacy” in arguments for the
extended mind?


= the fallacy of moving from the causal
coupling of some object or process to some
cognitive agent, to the conclusion that the
object or process is part of (helps
constitute)    the     agent's    cognitive
processing.
"Question: Why did the pencil
think that 2+2=4?

Clark's Answer: Because it was
coupled          to           the
mathematician…. That about
sums up what is wrong with [ the]
extended mind hypothesis.”

From Adams and Aizawa (‘Defending the Bounds of Cognition’)
Question: Why did the V4 neuron ‘think’ that
there was a spiral pattern in the stimulus?
Answer: Because it was coupled to the (rest of
the) monkey.
Let’s try that again:

…..the coupling is what enables the V4 neuron, whose response characteristics are such-and-such, to play the role in virtue of which spiral-pattern detection, in the larger monkey-system, is exhibited.

Unlike, say, the activity created in that neuron in isolation, which wouldn’t be part of any cognitive process at all.
The Appeal to Coupling (Revisited)


Coupling is just what allows extended or distributed cognitive processes to emerge, and be maintained, while processing proceeds.
Examples:
Inter-hemisphere coupling, as in part enabled by
the corpus callosum.


Neural-bodily coupling, as between neural systems
and movements of hand and arm. See e.g. the case
of gesture, discussed at length in Clark (2007) (2008)

Neural-bodily-worldly coupling, as between neural systems, bodily effectors, and bio-external resources such as sketchpads, notebooks, and the web. See e.g. discussions in Clark (2008) Supersizing the Mind.
But still, I agree that not all coupling creates extended
cognitive systems…


Many things (like the weather, or a bang on the head) may
impact cognition but are not thereby parts of the cognizing
machine.




Thought Experiment 1

Suppose the rhythmic pulse of rain on my Edinburgh window
somehow helps the pace and sequencing of a flow of
thoughts.

Is the rain now part of my cognitive engine? Probably not.




Thought Experiment 2

A robot that deliberately seeks those
conditions, because it is designed to use
raindrop sounds to time, sequence, and pace
some internal operations essential to proper
cognizing.


??

Thought Experiment 3


Imagine a robot that evolved to spit
stored water at a plate on its own
body so as to use the auditory signal to
time and sequence key neural
information-processing operations.




Those self-maintained, self-stimulating signals are best
seen (I claim) as part of the cognitive mechanism itself. A
neural clock or oscillator would surely count after all…



Much of advanced cognition involves the deployment of
cognitive processes that create (or sometimes just elicit)
the inputs that continuously drive those and/or other
cognitive processes along (speech, sketching, writing, and
gesture, seem like prime examples of such self-created
systemic inputs).



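A toy Python sketch of the self-stimulation point (all names invented for illustration): the system’s own emitted signal is fed back as the very input that paces its internal processing, which is what makes the usual input-versus-mechanism line hard to draw.

# Toy sketch: the robot's own output (the 'spit' sound) is the input
# that times and sequences its internal processing. Invented names only.

import collections

class SpittingRobot:
    def __init__(self, tasks):
        self.tasks = collections.deque(tasks)
        self.heard = []          # self-generated auditory signals, fed back in

    def spit(self):
        # Output: the robot makes a sound against its own plate...
        self.heard.append("ping")

    def step(self):
        # ...and that self-created input is what paces the next operation.
        if self.heard and self.tasks:
            self.heard.pop()
            task = self.tasks.popleft()
            print(f"processing: {task}")

robot = SpittingRobot(["parse scene", "update map", "plan route"])
for _ in range(3):
    robot.spit()   # the output becomes an input...
    robot.step()   # ...which drives a step of the internal routine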
In these special loop-y contexts, the simple input vs part-of-processing distinction, with its associated ban on counting inputs as parts of processing mechanisms, seems wrong.


= Self-stimulation as one clean route from mere inputs to parts of mechanisms…
Compare: the car makes exhaust fumes (outputs) that are
also inputs that drive the turbo that adds power (often
around 30% more power!) to the engine.


The exhaust fumes are outputs that are also self-
created inputs that surely form a proper part of the
overall power-generating mechanism


= automotive self-stimulation!


Another Kind of Worry

Rob Rupert (2009) looks able to allow the spitting
robot to possess a bodily extended cognizing circuit,
but would reject the use of paper or other off-body
storage
This is because Rupert argues for a special status
for the most portable bundle of processing powers
that characterize the biological organism.



He sees this bundle as the constant target
(implicitly or explicitly) of most work in psychology
and neuroscience.

Various arguments: I’ll look just at two: asymmetry
and integration
Asymmetries
E.g. (Rupert): If you destroy a notebook, a cognizing agent may well replace it. But destroy the brain and that’s (literally) all she wrote!



Or (Harry Collins) When my props and aids go wrong it is I
who have to repair them. They will never repair me.
There seems to be a deep asymmetry, or lopsidedness,
between the role of the notebook and that of the brain.
Reply:


So What?
Take a small part of the neural crew, and very
often ‘I’ can survive perfectly well without it (a
neuron or two, visual cortex, MT)
Similarly, when aspects of my own bio-memory
start to become unreliable, I may deliberately shift
towards alternative means of storage and retrieval.

The apparent lopsidedness (I have to take steps
to offset the loss of my own bio-memory
functioning) does not threaten the claim that, prior
to the loss, those internal resources were
realizing my cognitive activities.
Ditto, then, for the notebooks and sketchpads…
(Sprevak) Don’t hold the external stuff
to higher standards than we’d hold
aspects of the brain’s own
functioning.
Integration

Rupert claims there are severe scientific costs to adopting the extended perspective, as we may begin to lose our experimental grip on the integrated bundles of processing resources (agents) that psychology and neuroscience seek to study.

Sally-the-organism (call that ‘O-Sally’)

O-Sally + iPhone

O-Sally + notebook

O-Sally + Tommy
Re these putative costs


I just don’t see them.



No need to lose our grip on the core biological bundle.
Any more than attention to whole brains makes us lose
track of the special contribution of the hippocampal
bundle, or of the right hemisphere bundle…
The invitation is to let a thousand flowers bloom.
If our goal is to understand what a person (a socially and technologically situated entity) can do, we’d better study the class of systems that includes loops through the body, artifacts, the web, other agents, etc.




If the goal is to understand what the persisting biological
organism alone can do (say, by way of mathematical
reasoning) we might want to restrict the use of all non-
biological props and aids. Fingers yes, notepads no
If it is to discover the stand-alone capacities of the neural
apparatus, we might want to impede subjects from using their
fingers as counting buffers during an experiment. No fingers,
no gestures


If it is to track the contribution of a specific neural sub-
structure, we might want to use TMS to get a better grip on
that.
All these targets are both
theoretically and experimentally
viable!

TXM invites us to tackle them all,
and to do so as part of a single
interdisciplinary   project    of
understanding the distinctively
human mind.
A last question to ponder:

so…is all this potential change and
cognitive ‘upgrading’ a GOOD thing,
or is it a dangerous early step on the
road to some dark and ‘post-human’
future?
A common worry:

To allow all these well-fitted, transparent tools to
count as genuine aspects of OURSELVES is to
lose sight of our essential humanity.

It is to risk a kind of bodily, sensory, and cognitive
dissolution, as we slowly but surely lose track of
where WE stop and the world of tools and
technologies around us begins.

= a kind of personal dissolution into the bio-
technological matrix..
A kind of bodily, sensory, and cognitive BLOAT
Keith Butler tries to stop the bloat by appeal to a notion of the
biological brain as ultimate controller

“Even if external elements sometimes participate in processes
of control and choice ( your software agent might choose
some stocks and shares, and so on) still it is always the
biological brain that has the final say”



So the brain is the controller and chooser of actions in a way
all that external stuff is not.

So the external stuff should not count as part of the real cognitive system. See e.g. Butler (1998); see also Adams and Aizawa (2002, 2008).
But I am not convinced.

Re-applying the “locus of control” criterion inside the head
helps reveal what’s going wrong.

Do we now count as not part of my mind or myself any
neural subsystems that are not the ultimate arbiters of
action and choice?


Suppose only my frontal lobes have the final say: does that shrink the “real mind” to just the frontal lobes!?

What if no subsystem has the ‘final say’ (Dennett)?

Have the mind and self just disappeared?
It is a mistake to think that all those
“cognitive tools” need some kind of
wafer-thin user…

This is where the ghost of Descartes
seeps out from under the contemporary
materialist rug



I think, though, that we should
be MUCH more worried by the
alternative, which is a kind of
unprincipled shrinkage of the
mind and self!
Brainbound’s Last Stand?


Brie Gertler (2007) has argued for what she calls ‘the
narrow mind’ (TNM)

According to TNM, the realm of the mental consists only of the contents of occurrent, conscious processing.

This allows her to reject the arguments for TXM by e.g. rejecting standing beliefs (classing them as not ‘mental’), hence sidestepping the parity considerations.

If only what is active and conscious here and now is
mental, then the physical base of mind (thus reduced)
plausibly does shrink back to well within the bounds of
skin and skull….
But restricting the mental/cognitive to        the
occurrent and conscious is a drastic step

It renders huge swathes of crucial in-head
processing non-mental.


Do we really want to avoid cognitive ‘bloat’ at the
cost of shrinking the mind so dramatically?


This seems scientifically unwarranted and
ethically dubious…
A Closing Story: Deacon
Patrick Jones

Jones     suffers    severe
memory impairments as
a result of repeated
traumatic brain injury.

Yet he lives a surprisingly normal life as a working Catholic deacon in Colorado Springs.

This is not due to any
super hi-tech interventions.
Jones relies upon a combination of the popular
software Evernote, a Mac program for visualization
called Curio, and an iPhone.

Courtesy of these off-the-shelf packages and
devices Jones is able to create massive webs of
interlinked notes and pointers that allow the
saving, searching, retrieving, and diagramming of his
own contacts, thoughts, meetings, decisions,
and interactions.

See “What if HM had a Blackberry?” Gary Marcus,
Psychology Today, December 2008
Amazingly, it is only in virtue of this whole up-and-running web of structure that he is able to recall who he has spoken with, what was decided, and so on.


Yet he carries through complex long-term projects of
pastoral care with incredible skill, optimism, and
good humour.
Patrick’s mental life is now built (it seems to me)
upon a foundation of both biological and non-
biological processing and storage.
If you were to hack into and destroy his EVERNOTE
records, that would be a crime against the person,
not merely a crime against his cyber-property.

It would be tantamount, as Dan Dennett once
commented, to inflicting brain damage on someone
while they sleep.
Issues of ownership and legal protection must
soon loom here.

Do Patrick’s software providers have the right to
delete his records if he fails to keep up
payments?

Do they have the right to cease to support old
software, even if it has become deeply dovetailed
with an ageing human’s biological brain?

What if Patrick and his spouse create a shared
resource then split up?
Issues like these will surely arise as our
cognitive technologies grow better and better,
and the ongoing dovetailing of brains and
technologies becomes more and more
pronounced.

Our laws, educational practice, and social
policy need to plan for a near-future in which
individual    minds     are     web-extended,
technology-permeated artifacts, apt for all
kinds of transformation, repair, extension,
and enhancement
Maybe the best way to do so is to start by recognizing that it’s cognitive technologies all the way down….
