The Shape of Agency: Control, Action, Skill, Knowledge
Joshua Shepherd
Acknowledgements
I had a lot of help in writing this book. For comments, conversation, and
inspiration along the way, I want to thank Al Mele, Myrto Mylopoulos,
Ellen Fridland, Matt Parrott, Denis Buehler, Will Davies, Carissa Véliz,
Uriah Kriegel, Tim Bayne, Nick Shea, Wayne Wu, and Elisabeth Pacherie.
Many thanks to the students in my seminar at Carleton in 2018 for reading
an earlier version of this. For listening to earlier versions of this, and mak-
ing it better, I want to thank Neil Roughley and his group at Duisburg-
Essen, people at the summer mind workshop at Columbia, including John
Morrison, Katia Samoilova, and Antonia Peacocke, Thor Grünbaum and
many at the University of Copenhagen, Chiara Brozzo, Hong Yu Wong, and
many at the University of Tübingen, the whole workshop in the mountains
crew—Balaguer, Buckareff, Downes, Grzankowski, Jacobson, Pasnau,
Roskies, Strevens, and even McKenna—for conversations and encourage-
ment regarding an early version of chapters 7 and 8, Felipe de Brigard,
Santiago Amaya, Manuel Vargas, Walter Sinnott-Armstrong, and the audi-
ence at that Duke workshop, my colleagues at Carleton and also my col-
leagues at Universitat de Barcelona, so many philosophers in the United
Kingdom and Ireland, so many philosophers in Oxford and at the Uehiro
Centre, and also the muses atop the Clarendon building.
For providing funding at various stages of this thing’s development, I
want to thank the European Research Council (Horizon 2020 grant 757698,
for the project Rethinking Conscious Agency), and the Canadian Institute
for Advanced Research’s (CIFAR) program in Mind, Brain, and Consciousness,
and the CIFAR Azrieli Global Scholar program.
For providing space to write, and music, and drinks of various sorts,
I want to thank Doña Rosa in Barcelona, the many pubs of Oxford, and The
Third in Ottawa.
Introduction
I’m hooked. Agents do seem to be importantly distinct from non-agents.
Agents seem to be a special kind of thing, possessed of unique capacities
and thereby capable of special kinds of achievements.
In this book I give voice to this thought. I offer a perspective on agency—
on its minimal conditions and some of its exemplary instances.
The view of agency built in this book is not exactly reductionist. But it
is stripped down. It is individualistic. And it is in large measure, at least
in exposition, ahistorical. This is not to say it is not a product of its time.
One could trace a lineage that draws significant inspiration from
Aristotle, endorses some ideas found in the modern period (in, e.g.,
Hobbes), then begins to pick up steam with thinkers like William James,
and past him diverse mid-twentieth-century sources like Gilbert Ryle, or
Maurice Merleau-Ponty, and from there moves quickly towards the pre-
sent, adding and pruning layers like some kind of self-critical Fibonacci
sequence, by way of Hector-Neri Castañeda, Alvin Goldman, Marc
Jeannerod, Myles Brand, Daniel Dennett, Michael Bratman, Alfred Mele,
Elisabeth Pacherie.
From 2020, we can look back on the development of theories of agency
and action over time, and see that a lot of what passed for philosophical
reflection on action in the history of philosophy appears now as speculative
psychology. Progress in the sciences of the mind has been slow, and full of
fits and starts, but it continues. And from here it seems that earlier accounts
of agency, which leaned heavily on ideas about, and relations between, fac-
ulties or capacities called reason, or the passions, or the intellect, or the will,
are under pressure to accommodate different, mechanistic taxonomies that
make reference to notions like associative learning, task set construction,
sensorimotor adaptation, motor schema, representational format, metacog-
nition, cognitive control, and so on. These mechanistic taxonomies and
these neuropsychological concepts do not render philosophical reflection
on agency irrelevant, of course—if anything, the science of agency raises as
many philosophical questions as it answers. The point is simply that philo-
sophical accounts of agency and agentive phenomena must now be devel-
oped with an awareness that the parts that compose agents are being spliced
into fine levels of grain by a range of intersecting disciplines—neurobiology,
cognitive psychology, cognitive ethology, motor physiology, cybernetics,
and more. What this awareness has done to the book you are reading is that
I have written a book full of concerns that are somewhat abstract and thor-
oughly architectural.
In fact my book is architectural in two senses. In one way I am concerned
with broad structures. I am less concerned with the material that composes
the skeleton, than with the shape of the skeleton. I am concerned with the
basic building blocks of agency in chapters 2 through 5. In chapters 6
through 8 I am concerned with the abstract form of agency, and with how
agents, qua agent, might display excellence of form.
The second sense in which my book is architectural is that, rather than
try to capture the essence of pre-existing agentive notions, I am trying to
build something new. My approach is not conceptual analysis, but more like
Carnapian explication (Carnap 1950; Justus 2012; Shepherd and Justus 2015),
or what lately people have been calling conceptual engineering. Some revi-
sion of pre-existing notions is involved. But the aim is to actually capture
the reality underneath, or at least to develop accounts of phenomena that
might, even if flawed in some respects, promote understanding of the nature
of agents. I would ask readers to bear this in mind when reading the
accounts I offer of control, voluntary control, intentional action, and even
skill. I am aware that usage of these words varies, and that alternative
accounts are available. My claim is that the accounts I develop accurately
capture phenomena of importance, and that they promote fruitful theorizing,
even if some departure from intuitions or common usage is required.
The shape of agency that I trace in this book comes primarily in the form
of accounts of five agentive phenomena: control, non-deviance, intentional
action, skill, and knowledgeable action. These accounts are interlinked.
Control is closely related to non-deviance. Both are important for inten-
tional action. Control, non-deviance, and intentional action undergird an
account of skill. And everything that goes before helps elucidate knowledgeable action.
The aim is not to make good on the metaphor of two planes so much as
explain its allure by explaining the ways in which agents, as agents, are special.
Agents are special things, in that they are unique amalgamations of properties,
of causal powers. They have a unique kind of structure. This is not to say that
they do not fit perfectly within the natural order, whatever that is.
The Blueprint
Chapters 2 through 5 concern basic building blocks of agency. In chapter 2 I
develop an account of control’s possession. Key notions are the agent’s plans
(or plan-states), the agent’s behavioral patterns, and the circumstances in
which plans (or plan-states) help to cause an agent’s behavioral patterns. An
agent possesses control over her behavior when she is constituted in a way
such that in circumstances we must carefully specify, her behavioral patterns
repeatedly and flexibly match her plans for behavior.
In chapter 3 I develop an account of non-deviant causation.
In chapter 4 I leverage the earlier discussion to offer an account of con-
trol’s exercise. Roughly, control’s exercise essentially involves non-deviant
causation, and non-deviant causation is what happens when agents that
possess control behave in normal ways in implementing plans in certain
circumstances. I also apply this account, along with additional considerations, to offer an explication of voluntary control, and to illuminate volun-
tary control’s relationship to nearby notions of direct control, and indirect
control. I also extend the explication of voluntary control to the notion of
what is “up to” an agent.
In chapter 5 I develop an account of intentional action. It transpires that
intentional action is the exercise of a sufficient degree of control in bringing
behavior to approximate a good plan. Laying out this view of intentional
action takes some work, and I anticipate complaints. So I go on to consider
a number of ancillary issues and potential objections. I also consider this
account in relation to frequent complaints levied against causalism about
intentional action.
Chapters 2 through 5 might be thought of as the book’s first part.
Chapters 6 through 8 are a second part, with chapter 6 as a kind of hinge.
The main aim in this second part is to work towards an understanding of
agentive excellence.
In chapter 6 I discuss the nature of agency. I do not lay out a specific
account, but I try to render vivid the thought that agency is essentially a
matter of a system structured so as to make appropriate the application of
behavioral standards—frequently, rational standards—to the system, at
least some of which the system is able to satisfy. This discussion foregrounds
the accounts I offer in chapters 7 and 8. These are accounts of modes of
agentive excellence.
In chapter 7, skill is at issue. I develop thoughts about the targets of
skill—especially about what I call an action domain. I also offer a novel
account of skill, and of skill’s gradability. I then consider the role of knowledge in an account of skill, and argue that although knowledge is frequently
critical for skill, it is not necessary.
In chapter 8, knowledgeable action—action that in turn involves knowledge of what I am doing and how—is at issue. Many have found knowledge of
action particularly interesting, and epistemically unique. I develop an account
of the epistemic credentials of knowledge of action, I discuss competitors,
and I illuminate how action that involves knowledge of action qualifies as a
mode of agentive excellence.
Let’s get it.
set out standards for success that they have learned to achieve again
again and again and again.
I have said that control is necessary for activity—for action, for agency.
I have not said that only agents possess control. Engineers and biologists
find the language of control useful, and their usage is similar to my own.
A system or sub-system with control is a system or sub-system whose
behavior can be modeled in terms of goal-states and processes that drive
the system or sub-system into states that constitute or cause the satisfac-
tion of these goal-states (cf. Dennett 1984). Such a system may not qualify
as an agent.
The trick, for an engineer or biologist, is to understand the joints and
levers of the system—to understand how the control is exercised. The trick
for the philosopher of mind or agency is to elucidate the philosophically
interesting components of controlled behavior and their relations set in the
broader context of philosophical reflection on the nature of agents. I want
to know what it is for the agent, as opposed to some non-agential system, or
some sub-system within the agent—her early visual system, or her circula-
tory system, or whatever—to exercise control.
2.2 Control’s Exercise
When an agent exercises control, they deploy behavior in the service of a cer-
tain class of mental states. The class I have in mind may be as narrow as the
class of intentions. Or it may be broad. Perhaps desires, urges, various emo-
tional states, states with imperatival content (arguably: pain states), or even
certain perceptual states could qualify. That will depend on one’s account of
the contents and functions of such states. Perhaps packages of states could
together qualify. Some think, for example, that an intention is really just a
package of a desire and a certain kind of belief (Davis 1984; Sinhababu 2017).
Or we could think of control with respect to a package of intentions, sub-
intentions, associated beliefs, and so on. I’m neutral on all this.
My requirements: in order to be served by controlled behavior, a mental
state or package of states (call it M) should (a) represent (series of) events, states of
affairs, or whatever, as to be done, or eventuated (that is, it should set out
a goal); (b) play a causal role in the production of (or, at minimum, attempts
to produce) the thing to be done (that is, M should move the agent
towards the goal); and (c) qualify as a state or states of the agent, as opposed to
some sub-system of the agent. Notice: (b) requires that the state move the
agent in the right direction, towards the goal. I require that this not be
accidental. The state (or package) that moves the agent towards the goal
should, then, do so at least in part because the state’s content sets out a way to proceed towards the goal.
The third requirement is shifty, invoking, as it seems to, either a distinction
between personal and sub-personal levels (Dennett 1969), or something
like a distinction between doxastic and sub-doxastic states (Stich 1978). I
rely on an intuitive understanding of states of the agent for now. (More
detailed psychological architectures for particular agents would bring into
play more detailed criteria for marking the distinction.) On the intuitive
understanding, the agent’s intentions, beliefs, fears, and so on are at the level
of the agent. But states of the agent’s early visual system, or states that regu-
late the agent’s updating of long-term memory—states like these do not
qualify. So, while the processing in early visual cortex is plausibly controlled
processing, it does not qualify as control that the agent exercises. I discuss
this issue further at chapter 5.5.3.
I need a term of convenience to refer to the relevant class of mental states.
For reasons that will become apparent, I will call them plan-states. When
agents deploy behavior in service of plan-states, they aim at success. Success
involves a match between behavior and (certain aspects of) the representa-
tional content of the plan-state. That is what it is for behavior to be in ser-
vice of a plan-state. Such behavior is directed towards matching aspects of
the content of such a state.
A basketball player intends to make a shot. We can stipulate that the rep-
resentational content of the intention includes the following plan: “square to
the basket, locate the rim, aim just beyond the front of the rim, follow
through, make the shot.” When the agent is successful, she executes her
intention as planned—she squares to the basket, locates the rim, aims just
beyond the front of the rim, follows through, and makes the shot.
Talk of a content match between (aspects of) a plan-state and behavior
raises the following question: what is the representational content of a plan-
state, and what aspects of it are relevant? I will call the relevant aspects
means-end aspects. I’m building on Myles Brand’s work on the content of
intentions, on which the content of an intention is a plan. Here is how Brand
introduces the idea:
An intentional action can be a momentous occasion in one’s life, such as
marrying, or it can be a mundane occurrence, such as showering in the
morning; but in all cases, the agent is following his plan when acting. He
has before his mind, as it were, a pattern of activity to which he brings his
actions into conformity. (Brand 1986: 213)
As Brand notes, it is not immediately obvious what plans are. We sometimes
talk of them in ways that suggest that plans are psychological states. But we
sometimes talk of them in ways that suggest plans are abstract objects (e.g.,
“There is a plan for world peace but no one has thought of it and no one
will” (218)). So perhaps plan-types are abstract objects and agents, via psy-
chological states, token some of these types. Settling the ontology of plans
here is not necessary.
We need some sense of the structure plans take, as well as of the kinds of
content plans can embed.
Regarding the structure plans take, Brand notes that very simple plans
need be little more than a linearly ordered series of steps. Plausibly a plan
could involve only one step: move left, wiggle finger, scream, or whatever.
But complex plans might involve conditional structures, sets of sub-goals,
embedded contingency plans specifying what results count as second or
third best, and so on. Brand (1986: 219) suggests we model plans as ordered
triples, like so:
Φ = ⟨A, h, g⟩
Here A is a set of action-types (although one could say behavior-types
instead, with action-types as a subset of these), h is a function on A that
orders its members in terms of dependency relationships, and g specifies
which results (events, states of affairs, or whatever) are the actual goals or
subgoals embedded in the plan. Brand argues that g is necessary to capture
plan structure because two agents could share A and h while having
different goals. “You and I might follow the same recipe in baking a cake,
yet act on different plans. The goal of your plan might be to produce a
finished cake, whereas the goal of my plan might be to test the recipe;
nevertheless, we both performed the same types of actions in the same
order” (219).
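To fix ideas, here is a minimal sketch of Brand’s triple in Python. The sketch is purely illustrative: the encoding of behavior-types as strings, of h as precedence pairs, and all of the particular names are conveniences of mine, not anything in Brand’s text.

```python
from dataclasses import dataclass

# Illustrative encoding of Brand's ordered triple Phi = <A, h, g>.
# A: a set of behavior-types; h: a dependency ordering over A; g: the goal(s).

@dataclass
class Plan:
    behavior_types: set      # A
    dependencies: list       # h, as (earlier, later) precedence pairs
    goals: set               # g

recipe_steps = {"mix butter and cocoa", "pour batter", "bake"}
recipe_order = [("mix butter and cocoa", "pour batter"), ("pour batter", "bake")]

# Brand's cake case: two bakers share A and h but act on different plans,
# because their plans embed different goals (g).
your_plan = Plan(recipe_steps, recipe_order, {"produce a finished cake"})
my_plan = Plan(recipe_steps, recipe_order, {"test the recipe"})

assert your_plan.behavior_types == my_plan.behavior_types
assert your_plan.dependencies == my_plan.dependencies
assert your_plan.goals != my_plan.goals
```

The only point of the sketch is structural: a specification of goals (g) is needed over and above A and h, since the same behavior-types in the same order can serve different plans.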
I agree with Brand that a specification of the goal or goals is important.
This specification sets a standard for success. I might be aiming to get the
taste of butter to mix with the cocoa just right. You might be looking to give
something to your dog for its birthday. We might produce very similar
cakes, with my effort largely a failure, and yours a smashing success. The
difference is in the goals. Perhaps my performance suffered. Or perhaps I
had a bad plan—that is, perhaps the behaviors in A or the dependency
relationships specified by h were poorly chosen or poorly constructed. This
suggests a distinction between the quality of performance of the behavior-
types a plan specifies, and the quality of satisfaction of the goal(s) a plan
specifies.2
As important as goals are the dependency relationships between
behavior-types. A plan for shaving involves the application of shaving cream
and the passing of a razor over the skin. It is important that one happen
before the other. Other plans could embed contingency structures, back-up
strategies, specification of next-best outcomes, and so on. Many behavior-
types could be represented by a plan, with some to be performed only on
certain branches of a tree, and only in certain orders, owing to the way the
plan orders the importance of the goals to be achieved.
The dependency relationships in a plan thus set up a means-end struc-
ture as internal to, constitutive of, the nature of plans. Behaviors are indexed
to goals as means, and they are weighted against other possible behaviors
given various contingencies.
My claim is that controlled behavior is behavior that, with additional
constraints added below, approximates means-end aspects of a plan-state.
Brand speaks of the agent bringing actions into conformity with intentions.
Lilian O’Brien (2012) speaks of a matching between movements and con-
tents of intentions. The idea here is similar. The aspects at issue are the
means an agent takes, the ends in view, and the dependency relationships
between various means and various ends. One could pull these apart and
speak of an agent’s fluency at various aspects in isolation:
Agent A performs the behaviors perfectly, but gets them out of order, and
fails to achieve the end.
Agent B performs the behaviors poorly, but gets the order right, and fails to
achieve the end.
Agent C performs the behaviors perfectly, and gets the order wrong, and
achieves the end.
2 The distinction between behavior-types and goals is useful, and in some cases necessary to
capture the structure and content of a plan. But in many cases it is plausible to think that the
goal and the behavior-type share a referent. The goal will simply be to perform the
behavior-type as specified.
In one sense Agent C got lucky. Usually, achieving ends requires some level
of proficiency at behavioral execution, and at following the steps of a plan.
In some cases, however, the end is achieved anyway. And of course in other
cases, behavioral execution is perfect, as are the steps of the plan, and the
agent fails. Perhaps the plan was risky. Perhaps it contained a fatal flaw. My
son recently intended to eat a delicious piece of cherry candy by surrepti-
tiously swiping the candy I held and shoving it mouthward. A flawed plan
in one respect. For I held a disgusting cola-flavored piece of candy. So my
son failed to achieve his aim.
The relevance of these distinctions is that when we speak of behavior
conforming to aspects of a plan, we may have one of many aspects in mind.
I will tend to gloss over this, speaking of behavior conforming to or approximating a plan. If it is important, however, we could always be more specific,
and speak of plan quality, or of behavior conforming to a particular goal, or
a particular means.
In general, then, controlled behavior involves a match or an approximation between behavior and aspects of the agent’s plan. In particular, it
involves a match or approximation between behavior and the ends (or
goals) embedded in the plan, or between behavior and the means as indexed
to specific ends. So we can speak of control with respect to a specific end, or
a specific means, or with respect to the plan taken as a whole. But events not
represented as contributing to the furtherance of the plan are not events
under the control of the agent.
Regarding the kinds of content a plan can embed: this will depend upon
the agent in question. Regarding humans the question is largely empirical. I
say largely because there is a limit on what kinds of content could feature in a
plan given the structure plans are supposed to take. Consider an iconically
structured visual representation of a scene. Following Green and Quilty-
Dunn (2017), this is a representation that meets the following principles.
First, “Every part of the representation represents some part of the scene
represented by the whole representation.” Second, “Each part of the repre-
sentation represents multiple properties at once, so that the representation
does not have separate vehicles corresponding to separate properties and
individuals” (11). A representation that meets these principles has very little
internal structure, and given the way such a representation compresses
information, it is difficult to abstract away from its parts. One lesson to draw
from this is that some states, such as a simple iconic visual representation of
a scene, may not be able to encode any kind of goal, and may have trouble
encoding any structured sequence of behavior-types. (An icon could,
however, serve as the specification of a goal. But a goal alone does not make a
plan.) The psychological states that direct behavior need more than this.
How much more is open for some debate. Philosophers tend to talk of
intentions as propositional attitudes. If the content of intentions is proposi-
tionally structured, then it is well suited for expressing plans. For proposi-
tions are systematic and recombinable, easily capable of representing
sequences of behavior-types and of embedding goals. But it is arguable that
we should not think of intentions as (exclusively) propositional attitudes
(Coffman 2018). And anyway there are probably ways of tokening plans and
goal-states without recourse to fully propositional structure. Philosophers
have argued that we engage in practical reasoning via non-propositional
representational states: map-like representations (Camp 2007), or analogue
magnitude representations (Beck 2014), or mental imagery (Gauker 2011),
or combinations of these (Shepherd 2018a). As I say, that is an empirical
question, and is not my chief focus here.
Plausibly, then, plans can take a variety of representational forms (see
Jeannerod 2006). The present point is that in order to exercise control over
behavior, an agent needs a capacity to represent a plan for behavior, how-
ever simple or complex.
So one’s representational capacities are one source of restriction on pos-
sessing a plan-state. Are there any others? Some philosophers have debated
whether one could try, or intend, to do what one believes impossible
(Thalberg 1962; Mele 1989; Ludwig 1992). These are not quite my questions
here. I am talking about plan-states generally, and intentions are only one
kind of plan-state. Intention possession may have additional restrictions
that plan-states like urges, or sensory motivational states, or motor repre-
sentations, do not. I am asking what is required for a system to possess a
plan-state.
I am not asking what is required for a system to possess a plan-state with
a specific content—a plan-state to A, for example, where A is an action vari-
able. I have not offered an account of action yet. We are at a preliminary stage.
Even so, a worry similar to the one regarding intending the impossible
arises here. Is it possible for a system to possess a plan-state, if the system
cannot—lacks the capacity to—execute the plan? This kind of worry does
not really arise with respect to simpler systems. If the states a simpler
system tokens do not have the function of bringing about behavior that
resembles the content of the state, there is little reason to consider the state a
plan-state. Of course a system can find itself in unfavorable circumstances,
and token a plan-state that normally leads to success. Such a system could
have a plan-state that is, in those circumstances or at that time, impossible
for it to execute. We need to ask a more general version of the question. Is it
possible for a system to possess a plan-state, if the system lacks the capacity
to execute the plan in any circumstance in which the system could
be placed?
In more complex systems, systems capable of some degree of delusion or
self-deception, it becomes possible to envision a case in which the system
has a plan for doing something, where the something is a thing the system
cannot, in any case, do. Is that really a plan-state?
I suppose philosophers could disagree about this. Here is what I want to
say. It is too strong to require that the system have the capacity to perfectly
execute the plan. We should allow that a system can token plan-states that
systematically aim too high, for example. What we should require is that the
system have the capacity to cause (execute) some part of the plan. That is, in
order to token plan-states, a system should have some causal potency. It is a
minimal requirement, but a requirement nonetheless.
Causal potency can be understood as those causal powers (or dispositions) by which an agent behaves—causes things. To a rough approximation, an
agent’s exercise of causal potency can be measured in degrees, by indexing
the exercise to a specific plan, or to a part of a plan. We can, for example,
define approximation-level and perfect-level potency.
Approximation-level Potency. An agent J possesses approximation-level
potency with respect to (means-end aspects of) plan-state P in circum-
stances C to degree D if and only if for J, P in C can play a causal role in the
production of behavior that approximates (means-end aspects of) P’s con-
tent to degree D in C.
Perfect-level potency. An agent J possesses perfect-level potency with
respect to (means-end aspects of) plan-state P in circumstances C to degree
D if and only if for J, P in C can play a causal role in the production of
behavior that perfectly matches (means-end aspects of) P’s content to
degree D in C.
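Stated schematically (the symbolization is mine, a notational convenience and nothing more), with me(P) abbreviating the means-end aspects of P’s content:

\[
\begin{aligned}
\textit{Approximation-level:}\quad & \mathrm{Pot}^{\approx}(J, P, C, D) \;\leftrightarrow\; \text{for } J \text{ in } C,\ P \text{ can help cause behavior } B \text{ such that } B \text{ approximates } \mathrm{me}(P) \text{ to degree } D.\\
\textit{Perfect-level:}\quad & \mathrm{Pot}^{=}(J, P, C, D) \;\leftrightarrow\; \text{for } J \text{ in } C,\ P \text{ can help cause behavior } B \text{ such that } B \text{ perfectly matches } \mathrm{me}(P) \text{ to degree } D.
\end{aligned}
\]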
The possession of these levels of causal potency is not sufficient for the
possession of corresponding forms of control. Consider Frankie:
Batter. Frankie stands in the batter’s box, trembling. Frankie tends to strike
out, and he has never hit a home run before. Part of the problem is his
swing: an ugly, ungainly motion that rarely approaches the ball. In batting
practice, Frankie’s coach will put a ball on a tee in front of him. Frankie hits
the tee more often than the ball. Even so, Frankie recently saw a film that
convinced him one simply needs to believe in oneself. Thus convinced,
Frankie eyes the pitcher and whispers to himself, “Just believe, Frankie!” He
then shuts his eyes and intends the following: “Swing hard, and hit a home
run!” Here comes the pitch. With eyes still closed, Frankie swings hard and
connects, producing a long fly ball deep to left field that lands just beyond
the fence.
In his specific circumstances, Frankie possesses perfect-level causal potency
regarding his intention to hit a home run in the given circumstances. Even
so, the home run does not constitute an exercise of control (even if the eyes-
closed swing of the bat does, to some degree).
What else does Frankie need? It is tempting to say that Frankie, or
Frankie’s intention, needs to bring about the home run in the right way.
Frankie’s swing, which by stipulation was an ugly, ungainly thing, is
analogous to a case Al Mele (1992) introduced regarding a philosopher.
This philosopher wanted to distract someone, and so intended to knock
over a glass. But this intention upset him such that his hand began to
shake uncontrollably, thereby knocking the glass over. The philosopher
seems to have even less control than Frankie—in both cases the result
accorded with the intention, but deviantly.
Consider the following as an account of control’s exercise:
EC*. An agent J exercises control in service of a plan-state P to degree D if
and only if J’s non-deviantly caused behavior approximates (means-end
aspects of) the representational content of P to degree D.
There is something right about EC*. First, it rules out cases like Batter as
exercises of (high degrees of) control. Second, it is a very plausible idea that
the degree of control an agent exercises has to do with the degree of approximation between behavior and plan content. An intention sometimes causes
behavior that fails to perfectly follow the plan, and thus fails to perfectly
match the content of the intention. Becky intends to make a shot that is all
net—that goes in without hitting the rim or backboard. But the ball hits the
front of the rim, then the backboard, and drops in. Clearly Becky exercised
a degree of control—the shot was very close to all net, so close that it went
in. But her behavior failed to perfectly match her intention. (If Becky bet
money on making it all net, this failure will be important.) Assuming that
the plan is exactly the same, it seems Becky exercises less control regarding
her intention if the shot is an inch shorter, hits the front rim and does not
drop in, and even less if she shoots an airball. Third, EC* seems to capture a
core truth about control’s exercise: the exercise of control essentially
includes an agent’s bringing behavior to match the content of a relevant plan.
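As a toy illustration of this graded matching, consider a sketch in Python. The fractional scoring scheme and the particular step descriptions are assumptions of the sketch, not part of EC*, and non-deviant causation is simply stipulated rather than modeled.

```python
# Toy scoring of behavior against the means-end content of a plan-state.

def degree_of_match(plan_steps, behavior):
    """Fraction of plan steps realized in the order the plan specifies."""
    matched, i = 0, 0
    for step in plan_steps:
        if step in behavior[i:]:
            i = behavior.index(step, i) + 1
            matched += 1
    return matched / len(plan_steps)

becky_plan = ["square up", "locate the rim", "follow through", "all net"]

swish      = ["square up", "locate the rim", "follow through", "all net"]
rattled_in = ["square up", "locate the rim", "follow through", "off the rim, drops in"]
airball    = ["square up", "follow through"]

for behavior in (swish, rattled_in, airball):
    print(degree_of_match(becky_plan, behavior))   # 1.0, then 0.75, then 0.5
```

On a measure like this, Becky’s rattled-in shot exercises less control than the swish, and the airball less still, which is the ordering EC* is meant to deliver.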
But EC*’s appeal to non-deviant causation is problematic. If there is no non-
circular account of non-deviant causation in the offing, then we will rightly
suspect that the account on offer is superficial. In effect, EC* will tell us that the
exercise of control is essentially a matter of an agent’s bringing behavior to
match the representational content of a relevant intention in a controlled way.
I think there is a solution to this problem. It stems from reflection on
control’s possession.
2.3 Control’s Possession
The agent exercises control when she behaves in a certain way, driven and
guided by a plan and her commitment to it. In order to exercise control,
agents must have control.
When somebody does something that seems lucky, we wonder if they
could do it again. If they can do it again and again and again, we no longer
believe it lucky. We think they have some control over what’s going on.
Agents that possess control are agents that can repeatedly execute a plan for
behavior.
It’s one thing to repeatedly execute a plan in very similar circumstances.
But the world is capricious. We might want to see if the agent is poised to
handle extenuating circumstances as she brings behavior in line with
aspects of a plan. If so, the agent possesses flexible repeatability.
In general, an agent in possession of control with respect to some plan-
state is an agent poised to repeatedly execute that plan, even in the face of
extenuating circumstances.
To illustrate: hold fixed Frankie’s intention and suppose a number of
things. Maybe the ball comes in 1 mph faster or slower, or an inch higher or
lower, or Frankie’s muscles are slightly more fatigued, or Frankie produces a
slightly different arc of swing. We can vary Frankie’s circumstances any way
we like and ask: across this set of circumstances, how frequently does
Frankie evince the potency he evinced when he hit the home run? The
answer to this question will give us a measure of the control Frankie
possesses regarding his intention.
In order to make sense of flexibility and repeatability, we have to specify a
certain set of circumstances. This is not necessarily to say that the posses-
sion of control is composed (even in part) of extrinsic properties. In dis-
cussing her view of causal powers, Rae Langton distinguishes between
extrinsic properties and relational properties, as follows: “whether a prop-
erty is extrinsic or intrinsic is primarily a metaphysical matter...whether a
property is relational or non-relational is primarily a conceptual matter: it is
relational just in case it can be represented only by a relational concept”
(2006: 173). As Langton notes, it is natural to view causal powers as both
intrinsic and relational: intrinsic because such powers are “compatible with
loneliness” and relational because “we need to talk about other things when
describing it” (173). This view is available regarding the control an agent
possesses.
Many agents are plastic—we lose limbs, muscle tissue, brain cells. Our
control is therefore plastic across circumstances. We learn novel ways of
performing tasks, and become adept with various tools. Andy Clark claims
that our brains are “open-ended opportunistic controllers”—our brains
“compute, pretty much on a moment-to-moment basis, what problem-solving
resources are readily available and recruit them into temporary problem-
solving wholes” (2007: 101). I think he’s right. It follows that circumstances
impact the amount of control we possess regarding our plans. So the
specification of a set of circumstances requires care.
We get viable and interesting measures of control only when the set of
circumstances is well selected. A set of circumstances is well selected when
we follow principles for set selection that roughly mirror principles for build-
ing an accurate causal model of the agent as embedded in a broader causal
system that comprises the kinds of circumstances in which we are interested.
So, for example, the set should be sufficiently large. Think of a set of cir-
cumstances with only two members: the case in which Frankie hits a home
run, and a case in which he misses the ball. This set is not informative: we
need a large number of cases before we get any useful information regard-
ing just how lucky Frankie’s home run was. A set is sufficiently large when
adding members does not substantively impact the resulting measure of
control.
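Here is a minimal sketch of such a measure, assuming a toy model of Frankie and a simple perturbation scheme for the circumstances. Every particular here (the success test, the ranges of variation, the sample sizes) is an illustrative assumption, not a proposal about how the set must be built.

```python
import random

# Estimate possessed control as the frequency of success across a sampled
# set of circumstances, holding the plan-state (the intention) fixed.

def frankie_connects(pitch_speed, pitch_height, fatigue):
    """Stand-in agent model: does this eyes-closed swing produce a home run?"""
    return (abs(pitch_speed - 85.0) < 0.2
            and abs(pitch_height - 1.00) < 0.01
            and fatigue < 0.05)

def estimate_control(trials, seed=0):
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        speed = 85.0 + rng.uniform(-1.0, 1.0)        # pitch 1 mph faster or slower
        height = 1.00 + rng.uniform(-0.025, 0.025)   # an inch higher or lower
        fatigue = rng.uniform(0.0, 1.0)              # muscles more or less fatigued
        successes += frankie_connects(speed, height, fatigue)
    return successes / trials

# The set is sufficiently large once adding members stops moving the measure.
print(estimate_control(1_000), estimate_control(100_000))   # both near zero
```

That both estimates sit near zero is the point: Frankie’s one home run coexists with very little possessed control, because across the well-selected set he almost never evinces the relevant potency.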
Further, the circumstance selector should accurately specify the parameters that are fixed, and the parameters that vary. In some cases the selector