Introduction to Formal Argumentation Theory
Federico Cerutti and Nir Oren
Cardiff University, University of Aberdeen
CeruttiF@cardiff.ac.uk, n.oren@abdn.ac.uk
Cerutti, Oren (Cardiff, Aberdeen) 1 / 203
From Structured to Abstract Argumentation
Cerutti, Oren (Cardiff, Aberdeen) 2 / 203
Does MMR vaccination cause autism?
Cerutti, Oren (Cardiff, Aberdeen) 3 / 203
Supporting Reasoning with Different Types of Evidence in
Intelligence Analysis
Alice Toniolo, Timothy J. Norman (Dept. of Computing Science, University of Aberdeen, UK)
Anthony Etuk, Federico Cerutti (Dept. of Computing Science, University of Aberdeen, UK)
Robin Wentao Ouyang, Mani Srivastava (University of California, Los Angeles, CA, USA)
Nir Oren (Dept. of Computing Science, University of Aberdeen, UK)
Timothy Dropps, John A. Allen (Honeywell, USA)
Paul Sullivan (INTELPOINT Incorporated, Pennsylvania, USA)
Appears in: Proceedings of the 14th International Conference on Autonomous Agents and
Multiagent Systems (AAMAS 2015), Bordini, Elkind, Weiss, Yolum (eds.), May 4-8, 2015, Istanbul, Turkey.
[Ton+15]
Cerutti, Oren (Cardiff, Aberdeen) 4 / 203
Caveat
[BL08] [PS13]
Cerutti, Oren (Cardiff, Aberdeen) 5 / 203
Douglas Walton
Chris Reed
Fabrizio Macagno
ARGUMENTATION
SCHEMES
[WRM08]
Cerutti, Oren (Cardiff, Aberdeen) 6 / 203
Argumentation scheme for argument from correlation to cause
Correlation Premise: There is a positive correlation between A and B.
Conclusion: A causes B.
Critical questions are:
CQ1: Is there really a correlation between A and B?
CQ2: Is there any reason to think that the correlation is any
more than a coincidence?
CQ3: Could there be some third factor, C, that is causing both A
and B?
Cerutti, Oren (Cardiff, Aberdeen) 7 / 203
The Knowledge Engineering Review, Vol. 26:4, 487-511. © Cambridge University Press, 2011
doi:10.1017/S0269888911000191
Representing and classifying arguments on the
Semantic Web
IYAD RAHWAN, BITA BANIHASHEMI, CHRIS REED,
DOUGLAS WALTON and SHERIEF ABDALLAH
[Rah+11]
Cerutti, Oren (Cardiff, Aberdeen) 8 / 203
(Diagram: the argument-network ontology.) A Graph (argument network) has-a Node and has-a Edge. A Node is-a Information Node (I-Node) or is-a Scheme Node (S-Node). S-Nodes are rule of inference application nodes (RA-Nodes), conflict application nodes (CA-Nodes), preference application nodes (PA-Nodes), or derived concept application nodes (e.g. defeat). Each application node uses a scheme contained in the Context: rule of inference schemes (logical or presumptive inference schemes), conflict schemes (e.g. logical conflict schemes), and preference schemes (logical or presumptive preference schemes).
Cerutti, Oren (Cardiff, Aberdeen) 9 / 203
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Cerutti, Oren (Cardiff, Aberdeen) 10 / 203
Early report
Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and
pervasive developmental disorder in children
A J Wakefield, S H Murch, A Anthony, J Linnell, D M Casson, M Malik, M Berelowitz, A P Dhillon, M A Thomson,
P Harvey, A Valentine, S E Davies, J A Walker-Smith
Cerutti, Oren (Cardiff, Aberdeen) 11 / 203
Support
What else should
be true if the
causal link is true?
Cerutti, Oren (Cardiff, Aberdeen) 12 / 203
(Wakefield et al, 1998)
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Behavioural symptoms
were associated by
parents of 12 children
Witn
Cerutti, Oren (Cardiff, Aberdeen) 13 / 203
The New England
Journal of Medicine
Copyright © 2002 by the Massachusetts Medical Society
VOLUME 347, NOVEMBER 7, 2002, NUMBER 19
A POPULATION-BASED STUDY OF MEASLES, MUMPS, AND RUBELLA
VACCINATION AND AUTISM
KREESTEN MELDGAARD MADSEN, M.D., ANDERS HVIID, M.Sc., MOGENS VESTERGAARD, M.D., DIANA SCHENDEL, PH.D.,
JAN WOHLFAHRT, M.Sc., POUL THORSEN, M.D., JØRN OLSEN, M.D., AND MADS MELBYE, M.D.
Cerutti, Oren (Cardiff, Aberdeen) 14 / 203
Support
Cerutti, Oren (Cardiff, Aberdeen) 15 / 203
(Madsen et al, 2002)
Support
What else should
be true if the
causal link is true?
Support
Support
Cerutti, Oren (Cardiff, Aberdeen) 16 / 203
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Behavioural symptoms
were associated by
parents of 12 children
Witn
CQ1: There is no
correlation between
MMR vaccination
and autism
CON
E-2-H
No statistical
correlation over
440,655 children
Cerutti, Oren (Cardiff, Aberdeen) 17 / 203
ASPIC+
[Pra10] [MP13]
[MP14]
Cerutti, Oren (Cardiff, Aberdeen) 18 / 203
ASPIC+
An argumentation system is a tuple AS = ⟨L, ‾, R, ≤, ν⟩, where:
‾ : L → 2^L is a contrariness function s.t. if ϕ ∈ ψ̄ and:
ψ ∉ ϕ̄, then ϕ is a contrary of ψ;
ψ ∈ ϕ̄, then ϕ is a contradictory of ψ (ϕ = −ψ);
R = Rd ∪ Rs: strict (Rs) and defeasible (Rd) inference rules s.t.
Rd ∩ Rs = ∅;
≤ is an ordering on Rd;
ν : Rd → L is a partial function.ᵃ
P ⊆ L is consistent iff there are no ϕ, ψ ∈ P s.t. ϕ ∈ ψ̄; otherwise it is inconsistent.
A knowledge base in an AS is Kn ∪ Kp = K ⊆ L; {Kn, Kp} is a partition
of K; Kn contains axioms that cannot be attacked; Kp contains
ordinary premises that can be attacked.
An argumentation theory is a pair AT = ⟨AS, K⟩.
ᵃ Informally, ν(r) is a wff in L which says that the defeasible rule r is
applicable.
Cerutti, Oren (Cardiff, Aberdeen) 18 / 203
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Behavioural symptoms
were associated by
parents of 12 children
Witn
CQ1: There is no
correlation between
MMR vaccination
and autism
CON
E-2-H
No statistical
correlation over
440,655 children
α
β
γ
δ
ε
Cerutti, Oren (Cardiff, Aberdeen) 19 / 203
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Behavioural symptoms
were associated by
parents of 12 children
Witn
CQ1: There is no
correlation between
MMR vaccination
and autism
CON
E-2-H
No statistical
correlation over
440,655 children
α
β
γ
δ
ε
β =⇒ α
γ =⇒ β
ε =⇒ δ
δ ∈ β̄
Cerutti, Oren (Cardiff, Aberdeen) 20 / 203
ASPIC+
An argument a on the basis of an AT = ⟨AS, K⟩, AS = ⟨L, ‾, R, ≤, ν⟩, is:
1 ϕ if ϕ ∈ K with: Prem(a) = {ϕ}; Conc(a) = ϕ; Sub(a) = {ϕ};
Rules(a) = DefRules(a) = ∅; TopRule(a) = undefined.
2 a1, . . . , an −→ / =⇒ ψ if a1, . . . , an, with n ≥ 0, are arguments
such that there exists a strict/defeasible rule
r = Conc(a1), . . . , Conc(an) −→ / =⇒ ψ ∈ Rs/Rd.
Prem(a) = ⋃ᵢ₌₁ⁿ Prem(ai); Conc(a) = ψ;
Sub(a) = ⋃ᵢ₌₁ⁿ Sub(ai) ∪ {a};
Rules(a) = ⋃ᵢ₌₁ⁿ Rules(ai) ∪ {r};
DefRules(a) = {d | d ∈ Rules(a) ∩ Rd};
TopRule(a) = r.
a is strict if DefRules(a) = ∅, otherwise defeasible; firm if
Prem(a) ⊆ Kn, otherwise plausible.
Cerutti, Oren (Cardiff, Aberdeen) 21 / 203
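To make the construction above concrete, here is a minimal, illustrative Python sketch (defeasible rules only, no strict rules and no axiom/premise distinction) that builds every argument obtainable from the MMR example: ordinary premises γ and ε and the defeasible rules γ ⇒ β, β ⇒ α and ε ⇒ δ from the earlier slide. Representing an argument as a (conclusion, sub-arguments, top rule) tuple is an assumption of the sketch, not the authors' implementation.

from itertools import product

# Ordinary premises and defeasible rules for the MMR example (names as on the slides).
premises = {'gamma', 'epsilon'}
defeasible_rules = [(('gamma',), 'beta'),     # gamma   => beta
                    (('beta',), 'alpha'),     # beta    => alpha
                    (('epsilon',), 'delta')]  # epsilon => delta

def build_arguments(premises, rules):
    # An argument is a tuple (conclusion, sub-arguments, top rule); premise arguments have no rule.
    args = {(p, (), None) for p in premises}
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            pools = [[a for a in args if a[0] == ant] for ant in antecedents]
            for subs in product(*pools):
                candidate = (conclusion, subs, (antecedents, conclusion))
                if candidate not in args:
                    args.add(candidate)
                    changed = True
    return args

def prem(arg):
    # Prem(a): the premises an argument is ultimately built from.
    if arg[2] is None:
        return {arg[0]}
    return set().union(*(prem(sub) for sub in arg[1]))

for a in build_arguments(premises, defeasible_rules):
    print('Conc =', a[0], '  Prem =', prem(a))
# Five arguments are produced: gamma, epsilon, and the chains concluding delta, beta and alpha.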
ASPIC+
Given a and b arguments, a defeats b iff a undercuts, successfully
rebuts or successfully undermines b, where:
a undercuts b (on b′) iff Conc(a) is a contrary or contradictory of ν(r) for some b′ ∈ Sub(b) s.t.
r = TopRule(b′) ∈ Rd;
a successfully rebuts b (on b′) iff Conc(a) is a contrary or contradictory of ϕ for some
b′ ∈ Sub(b) of the form b1, . . . , bn =⇒ ϕ, and a ⊀ b′;
a successfully undermines b (on ϕ) iff Conc(a) is a contrary or contradictory of ϕ, and
ϕ ∈ Prem(b) ∩ Kp, and a ⊀ ϕ.
⟨A, →⟩ is the abstract argumentation framework defined by AT = ⟨AS, K⟩,
where A is the smallest set of all finite arguments constructed from K and
→ is the defeat relation on A.
Cerutti, Oren (Cardiff, Aberdeen) 22 / 203
Arguments: γ; ε; [ε, ε ⇒ δ]; [γ, γ ⇒ β]; [γ, γ ⇒ β, β ⇒ α]
Cerutti, Oren (Cardiff, Aberdeen) 23 / 203
Artificial
Intelligence
Artificial Intelligence 77 (1995) 321–357
On the acceptability of arguments and its fundamental
role in nonmonotonic reasoning, logic programming and
n-person games
Phan Minh Dung
[Dun95]
Cerutti, Oren (Cardiff, Aberdeen) 24 / 203
Definition
A Dung argumentation framework AF is a pair
⟨A, →⟩
where A is a set of arguments, and → is a binary relation on A, i.e.
→ ⊆ A × A.
Cerutti, Oren (Cardiff, Aberdeen) 25 / 203
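A minimal Python sketch of the definition: a framework is just the pair ⟨A, →⟩, and the semantic notions introduced on the following slides can be checked by brute force over subsets. The three-argument chain used here is a hypothetical example, not one from the slides.

from itertools import chain, combinations

# Hypothetical example framework: a attacks b, b attacks c.
A = {'a', 'b', 'c'}
R = {('a', 'b'), ('b', 'c')}

def powerset(xs):
    xs = list(xs)
    return (set(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1)))

def conflict_free(S):
    return not any((x, y) in R for x in S for y in S)

def defends(S, a):
    # every attacker of a is itself attacked by some member of S
    return all(any((s, b) in R for s in S) for (b, target) in R if target == a)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

def complete(S):
    return admissible(S) and all(a in S for a in A if defends(S, a))

print([S for S in powerset(A) if complete(S)])
# For this graph the only complete extension is {'a', 'c'}: a is unattacked and reinstates c.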
A semantics is a way to identify sets of arguments (i.e. extensions)
“surviving the conflict together”
Cerutti, Oren (Cardiff, Aberdeen) 26 / 203
(Some) Semantics Properties
Artificial Intelligence (ScienceDirect, Elsevier)
On principle-based evaluation of extension-based
argumentation semantics
Pietro Baroni, Massimiliano Giacomin
[BG07]
The Knowledge Engineering Review, Vol. 26:4, 365-410. © Cambridge University Press, 2011
doi:10.1017/S0269888911000166
An introduction to argumentation semantics
PIETRO BARONI, MARTIN CAMINADA and
MASSIMILIANO GIACOMIN
[BCG11]
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
an attacking and an attacked argument cannot stay together (∅ is
c.f. by def.)
Admissibility
Strong-Admissibility
Reinstatement
I-Maximality
Directionality
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
Admissibility
the extension should be able to defend itself, "fight fire with fire" (∅
is adm. by def.)
Strong-Admissibility
Reinstatement
I-Maximality
Directionality
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
Admissibility
Strong-Admissibility
defence must be grounded on unattacked arguments (∅ is strong
adm. by def.)
Reinstatement
I-Maximality
Directionality
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
Admissibility
Strong-Admissibility
Reinstatement
if you defend some argument you should take it on board (∅
satisfies the principle only if there are no unattacked arguments)
I-Maximality
Directionality
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
Admissibility
Strong-Admissibility
Reinstatement
I-Maximality
no extension is a proper subset of another one
Directionality
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
(Some) Semantics Properties
Conflict-freeness
Admissibility
Strong-Admissibility
Reinstatement
I-Maximality
Directionality
a (set of) argument(s) is affected only by its ancestors in the attack
relation
Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
Complete Extension
Admissibility and reinstatement
Set of conflict-free arguments s.t. each defended argument is included
b a
c
d
f e
gh



{a, c, d, e, g},
{a, b, c, e, g},
{a, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 28 / 203
Grounded Extension
Strong Admissibility
Minimum complete extension
b a
c
d
f e
gh



{a, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 29 / 203
Preferred Extension
Admissibility and maximality
Maximal complete extensions
b a
c
d
f e
gh



{a, c, d, e, g},
{a, b, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 30 / 203
Stable Extension
"Horror vacui": the absence of odd-length cycles is a sufficient condition
for the existence of stable extensions
Complete extensions attacking all the arguments outside
b a
c
d
f e
gh



{a, c, d, e, g},
{a, b, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 31 / 203
Complete Labellings
An argument is IN if all its attackers are OUT
An argument is OUT if at least one of its attackers is IN
Otherwise it is UNDEC
Cerutti, Oren (Cardiff, Aberdeen) 32 / 203
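These rules can be applied iteratively, starting with every argument labelled UNDEC; the fixed point reached this way is the grounded labelling (maximal UNDEC, as noted on the next slide). A minimal Python sketch over a hypothetical four-argument chain:

# Hypothetical chain: a attacks b, b attacks c, c attacks d.
A = ['a', 'b', 'c', 'd']
R = [('a', 'b'), ('b', 'c'), ('c', 'd')]

def grounded_labelling(A, R):
    label = {x: 'UNDEC' for x in A}
    changed = True
    while changed:
        changed = False
        for x in A:
            if label[x] != 'UNDEC':
                continue
            attackers = [y for (y, z) in R if z == x]
            if all(label[y] == 'OUT' for y in attackers):
                label[x] = 'IN'        # all attackers are OUT (vacuously true if unattacked)
                changed = True
            elif any(label[y] == 'IN' for y in attackers):
                label[x] = 'OUT'       # at least one attacker is IN
                changed = True
    return label

print(grounded_labelling(A, R))
# {'a': 'IN', 'b': 'OUT', 'c': 'IN', 'd': 'OUT'}: a reinstates c, which rules out d.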
Complete Labellings
Max. UNDEC ≡ Grounded
b a
c
d
f e
gh



{a, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
Complete Labellings
Max. IN ≡ Preferred
b a
c
d
f e
gh



{a, c, d, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
Complete Labellings
Max. IN ≡ Preferred
b a
c
d
f e
gh



{a, b, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
Complete Labellings
No UNDEC ≡ Stable
b a
c
d
f e
gh



{a, c, d, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
Complete Labellings
No UNDEC ≡ Stable
b a
c
d
f e
gh



{a, b, c, e, g}



Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
Properties of semantics
CO GR PR ST
D-conflict-free Yes Yes Yes Yes
D-admissibility Yes Yes Yes Yes
D-strong admissibility No Yes No No
D-reinstatement Yes Yes Yes Yes
D-I-maximality No Yes Yes Yes
D-directionality Yes Yes Yes No
Cerutti, Oren (Cardiff, Aberdeen) 34 / 203
Many more semantics
Cerutti, Oren (Cardiff, Aberdeen) 35 / 203
Arguments: γ; ε; [ε, ε ⇒ δ]; [γ, γ ⇒ β]; [γ, γ ⇒ β, β ⇒ α]
Cerutti, Oren (Cardiff, Aberdeen) 36 / 203
MMR vaccination
causes autism
C-2-C
It is possible that
MMR vaccination
is associated with
autism
Behavioural symptoms
were associated by
parents of 12 children
Witn
CQ1: There is no
correlation between
MMR vaccination
and autism
CON
E-2-H
No statistical
correlation over
440,655 children
α
β
γ
δ
ε
Cerutti, Oren (Cardiff, Aberdeen) 37 / 203
Rationality postulates
P1: direct consistency iff
{Conc(a) | a ∈ S} is
consistent;
P2: indirect consistency iff
Cl({Conc(a) | a ∈ S})
is consistent;
P3: closure iff
{Conc(a) | a ∈ S} =
Cl({Conc(a) | a ∈ S});
P4: sub-argument closure
iff ∀a ∈ S, Sub(a) ⊆ S.
Satisfied if:
Closure under transposition:
If ϕ1, . . . , ϕn −→ ψ ∈ Rs, then ∀i = 1 . . . n,
ϕ1, . . . , ϕi−1, ¬ψ, ϕi+1, . . . , ϕn −→ ¬ϕi ∈ Rs.
Cl(Kn) is consistent;
the argument ordering ⪯ is
reasonable, namely:
∀a, b, if a is strict and firm, and b is
plausible or defeasible, then a ≻ b;
∀a, b, if b is strict and firm, then
b ⊀ a;
∀a, a′, b such that a′ is a strict
continuation of {a}, if a ⊀ b then
a′ ⊀ b, and if b ⊀ a, then b ⊀ a′;
given a finite set of arguments
{a1, . . . , an}, let a⁺ᵢ be some strict
continuation of
{a1, . . . , ai−1, ai+1, . . . , an}. Then it
is not the case that ∀i, a⁺ᵢ ≺ ai.
Cerutti, Oren (Cardiff, Aberdeen) 38 / 203
Chapter 5
Complexity of Abstract Argumentation
Paul E. Dunne and Michael Wooldridge
I. Rahwan, G. R. Simari (eds.), Argumentation in Artificial Intelligence,
DOI 10.1007/978-0-387-98197-0-5. © Springer Science+Business Media, LLC 2009
[DW09]
Cerutti, Oren (Cardiff, Aberdeen) 39 / 203
σ = CO σ = GR σ = PR σ = ST
EXISTSσ trivial trivial trivial NP-c
CAσ NP-c polynomial NP-c NP-c
SAσ polynomial polynomial Πp2-c coNP-c
VERσ polynomial polynomial coNP-c polynomial
NEσ NP-c polynomial NP-c NP-c
Cerutti, Oren (Cardiff, Aberdeen) 39 / 203
Cerutti, Oren (Cardiff, Aberdeen) 40 / 203
Extending Dung
Dung’s framework captures negative interactions between
arguments.
But Dung’s framework does not easily capture several intuitive
properties of human argumentation
Joint attack
Recursive/meta-arguments
Preferences
Support
Argument strength
Cerutti, Oren (Cardiff, Aberdeen) 41 / 203
Joint Attack (Nielsen & Parsons (2006))
Both A and B must be the case for C to not hold.
Dung’s results map directly — only the definition of attacks needs
modification.
a
b
c
Cerutti, Oren (Cardiff, Aberdeen) 42 / 203
PAFs (Amgoud (1999))
Witness A claims x, Witness B claims ¬x, but A is much more
reliable.
Cerutti, Oren (Cardiff, Aberdeen) 43 / 203
PAFs (Amgoud (1999))
Witness A claims x, Witness B claims ¬x, but A is much more
reliable.
A Preference-based argumentation framework (PAF) is a triple
⟨A, R, ⪰⟩, where ⪰ ⊆ A × A.
A ⪰ B states that A is preferred to B.
A PAF is transformed to a Dung AF by moving from attacks to defeats:
A defeats B iff A attacks B and it is not the case that B ≻ A.
Cerutti, Oren (Cardiff, Aberdeen) 43 / 203
But...
a b
b > a
Cerutti, Oren (Cardiff, Aberdeen) 44 / 203
But...
a b
We can end up with conflicts in our extensions
Cerutti, Oren (Cardiff, Aberdeen) 44 / 203
Repair (Amgoud & Vesic (2014))
Attacks between arguments represent
An incoherence between the two arguments; and
A kind of preference determined by the direction of the attack.
We can thus consider the ultimate direction of the arrow to express
a real preference between arguments, and reverse it if needed.
Rr = {(a, b)|(a, b) ∈ R and not (b > a)}∪
{(b, a)|(a, b) ∈ R and (b > a)}
This amounts to reversing the direction of the arrows w.r.t
preferences.
Preferences can also be used to pick between multiple
extensions, selecting the "most preferred extensions".
Cerutti, Oren (Cardiff, Aberdeen) 45 / 203
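The repaired relation Rr is straightforward to compute. A minimal Python sketch, replayed on the two-argument example from the previous slides; the pref function is an assumed encoding of the strict preference >.

# R: set of (attacker, attacked) pairs; pref(x, y) is True when x > y (x strictly preferred to y).
def repair(R, pref):
    kept = {(a, b) for (a, b) in R if not pref(b, a)}       # the attack stands
    flipped = {(b, a) for (a, b) in R if pref(b, a)}        # the attack is reversed
    return kept | flipped

R = {('a', 'b')}                           # a attacks b
pref = lambda x, y: (x, y) == ('b', 'a')   # but b > a

print(repair(R, pref))
# {('b', 'a')}: the arrow is reversed, so the conflict is preserved and no conflicting extension arises.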
Preferences using Extended Frameworks (Modgil,
Cerutti and others)
The idea of these frameworks is to allow attacks on attacks.
Capturing preferences, undercuts and the like in a natural manner.
(Diagram: arguments a and b attack each other; preference arguments a > b and b > a attack the corresponding attacks.)
Cerutti, Oren (Cardiff, Aberdeen) 46 / 203
Support
Attacks between arguments allow for reinstatement to occur,
enabling arguments to defend one another.
Arguments can also build on top of one another, or strengthen
each other through support.
Bipolar argumentation frameworks (Cayrol et al (2009)) allow for
arguments to interact by both attacking and supporting each other.
A, R, S
Different formalisms treat support differently.
Cerutti, Oren (Cardiff, Aberdeen) 47 / 203
Evidential Argument Frameworks (Oren et al (2014))
Evidential argument frameworks capture the notion of
sub-argument support.
For a conclusion to be justified, sub-arguments which lead to that
conclusion must be justified.
Evidence for initial arguments is also required.
It is then possible to transform the Evidential Framework into a
Dung framework by combining sub-arguments to form arguments
with only attacks between them.
(Diagram: an evidential framework over arguments a, b, c, d is transformed into a Dung framework whose arguments are the combined sub-argument chains {a}, {a, b}, {a, b, c}, and {d}.)
Cerutti, Oren (Cardiff, Aberdeen) 48 / 203
Attacks in Bipolar Frameworks
(Diagrams: four ways of deriving attacks from supports in bipolar frameworks: secondary, supported, mediated and extended attacks.)
Another approach involves introducing new attacks based on the
supports present in the framework, after which the original
supports and attacks are deleted.
Cerutti, Oren (Cardiff, Aberdeen) 49 / 203
Attacks in Bipolar Frameworks
Different systems introduce different types of attacks.
Polberg & Hunter (2018) provide strong evidence that human
reasoning makes use of support when thinking about arguments,
and thus hint that bipolar frameworks are more than just ‘syntactic
sugar’.
Cerutti, Oren (Cardiff, Aberdeen) 50 / 203
Strength
Humans often claim that some argument is stronger than another.
Such strengths can come from beliefs relating to one argument
being preferred (by the reasoner) to another; or
From having the claims of the argument being considered more
certain.
Cerutti, Oren (Cardiff, Aberdeen) 51 / 203
Probabilistic Argument Frameworks (PrAFs)
PrAFs are a simple way to capture uncertainty in an abstract
framework.
They extend a standard DAF with probabilistic concepts.
A, D
Cerutti, Oren (Cardiff, Aberdeen) 52 / 203
Probabilistic Argument Frameworks (PrAFs)
PrAFs are a simple way to capture uncertainty in an abstract
framework.
They extend a standard DAF with probabilistic concepts.
⟨A, D, PA, PD⟩
PA and PD encode the likelihood of each argument and attack, respectively.
Cerutti, Oren (Cardiff, Aberdeen) 52 / 203
Interpreting PrAFs
A
0.8
B
0.6
We can interpret PrAFs via a frequentist approach to probability:
PA(A) = 0.8 means that in 8 out of 10 possible worlds (or
Argument Frameworks), A exists.
A B A B
Cerutti, Oren (Cardiff, Aberdeen) 53 / 203
Likelihoods of Argument Frameworks
A
0.8
B
0.6
P(∅, ∅) =?
P({A}, ∅) =?
P({B}, ∅) =?
P({A, B}, {(A, B), (B, A)}) =?
Cerutti, Oren (Cardiff, Aberdeen) 54 / 203
Likelihoods of Argument Frameworks
A
0.8
B
0.6
P(∅, ∅) = 0.08
P({A}, ∅) = 0.32
P({B}, ∅) = 0.12
P({A, B}, {(A, B), (B, A)}) = 0.48
Each of these DAFs is induced from the original PrAF.
(Diagram: the four induced frameworks, with probabilities 0.08, 0.32, 0.12 and 0.48.)
Cerutti, Oren (Cardiff, Aberdeen) 54 / 203
Semantics
Unlike traditional frameworks, extensions are probabilistic,
indicating the likelihood that a set of arguments appears within
some extension.
This probability is computed as the sum of probabilities of the AFs
where the argument appears in the Dung extension.
P(∅, ∅) = 0.08 P({A}, ∅) = 0.32
P({B}, ∅) = 0.12 P({A, B}, {(A, B), (B, A)}) = 0.48
P({A} ∈ Grounded) = 0.32
P({A} ∈ Preferred(credulous)) = 0.8
P({A} ∈ Preferred(skeptical)) = 0.32
Cerutti, Oren (Cardiff, Aberdeen) 55 / 203
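The figures on the last two slides can be reproduced by enumerating the induced frameworks. A minimal Python sketch for the two-argument PrAF, assuming (as in the example) that both attacks are certain, i.e. PD = 1:

from itertools import chain, combinations

PA = {'A': 0.8, 'B': 0.6}              # argument likelihoods from the slides
attacks = [('A', 'B'), ('B', 'A')]     # mutual attack, assumed certain (PD = 1)

def subsets(xs):
    xs = list(xs)
    return (set(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1)))

def grounded(args, atts):
    # grounded labelling by iterating the IN/OUT rules from the earlier slides
    label = {a: 'undec' for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if label[a] != 'undec':
                continue
            attackers = [x for (x, y) in atts if y == a]
            if all(label[x] == 'out' for x in attackers):
                label[a] = 'in'
                changed = True
            elif any(label[x] == 'in' for x in attackers):
                label[a] = 'out'
                changed = True
    return {a for a in args if label[a] == 'in'}

total = 0.0
for present in subsets(PA):
    p = 1.0
    for a in PA:
        p *= PA[a] if a in present else 1 - PA[a]   # frequentist reading: a exists in this world
    atts = [(x, y) for (x, y) in attacks if x in present and y in present]
    if 'A' in grounded(present, atts):
        total += p

print(round(total, 2))   # 0.32: A is in the grounded extension only in the world where B is absent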
Extensions
It’s possible to extend PrAFs to Evidential Frameworks, lifting
aspects of the independence assumption PrAFs make.
And from there, to structured argumentation.
See Li, H., "Probabilistic Argumentation" (2015) for details.
Cerutti, Oren (Cardiff, Aberdeen) 56 / 203
What do probabilities mean?
1 Likelihood of an argument being considered justified (Hunter,
COMMA-12)
2 Likelihood that an argument is known by an agent (Li et al,
TAFA-11,COMMA-12,ArgMAS-13)
3 Likelihood that an agent believes an argument (Thimm, ECAI-12,
ECAI-14, Hunter, IJAR-13, ArXiv-14, . . . )
Cerutti, Oren (Cardiff, Aberdeen) 57 / 203
What do probabilities mean?
1 Likelihood of an argument being considered justified (Hunter,
COMMA-12)
2 Likelihood that an argument is known by an agent (Li et al,
TAFA-11,COMMA-12,ArgMAS-13)
3 Likelihood that an agent believes an argument (Thimm, ECAI-12,
ECAI-14, Hunter, IJAR-13, ArXiv-14, . . . )
Structural uncertainty - uncertainty about the structure of the
argument graph (1 and 2).
Epistemic uncertainty - uncertainty about agent beliefs (3).
Cerutti, Oren (Cardiff, Aberdeen) 57 / 203
Epistemic Extensions (taken from Hunter)
A probability function maps sets of arguments to a probability
value P : 2^A → [0, 1], s.t. Σ_{A′⊆A} P(A′) = 1
P(a) = Σ_{E⊆A : a∈E} P(E)
Arguments are labelled based on the probability associated with
them: a is in if (P(a) > 0.5), out if P(a) < 0.5 and undec
otherwise.
What constraints can be placed on the probability function?
Cerutti, Oren (Cardiff, Aberdeen) 58 / 203
Some Constraints
COH For every a, b ∈ A, if a → b, then P(a) ≤ 1 − P(b)
SFOU If P(a) ≥ 0.5 for every a ∈ A which is not attacked.
FOU If P(a) = 1 for every a ∈ A which is not attacked.
SOPT If P(a) ≥ 1 − Σ_{b : b→a} P(b) whenever an attack
against a exists.
OPT If P(a) ≥ 1 − Σ_{b : b→a} P(b).
JUS If COH and OPT
TER If P(a) ∈ {0, 0.5, 1} for any a ∈ A
Cerutti, Oren (Cardiff, Aberdeen) 59 / 203
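A minimal Python sketch showing how some of these constraints can be checked for a concrete probability assignment; the three-argument chain and the numbers used here are assumptions for illustration only.

A = ['a', 'b', 'c']
R = [('a', 'b'), ('b', 'c')]          # hypothetical attack chain
P = {'a': 0.6, 'b': 0.4, 'c': 0.6}    # hypothetical assignment

def coh(P, R):
    return all(P[x] <= 1 - P[y] for (x, y) in R)

def sfou(P, A, R):
    attacked = {y for (_, y) in R}
    return all(P[a] >= 0.5 for a in A if a not in attacked)

def fou(P, A, R):
    attacked = {y for (_, y) in R}
    return all(P[a] == 1 for a in A if a not in attacked)

def opt(P, A, R):
    return all(P[a] >= 1 - sum(P[x] for (x, y) in R if y == a) for a in A)

print(coh(P, R), sfou(P, A, R), fou(P, A, R), opt(P, A, R))
# True True False False: this assignment is coherent and semi-founded, but not founded or optimistic.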
Classical Extensions
Given a complete probability function, the following association
between restrictions and classical extensions exists.
No restriction : Complete
No a s.t. P(a) = 0.5 : Stable
Maximal arguments s.t. P(a) = 1 : Preferred
Maximal arguments s.t. P(a) = 0 : Preferred
Maximal arguments s.t. P(a) = 0.5 : Grounded
Minimal arguments s.t. P(a) = 1 : Grounded
Minimal arguments s.t. P(a) = 0 : Grounded
Minimal arguments s.t. P(a) = 0.5 : Stable
Cerutti, Oren (Cardiff, Aberdeen) 60 / 203
Non-standard Extensions
Cerutti, Oren (Cardiff, Aberdeen) 61 / 203
So What?
Can we use these properties to assign probabilities to arguments?
Assume a partial function π : A → [0, 1]
What are the “best” probabilities to assign to arguments not in the
domain of π?
Cerutti, Oren (Cardiff, Aberdeen) 62 / 203
The Idea
A
1
B
?
Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
The Idea
A
1
B
0
Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
The Idea
A
0.7
B
?
Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
The Idea
A
0.7
B
0.3
Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
The Idea
What if we want COH (If a → b then P(a) ≤ 1 − P(b))?
A
?
B
?
C
0.4
Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
The Idea
What if we want COH (If a → b then P(a) ≤ 1 − P(b))?
A
0.6
B
0.4
C
0.4
Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
The Idea
What if we want COH (If a → b then P(a) ≤ 1 − P(b))?
A
0.5
B
0.5
C
0.4
Multiple probability functions can satisfy the coherence here.
Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
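That several probability functions can satisfy COH can be verified mechanically. A minimal sketch over a coarse grid, assuming (purely for illustration) the attack chain a → b → c with P(c) fixed at 0.4:

attacks = [('a', 'b'), ('b', 'c')]    # assumed attack structure for the example
fixed_c = 0.4

def coherent(P):
    return all(P[x] <= 1 - P[y] for (x, y) in attacks)

grid = [i / 10 for i in range(11)]
solutions = [(pa, pb) for pa in grid for pb in grid
             if coherent({'a': pa, 'b': pb, 'c': fixed_c})]

print(len(solutions))                                     # many assignments satisfy COH
print((0.6, 0.4) in solutions, (0.5, 0.5) in solutions)   # both of the slide's answers are among them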
Applications
Reasoning about uncertain knowledge
Persuasion and opponent modelling
Cerutti, Oren (Cardiff, Aberdeen) 65 / 203
Where are we?
We’ve covered several extensions of Dung’s formalism to take into
account additional common aspects of argumentation.
There are myriad other extended frameworks (and semantics) out
there.
Value based argumentation frameworks
Fuzzy argumentation frameworks
Weighted argumentation frameworks
A variety of ways to represent argument strength
Cerutti, Oren (Cardiff, Aberdeen) 66 / 203
Dialogue
Cerutti, Oren (Cardiff, Aberdeen) 67 / 203
Where are we?
We know how to represent arguments
We know how to identify justified conclusions
But how (and why?) do agents exchange arguments?
Cerutti, Oren (Cardiff, Aberdeen) 68 / 203
Exchanging arguments
Agents act to achieve some goal.
Different goals require different types of arguments to be
exchanged.
Walton and Krabbe’s (1995) typology:
Information-seeking participant seeks answer to some question(s) from
another participant, who knows the answer
Inquiry participants collaborate to answer a question (whose
answer they don't know)
Persuasion participant seeks to persuade another to accept a
proposition they don’t currently endorse
Negotiation bargaining over division of resources
Deliberation collaborate to decide which action(s) should be
adopted in some situation
Eristic verbal quarrel rather than physical fighting
Cerutti, Oren (Cardiff, Aberdeen) 69 / 203
Dialogues
Different types of dialogues are entered with the agents having
different goals, and the dialogues achieving different outcomes.
Dialogues may involve mixtures of dialogue types; one dialogue
may be embedded in another.
Dialogues specify a protocol — called a dialogue game — which
agents can follow to reach the dialogue outcomes.
Chap. 13 of Argumentation in Artificial Intelligence (2009) by
McBurney and Parsons provides a very good general summary of
dialogue games.
Cerutti, Oren (Cardiff, Aberdeen) 70 / 203
Dialogue Components
A dialogue game consists of
A set of commencement rules which define when the dialogue may
begin.
A set of locutions specifying which utterances are permitted. Such
rules can also specify which combinations of locutions are
permissible (e.g., asserting x and ¬x by the same participant may
be prohibited).
Commitment rules describe what an utterance commits an agent to.
E.g., a question may commit another to provide an answer, while an
assertion may commit the agent to either retracting or defending
the assertion’s content. Such rules can also be combined, stating
— for example — that a retraction after an assertion removes a
commitment.
Rules for speaker order specify who may make utterances when.
Termination rules state when the dialogue ends.
Cerutti, Oren (Cardiff, Aberdeen) 71 / 203
Dialogical Agents
An agent participating in a dialogue has a knowledge base
containing its (private) knowledge about the world.
Its dialogical commitments are tracked within a commitment store,
and can be thought of as a mapping between locutions and
statements expressing actions or beliefs external to the dialogue.
Cerutti, Oren (Cardiff, Aberdeen) 72 / 203
Dialogue Semantics
There are many different ways of specifying the semantics of each
utterance within a dialogue (which we will not formalise).
The effects of each utterance on agents and dialogue structures
must be described. E.g.,
The precondition for an assert(φ) utterance is that a desires that
all agents believe that φ is the case.
The post-condition is that (1) all agents (except for a) believe that a
desires them to believe that φ is the case; and (2) a is committed to
demonstrate that φ is the case when questioned.
One may also specify locution combination rules stating, e.g., that
question(φ) may only be played when some agent is committed to
φ.
Cerutti, Oren (Cardiff, Aberdeen) 73 / 203
Where are we?
Dialogue games describe a protocol by which discussion can take
place.
In the context of argumentation, such a protocol usually involves
adding or removing arguments from agent commitment
stores to achieve some goal.
To use a dialogue game, an agent must (typically) also identify an
appropriate strategy to decide what locution to utter, and what the
contents of the locution should be (see for example Thimm (2014)
for a deeper discussion of this topic).
Cerutti, Oren (Cardiff, Aberdeen) 74 / 203
f
a b e h
c d g
i
m n o
j k l
Is o in, undecided, or out under the skeptical preferred semantics?
Cerutti, Oren (Cardiff, Aberdeen) 75 / 203
Proof dialogues
Proof dialogues aim to provide a dialogical approach to
determining the status of an argument.
Rather than applying the formal definition of the semantics to
determine extension membership, they consider two parties who
enter a dialogue to compute the status of an argument.
Such proof dialogues might help non-technical users understand
why some conclusion is, or is not, justified.
Cerutti, Oren (Cardiff, Aberdeen) 76 / 203
Desiderata
Such proof dialogues should
Be natural — if they are similar to the manner in which humans
reason, they’ll be understood
Be sound — reaching some conclusion in the dialogue should
coincide with the same decision for the presence or absence of the
argument in the extension(s).
Be complete — Any argument present or absent in the extension(s)
should have an associated dialogue which can prove it.
Be computationally efficient
Sometimes we won’t achieve all of these properties.
Cerutti, Oren (Cardiff, Aberdeen) 77 / 203
A simple proof dialogue
a
b c d
e
P : in(D)
O : out(C)
P : in(B)
O : out(A)
P : in(B)
Cerutti, Oren (Cardiff, Aberdeen) 78 / 203
A simple proof dialogue
a
b c d
e
P : in(D)
O : out(C)
P : in(B)
O : out(A)
P : in(B)
in moves are claims, while out
states a consequence of the in move
and asks for a justification for this
labelling.
O has no moves left, and must
therefore accept P’s position. P wins
the game.
Cerutti, Oren (Cardiff, Aberdeen) 78 / 203
A simple proof dialogue
a
b c d
e
P : in(E)
O : out(D)
P : in(C)
O : out(E)
Cerutti, Oren (Cardiff, Aberdeen) 79 / 203
A simple proof dialogue
a
b c d
e
P : in(E)
O : out(D)
P : in(C)
O : out(E)
By pointing out P’s contradiction, O
wins the game.
Cerutti, Oren (Cardiff, Aberdeen) 79 / 203
The Game
Participants: P and O
Commencement rule: P states that some argument is in
Speaker order: After P moves, players alternate.
Locution rules:
Each move of P (except the first) must be an in move which refers
to an attacker of the previous move of O.
Each move of O must be a out move which refers to an attacker of
any of the previous in moves.
O is not allowed to repeat moves (but P is, as the same in
argument can cause multiple arguments to be out).
Cerutti, Oren (Cardiff, Aberdeen) 80 / 203
The Game
Termination rules:
If O uses an argument previously used by P, then O wins (as they
have shown a contradiction). Similarly, if P uses an argument
previously used by O, O wins.
If P cannot move, then O wins (as they’re unable to justify their
position).
If O cannot move, then P wins (as they have to accept P’s claim).
Cerutti, Oren (Cardiff, Aberdeen) 81 / 203
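The rules above are simple enough to execute directly. A minimal Python sketch of the resulting game-tree search on a hypothetical three-argument framework; as the next slide states, P having a winning strategy corresponds to the focal argument belonging to some preferred extension.

# Hypothetical framework: a and b attack each other, and both attack c.
A = ['a', 'b', 'c']
R = [('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')]

def p_wins(in_args, out_args):
    # O to move: an "out" move must attack some earlier "in" argument, and O may not repeat.
    options = [y for y in A if y not in out_args and any((y, x) in R for x in in_args)]
    if not options:
        return True                  # O cannot move: P wins
    for y in options:
        if y in in_args:
            return False             # O repeats one of P's arguments: contradiction, O wins
        if not p_replies(in_args, out_args | {y}, y):
            return False             # O has a reply that P cannot survive
    return True

def p_replies(in_args, out_args, last_out):
    # P to move: an "in" move must attack O's last "out" argument; using O's arguments loses.
    for x in A:
        if (x, last_out) in R and x not in out_args:
            if p_wins(in_args | {x}, out_args):
                return True
    return False

def won_by_P(argument):
    return p_wins(frozenset({argument}), frozenset())

print(won_by_P('a'), won_by_P('c'))
# True False: a belongs to the preferred extension {a}, while c is attacked by both a and b
# and belongs to no preferred extension.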
The Game - Properties
If there is a game for argument A won by P, then there is a
preferred extension containing A.
If there is a preferred extension containing A, then P has a
winning strategy for the game.
Minimal number of moves necessary is
2 · |number of arguments labelled out| + 1.
But finding such a labelling is hard.
Cerutti, Oren (Cardiff, Aberdeen) 82 / 203
Grounded Discussion Game (GDG)
An argument is in the grounded extension if it "has to be the case".
For an opponent to show an argument is not in the grounded
extension, they simply need to show that one of its attackers
"could be the case".
The burden of proof is thus on P to show that none of the
attackers of the argument they are defending can be the case.
So the moves for the grounded game are
HTB(A) A has to be the case — A is in the grounded
labelling. Moved by P.
CB(B) B is not out in the grounded labelling.
Moved by O.
CONCEDE(A) Signals an agreement that A is in. Moved
by O.
RETRACT(B) Signals that B is out. Moved by O.
Cerutti, Oren (Cardiff, Aberdeen) 83 / 203
Grounded Discussion Game (GDG)
Game starts with P making a HTB statement.
O Can then make one or more CB, CONCEDE and RETRACT
statements.
After which P makes a HTB and the cycle repeats.
N.B., O makes multiple moves for every P move.
Cerutti, Oren (Cardiff, Aberdeen) 84 / 203
Locution Rules
HTB(A) is either the first move, or the previous move was CB(B)
in which case A must attack B, and O can’t CONCEDE or
RETRACT.
CB(B) is moved when B attacks the last HTB(A) statement where
CONCEDE(A) has not yet been made; B has not been retracted;
the last move was not a CB move, and CONCEDE and
RETRACT cannot be played.
CONCEDE(A) can be played when HTB(A) was moved earlier,
and all attackers of A have been retracted, and CONCEDE(A) has
not been played.
RETRACT(A) can be played when CB(A) was moved in the past,
and an attacker of A has been conceded, and RETRACT(A) has
not been played.
Cerutti, Oren (Cardiff, Aberdeen) 85 / 203
Winning and Losing
If O concedes the original argument, P wins. Otherwise, O wins.
If a HTB,CB or HTB-CB repeat occurs for the same argument, O
wins (due to burden of proof).
f
a b e h
c d g
1: P : HTB(C) 4: O : CONCEDE(A)
2: O : CB(B) 5: O : RETRACT(B)
3: P : HTB(A) 6: O : CONCEDE(C)
Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
Winning and Losing
If O concedes the original argument, P wins. Otherwise, O wins.
If a HTB,CB or HTB-CB repeat occurs for the same argument, O
wins (due to burden of proof).
f
a b e h
c d g
1: P : HTB(B) 2: O : CB(A)
Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
Winning and Losing
If O concedes the original argument, P wins. Otherwise, O wins.
If a HTB,CB or HTB-CB repeat occurs for the same argument, O
wins (due to burden of proof).
f
a b e h
c d g
1: P : HTB(F) 4: O : CONCEDE(A)
2: O : CB(B) 5: O : RETRACT(B)
3: P : HTB(A) 6: O : CB(A)
Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
Another Grounded Game
Again, P and O alternate, with P moving first.
Every P move except the first attacks the preceding O move.
P moves cannot be repeated.
The winner is the player making the last move.
f
a b e h
c d g
[C, B, A] is won by P
[G, H] is won by O
Cerutti, Oren (Cardiff, Aberdeen) 87 / 203
Another Grounded Game (SGG)
f
a b e h
c d g
[F, B, A] is (incorrectly) won by P
So all possible games must be considered to demonstrate that an
argument is grounded.
This is (effectively) a tree of possible unique discussions, where
each path from root to leaf is won by P.
Cerutti, Oren (Cardiff, Aberdeen) 88 / 203
GDG vs SGG
SGG allows arguments to reappear over multiple paths. In the worst
case, it’s exponential in the number of arguments in the
framework.
GDG considers each argument once, and is linear in the number
of arguments in the framework (note that a strategy exists which
minimises game length).
Exponential blow-up is a standard feature of most tree-based
discussion games.
Cerutti, Oren (Cardiff, Aberdeen) 89 / 203
Skeptical Preferred Semantics
Grounded is considered "too skeptical".
Credulous preferred is "too lenient".
Skeptical preferred semantics seem to capture human intuitions
well.
Some work uses meta-dialogues, or works only where stable and
preferred semantics coincide.
Cerutti, Oren (Cardiff, Aberdeen) 90 / 203
Approach
Two players, O and P
Two phases
Phase 1: O advances an extension where the argument under
discussion is out or undec.
Phase 2: P shows that this extension is not a preferred extension.
Under perfect play, O will win iff the focal argument is not in, with
P winning otherwise.
Cerutti, Oren (Cardiff, Aberdeen) 91 / 203
More detail
Moves:
What is (WI) — requests a label to be assigned to an argument.
Claim (CL) — assign a label to an argument.
Players take turns to make a single move, with P beginning both
phases.
Phase 1:
P plays WI moves (starting with argument of interest).
O responds with a CL move assigning a (legal) label to the
argument.
P’s WI moves are for arguments which attack a previous CL move
(and no CL for that argument has yet occurred).
Play continues until no moves are possible, an illegal CL is made,
or the focal argument is claimed in. In the first case, Phase 2
begins, else P wins.
Cerutti, Oren (Cardiff, Aberdeen) 92 / 203
More detail
Moves:
What is (WI) — requests a label to be assigned to an argument.
Claim (CL) — assign a label to an argument.
Players take turns to make a single move, with P beginning both
phases.
Phase 2:
P begins by playing CL on a undec labelled argument.
O plays WI on a undec attacker of the CL.
This repeats until no more moves can be made. P wins the game if
it has made at least one move during this phase, and the labelling
is legal.
Cerutti, Oren (Cardiff, Aberdeen) 92 / 203
Example
f
e g a
b
Phase one:
P : WI(a)
O : CL(undec(a))
P : WI(g)
O : CL(undec(g))
P : WI(b)
O : CL(undec(b))
P : WI(e)
O : CL(out(e))
P : WI(f)
O : CL(in(f))
Cerutti, Oren (Cardiff, Aberdeen) 93 / 203
Example
f
e g a
b
Phase two:
P : CL(in(g))
O : WI(b)
P : CL(out(b))
O : WI(a)
P : CL(in(a))
O : WI(g)
P : CL(out(g))
P contradicts itself in Phase 2, and O
therefore wins — a is not skeptically
preferred.
Cerutti, Oren (Cardiff, Aberdeen) 93 / 203
Example 2
c d
a
b
Phase one:
P : WI(d)
O : CL(undec(d))
P : WI(c)
O : CL(undec(c))
P : WI(b)
O : CL(undec(b))
P : WI(a)
O : CL(undec(a))
Cerutti, Oren (Cardiff, Aberdeen) 94 / 203
Example 2
c d
a
b
Phase two:
P : CL(in(d))
O : WI(c)
P : CL(out(c))
O : WI(b)
P : CL(in(b))
O : WI(a)
P : CL(out(a))
In Phase 2, P successfully changes
an undec argument to in, and
therefore wins; d is skeptically
preferred.
Cerutti, Oren (Cardiff, Aberdeen) 94 / 203
What’s going on?
In phase 1, O identifies an admissible labelling where the focal
argument is not in. If this is a preferred extension, then O should
win the game, otherwise, they’ve cheated.
Phase 2 allows P to prove that O has cheated in phase 1.
Core result: there is a winning strategy for P or O depending on
whether the argument is or isn’t skeptically preferred.
Without perfect knowledge, this becomes a tree based discussion,
requiring all possible paths to be explored.
But in many applications, one party has perfect knowledge,
reducing real world complexity.
Cerutti, Oren (Cardiff, Aberdeen) 95 / 203
Observations
All proof dialogues incrementally assign a labelling to arguments.
There is an implicit assumption that participants are cooperatively
exploring the (shared) argument graph (as they know what
questions are legal).
Current work involves removing this assumption, but current
results indicate that in the worst case, all arguments and attackers
must be exchanged to obtain soundness and completeness,
reducing to existing work.
Since all attackers for an in argument must be explored, there’s a
question of cognitive load in human-centric applications over large
graphs. Heuristics that allow short-circuiting are being explored,
but these come at the cost of completeness.
Cerutti, Oren (Cardiff, Aberdeen) 96 / 203
Take away messages
Dialectical proof procedures are an alternative approach to
identifying the status of an argument.
Such proof procedures exist for many semantics.
They implicitly encode algorithms used to perform labellings
(including random choice and backtracking as necessary).
Complexity (for a good algorithm) is equivalent to complexity of
deciding whether a single argument is in the appropriate
extension type.
The main claim is that such proof procedures are more easily
understood by non-experts.
Cerutti, Oren (Cardiff, Aberdeen) 97 / 203
MAS and Argumentation
Cerutti, Oren (Cardiff, Aberdeen) 98 / 203
Decision Making
Cerutti, Oren (Cardiff, Aberdeen) 99 / 203
Cerutti, Oren (Cardiff, Aberdeen) 100 / 203
The example is about having a surgery (sg) or not (¬sg), knowing that
the patient has colonic polyps. The knowledge base contains the
following information:
having a surgery has side-effects,
not having surgery avoids having side-effects,
when having a cancer, having a surgery avoids loss of life,
if a patient has cancer and has no surgery, the patient would lose
his life,
the patient has colonic polyps,
having colonic polyps may lead to cancer.
In addition to the above knowledge, the patient has also some goals
like: “no side effects” and “to not lose his life”. Obviously it is more
important for him to not lose his life than to not have side effects.
Cerutti, Oren (Cardiff, Aberdeen) 101 / 203
α [“the patient has colonic polyps”, and “having colonic polyps may
lead to cancer”]
δ1 [“the patient may have a cancer”, “when having a cancer, having a
surgery avoids loss of life”]
δ2 [“not having surgery avoids having side-effects”]
δ3 [“having a surgery has side-effects”]
δ4 [“the patient has colonic polyps”, and “having colonic polyps may
lead to cancer”, “if a patient has cancer and has no surgery, the
patient would lose his life”]
Cerutti, Oren (Cardiff, Aberdeen) 102 / 203
Definition
Ae denotes a set of epistemic arguments, and Ap denotes a set of
practical arguments such that Ae ∩ Ap = ∅. Let A = Ae ∪ Ap (i.e. A
will contain all those arguments)
Ae = {α} while Ap = {δ1, δ2, δ3, δ4}
Cerutti, Oren (Cardiff, Aberdeen) 103 / 203
Definition
Fp : D → 2Ap is a function that returns the arguments in favor of a
candidate decision. Such arguments are said pro the option.
Fc : D → 2Ap is a function that returns the arguments against a
candidate decision. Such arguments are said cons the option.
The two functions satisfy the following requirements:
∀d ∈ D, there is no δ ∈ Ap s.t. δ ∈ Fp(d) and δ ∈ Fc(d). This means that an
argument is either in favor of an option or against that option. It
cannot be both.
If δ ∈ Fp(d) and δ ∈ Fp(d ) (resp. if δ ∈ Fc(d) and δ ∈ Fc(d )),
then d = d . This means that an argument refers only to one
option.
Let D = {d1, . . . , dn}. Ap = (⋃ᵢ Fp(di)) ∪ (⋃ᵢ Fc(di)), with
i = 1, . . . , n. This means that the available practical arguments
concern options of the set D.
When δ ∈ Fx (d) with x ∈ {p, c}, we say that d is the conclusion of δ,
and we write Conc(δ) = d.
Cerutti, Oren (Cardiff, Aberdeen) 104 / 203
α [“the patient has colonic polyps”,
and “having colonic polyps may
lead to cancer”]
δ1 [“the patient may have a cancer”,
“when having a cancer, having a
surgery avoids loss of life”]
δ2 [“not having surgery avoids
having side-effects”]
δ3 [“having a surgery has
side-effects”]
δ4 [“the patient has colonic polyps”,
and “having colonic polyps may
lead to cancer”, “if a patient has
cancer and has no surgery, the
patient would lose his life”]
The two options of the set D =
{sg, ¬sg} are
supported/attacked by the
following arguments: Fp(sg) =
{δ1}, Fc(sg) = {δ3}, Fp(¬sg) =
{δ2}, and Fc(¬sg) = {δ4}.
Cerutti, Oren (Cardiff, Aberdeen) 105 / 203
Definition
Three preference relations between arguments are defined. The first
one, denoted by ≥e, is a preorder—i.e. reflexive and transitive— on
the set Ae.
The second relation, denoted by ≥p, is a preorder on the set Ap.
Finally, a third relation, denoted by ≥m (m stands for mixed relation),
captures the idea that any epistemic argument is stronger than any
practical argument. Thus, ∀α ∈ Ae, ∀δ ∈ Ap, (α, δ) ∈≥m and
(δ, α) /∈≥m.
Cerutti, Oren (Cardiff, Aberdeen) 106 / 203
α [“the patient has colonic polyps”,
and “having colonic polyps may
lead to cancer”]
δ1 [“the patient may have a cancer”,
“when having a cancer, having a
surgery avoids loss of life”]
δ2 [“not having surgery avoids
having side-effects”]
δ3 [“having a surgery has
side-effects”]
δ4 [“the patient has colonic polyps”,
and “having colonic polyps may
lead to cancer”, “if a patient has
cancer and has no surgery, the
patient would lose his life”]
≥e = {(α, α)} and ≥m =
{(α, δ1), (α, δ2)}.
Regarding ≥p, δ1 is stronger
than δ2 since the goal satisfied
by δ1 (namely, not loss of life) is
more important than the one
satisfied by δ2 (not having side
effects). Thus, ≥p = {(δ1, δ1),
(δ2, δ2), (δ1, δ2)}.
Cerutti, Oren (Cardiff, Aberdeen) 107 / 203
Definition
Epistemic arguments may attack each others: Re ⊆ Ae × Ae.
Epistemic arguments may also attack practical arguments.
Practical arguments are not allowed to attack epistemic ones to avoid
wishful thinking: Rm ⊆ Ae × Ap.
It is assumed that practical arguments do not conflict: each practical
argument points out some advantage or some weakness of a
candidate decision: Rp ⊆ Ap × Ap, with Rp = ∅.
Definition
Let A be a set of arguments, and a, b ∈ A. (a, b) ∈ Defx iff
(a, b) ∈ Rx , and (b, a) /∈>x .
Cerutti, Oren (Cardiff, Aberdeen) 108 / 203
Comparing candidate decisions
Unipolar principles: are those that only refer to either the
arguments pros or the arguments cons.
E.g.: counting arguments pros/cons, . . .
Bipolar principles: are those that take into account both types of
arguments at the same time.
E.g.: prefer a decision that has at least one supporting argument
which is better than any supporting argument of the other
decision, and that does not have a very strong argument against it.
Non-polar principles: are those where arguments pros and
arguments cons a given choice are aggregated into a unique
meta-argument. As a result, the negative and positive polarities
disappear in the aggregation.
Cerutti, Oren (Cardiff, Aberdeen) 109 / 203
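A minimal, illustrative Python sketch of a unipolar counting principle and of a strongest-pro comparison (one ingredient of the bipolar principle above), applied to the surgery example; the numeric strengths are assumptions standing in for the preference of δ1 over δ2.

# Pros and cons per option, as on the earlier slide (sg = surgery, not_sg = no surgery).
pros = {'sg': ['d1'], 'not_sg': ['d2']}
cons = {'sg': ['d3'], 'not_sg': ['d4']}
strength = {'d1': 2, 'd2': 1, 'd3': 1, 'd4': 2}   # assumed strengths; d1 is stronger than d2

def counting(d1, d2):
    # unipolar principle: compare the number of arguments pro each option
    return len(pros[d1]) - len(pros[d2])

def strongest_pro(d1, d2):
    # compare the best supporting argument of each option
    return max(strength[a] for a in pros[d1]) - max(strength[a] for a in pros[d2])

print(counting('sg', 'not_sg'))        # 0: counting alone cannot separate the two options
print(strongest_pro('sg', 'not_sg'))   # 1: surgery wins, since d1 (no loss of life) beats d2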
Cerutti, Oren (Cardiff, Aberdeen) 110 / 203
Cerutti, Oren (Cardiff, Aberdeen) 111 / 203
http://www.arganddec.com/diagram.php?id=705
Norms
Cerutti, Oren (Cardiff, Aberdeen) 112 / 203
Norms
(Detached) norms specify the manner in which an agent should
behave by describing the obligations, permissions and
prohibitions it should act under.
One view of permissions is that they identify exceptional
circumstances under which an obligation or prohibition is
derogated.
Further exceptions could prevent a permission from coming into
force.
This is analogous to reinstatement.
The non-monotonic nature of normative reasoning has long been
recognised.
We’re going to look at
How to reason about what should be the case given a set of norms;
and
How an agent should reason given norms and goals
Cerutti, Oren (Cardiff, Aberdeen) 113 / 203
Setting the scene
Suppose a soldier must listen to orders from three superiors,
a Sergeant, Captain and Major. The Sergeant (who likes
being warm) states that in winter, the heat should be turned
on. The Captain (who worries about energy costs) says that
during winter, the window must stay closed. Finally, the Major
(who likes being cool) states that whenever the heating is on,
the window should be open. (adapted from Horty (2007))
3 obligations are imposed on the soldier: (w, h), (w, ¬o), (h, o).
There are priorities over the obligations as the Major outranks the
Captain who outranks the Sergeant.
It’s winter, what should the soldier do?
This section is based on Liao, Oren, van der Torre, Villata (2017)
Cerutti, Oren (Cardiff, Aberdeen) 114 / 203
What to do?
(h, o) > (w, ¬o) > (w, h), with context {w}
There are multiple approaches to reasoning in the Deontic logic
literature
Greedy: repeatedly apply the applicable norm with highest priority that does not
introduce conflict: first (w, ¬o), then (w, h). So the conclusion set is {h, ¬o}
Reduction: Guess an extension, identify applicable norms and try
them out by applying Greedy. E.g., guessing {h, o} means all
norms are applicable. Greedy gives us the same extension, so this
works. Guessing {h, ¬o} does not appear after applying Greedy, so
not an extension.
Optimisation: Select norms in order of priority while they remain
consistent with context. So we select (h, o) and (w, ¬o). Greedy is
then applied, yielding {¬o}
Cerutti, Oren (Cardiff, Aberdeen) 115 / 203
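A minimal Python sketch of the Greedy approach on the soldier example; the encoding of literals (with '~' for negation) and of norms as (antecedent, consequent) pairs is an assumption of the sketch.

# (antecedent, consequent) -> priority; higher number = higher priority (Major > Captain > Sergeant).
norms = {('w', 'h'): 1, ('w', '~o'): 2, ('h', 'o'): 3}
context = {'w'}

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def greedy(norms, context):
    facts = set(context)
    applied = set()
    while True:
        applicable = [n for n in norms
                      if n not in applied and n[0] in facts and neg(n[1]) not in facts]
        if not applicable:
            return facts - set(context)                  # detached conclusions only
        best = max(applicable, key=lambda n: norms[n])   # highest-priority applicable norm
        applied.add(best)
        facts.add(best[1])

print(greedy(norms, context))   # {'h', '~o'}: heating on, window closed, as on the slide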
Argumentation
What is an argument?
Context: yields an argument with conclusion of the element in the
context. E.g., there is a context argument with conclusion
conc(w) = w
Ordinary argument: a path from context to some conclusion
obtained by following the norms in the system (e.g., α = [w, h, o] is
an argument with conclusion conc(α) = o).
Note that β = [w, h] is a subargument of α.
By identifying what arguments are justified and taking their
conclusions, we can determine what obligations hold and actions
should be performed.
Cerutti, Oren (Cardiff, Aberdeen) 116 / 203
Priorities
We have priorities over norms.
But we will be comparing arguments.
So we need to lift the former to obtain priorities over the latter.
Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm]
Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
Priorities
We have priorities over norms.
But we will be comparing arguments.
So we need to lift the former to obtain priorities over the latter.
Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm]
Weakest link: (abusing notation) α ⪰w β iff ∃v ∈ β \ α s.t. ∀u ∈ α \ β,
v ≤ u. That is, there is some norm in β that is weaker than all
norms in α
Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
Priorities
We have priorities over norms.
But we will be comparing arguments.
So we need to lift the former to obtain priorities over the latter.
Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm]
Weakest link: (abusing notation) α ⪰w β iff ∃v ∈ β \ α s.t. ∀u ∈ α \ β,
v ≤ u. That is, there is some norm in β that is weaker than all
norms in α
Last link: α ⪰l β iff un ≥ vm. That is, the last norm of α has priority
over the last norm of β.
Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
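Both liftings are short functions once priorities over norms are fixed. A minimal Python sketch on the soldier example (priorities (w, h) = 1 < (w, ¬o) = 2 < (h, o) = 3, with '~' written for ¬); comparing A3 = [(w, ¬o)] against A2 = [(w, h), (h, o)] matches the later slide's claim that A3 defeats A2 under weakest link.

prio = {('w', 'h'): 1, ('w', '~o'): 2, ('h', 'o'): 3}

A2 = [('w', 'h'), ('h', 'o')]    # argument [(w,h), (h,o)]
A3 = [('w', '~o')]               # argument [(w,~o)]

def weakest_link_geq(alpha, beta):
    a_only = [n for n in alpha if n not in beta]
    b_only = [n for n in beta if n not in alpha]
    # alpha >=_w beta iff some norm used only in beta is no stronger than every norm used only in alpha
    return any(all(prio[v] <= prio[u] for u in a_only) for v in b_only)

def last_link_geq(alpha, beta):
    # alpha >=_l beta iff alpha's last norm has at least the priority of beta's last norm
    return prio[alpha[-1]] >= prio[beta[-1]]

print(weakest_link_geq(A3, A2))   # True: (w,h) is weaker than everything in A3
print(last_link_geq(A3, A2))      # False: (h,o) outranks (w,~o)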
Defeat
α defeats β if there is a subargument β′ of β such that
concl(α) = ¬concl(β′); and
Either
α is a context argument; or
α is an ordinary argument and α ⪰ β
Observations:
Defeat is dependant on whether last or weakest link is used.
The system we’ve defined is "ASPIC-like"; we can show that it
satisfies closure under sub-arguments, direct and contextual
consistency.
Cerutti, Oren (Cardiff, Aberdeen) 118 / 203
Argumentation Frameworks
(Diagrams: the induced argumentation frameworks over A0 = [w], A1 = [(w, h)], A2 = [(w, h), (h, o)], A3 = [(w, ¬o)], with defeats depending on the lifting used.)
Cerutti, Oren (Cardiff, Aberdeen) 119 / 203
Results
Greedy is weakest link under the stable semantics.
Reduction is last link under the stable semantics.
Optimization is trickier...
Cerutti, Oren (Cardiff, Aberdeen) 120 / 203
Optimization
The weakest norm of an argument is the norm within an ordinary
argument with the lowest priority.
The weakest sub-argument of an argument α is the ordinary
sub-argument whose top norm (i.e., conclusion) is the weakest
norm.
The weakest arguments of an argument α, warg(α), are the
super-arguments of the weakest sub-argument of α.
A0 = [w], A1 = [(w, h)], A2 = [(w, h), (h, o)], A3 = [(w, ¬o)]
Using weakest link, A3 defeats A2. The weakest arguments are
warg(A1) = {A1}, warg(A2) = {A1, A2}, warg(A3) = {A3}
Cerutti, Oren (Cardiff, Aberdeen) 121 / 203
Optimization
Now assume α defeats β. If they do not share a weakest argument,
then we introduce additional arguments to defeat the proper
weakest sub-arguments of β.
A0 = [w], A1 = [(w, h)], A2 = [(w, ¬h)], A3 = [(w, h), (h, o)],
A4 = [(w, ¬h), (¬h, o)], A5 = [(w, ¬o)], plus an auxiliary argument aux.
(w, h) = 1 (w, ¬h) = 0 (h, o) = 3 (¬h, o) = 4 (w, ¬o) = 2
A5 /∈ warg(A3) = {A1, A3} and /∈ warg(A4) = {A2, A4}
Cerutti, Oren (Cardiff, Aberdeen) 122 / 203
Optimization
If they share a weakest argument, then any argument containing
the weakest argument should be defeated.
This is achieved by introducing an auxiliary argument and attacks
from that argument to the weakest arguments.
A0 = [a], A1 = [(a, b)], A2 = [(a, b), (b, c)], A3 = [(a, b), (b, ¬c)],
plus an auxiliary argument aux.
(a, b) = 1 (b, c) = 2 (b, ¬c) = 3
A3 ∈ warg(A2) = {A1, A2, A3}
Cerutti, Oren (Cardiff, Aberdeen) 123 / 203
Where are we?
We can reason about what norms are in force.
In other words, we are using argumentation to reason about
norms.
We shift focus to how argumentation can be used to reason about
acting in the presence of norms.
This work is based on Oren (2013).
Cerutti, Oren (Cardiff, Aberdeen) 124 / 203
Overview
Overall goal: We examine an agent’s reasoning procedure in the
presence of norms and goals.
System model.
Goals, norms and preferences.
Reasoning via argument schemes.
Next steps.
Cerutti, Oren (Cardiff, Aberdeen) 125 / 203
AATS
A set of states.
An initial state.
A finite set of agents.
A set of non-overlapping actions for agents, with preconditions on
actions.
A transition function.
A set of propositions.
An interpretation function.
Cerutti, Oren (Cardiff, Aberdeen) 126 / 203
AATSs to Traces
We can construct a tree of possible paths of the system by
starting at the root of the tree and walking along the edges.
These paths exist due to different joint actions selected by the
agents.
Agents select different actions as some of the paths end up
achieving some state of affairs they desire, whereas other paths
do not.
Cerutti, Oren (Cardiff, Aberdeen) 127 / 203
Transition Systems
(Diagram: a four-state transition system with joint actions such as (a, b), (a, a) and (b, a), and its unravelling into a tree of paths over states 1-4.)
Cerutti, Oren (Cardiff, Aberdeen) 128 / 203
AATSs to Traces
We can construct a tree of possible paths of the system by
starting at the root of the tree and walking along the edges.
These paths exist due to different joint actions selected by the
agents.
Agents select different actions as some of the paths end up
achieving some state of affairs they desire, whereas other paths
do not.
Desirable states of affairs arise due to
Goals.
Norms.
Cerutti, Oren (Cardiff, Aberdeen) 129 / 203
Goals
We view a goal as a proposition that the agent would like to see
hold in some state.
The agent prefers those paths in which the goal is achieved to
those paths where it is not achieved.
For each goal, we can identify a family of paths where it is
achieved, and a family of paths where it is not achieved.
This can be compactly represented through preferences temporal
logic formulae.
Cerutti, Oren (Cardiff, Aberdeen) 130 / 203
The Logic
We describe paths using CTL*.
State formulae are evaluated with respect to an AATS S and a
state q ∈ Q:
S, q |= ⊤
S, q ⊭ ⊥
S, q |= p iff p ∈ π(q)
S, q |= ¬ψ iff S, q ⊭ ψ
S, q |= ψ ∨ φ iff S, q |= ψ or S, q |= φ
S, q |= Aψ iff S, λ |= ψ for all paths λ with λ[0] = q
S, q |= Eψ iff S, λ |= ψ for some path λ with λ[0] = q
Cerutti, Oren (Cardiff, Aberdeen) 131 / 203
The Logic
We describe paths using CTL*.
Path formulae are evaluated with respect to an AATS S and a path
λ:
S, λ |= ψ iff S, λ[0] |= ψ where ψ is a state formula.
S, λ |= ¬ψ iff S, λ ⊭ ψ
S, λ |= ψ ∨ φ iff S, λ |= ψ or S, λ |= φ
S, λ |= ○ψ iff S, λ[1, ∞] |= ψ
S, λ |= ♦ψ iff ∃u ∈ ℕ such that S, λ[u, ∞] |= ψ
S, λ |= □ψ iff ∀u ∈ ℕ it is the case that S, λ[u, ∞] |= ψ
S, λ |= φ U ψ iff ∃u ∈ ℕ such that S, λ[u, ∞] |= ψ and
∀v s.t. 0 ≤ v < u, S, λ[v, ∞] |= φ
Cerutti, Oren (Cardiff, Aberdeen) 131 / 203
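For intuition, the path operators can be evaluated over a finite prefix of a path. A minimal Python sketch (a finite-trace approximation, not a full CTL* model checker), with formulas written as nested tuples; the trace is a made-up run of the soldier example.

# A path is a list of sets of propositions; formulas are tuples such as ('even', ('atom', 'o')).
def holds(path, i, f):
    op = f[0]
    if op == 'atom':  return f[1] in path[i]
    if op == 'not':   return not holds(path, i, f[1])
    if op == 'or':    return holds(path, i, f[1]) or holds(path, i, f[2])
    if op == 'next':  return i + 1 < len(path) and holds(path, i + 1, f[1])           # next
    if op == 'even':  return any(holds(path, u, f[1]) for u in range(i, len(path)))   # eventually
    if op == 'alw':   return all(holds(path, u, f[1]) for u in range(i, len(path)))   # always
    if op == 'until': return any(holds(path, u, f[2]) and
                                 all(holds(path, v, f[1]) for v in range(i, u))
                                 for u in range(i, len(path)))                        # until
    raise ValueError(op)

# Hypothetical trace: winter, then heating on, then window open.
trace = [{'w'}, {'w', 'h'}, {'w', 'h', 'o'}]
print(holds(trace, 0, ('even', ('atom', 'o'))))                             # True: eventually o
print(holds(trace, 0, ('until', ('not', ('atom', 'o')), ('atom', 'h'))))    # True: (not o) U h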
Back to goals
A goal is then encoded through a preference relation between
sets of paths expressed as logical formulae.
♦g ≻ ¬♦g (paths where g is eventually achieved are preferred to those where it never is)
Cerutti, Oren (Cardiff, Aberdeen) 132 / 203
Norms
We treat prohibitions as obligations to ensure some state of affairs
does not come about.
We consider two types of obligations:
Achievement obligations — “you should close the door”.
Maintenance obligations — “you should keep the door closed”.
If an obligation is not complied with, then it is violated.
Every norm has a creditor and target (c.f. commitments).
Cerutti, Oren (Cardiff, Aberdeen) 133 / 203
Deadlines, Violation and Permission
Without a deadline
an achievement obligation cannot be violated; and
a maintenance obligation cannot be discharged.
Permissions act as exceptions to obligations.
If an obligation would be violated, and a permission exists, then
the obligation is considered to not be violated.
E.g. if you should keep the door closed, but are permitted to open
it when someone wants to enter, then doing so does not violate
the obligation.
Cerutti, Oren (Cardiff, Aberdeen) 134 / 203
Permission and Violation
We introduce special propositions which must exist in those states
where a permission derogates an obligation, and where a violation
of an obligation occurs.
P^g_{a,x} — agent a has obtained permission from g to see to it that
state of affairs x is not the case.
V^g_{a,x,d} — a violation by a of an obligation w.r.t. g to see to it that x with
respect to a deadline d.
A permission existing until deadline d is then defined through the
formula
P^g_a(x|d) ≡ A(P^g_{a,x} U d)
We require the following axiom to “clear” the permission:
A□(¬P^g_a(x|d) → ¬P^g_{a,x})
Cerutti, Oren (Cardiff, Aberdeen) 135 / 203
Achievement Obligations
An achievement obligation, abbreviated O^g_a(x|d), requiring the
target a to ensure that some state of affairs x holds before a
deadline d towards a creditor g is represented as follows:
A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨
(¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨
(x ∧ ¬V^g_{a,x,d})))
Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
Achievement Obligations
An achievement obligation, abbreviated O^g_a(x|d), requiring the
target a to ensure that some state of affairs x holds before a
deadline d towards a creditor g is represented as follows:
A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨
(¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨
(x ∧ ¬V^g_{a,x,d})))
Before the deadline or x holds, the obligation is not violated.
Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
Achievement Obligations
An achievement obligation, abbreviated O^g_a(x|d), requiring the
target a to ensure that some state of affairs x holds before a
deadline d towards a creditor g is represented as follows:
A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨
(¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨
(x ∧ ¬V^g_{a,x,d})))
If the deadline occurs and x is not the case, then if there is no
permission allowing this to occur, a violation is recorded.
Alternatively, if such a permission exists, then no violation is
recorded (this is encoded by the second line of the proposition).
Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
Achievement Obligations
An achievement obligation, abbreviated O
g
a (x|d) requiring the
target a to ensure that some state of affairs x holds before a
deadline d towards a creditor g is represented as follows:
A(¬V
g
a,x,d ∧ ¬d ∧ ¬x)U (((¬x ∧ d ∧ ¬P
g
a,x ∧ V
g
a,x,d )∨
(¬x ∧ d ∧ P
g
a,x ∧ ¬V
g
a,x,d ))∨
(x ∧ ¬V
g
a,x,d ))
If x is achieved (before the deadline), then no violation is recorded.
Violations should not occur arbitrarily:
A (¬O
g
a (x|d) → ¬V
g
a,x,d )
Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
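As an illustration only (not the original model), the following Python sketch checks the status of an achievement obligation O^g_a(x|d) along a finite trace, mirroring the case analysis above; the encoding of states as dictionaries with flags x, d and perm (for P^g_{a,x}) is an assumption.

def achievement_status(trace):
    # trace: list of states, each a dict with booleans "x", "d", "perm"
    for state in trace:
        if state.get("x"):                              # x achieved: no violation
            return "satisfied"
        if state.get("d"):                              # deadline reached without x
            return "derogated" if state.get("perm") else "violated"
    return "pending"                                    # neither x nor d observed yet

print(achievement_status([{"x": False}, {"x": True}]))              # satisfied
print(achievement_status([{"x": False}, {"d": True}]))              # violated
print(achievement_status([{"d": True, "perm": True}]))              # derogated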
Maintenance Obligations
A(((¬x ∧ ¬d ∧ ((¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (P^g_{a,x} ∧ ¬V^g_{a,x,d}))) ∨ (x ∧ ¬d)) U d)
In other words, before the deadline either x is maintained, or x is not maintained, in which case the obligation is violated unless an associated permission exists.
We abbreviate a maintenance obligation as O^g_a(x : d).
As for achievement obligations,
A□(¬O^g_a(x : d) → ¬V^g_{a,x,d})
Cerutti, Oren (Cardiff, Aberdeen) 137 / 203
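A companion sketch, under the same assumed encoding, for a maintenance obligation O^g_a(x : d): x must hold in every state up to the deadline, and a lapse counts as a violation unless a permission holds in that state.

def maintenance_status(trace):
    for state in trace:
        if state.get("d"):                              # deadline reached: discharged
            return "discharged"
        if not state.get("x"):                          # x not maintained before d
            return "derogated" if state.get("perm") else "violated"
    return "pending"

print(maintenance_status([{"x": True}, {"x": True}, {"d": True}]))  # discharged
print(maintenance_status([{"x": True}, {"x": False}]))              # violated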
Preferences and Norms
A norm’s creditor prefers that a norm is complied with to it being
violated (the norm’s target doesn’t care).
□¬V^g_{a,x,d} ≻_g ♦V^g_{a,x,d}
So an agent has a set of preferences obtained from its goals, and
a set of preferences obtained from its norms.
These preferences are typically in conflict.
We introduce meta-preferences in order to resolve these conflicts.
We then have a most preferred path through the system, allowing
the agent to perform practical reasoning.
We clearly have a non-monotonic system with reinstatement, and
we can therefore identify the most preferred path via
argumentation, but why should we?
Cerutti, Oren (Cardiff, Aberdeen) 138 / 203
Explanation
Argumentation can be used to provide easily understood
explanations of complex system behaviour.
In this work, we describe the system via arguments instantiated
from a set of argumentation schemes.
The resultant argument framework describes the system, and the
argument schemes and attacks between them provide our
explanation.
Other techniques, e.g. games for proof can then be used to
explain the argument framework to non-experts.
Note: for simplicity we ignore the multi-agent aspect of the system
in the argument schemes (future work).
Cerutti, Oren (Cardiff, Aberdeen) 139 / 203
Argumentation Schemes
We represent the system via an exhaustive set of argumentation
schemes.
Any path through the system represents a possible sequence of
actions that could be executed.
AS1: (Given situation S) The sequence of joint actions A1, . . . , An
should be executed.
Critical questions:
CQ1-1 Is there some other sequence of actions that should be executed
instead?
CQ1-2 Is there a more preferred sequence of actions that should be
executed?
Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
Argumentation Schemes
We represent the system via an exhaustive set of argumentation
schemes.
One reason to prefer a path over another is that it achieves a goal
while another does not.
AS2: The sequence of joint actions A1, . . . , An is preferred over
A′1, . . . , A′m as the former achieves a goal which the latter does not.
Critical Questions:
CQ2-1 Is there some other sequence of actions which achieves a more
preferred goal than the one achieved by this action sequence?
CQ2-2 Does the sequence of actions lead to the violation of a norm?
Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
Argumentation Schemes
We represent the system via an exhaustive set of argumentation
schemes.
Compliance with an obligation is a reason to prefer one path over
another.
AS3: The sequence of actions A1, . . . , An should be less preferred
than sequence A′1, . . . , A′m as, in the absence of permissions, the
former violates a norm while the latter does not.
CQ3-1 Is the goal resulting from the sequence of actions more preferred
than the violation?
CQ3-2 Does the violation resulting from this norm result in some other,
more important violation not occurring?
CQ3-3 Is there a permission that derogates the violation?
Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
Argumentation Schemes
We represent the system via an exhaustive set of argumentation
schemes.
When an obligation’s violation is derogated, the situation in which the norm is not violated should no longer be preferred, on that basis, to the one in which it is violated.
AS4: There is a permission that derogates the violation of an
obligation.
The next set of argument schemes associates preferences between different goals and norms, and is used to instantiate the CQs of AS2 and AS3.
AS5: Agent α prefers goal g over goal g′
AS6: Agent α prefers achieving goal g to not violating n
AS7: Agent α prefers not achieving goal g to violating n
AS8: Agent α prefers violating n to violating n′
AS9: Agent α prefers situation A to B
Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
Formalisation
We can formalise these notions by referring to the AATS.
AS3: There exist two paths λ and λ′, obtained from the sequences of joint actions j1, . . . , jn and j′1, . . . , j′m respectively, such that S, λ |= ♦V^g_{a,x,d} and S, λ′ |= ¬♦V^g_{a,x,d}.
CQ3-1: There is an instance of AS6 for S, λ |= γ and S, λ |= ♦V^g_{a,x,d}, where λ is the first path of AS3.
CQ3-2: There is an instantiation of AS8 for which this instantiation of AS3 means that S, λ |= ♦V^g_{a,x,d} and S, λ′ |= ♦V^h_{b,y,e}.
CQ3-3: There is an instantiation of AS4 referring to a permission P^g_{a,x} which refers to the same path λ as this instantiation of AS3.
Cerutti, Oren (Cardiff, Aberdeen) 141 / 203
The Argumentation System
Many of our argument schemes are used to express
(meta-)preferences, and are naturally encoded as attacks on
attacks.
We therefore instantiate the system as an extended argument
framework (EAF), separating the preference level from the object
level of the system.
We use an EAF because arguments should still exist (rather than be
removed) even if they are derogated or not preferred.
CQ1-1 is a symmetric attack between arguments.
CQ1-2 attacks an attacking edge.
CQ2-1, 2-2, 3-1 and 3-2 are instantiated via AS5-AS8 as an
attack on an attacking edge (with the attacked edge originating
from AS2 or AS3).
CQ3-3 is instantiated as an attack from AS4 on the appropriate
AS3 attack (allowing us to reason that the obligation still exists,
but is derogated, when extensions are computed).
Cerutti, Oren (Cardiff, Aberdeen) 142 / 203
The Argumentation System
Each preferred extension of the system will contain a single
argument from AS1 for some specific action sequence,
representing one most preferred sequence of actions. In a
multi-agent setting, this joint action sequence strongly dominates
all others.
If multiple preferred extensions exist, then additional preferences
are required in order to identify a most preferred course of action.
In a multi-agent setting, this means additional coordination is
required.
An empty preferred extension indicates that a preference conflict
exists that must be resolved before a course of action can be
agreed upon.
Cerutti, Oren (Cardiff, Aberdeen) 143 / 203
Example
[Figure: a worked example with four candidate action sequences (1-4) leading to states (W,N), (V,N), (W,F) and (V,F); pairwise preference attacks labelled wd, vm, kj and nvl, together with a permission (per), determine the most preferred sequence.]
Cerutti, Oren (Cardiff, Aberdeen) 144 / 203
Where are we?
We can use argument to reason about what norms are in force.
Captures existing detachment procedures.
But requires new semantics (!)
We can reason about how to act using argumentation by taking
the formal system in which action takes place and creating
argument schemes which encode different choices within the
system (c.f., Atkinson (2007)).
We’ve repeatedly claimed that argumentation gives us some
advantage in such scenarios.
Cerutti, Oren (Cardiff, Aberdeen) 145 / 203
Applications
Cerutti, Oren (Cardiff, Aberdeen) 146 / 203
Explanation, Dealing with Humans
Cerutti, Oren (Cardiff, Aberdeen) 147 / 203
Argument and Explanation
Argumentation is no silver bullet - other techniques can perform
the same type of reasoning.
But — it is claimed — argumentation mirrors human reasoning,
making its operation easily understandable, potentially also
making systems which utilise it more explainable.
We will look at
Whether argumentation mirrors human reasoning.
How argumentation can be used to explain complex concepts.
Cerutti, Oren (Cardiff, Aberdeen) 148 / 203
Do humans reason argumentatively?
Rahwan et al. (2010) demonstrated that humans seem to think in
a manner similar to that predicted by the skeptical preferred
semantics.
Though reinstatement weakens conclusions.
Polberg and Hunter (2018) suggest that bipolar and probabilistic
argumentative reasoning also captures aspects of human
reasoning.
We consider
how closely structured argumentation captures human reasoning
the level of agreement between multiple extension semantics and
probabilistic reasoning
Cerutti, Oren (Cardiff, Aberdeen) 149 / 203
Structured Argumentation and Human Reasoning
(Cerutti, Tintarev, Oren (2014))
Prakken & Sartor’s (1997) argumentation framework was used, as
it allows explicit arguments about preferences.
r3 : ¬a ⇒ r1 ≺ r2 (a rule whose conclusion is a preference between rules r1 and r2)
Scenarios were constructed which have a limited number of
interacting arguments.
Cerutti, Oren (Cardiff, Aberdeen) 150 / 203
Scenarios
A politician and an economist discuss the potential financial
outcome of the independence of a region X. The politician puts
forward an argument in favour of the conclusion "If Region X
becomes independent, X’s citizens will be poorer than they are
now". Another argument holding a contradicting conclusion (i.e.,
that Region X will not be poorer) is advanced by the economist.
The economist’s opinion is likely to be preferred to that of the
politician, and is supported by a scientific document.
s1 : → says_pol    s2 : → says_eco    s3 : → says_doc
r1 : says_pol ∧ ∼ex_pol → poorer
r2 : says_eco ∧ says_doc ∧ ∼ex_eco ∧ ∼ex_doc → ¬poorer
r3 : ∼ex_exp → r2 ≻ r1
a1 : [s1, r1]    a2 : [s2, s3, r2]    a3 : [r3]
a2 defeats a1; a2 (and hence ¬poorer) is justified.
Cerutti, Oren (Cardiff, Aberdeen) 151 / 203
Scenarios
a1 a2
a3
Conclusion: ¬poorer
Cerutti, Oren (Cardiff, Aberdeen) 152 / 203
Scenarios
Four domains were considered (weather forecast, political debate,
used car purchase, pursuing a romantic relationship)
Base case always consisted of two arguments with contradicting
conclusions, and a preference for a2 over a1.
These base cases were then extended with additional information.
Cerutti, Oren (Cardiff, Aberdeen) 153 / 203
Extended Scenario
Other research disputes the economist’s claims.
s4 : → says_newr    r4 : says_newr ∧ ∼ex_newr → poorer
a1 a2
a3 a4
Conclusion: poorer or ¬poorer
Cerutti, Oren (Cardiff, Aberdeen) 154 / 203
Extended Scenarios
[Figure: the three extended scenarios over arguments a1-a4: a preference attack used twice (“Pref. attack (x2)”), a rebuttal of a2 (“a2 rebuttal”), and a rebuttal of the preference argument (“Pref. rebuttal”).]
Cerutti, Oren (Cardiff, Aberdeen) 155 / 203
Experiments
Participants were asked what they thought:
Position advocated by first argument is correct (e.g., people will be
poorer)
Position advocated by second argument is correct (e.g., people will
not be poorer)
Don’t know which position is correct.
First for base case, and then after extended case was introduced.
Statements were also rated in terms of relevance for determining
the conclusion.
Cerutti, Oren (Cardiff, Aberdeen) 156 / 203
What was expected?
In base case, agreement with the second argument should occur.
In the extended case, people should be unable to conclude
anything.
People should find the argument regarding preference relevant to
drawing conclusions.
Cerutti, Oren (Cardiff, Aberdeen) 157 / 203
Results
[Figure: distribution (%) of acceptability of the actors’ positions (Pos. A, Pos. B, Pos. U) in the base and extended cases.]
H1 and H2 (the first two expectations above) are validated (though many people did draw unexpected conclusions).
For H3, the preference made a significant difference (evaluated by asking how much trust was placed in the speaker).
But background knowledge seems to have a significant effect on
the way people reason. Different scenarios (with different impacts)
seem to affect reasoning.
The ability to make explicit arguments about preference is
important.
Cerutti, Oren (Cardiff, Aberdeen) 158 / 203
Understanding Multiple Extensions
Cerutti, Oren (Cardiff, Aberdeen) 159 / 203
What do multiple extensions mean?
Consider the credulous preferred semantics.
We may interpret each extension as a valid possible state of
reality. So given a set of arguments ξ, and assuming that each
possible world is equiprobable,
P(ξ) = 1/|ξ̂_P| if ξ ∈ ξ̂_P, and P(ξ) = 0 if ξ ∉ ξ̂_P    (1)
where ξ̂_P is the set of preferred extensions.
For P(ξ) and an argument A ∈ Arg,
P̂(A) = Σ_{ξ ⊆ Arg : A ∈ ξ} P(ξ)    (2)
is the degree of belief that an argument A is in an extension.
Cerutti, Oren (Cardiff, Aberdeen) 160 / 203
Justification ratio
The probability of a conclusion being justified w.r.t the likelihoods
of arguments which justify it is defined as follows.
Justification ratio
Given a set of arguments A = {A1, . . . , An}, the justification ratio of a conclusion ϕ of argument Ai is µ(ϕ) = Σ_{Ai ∈ A} P̂(Ai).
With equiprobable extensions:
µ(ϕ) = P̂(A) = Σ_{ξ ⊆ Arg : A ∈ ξ} 1/|ξ̂_P|, where ϕ ∈ Conc(A)
Cerutti, Oren (Cardiff, Aberdeen) 161 / 203
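A small Python sketch of equations (1)-(2) and the justification ratio, assuming equiprobable preferred extensions; the extensions here are hard-coded (chosen to reproduce the ratios of the example on the next slide) rather than computed from an attack graph.

from fractions import Fraction

def argument_probability(arg, extensions):
    # P̂(arg): fraction of the (equiprobable) extensions containing arg
    return Fraction(sum(arg in ext for ext in extensions), len(extensions))

def justification_ratio(conclusion, conclusion_of, extensions):
    # µ(ϕ): summed P̂ of the arguments whose conclusion is ϕ
    args = [a for a, c in conclusion_of.items() if c == conclusion]
    return sum(argument_probability(a, extensions) for a in args)

extensions = [{"A1", "A2", "A5"}, {"A1", "A3", "A5"}]   # two preferred extensions
conclusion_of = {"A1": "r4", "A2": "r2", "A3": "r3", "A4": "r1", "A5": "r5"}

print(argument_probability("A2", extensions))                  # 1/2
print(justification_ratio("r1", conclusion_of, extensions))    # 0
print(justification_ratio("r4", conclusion_of, extensions))    # 1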
Example
Arguments:
A1 : r4
A2 : r2
A3 : r3
A4 : r1
A5 : A1 ⇒ r5
Two preferred extensions ξ1, ξ2
P(ξ1) = P(ξ2) = 0.5
Justification ratios:
µ(r1) = 0
µ(r2) = µ(r3) = 0.5
µ(r4) = µ(r5) = 1
[Figure: the attack graph over A1-A5, shown twice, once for each preferred extension.]
Cerutti, Oren (Cardiff, Aberdeen) 162 / 203
Back to probability
If people take a frequentist approach to probability, then there
should be a strong relationship between
Classical probability interpretation
p(ri) = (# of worlds where ri holds) / (total # of possible worlds)
Justification ratio (probabilistic semantics):
µ(ri) = (# of extensions in which ri is acceptable) / (total # of extensions)
Is there?
Cerutti, Oren (Cardiff, Aberdeen) 163 / 203
The experiment
Gave subjects a set of defeasible rules which yield n extensions,
with the conclusion of interest in m ≤ n extensions.
(Joe is a Democrat, Joe has taken the job, Joe has got a job at the Labor Union)
(Joe is a Democrat, Joe does not have a job at the Labor Union, Joe has taken the
job)
(Joe is a Republican, Joe does not have a job at the Labor Union, Joe does not
believe in Unions)
Given the 3 stated possible worlds, how likely is it that you would
believe that “Joe is a Republican"? (u_µ(r_i))
Gave subjects a scenario where conclusions are probabilistically
generated such that the message of interest has equivalent
likelihood to m/n
Assume that we have a stream of information composed of one or
many copies of the following messages (. . . ). We know that 1
message out of 3 states that “Joe is a Republican". If 3 messages
are released, how likely is it that a message would state that “Joe is a
Republican"? (u_p(r_i))
Cerutti, Oren (Cardiff, Aberdeen) 164 / 203
Results Domain 1
[Figure: believability ratings u_µ(r_i) and u_p(r_i) for each scenario under the two conditions At and Pt.]
As the justification ratio/probability increases, the user believability rating of a conclusion is positively correlated:
in At, with the outcome of the probabilistic semantics;
in Pt, with the probability of the information holding.
The two correlations in At and Pt are similar.
Cerutti, Oren (Cardiff, Aberdeen) 165 / 203
Results Domain 2
[Figure: believability ratings u_µ(r_i) and u_p(r_i), where r_i concerns the likelihood ω of a fact, for each scenario under the two conditions At and Pt.]
As (justification ratio/probability) × likelihood of the fact increases, the user believability rating of a conclusion is positively correlated:
in At, with the outcome of the probabilistic semantics × ω;
in Pt, with the probability of the information holding × ω.
The two correlations in At and Pt are similar.
Cerutti, Oren (Cardiff, Aberdeen) 166 / 203
Conclusions
We aimed to study the alignment between argumentation
semantics and human intuition
Specifically whether structured qualitative argumentation captures
some notion of uncertainty
Our results showed that:
People tend to agree with the outcome of the probabilistic
semantics in understanding the believability ratings of the
conclusions
With qualitative propositions, the outcome of the probabilistic
semantics may be understood by people in a way similar to their
understanding of probability.
With propositions about likelihood of events, people employ a
heuristic associating the product of probabilities to the believability
of conclusions.
Cerutti, Oren (Cardiff, Aberdeen) 167 / 203
Where are we?
People seem to reason in an argumentative manner.
How can we use this?
Cerutti, Oren (Cardiff, Aberdeen) 168 / 203
The Problem
Complex computational systems are built on formal underpinnings
– game theory, logics, planners, inference engines, probability
theory, machine learning, . . .
It is difficult for non-experts (and even experts) to establish why
certain behaviours occurred, and what alternatives existed.
Debugging such systems is clearly difficult.
Human-system interactions exacerbate the problem
Lack of information regarding coordination
Little/no feedback about system behaviour
Difficult to communicate with and/or modify system behaviour
(GIGO)
Inadequate explanation of system functioning leads to loss of trust
Cerutti, Oren (Cardiff, Aberdeen) 169 / 203
Objectives
We seek to make computational systems scrutable, allowing
humans to better exploit them.
Goals:
Why were decisions made?
What alternatives were there? Why were they not pursued?
Allow for additional information to be fed to the system.
Effects:
Improve human/agent team functioning.
Improve system resilience (by adapting to new information).
Improve trust in the system.
Cerutti, Oren (Cardiff, Aberdeen) 170 / 203
Architecture
As an exemplar domain, we focused on workflows/plans.
We also considered a more general (defeasible) rule based
system.
Physical System
Knowledge
Base
Planner Plan Visualiser
NLG
Argument
Engine
Dialog
User Interface
ActuatorsSensors
Cerutti, Oren (Cardiff, Aberdeen) 171 / 203
Planner and Plans
We assume the existence of a planner.
Hardcoded workflows (via YAWL) in our system.
Workflows contain choice points, which are selected based on
external domain information.
Cerutti, Oren (Cardiff, Aberdeen) 172 / 203
Knowledge Base
The knowledge base is a representation of the domain.
Encoded in the ASPIC- language (developed as part of the
project).
kick --> do_shut_in
do_shut_in ==> can_soft_shut_in
do_shut_in ==> can_hard_shut_in
R34: can_soft_shut_in =(-need_speed)=> SoftShutIn
R37: can_hard_shut_in =(-SoftShutIn)=> HardShutIn
==> shallow_depth
--> can_plug # we can always plug the well
# prefer soft shut-in
R34 > R37
Cerutti, Oren (Cardiff, Aberdeen) 173 / 203
Argumentation Engine
Given an ASPIC- knowledge base we can generate arguments —
chains of inference leading to some conclusion.
Arguments interact by attacking each other:
Through opposing conclusions (rebut)
By having a conclusion oppose a premise of another argument
(undermine)
By stating that a defeasible rule is not applicable in a situation
(undercut)
The argumentation engine allows a set of arguments to be
evaluated and determines which are justified (via different
extensions).
Our ASPIC- argumentation engine is the first to allow for an
intuitive form of rebut (unrestricted rebut) in the presence of
preferences under the grounded extension.
Cerutti, Oren (Cardiff, Aberdeen) 174 / 203
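By way of illustration, a minimal grounded-labelling sketch in Python over an abstract attack graph; it assumes arguments and attacks have already been generated from the knowledge base, and it deliberately ignores ASPIC- specifics such as preferences and unrestricted rebut.

def grounded_labelling(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {a: "UNDEC" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "UNDEC":
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a] = "IN"; changed = True      # all attackers defeated
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a] = "OUT"; changed = True     # defeated by an accepted argument
    return label

# Toy graph (hypothetical): a "need_speed" argument undercuts the soft shut-in,
# which in turn attacks the hard shut-in.
args = ["soft_shut_in", "hard_shut_in", "need_speed"]
atts = {("need_speed", "soft_shut_in"), ("soft_shut_in", "hard_shut_in")}
print(grounded_labelling(args, atts))
# {'soft_shut_in': 'OUT', 'hard_shut_in': 'IN', 'need_speed': 'IN'}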
Dialogue
While a visual set of arguments allows one to trace the reasoning,
it is still difficult to understand for large argument systems.
We have developed several proof dialogues which incrementally
explore the argument graph in a dialectic manner.
NLG is used to transform logical statements from within the KB
into natural language.
DEMO
Cerutti, Oren (Cardiff, Aberdeen) 175 / 203
Summary
The SAsSy tool combines dialogue games and argumentation to
explain complex concepts through multiple modalities.
Significant industrial interest in taking tool further.
This will require several additional technologies to be integrated
into the system.
Cerutti, Oren (Cardiff, Aberdeen) 176 / 203
CISpaces
Cerutti, Oren (Cardiff, Aberdeen) 177 / 203
Supporting Reasoning with Different Types of Evidence in
Intelligence Analysis
Alice Toniolo, Anthony Etuk, Timothy J. Norman, Federico Cerutti, Nir Oren (Dept. of Computing Science, University of Aberdeen, UK)
Robin Wentao Ouyang, Mani Srivastava (University of California, Los Angeles, CA, USA)
Timothy Dropps, John A. Allen (Honeywell, USA)
Paul Sullivan (INTELPOINT Incorporated, Pennsylvania, USA)
Appears in: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), Bordini, Elkind, Weiss, Yolum (eds.), May 4-8, 2015, Istanbul, Turkey. [Ton+15]
Cerutti, Oren (Cardiff, Aberdeen) 178 / 203
Research question: Evaluate the Jupiter intervention in the ongoing conflict on Mars
Research hypothesis: Is the Jupiter intervention on Mars humanitarian
or strategic?
Data gathering: beyond the scope of this work
Justification of possible hypotheses based on data and logic
Cerutti, Oren (Cardiff, Aberdeen) 179 / 203
Sensemaking
Agent
Data Request/
Crowdsourcing
Agent
Provenance
Agent
GUI Interface ToolBox
WorkBoxInfoBox ReqBox
ChatBox
Cerutti, Oren (Cardiff, Aberdeen) 180 / 203
Sensemaking Agent and Walton’s Argumentation
Schemes
Argument from Cause to Effect
Major Premise: Generally, if A occurs, then B will (might) occur.
Minor Premise: In this case, A occurs (might occur).
Conclusion: Therefore, in this case, B will (might) occur.
Critical questions
CQ1: How strong is the causal generalisation?
CQ2: Is the evidence cited (if there is any) strong enough to
warrant the causal generalisation?
CQ3: Are there other causal factors that could interfere with the
production of the effect in the given case?
Cerutti, Oren (Cardiff, Aberdeen) 181 / 203
Jupiter troops
deliver aids to
Martians
Jupiter intervention
on Mars is
humanitarian
PRO
Agreement to
exchange crude oil
for refined
petroleum
Jupiter intervention
on Mars aims at
protecting strategic
assets
PRO
CON
CON
Cerutti, Oren (Cardiff, Aberdeen) 182 / 203
Jupiter troops
deliver aids to
Martians
Jupiter intervention
on Mars is
humanitarian
PRO
Agreement to
exchange crude oil
for refined
petroleum
Jupiter intervention
on Mars aims at
protecting strategic
assets
PRO
CON
CON
Civilian casualties
caused by Jupiter
forces
CON
LCE
Use of old Jupiter
military doctrine
causes civilian
casualties
Large use of old
Jupiter military
techniques on Mars
Cerutti, Oren (Cardiff, Aberdeen) 183 / 203
Jupiter troops
deliver aids to
Martians
Jupiter intervention
on Mars is
humanitarian
PRO
Agreement to
exchange crude oil
for refined
petroleum
Jupiter intervention
on Mars aims at
protecting strategic
assets
PRO
CON
CON
Civilian casualties
caused by Jupiter
forces
CON
LCE
Use of old Jupiter
military doctrine
causes civilian
casualties
Large use of old
Jupiter military
techniques on Mars
CQ2
There is no evidence
to show that the cause
occurred
Cerutti, Oren (Cardiff, Aberdeen) 184 / 203
Jupiter troops
deliver aids to
Martians
Jupiter intervention
on Mars is
humanitarian
PRO
Agreement to
exchange crude oil
for refined
petroleum
Jupiter intervention
on Mars aims at
protecting strategic
assets
PRO
CON
CON
Civilian casualties
caused by Jupiter
forces
CON
LCE
Use of old Jupiter
military doctrine
causes civilian
casualties
Large use of old
Jupiter military
techniques on Mars
CQ2
There is no evidence
to show that the cause
occurred
CON
Use of massive aerial
and artillery strikes
Cerutti, Oren (Cardiff, Aberdeen) 185 / 203
Knowledge Base
Kp = { aid;
oil;
doctrine;
technique;
noevidence;
artillery; }
Rd = { aid =⇒ humanitarian;
oil =⇒ strategic;
doctrine ∧ technique =⇒ casualties; }
humanitarian = –strategic (humanitarian and strategic are contradictories)
casualties is a contrary of humanitarian
noevidence is a contrary of technique
artillery is a contrary of noevidence
Cerutti, Oren (Cardiff, Aberdeen) 186 / 203
From Knowledge Base to Argument Graph
Kp = { aid;
oil;
doctrine;
technique;
noevidence;
artillery; }
Rd = { aid =⇒ humanitarian;
oil =⇒ strategic;
doctrine ∧ technique =⇒
casualties; }
humanitarian = –strategic (contradictories)
casualties is a contrary of humanitarian
noevidence is a contrary of technique
artillery is a contrary of noevidence
a1: aid
a2: a1 ⇒ humanitarian
a3: oil
a4: a3 ⇒ strategic
a5: doctrine
a6: technique
a7: a5 ∧ a6 ⇒ casualties
a8: noevidence
a9: artillery
Prakken, H. (2010). An abstract framework for argumentation with structured
arguments. Argument & Computation, 1(2):93–124.
Cerutti, Oren (Cardiff, Aberdeen) 187 / 203
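The step from the knowledge base to arguments a1-a9 can be sketched by naive forward chaining over the defeasible rules, as below; attacks, contrariness and preferences are deliberately left out, and the rule/premise encoding is an assumption.

premises = ["aid", "oil", "doctrine", "technique", "noevidence", "artillery"]
rules = [(["aid"], "humanitarian"),
         (["oil"], "strategic"),
         (["doctrine", "technique"], "casualties")]

def build_arguments(premises, rules):
    # each argument: (conclusion, sub-conclusions it is built on)
    arguments = [(p, []) for p in premises]
    concluded = set(premises)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in concluded and all(b in concluded for b in body):
                arguments.append((head, body))
                concluded.add(head)
                changed = True
    return arguments

for conclusion, subs in build_arguments(premises, rules):
    print(conclusion, "<=", subs if subs else "premise")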
a1: aid
a2: a1 ⇒ humanitarian
a3: oil
a4: a3 ⇒ strategic
a5: doctrine
a6: technique
a7: a5 ∧ a6 ⇒ casualties
a8: noevidence
a9: artillery
Cerutti, Oren (Cardiff, Aberdeen) 188 / 203
Jupiter troops
deliver aids to
Martians
Jupiter intervention
on Mars is
humanitarian
PRO
Agreement to
exchange crude oil
for refined
petroleum
Jupiter intervention
on Mars aims at
protecting strategic
assets
PRO
CON
CON
Civilian casualties
caused by Jupiter
forces
CON
LCE
Use of old Jupiter
military doctrine
causes civilian
casualties
Large use of old
Jupiter military
techniques on Mars
CQ2
There is no evidence
to show that the cause
occurred
CON
Use of massive aerial
and artillery strikes
Cerutti, Oren (Cardiff, Aberdeen) 189 / 203
Cerutti, Oren (Cardiff, Aberdeen) 190 / 203
https://cispaces.org/ http://cicero.cs.cf.ac.uk/cispaces/
Conclusions
Cerutti, Oren (Cardiff, Aberdeen) 191 / 203
Structured
Argumentation
(ASPIC+)
Abstract
Argumentation
Argument
Schemes
Extended
Frameworks
Dialogues
Practical Reasoning/
Decision Making
Normative
Reasoning
Plan Explanation
Sensemaking
Cerutti, Oren (Cardiff, Aberdeen) 192 / 203
Backup slides
Cerutti, Oren (Cardiff, Aberdeen) 193 / 203
Backup
Cerutti, Oren (Cardiff, Aberdeen) 194 / 203
Sensemaking
Agent
Data Request/
Crowdsourcing
Agent
Provenance
Agent
GUI Interface ToolBox
WorkBoxInfoBox ReqBox
ChatBox
Cerutti, Oren (Cardiff, Aberdeen) 195 / 203
Crowdsourcing Agent
1 Critical questions trigger the need for further information on a topic
2 Analyst calls the crowdsourcing agent (CWSAg)
3 CWSAg distributes the query to a large group of contributors
4 CWSAg aggregates the results and shows statistics to the analyst
Cerutti, Oren (Cardiff, Aberdeen) 196 / 203
CWSAg Results Import
[Figure: import of crowdsourced answers. The Q0 answer “Clear” (Con) is linked AGAINST “Water Contaminated”; the Q1 answer “21.1” (Pro) is linked FOR “Water Contaminated”; the two imported answers are marked as CONTRADICTORY.]
Cerutti, Oren (Cardiff, Aberdeen) 197 / 203
Sensemaking
Agent
Data Request/
Crowdsourcing
Agent
Provenance
Agent
GUI Interface ToolBox
WorkBoxInfoBox ReqBox
ChatBox
Cerutti, Oren (Cardiff, Aberdeen) 198 / 203
[Figure: provenance example. Two items of information reach the analyst by different routes: i_j, an image (“Gang heading South”) produced by an Observer from surveillance of border L1-L2 and image processing, and i_k, a message (“Gang crossing North border”) passed from a Messenger/Informer to Analyst Joe; each comes with a provenance graph, GP(i_j) and GP(i_k).]
Cerutti, Oren (Cardiff, Aberdeen) 199 / 203
Argument from Provenance
- Given a provenance chain GP(ij) of ij, information ij:
- (Where?) was derived from an entity A
- (Who?) was associated with actor AG
- (What?) was generated by activity P1
- (How?) was informed by activity P2
- (Why?) was generated to satisfy goal X
- (When?) was generated at time T
- (Which?) was generated by using some entities A1,. . . , AN
- where A, AG, P1, . . . belong to GP(ij )
- the stated elements of GP(ij) infer that information ij is true,
⇒ Therefore, information ij may plausibly be taken to be true.
CQA1: Is ij consistent with other information?
CQA2: Is ij supported by evidence?
CQA3: Does GP (ij ) contain other elements that lead us not to believe ij ?
CQA4: Are there provenance elements that should have been included for
believing ij ?
Cerutti, Oren (Cardiff, Aberdeen) 200 / 203
Argument for Provenance Preference
- Given information i_j and i_k,
- and their known parts of the provenance chains GP(i_j) and GP(i_k),
- if there exists a criterion Ctr such that GP(i_j) ≺_Ctr GP(i_k), then i_j ≺ i_k
- a criterion Ctr leads to assert that GP(i_j) ≺_Ctr GP(i_k)
⇒ Therefore, i_k should be preferred to i_j.
Example criteria: trustworthiness, reliability, timeliness, shortest path.
CQB1: Does a different criterion Ctr1, such that GP(i_j) ≻_Ctr1 GP(i_k), lead to i_j ≺ i_k not being valid?
CQB2: Is there any exception to criterion Ctr such that, even if a provenance chain GP(i_k) is preferred to GP(i_j), information i_k is not preferred to information i_j?
CQB3: Is there any other reason for believing that the preference i_j ≺ i_k is not valid?
Cerutti, Oren (Cardiff, Aberdeen) 201 / 203
PVAg Provenance Analysis & Import
[Figure: provenance analysis and import of the information “Livestock illness” (prov:time 2015-04-27T02:27:40Z). Its provenance graph involves a US Patrol Report extracted by a US Team Patrol and a Farm Daily Report prepared by the farmer Kish (typed PrimarySource) and annotated with Livestock Pictures. The analysis highlights a Primary Source pattern and produces a provenance explanation; whether preferences should also be imported is left as an open question.]
Cerutti, Oren (Cardiff, Aberdeen) 202 / 203
Theories/Technologies integrated
Argument representation:
Argument Schemes and Critical questions (domain specific)
„Bipolar-like” graph for user consumption
AIF (extension for provenance)
ASPIC(+)
Arguments based on preferences (partially under development)
Theoretical framework for acceptability status:
AF
PrAF (case study for [Li15])
AFRA for preference handling (under development)
Computational machinery: jArgSemSAT
Cerutti, Oren (Cardiff, Aberdeen) 203 / 203
More Related Content

PDF
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...
PDF
Argumentation in Artificial Intelligence: From Theory to Practice
PDF
Argumentation in Artificial Intelligence
PDF
Presentation iaf 2014 v1
PDF
Computer Science Engineering: Discrete mathematics & graph theory, THE GATE A...
PDF
On Some New Contra Continuous and Contra Open Mappings in Intuitionistic Fuzz...
PDF
Handout: Argumentation in Artificial Intelligence: From Theory to Practice
PDF
Intuitionistic Fuzzy Semipre Generalized Connected Spaces
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...
Argumentation in Artificial Intelligence: From Theory to Practice
Argumentation in Artificial Intelligence
Presentation iaf 2014 v1
Computer Science Engineering: Discrete mathematics & graph theory, THE GATE A...
On Some New Contra Continuous and Contra Open Mappings in Intuitionistic Fuzz...
Handout: Argumentation in Artificial Intelligence: From Theory to Practice
Intuitionistic Fuzzy Semipre Generalized Connected Spaces

What's hot (18)

PDF
An algebraic approach to Duflo's polynomial conjecture in the nilpotent case
PDF
A Matrix Based Approach for Weighted Argumentation Frameworks
PDF
Extending Labelling Semantics to Weighted Argumentation Frameworks
PDF
Gödel’s incompleteness theorems
PPT
Godels First Incompleteness Theorem
PDF
Slides Workshopon Explainable Logic-Based Knowledge Representation (XLoKR 2020)
PDF
Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m...
PDF
Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma...
PDF
On Some New Continuous Mappings in Intuitionistic Fuzzy Topological Spaces
DOCX
Bc0052 theory of computer science
PDF
1 2 3
PDF
Math63032modal
PDF
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
PDF
11.on almost generalized semi continuous
PDF
On almost generalized semi continuous
PDF
A Labelling Semantics for Weighted Argumentation Frameworks
DOCX
Mathmatical reasoning
PDF
Next Steps in Propositional Horn Contraction
An algebraic approach to Duflo's polynomial conjecture in the nilpotent case
A Matrix Based Approach for Weighted Argumentation Frameworks
Extending Labelling Semantics to Weighted Argumentation Frameworks
Gödel’s incompleteness theorems
Godels First Incompleteness Theorem
Slides Workshopon Explainable Logic-Based Knowledge Representation (XLoKR 2020)
Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m...
Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma...
On Some New Continuous Mappings in Intuitionistic Fuzzy Topological Spaces
Bc0052 theory of computer science
1 2 3
Math63032modal
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
11.on almost generalized semi continuous
On almost generalized semi continuous
A Labelling Semantics for Weighted Argumentation Frameworks
Mathmatical reasoning
Next Steps in Propositional Horn Contraction
Ad

Similar to Introduction to Formal Argumentation Theory (11)

PDF
dung-semantics-part1.pdf
PDF
Cerutti--ARGAIP 2010
PDF
Handout for the course Abstract Argumentation and Interfaces to Argumentative...
PDF
Cerutti--ECSQARU 2009
PDF
Cerutti--AAAI Fall Symposia 2009
PDF
Looking for Invariant Operators in Argumentation
PDF
Cerutti--PhD viva voce defence
PDF
Looking for Invariant Operators in Argumentation
PDF
Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen)
PPT
Jarrar.lecture notes.aai.2011s.descriptionlogic
PPT
Supporting Argument in e-Democracy
dung-semantics-part1.pdf
Cerutti--ARGAIP 2010
Handout for the course Abstract Argumentation and Interfaces to Argumentative...
Cerutti--ECSQARU 2009
Cerutti--AAAI Fall Symposia 2009
Looking for Invariant Operators in Argumentation
Cerutti--PhD viva voce defence
Looking for Invariant Operators in Argumentation
Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen)
Jarrar.lecture notes.aai.2011s.descriptionlogic
Supporting Argument in e-Democracy
Ad

More from Federico Cerutti (17)

PDF
Security of Artificial Intelligence
PDF
Introduction to Evidential Neural Networks
PDF
Human-Argumentation Experiment Pilot 2013: Technical Material
PDF
Probabilistic Logic Programming with Beta-Distributed Random Variables
PDF
Supporting Scientific Enquiry with Uncertain Sources
PDF
Algorithm Selection for Preferred Extensions Enumeration
PDF
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ...
PDF
Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ...
PDF
A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract...
PDF
Cerutti-AT2013-Graphical Subjective Logic
PDF
Cerutti-AT2013-Trust and Risk
PDF
Cerutti -- TAFA2013
PDF
Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit...
PDF
Cerutti--TAFA 2011
PDF
Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B...
PDF
Cerutti--NMR 2010
PDF
Cerutti--Web Information Systems (postgrad seminar @ University of Brescia)
Security of Artificial Intelligence
Introduction to Evidential Neural Networks
Human-Argumentation Experiment Pilot 2013: Technical Material
Probabilistic Logic Programming with Beta-Distributed Random Variables
Supporting Scientific Enquiry with Uncertain Sources
Algorithm Selection for Preferred Extensions Enumeration
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ...
Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ...
A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract...
Cerutti-AT2013-Graphical Subjective Logic
Cerutti-AT2013-Trust and Risk
Cerutti -- TAFA2013
Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit...
Cerutti--TAFA 2011
Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B...
Cerutti--NMR 2010
Cerutti--Web Information Systems (postgrad seminar @ University of Brescia)

Recently uploaded (20)

PDF
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
Cell Structure & Organelles in detailed.
PDF
Module 4: Burden of Disease Tutorial Slides S2 2025
PDF
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
PDF
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
PDF
TR - Agricultural Crops Production NC III.pdf
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PPTX
PPH.pptx obstetrics and gynecology in nursing
PPTX
Lesson notes of climatology university.
PDF
RMMM.pdf make it easy to upload and study
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PDF
01-Introduction-to-Information-Management.pdf
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PDF
Basic Mud Logging Guide for educational purpose
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
FourierSeries-QuestionsWithAnswers(Part-A).pdf
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PDF
Supply Chain Operations Speaking Notes -ICLT Program
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
Microbial diseases, their pathogenesis and prophylaxis
Cell Structure & Organelles in detailed.
Module 4: Burden of Disease Tutorial Slides S2 2025
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
TR - Agricultural Crops Production NC III.pdf
O5-L3 Freight Transport Ops (International) V1.pdf
PPH.pptx obstetrics and gynecology in nursing
Lesson notes of climatology university.
RMMM.pdf make it easy to upload and study
2.FourierTransform-ShortQuestionswithAnswers.pdf
01-Introduction-to-Information-Management.pdf
102 student loan defaulters named and shamed – Is someone you know on the list?
Basic Mud Logging Guide for educational purpose
Pharmacology of Heart Failure /Pharmacotherapy of CHF
FourierSeries-QuestionsWithAnswers(Part-A).pdf
Abdominal Access Techniques with Prof. Dr. R K Mishra
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
Supply Chain Operations Speaking Notes -ICLT Program

Introduction to Formal Argumentation Theory

  • 1. Introduction to Formal Argumentation Theory Federico Cerutti and Nir Oren Cardiff University, University of Aberdeen CeruttiF@cardiff.ac.uk, n.oren@abdn.ac.uk Cerutti, Oren (Cardiff, Aberdeen) 1 / 203
  • 2. From Structured to Abstract Argumentation Cerutti, Oren (Cardiff, Aberdeen) 2 / 203
  • 3. Does MMR vaccination cause autism? Cerutti, Oren (Cardiff, Aberdeen) 3 / 203
  • 4. Supporting Reasoning with Different Types of Evidence in Intelligence Analysis Alice Toniolo_ Anthony Etuk Robin Wentao Ouyang Tlmothy J- N0Fman Federico Cerutti Mani Srivastava DBPL 0f_C0ml3U“”Q SCIENCE Dept. of Computing Science University of California University of Aberdeen, UK University of Aberdeen, UK Los Angeles, CA, USA Nir Oren Timothy Dropps Paul Sullivan Dept. of Computing Science John A_ Allen INTELPOINT Incorporated University of Aberdeen, UK Honeywell, USA Pennsylvania, USA Appears in: Proceedings of the 14th International Conference on Autonomous Agents and ll/Iultiayent Systems (AAJWAS 2015), Bordim, Elkind, Was.-3, Yolum (ed5.), Mlay 4 8, 2015, Istcmbttl, Turkey. [Ton+15] Cerutti, Oren (Cardiff, Aberdeen) 4 / 203
  • 5. Caveat [BL08] [PS13] Cerutti, Oren (Cardiff, Aberdeen) 5 / 203
  • 6. Douglas Walton Chris Reed Fabrizio Macagno ARGUMENTATION SCHEMES [WRM08] Cerutti, Oren (Cardiff, Aberdeen) 6 / 203
  • 7. Argumentation scheme for argument from correlation to cause Correlation Premise: There is a positive correlation between A and B. Conclusion: A causes B. Critical questions are: CQ1: Is there really a correlation between A and B? CQ2: is there any reason to think that the correlation is any more than a coincidence? CQ3: Could there be some third factor, C, that is causing both A and B? Cerutti, Oren (Cardiff, Aberdeen) 7 / 203
  • 8. The Knowledge Engineering Review, Vol. 26:4, 487—51 1. © Cambridge University Press, 2011 doi:10.1017/S0269888911000191 Representing and classifying arguments on the Semantic Web IYAD RAHWAN1‘2, B_ITA BANIHASHEMI3, CHRIS REED4, DOUGLAS WALTON” and SHERIEF ABDALLAH” [Rah+11] Cerutti, Oren (Cardiff, Aberdeen) 8 / 203
  • 9. Node Graph (argument network) has-a Information Node (I-Node) is-a Scheme Node S-Node has-a Edge is-a Rule of inference application node (RA-Node) Conflict application node (CA-Node) Preference application node (PA-Node) Derived concept application node (e.g. defeat) is-a ... ContextScheme Conflict scheme contained-in Rule of inference scheme Logical inference scheme Presumptive inference scheme ... is-a Logical conflict scheme is-a ... Preference scheme Logical preference scheme is-a ... Presumptive preference scheme is-a uses uses uses Cerutti, Oren (Cardiff, Aberdeen) 9 / 203
  • 10. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Cerutti, Oren (Cardiff, Aberdeen) 10 / 203
  • 11. EARLY REPORT Early report lleal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children A J Wake eld, S H Murch, A Anthony, J Linnell, D M Casson, M Malik, M Berelowitz, A P Dhillon, M A Thomson, P Harvey, A Valentine, 5 E Davies, J A Walker-Smith 5|-|mma|'Y Introduction 1177 " °9W several children Who, after a nP"" ' "‘ investigated a conser""' _m;mAn1".,,, Cerutti, Oren (Cardiff, Aberdeen) 11 / 203
  • 12. Support What else should be true if the causal link is true? Cerutti, Oren (Cardiff, Aberdeen) 12 / 203 (Wakefield et al, 1998)
  • 13. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Behavioural symptoms were associated by parents of 12 children Witn Cerutti, Oren (Cardiff, Aberdeen) 13 / 203
  • 14. The New England Iournal of Medicine Copyright © 2002 by the Massachusetts Medical Society VOLUME 347 N()VEMBER 7, 2002 NUMBER 19 A POPULATION-BASED STUDY OF MEASLES, MUMPS, AND RUBELLA VACCINATION AND AUTISM KREESTEN MELDGAARD MADSEN, M.D., ANDERS HVIID, M.Sc., MOGENS VESTERGAARD, M.D., DIANA SCHENDEL, PH.D., JAN WOHLFAHRT, M.Sc., POUL THORSEN, M.D., J(ZiRN OLSEN, M.D., AND MADS MELBYE, M.D. ABS""‘ I 7 "Tested that the measle ' +hat vaccina— ”“CCi11C C3“’ -nn- ’ Cerutti, Oren (Cardiff, Aberdeen) 14 / 203
  • 15. Support Cerutti, Oren (Cardiff, Aberdeen) 15 / 203 (Madsen et al, 2002)
  • 16. Support What else should be true if the causal link is true? Support Support Cerutti, Oren (Cardiff, Aberdeen) 16 / 203
  • 17. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Behavioural symptoms were associated by parents of 12 children Witn CQ1: There is no correlation between MMR vaccination and autism CON E-2-H No statistical correlation over 440,655 children Cerutti, Oren (Cardiff, Aberdeen) 17 / 203
  • 18. ASPIC+ [Pra10] [MP13] [MP14] Cerutti, Oren (Cardiff, Aberdeen) 18 / 203
  • 19. ASPIC+ An argumentation system is as tuple AS = L, R, , ν, where: : L → 2L: a contrariness function s.t. if ϕ ∈ ψ and: ψ /∈ ϕ, then ϕ is a contrary of ψ; ψ ∈ ϕ, then ϕ is a contradictory of ψ (ϕ = –ψ); R = Rd ∪ Rs: strict (Rs) and defeasible (Rd ) inference rules s.t. Rd ∩ Rs = ∅; is an ordering on Rd . ν : Rd → L, is a partial function.a P ⊆ L is consistent iff ϕ, ψ ∈ P s.t. ϕ ∈ ψ, otherwise is inconsistent. A knowledge base in an AS is Kn ∪ Kp = K ⊆ L; {Kn, Kp} is a partition of K; Kn contains axioms that cannot be attacked; Kp contains ordinary premises that can be attacked. An argumentation theory is a pair AT = AS, K . a Informally, ν(r) is a wff in L which says that the defeasible rule r is applicable. Cerutti, Oren (Cardiff, Aberdeen) 18 / 203
  • 20. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Behavioural symptoms were associated by parents of 12 children Witn CQ1: There is no correlation between MMR vaccination and autism CON E-2-H No statistical correlation over 440,655 children α β γ δ ε Cerutti, Oren (Cardiff, Aberdeen) 19 / 203
  • 21. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Behavioural symptoms were associated by parents of 12 children Witn CQ1: There is no correlation between MMR vaccination and autism CON E-2-H No statistical correlation over 440,655 children α β γ δ ε β =⇒ α γ =⇒ β =⇒ δ δ ∈ β Cerutti, Oren (Cardiff, Aberdeen) 20 / 203
  • 22. ASPIC+ An argument a on the basis of a AT = AS, K , AS = L, R, , ν, is: 1 ϕ if ϕ ∈ K with: Prem(a) = {ϕ}; Conc(a) = ϕ; Sub(a) = {ϕ}; Rules(a) = DefRules(a) = ∅; TopRule(a) = undefined. 2 a1, . . . , an −→ / =⇒ ψ if a1, . . . , an, with n ≥ 0, are arguments such that there exists a strict/defeasible rule r = Conc(a1), . . . , Conc(an) −→ / =⇒ ψ ∈ Rs/Rd . Prem(a) = n i=1 Prem(ai); Conc(a) = ψ; Sub(a) = n i=1 Sub(ai) ∪ {a}; Rules(a) = n i=1 Rules(ai) ∪ {r}; DefRules(a) = {d | d ∈ Rules(a) ∩ Rd }; TopRule(a) = r a is strict if DefRules(a) = ∅, otherwise defeasible; firm if Prem(a) ⊆ Kn, otherwise plausible. Cerutti, Oren (Cardiff, Aberdeen) 21 / 203
  • 23. ASPIC+ Given a and b arguments, a defeats b iff a undercuts, successfully rebuts or successfully undermines b, where: a undercuts b (on b ) iff Conc(a) /∈ ν(r) for some b ∈ Sub(b) s.t. r = TopRule(b ) ∈ Rd ; a successfully rebuts b (on b ) iff Conc(a) /∈ ϕ for some b ∈ Sub(b) of the form b1, . . . , bn =⇒ –ϕ, and a b ; a successfully undermines b (on ϕ) iff Conc(a) /∈ ϕ, and ϕ ∈ Prem(b) ∩ Kp, and a ϕ. AF is the abstract argumentation framework defined by AT = AS, K if A is the smallest set of all finite arguments constructed from K; and → is the defeat relation on A. Cerutti, Oren (Cardiff, Aberdeen) 22 / 203
  • 24. γε ε, ε ⇒ δ γ, γ ⇒ β γ, γ ⇒ β, β ⇒ α Cerutti, Oren (Cardiff, Aberdeen) 23 / 203
  • 25. Artificial Intelligence Arti cialIntelligence 77 (1995) 321v357 On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games* Phan Minh Dung* [Dun95] Cerutti, Oren (Cardiff, Aberdeen) 24 / 203
  • 26. Definition A Dung argumentation framework AF is a pair A, → where A is a set of arguments, and → is a binary relation on A i.e. →⊆ A × A. Cerutti, Oren (Cardiff, Aberdeen) 25 / 203
  • 27. A semantics is a way to identify sets of arguments (i.e. extensions) “surviving the conflict together” Cerutti, Oren (Cardiff, Aberdeen) 26 / 203
  • 28. (Some) Semantics Properties wailah-la unlina at 1-Iwmnscianca-dira+:t.corn ':.i; Science-.Direct Ani gal Intelligence:1 E.LSI:'."v'lI:'.R. .eu:i:'.u'.-in Jnl::||igI:n»;::: m izrocm n75—:':m www.r:I:i::1.r'icr.r:nn1.-'|m::3n:.':3r1iI11 On principle-based evaluation of extension-based argumentation semantics ii’ Pietra Bamni, Massimiliano Giacomin * [BG07] The Kn0w[ed'ge Engineering Review, Vol. 26:4, 365-410. © Cambridge University Press, 2011 doi:10.1017J/S0269888911000166 An introduction to argumentation semantics PIETRO BARONI‘, MARTIN CAMINADA2 and MASSIMILIANO GlACOMIN' [BCG11] Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 29. (Some) Semantics Properties Conflict-freeness an attacking and an attacked argument can not stay together (∅ is c.f. by def.) Admissibility Strong-Admissibility Reinstatement I-Maximality Directionality Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 30. (Some) Semantics Properties Conflict-freeness Admissibility the extension should be able to defend itself, „fight fire with fire” (∅ is adm. by def.) Strong-Admissibility Reinstatement I-Maximality Directionality Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 31. (Some) Semantics Properties Conflict-freeness Admissibility Strong-Admissibility defence must be grounded on unattacked arguments (∅ is strong adm. by def.) Reinstatement I-Maximality Directionality Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 32. (Some) Semantics Properties Conflict-freeness Admissibility Strong-Admissibility Reinstatement if you defend some argument you should take it on board (∅ satisfies the principle only if there are no unattacked arguments) I-Maximality Directionality Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 33. (Some) Semantics Properties Conflict-freeness Admissibility Strong-Admissibility Reinstatement I-Maximality no extension is a proper subset of another one Directionality Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 34. (Some) Semantics Properties Conflict-freeness Admissibility Strong-Admissibility Reinstatement I-Maximality Directionality a (set of) argument(s) is affected only by its ancestors in the attack relation Cerutti, Oren (Cardiff, Aberdeen) 27 / 203
  • 35. Complete Extension Admissibility and reinstatement Set of conflict-free arguments s.t. each defended argument is included b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}, {a, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 28 / 203
  • 36. Grounded Extension Strong Admissibility Minimum complete extension b a c d f e gh    {a, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 29 / 203
  • 37. Preferred Extension Admissibility and maximality Maximum complete extensions b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 30 / 203
  • 38. Stable Extension „orror vacui:” the absence of odd-length cycles is a sufficient condition for existence of stable extensions Complete extensions attacking all the arguments outside b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 31 / 203
  • 39. Complete Labellings An argument is IN if all its attackers are OUT An argument is OUT if at least one of its attackers is IN Otherwise is UNDEC Cerutti, Oren (Cardiff, Aberdeen) 32 / 203
  • 40. Complete Labellings Max. UNDEC ≡ Grounded b a c d f e gh    {a, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
  • 41. Complete Labellings Max. IN ≡ Preferred b a c d f e gh    {a, c, d, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
  • 42. Complete Labellings Max. IN ≡ Preferred b a c d f e gh    {a, b, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
  • 43. Complete Labellings No UNDEC ≡ Stable b a c d f e gh    {a, c, d, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
  • 44. Complete Labellings No UNDEC ≡ Stable b a c d f e gh    {a, b, c, e, g}    Cerutti, Oren (Cardiff, Aberdeen) 33 / 203
  • 45. Properties of semantics CO GR PR ST D-conflict-free Yes Yes Yes Yes D-admissibility Yes Yes Yes Yes D-strongly admissibility No Yes No No D-reinstatement Yes Yes Yes Yes D-I-maximality No Yes Yes Yes D-directionality Yes Yes Yes No Cerutti, Oren (Cardiff, Aberdeen) 34 / 203
  • 46. Many more semantics Cerutti, Oren (Cardiff, Aberdeen) 35 / 203
  • 47. γε ε, ε ⇒ δ γ, γ ⇒ β γ, γ ⇒ β, β ⇒ α Cerutti, Oren (Cardiff, Aberdeen) 36 / 203
  • 48. MMR vaccination causes authism C-2-C It is possible that MMR vaccination is associated to autism Behavioural symptoms were associated by parents of 12 children Witn CQ1: There is no correlation between MMR vaccination and autism CON E-2-H No statistical correlation over 440,655 children α β γ δ ε Cerutti, Oren (Cardiff, Aberdeen) 37 / 203
  • 49. Rationality postulates P1: direct consistency iff {Conc(a) | a ∈ S} is consistent; P2: indirect consistency iff Cl({Conc(a) | a ∈ S}) is consistent; P3: closure iff {Conc(a) | a ∈ S} = Cl({Conc(a) | a ∈ S}); P4: sub-argument closure iff ∀a ∈ S, Sub(a) ⊆ S. Satisfied if: Closure under transposition If ϕ1, . . . , ϕn −→ ψ ∈ Rs, then ∀i = 1 . . . n, ϕ1, . . . , ϕi−1, ¬ψ, ϕi+1, . . . , ϕn =⇒ ¬ϕi ∈ Rs. Cl(Kn) is consistent; the argument ordering is reasonable, namely: ∀a, b, if a is strict and firm, and b is plausible or defeasible, then a b; ∀a, b, if b is strict and firm, then b a; ∀a, a , b such that a is a strict continuation of {a}, if a b then a b, and if b a, then b a ; given a finite set of arguments {a1, . . . , an}, let a+i be some strict continuation of {a1, . . . , ai−1, ai+1, . . . , an}. Then it is not the case that ∀i, a+i ai . Cerutti, Oren (Cardiff, Aberdeen) 38 / 203
  • 50. Chapter 5 Complexity of Abstract Argumentation Paul E. Dunne and Michael Wooldridge I. Rahwan, G. R. Simari (cds.), Argunzerztarion in Ar‘!1j‘icial Intelligence, DO] 10.1007/978—0—387—98197'-0-5. © Springer SCience+Business Media. LLC 2009 [DW09] Cerutti, Oren (Cardiff, Aberdeen) 39 / 203
  • 51. σ = CO σ = GR σ = PR σ = ST EXISTSσ trivial trivial trivial NP-c CAσ NP-c polynomial NP-c NP-c SAσ polynomial polynomial Πp 2-c coNP-c VERσ polynomial polynomial coNP-c polynomial NEσ NP-c polynomial NP-c NP-c Cerutti, Oren (Cardiff, Aberdeen) 39 / 203
  • 52. Cerutti, Oren (Cardiff, Aberdeen) 40 / 203
  • 53. Extending Dung Dung’s framework captures negative interactions between arguments. But Dung’s framework does not easily capture several intuitive properties of human argumentation Joint attack Recursive/meta-arguments Preferences Support Argument strength Cerutti, Oren (Cardiff, Aberdeen) 41 / 203
  • 54. Joint Attack (Nielsen & Parsons (2006)) Both A and B must be the case for C to not hold. Dung’s results map directly — only the definition of attacks needs modification. a b c Cerutti, Oren (Cardiff, Aberdeen) 42 / 203
  • 55. PAFs (Amgoud (1999)) Witness A claims x, Witness B claims ¬x, but A is much more reliable. Cerutti, Oren (Cardiff, Aberdeen) 43 / 203
  • 56. PAFs (Amgoud (1999)) Witness A claims x, Witness B claims ¬x, but A is much more reliable. A Preference-based argumentation framework (PAF) is a triple A, R, , where ⊆ A × A. A B states that A is preferred to B. A PAF is transformed to a PAF by moving from attacks to defeats: A defeats B iff A attacks B and A B. Cerutti, Oren (Cardiff, Aberdeen) 43 / 203
  • 57. But... a b b > a Cerutti, Oren (Cardiff, Aberdeen) 44 / 203
  • 58. But... a b We can end up with conflicts in our extensions Cerutti, Oren (Cardiff, Aberdeen) 44 / 203
  • 59. Repair (Amgoud & Vesic (2014)) Attacks between arguments represent An incoherence between the two arguments; and A kind of preference determined by the direction of the attack. We can thus consider the ultimate direction of the arrow to express a real preference between arguments, and reverse it if needed. Rr = {(a, b)|(a, b) ∈ R and not (b > a)}∪ {(b, a)|(a, b) ∈ R and (b > a)} This amounts to reversing the direction of the arrows w.r.t preferences. Preferences can also be used to pick between multiple extensions, selecting the "most preferred extensions". Cerutti, Oren (Cardiff, Aberdeen) 45 / 203
  • 60. Preferences using Extended Frameworks (Modgil, Cerutti and others) The idea of these frameworks is to allow attacks on attacks. Capturing preferences, undercuts and the like in a natural manner. a>b b>a a b b>a Cerutti, Oren (Cardiff, Aberdeen) 46 / 203
  • 61. Support Attacks between arguments allow for reinstatement to occur, enabling arguments to defend one another. Arguments can also build on top of one another, or strengthen each other through support. Bipolar argumentation frameworks (Cayrol et al (2009)) allow for arguments to interact by both attacking and supporting each other. A, R, S Different formalisms treat support differently. Cerutti, Oren (Cardiff, Aberdeen) 47 / 203
  • 62. Evidential Argument Frameworks (Oren et al (2014) Evidential argument frameworks capture the notion of sub-argument support. For a conclusion to be justified, sub-arguments which lead to that conclusion must be justified. Evidence for initial arguments is also required. It is then possible to transform the Evidential Framework into a Dung framework by combining sub-arguments to form arguments with only attacks between them. ⌘⌘ a b c d a a,b a,b,c d Cerutti, Oren (Cardiff, Aberdeen) 48 / 203
  • 63. Attacks in Bipolar Frameworks a bc a bc a bc a bc Secondary Supported Mediated Extended Another approach involves introducing new attacks based on the supports present in the framework, after which the original supports and attacks are deleted. Cerutti, Oren (Cardiff, Aberdeen) 49 / 203
  • 64. Attacks in Bipolar Frameworks Different systems introduce different types of attacks. Polberg & Hunter (2018) provide strong evidence that human reasoning makes use of support when thinking about arguments, and thus hint that bipolar frameworks are more than just ‘syntactic sugar’. Cerutti, Oren (Cardiff, Aberdeen) 50 / 203
  • 65. Strength Humans often claim that some argument is stronger than another. Such strengths can come from beliefs relating to one argument being preferred (by the reasoner) to another; or From having the claims of the argument being considered more certain. Cerutti, Oren (Cardiff, Aberdeen) 51 / 203
  • 66. Probabilistic Argument Frameworks (PrAFs) PrAFs are a simple way to capture uncertainty in an abstract framework. They extend a standard DAF with probabilistic concepts. A, D Cerutti, Oren (Cardiff, Aberdeen) 52 / 203
  • 67. Probabilistic Argument Frameworks (PrAFs) PrAFs are a simple way to capture uncertainty in an abstract framework. They extend a standard DAF with probabilistic concepts. A, D, PA, PD PA,PD encodes the likelihood of an argument or attack. Cerutti, Oren (Cardiff, Aberdeen) 52 / 203
  • 68. Interpreting PrAFs A 0.8 B 0.6 We can interpret PrAFs via a frequentist approach to probability: PA(A) = 0.8 means that in 8 out of 10 possible worlds (or Argument Frameworks), A exists. A B A B Cerutti, Oren (Cardiff, Aberdeen) 53 / 203
  • 69. Likelihoods of Argument Frameworks A 0.8 B 0.6 P(∅, ∅) =? P({A}, ∅) =? P({B}, ∅) =? P({A, B}, {(A, B), (B, A)}) =? Cerutti, Oren (Cardiff, Aberdeen) 54 / 203
  • 70. Likelihoods of Argument Frameworks A 0.8 B 0.6 P(∅, ∅) = 0.08 P({A}, ∅) = 0.32 P({B}, ∅) = 0.12 P({A, B}, {(A, B), (B, A)}) = 0.48 Each of these DAFs is induced from the original PrAF (diagrams of the four induced frameworks, with probabilities 0.08, 0.32, 0.12 and 0.48, omitted). Cerutti, Oren (Cardiff, Aberdeen) 54 / 203
  • 71. Semantics Unlike traditional frameworks, extensions are probabilistic, indicating the likelihood that a set of arguments appears within some extension. This probability is computed as the sum of probabilities of the AFs where the argument appears in the Dung extension. P(∅, ∅) = 0.08 P({A}, ∅) = 0.32 P({B}, ∅) = 0.12 P({A, B}, {(A, B), (B, A)}) = 0.48 P({A} ∈ Grounded) = 0.32 P({A} ∈ Preferred(credulous)) = 0.8 P({A} ∈ Preferred(skeptical)) = 0.32 Cerutti, Oren (Cardiff, Aberdeen) 55 / 203
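To make the induced-framework reading concrete, here is a small illustrative sketch (Python, with my own helper names, not the tutorial's code) that enumerates the frameworks induced by a PrAF and sums the probabilities of those in which a query argument belongs to the grounded extension; on the slide's example it returns 0.32.

from itertools import product

def grounded(args, attacks):
    # Grounded extension as the least fixed point of the characteristic function.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    E = set()
    while True:
        defended = {a for a in args
                    if all(any((z, b) in attacks for z in E) for b in attackers[a])}
        if defended == E:
            return E
        E = defended

def praf_grounded_prob(query, p_args, p_atts):
    # P(query is in the grounded extension), summing over all induced frameworks.
    # p_args: {argument: probability of existing}; p_atts: {(a, b): probability}.
    args = list(p_args)
    total = 0.0
    for arg_in in product([True, False], repeat=len(args)):
        S = {a for a, inc in zip(args, arg_in) if inc}
        p_s = 1.0
        for a, inc in zip(args, arg_in):
            p_s *= p_args[a] if inc else 1 - p_args[a]
        # Only attacks whose endpoints both exist can appear in the induced AF.
        cand = [(e, pe) for e, pe in p_atts.items() if e[0] in S and e[1] in S]
        for att_in in product([True, False], repeat=len(cand)):
            T = {e for (e, _), inc in zip(cand, att_in) if inc}
            p = p_s
            for (e, pe), inc in zip(cand, att_in):
                p *= pe if inc else 1 - pe
            if query in grounded(S, T):
                total += p
    return total

# Slide example: P(A) = 0.8, P(B) = 0.6, and the mutual attacks are certain.
print(praf_grounded_prob("A", {"A": 0.8, "B": 0.6},
                         {("A", "B"): 1.0, ("B", "A"): 1.0}))   # 0.32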
  • 72. Extensions It’s possible to extend PrAFs to Evidential Frameworks, lifting aspects of the independence assumption PrAFs make. And from there, to structured argumentation. See Li, H., "Probabilistic Argumentation" (2015) for details. Cerutti, Oren (Cardiff, Aberdeen) 56 / 203
  • 73. What do probabilities mean? 1 Likelihood of an argument being considered justified (Hunter, COMMA-12) 2 Likelihood that an argument is known by an agent (Li et al, TAFA-11,COMMA-12,ArgMAS-13) 3 Likelihood that an agent believes an argument (Thimm, ECAI-12, ECAI-14, Hunter, IJAR-13, ArXiv-14, . . . ) Cerutti, Oren (Cardiff, Aberdeen) 57 / 203
  • 74. What do probabilities mean? 1 Likelihood of an argument being considered justified (Hunter, COMMA-12) 2 Likelihood that an argument is known by an agent (Li et al, TAFA-11,COMMA-12,ArgMAS-13) 3 Likelihood that an agent believes an argument (Thimm, ECAI-12, ECAI-14, Hunter, IJAR-13, ArXiv-14, . . . ) Structural uncertainty - uncertainty about the structure of the argument graph (1 and 2). Epistemic uncertainty - uncertainty about agent beliefs (3). Cerutti, Oren (Cardiff, Aberdeen) 57 / 203
  • 75. Epistemic Extensions (taken from Hunter) A probability function maps sets of arguments to a probability value, P : 2^A → [0, 1], s.t. Σ_{A′⊆A} P(A′) = 1. The probability of a single argument is P(a) = Σ_{E⊆A, a∈E} P(E). Arguments are labelled based on the probability associated with them: a is in if P(a) > 0.5, out if P(a) < 0.5 and undec otherwise. What constraints can be placed on the probability function? Cerutti, Oren (Cardiff, Aberdeen) 58 / 203
  • 76. Some Constraints COH: for every a, b ∈ A, if a → b, then P(a) ≤ 1 − P(b). SFOU: P(a) ≥ 0.5 for every a ∈ A which is not attacked. FOU: P(a) = 1 for every a ∈ A which is not attacked. SOPT: P(a) ≥ 1 − Σ_{b s.t. b→a} P(b) whenever an attack against a exists. OPT: P(a) ≥ 1 − Σ_{b s.t. b→a} P(b) for every a ∈ A. JUS: COH and OPT. TER: P(a) ∈ {0, 0.5, 1} for every a ∈ A. Cerutti, Oren (Cardiff, Aberdeen) 59 / 203
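The constraints above are easy to test mechanically. The sketch below (illustrative only; dist, marginal and the tolerance are my own choices) represents the probability function of the previous slide as a distribution over sets of arguments, recovers the marginal P(a), and checks COH and FOU.

def marginal(dist, a):
    # P(a) = sum of P(E) over all sets E containing a.
    return sum(p for E, p in dist.items() if a in E)

def coherent(dist, attacks):
    # COH: whenever a attacks b, P(a) <= 1 - P(b).
    return all(marginal(dist, a) <= 1 - marginal(dist, b) + 1e-9
               for (a, b) in attacks)

def founded(dist, args, attacks):
    # FOU: every unattacked argument has probability 1.
    attacked = {b for (_, b) in attacks}
    return all(abs(marginal(dist, a) - 1) < 1e-9
               for a in args if a not in attacked)

# Two mutually attacking arguments, belief split 50/50.
args = {"a", "b"}
attacks = {("a", "b"), ("b", "a")}
dist = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(coherent(dist, attacks), founded(dist, args, attacks))   # True True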
  • 77. Classical Extensions Given a complete probability function, the following association between restrictions and classical extensions exists: no restriction → complete; no a s.t. P(a) = 0.5 → stable; maximal set of arguments s.t. P(a) = 1 → preferred; maximal set of arguments s.t. P(a) = 0 → preferred; maximal set of arguments s.t. P(a) = 0.5 → grounded; minimal set of arguments s.t. P(a) = 1 → grounded; minimal set of arguments s.t. P(a) = 0 → grounded; minimal set of arguments s.t. P(a) = 0.5 → stable. Cerutti, Oren (Cardiff, Aberdeen) 60 / 203
  • 78. Non-standard Extensions Cerutti, Oren (Cardiff, Aberdeen) 61 / 203
  • 79. So What? Can we use these properties to assign probabilities to arguments? Assume a partial function π : A → [0, 1] What are the “best” probabilities to assign to arguments not in the domain of π? Cerutti, Oren (Cardiff, Aberdeen) 62 / 203
  • 80. The Idea A 1 B ? Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
  • 81. The Idea A 1 B 0 Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
  • 82. The Idea A 0.7 B ? Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
  • 83. The Idea A 0.7 B 0.3 Cerutti, Oren (Cardiff, Aberdeen) 63 / 203
  • 84. The Idea What if we want COH (If a → b then P(a) ≤ 1 − P(b))? A ? B ? C 0.4 Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
  • 85. The Idea What if we want COH (If a → b then P(a) ≤ 1 − P(b))? A 0.6 B 0.4 C 0.4 Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
  • 86. The Idea What if we want COH (If a → b then P(a) ≤ 1 − P(b))? A 0.5 B 0.5 C 0.4 Multiple probability functions can satisfy the coherence here. Cerutti, Oren (Cardiff, Aberdeen) 64 / 203
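A brute-force sketch of this completion problem, using the A <-> B, C -> B graph from the slides with pi(C) = 0.4. The grid granularity and the names are illustrative assumptions, not part of any published procedure; the point is only that several completions satisfy COH.

attacks = [("A", "B"), ("B", "A"), ("C", "B")]
partial = {"C": 0.4}

def satisfies_coh(P):
    # COH: for every attack (a, b), P(a) <= 1 - P(b).
    return all(P[a] <= 1 - P[b] + 1e-9 for (a, b) in attacks)

grid = [round(0.1 * i, 1) for i in range(11)]
solutions = [(pa, pb) for pa in grid for pb in grid
             if satisfies_coh({"A": pa, "B": pb, **partial})]
print((0.6, 0.4) in solutions, (0.5, 0.5) in solutions)   # True True
print(len(solutions), "grid completions satisfy COH")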
  • 87. Applications Reasoning about uncertain knowledge Persuasion and opponent modelling Cerutti, Oren (Cardiff, Aberdeen) 65 / 203
  • 88. Where are we? We’ve covered several extensions of Dung’s formalism to take into account additional common aspects of argumentation. There are myriad other extended frameworks (and semantics) out there. Value based argumentation frameworks Fuzzy argumentation frameworks Weighted argumentation frameworks A variety of ways to represent argument strength Cerutti, Oren (Cardiff, Aberdeen) 66 / 203
  • 89. Dialogue Cerutti, Oren (Cardiff, Aberdeen) 67 / 203
  • 90. Where are we? We know how to represent arguments We know how to identify justified conclusions But how (and why?) do agents exchange arguments? Cerutti, Oren (Cardiff, Aberdeen) 68 / 203
  • 91. Exchanging arguments Agents act to achieve some goal. Different goals require different types of arguments to be exchanged. Walton and Krabbe’s (1995) typology: Information-seeking participant seeks answer to some question(s) from another participant, who knows the answer Inquiry participants collaborate to answer a question (whose answer they don’t know) Persuasion participant seeks to persuade another to accept a proposition they don’t currently endorse Negotiation bargaining over division of resources Deliberation collaborate to decide which action(s) should be adopted in some situation Eristic verbal quarrel rather than physical fighting Cerutti, Oren (Cardiff, Aberdeen) 69 / 203
  • 92. Dialogues Different types of dialogues are entered with the agents having different goals, and the dialogues achieving different outcomes. Dialogues may involve mixtures of dialogue types; one dialogue may be embedded in another. Dialogues specify a protocol — called a dialogue game — which agents can follow to reach the dialogue outcomes. Chap. 13 of Argumentation in Artificial Intelligence (2009) by McBurney and Parsons provides a very good general summary of dialogue games. Cerutti, Oren (Cardiff, Aberdeen) 70 / 203
  • 93. Dialogue Components A dialogue game consists of A set of commencement rules which define when the dialogue may begin. A set of locutions specifying which utterances are permitted. Such rules can also specify which combinations of locutions are permissible (e.g., asserting x and ¬x by the same participant may be prohibited). Commitment rules describe what an utterance commits an agent to. E.g., a question may commit another to provide an answer, while an assertion may commit the agent to either retracting or defending the assertion’s content. Such rules can also be combined, stating — for example — that a retraction after an assertion removes a commitment. Rules for speaker order specify who may make utterances when. Termination rules state when the dialogue ends. Cerutti, Oren (Cardiff, Aberdeen) 71 / 203
  • 94. Dialogical Agents An agent participating in a dialogue has a knowledge base containing its (private) knowledge about the world. Its dialogical commitments are tracked within a commitment store, and can be thought of as a mapping between locutions and statements expressing actions or beliefs external to the dialogue. Cerutti, Oren (Cardiff, Aberdeen) 72 / 203
  • 95. Dialogue Semantics There are many different ways of specifying the semantics of each utterance within a dialogue (which we will not formalise). The effects of each utterance on agents and dialogue structures must be described. E.g., The precondition for an assert(φ) utterance is that a desires that all agents believe that φ is the case. The post-condition is that (1) all agents (except for a) believe that a desires them to believe that φ is the case; and (2) a is committed to demonstrate that φ is the case when questioned. One may also specify locution combination rules stating, e.g., that question(φ) may only be played when some agent is committed to φ. Cerutti, Oren (Cardiff, Aberdeen) 73 / 203
  • 96. Where are we? Dialogue games describe a protocol by which discussion can take place. In the context of argumentation, such a protocol usually involves adding, removing or deleting arguments from agent commitment stores to achieve some goal. To use a dialogue game, an agent must (typically) also identify an appropriate strategy to decide what locution to utter, and what the contents of the locution should be (see for example Thimm (2014) for a deeper discussion of this topic). Cerutti, Oren (Cardiff, Aberdeen) 74 / 203
  • 97. (Argument graph over arguments a–o.) Is o labelled in, undecided or out under the skeptical preferred semantics? Cerutti, Oren (Cardiff, Aberdeen) 75 / 203
  • 98. Proof dialogues Proof dialogues aim to provide a dialogical approach to determining the status of an argument. Rather than applying the formal definition of the semantics to determine extension membership, they consider two parties who enter a dialogue to compute the status of an argument. Such proof dialogues might help non-technical users understand why some conclusion is, or is not, justified. Cerutti, Oren (Cardiff, Aberdeen) 76 / 203
  • 99. Desiderata Such proof dialogues should Be natural — if they are similar to the manner in which humans reason, they’ll be understood Be sound — the conclusion reached in the dialogue should coincide with the argument actually being present in, or absent from, the extension(s) Be complete — any argument present or absent in the extension(s) should have an associated dialogue which can prove it Be computationally efficient Sometimes we won’t achieve all of these properties. Cerutti, Oren (Cardiff, Aberdeen) 77 / 203
  • 100. A simple proof dialogue a b c d e P : in(D) O : out(C) P : in(B) O : out(A) P : in(B) Cerutti, Oren (Cardiff, Aberdeen) 78 / 203
  • 101. A simple proof dialogue a b c d e P : in(D) O : out(C) P : in(B) O : out(A) P : in(B) in moves are claims, while out states a consequence of the in move and asks for a justification for this labelling. O has no moves left, and must therefore accept P’s position. P wins the game. Cerutti, Oren (Cardiff, Aberdeen) 78 / 203
  • 102. A simple proof dialogue a b c d e P : in(E) O : out(D) P : in(C) O : out(E) Cerutti, Oren (Cardiff, Aberdeen) 79 / 203
  • 103. A simple proof dialogue a b c d e P : in(E) O : out(D) P : in(C) O : out(E) By pointing out P’s contradiction, O wins the game. Cerutti, Oren (Cardiff, Aberdeen) 79 / 203
  • 104. The Game Participants: P and O Commencement rule: P states that some argument is in Speaker order: After P moves, players alternate. Locution rules: Each move of P (except the first) must be an in move which refers to an attacker of the previous move of O. Each move of O must be an out move which refers to an attacker of any of the previous in moves. O is not allowed to repeat moves (but P is, as the same in argument can cause multiple arguments to be out). Cerutti, Oren (Cardiff, Aberdeen) 80 / 203
  • 105. The Game Termination rules: If O uses an argument previously used by P, then O wins (as they have shown a contradiction). Similarly, if P uses an argument previously used by O, O wins. If P cannot move, then O wins (as they’re unable to justify their position). If O cannot move, then P wins (as they have to accept P’s claim). Cerutti, Oren (Cardiff, Aberdeen) 81 / 203
  • 106. The Game - Properties If there is a game for argument A won by P, then there is a preferred extension containing A. If there is a preferred extension containing A, then P has a winning strategy for the game. Minimal number of moves necessary is 2 · |number of arguments labelled out| + 1. But finding such a labelling is hard. Cerutti, Oren (Cardiff, Aberdeen) 82 / 203
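Under the stated rules, a winning strategy for P can be searched for recursively. The sketch below is my own illustrative rendering of that search (it decides credulous acceptance under the preferred semantics on small frameworks); the function names and the example chain are assumptions, not the authors' implementation.

def attackers_of(attacks, x):
    return {a for (a, b) in attacks if b == x}

def o_has_winning_move(attacks, p_in, o_out):
    # O may play out(b) for any attacker b of any P argument, without repeating.
    for b in set().union(*(attackers_of(attacks, a) for a in p_in)):
        if b in o_out:
            continue                        # O may not repeat a move
        if b in p_in:
            return True                     # P has contradicted itself: O wins
        if not p_has_winning_move(attacks, p_in, o_out | {b}, b):
            return True
    return False                            # O cannot move (or only loses)

def p_has_winning_move(attacks, p_in, o_out, last_o):
    # P must play in(c) for some attacker c of O's last move; P may repeat.
    for c in attackers_of(attacks, last_o):
        if c in o_out:
            continue                        # using O's own argument would lose
        if not o_has_winning_move(attacks, p_in | {c}, o_out):
            return True
    return False                            # P cannot move: O wins

def credulously_accepted(attacks, arg):
    return not o_has_winning_move(attacks, {arg}, set())

# Chain a -> b -> c -> d -> e: the preferred extension is {a, c, e}.
attacks = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}
print(credulously_accepted(attacks, "e"), credulously_accepted(attacks, "d"))  # True False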
  • 107. Grounded Discussion Game (GDG) An argument is in the grounded extension if it "has to be the case". For an opponent to show an argument is not in the grounded extension, they simply need to show that one of its attackers "could be the case". The burden of proof is thus on P to show that none of the attackers of the argument they are defending can be the case. So the moves for the grounded game are HTB(A) A has to be the case — A is in the grounded labelling. Moved by P. CB(B) B is not out in the grounded labelling. Moved by O. CONCEDE(A) Signals an agreement that A is in. Moved by O. RETRACT(B) Signals that B is out. Moved by O. Cerutti, Oren (Cardiff, Aberdeen) 83 / 203
  • 108. Grounded Discussion Game (GDG) Game starts with P making a HTB statement. O Can then make one or more CB, CONCEDE and RETRACT statements. After which P makes a HTB and the cycle repeats. N.B., O makes multiple moves for every P move. Cerutti, Oren (Cardiff, Aberdeen) 84 / 203
  • 109. Locution Rules HTB(A) is either the first move, or the previous move was CB(B) in which case A must attack B, and O can’t CONCEDE or RETRACT. CB(B) is moved when B attacks the last HTB(A) statement where CONCEDE(A) has not yet been made; B has not been retracted; the last move was not a CB move, and CONCEDE and RETRACT cannot be played. CONCEDE(A) can be played when HTB(A) was moved earlier, and all attackers of A have been retracted, and CONCEDE(A) has not been played. RETRACT(A) can be played when CB(A) was moved in the past, and an attacker of A has been conceded, and RETRACT(A) has not been played. Cerutti, Oren (Cardiff, Aberdeen) 85 / 203
  • 110. Winning and Losing If O concedes the original argument, P wins. Otherwise, O wins. If a HTB,CB or HTB-CB repeat occurs for the same argument, O wins (due to burden of proof). f a b e h c d g 1: P : HTB(C) 4: O : CONCEDE(A) 2: O : CB(B) 5: O : RETRACT(B) 3: P : HTB(A) 6: O : CONCEDE(C) Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
  • 111. Winning and Losing If O concedes the original argument, P wins. Otherwise, O wins. If a HTB,CB or HTB-CB repeat occurs for the same argument, O wins (due to burden of proof). f a b e h c d g 1: P : HTB(B) 2: O : CB(A) Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
  • 112. Winning and Losing If O concedes the original argument, P wins. Otherwise, O wins. If a HTB,CB or HTB-CB repeat occurs for the same argument, O wins (due to burden of proof). f a b e h c d g 1: P : HTB(F) 4: O : CONCEDE(A) 2: O : CB(B) 5: O : RETRACT(B) 3: P : HTB(A) 6: O : CB(A) Cerutti, Oren (Cardiff, Aberdeen) 86 / 203
  • 113. Another Grounded Game Again, P and O alternate, with P moving first. Every P move except the first attacks the preceding O move. P moves cannot be repeated. The winner is the player making the last move. f a b e h c d g [C, B, A] is won by P [G, H] is won by O Cerutti, Oren (Cardiff, Aberdeen) 87 / 203
  • 114. Another Grounded Game (SGG) f a b e h c d g [F, B, A] is (incorrectly) won by P So all possible games must be considered to demonstrate that an argument is grounded. This is (effectively) a tree of possible unique discussions, where each path from root to leaf is won by P. Cerutti, Oren (Cardiff, Aberdeen) 88 / 203
  • 115. GDG vs SGG SGG allows arguments to reappear over multiple paths. In the worst case, it’s exponential in the number of arguments in the framework. GDG considers each argument once, and is linear in the number of arguments in the framework (note that a strategy exists which minimises game length). Exponential blow-up is a standard feature of most tree-based discussion games. Cerutti, Oren (Cardiff, Aberdeen) 89 / 203
  • 116. Skeptical Preferred Semantics Grounded is considered "too skeptical". Credulous preferred is "too lenient". Skeptical preferred semantics seem to capture human intuitions well. Some work uses meta-dialogues, or works only where stable and preferred semantics coincide. Cerutti, Oren (Cardiff, Aberdeen) 90 / 203
  • 117. Approach Two players, O and P Two phases Phase 1: O advances an extension where the argument under discussion is out or undec. Phase 2: P shows that this extension is not a preferred extension. Under perfect play, O will win iff the focal argument is not in, with P winning otherwise. Cerutti, Oren (Cardiff, Aberdeen) 91 / 203
  • 118. More detail Moves: What is (WI) — requests a label to be assigned to an argument. Claim (CL) — assign a label to an argument. Players take turns to make a single move, with P beginning both phases. Phase 1: P plays WI moves (starting with argument of interest). O responds with a CL move assigning a (legal) label to the argument. P’s WI moves are for arguments which attack a previous CL move (and no CL for that argument has yet occurred). Play continues until no moves are possible, an illegal CL is made, or the focal argument is claimed in. In the first case, Phase 2 begins, else P wins. Cerutti, Oren (Cardiff, Aberdeen) 92 / 203
  • 119. More detail Moves: What is (WI) — requests a label to be assigned to an argument. Claim (CL) — assign a label to an argument. Players take turns to make a single move, with P beginning both phases. Phase 2: P begins by playing CL on a undec labelled argument. O plays WI on a undec attacker of the CL. This repeats until no more moves can be made. P wins the game if it has made at least one move during this phase, and the labelling is legal. Cerutti, Oren (Cardiff, Aberdeen) 92 / 203
  • 120. Example f e g a b Phase one: P : WI(a) O : CL(undec(a)) P : WI(g) O : CL(undec(g)) P : WI(b) O : CL(undec(b)) P : WI(e) O : CL(out(e)) P : WI(f) O : CL(in(f)) Cerutti, Oren (Cardiff, Aberdeen) 93 / 203
  • 121. Example f e g a b Phase two: P : CL(in(g)) O : WI(b) P : CL(out(b)) O : WI(a) P : CL(in(a)) O : WI(g) P : CL(out(g)) P contradicts itself in Phase 2, and O therefore wins — a is not skeptically preferred. Cerutti, Oren (Cardiff, Aberdeen) 93 / 203
  • 122. Example 2 c d a b Phase one: P : WI(d) O : CL(undec(d)) P : WI(c) O : CL(undec(c)) P : WI(b) O : CL(undec(b)) P : WI(a) O : CL(undec(a)) Cerutti, Oren (Cardiff, Aberdeen) 94 / 203
  • 123. Example 2 c d a b Phase two: P : CL(in(d)) O : WI(c) P : CL(out(c)) O : WI(b) P : CL(in(b)) O : WI(a) P : CL(out(a)) In Phase 2, P successfully changes an undec argument to in, and therefore wins; d is skeptically preferred. Cerutti, Oren (Cardiff, Aberdeen) 94 / 203
  • 124. What’s going on? In phase 1, O identifies an admissible labelling where the focal argument is not in. If this is a preferred extension, then O should win the game, otherwise, they’ve cheated. Phase 2 allows P to prove that O has cheated in phase 1. Core result: there is a winning strategy for P or O depending on whether the argument is or isn’t skeptically preferred. Without perfect knowledge, this becomes a tree based discussion, requiring all possible paths to be explored. But in many applications, one party has perfect knowledge, reducing real world complexity. Cerutti, Oren (Cardiff, Aberdeen) 95 / 203
  • 125. Observations All proof dialogues incrementally assign a labelling to arguments. There is an implicit assumption that participants are cooperatively exploring the (shared) argument graph (as they know what questions are legal). Current work involves removing this assumption, but current results indicate that in the worst case, all arguments and attackers must be exchanged to obtain soundness and completeness, reducing to existing work. Since all attackers for an in argument must be explored, there’s a question of cognitive load in human-centric applications over large graphs. Exploration is taking place regarding heuristics to allow short-circuiting, but this comes at the cost of completeness. Cerutti, Oren (Cardiff, Aberdeen) 96 / 203
  • 126. Take away messages Dialectical proof procedures are an alternative approach to identifying status of argument. Such proof procedures exist for many semantics. They implicitly encode algorithms used to perform labellings (including random choice and backtracking as necessary). Complexity (for a good algorithm) is equivalent to complexity of deciding whether a single argument is in the appropriate extension type. The main claim is that such proof procedures are more easily understood by non-experts. Cerutti, Oren (Cardiff, Aberdeen) 97 / 203
  • 127. MAS and Argumentation Cerutti, Oren (Cardiff, Aberdeen) 98 / 203
  • 128. Decision Making Cerutti, Oren (Cardiff, Aberdeen) 99 / 203
  • 129. Cerutti, Oren (Cardiff, Aberdeen) 100 / 203
  • 130. The example is about having a surgery (sg) or not (¬sg), knowing that the patient has colonic polyps. The knowledge base contains the following information: having a surgery has side-effects, not having surgery avoids having side-effects, when having a cancer, having a surgery avoids loss of life, if a patient has cancer and has no surgery, the patient would lose his life, the patient has colonic polyps, having colonic polyps may lead to cancer. In addition to the above knowledge, the patient also has some goals, such as "no side effects" and "not losing his life". Obviously it is more important for him to not lose his life than to not have side effects. Cerutti, Oren (Cardiff, Aberdeen) 101 / 203
  • 131. α [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”] δ1 [“the patient may have a cancer”, “when having a cancer, having a surgery avoids loss of life”] δ2 [“not having surgery avoids having side-effects”] δ3 [“having a surgery has side-effects”] δ4 [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”, “if a patient has cancer and has no surgery, the patient would lose his life”] Cerutti, Oren (Cardiff, Aberdeen) 102 / 203
  • 132. Definition Ae denotes a set of epistemic arguments, and Ap denotes a set of practical arguments such that Ae ∩ Ap = ∅. Let A = Ae ∪ Ap (i.e. A will contain all those arguments) Ae = {α} while Ap = {δ1, δ2, δ3, δ4} Cerutti, Oren (Cardiff, Aberdeen) 103 / 203
  • 133. Definition Fp : D → 2^Ap is a function that returns the arguments in favor of a candidate decision. Such arguments are said to be pro the option. Fc : D → 2^Ap is a function that returns the arguments against a candidate decision. Such arguments are said to be con the option. The two functions satisfy the following requirements: ∀d ∈ D, there is no δ ∈ Ap s.t. δ ∈ Fp(d) and δ ∈ Fc(d). This means that an argument is either in favor of an option or against that option; it cannot be both. If δ ∈ Fp(d) and δ ∈ Fp(d′) (resp. if δ ∈ Fc(d) and δ ∈ Fc(d′)), then d = d′. This means that an argument refers to only one option. Let D = {d1, . . . , dn}. Ap = (∪i Fp(di)) ∪ (∪i Fc(di)), with i = 1, . . . , n. This means that the available practical arguments concern options of the set D. When δ ∈ Fx(d) with x ∈ {p, c}, we say that d is the conclusion of δ, and we write Conc(δ) = d. Cerutti, Oren (Cardiff, Aberdeen) 104 / 203
  • 134. α [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”] δ1 [“the patient may have a cancer”, “when having a cancer, having a surgery avoids loss of life”] δ2 [“not having surgery avoids having side-effects”] δ3 [“having a surgery has side-effects”] δ4 [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”, “if a patient has cancer and has no surgery, the patient would lose his life”] The two options of the set D = {sg, ¬sg} are supported/attacked by the following arguments: Fp(sg) = {δ1}, Fc(sg) = {δ3}, Fp(¬sg) = {δ2}, and Fc(¬sg) = {δ4}. Cerutti, Oren (Cardiff, Aberdeen) 105 / 203
  • 135. Definition Three preference relations between arguments are defined. The first one, denoted by ≥e, is a preorder—i.e. reflexive and transitive— on the set Ae. The second relation, denoted by ≥p, is a preorder on the set Ap. Finally, a third relation, denoted by ≥m (m stands for mixed relation), captures the idea that any epistemic argument is stronger than any practical argument. Thus, ∀α ∈ Ae, ∀δ ∈ Ap, (α, δ) ∈≥m and (δ, α) /∈≥m. Cerutti, Oren (Cardiff, Aberdeen) 106 / 203
  • 136. α [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”] δ1 [“the patient may have a cancer”, “when having a cancer, having a surgery avoids loss of life”] δ2 [“not having surgery avoids having side-effects”] δ3 [“having a surgery has side-effects”] δ4 [“the patient has colonic polyps”, and “having colonic polyps may lead to cancer”, “if a patient has cancer and has no surgery, the patient would lose his life”] ≥e = {(α, α)} and ≥m = {(α, δ1), (α, δ2)}. Regarding ≥p, δ1 is stronger than δ2 since the goal satisfied by δ1 (namely, not loss of life) is more important than the one satisfied by δ2 (not having side effects). Thus, ≥p = {(δ1, δ1), (δ2, δ2), (δ1, δ2)}. Cerutti, Oren (Cardiff, Aberdeen) 107 / 203
  • 137. Definition Epistemic arguments may attack each other: Re ⊆ Ae × Ae. Epistemic arguments may also attack practical arguments. Practical arguments are not allowed to attack epistemic ones to avoid wishful thinking: Rm ⊆ Ae × Ap. It is assumed that practical arguments do not conflict (each practical argument points out some advantage or some weakness of a candidate decision): Rp = ∅. Definition Let A be a set of arguments, and a, b ∈ A. (a, b) ∈ Def_x iff (a, b) ∈ R_x and (b, a) ∉ >_x. Cerutti, Oren (Cardiff, Aberdeen) 108 / 203
  • 138. Comparing candidate decisions Unipolar principles: are those that only refer to either the arguments pro or the arguments con. E.g.: counting arguments pro/con, . . . Bipolar principles: are those that take into account both types of arguments at the same time. E.g.: prefer a decision that has at least one supporting argument which is better than any supporting argument of the other decision, and that does not have a very strong argument against it. Non-polar principles: are those where the arguments pro and the arguments con a given choice are aggregated into a unique meta-argument. The negative and positive polarities thus disappear in the aggregation. Cerutti, Oren (Cardiff, Aberdeen) 109 / 203
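As a concrete illustration of the simplest (unipolar) principle, here is a hedged sketch that ranks decisions by counting accepted arguments pro and con. The function and variable names are mine, and the scoring rule is just one possible instantiation of the principle, not the one advocated by any particular paper.

def rank_decisions(decisions, pro, con, accepted):
    # pro/con map each decision to its sets of practical arguments;
    # `accepted` is the set of arguments surviving the argumentation stage.
    def score(d):
        return len(pro[d] & accepted) - len(con[d] & accepted)
    return sorted(decisions, key=score, reverse=True)

# Surgery example: all practical arguments accepted.
decisions = ["sg", "not_sg"]
pro = {"sg": {"d1"}, "not_sg": {"d2"}}
con = {"sg": {"d3"}, "not_sg": {"d4"}}
accepted = {"d1", "d2", "d3", "d4"}
print(rank_decisions(decisions, pro, con, accepted))
# Both decisions score 0 here (a tie); a bipolar principle would break the
# tie using the preference of d1 (no loss of life) over d2 (no side effects).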
  • 139. Cerutti, Oren (Cardiff, Aberdeen) 110 / 203
  • 140. Cerutti, Oren (Cardiff, Aberdeen) 111 / 203 http://www.arganddec.com/diagram.php?id=705
  • 141. Norms Cerutti, Oren (Cardiff, Aberdeen) 112 / 203
  • 142. Norms (Detached) norms specify the manner in which an agent should behave by describing the obligations, permissions and prohibitions it should act under. One view of permissions is that they identify exceptional circumstances under which an obligation or prohibition is derogated. Further exceptions could prevent a permission from coming into force. This is analogous to reinstatement. The non-monotonic nature of normative reasoning has long been recognised. We’re going to look at How to reason about what should be the case given a set of norms; and How an agent should reason given norms and goals Cerutti, Oren (Cardiff, Aberdeen) 113 / 203
  • 143. Setting the scene Suppose a soldier must listen to orders from three superiors, a Sergeant, Captain and Major. The Sergeant (who likes being warm) states that in winter, the heat should be turned on. The Captain (who worries about energy costs) says that during winter, the window must stay closed. Finally, the Major (who likes being cool) states that whenever the heating is on, the window should be open. (adapted from Horty (2007)) 3 obligations are imposed on the soldier: (w, h), (w, ¬o), (h, o). There are priorities over the obligations as the Major outranks the Captain who outranks the Sergeant. It’s winter, what should the soldier do? This section is based on Liao, Oren, van der Torre, Villata (2017) Cerutti, Oren (Cardiff, Aberdeen) 114 / 203
  • 144. What to do? (h, o) > (w, ¬o) > (w, h) {w} There are multiple approaches to reasoning in the Deontic logic literature Greedy: apply the applicable norm with the highest priority that does not introduce conflict. Here (w, ¬o) and then (w, h) are applied, so the conclusion set is {h, ¬o}. Reduction: Guess an extension, identify applicable norms and try them out by applying Greedy. E.g., guessing {h, o} means all norms are applicable. Greedy gives us the same extension, so this works. Guessing {h, ¬o} does not appear after applying Greedy, so it is not an extension. Optimisation: Select norms in order of priority while they remain consistent with the context. So we select (h, o) and (w, ¬o). Greedy is then applied, yielding {¬o}. Cerutti, Oren (Cardiff, Aberdeen) 115 / 203
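A hedged sketch of the Greedy procedure described above, assuming norms are (antecedent, consequent) pairs of literals, priorities are plain numbers, and "-p" denotes the negation of "p" (all names are illustrative).

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def greedy(context, norms, priority):
    derived = set(context)
    changed = True
    while changed:
        changed = False
        # Consider applicable, non-conflicting norms from highest priority down.
        for ant, cons in sorted(norms, key=priority.get, reverse=True):
            if ant in derived and cons not in derived and neg(cons) not in derived:
                derived.add(cons)
                changed = True
                break                       # restart from the top of the ordering
    return derived - set(context)

# Soldier example: (h, o) > (w, -o) > (w, h), context {w}.
norms = [("h", "o"), ("w", "-o"), ("w", "h")]
priority = {("h", "o"): 3, ("w", "-o"): 2, ("w", "h"): 1}
print(greedy({"w"}, norms, priority))       # {'-o', 'h'}, as on the slide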
  • 145. Argumentation What is an argument? Context: yields an argument with conclusion of the element in the context. E.g., there is a context argument with conclusion conc(w) = w. Ordinary argument: a path from the context to some conclusion obtained by following the norms in the system (e.g., α = [w, h, o] is an argument with conclusion conc(α) = o). Note that β = [w, h] is a subargument of α. By identifying what arguments are justified and taking their conclusions, we can determine what obligations hold and actions should be performed. Cerutti, Oren (Cardiff, Aberdeen) 116 / 203
  • 146. Priorities We have priorities over norms. But we will be comparing arguments. So we need to lift the former to obtain priorities over the latter. Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm] Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
  • 147. Priorities We have priorities over norms. But we will be comparing arguments. So we need to lift the former to obtain priorities over the latter. Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm] Weakest link: (abusing notation) α ⪰w β iff ∃v ∈ β \ α s.t. ∀u ∈ α \ β, v ≤ u. That is, there is some norm in β that is weaker than all norms in α Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
  • 148. Priorities We have priorities over norms. But we will be comparing arguments. So we need to lift the former to obtain priorities over the latter. Consider two arguments α = [u1, . . . , un], β = [v1, . . . , vm] Weakest link: (abusing notation) α ⪰w β iff ∃v ∈ β \ α s.t. ∀u ∈ α \ β, v ≤ u. That is, there is some norm in β that is weaker than all norms in α Last link: α ⪰l β iff un ≥ vm. That is, the last norm of α has priority over the last norm of β. Cerutti, Oren (Cardiff, Aberdeen) 117 / 203
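The two liftings can be written down directly. The sketch below is an illustrative rendering (notation adapted; the priority numbers and argument encodings are assumptions) in which an argument is a list of norms and higher numbers mean higher priority.

def weakest_link(alpha, beta, priority):
    # alpha >=_w beta iff some norm in beta \ alpha is no stronger than
    # every norm in alpha \ beta.
    a_only = [n for n in alpha if n not in beta]
    b_only = [n for n in beta if n not in alpha]
    return any(all(priority[v] <= priority[u] for u in a_only) for v in b_only)

def last_link(alpha, beta, priority):
    # alpha >=_l beta iff the last norm of alpha has priority over that of beta.
    return priority[alpha[-1]] >= priority[beta[-1]]

priority = {("h", "o"): 3, ("w", "-o"): 2, ("w", "h"): 1}
a2 = [("w", "h"), ("h", "o")]       # argument for o
a3 = [("w", "-o")]                  # argument for -o
print(weakest_link(a3, a2, priority), last_link(a2, a3, priority))   # True True
# Under weakest link a3 is at least as strong as a2; under last link a2 wins.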
  • 149. Defeat α defeats β if there is a subargument β′ of β such that concl(α) = ¬concl(β′); and Either α is a context argument; or α is an ordinary argument and α ⪰ β Observations: Defeat is dependent on whether last or weakest link is used. The system we’ve defined is "ASPIC-like"; we can show that it satisfies closure under sub-arguments, direct and contextual consistency. Cerutti, Oren (Cardiff, Aberdeen) 118 / 203
  • 150. Argumentation Frameworks (Diagrams of the induced frameworks over A0 = [w], A1 = [(w, h)], A2 = [(w, h), (h, o)], A3 = [(w, ¬o)].) Cerutti, Oren (Cardiff, Aberdeen) 119 / 203
  • 151. Results Greedy is weakest link under the stable semantics. Reduction is last link under the stable semantics. Optimization is trickier... Cerutti, Oren (Cardiff, Aberdeen) 120 / 203
  • 152. Optimization The weakest norm of an argument is the norm within an ordinary argument with the lowest priority. The weakest sub-argument of an argument α is the ordinary sub-argument whose top norm (i.e., conclusion) is the weakest norm. The weakest arguments w.r.t. an argument α are the super-arguments of the weakest sub-argument of α. (A0 = [w], A1 = [(w, h)], A2 = [(w, h), (h, o)], A3 = [(w, ¬o)].) Using weakest link, A3 defeats A2. The weakest arguments are warg(A1) = {A1}, warg(A2) = {A1, A2}, warg(A3) = {A3} Cerutti, Oren (Cardiff, Aberdeen) 121 / 203
  • 153. Optimization Now assume α defeats β. If neither share a weakest argument, then we introduce additional arguments to defeat the proper weakest sub-arguments of β. (A0 = [w], A1 = [(w, h)], A2 = [(w, ¬h)], A3 = [(w, h), (h, o)], A4 = [(w, ¬h), (¬h, o)], A5 = [(w, ¬o)], plus an auxiliary argument aux.) Priorities: (w, h) = 1, (w, ¬h) = 0, (h, o) = 3, (¬h, o) = 4, (w, ¬o) = 2. A5 ∉ warg(A3) = {A1, A3} and A5 ∉ warg(A4) = {A2, A4} Cerutti, Oren (Cardiff, Aberdeen) 122 / 203
  • 154. Optimization If they share a weakest argument, then any argument containing the weakest argument should be defeated. This is achieved by introducing an auxiliary argument and attacks from that argument to the weakest arguments. (A0 = [a], A1 = [(a, b)], A2 = [(a, b), (b, c)], A3 = [(a, b), (b, ¬c)], plus an auxiliary argument aux.) Priorities: (a, b) = 1, (b, c) = 2, (b, ¬c) = 3. A3 ∈ warg(A2) = {A1, A2, A3} Cerutti, Oren (Cardiff, Aberdeen) 123 / 203
  • 155. Where are we? We can reason about what norms are in force. In other words, we are using argumentation to reason about norms. We shift focus to how argumentation can be used to reason about acting in the presence of norms. This work is based on Oren (2013). Cerutti, Oren (Cardiff, Aberdeen) 124 / 203
  • 156. Overview Overall goal: We examine an agent’s reasoning procedure in the presence of norms and goals. System model. Goals, norms and preferences. Reasoning via argument schemes. Next steps. Cerutti, Oren (Cardiff, Aberdeen) 125 / 203
  • 157. AATS A set of states. An initial state. A finite set of agents. A set of non-overlapping actions for agents, with preconditions on actions. A transition function. A set of propositions. An interpretation function. Cerutti, Oren (Cardiff, Aberdeen) 126 / 203
  • 158. AATSs to Traces We can construct a tree of possible paths of the system by starting at the root of the tree and walking along the edges. These paths exist due to different joint actions selected by the agents. Agents select different actions as some of the paths end up achieving some state of affairs they desire, whereas other paths do not. Cerutti, Oren (Cardiff, Aberdeen) 127 / 203
  • 160. AATSs to Traces We can construct a tree of possible paths of the system by starting at the root of the tree and walking along the edges. These paths exist due to different joint actions selected by the agents. Agents select different actions as some of the paths end up achieving some state of affairs they desire, whereas other paths do not. Desirable states of affairs arise due to Goals. Norms. Cerutti, Oren (Cardiff, Aberdeen) 129 / 203
  • 161. Goals We view a goal as a proposition that the agent would like to see hold in some state. The agent prefers those paths in which the goal is achieved to those paths where it is not achieved. For each goal, we can identify a family of paths where it is achieved, and a family of paths where it is not achieved. This can be compactly represented through preferences over temporal logic formulae. Cerutti, Oren (Cardiff, Aberdeen) 130 / 203
  • 162. The Logic We describe paths using CTL*. State formulae are evaluated with respect to an AATS S and a state q ∈ Q: S, q ⊨ ⊤; S, q ⊭ ⊥; S, q ⊨ p iff p ∈ π(q); S, q ⊨ ¬ψ iff S, q ⊭ ψ; S, q ⊨ ψ ∨ φ iff S, q ⊨ ψ or S, q ⊨ φ; S, q ⊨ Aψ iff S, λ ⊨ ψ for all paths λ where λ[0] = q; S, q ⊨ Eψ iff S, λ ⊨ ψ for some path λ where λ[0] = q. Cerutti, Oren (Cardiff, Aberdeen) 131 / 203
  • 163. The Logic We describe paths using CTL*. Path formulae are evaluated with respect to an AATS S and a path λ: S, λ ⊨ ψ iff S, λ[0] ⊨ ψ where ψ is a state formula; S, λ ⊨ ¬ψ iff S, λ ⊭ ψ; S, λ ⊨ ψ ∨ φ iff S, λ ⊨ ψ or S, λ ⊨ φ; S, λ ⊨ ○ψ iff S, λ[1, ∞] ⊨ ψ; S, λ ⊨ ♦ψ iff ∃u ∈ N such that S, λ[u, ∞] ⊨ ψ; S, λ ⊨ □ψ iff ∀u ∈ N it is the case that S, λ[u, ∞] ⊨ ψ; S, λ ⊨ φ U ψ iff ∃u ∈ N such that S, λ[u, ∞] ⊨ ψ and ∀v s.t. 0 ≤ v < u, S, λ[v, ∞] ⊨ φ. Cerutti, Oren (Cardiff, Aberdeen) 131 / 203
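As a rough illustration only: the path operators can be evaluated over a finite prefix of a path, which is an approximation of the infinite-path semantics above. Encoding states as sets of propositions, and the function names, are assumptions made for this sketch.

def holds_eventually(trace, p):            # diamond p
    return any(p in state for state in trace)

def holds_always(trace, p):                # box p
    return all(p in state for state in trace)

def holds_until(trace, phi, psi):          # phi U psi
    for u, state in enumerate(trace):
        if psi in state:
            return all(phi in earlier for earlier in trace[:u])
    return False

trace = [{"w"}, {"w", "h"}, {"w", "h", "o"}]
print(holds_eventually(trace, "o"))        # True
print(holds_always(trace, "w"))            # True
print(holds_until(trace, "w", "o"))        # True: w holds until o becomes true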
  • 164. Back to goals A goal is then encoded through a preference relation between sets of paths expressed as logical formulae: ♦g ≻ ¬♦g (paths where g is eventually achieved are preferred to those where it is not). Cerutti, Oren (Cardiff, Aberdeen) 132 / 203
  • 165. Norms We treat prohibitions as obligations to ensure some state of affairs does not come about. We consider two types of obligations: Achievement obligations — “you should close the door”. Maintenance obligations — “you should keep the door closed”. If an obligation is not complied with, then it is violated. Every norm has a creditor and target (c.f. commitments). Cerutti, Oren (Cardiff, Aberdeen) 133 / 203
  • 166. Deadlines, Violation and Permission Without a deadline an achievement obligation cannot be violated; and a maintenance obligation cannot be discharged. Permissions act as exceptions to obligations. If an obligation would be violated, and a permission exists, then the obligation is considered to not be violated. E.g. if you should keep the door closed, but are permitted to open it when someone wants to enter, then doing so does not violate the obligation. Cerutti, Oren (Cardiff, Aberdeen) 134 / 203
  • 167. Permission and Violation We introduce special propositions which must exist in those states where a permission derogates an obligation, and where a violation of an obligation occurs. P^g_{a,x} — agent a has obtained permission from g to see to it that state of affairs x is not the case. V^g_{a,x,d} — a violation by a of an obligation towards g to see to it that x, with respect to a deadline d. A permission existing until deadline d is then defined through the formula P^g_a(x|d) ≡ A(P^g_{a,x} U d). We require the following axiom to “clear” the permission: A□(¬P^g_a(x|d) → ¬P^g_{a,x}). Cerutti, Oren (Cardiff, Aberdeen) 135 / 203
  • 168. Achievement Obligations An achievement obligation, abbreviated O^g_a(x|d), requiring the target a to ensure that some state of affairs x holds before a deadline d towards a creditor g, is represented as follows: A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨ (x ∧ ¬V^g_{a,x,d}))) Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
  • 169. Achievement Obligations An achievement obligation, abbreviated O^g_a(x|d), requiring the target a to ensure that some state of affairs x holds before a deadline d towards a creditor g, is represented as follows: A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨ (x ∧ ¬V^g_{a,x,d}))) Before the deadline or x holds, the obligation is not violated. Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
  • 170. Achievement Obligations An achievement obligation, abbreviated O^g_a(x|d), requiring the target a to ensure that some state of affairs x holds before a deadline d towards a creditor g, is represented as follows: A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨ (x ∧ ¬V^g_{a,x,d}))) If the deadline occurs and x is not the case, then if there is no permission allowing this to occur, a violation is recorded. Alternatively, if such a permission exists, then no violation is recorded (this is encoded by the second line of the proposition). Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
  • 171. Achievement Obligations An achievement obligation, abbreviated O^g_a(x|d), requiring the target a to ensure that some state of affairs x holds before a deadline d towards a creditor g, is represented as follows: A((¬V^g_{a,x,d} ∧ ¬d ∧ ¬x) U (((¬x ∧ d ∧ ¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (¬x ∧ d ∧ P^g_{a,x} ∧ ¬V^g_{a,x,d})) ∨ (x ∧ ¬V^g_{a,x,d}))) If x is achieved (before the deadline), then no violation is recorded. Violations should not occur arbitrarily: A□(¬O^g_a(x|d) → ¬V^g_{a,x,d}) Cerutti, Oren (Cardiff, Aberdeen) 136 / 203
  • 172. Maintenance Obligations A(((¬x ∧ ¬d ∧ ((¬P^g_{a,x} ∧ V^g_{a,x,d}) ∨ (P^g_{a,x} ∧ ¬V^g_{a,x,d}))) ∨ (x ∧ ¬d)) U d) In other words, before the deadline, either x is maintained, or x is not maintained, in which case the obligation is violated if an associated permission does not exist. We abbreviate a maintenance obligation as O^g_a(x : d). As for achievement obligations, A□(¬O^g_a(x : d) → ¬V^g_{a,x,d}) Cerutti, Oren (Cardiff, Aberdeen) 137 / 203
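The violation conditions can be checked along a concrete trace. The sketch below is my own simplification of the formulae above (it ignores the permission propositions and treats a trace as a finite list of sets of propositions), so it illustrates the intent rather than the exact CTL* encoding.

def violates_achievement(trace, x, d):
    # O(x | d): x must hold in some state at or before the first state where d holds.
    for state in trace:
        if x in state:
            return False
        if d in state:
            return True                 # deadline reached without achieving x
    return False                        # deadline never reached: no violation yet

def violates_maintenance(trace, x, d):
    # O(x : d): x must hold in every state strictly before the deadline d.
    for state in trace:
        if d in state:
            return False
        if x not in state:
            return True
    return False

trace = [{"winter"}, {"winter", "heat_on"}, {"winter", "heat_on", "deadline"}]
print(violates_achievement(trace, "heat_on", "deadline"))         # False
print(violates_maintenance(trace, "window_closed", "deadline"))   # True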
  • 173. Preferences and Norms A norm’s creditor prefers that a norm is complied with to it being violated (the norm’s target doesn’t care): □¬V^g_{a,x,d} ≻_g ♦V^g_{a,x,d}. So an agent has a set of preferences obtained from its goals, and a set of preferences obtained from its norms. These preferences are typically in conflict. We introduce meta-preferences in order to resolve these conflicts. We then have a most preferred path through the system, allowing the agent to perform practical reasoning. We clearly have a non-monotonic system with reinstatement, and we can therefore identify the most preferred path via argumentation, but why should we? Cerutti, Oren (Cardiff, Aberdeen) 138 / 203
  • 174. Explanation Argumentation can be used to provide easily understood explanations of complex system behaviour. In this work, we describe the system via arguments instantiated from a set of argumentation schemes. The resultant argument framework describes the system, and the argument schemes and attacks between them provide our explanation. Other techniques, e.g. games for proof can then be used to explain the argument framework to non-experts. Note: for simplicity we ignore the multi-agent aspect of the system in the argument schemes (future work). Cerutti, Oren (Cardiff, Aberdeen) 139 / 203
  • 175. Argumentation Schemes We represent the system via an exhaustive set of argumentation schemes. Any path through the system represents a possible sequence of actions that could be executed. AS1: (Given situation S) The sequence of joint actions A1, . . . , An should be executed. Critical questions: CQ1-1 Is there some other sequence of actions that should be executed instead? CQ1-2 Is there a more preferred sequence of actions that should be executed? Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
  • 176. Argumentation Schemes We represent the system via an exhaustive set of argumentation schemes. One reason to prefer a path over another is that it achieves a goal while another does not. AS2: The sequence of joint actions A1, . . . , An is preferred over A′1, . . . , A′n as the former achieves a goal which the latter does not. Critical Questions: CQ2-1 Is there some other sequence of actions which achieves a more preferred goal than the one achieved by this action sequence? CQ2-2 Does the sequence of actions lead to the violation of a norm? Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
  • 177. Argumentation Schemes We represent the system via an exhaustive set of argumentation schemes. Compliance with an obligation is a reason to prefer one path over another. AS3: The sequence of actions A1, . . . , An should be less preferred than sequence A′1, . . . , A′n as, in the absence of permissions, the former violates a norm while the latter does not. CQ3-1 Is the goal resulting from the sequence of actions more preferred than the violation? CQ3-2 Does the violation resulting from this norm result in some other, more important violation not occurring? CQ3-3 Is there a permission that derogates the violation? Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
  • 178. Argumentation Schemes We represent the system via an exhaustive set of argumentation schemes. The derogation of an obligation’s violation prevents it from being preferred to a situation where it is not violated. AS4: There is a permission that derogates the violation of an obligation. The next set of argument schemes are used to associate preferences between different goals and norms, and are used to instantiate CQs for AS2 and AS3. AS5: Agent α prefers goal g over goal g′ AS6: Agent α prefers achieving goal g to not violating n AS7: Agent α prefers not achieving goal g to violating n AS8: Agent α prefers violating n to violating n′ AS9: Agent α prefers situation A to B Cerutti, Oren (Cardiff, Aberdeen) 140 / 203
  • 179. Formalisation We can formalise these notions by referring to the AATS. AS3: There exist two paths λ, λ′ obtained from the sequences of joint actions j1, . . . , jn and j′1, . . . , j′m respectively, and it is the case that S_{P^g_{a,x}}, λ ⊨ V^g_{a,x,d} and S_{P^g_{a,x}}, λ′ ⊭ V^g_{a,x,d}. CQ3-1: There is an instance of AS6 for S, λ ⊨ γ and S, λ ⊨ V^g_{a,x,d}, where λ is the first path of AS3. CQ3-2: There is an instantiation of AS8 for which this instantiation of AS3 means that S_{P^g_{a,x}}, λ ⊨ V^g_{a,x,d} and S_{P^g_{a,x}}, λ′ ⊨ V^h_{b,y,e}. CQ3-3: There is an instantiation of AS4 referring to a permission P^g_{a,x} which refers to the same path λ as this instantiation of AS3. Cerutti, Oren (Cardiff, Aberdeen) 141 / 203
  • 180. The Argumentation System Many of our argument schemes are used to express (meta-)preferences, and are naturally encoded as attacks on attacks. We therefore instantiate the system as an extended argument framework (EAF), separating the preference level from the object level of the system. We use an EAF as arguments should still exist even if they are derogated/not preferred. CQ1-1 is a symmetric attack between arguments. CQ1-2 attacks an attacking edge. CQ2-1, 2-2, 3-1 and 3-2 are instantiated via AS5-AS7 as an attack on an attacking edge (with the attacked edge originating from AS2 or AS3). CQ3-3 is instantiated as an attack from AS8 to the appropriate AS3 attack (allowing us to reason that the obligation still exists, but is derogated when extensions are computed). Cerutti, Oren (Cardiff, Aberdeen) 142 / 203
  • 181. The Argumentation System Each preferred extension of the system will contain a single argument from AS1 for some specific action sequence, representing one most preferred sequence of actions. In a multi-agent setting, this joint action sequence strongly dominates all others. If multiple preferred extensions exist, then additional preferences are required in order to identify a most preferred course of action. In a multi-agent setting, this means additional coordination is required. An empty preferred extension indicates that a preference conflict exists that must be resolved before a course of action can be agreed upon. Cerutti, Oren (Cardiff, Aberdeen) 143 / 203
  • 183. Where are we? We can use argument to reason about what norms are in force. Captures existing detachment procedures. But requires new semantics (!) We can reason about how to act using argumentation by taking the formal system in which action takes place and creating argument schemes which encode different choices within the system (c.f., Atkinson (2007)). We’ve repeatedly claimed that argumentation gives us some advantage in such scenarios. Cerutti, Oren (Cardiff, Aberdeen) 145 / 203
  • 184. Applications Cerutti, Oren (Cardiff, Aberdeen) 146 / 203
  • 185. Explanation, Dealing with Humans Cerutti, Oren (Cardiff, Aberdeen) 147 / 203
  • 186. Argument and Explanation Argumentation is no silver bullet - other techniques can perform the same type of reasoning. But — it is claimed — argumentation mirrors human reasoning, making its operation easily understandable, potentially also making systems which utilise it more explainable. We will look at Whether argumentation mirrors human reasoning. How argumentation can be used to explain complex concepts. Cerutti, Oren (Cardiff, Aberdeen) 148 / 203
  • 187. Do humans reason argumentatively? Rahwan et al. (2010) demonstrated that humans seem to think in a manner similar to that predicted by the skeptical preferred semantics. Though reinstatement weakens conclusions. Polberg and Hunter (2018) suggest that bipolar and probabilistic argumentative reasoning also captures aspects of human reasoning. We consider: how closely structured argumentation captures human reasoning; and the level of agreement between multiple-extension semantics and probabilistic reasoning. Cerutti, Oren (Cardiff, Aberdeen) 149 / 203
  • 188. Structured Argumentation and Human Reasoning (Cerutti, Tintarev, Oren (2014)) Prakken & Sartor’s (1997) argumentation framework was used, as it allows explicit arguments about preferences, e.g. a rule whose conclusion is a preference between rules: r3 : ¬a ⇒ r1 < r2. Scenarios were constructed which have a limited number of interacting arguments. Cerutti, Oren (Cardiff, Aberdeen) 150 / 203
  • 189. Scenarios A politician and an economist discuss the potential financial outcome of the independence of a region X. The politician puts forward an argument in favour of the conclusion "If Region X becomes independent, X’s citizens will be poorer than they are now". Another argument holding a contradicting conclusion (i.e., that Region X will not be poorer) is advanced by the economist. The economist’s opinion is likely to be preferred to that of the politician, and is supported by a scientific document. s1 : → says_pol; s2 : → says_eco; s3 : → says_exp; r1 : says_pol ∧ ∼ex_pol → poorer; r2 : says_eco ∧ says_doc ∧ ∼ex_eco ∧ ∼ex_doc → ¬poorer; r3 : ∼ex_exp → r2 > r1. a1 : [s1, r1], a2 : [s2, s3, r2], a3 : [r3]. a2 defeats a1, so a2 (and ¬poorer) is justified. Cerutti, Oren (Cardiff, Aberdeen) 151 / 203
  • 190. Scenarios a1 a2 a3 Conclusion: ¬poorer Cerutti, Oren (Cardiff, Aberdeen) 152 / 203
  • 191. Scenarios Four domains were considered (weather forecast, political debate, used car purchase, pursuing a romantic relationship) Base case always consisted of two arguments with contradicting conclusions, and a preference for a2 over a1. These base cases were then extended with additional information. Cerutti, Oren (Cardiff, Aberdeen) 153 / 203
  • 192. Extended Scenario Other research disputes the economist’s claims. s4 : → s_newr; r4 : s_newr ∧ ∼ex_newr → poorer. (Arguments a1, a2, a3, a4.) Conclusion: poorer or ¬poorer Cerutti, Oren (Cardiff, Aberdeen) 154 / 203
  • 193. Extended Scenarios (Three extended argument graphs over a1–a4: preference attack (×2), a2 rebuttal, preference rebuttal.) Cerutti, Oren (Cardiff, Aberdeen) 155 / 203
  • 194. Experiments Participants were asked what they thought: Position advocated by first argument is correct (e.g., people will be poorer) Position advocated by second argument is correct (e.g., people will not be poorer) Don’t know which position is correct. First for base case, and then after extended case was introduced. Statements were also rated in terms of relevance for determining the conclusion. Cerutti, Oren (Cardiff, Aberdeen) 156 / 203
  • 195. What was expected? In the base case, agreement with the second argument should occur. In the extended case, people should be unable to conclude anything. People should find the argument regarding preference relevant to drawing conclusions. Cerutti, Oren (Cardiff, Aberdeen) 157 / 203
  • 196. Results (Chart: % distribution of acceptability of the actors’ positions (Pos. A, Pos. B, Pos. U), base vs extended cases.) H1 and H2 are validated (though many people did draw unexpected conclusions). For H3, the preference made a significant difference (evaluated by asking how much trust was placed in the speaker). But background knowledge seems to have a significant effect on the way people reason. Different scenarios (with different impacts) seem to affect reasoning. The ability to make explicit arguments about preference is important. Cerutti, Oren (Cardiff, Aberdeen) 158 / 203
  • 197. Understanding Multiple Extensions Cerutti, Oren (Cardiff, Aberdeen) 159 / 203
  • 198. What do multiple extensions mean? Consider the credulous preferred semantics. We may interpret each extension as a valid possible state of reality. So, given a set of arguments ξ, and assuming that each possible world is equiprobable, P(ξ) = 1/|ξ̂_P| if ξ ∈ ξ̂_P, and P(ξ) = 0 otherwise (1), where ξ̂_P is the set of preferred extensions. For P(ξ) and an argument A ∈ Arg: P̂(A) = Σ_{ξ⊆Arg, A∈ξ} P(ξ) (2) is the degree of belief that an argument A is in an extension. Cerutti, Oren (Cardiff, Aberdeen) 160 / 203
  • 199. Justification ratio The probability of a conclusion being justified w.r.t. the likelihoods of the arguments which justify it is defined as follows. Justification ratio: given the set of arguments A = {A1, . . . , An} whose conclusion is ϕ, the justification ratio of ϕ is µ(ϕ) = Σ_{Ai∈A} P̂(Ai). With equiprobable extensions: µ(ϕ) = P̂(A) = Σ_{ξ⊆Arg, A∈ξ} 1/|ξ̂_P|, where ϕ ∈ Conc(A). Cerutti, Oren (Cardiff, Aberdeen) 161 / 203
  • 200. Example Arguments: A1 : r4, A2 : r2, A3 : r3, A4 : r1, A5 : A1 ⇒ r5. Two preferred extensions ξ1, ξ2 with P(ξ1) = P(ξ2) = 0.5. Justification ratios: µ(r1) = 0, µ(r2) = µ(r3) = 0.5, µ(r4) = µ(r5) = 1. (Diagrams of the framework with the two extensions highlighted.) Cerutti, Oren (Cardiff, Aberdeen) 162 / 203
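The example can be reproduced with a few lines. The sketch assumes (consistently with the ratios on the slide, though the slide itself does not list them) that the two preferred extensions are {A1, A2, A5} and {A1, A3, A5}; all identifiers are illustrative.

def degree_of_belief(extensions, arg):
    # hat-P(A): fraction of (equiprobable) extensions containing A.
    return sum(1 for ext in extensions if arg in ext) / len(extensions)

def justification_ratio(extensions, conclusions, phi):
    # mu(phi): summed belief of the arguments concluding phi.
    return sum(degree_of_belief(extensions, a)
               for a, c in conclusions.items() if c == phi)

extensions = [{"A1", "A2", "A5"}, {"A1", "A3", "A5"}]   # assumed, see lead-in
conclusions = {"A1": "r4", "A2": "r2", "A3": "r3", "A4": "r1", "A5": "r5"}
for r in ["r1", "r2", "r3", "r4", "r5"]:
    print(r, justification_ratio(extensions, conclusions, r))
# r1 0.0, r2 0.5, r3 0.5, r4 1.0, r5 1.0, matching the slide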
  • 201. Back to probability If people take a frequentist approach to probability, then there should be a strong relationship between: Classical probability interpretation p(ri) = (# of worlds where ri holds) / (total # of possible worlds); Justification ratio (probabilistic semantics) µ(ri) = (# of extensions in which ri is acceptable) / (total # of extensions). Is there? Cerutti, Oren (Cardiff, Aberdeen) 163 / 203
  • 202. The experiment Gave subjects a set of defeasible rules which yield n extensions, with the conclusion of interest in m ≤ n extensions. (Joe is a Democrat, Joe has taken the job, Joe has got a job at the Labor Union) (Joe is a Democrat, Joe does not have a job at the Labor Union, Joe has taken the job) (Joe is a Republican, Joe does not have a job at the Labor Union, Joe does not believe in Unions) Given the 3 stated possible worlds, how likely is it that you would believe that “Joe is a Republican"? (uµ(ri)) Gave subjects a scenario where conclusions are probabilistically generated such that the message of interest has likelihood equivalent to m/n. Assume that we have a stream of information composed by one or many copies of the following messages (. . . ). We know that 1 message out of 3 states that “Joe is a Republican". If 3 messages are released, how likely is it that a message would state that “Joe is a Republican"? (up(ri)) Cerutti, Oren (Cardiff, Aberdeen) 164 / 203
  • 203. Results Domain 1 Believability ratings uµ(ri) and up(ri) (bar chart of ratings per scenario omitted) Believability: As the justification ratio/probability increases, the user believability rating of a conclusion is positively correlated: in At with the outcome of the probabilistic semantics; in Pt with the probability of the info holding. The two correlations in At and Pt are similar. Cerutti, Oren (Cardiff, Aberdeen) 165 / 203
  • 204. Results Domain 2 Believability rating uµ(ri) and up(ri) where ri is about the likelihood of a fact ω (bar chart of ratings per scenario omitted) Believability: As (justification ratio/probability) × likelihood of the fact increases, the user believability rating of a conclusion is positively correlated: in At with the outcome of the probabilistic semantics ∗ ω; in Pt with the probability of the info holding ∗ ω. The two correlations in At and Pt are similar. Cerutti, Oren (Cardiff, Aberdeen) 166 / 203
  • 205. Conclusions We aimed to study the alignment between argumentation semantics and human intuition Specifically whether structured qualitative argumentation captures some notion of uncertainty Our results showed that: People tend to agree with the outcome of the probabilistic semantics in understanding the believability ratings of the conclusions With qualitative propositions, the outcome of the probability semantics may be understood by people in a way similar to the understanding of probability. With propositions about likelihood of events, people employ a heuristic associating the product of probabilities to the believability of conclusions. Cerutti, Oren (Cardiff, Aberdeen) 167 / 203
  • 206. Where are we? People seem to reason in an argumentative manner. How can we use this? Cerutti, Oren (Cardiff, Aberdeen) 168 / 203
  • 207. The Problem Complex computational systems are built on formal underpinnings – game theory, logics, planners, inference engines, probability theory, machine learning, . . . It is difficult for non-experts (and even experts) to establish why certain behaviours occurred, and what alternatives existed. Debugging such systems is clearly difficult. Human-system interactions exacerbate the problem: lack of information regarding coordination; little/no feedback about system behaviour; difficulty communicating with and/or modifying system behaviour (GIGO); inadequate explanation of system functioning leads to loss of trust. Cerutti, Oren (Cardiff, Aberdeen) 169 / 203
  • 208. Objectives We seek to make computational systems scrutable, allowing humans to better exploit them. Goals: Why were decisions made? What alternatives were there? Why were they not pursued? Allow for additional information to be fed to the system. Effects: Improve human/agent team functioning. Improve system resilience (by adapting to new information). Improve trust in the system. Cerutti, Oren (Cardiff, Aberdeen) 170 / 203
  • 209. Architecture As an exemplar domain, we focused on workflows/plans. We also considered a more general (defeasible) rule-based system. [Architecture diagram: Physical System (Sensors, Actuators), Knowledge Base, Planner, Plan Visualiser, NLG, Argument Engine, Dialogue, User Interface] Cerutti, Oren (Cardiff, Aberdeen) 171 / 203
  • 210. Planner and Plans We assume the existence of a planner. Workflows are hardcoded (via YAWL) in our system. Workflows contain choice points, which are selected based on external domain information. Cerutti, Oren (Cardiff, Aberdeen) 172 / 203
  • 211. Knowledge Base The knowledge base is a representation of the domain. Encoded in the ASPIC- language (developed as part of the project).
kick --> do_shut_in
do_shut_in ==> can_soft_shut_in
do_shut_in ==> can_hard_shut_in
R34: can_soft_shut_in =(-need_speed)=> SoftShutIn
R37: can_hard_shut_in =(-SoftShutIn)=> HardShutIn
==> shallow_depth --> can_plug # we can always plug the well
# prefer soft shut-in
R34 > R37
Cerutti, Oren (Cardiff, Aberdeen) 173 / 203
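As a minimal sketch, assuming a straightforward Python encoding of the same knowledge base (the class and field names here are illustrative, not the project's ASPIC- implementation), the strict (-->) and defeasible (==>) rules, their exceptions and the preference R34 > R37 could be represented as:

    from dataclasses import dataclass

    @dataclass
    class Rule:
        label: str                 # e.g. "R34"; empty for unnamed rules
        antecedents: tuple         # literals that must already hold
        consequent: str            # derived literal
        defeasible: bool = True    # '==>' rules; '-->' rules are strict
        exceptions: tuple = ()     # '=(-x)=>' rules do not apply when x holds

    # Illustrative encoding of the well shut-in example above (reading the last ASPIC- line
    # as a defeasible premise shallow_depth plus a strict rule to can_plug).
    rules = [
        Rule("",    ("kick",),             "do_shut_in",      defeasible=False),
        Rule("",    ("do_shut_in",),       "can_soft_shut_in"),
        Rule("",    ("do_shut_in",),       "can_hard_shut_in"),
        Rule("R34", ("can_soft_shut_in",), "SoftShutIn", exceptions=("need_speed",)),
        Rule("R37", ("can_hard_shut_in",), "HardShutIn", exceptions=("SoftShutIn",)),
        Rule("",    (),                    "shallow_depth"),
        Rule("",    ("shallow_depth",),    "can_plug",        defeasible=False),
    ]

    preferences = {("R34", "R37")}   # prefer the soft shut-in rule over the hard one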
  • 212. Argumentation Engine Given an ASPIC- knowledge base we can generate arguments — chains of inference leading to some conclusion. Arguments interact by attacking each other: Through opposing conclusions (rebut) By having a conclusion oppose a premise of another argument (undermine) By stating that a defeasible rule is not applicable in a situation (undercut) The argumentation engine allows a set of arguments to be evaluated and determines which are justified (via different extensions). Our ASPIC- argumentation engine is the first to allow for an intuitive form of rebut (unrestricted rebut) in the presence of preferences under the grounded extension. Cerutti, Oren (Cardiff, Aberdeen) 174 / 203
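A minimal sketch of the evaluation step once arguments and attacks are in place: the grounded extension can be computed as the least fixpoint of the characteristic function. This is the textbook construction, shown here for illustration rather than the project's ASPIC- engine.

    def grounded_extension(arguments, attacks):
        """Grounded extension as the least fixpoint of F(S) = {a : S defends a}."""
        attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

        def defended(a, s):
            # every attacker of a must itself be attacked by some member of s
            return all(any((d, b) in attacks for d in s) for b in attackers[a])

        s = set()
        while True:
            nxt = {a for a in arguments if defended(a, s)}
            if nxt == s:
                return s
            s = nxt

    # Tiny illustration: a3 attacks a2, a2 attacks a1, so a3 reinstates a1.
    print(grounded_extension({"a1", "a2", "a3"}, {("a2", "a1"), ("a3", "a2")}))
    # -> {'a1', 'a3'}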
  • 213. Dialogue While a visualised set of arguments allows one to trace the reasoning, large argument systems remain difficult to understand. We have developed several proof dialogues which incrementally explore the argument graph in a dialectical manner. NLG is used to transform logical statements from within the KB into natural language. DEMO Cerutti, Oren (Cardiff, Aberdeen) 175 / 203
  • 214. Summary The SAsSy tool combines dialogue games and argumentation to explain complex concepts through multiple modalities. Significant industrial interest in taking tool further. This will require several additional technologies to be integrated into the system. Cerutti, Oren (Cardiff, Aberdeen) 176 / 203
  • 215. CISpaces Cerutti, Oren (Cardiff, Aberdeen) 177 / 203
  • 216. Supporting Reasoning with Different Types of Evidence in Intelligence Analysis Alice Toniolo, Timothy J. Norman (Dept. of Computing Science, University of Aberdeen, UK); Anthony Etuk, Federico Cerutti, Nir Oren (Dept. of Computing Science, University of Aberdeen, UK); Robin Wentao Ouyang, Mani Srivastava (University of California, Los Angeles, CA, USA); Timothy Dropps, John A. Allen (Honeywell, USA); Paul Sullivan (INTELPOINT Incorporated, Pennsylvania, USA). Appears in: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), Bordini, Elkind, Weiss, Yolum (eds.), May 4-8, 2015, Istanbul, Turkey. Cerutti, Oren (Cardiff, Aberdeen) 178 / 203
  • 217. Research question: Evaluate the Jupiter intervention in an ongoing conflict on Mars. Research hypothesis: Is the Jupiter intervention on Mars humanitarian or strategic? Data gathering: beyond the scope of this work. Justification of possible hypotheses based on data and logic. Cerutti, Oren (Cardiff, Aberdeen) 179 / 203
  • 218. [Architecture: Sensemaking Agent, Data Request/Crowdsourcing Agent, Provenance Agent; GUI Interface with ToolBox, WorkBox, InfoBox, ReqBox, ChatBox] Cerutti, Oren (Cardiff, Aberdeen) 180 / 203
  • 219. Sensemaking Agent and Walton’s Argumentation Schemes Argument from Cause to Effect Major Premise: Generally, if A occurs, then B will (might) occur. Minor Premise: In this case, A occurs (might occur). Conclusion: Therefore, in this case, B will (might) occur. Critical questions CQ1: How strong is the causal generalisation? CQ2: Is the evidence cited (if there is any) strong enough to warrant the causal generalisation? CQ3: Are there other causal factors that could interfere with the production of the effect in the given case? Cerutti, Oren (Cardiff, Aberdeen) 181 / 203
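A minimal sketch of how such a scheme could be carried around as data in a sensemaking tool, with its critical questions kept as open attack points; the names and structure are illustrative, not the CISpaces internals.

    from dataclasses import dataclass

    @dataclass
    class SchemeInstance:
        premises: list
        conclusion: str
        open_cqs: list   # unanswered critical questions are potential attack points

    def cause_to_effect(a, b):
        """Instantiate Walton's argument from cause to effect for cause a and effect b."""
        return SchemeInstance(
            premises=[f"Generally, if {a} occurs, then {b} will (might) occur.",
                      f"In this case, {a} occurs (might occur)."],
            conclusion=f"Therefore, in this case, {b} will (might) occur.",
            open_cqs=["CQ1: strength of the causal generalisation",
                      "CQ2: evidence that the cause occurred",
                      "CQ3: other interfering causal factors"],
        )

    arg = cause_to_effect("use of the old military doctrine and techniques",
                          "civilian casualties")
    print(arg.conclusion)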
  • 220. Jupiter troops deliver aid to Martians Jupiter intervention on Mars is humanitarian PRO Agreement to exchange crude oil for refined petroleum Jupiter intervention on Mars aims at protecting strategic assets PRO CON CON Cerutti, Oren (Cardiff, Aberdeen) 182 / 203
  • 221. Jupiter troops deliver aid to Martians Jupiter intervention on Mars is humanitarian PRO Agreement to exchange crude oil for refined petroleum Jupiter intervention on Mars aims at protecting strategic assets PRO CON CON Civilian casualties caused by Jupiter forces CON LCE Use of old Jupiter military doctrine causes civilian casualties Large use of old Jupiter military techniques on Mars Cerutti, Oren (Cardiff, Aberdeen) 183 / 203
  • 222. Jupiter troops deliver aid to Martians Jupiter intervention on Mars is humanitarian PRO Agreement to exchange crude oil for refined petroleum Jupiter intervention on Mars aims at protecting strategic assets PRO CON CON Civilian casualties caused by Jupiter forces CON LCE Use of old Jupiter military doctrine causes civilian casualties Large use of old Jupiter military techniques on Mars CQ2 There is no evidence to show that the cause occurred Cerutti, Oren (Cardiff, Aberdeen) 184 / 203
  • 223. Jupiter troops deliver aid to Martians Jupiter intervention on Mars is humanitarian PRO Agreement to exchange crude oil for refined petroleum Jupiter intervention on Mars aims at protecting strategic assets PRO CON CON Civilian casualties caused by Jupiter forces CON LCE Use of old Jupiter military doctrine causes civilian casualties Large use of old Jupiter military techniques on Mars CQ2 There is no evidence to show that the cause occurred CON Use of massive aerial and artillery strikes Cerutti, Oren (Cardiff, Aberdeen) 185 / 203
  • 224. Knowledge Base
Kp = { aid; oil; doctrine; technique; noevidence; artillery }
Rd = { aid =⇒ humanitarian; oil =⇒ strategic; doctrine ∧ technique =⇒ casualties }
Contrariness: humanitarian and strategic are contradictories; casualties is a contrary of humanitarian; noevidence is a contrary of technique; artillery is a contrary of noevidence.
Cerutti, Oren (Cardiff, Aberdeen) 186 / 203
  • 225. From Knowledge Base to Argument Graph
Kp = { aid; oil; doctrine; technique; noevidence; artillery }
Rd = { aid =⇒ humanitarian; oil =⇒ strategic; doctrine ∧ technique =⇒ casualties }
Contrariness: humanitarian and strategic are contradictories; casualties is a contrary of humanitarian; noevidence is a contrary of technique; artillery is a contrary of noevidence.
Arguments: a1: aid; a2: a1 ⇒ humanitarian; a3: oil; a4: a3 ⇒ strategic; a5: doctrine; a6: technique; a7: a5 ∧ a6 ⇒ casualties; a8: noevidence; a9: artillery.
Prakken, H. (2010). An abstract framework for argumentation with structured arguments. Argument & Computation, 1(2):93–124.
Cerutti, Oren (Cardiff, Aberdeen) 187 / 203
  • 226. Arguments: a1: aid; a2: a1 ⇒ humanitarian; a3: oil; a4: a3 ⇒ strategic; a5: doctrine; a6: technique; a7: a5 ∧ a6 ⇒ casualties; a8: noevidence; a9: artillery. Cerutti, Oren (Cardiff, Aberdeen) 188 / 203
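Putting the pieces of this example together: a minimal Python sketch that takes the nine arguments, reads the attacks off the stated contraries (treating a8 as also undermining a7 through its sub-argument a6), and computes the grounded extension with the fixpoint construction sketched earlier. This illustrates the expected outcome of the example; it is not the CISpaces engine.

    def grounded(args, attacks):
        """Least fixpoint of the characteristic function (as in the earlier sketch)."""
        s = set()
        while True:
            nxt = {a for a in args
                   if all(any((d, b) in attacks for d in s)
                          for (b, c) in attacks if c == a)}
            if nxt == s:
                return s
            s = nxt

    # Attacks read off the contrariness relation of the example KB:
    #   humanitarian / strategic contradictories -> a2 and a4 rebut each other
    #   casualties contrary of humanitarian      -> a7 attacks a2
    #   noevidence contrary of technique         -> a8 undermines a6 and hence a7
    #   artillery contrary of noevidence         -> a9 attacks a8
    arguments = {"a1", "a2", "a3", "a4", "a5", "a6", "a7", "a8", "a9"}
    attacks = {("a2", "a4"), ("a4", "a2"), ("a7", "a2"),
               ("a8", "a6"), ("a8", "a7"), ("a9", "a8")}

    print(grounded(arguments, attacks))
    # -> {'a1', 'a3', 'a4', 'a5', 'a6', 'a7', 'a9'}: a9 defeats a8, reinstating a6 and a7;
    #    a7 then defeats a2, so the 'strategic' argument a4 is justified.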
  • 227. Jupiter troops deliver aid to Martians Jupiter intervention on Mars is humanitarian PRO Agreement to exchange crude oil for refined petroleum Jupiter intervention on Mars aims at protecting strategic assets PRO CON CON Civilian casualties caused by Jupiter forces CON LCE Use of old Jupiter military doctrine causes civilian casualties Large use of old Jupiter military techniques on Mars CQ2 There is no evidence to show that the cause occurred CON Use of massive aerial and artillery strikes Cerutti, Oren (Cardiff, Aberdeen) 189 / 203
  • 228. Cerutti, Oren (Cardiff, Aberdeen) 190 / 203 https://cispaces.org/ http://cicero.cs.cf.ac.uk/cispaces/
  • 229. Conclusions Cerutti, Oren (Cardiff, Aberdeen) 191 / 203
  • 231. Backup slides Cerutti, Oren (Cardiff, Aberdeen) 193 / 203
  • 232. Backup Cerutti, Oren (Cardiff, Aberdeen) 194 / 203
  • 233. [Architecture: Sensemaking Agent, Data Request/Crowdsourcing Agent, Provenance Agent; GUI Interface with ToolBox, WorkBox, InfoBox, ReqBox, ChatBox] Cerutti, Oren (Cardiff, Aberdeen) 195 / 203
  • 234. Crowdsourcing Agent 1 Critical questions trigger the need for further information on a topic 2 The analyst calls the crowdsourcing agent (CWSAg) 3 CWSAg distributes the query to a large group of contributors 4 CWSAg aggregates the results and shows statistics to the analyst Cerutti, Oren (Cardiff, Aberdeen) 196 / 203
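A minimal sketch of step 4, assuming the simplest possible aggregation: count the crowd's 'for'/'against' responses per question and flag when two questions about the same claim end up with opposing majorities. The question ids and vote counts below are made up for illustration.

    from collections import Counter

    def aggregate(answers):
        """answers: question id -> list of 'for' / 'against' crowd responses."""
        return {qid: Counter(votes) for qid, votes in answers.items()}

    def contradictory(summary, q_a, q_b):
        """Flag two questions about the same claim whose majorities disagree."""
        majority = lambda q: "for" if summary[q]["for"] >= summary[q]["against"] else "against"
        return majority(q_a) != majority(q_b)

    # Hypothetical responses about the claim 'the water is contaminated'.
    crowd = {"Q0": ["against"] * 12 + ["for"] * 3,
             "Q1": ["for"] * 9 + ["against"] * 4}
    summary = aggregate(crowd)
    print(summary, "CONTRADICTORY" if contradictory(summary, "Q0", "Q1") else "CONSISTENT")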
  • 235. CWSAg Results Import [Screenshot: imported crowd results; Q0-Answer: Clear (Con), Q1-Answer: 21.1 (Pro); Q0 counted AGAINST and Q1 FOR 'Water Contaminated'; flagged CONTRADICTORY] Cerutti, Oren (Cardiff, Aberdeen) 197 / 203
  • 236. [Architecture: Sensemaking Agent, Data Request/Crowdsourcing Agent, Provenance Agent; GUI Interface with ToolBox, WorkBox, InfoBox, ReqBox, ChatBox] Cerutti, Oren (Cardiff, Aberdeen) 198 / 203
  • 237. [Diagram: two conflicting reports about border L1-L2 ('Gang heading South' vs 'Gang crossing North border'): information ij obtained from a surveillance image via image processing, information ik relayed as a message from an Observer through a Messenger and an Informer to Analyst Joe, each with its provenance chain GP(ij) and GP(ik)] Cerutti, Oren (Cardiff, Aberdeen) 199 / 203
  • 238. Argument from Provenance - Given a provenance chain GP(ij) of information ij: - (Where?) it was derived from an entity A - (Who?) it was associated with actor AG - (What?) it was generated by activity P1 - (How?) it was informed by activity P2 - (Why?) it was generated to satisfy goal X - (When?) it was generated at time T - (Which?) it was generated by using some entities A1, . . . , AN - where A, AG, P1, . . . belong to GP(ij) - and the stated elements of GP(ij) support the inference that information ij is true, ⇒ Therefore, information ij may plausibly be taken to be true. CQA1: Is ij consistent with other information? CQA2: Is ij supported by evidence? CQA3: Does GP(ij) contain other elements that lead us not to believe ij? CQA4: Are there provenance elements that should have been included for believing ij? Cerutti, Oren (Cardiff, Aberdeen) 200 / 203
  • 239. Argument for Provenance Preference - Given information ij and ik, - and the known parts of their provenance chains GP(ij) and GP(ik), - if there exists a criterion Ctr such that GP(ij) ≺Ctr GP(ik), then ij ≺ ik - a criterion Ctr leads us to assert that GP(ij) ≺Ctr GP(ik) ⇒ Therefore, ik should be preferred to ij. Criteria include: Trustworthiness, Reliability, Timeliness, Shortest path. CQB1: Does a different criterion Ctr1, under which GP(ij) is preferred to GP(ik), lead to the preference ij ≺ ik not being valid? CQB2: Is there any exception to criterion Ctr such that even if a provenance chain GP(ik) is preferred to GP(ij), information ik is not preferred to information ij? CQB3: Is there any other reason for believing that the preference ij ≺ ik is not valid? Cerutti, Oren (Cardiff, Aberdeen) 201 / 203
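A minimal sketch of one such criterion, timeliness: the provenance chain generated more recently is preferred, and that preference is then transferred to the information it supports (subject to CQB1, CQB2 and CQB3). The chain representation and field names are illustrative assumptions, not the CISpaces provenance model.

    from datetime import datetime

    # Illustrative provenance chains for two conflicting reports i_j and i_k.
    gp_ij = {"derived_from": "surveillance image", "actor": "image processing service",
             "generated_at": datetime(2015, 4, 26, 9, 30)}
    gp_ik = {"derived_from": "messenger report",   "actor": "analyst Joe",
             "generated_at": datetime(2015, 4, 27, 2, 27)}

    def preferred_by_timeliness(gp_a, gp_b):
        """Criterion Ctr = timeliness: the more recently generated chain is preferred."""
        return gp_a["generated_at"] > gp_b["generated_at"]

    # If GP(i_k) is preferred to GP(i_j) under the criterion, then i_k is preferred to i_j.
    if preferred_by_timeliness(gp_ik, gp_ij):
        print("prefer i_k over i_j (timeliness)")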
  • 240. PVAg Provenance Analysis & Import [Screenshot: PROV-style provenance graph for INFO 'Livestock illness' (prov:time 2015-04-27T02:27:40Z), linking a US Patrol Report Extract activity (wasAssociatedWith US Team Patrol), a Farm Daily Report prepared by Kish Farmer (type PrimarySource), an Annotate activity and Livestock Pictures to the Livestock Information via used / wasGeneratedBy / wasDerivedFrom relations; with IMPORT and ANALYSIS actions and the prompt 'IMPORT OF PREFERENCES?'] Cerutti, Oren (Cardiff, Aberdeen) 202 / 203
  • 241. Theories/Technologies integrated Argument representation: Argument Schemes and Critical Questions (domain specific) "Bipolar-like" graph for user consumption AIF (extension for provenance) ASPIC(+) Arguments based on preferences (partially under development) Theoretical framework for acceptability status: AF PrAF (case study for [Li15]) AFRA for preference handling (under development) Computational machinery: jArgSemSAT Cerutti, Oren (Cardiff, Aberdeen) 203 / 203