Process Reasoning
and Mining
Marco Montali
Free University of Bozen Bolzano

montali@inf.unibz.it
with
Uncertainty
Credits to 

Anti Alman, Sander J. J. Leemans, 

Fabrizio M. Maggi, Rafael Peñaloza
AdONE

16/05/2022
• Fix a finite alphabet Σ of atomic tasks
• Process execution: finite sequence of events on a specific case
• Event: atomic task in some position (data/time abstraction)
• Execution trace: element from Σ*
Process executions
Setting the stage…
2
Process, trace
Setting the stage…
3
Σ*
Types of process representations
4
Where is uncertainty?
5
(figure: uncertainty located in the trace vs. in the process, both over Σ*)
6
Plan today
1. Infusing declarative/imperative process models with uncertainty
2. Show how we can reason on these models and their traces
3. Use these techniques for stochastic process mining (discovery, conformance checking, monitoring)
7
Declarative process
specifications
Constraints declaratively predicating on the execution
of activities over time.

Examples: Declare [Pesic et al, EDOC07; _ et al,
TWEB11] and DCR Graphs [Slaats et al, BPM13].

Declare uses LTL over finite traces (LTLf) and finite-state automata to provide support for the whole lifecycle: consistency, enactment, monitoring, discovery.
Temporal process constraints
8
A front-end for linear temporal logic over finite traces
The Declare framework
9
Crisp semantics of constraints: an execution trace conforms to the model if it satisfies every constraint in the model.
close
order
1..1
accept
refuse
• Best practices: constraints that must hold in the majority, but
not necessarily all, cases.

90% of the orders are shipped via truck.
• Outlier behaviors: constraints that only apply to very few, but
still conforming, cases.

Only 1% of the orders are canceled after being paid.
• Constraints involving external parties: contain uncontrollable
activities for which only partial guarantees can be given.

In 8 cases out of 10, the customer accepts the order and also
pays for it.
Some examples
Uncertainty is pervasive
10
Crisp and uncertain constraints
ProbDeclare
11
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
Crisp and uncertain constraints
ProbDeclare
12
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
Crisp and uncertain constraints
ProbDeclare
13
close
order
1..1
accept
refuse
Crisp!
Each trace in
the log
contains
exactly one
close order
{0.8}
{0.3}
{0.9}
Crisp and uncertain constraints
ProbDeclare
14
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
Uncertain!
In 90% of the traces, an order is not both accepted and refused.

In the remaining 10% of the traces, the seller changes their mind.
Crisp and uncertain constraints
ProbDeclare
15
close
order
1..1
accept
refuse
Uncertain!
In 90% of the traces, an order is not both accepted and refused.

In the remaining 10% of the traces, the seller changes their mind.
{0.8}
{0.3}
{0.9}
• A stochastic language over Σ is a function ρ : Σ* → [0,1] such that ∑_{τ∈Σ*} ρ(τ) = 1
• finite: finitely many traces get a non-zero probability
• A log can be seen as a finite stochastic language
ProbDeclare is interpreted over finite stochastic languages
Formally…
16
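As a side illustration (not on the slide), a finite stochastic language can be represented directly in Python as a mapping from traces to probabilities; the activity names and numbers below are invented.

# A finite stochastic language: traces (tuples over Σ) mapped to probabilities.
log = {
    ("close order", "accept"): 0.7,
    ("close order", "refuse"): 0.2,
    ("close order", "accept", "refuse"): 0.1,
}
assert abs(sum(log.values()) - 1.0) < 1e-9  # the probabilities must sum to 1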
A ProbDeclare constraint over Σ is a triple ⟨φ, ⋈, p⟩, where:
• the process condition φ is an LTLf formula over Σ
• the probability operator ⋈ is one of {=, ≠, ≤, ≥, <, >}
• the probability reference value p is a rational value in [0,1]
A stochastic language ρ satisfies a ProbDeclare constraint ⟨φ, ⋈, p⟩ if ∑_{τ∈Σ*, τ⊧φ} ρ(τ) ⋈ p
Semantics of a ProbDeclare constraint
Formally…
17
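A minimal sketch of this semantics in Python, assuming some trace-level LTLf evaluator satisfies(trace, phi) is available (it is not given on the slide):

import operator

OPS = {"=": operator.eq, "!=": operator.ne, "<=": operator.le,
       ">=": operator.ge, "<": operator.lt, ">": operator.gt}

def constraint_holds(rho, phi, op, p, satisfies):
    # rho: dict mapping traces to probabilities; <phi, op, p>: a ProbDeclare constraint.
    # Sum the probability mass of the traces satisfying phi and compare it with p.
    mass = sum(prob for trace, prob in rho.items() if satisfies(trace, phi))
    return OPS[op](mass, p)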
A stochastic language ρ satisfies a ProbDeclare specification if:
• For every crisp constraint φ and every trace τ ∈ Σ* with non-zero probability, we have that τ ⊧ φ
• For every probabilistic constraint ⟨φ, ⋈, p⟩, we have ∑_{τ∈Σ*, τ⊧φ} ρ(τ) ⋈ p
From one to many constraints
18
Key challenge: interplay of multiple constraints
A constraint scenario picks which probabilistic constraints must
hold, and which are violated (i.e., their negated version must hold).

It denotes a “process variant”.

All in all: up to 2^n scenarios, denoting different “process variants”.
Constraint scenarios
Dealing with “n” probabilistic constraints
19
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
A constraint scenario picks which probabilistic constraints must
hold, and which are violated (i.e., their negated version must hold).

It denotes a “process variant”.

All in all: up to 2^n scenarios, denoting different “process variants”.
Constraint scenarios
Dealing with “n” probabilistic constraints
20
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
8 scenarios
(1) (2) (3)
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
“1”: satisfied
“0”: violated
Interplay between logic and probabilities
Reasoning over scenarios is tricky
21
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
Logically, there cannot be traces satisfying the crisp constraints and also all the uncertain ones.

Hence, scenario 111 is inconsistent!
• A scenario indicates which constraints hold and which don’t
• A constraint holds in a trace if the trace satisfies the constraint formula
• A constraint does not hold in a trace if the trace violates the constraint formula, i.e., satisfies the negation of the constraint formula
• Scenario characteristic formula:
LTLf to the rescue
Logical reasoning within scenarios
22
[…satis]faction/violation of constraints as indicated by the scenario. Three questions immediately arise: (i) how does one check to which scenario(s) a trace belongs? (ii) Can a trace belong to multiple scenarios? (iii) Are all scenarios meaningful, or should we discard some of them?
To answer such questions, we provide a logical characterization of scenarios. First and foremost, we introduce a characteristic LTLf formula for a scenario: a trace belongs to a scenario if and only if the trace satisfies the characteristic formula of the scenario.
Definition 15. Let M = ⟨Σ, C, ⟨⟨φ₁, ⋈₁, p₁⟩, …, ⟨φₙ, ⋈ₙ, pₙ⟩⟩⟩ be a ProbDeclare model. The characteristic formula induced by a scenario S^M_{b₁⋯bₙ} over M, compactly called the S^M_{b₁⋯bₙ}-formula, is the LTLf formula
Φ(S^M_{b₁⋯bₙ}) = ⋀_{ψ∈C} ψ ∧ ⋀_{i∈{1,…,n}} (φᵢ if bᵢ = 1; ¬φᵢ if bᵢ = 0)   (8)
Definition 16. A trace τ belongs to scenario S^M_{b₁⋯bₙ} if τ ⊧ Φ(S^M_{b₁⋯bₙ}).
Example
23
close
order
1..∗
accept
order
{0.8}
1
refuse
order
{0.3}
2
{0.9}
3
1 2 3 consistent?
S000 ✸(close ∧ ¬©✸acc) ✸(close ∧ ¬©✸ref) ✸acc ∧ ✸refuse no
S001 ✸(close ∧ ¬©✸acc) ✸(close ∧ ¬©✸ref) ¬(✸acc ∧ ✸refuse) yes
S010 ✸(close ∧ ¬©✸acc) ✷(close → ©✸ref) ✸acc ∧ ✸refuse no
S011 ✸(close ∧ ¬©✸acc) ✷(close → ©✸ref) ¬(✸acc ∧ ✸refuse) yes
S100 ✷(close → ©✸acc) ✸(close ∧ ¬©✸ref) ✸acc ∧ ✸refuse no
S101 ✷(close → ©✸acc) ✸(close ∧ ¬©✸ref) ¬(✸acc ∧ ✸refuse) yes
S110 ✷(close → ©✸acc) ✷(close → ©✸ref) ✸acc ∧ ✸refuse yes
S111 ✷(close → ©✸acc) ✷(close → ©✸ref) ¬(✸acc ∧ ✸refuse) no
Figure 1: A ProbDeclare model, with 8 constraint scenarios, out of which only 4 are consistent.
Recall that each scenario induces a formula that does not simply conjoin the positive/negated
variants of the probabilistic constraints, but includes also the conjunction of the formulae for
crisp constraints.
Interplay between logic and probabilities
Reasoning over scenarios is tricky
24
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
0.8+0.3 > 1, hence there must be traces
where a closed order is accepted and
refused (and so the two activities coexist).
Hence, scenario 110 must have a non-zero probability!
• For the 2^n scenarios, let x_i with i ∈ {0,…,2^n−1} be the probability that an arbitrary trace belongs to scenario i
• What are the legitimate probability distributions over scenarios?
• There may be infinitely many
• No solution: inconsistent specification
From probabilistic constraints to probability distributions over scenarios
Probabilities of scenarios
25
[…] masses associated to consistent scenarios, we set up a system of inequalities whose solutions constitute all the probability distributions that are compatible with the logical and probabilistic characterization of the probabilistic constraints in the ProbDeclare model of interest. To do so, we associate each scenario to a probability variable, keeping the same naming convention. For example, the probability mass of scenario S001 is represented by variable x001. For M = ⟨Σ, C, ⟨⟨φ₁, ⋈₁, p₁⟩, …, ⟨φₙ, ⋈ₙ, pₙ⟩⟩⟩, we construct the system L_M of inequalities using probability variables x_i, with i ranging from 0 to 2^n − 1 (in binary format):
x_i ≥ 0, for 0 ≤ i < 2^n   (9)
∑_{i=0}^{2^n−1} x_i = 1   (10)
(∑_{i ∈ {0,…,2^n−1}, jth position of i is 1} x_i) ⋈_j p_j, for 0 ≤ j < n   (11)
x_i = 0, for 0 ≤ i < 2^n such that scenario S_i is inconsistent   (12)
The first two lines guarantee that the variables x_i indeed form a probability distribution, being all non-negative and collectively summing up to 1. The schema of inequalities captured in Equation (11) verifies the probability associated to each […]
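A hedged sketch of this system with scipy, restricted to the case where every probability operator is '='; the function and variable names are mine, not from the paper:

import numpy as np
from scipy.optimize import linprog

def scenario_distribution(n, probs, inconsistent):
    # probs[j] = p_j for constraint j (operator '='); inconsistent: scenario indices forced to 0.
    m = 2 ** n
    A_eq, b_eq = [np.ones(m)], [1.0]                       # (10): the x_i sum to 1
    for j, p in enumerate(probs):                           # (11): mass of scenarios whose j-th bit is 1
        A_eq.append(np.array([1.0 if (i >> (n - 1 - j)) & 1 else 0.0 for i in range(m)]))
        b_eq.append(p)
    bounds = [(0.0, 0.0) if i in inconsistent else (0.0, 1.0) for i in range(m)]  # (9) and (12)
    res = linprog(c=np.zeros(m), A_eq=np.vstack(A_eq), b_eq=b_eq, bounds=bounds, method="highs")
    return res.x if res.success else None                   # None: inconsistent specification

# For the running example: scenario_distribution(3, [0.8, 0.3, 0.9], {0b000, 0b010, 0b100, 0b111})
# returns the distribution (0, 0, 0, 0.2, 0, 0.7, 0.1, 0) over scenarios 000...111.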
Example
26
close
order
1..1
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
sign
consent
close
order
1..∗
{0.8} 1
{0.1} 2
1 2 consistent?
S00 ¬sign U close ✸(close ∧ ¬©✸sign) yes
S01 ¬sign U close ✷(close → ©✸sign) yes
S10 ¬close W sign ✸(close ∧ ¬©✸sign) yes
S11 ¬close W sign ✷(close → ©✸sign) yes
Figure 2: A ProbDeclare model and its 4 constraint scenarios.
once the variables above are removed (being them all equal to 0):
x001 + x011 + x101 + x110 = 1
x101 + x110 = 0.8
x011 + x110 = 0.3
x001 + x011 + x101 = 0.9
It is easy to see that this system of equations admits only one solution: x001 = 0, x011 = 0.2, x101 = 0.7, x110 = 0.1. This solution witnesses that scenario S[…]
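As a quick check (not part of the paper), the four equations over the consistent scenarios can be solved directly:

import numpy as np
# unknowns ordered as (x001, x011, x101, x110)
A = np.array([[1, 1, 1, 1],    # x001 + x011 + x101 + x110 = 1
              [0, 0, 1, 1],    # x101 + x110 = 0.8
              [0, 1, 0, 1],    # x011 + x110 = 0.3
              [1, 1, 1, 0]])   # x001 + x011 + x101 = 0.9
b = np.array([1.0, 0.8, 0.3, 0.9])
print(np.linalg.solve(A, b))   # -> [0.  0.2  0.7  0.1]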
Reasoning in a scenario: standard LTLf reasoning:

• Inconsistent scenarios get probability 0 (no conforming trace).

Constraint probabilities induce probability distribution on scenarios:

• System of linear inequalities to compute scenario probabilities.
Which scenarios are possible? With which probability?
Reasoning over scenarios
27
close
order
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
scenario
consistent? probability
(1) (2) (3)
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
1..1
Reasoning in a scenario: standard LTLf reasoning:

• Inconsistent scenarios get probability 0 (no conforming trace).

Constraint probabilities induce probability distribution on scenarios:

• System of linear inequalities to compute scenario probabilities.
Which scenarios are possible? With which probability?
Reasoning over scenarios
28
close
order
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
1..1
scenario
consistent? probability
(1) (2) (3)
0 0 0 N
0 0 1 Y
0 1 0 N
0 1 1 Y
1 0 0 N
1 0 1 Y
1 1 0 Y
1 1 1 N
Reasoning in a scenario: standard LTLf reasoning:

• Inconsistent scenarios get probability 0 (no conforming trace).

Constraint probabilities induce probability distribution on scenarios:

• System of linear inequalities to compute scenario probabilities.
Which scenarios are possible? With which probability?
Reasoning over scenarios
29
close
order
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
1..1
scenario
consistent? probability
(1) (2) (3)
0 0 0 N 0
0 0 1 Y
0 1 0 N 0
0 1 1 Y
1 0 0 N 0
1 0 1 Y
1 1 0 Y
1 1 1 N 0
scenario
consistent? probability
(1) (2) (3)
0 0 0 N 0
0 0 1 Y 0
0 1 0 N 0
0 1 1 Y 0.2
1 0 0 N 0
1 0 1 Y 0.7
1 1 0 Y 0.1
1 1 1 N 0
Reasoning in a scenario: standard LTLf reasoning:

• Inconsistent scenarios get probability 0 (no conforming trace).

Constraint probabilities induce probability distribution on scenarios:

• System of linear inequalities to compute scenario probabilities.
Which scenarios are possible? With which probability?
Reasoning over scenarios
30
close
order
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
1..1
Reasoning in a scenario: standard LTLf reasoning:

• Inconsistent scenarios get probability 0 (no conforming trace).

Constraint probabilities induce probability distribution on scenarios:

• System of linear inequalities to compute scenario probabilities.
Which scenarios are possible? With which probability?
Reasoning over scenarios
31
accept
refuse
{0.8}
{0.3}
{0.9}
1
2
3
scenario
consistent? probability
(1) (2) (3)
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
scenario
consistent? probability
(1) (2) (3)
0 0 0 N
0 0 1 Y
0 1 0 N
0 1 1 Y
1 0 0 N
1 0 1 Y
1 1 0 Y
1 1 1 N
scenario
consistent? probability
(1) (2) (3)
0 0 0 N 0
0 0 1 Y
0 1 0 N 0
0 1 1 Y
1 0 0 N 0
1 0 1 Y
1 1 0 Y
1 1 1 N 0
scenario
consistent? probability
(1) (2) (3)
0 0 0 N 0
0 0 1 Y 0
0 1 0 N 0
0 1 1 Y 0.2
1 0 0 N 0
1 0 1 Y 0.7
1 1 0 Y 0.1
1 1 1 N 0
scenario
consistent? probability
(1) (2) (3)
0 0 0 N 0
0 0 1 Y 0
0 1 0 N 0
0 1 1 Y 0.2
1 0 0 N 0
1 0 1 Y 0.7
1 1 0 Y 0.1
1 1 1 N 0
1..1
close
order
Typical discovery procedure

1. Candidate constraints are generated by analysing the structure
of the log and the activities contained therein

2. Compute the support per constraint

3. Filter constraints based on support

4. Apply further filters based on redundancy, interestingness, vacuity, …
Process discovery
Scenarios in action
32
Key challenge: consistency is only guaranteed if support is 100%
3. Candidate formulae are filtered, retaining only th[ose whose support ex]ceeds a given threshold.
4. Further filters are applied, for example considering [redun]dancy, interestingness, and vacuity [7, 11, 27].
In this pipeline, the notion of support is typically fo[…]
Definition 18. The support of an LTLf formula ϕ in a [log L]:
supp_L(ϕ) = (∑_{τ∈L, τ⊧ϕ} L(τ)) / |L|
To obtain a meaningful Declare model in output, t[here is a] crucial catch: the formulae that pass all the steps of the [pipeline may form] an overall inconsistent model. The reason is that formu[lae with support] strictly less than 1 may actually conflict with each oth[er, which is not] recognized by the model, which does not keep nor use a[…] to support. Fixing these potential inconsistencies calls [for post-]processing techniques [11].
Support naturally matches the semantics of probabilistic constraints
• Consistency by design: whenever we discover an interesting constraint φ, we retain its support as its probability: ⟨φ, =, supp_L(φ)⟩
• Two implications on “probabilistic interestingness” and “probabilistic overfitting”
Process discovery
Scenarios in action
33
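A sketch of Definition 18 and of the “keep the support as the probability” idea; the log is assumed to be a multiset of traces, and satisfies(trace, phi) is again an assumed LTLf evaluator:

from collections import Counter

def support(log, phi, satisfies):
    # log: Counter mapping each trace to its number of occurrences.
    total = sum(log.values())
    return sum(cnt for trace, cnt in log.items() if satisfies(trace, phi)) / total

def discovered_constraint(log, phi, satisfies):
    # Retain the support as the constraint probability: <phi, =, supp_L(phi)>.
    return (phi, "=", support(log, phi, satisfies))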
Scenarios in action
Probabilistic conformance checking
34
close
order
accept
<close order>
close
order
refuse
accept refuse
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
Scenarios in action
Probabilistic conformance checking
35
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
0
0
1
close
order
accept
<close order>
close
order
refuse
accept refuse
Scenarios in action
Probabilistic conformance checking
36
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
0
0
1
Violation!
close
order
accept
<close order>
close
order
refuse
accept refuse
Scenarios in action
Probabilistic conformance checking
37
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
close
order
accept
<close order, accept, refuse>
close
order
refuse
accept refuse
Scenarios in action
Probabilistic conformance checking
38
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
close
order
accept
<close order, accept, refuse>
close
order
refuse
accept refuse
1
1
0
Scenarios in action
Probabilistic conformance checking
39
scenario
probability
(1) (2) (3)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0.2
1 0 0 0
1 0 1 0.7
1 1 0 0.1
1 1 1 0
1
1
0
Conforming!

(but a rare 

case)
close
order
accept
<close order, accept, refuse>
close
order
refuse
accept refuse
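The trace-level check illustrated in these slides can be sketched as follows (satisfies is assumed, the rest mirrors the tables above): evaluate the constraints on the trace to obtain its scenario, then read off the scenario probability.

def trace_conformance(trace, crisp, probabilistic, scenario_prob, satisfies):
    # Returns (scenario, probability); probability 0 marks a non-conforming trace.
    if not all(satisfies(trace, phi) for phi in crisp):
        return None, 0.0                         # a crisp constraint is violated
    bits = "".join("1" if satisfies(trace, phi) else "0" for phi in probabilistic)
    return bits, scenario_prob.get(bits, 0.0)

# <close order> falls into scenario 001 (probability 0 here: violation);
# <close order, accept, refuse> into scenario 110 (probability 0.1: conforming but rare).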
Scenarios in action
Probabilistic monitoring
40
Whole scenarios have to be considered: one LTLf monitor per scenario.

Monitors used in parallel: if multiple return the same verdict, aggregate
probability values can be returned for sophisticated feedback.

The interpretability of this feedback is an interesting open question.
Scenarios in action
Probabilistic monitoring
41
Fully implemented!
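One possible aggregation scheme, sketched under the assumption that each scenario monitor returns an RV-LTL-style verdict on the current prefix (the verdict names are invented):

from collections import defaultdict

def aggregate_verdicts(scenario_prob, scenario_verdict):
    # scenario_verdict: scenario -> verdict, e.g. "perm_sat", "temp_sat", "temp_viol", "perm_viol".
    mass = defaultdict(float)
    for scenario, prob in scenario_prob.items():
        mass[scenario_verdict[scenario]] += prob
    return dict(mass)   # probability mass currently supporting each verdict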
From traces to logs
Stochastic conformance (granularity: scenario)
42
Log
ProbDeclare specification
From traces to logs
Stochastic conformance (granularity: scenario)
43
Log
ProbDeclare specification
Consistent scenarios
From traces to logs
Stochastic conformance (granularity: scenario)
44
Log
ProbDeclare specification
Consistent scenarios
Specification distribution
From traces to logs
Stochastic conformance (granularity: scenario)
45
Log
ProbDeclare specification
Consistent scenarios
Specification distribution
Log distribution
From traces to logs
Stochastic conformance (granularity: scenario)
46
Log
ProbDeclare specification
Consistent scenarios
Specification distribution
Log distribution
(Earth mover’s) distance
Can be refined through trace alignments
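A minimal sketch of the final comparison step; treating the scenarios as points 0..k−1 on a line is a simplifying assumption about the ground distance used by the earth mover’s distance, and the log figures are invented:

from scipy.stats import wasserstein_distance

scenarios = ["001", "011", "101", "110"]
spec_dist = [0.0, 0.2, 0.7, 0.1]    # distribution induced by the specification
log_dist  = [0.1, 0.3, 0.5, 0.1]    # scenario frequencies observed in a log (invented)
positions = list(range(len(scenarios)))
emd = wasserstein_distance(positions, positions, spec_dist, log_dist)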
47
Imperative process
models
Process control-flow with Petri nets
Minimum requirements
48
(figure: claim-handling process — claim received, check claim, info complete? yes/no, obtain missing info, check claim again, review claim)
Process control-flow with Petri nets
Minimum requirements
49
(figure: the same claim-handling process rendered as a labelled Petri net, highlighting labels, repeated labels, and silent transitions)
Unlogged tasks as silent transitions
50
Fig. 2: Stochastic net of an order-to-cash process (open, insert item, finalize, accept, reject, pay, cancel, delete). Weights are presented symbolically. Transition t12 captures a task that cannot be logged, and so is modelled as silent.
Definition 1 (Labelled Petri net). A labelled Petri net N is a tuple ⟨Q, T, F, ℓ⟩, where: (i) Q is a finite set of places; (ii) T is a finite set of transitions, disjoint from Q (i.e., Q ∩ T = ∅); (iii) F ⊆ (Q × T) ∪ (T × Q) is a flow relation connecting places to […]
Unlogged tasks as silent transitions
51
(Fig. 2 and the excerpt of Definition 1 repeated.)
• Fix an initial marking (= initial state) and a set of deadlocking final markings (= final states)
• Run: valid sequence of transitions from the initial state to some final state
• Trace: projection of the run on visible transitions
• How many runs for a trace? Potentially infinite!
Runs and traces
52
(Fig. 2 and the excerpt of Definition 1 repeated.)
Via transition systems (interleaving semantics)
Execution semantics
53
Fig. 2 (repeated): stochastic net of the order-to-cash process.
Definition 1 (Labelled Petri net). A labelled Petri net N is a tuple ⟨Q, T, F, ℓ⟩, where: (i) Q is a finite set of places; (ii) T is a finite set of transitions, disjoint from Q (i.e., Q ∩ T = ∅); (iii) F ⊆ (Q × T) ∪ (T × Q) is a flow relation connecting places to transitions and transitions to places; (iv) ℓ : T → Σ ∪ {τ} is a labelling function mapping each transition t ∈ T to a corresponding label ℓ(t) that is either a task name from Σ or the silent label τ.
initial state: [q0]
final states: [q6], [q7], [q8]
Fig. 3 (excerpt): stochastic reachability graph of the order-to-cash net, with firing probabilities such as ρ_i = i/(i+c), ρ_c = c/(i+c), ρ_m = m/(m+f), ρ_f = f/(m+f), ρ_a = a/(a+r), ρ_r = r/(a+r), ρ_b = b/(b+p+d), ρ_d = d/(b+p+d), ρ_p = p/(b+p+d).
From nondeterminism to probability distribution over next markings
Stochastic decision making
54
(Fig. 2 repeated: stochastic net of the order-to-cash process, with symbolic weights and silent transition t12.)
• Every transition gets a weight
• Probability for an enabled transition to fire: its relative weight over all enabled transitions
• Probability of a run: product of the firing probabilities
• Every transition gets a weight
• Probability for an enabled transition to fire: its relative weight over all enabled transitions
• Probability of a run: product of the firing probabilities
From nondeterminism to probability distribution over next markings
Stochastic decision making
55
(Fig. 2 repeated.) Where accept and reject are both enabled, accept fires with probability a/(a + r) and reject fires with probability r/(a + r).
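A sketch of the normalization just described, with invented weights; which transitions are enabled would come from the marking semantics of the net:

def firing_probability(weights, enabled, t):
    # Probability that enabled transition t fires: its weight relative to all enabled transitions.
    return weights[t] / sum(weights[u] for u in enabled)

def run_probability(steps, weights):
    # steps: list of (enabled_transitions, fired_transition) pairs along a run.
    prob = 1.0
    for enabled, fired in steps:
        prob *= firing_probability(weights, enabled, fired)
    return prob

# e.g. with weights = {"accept": 4, "reject": 1}, accept fires with probability 4/5 = a/(a+r).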
Via stochastic transition systems
Execution semantics
56
(Fig. 2 and Fig. 3 repeated: the stochastic order-to-cash net and its stochastic reachability graph, with initial state [q0] and final states [q6], [q7], [q8].)
Key questions
57
1. Probability of a trace?
2. Verification of qualitative properties: probability that the net satisfies a given declarative specification?
3. Conformance to a ProbDeclare specification?
1. Probability of a trace?
2. Verification of qualitative properties: probability that the net satisfies a given declarative specification?
3. Conformance to a ProbDeclare specification?
and answers
Key questions
58
A. Warm-up: outcome probability
B. Play with automata
C. Reduce all three questions above to A.
Idea: labels are not important
-> the stochastic transition system behaves like a Markov chain
Outcome probability ~ Markov chain exit distributions
-> can be solved analytically
Probability of reaching a final state
Outcome probability
59
Fig. 3: Stochastic reachability graph of the order-to-cash bounded stochastic PNP. States are named. The initial state is shown with a small incoming edge. Final states have a double contour.
Definition 7 (Labelled transition system). A labelled transition system is a tuple ⟨S, s0, Sf, ϱ⟩ where: (i) S is a (possibly infinite) set of states; (ii) s0 ∈ S is the initial state; (iii) Sf ⊆ S is the set of accepting states; (iv) ϱ ⊆ S × Σ × S is a Σ-labelled transition relation. A run is a finite sequence of transitions leading from s0 to one of the […]
Fig. 4: Reachability graph (b) of a bounded stochastic PNP with net shown in (a), initial marking [q0] and final marking [q1]. States s2 and s3 are livelock markings.
By recalling that states of RG(N) are markings of N, the schema (1) of equations deals with final (deadlock) states, (2) with non-final deadlock states, and (3) with non-final, non-deadlock states.
E^F_N always has at least a solution. However, it may be indeterminate and thus admit infinitely many ones, requiring in that case to pick the least committing (i.e., minimal non-negative) solution. The latter case happens when N contains livelock markings. This is illustrated in the following examples.
Example 2. Consider the bounded stochastic PNP Norder (Figure 2). We want to solve the problem OUTCOME-PROB(Norder, [q6]), to compute the probability that a created order eventually completes the process by being paid. To do so, we solve E^{[q6]}_{Norder} by encoding the reachability graph of Figure 3 into:
x_{s6} = 1, x_{s7} = 0, x_{s8} = 0, x_{s5} = x_{s8}, x_{s0} = x_{s1},
x_{s1} = ρ_i·x_{s2} + ρ_c·x_{s5}, x_{s2} = ρ_m·x_{s1} + ρ_f·x_{s3},
x_{s3} = ρ_a·x_{s4} + ρ_r·x_{s7}, x_{s4} = ρ_b·x_{s1} + ρ_d·x_{s5} + ρ_p·x_{s6}
This yields x_{s0} = (ρ_i ρ_f ρ_a ρ_p·x_{s6} + ρ_i ρ_f ρ_r·x_{s7} + (ρ_i ρ_f ρ_a ρ_d + ρ_c)·x_{s8}) / (1 − ρ_i ρ_m − ρ_i ρ_f ρ_a ρ_b) = ρ_i ρ_f ρ_a ρ_p / (1 − ρ_i ρ_m − ρ_i ρ_f ρ_a ρ_b), which […]
The issue of livelocks
60
Fig. 4 (repeated): reachability graph (b) of a bounded stochastic PNP with net shown in (a), initial marking [q0] and final marking [q1]. States s2 and s3 are livelock markings.
By recalling that states of RG(N) are markings of N, the schema (1) of equations deals with final (deadlock) states, (2) with non-final deadlock states, and (3) with non-final, non-deadlock states.
initial state: [q0]
final state: [q1]
General solution
61
Base case: if s has no successor states (i.e., it is a deadlock marking), then x_s = 1 if s corresponds to a final marking, otherwise x_s = 0 (witnessing that F cannot be reached).
Inductive case: if s has at least one successor, then its variable is equal to the sum of the state variables of its successor states, weighted by the transition probability to move to that successor.
Formally, OUTCOME-PROB(N, F) with RG(N) = ⟨S, s0, Sf, ϱ, p⟩ gets encoded into the following linear optimisation problem E^F_N:
Return x_{s0} from the minimal non-negative solution of
x_{si} = 1, for each si ∈ F   (1)
x_{sj} = 0, for each sj ∈ S ∖ F s.t. |succ_{RG(N)}(sj)| = 0   (2)
x_{sk} = ∑_{⟨sk,l,s'k⟩ ∈ succ_{RG(N)}(sk)} p(⟨sk, l, s'k⟩) · x_{s'k}, for each sk ∈ S s.t. |succ_{RG(N)}(sk)| > 0   (3)
Example 3 illustrates how the technique implicitly gets rid of livelock markings, associating to them a 0 probability. This captures the essential fact that, by definition, a livelock marking can never reach any final marking. More in general, we can in fact solve OUTCOME-PROB(N, F) by turning the linear optimisation problem E^F_N into the following system of equalities, which is guaranteed to have exactly one solution:
x_{si} = 1, for each deadlock marking si ∈ F   (4)
x_{sj} = 0, for each deadlock marking sj ∈ S ∖ F   (5)
x_{sk} = 0, for each livelock marking sk ∈ S   (6)
x_{sh} = ∑_{⟨sh,l,s'h⟩ ∈ succ_{RG(N)}(sh)} p(⟨sh, l, s'h⟩) · x_{s'h}, for each remaining marking sh ∈ S   (7)
Recall that checking whether a marking s is livelock can be done over RG(N) by […]
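A sketch of the equality system (4)–(7), assuming the stochastic reachability graph is given explicitly as successor lists and livelock states have already been identified (names are mine):

import numpy as np

def outcome_probability(states, succ, final, livelock, initial):
    # succ[s] = list of (probability, successor state); final, livelock: sets of states.
    idx = {s: k for k, s in enumerate(states)}
    A = np.zeros((len(states), len(states)))
    b = np.zeros(len(states))
    for s in states:
        k = idx[s]
        if not succ.get(s):                                # deadlock marking: (4) or (5)
            A[k, k], b[k] = 1.0, 1.0 if s in final else 0.0
        elif s in livelock:                                 # livelock marking: (6)
            A[k, k], b[k] = 1.0, 0.0
        else:                                               # remaining markings: (7)
            A[k, k] = 1.0
            for p, s2 in succ[s]:
                A[k, idx[s2]] -= p
    return np.linalg.solve(A, b)[idx[initial]]             # unique solution, as stated above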
Verification of qualitative properties
62
1. Express the property as a DFA 

(works for traces, LTLf, LDLf, regexp)

2. Extend the DFA with tau-loops, expressing that silent transitions
are always ok

3. Do the cross-product of the extended DFA and the input
stochastic transition system

4. Solve outcome probability of all final states on the resulting TS
Key advantage: no ad-hoc techniques to get rid of
silent transitions!
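A sketch of step 2, plus a note on how it feeds steps 3–4; the data structures are invented for illustration:

def extend_with_tau(dfa_delta, dfa_states):
    # Step 2: add a tau self-loop to every DFA state, so silent transitions are always ok.
    delta = dict(dfa_delta)                 # (state, label) -> state
    for q in dfa_states:
        delta[(q, "tau")] = q
    return delta

# Step 3 pairs each state of the stochastic transition system with a DFA state, synchronizing
# on labels (tau included) and keeping the original firing probabilities; step 4 then runs the
# outcome-probability computation (see the sketch above) on the product, with final states
# being pairs of a final TS state and an accepting DFA state.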
Example: trace probability
Probability of: <open, fin, acc, fin, acc, pay>
63
Fig. 3 (repeated): stochastic reachability graph of the order-to-cash bounded stochastic PNP.
Definition 7 (Labelled transition system), repeated. Due to our requirement that all final markings are deadlock markings, accepting states […]
(a) DFAs A and Ā (the τ-extended version).
(b) Product system between Ā and RG(Norder).
Applications in stochastic process mining
Probabilistic trace alignment
64
Conformance to a ProbDeclare specification
Induced distribution via qualitative verification
65
ProbDeclare specification
Consistent scenarios
Specification distribution
Induced distribution
(Earth mover’s) distance
Stochastic Petri net
Importance of incorporating uncertainty into declarative and procedural process models
Combined reasoning via careful, loosely coupled
interplay of reasoning about time/dynamics and
reasoning about probabilities 

Direct impact on a variety of stochastic process
mining tasks

Main techniques: all implemented!
“The future is uncertain, but the end is always near” (Jim Morrison)
Conclusion
66

More Related Content

PDF
Extending Temporal Business Constraints with Uncertainty
PDF
Extending Temporal Business Constraints with Uncertainty
PPT
cs344-lect11-resolution-robotic-knowledge-representation-29jan08.ppt
PPTX
Ensuring Model Consistency in Declarative Process Discovery
PDF
22 planning
PDF
Data Generation with PROSPECT: a Probability Specification Tool
PPT
A PPT on Constraint Satisfaction problems
PDF
Compliance monitoring of multi-perspective declarative process models
Extending Temporal Business Constraints with Uncertainty
Extending Temporal Business Constraints with Uncertainty
cs344-lect11-resolution-robotic-knowledge-representation-29jan08.ppt
Ensuring Model Consistency in Declarative Process Discovery
22 planning
Data Generation with PROSPECT: a Probability Specification Tool
A PPT on Constraint Satisfaction problems
Compliance monitoring of multi-perspective declarative process models

Similar to Process Reasoning and Mining with Uncertainty (20)

PPT
Goal stack planning.ppt
PPTX
Automated Discovery of Declarative Process Models
PPT
05-constraint-satisfaction-problems-(us).ppt
PDF
Planning Agent
PDF
PPT
Cs ps, sat, fol resolution strategies
PPTX
lecture9 constraint problem and path finding
PPT
Constraint_Satisfaction problem based_slides.ppt
PDF
Hak ontoforum
PPT
Bounded Model Checking
PDF
Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors
PPTX
Resolving Inconsistencies and Redundancies in Declarative Process Models
PPT
Constraint Satisfaction problem in AI.ppt
PPT
PropositionalLogic.ppt
PDF
BCS515B Module 5 vtu notes : Artificial Intelligence Module 5.pdf
PDF
Towards a Probabilistic Extension to Non-Deterministic Transitions in Model-B...
PPT
5 csp
PDF
Operationalizing Declarative and Procedural Knowledge
PDF
inf5430_sv_randomization.pdf
PPTX
Problem Solving Techniques
Goal stack planning.ppt
Automated Discovery of Declarative Process Models
05-constraint-satisfaction-problems-(us).ppt
Planning Agent
Cs ps, sat, fol resolution strategies
lecture9 constraint problem and path finding
Constraint_Satisfaction problem based_slides.ppt
Hak ontoforum
Bounded Model Checking
Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors
Resolving Inconsistencies and Redundancies in Declarative Process Models
Constraint Satisfaction problem in AI.ppt
PropositionalLogic.ppt
BCS515B Module 5 vtu notes : Artificial Intelligence Module 5.pdf
Towards a Probabilistic Extension to Non-Deterministic Transitions in Model-B...
5 csp
Operationalizing Declarative and Procedural Knowledge
inf5430_sv_randomization.pdf
Problem Solving Techniques
Ad

More from Faculty of Computer Science - Free University of Bozen-Bolzano (20)

PDF
From Case-Isolated to Object-Centric Processes - A Tale of two Models
PDF
Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting
PDF
Constraints for Process Framing in Augmented BPM
PDF
PDF
From Case-Isolated to Object-Centric Processes
PDF
Modeling and Reasoning over Declarative Data-Aware Processes
PDF
Soundness of Data-Aware Processes with Arithmetic Conditions
PDF
Modeling and Reasoning over Declarative Data-Aware Processes with Object-Cent...
PDF
Enriching Data Models with Behavioral Constraints
PDF
Representing and querying norm states using temporal ontology-based data access
PDF
Processes and organizations - a look behind the paper wall
PDF
Formal modeling and SMT-based parameterized verification of Data-Aware BPMN
PDF
Modeling and reasoning over declarative data-aware processes with object-cent...
From Case-Isolated to Object-Centric Processes - A Tale of two Models
Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting
Constraints for Process Framing in Augmented BPM
From Case-Isolated to Object-Centric Processes
Modeling and Reasoning over Declarative Data-Aware Processes
Soundness of Data-Aware Processes with Arithmetic Conditions
Modeling and Reasoning over Declarative Data-Aware Processes with Object-Cent...
Enriching Data Models with Behavioral Constraints
Representing and querying norm states using temporal ontology-based data access
Processes and organizations - a look behind the paper wall
Formal modeling and SMT-based parameterized verification of Data-Aware BPMN
Modeling and reasoning over declarative data-aware processes with object-cent...
Ad

Recently uploaded (20)

PDF
Formation of Supersonic Turbulence in the Primordial Star-forming Cloud
PPTX
TOTAL hIP ARTHROPLASTY Presentation.pptx
PPTX
INTRODUCTION TO EVS | Concept of sustainability
PDF
An interstellar mission to test astrophysical black holes
PDF
CAPERS-LRD-z9:AGas-enshroudedLittleRedDotHostingaBroad-lineActive GalacticNuc...
PPTX
2. Earth - The Living Planet Module 2ELS
PPTX
neck nodes and dissection types and lymph nodes levels
PDF
Sciences of Europe No 170 (2025)
PDF
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
PPTX
Cell Membrane: Structure, Composition & Functions
PPTX
Introduction to Fisheries Biotechnology_Lesson 1.pptx
PPTX
cpcsea ppt.pptxssssssssssssssjjdjdndndddd
PPTX
ANEMIA WITH LEUKOPENIA MDS 07_25.pptx htggtftgt fredrctvg
PPTX
Protein & Amino Acid Structures Levels of protein structure (primary, seconda...
PPTX
Taita Taveta Laboratory Technician Workshop Presentation.pptx
PDF
Placing the Near-Earth Object Impact Probability in Context
PPT
protein biochemistry.ppt for university classes
PPTX
2Systematics of Living Organisms t-.pptx
PDF
The scientific heritage No 166 (166) (2025)
PDF
AlphaEarth Foundations and the Satellite Embedding dataset
Formation of Supersonic Turbulence in the Primordial Star-forming Cloud
TOTAL hIP ARTHROPLASTY Presentation.pptx
INTRODUCTION TO EVS | Concept of sustainability
An interstellar mission to test astrophysical black holes
CAPERS-LRD-z9:AGas-enshroudedLittleRedDotHostingaBroad-lineActive GalacticNuc...
2. Earth - The Living Planet Module 2ELS
neck nodes and dissection types and lymph nodes levels
Sciences of Europe No 170 (2025)
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
Cell Membrane: Structure, Composition & Functions
Introduction to Fisheries Biotechnology_Lesson 1.pptx
cpcsea ppt.pptxssssssssssssssjjdjdndndddd
ANEMIA WITH LEUKOPENIA MDS 07_25.pptx htggtftgt fredrctvg
Protein & Amino Acid Structures Levels of protein structure (primary, seconda...
Taita Taveta Laboratory Technician Workshop Presentation.pptx
Placing the Near-Earth Object Impact Probability in Context
protein biochemistry.ppt for university classes
2Systematics of Living Organisms t-.pptx
The scientific heritage No 166 (166) (2025)
AlphaEarth Foundations and the Satellite Embedding dataset

Process Reasoning and Mining with Uncertainty

  • 1. Process Reasoning and Mining Marco Montali Free University of Bozen Bolzano montali@inf.unibz.it with Uncertainty Credits to Anti Alman, Sander J. J. Leemans, 
 Fabrizio M. Maggi, Rafael Peñaloza AdONE 16/05/2022
  • 2. • Fix a fi nite alphabet of atomic tasks • Process execution: fi nite sequence of events on a speci fi c case • Event: atomic task in some position (data/time abstraction) • Execution trace: element from Σ Σ* Process executions Setting the stage… 2
  • 3. Process, trace Setting the stage… 3 Σ*
  • 4. Types of process representations 4
  • 5. Σ* Σ* Σ* Σ* Where is uncertainty? 5 trace process
  • 6. 6 Plan today 1.Infusing declarative/imperative process models with uncertainty 2.Show how we can reason on these models and their traces 3.Use these techniques for stochastic process mining (discovery, conformance checking, monitoring)
  • 8. Constraints declaratively predicating on the execution of activities over time. Examples: Declare [Pesic et al, EDOC07; _ et al, TWEB11] and DCR Graphs [Slaats et al, BPM13]. Declare uses LTL over fi nite traces (LTLf) and fi nite- state automata to provide support for the whole lifecycle: consistency, enacment, monitoring, discovery. Temporal process constraints 8
  • 9. A front-end for linear temporal logic over fi nite traces The Declare framework 9 Crisp semantics of constraints: an execution trace conforms to the model if it satis fi es every constraint in the model. close order 1..1 accept refuse
  • 10. • Best practices: constraints that must hold in the majority, but not necessarily all, cases. 90% of the orders are shipped via truck. • Outlier behaviors: constraints that only apply to very few, but still conforming, cases. Only 1% of the orders are canceled after being paid. • Constraints involving external parties: contain uncontrollable activities for which only partial guarantees can be given. In 8 cases out of 10, the customer accepts the order and also pays for it. Some examples Uncertainty is pervasive 10
  • 11. Crisp and uncertain constraints ProbDeclare 11 close order 1..1 accept refuse {0.8} {0.3} {0.9}
  • 12. Crisp and uncertain constraints ProbDeclare 12 close order 1..1 accept refuse {0.8} {0.3} {0.9}
  • 13. Crisp and uncertain constraints ProbDeclare 13 close order 1..1 accept refuse Crisp! Each trace in the log contains exactly one close order {0.8} {0.3} {0.9}
  • 14. Crisp and uncertain constraints ProbDeclare 14 close order 1..1 accept refuse {0.8} {0.3} {0.9} Uncertain! 90% traces are so that an order is not accepted and refused. In 10% traces the seller changes their mind
  • 15. Crisp and uncertain constraints ProbDeclare 15 close order 1..1 accept refuse Uncertain! 90% traces are so that an order is not accepted and refused. In 10% traces the seller changes their mind {0.8} {0.3} {0.9}
  • 16. • A stochastic language over is a function such that • fi nite if fi nitely many traces get a non-zero probability • A log can be seen as a fi nite stochastic language Σ ρ : Σ* → [0,1] ∑ τ∈Σ* ρ(τ) = 1 ProbDeclare is interpreted over fi nite stochastic languages Formally… 16
  • 17. A ProbDeclare constraint over is a triple , where: • the process condition is an LTLf formula over • the probability operator is one of • the probability reference value p is a rational value in [0,1] A stochastic language satis fi es a ProbDeclare constraint if Σ ⟨φ, ⋈, p⟩ φ Σ ⋈ { = , ≠ , ≤ , ≥ , < , > } ρ ⟨φ, ⋈, p⟩ ∑ τ∈Σ*,τ⊧φ ρ(τ) ⋈ p Semantics of a ProbDeclare constraint Formally… 17
  • 18. A stochastic language satis fi es a ProbDeclare speci fi cation if: • For every crisp constraint and every trace with non-zero probability, we have that • For every probabilistic constraint we have ρ φ τ ∈ Σ* τ ⊧ φ ⟨φ, ⋈, p⟩ ∑ τ∈Σ*,τ⊧φ ρ(τ) ⋈ p From one to many constraints 18 Key challenge: interplay of multiple constraints
  • 19. A constraint scenario picks which probabilistic constraints must hold, and which are violated (i.e., their negated version must hold). It denotes a “process variant”. All in all: up to 2n scenarios, denoting di ff erent “process variants”. Constraint scenarios Dealing with “n” probabilistic constraints 19 close order 1..1 accept refuse {0.8} {0.3} {0.9} 1 2 3
  • 20. A constraint scenario picks which probabilistic constraints must hold, and which are violated (i.e., their negated version must hold). It denotes a “process variant”. All in all: up to 2n scenarios, denoting di ff erent “process variants”. Constraint scenarios Dealing with “n” probabilistic constraints 20 close order 1..1 accept refuse {0.8} {0.3} {0.9} 1 2 3 8 scenarios (1) (2) (3) 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 “1”: satis fi ed “0”: violated
  • 21. Interplay between logic and probabilities Reasoning over scenarios is tricky 21 close order 1..1 accept refuse {0.8} {0.3} {0.9} Logically, there cannot be traces satisfying the crips constraints and also all the uncertain ones. Hence, scenario 111 is inconsistent!
  • 22. • A scenario indicates which constraints holds and which don’t • A constraint holds in a trace if the trace satis fi es the constraint formula • A constraint does not hold in a trace if the trace violates the constraint formula -> satis fi ed the negation of the constraint formula • Scenario characteristic formula: LTLf to the rescue Logical reasoning within scenarios 22 faction/violation of constraints as indicated by the scenario. Three questions immediately arise: (i) how does one check to which scenario(s) a trace belongs? (ii) Can a trace belong to multiple scenarios? (iii) Are all scenarios meaningful, or should we discard some of them? To answer such questions, we provide a logical characterization of scenarios. First and foremost, we introduce a characteristic LTLf formula for a scenario: a trace belongs to a scenario if and only if the trace satisfies the characteristic formula of the scenario. Definition 15. Let M = 〈Σ, C, 〈〈ϕ1, ⊲⊳1, p1〉, . . . , 〈ϕn, ⊲⊳n, pn〉〉〉 be ProbDe- clare model. The characteristic formula induced by a scenario SM b1···bn over M, compactly called SM b1···bn -formula, is the LTLf formula Φ(SM b1···bn ) = ' ψ∈C ψ ∧ ' i∈{1,...,n} ( ) * ) + ϕi if bi = 1 ¬ϕi if bi = 0 (8) ⊳ Definition 16. A trace τ belongs to scenario SM b1···bn if τ |= Φ(SM b1···bn ). Sce-
  • 23. Example 23 close order 1..∗ accept order {0.8} 1 refuse order {0.3} 2 {0.9} 3 1 2 3 consistent? S000 ✸(close ∧ ¬©✸acc) ✸(close ∧ ¬©✸ref) ✸acc ∧ ✸refuse no S001 ✸(close ∧ ¬©✸acc) ✸(close ∧ ¬©✸ref) ¬(✸acc ∧ ✸refuse) yes S010 ✸(close ∧ ¬©✸acc) ✷(close → ©✸ref) ✸acc ∧ ✸refuse no S011 ✸(close ∧ ¬©✸acc) ✷(close → ©✸ref) ¬(✸acc ∧ ✸refuse) yes S100 ✷(close → ©✸acc) ✸(close ∧ ¬©✸ref) ✸acc ∧ ✸refuse no S101 ✷(close → ©✸acc) ✸(close ∧ ¬©✸ref) ¬(✸acc ∧ ✸refuse) yes S110 ✷(close → ©✸acc) ✷(close → ©✸ref) ✸acc ∧ ✸refuse yes S111 ✷(close → ©✸acc) ✷(close → ©✸ref) ¬(✸acc ∧ ✸refuse) no Figure 1: A ProbDeclare model, with 8 constraint scenarios, out of which only 4 are consistent. Recall that each scenario induces a formula that does not simply conjoin the positive/negated variants of the probabilistic constraints, but includes also the conjunction of the formulae for crisp constraints.
  • 24. Interplay between logic and probabilities Reasoning over scenarios is tricky 24 close order 1..1 accept refuse {0.8} {0.3} {0.9} 0.8+0.3 > 1, hence there must be traces where a closed order is accepted and refused (and so the two activities coexist). Hence, scenario 110 must have a non-zero probability!
  • 25. • For n scenarios, let with be the probability that an arbitrary trace belongs to scenario • What are the legitimate probability distributions over scenarios? • May be in fi nitely many • No solution: inconsistent speci fi cation xi i ∈ {0,…,2n−1 } i From probabilistic constraints to probability distributions over scenarios Probabilities of scenarios 25 masses associated to consistent scenarios, we set up a system of inequalities whose solutions constitute all the probability distributions that are compati- ble with the logical and probabilistic characterization of the probabilistic con- straints in the ProbDeclare model of interest. To do so, we associate each scenario to a probability variable, keeping the same naming convention. For example, the probability mass of scenario S001 is represented by variable x001. For M = 〈Σ, C, 〈〈ϕ1, ⊲⊳1, p1〉, . . . , 〈ϕn, ⊲⊳n, pn〉〉〉, we construct the system LM of inequalities using probability variables xi, with i ranging from 0 to 2n − 1 (in binary format): xi ≥ 0 0 ≤ i < 2n (9) % 2n −1 " i=0 xi & = 1 (10) % " i∈{0,...,2n−1}, jth position of i is 1 xi & ⊲⊳j pj 0 ≤ j < n (11) xi = 0 0 ≤ i < 2n , scenario Si is inconsistent (12) The first two lines guarantee that variables xi indeed form a probability distri- bution, being all non-negative and collectively summing up to 1. The schema of inequalities captured in Equation (11) verifies the probability associated to each
  • 26. Example 26 close order 1..1 accept refuse {0.8} {0.3} {0.9} 1 2 3 sign consent close order 1..∗ {0.8} 1 {0.1} 2 1 2 consistent? S00 ¬sign U close ✸(close ∧ ¬©✸sign) yes S01 ¬sign U close ✷(close → ©✸sign) yes S10 ¬close W sign ✸(close ∧ ¬©✸sign) yes S11 ¬close W sign ✷(close → ©✸sign) yes Figure 2: A ProbDeclare model and its 4 constraint scenarios. once the variables above are removed (being them all equal to 0): x001 + x011 + x101 + x110 = 1 x101 + x110 = 0.8 x011 + x110 = 0.3 x001 + x011 + x101 = 0.9 It is easy to see that this system of equations admits only one solution: x001 = 0, x = 0.2, x = 0.7, x = 0.1. This solution witnesses that scenario S
  • 27. Reasoning in a scenario: standard LTLf reasoning: • Inconsistent scenarios get probability 0 (no conforming trace). Constraint probabilities induce probability distribution on scenarios: • System of linear inequalities to compute scenario probabilities. Which scenarios are possible? With which probability? Reasoning over scenarios 27 close order accept refuse {0.8} {0.3} {0.9} 1 2 3 scenario consistent? probability (1) (2) (3) 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 1..1
  • 28. Reasoning in a scenario: standard LTLf reasoning: • Inconsistent scenarios get probability 0 (no conforming trace). Constraint probabilities induce probability distribution on scenarios: • System of linear inequalities to compute scenario probabilities. Which scenarios are possible? With which probability? Reasoning over scenarios 28 close order accept refuse {0.8} {0.3} {0.9} 1 2 3 1..1 scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 1 Y 0 1 0 N 0 1 1 Y 1 0 0 N 1 0 1 Y 1 1 0 Y 1 1 1 N
  • 29. Reasoning in a scenario: standard LTLf reasoning: • Inconsistent scenarios get probability 0 (no conforming trace). Constraint probabilities induce probability distribution on scenarios: • System of linear inequalities to compute scenario probabilities. Which scenarios are possible? With which probability? Reasoning over scenarios 29 close order accept refuse {0.8} {0.3} {0.9} 1 2 3 1..1 scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 0 1 Y 0 1 0 N 0 0 1 1 Y 1 0 0 N 0 1 0 1 Y 1 1 0 Y 1 1 1 N 0
  • 30. scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 0 1 Y 0 0 1 0 N 0 0 1 1 Y 0.2 1 0 0 N 0 1 0 1 Y 0.7 1 1 0 Y 0.1 1 1 1 N 0 Reasoning in a scenario: standard LTLf reasoning: • Inconsistent scenarios get probability 0 (no conforming trace). Constraint probabilities induce probability distribution on scenarios: • System of linear inequalities to compute scenario probabilities. Which scenarios are possible? With which probability? Reasoning over scenarios 30 close order accept refuse {0.8} {0.3} {0.9} 1 2 3 1..1
  • 31. Reasoning in a scenario: standard LTLf reasoning: • Inconsistent scenarios get probability 0 (no conforming trace). Constraint probabilities induce probability distribution on scenarios: • System of linear inequalities to compute scenario probabilities. Which scenarios are possible? With which probability? Reasoning over scenarios 31 accept refuse {0.8} {0.3} {0.9} 1 2 3 scenario consistent? probability (1) (2) (3) 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 1 Y 0 1 0 N 0 1 1 Y 1 0 0 N 1 0 1 Y 1 1 0 Y 1 1 1 N scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 0 1 Y 0 1 0 N 0 0 1 1 Y 1 0 0 N 0 1 0 1 Y 1 1 0 Y 1 1 1 N 0 scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 0 1 Y 0 0 1 0 N 0 0 1 1 Y 0.2 1 0 0 N 0 1 0 1 Y 0.7 1 1 0 Y 0.1 1 1 1 N 0 scenario consistent? probability (1) (2) (3) 0 0 0 N 0 0 0 1 Y 0 0 1 0 N 0 0 1 1 Y 0.2 1 0 0 N 0 1 0 1 Y 0.7 1 1 0 Y 0.1 1 1 1 N 0 1..1 close order
  • 32. Typical discovery procedure 1. Candidate constraints are generated by analysing the structure of the log and the activities contained therein 2. Compute the support per constraint 3. Filter constraints based on support 4. Apply further fi lters based, redundancy, interestingness, vacuity, … Process discovery Scenarios in action 32 Key challenge: consistency only guaranteed if support 100% 3. Candidate formulae are filtered, retaining only th ceeds a given threshold. 4. Further filters are applied, for example considering dancy, interestingness, and vacuity [7, 11, 27]. In this pipeline, the notion of support is typically fo Definition 18. The support of an LTLf formula ϕ in a suppL(ϕ) = ! τ∈L,τ|=ϕ L(τ) |L| To obtain a meaningful Declare model in output, t crucial catch: the formulae that pass all the steps of the an overall inconsistent model. The reason is that formu strictly less than 1 may actually conflict with each oth recognized by the model, which does not keep nor use a to support. Fixing these potential inconsistencies calls t processing techniques [11].
  • 33. Support naturally matches the semantics of probabilistic constraints • Consistency by design if whenever we discover an interesting constraint , we retain its support as probability: • Two implications on “probabilistic interestingness” and “probabilistic over fi tting” φ ⟨φ, = , suppL (φ)⟩ Process discovery Scenarios in action 33
  • 34. Scenarios in action Probabilistic conformance checking 34 close order accept <close order> close order refuse accept refuse scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0
  • 35. Scenarios in action Probabilistic conformance checking 35 scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0 0 0 1 close order accept <close order> close order refuse accept refuse
  • 36. Scenarios in action Probabilistic conformance checking 36 scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0 0 0 1 Violation! close order accept <close order> close order refuse accept refuse
  • 37. Scenarios in action Probabilistic conformance checking 37 scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0 close order accept <close order,
 accept,
 refuse> close order refuse accept refuse
  • 38. Scenarios in action Probabilistic conformance checking 38 scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0 close order accept <close order,
 accept,
 refuse> close order refuse accept refuse 1 1 0
  • 39. Scenarios in action Probabilistic conformance checking 39 scenario probability (1) (2) (3) 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0.2 1 0 0 0 1 0 1 0.7 1 1 0 0.1 1 1 1 0 1 1 0 Conforming!
 (but a rare 
 case) close order accept <close order,
 accept,
 refuse> close order refuse accept refuse
  • 40. Scenarios in action Probabilistic monitoring 40 Whole scenarios have to be considered: one LTLf monitor per scenario. Monitors used in parallel: if multiple return the same verdict, aggregate probability values can be returned for sophisticated feedback. Interpretability of these feedbacks is an interesting open question.
  • 41. Scenarios in action Probabilistic monitoring 41 Fully implemented!
  • 42. From traces to logs Stochastic conformance (granularity: scenario) 42 Log ProbDeclare speci fi cation
  • 43. From traces to logs Stochastic conformance (granularity: scenario) 43 Log ProbDeclare speci fi cation Consistent scenarios
  • 44. From traces to logs Stochastic conformance (granularity: scenario) 44 Log ProbDeclare speci fi cation Consistent scenarios Speci fi cation distribution
  • 45. From traces to logs Stochastic conformance (granularity: scenario) 45 Log ProbDeclare speci fi cation Consistent scenarios Speci fi cation distribution Log distribution
  • 46. From traces to logs Stochastic conformance (granularity: scenario) 46 Log ProbDeclare speci fi cation Consistent scenarios Speci fi cation distribution Log distribution (Earth mover’s) distance Can be re fi ned through trace alignments
  • 48. Process control-flow with Petri nets Minimum requirements 48 review claim claim received check claim info complete? obtain missing info check claim yes no
  • 49. Process control-flow with Petri nets Minimum requirements 49 review claim claim received check claim info complete? obtain missing info check claim yes no review claim check claim obtain missing info check claim labels repeated labels silent transitions
• 50. Unlogged tasks as silent transitions
(Figure from Leemans et al. — Fig. 2: Stochastic net of an order-to-cash process. Weights are presented symbolically. Transition t12, capturing the insert item task that cannot be logged, is modelled as silent.)
• 51. Unlogged tasks as silent transitions
Definition (Labelled Petri net). A labelled Petri net N is a tuple ⟨Q, T, F, ℓ⟩ where: (i) Q is a finite set of places; (ii) T is a finite set of transitions, disjoint from Q (Q ∩ T = ∅); (iii) F ⊆ (Q × T) ∪ (T × Q) is a flow relation connecting places to transitions and transitions to places; (iv) ℓ : T → Σ ∪ {τ} is a labelling function mapping each transition t ∈ T to either a task name from Σ or the silent label τ.
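As a concrete (hypothetical) reading of this definition, a labelled Petri net can be represented directly as a small data structure; the sketch below follows the tuple ⟨Q, T, F, ℓ⟩, modelling the silent label τ as None.

```python
# Minimal sketch of a labelled Petri net <Q, T, F, l> as a plain data structure
# (illustrative; names follow the definition above).
from dataclasses import dataclass

TAU = None  # silent label tau

@dataclass
class LabelledPetriNet:
    places: set        # Q
    transitions: set   # T, disjoint from Q
    flow: set          # F, a set of (place, transition) and (transition, place) pairs
    label: dict        # l : T -> task name from the alphabet, or TAU

    def preset(self, x):
        """Pre-set of x: {y | (y, x) in F}."""
        return {y for (y, z) in self.flow if z == x}

    def postset(self, x):
        """Post-set of x: {y | (x, y) in F}."""
        return {y for (z, y) in self.flow if z == x}
```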
• 52. Runs and traces
• Fix an initial marking (= initial state) and a set of deadlocking final markings (= final states).
• Run: valid sequence of transitions from the initial state to some final state.
• Trace: projection of the run on visible transitions.
• How many runs for a trace? Potentially infinite!
(Figure: the order-to-cash stochastic net of Fig. 2.)
• 53. Execution semantics: via transition systems (interleaving semantics)
A labelled transition system is a tuple ⟨S, s0, Sf, ϱ⟩: a (possibly infinite) set of states S, an initial state s0, a set of accepting states Sf, and a Σ-labelled transition relation ϱ ⊆ S × Σ × S; a run is a finite sequence of transitions leading from s0 to some state in Sf.
For the order-to-cash net: initial state [q0]; final states [q6], [q7], [q8].
(Figure: reachability graph of the order-to-cash net, with states s0…s8 corresponding to markings [q0]…[q8].)
• 54. Stochastic decision making: from nondeterminism to a probability distribution over next markings
• Every transition gets a weight.
• Probability for an enabled transition to fire: its relative weight among all enabled transitions.
• Probability of a run: product of the firing probabilities.
(Figure: the order-to-cash stochastic net with symbolic weights.)
• 55. Stochastic decision making: from nondeterminism to a probability distribution over next markings
Same rules as slide 54, illustrated on the choice between accepting and rejecting an order: accept fires with probability a/(a+r), reject with probability r/(a+r).
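A minimal sketch of this stochastic firing rule (illustrative; the numeric weights for the order-to-cash transitions are hypothetical): the probability of a run is the product, step by step, of the fired transition's weight divided by the total weight of the transitions enabled at that step; the run's trace is its projection on non-silent labels.

```python
# Minimal sketch (hypothetical weights): probability of a run in a stochastic
# labelled Petri net, given for each step the set of transitions enabled there.
TAU = None

weight = {"t01": 1.0, "t12": 4.0, "t15": 1.0, "t21": 1.0, "t23": 3.0,
          "t35": 4.0, "t37": 1.0, "t45": 1.0, "t46": 3.0, "t51": 1.0, "t58": 1.0}
label = {"t01": "open", "t12": TAU, "t15": "cancel", "t21": TAU, "t23": "finalize",
         "t35": "accept", "t37": "reject", "t45": TAU, "t46": "pay",
         "t51": TAU, "t58": "delete"}

def run_probability(run, enabled_at_step):
    """run: list of fired transitions; enabled_at_step: list of sets of
    transitions enabled when the corresponding transition fired."""
    p = 1.0
    for t, enabled in zip(run, enabled_at_step):
        p *= weight[t] / sum(weight[u] for u in enabled)
    return p

def trace_of(run):
    """Projection of the run on visible (non-silent) transitions."""
    return [label[t] for t in run if label[t] is not TAU]

run = ["t01", "t12", "t23", "t35", "t46"]  # open, insert item, finalize, accept, pay
enabled = [{"t01"}, {"t12", "t15"}, {"t21", "t23"}, {"t35", "t37"},
           {"t45", "t46", "t51"}]
print(trace_of(run), run_probability(run, enabled))
# ['open', 'finalize', 'accept', 'pay']
# probability 1 * (4/5) * (3/4) * (4/5) * (3/5) = 0.288
```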
• 56. Execution semantics: via stochastic transition systems
Initial state: [q0]; final states: [q6], [q7], [q8].
(Figure — Fig. 3: Stochastic reachability graph of the order-to-cash net. Each edge carries a task label (or τ) and a firing probability expressed via relative weights, e.g. ρi = i/(i+c), ρc = c/(i+c), ρm = m/(m+f), ρf = f/(m+f), ρa = a/(a+r), ρr = r/(a+r), ρb = b/(b+p+d), ρd = d/(b+p+d), ρp = p/(b+p+d).)
• 57. Key questions
1. Probability of a trace?
2. Verification of qualitative properties: probability that the net satisfies a given declarative specification?
3. Conformance to a ProbDeclare specification?
• 58. Key questions and answers
1. Probability of a trace? 2. Verification of qualitative properties: probability that the net satisfies a given declarative specification? 3. Conformance to a ProbDeclare specification?
A. Warm-up: outcome probability. B. Play with automata. C. Reduce all three questions above to A.
• 59. Outcome probability: probability of reaching a final state
Idea: labels are not important → the stochastic transition system behaves like a Markov chain, so outcome probabilities are Markov-chain exit (absorption) distributions → they can be solved analytically.
Example (probability that a created order in the order-to-cash net eventually completes the process by being paid, i.e., OUTCOME-PROB(Norder, [q6])): encode the stochastic reachability graph of Fig. 3 as a system of linear equations, one variable per state:
  x_s6 = 1    x_s7 = 0    x_s8 = 0
  x_s5 = x_s8
  x_s4 = ρb·x_s1 + ρd·x_s5 + ρp·x_s6
  x_s3 = ρa·x_s4 + ρr·x_s7
  x_s2 = ρm·x_s1 + ρf·x_s3
  x_s1 = ρi·x_s2 + ρc·x_s5
  x_s0 = x_s1
This yields x_s0 = ρi·ρf·ρa·ρp / (1 − ρi·ρm − ρi·ρf·ρa·ρb).
The system always has at least one solution; in the presence of livelock markings it may admit infinitely many, in which case the least committing (minimal non-negative) solution is the intended one.
• 60. The issue of livelocks
(Figure — Fig. 4: a small stochastic net with transitions a, b, c, d, e and its reachability graph; initial marking [q0], final marking [q1]. States s2 and s3 are livelock markings: from them, the final marking [q1] can never be reached.)
• 61. General solution
Encode OUTCOME-PROB(N, F), with RG(N) = ⟨S, s0, Sf, ϱ, p⟩, as a linear optimisation problem with one variable x_s per state s, and return x_s0 from its minimal non-negative solution:
• Base case: if s has no successors (deadlock marking), then x_s = 1 if s ∈ F, and x_s = 0 otherwise (witnessing that F cannot be reached).
• Inductive case: if s has at least one successor, then x_s equals the sum of the variables of its successor states, each weighted by the probability of moving to that successor.
This implicitly assigns probability 0 to livelock markings, which by definition can never reach a final marking. In fact, one can turn the problem into a system of equalities that is guaranteed to have exactly one solution:
(4) x_s = 1 for each deadlock marking s ∈ F;
(5) x_s = 0 for each deadlock marking s ∈ S \ F;
(6) x_s = 0 for each livelock marking s ∈ S;
(7) x_s = Σ⟨s,l,s′⟩∈succ(s) p(⟨s, l, s′⟩)·x_s′ for each remaining marking s ∈ S.
(Checking whether a marking is a livelock can be done directly over RG(N).)
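For concreteness, here is a minimal sketch (illustrative, not the paper's implementation) that instantiates the symbolic weights of the order-to-cash reachability graph with the same hypothetical numbers as in the earlier run-probability sketch and solves the resulting system with numpy; with these weights, the probability that an order is eventually paid comes out at roughly 0.41, matching the closed-form expression on slide 59.

```python
# Minimal sketch (hypothetical numeric weights): solve OUTCOME-PROB by encoding
# the stochastic reachability graph as a linear system and solving it exactly.
import numpy as np

# edges of the stochastic reachability graph: state -> [(probability, successor)];
# probabilities instantiate the symbolic weights, e.g. rho_i = i/(i+c) = 0.8
edges = {
    "s0": [(1.0, "s1")],
    "s1": [(0.8, "s2"), (0.2, "s5")],                # tau (insert item) vs cancel
    "s2": [(0.25, "s1"), (0.75, "s3")],              # tau vs finalize
    "s3": [(0.8, "s4"), (0.2, "s7")],                # accept vs reject
    "s4": [(0.2, "s1"), (0.2, "s5"), (0.6, "s6")],   # tau, tau, pay
    "s5": [(1.0, "s8")],                             # delete
    "s6": [], "s7": [], "s8": [],                    # deadlock markings
}
target = {"s6"}      # final markings to reach (the order gets paid)
livelock = set()     # no livelock markings in this reachability graph

states = sorted(edges)
idx = {s: k for k, s in enumerate(states)}
A = np.zeros((len(states), len(states)))
b = np.zeros(len(states))
for s in states:
    k = idx[s]
    if s in target and not edges[s]:
        A[k, k], b[k] = 1.0, 1.0      # x_s = 1 for final deadlock markings
    elif not edges[s] or s in livelock:
        A[k, k], b[k] = 1.0, 0.0      # x_s = 0 for non-final deadlocks / livelocks
    else:
        A[k, k] = 1.0
        for p, s2 in edges[s]:
            A[k, idx[s2]] -= p        # x_s - sum_p p * x_s' = 0

x = np.linalg.solve(A, b)
print(round(x[idx["s0"]], 4))         # ~0.4091: probability that an order is paid
```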
• 62. Verification of qualitative properties
1. Express the property as a DFA (works for traces, LTLf, LDLf, regular expressions).
2. Extend the DFA with τ-self-loops, expressing that silent transitions are always ok.
3. Take the cross-product of the extended DFA and the input stochastic transition system.
4. Solve outcome probability for all final states of the resulting transition system.
Key advantage: no ad-hoc techniques to get rid of silent transitions!
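A minimal sketch of steps 2–3 (hypothetical data structures, not the paper's code): pair each DFA state with each state of the stochastic transition system, move jointly on visible labels, and let the DFA stay put on τ; edges on which the DFA has no move are dropped, since those continuations cannot satisfy the property.

```python
# Minimal sketch of the DFA x stochastic-transition-system product.
# DFA: dfa_delta[(dfa_state, label)] -> dfa_state, plus a set of accepting states.
# STS: sts_edges[state] -> [(probability, label_or_TAU, successor)].
TAU = None

def product(dfa_delta, dfa_init, dfa_accepting, sts_edges, sts_init, sts_final):
    prod_edges, prod_final = {}, set()
    stack, seen = [(dfa_init, sts_init)], {(dfa_init, sts_init)}
    while stack:
        q, s = stack.pop()
        succs = []
        for prob, lab, s2 in sts_edges.get(s, []):
            # step 2: silent transitions leave the DFA state untouched
            q2 = q if lab is TAU else dfa_delta.get((q, lab))
            if q2 is None:        # DFA has no move on this label: drop the edge
                continue
            succs.append((prob, lab, (q2, s2)))
            if (q2, s2) not in seen:
                seen.add((q2, s2))
                stack.append((q2, s2))
        prod_edges[(q, s)] = succs
        if q in dfa_accepting and s in sts_final:
            prod_final.add((q, s))
    return prod_edges, (dfa_init, sts_init), prod_final

# The probability that the net satisfies the property is then obtained by
# solving OUTCOME-PROB for prod_final on the product system, exactly as in
# the previous linear-system sketch.
```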
• 63. Example: trace probability
Probability of: <open, fin, acc, fin, acc, pay>
(Figure: the DFA of the trace, extended with τ-self-loops, and its product with the stochastic reachability graph of the order-to-cash net.)
  • 64. Applications in stochastic process mining Probabilistic trace alignment 64
• 65. Conformance to a ProbDeclare specification: induced distribution via qualitative verification
ProbDeclare specification → consistent scenarios → specification distribution; stochastic Petri net → distribution induced over the same scenarios (via qualitative verification); the two distributions are then compared via the (Earth mover's) distance.
• 66. Conclusion
Importance of incorporating uncertainty into declarative and procedural process models.
Combined reasoning via a careful, loosely coupled interplay of reasoning about time/dynamics and reasoning about probabilities.
Direct impact on a variety of stochastic process mining tasks.
Main techniques: all implemented!
"The future is uncertain, but the end is always near" (Jim Morrison)