© Predictable Network Solutions Ltd 2017 www.pnsol.com
PEnDAR: Cost/performance-driven V&V for distributed/cyber-physical systems
Presentation for the Software Validation and Verification for Complex Systems Workshop, May 2017
Goals
• Consider how to enable Validation & Verification
of cost and performance
• for distributed and hierarchical systems
• using developments of well-tried tools
• supporting both initial and ongoing incremental
development
• Provide early visibility of cost/performance
hazards
• avoiding costly failures
• maximising the chances of successful in-budget
delivery of acceptable end-user outcomes
Partners
Supported by:
PEnDAR project
Focus on performance
Functional correctness is not enough
Nature of performance
• In an ‘ideal world’, systems would always respond instantaneously
• and without exceptions/failures/errors
• In practice this doesn’t happen
• there is always some delay and some chance of failure: some impairment
• Thus performance is a privation
• the absence of impairment
• like ‘darkness’ or ‘silence’
• Quantity also matters
• require a certain rate or volume of responses with a given bound on
impairment
Sources of impairment
• Causality: information takes time, whether communicated or computed
• Synchronisation
• Explicit: communication, data dependency
• Implicit: resource sharing
• Exclusive/discrete (locks etc.): long timescales
• Statistical/continuous (CPU, interface, …): short timescales
• Imperfection
• Discrete: exceptions, failures
• Statistical: resource exhaustion
Causality and explicit synchronisation can be captured by a process algebra; adding statistical resource sharing requires a stochastic process algebra; the ∆Q framework encompasses all of these, including imperfection.
Measure of performance: ∆Q
• ∆Q is a measure of the ‘quality impairment’ of an outcome
• The extent of deviation from ‘instantaneous and infallible’
• Nothing in the real world is perfect so ∆Q always exists
• ∆Q is conserved
• A delayed outcome can’t be ‘undelayed’
• A failed outcome can’t be ‘unfailed’
• ∆Q can be traded
• E.g. accept more delay in return for more certainty of completion
• ∆Q has an algebra
• Can manipulate it mathematically
Representation of ∆Q
• ∆Q can be represented with an improper random variable
• Combines continuous and discrete probabilities
• Thus encompasses normal behaviour and exceptions/failures in one model
• ∆Q is composable
• Supports hierarchical V&V
[Chart: improper CDF of response time – the tangible mass encodes the distribution of response time; the intangible mass (the gap below cumulative probability 1.0) encodes the probability of exception/failure.]
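The composability of ∆Q can be illustrated with a small sketch (hypothetical, not part of the PEnDAR toolset): an improper distribution is carried as delay samples plus an intangible mass, and sequential composition convolves the delays while combining the failure probabilities.

```python
import numpy as np

class DeltaQ:
    """A ∆Q as an empirical improper distribution: delay samples plus an
    intangible mass, i.e. the probability that the outcome never completes."""
    def __init__(self, delays, intangible=0.0):
        self.delays = np.asarray(delays, dtype=float)
        self.intangible = float(intangible)

    def cdf(self, t):
        # Improper CDF: tops out at 1 - intangible rather than at 1
        return (1.0 - self.intangible) * np.mean(self.delays <= t)

    def then(self, other, n=10_000, seed=0):
        # Sequential composition: delays add (a Monte Carlo convolution)
        # and the chances of failure combine
        rng = np.random.default_rng(seed)
        total = rng.choice(self.delays, n) + rng.choice(other.delays, n)
        fail = 1.0 - (1.0 - self.intangible) * (1.0 - other.intangible)
        return DeltaQ(total, intangible=fail)

# Two illustrative subsystems with 1% and 2% failure probability
q1 = DeltaQ([1.0, 2.0, 3.0], intangible=0.01)
q2 = DeltaQ([0.5, 1.5], intangible=0.02)
q = q1.then(q2)  # 'q1 followed by q2'
```

Because the composed object is again a ∆Q, the same representation can be applied at each level of a hierarchical decomposition.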
Outline of a coherent methodology
Performance/resource analysis
[Diagram: three subsystems with shared resources; the remainder of the system is modelled as a ∆Q.]
Starting with a functional decomposition:
• Take each subsystem in isolation
• analyse performance
• modelling remainder of system as ∆Q
• quantify resource consumption
• may be dependent on the ∆Q
• Examine resource sharing
• within system – quantify resource costs
• between systems – quantify opportunity cost
• Successive refinements
• consider couplings
• iterate to fixed point
Quantitative Timeliness Agreement (QTA): the relationship between the applied demand and the delivered quality impairment
Quantifying intent
• The key challenge is to establish quantified intentions
• For outcomes/resources/costs
• Then a variety of mathematical techniques can be applied
• queuing theory
• large deviation theory
• ∆Q algebra
• This is “only rocket science”
• Not brain surgery!
• Set of tools already developed
Quantifying timeliness
Outcome requirement: suppose we had a specification for how long it’s acceptable to wait for a system outcome:
• 50% of responses within 5 seconds
• 95% of responses within 10 seconds
• 99.9% of responses within 15 seconds
• 0.1% failure rate
This can be represented by an improper CDF.
[Chart: the requirement as a step-wise improper CDF over response time, levelling off at 0.999.]
Meeting a timeliness requirement
• Suppose the black line shows the delivered CDF
• From measurement, simulation or analysis
• This is everywhere above and to the left of the requirement curve
• This means that the timeliness requirement is satisfied
• If not, there is a performance hazard
[Chart: delivered CDF plotted against the requirement CDF over response time.]
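The above-and-left check can be sketched in a few lines (an illustrative fragment, not the project’s tooling), treating the requirement as a set of (time bound, minimum cumulative probability) points:

```python
import numpy as np

# Requirement points: 50% within 5s, 95% within 10s, 99.9% within 15s
requirement = [(5.0, 0.50), (10.0, 0.95), (15.0, 0.999)]

def meets_requirement(delays, failure_rate, req):
    """True iff the delivered improper CDF lies on or above (i.e. above
    and to the left of) the requirement curve at every specified point."""
    delays = np.asarray(delays, dtype=float)
    tangible = 1.0 - failure_rate  # probability mass that completes at all
    return all(tangible * np.mean(delays <= t) >= p for t, p in req)

# Delivered response times from measurement or simulation (illustrative)
rng = np.random.default_rng(1)
measured = rng.gamma(shape=2.0, scale=1.2, size=50_000)
ok = meets_requirement(measured, 0.0005, requirement)
```

If any point fails, that point identifies where the performance hazard lies.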
Validating performance requirements
1. Decompose the performance requirement following system structure
• Using engineering judgment/best practice/cosmic constraints
• Creates initial subsystem requirements
2. Validate the decomposition by recombining via the behaviour
• Formally and automatically checkable
• Can be part of continuous integration
• Captures interactions and couplings
• Necessary and sufficient:
• IF all subsystems function correctly and integrate properly
• AND all subsystems satisfy their performance requirements
• THEN the overall system will meet its performance requirement
• Apply this hierarchically until
• Behaviour is trivially provable OR
• Have a complete set of testable subsystem verification/acceptance criteria
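Step 2 can be made concrete as a CI-style check (a hypothetical sketch; the names and distributions are illustrative): recombine the subsystems’ delay distributions according to the behaviour, here simply ‘A then B’, and test the result against the system requirement.

```python
import numpy as np

# System-level requirement points: (time bound, minimum probability)
system_req = [(5.0, 0.50), (10.0, 0.95)]

def compose_sequential(a, b, n=100_000, seed=0):
    """Recombine two subsystem delay distributions under the behaviour
    'A then B': the total delay is the sum (a Monte Carlo convolution)."""
    rng = np.random.default_rng(seed)
    return rng.choice(a, n) + rng.choice(b, n)

def satisfies(delays, req):
    delays = np.asarray(delays, dtype=float)
    return all(np.mean(delays <= t) >= p for t, p in req)

# Delay samples for each subsystem, e.g. from its own verification run
rng = np.random.default_rng(2)
sub_a = rng.exponential(scale=1.0, size=50_000)
sub_b = rng.exponential(scale=1.5, size=50_000)
combined = compose_sequential(sub_a, sub_b)
ok = satisfies(combined, system_req)  # does the decomposition 'add up'?
```

If the check fails, the subsystem budgets must be renegotiated before integration, which is exactly the early visibility the method aims for.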
Capturing interactions
[Diagram: outcomes delivered versus resources consumed, each subject to variability and to exception/failure. Scale enters via distance, number, time, space and density, and via schedulability and capacity; exceptions/failures may be externally created, and their propagation, mitigation and system impact must all be captured.]
Summary
• System performance validation consists of:
• Analysing the interaction of system behaviour and subsystem performance
requirements
• Showing that this ‘adds up’ to meet the quantified requirements, expressed as Quantitative Timeliness Agreements (QTAs)
• System performance verification consists of showing that subsystems
meet their QTAs
• By analysis and/or measurement of ∆Q of the subsystems’ observable
behaviour
• This provides acceptance and/or contractual criteria for third-party
subsystems or services
• Substantially reduces performance integration risks and hence re-work
Project findings
Project methodology
• Investigate system-of-system use cases
• Extract key aspects by talking to multiple stakeholders inside VF
• Consider barriers, costs and benefits of applying performance V&V
• Consider application of performance V&V within established
methodologies
• Automotive etc.
• Application with TVS toolchain
• Run an industry focus group to explore issues and validate findings
• Questionnaire
• Webinars
• Interviews
Key questions
1. Can the well-tried tools be adapted to be used outside a
consultancy model?
• Yes – performance validation and verification is practical in appropriate
contexts
2. Can these tools be applied within existing V&V methodologies for
automotive etc.?
• Yes – current approaches can thereby be extended to include V&V of system performance
3. Do the tools have application beyond V&V?
• Yes – they can be used earlier in the SDLC to deliver significant benefits, but
there are organisational/process barriers to overcome
Interaction with the System Development Life Cycle
Support stages of the SDLC:
• Design
• Feasibility analysis
• Hierarchical decomposition
• Subsystem acceptance criteria
• Verification
• Checking delivery of quantified outcomes
• Evaluating resource usage
• Re-verification during system lifetime
• Validation
• Quantification of performance criteria
• Checking coverage and consistency
Quantify hazards:
• Failure to meet outcome requirements
• Physical constraints
• Schedulability constraints
• Supply chain constraints
• Failure to meet resource constraints
• Scaling
• Correlations
Benefits
• Avoid infeasible developments
• ‘fail early’
• prune blind alleys
• Address scalability early
• including real-world constraints
• avoid ‘heavy tail’
• See the whole risk landscape
• not just the ‘first problem’
• Be able to write a safety case
• Even for shared-resource distributed systems
• Can handle subsystems with undocumented characteristics
• Mitigate this with in-life measurement of ∆Q driven by the validation
• Forewarned is forearmed
• Inexpensive, focused data rather than a wide-angle ‘big data’ approach
• Can integrate with current V&V toolchains
• E.g. automotive
Exploitation: barriers
• Need to ‘quantify intent’
• may meet resistance
• Effort in adopting new tools and techniques
• Processes and procedures may change
• Upfront work needed before “real” development starts
• may not fit management expectations/metrics of ‘progress’
• V&V process models that leave integration to the end
• blocks opportunity to validate performance decomposition at the outset
Exploitation: opportunities
• Create progress metrics
• Capturing successive risk reduction
• Support management and engineering
• Assist customers concerned about:
• Risks of bad performance
• to reputation
• to insurability
• to safety
• Costs of verifying and scaling up a prototype
• Need a performance model
• Paper submitted to the IEEE Design & Test special issue
• Waiting for decision
Thank you!
If you would like further details or want to discuss potential applications, please contact us at:
info@pnsol.com
More Related Content

PPTX
PEnDAR webinar 2 with notes
PPTX
PEnDAR webinar 1 with notes
PDF
Don't estimate - forecast
PPS
Critical Chain Project Management
PPTX
Mortgage Stability 20120420
PPTX
Agility is the tool gilb vilnius 9 dec 2013
PDF
Running successful agile projects
PEnDAR webinar 2 with notes
PEnDAR webinar 1 with notes
Don't estimate - forecast
Critical Chain Project Management
Mortgage Stability 20120420
Agility is the tool gilb vilnius 9 dec 2013
Running successful agile projects

What's hot (19)

PDF
Next Gen Continuous Delivery: Connecting Business Initiatives to the IT Roadmap
PPTX
Estimation – a waste of time master 2013 sdc gothenburg w hp rules
PDF
Sre summary
PPTX
Illinois Technology Association Tech Talk
PDF
Building a Compelling Business Case for Continuous Delivery
PPTX
Driving Continuous Delivery Transformation in a Data-Driven Way
PDF
Continuous delivery best practices and essential tools
PDF
ODD+PC: How to Get Stuff Right
PPTX
Webinar: 5 Steps To The Perfect Storage Refresh
PPTX
Test Data Management: The Underestimated Pain
PPTX
SRE-iously: Defining the Principles, Habits, and Practices of Site Reliabilit...
PDF
Assurance Not just about the bugs Pt2
PPTX
Webinar - Devops platform for the evolving enterprise
PDF
IT Operations Consulting
PPTX
SRE-iously! Defining the Principles, Habits, and Practices of Site Reliabilit...
DOC
Cevn Vibert Testimonials
PPT
VeeShell presentation
PPTX
Problem management foundation - Engineering
PDF
Telecoms Evangelist no.2
Next Gen Continuous Delivery: Connecting Business Initiatives to the IT Roadmap
Estimation – a waste of time master 2013 sdc gothenburg w hp rules
Sre summary
Illinois Technology Association Tech Talk
Building a Compelling Business Case for Continuous Delivery
Driving Continuous Delivery Transformation in a Data-Driven Way
Continuous delivery best practices and essential tools
ODD+PC: How to Get Stuff Right
Webinar: 5 Steps To The Perfect Storage Refresh
Test Data Management: The Underestimated Pain
SRE-iously: Defining the Principles, Habits, and Practices of Site Reliabilit...
Assurance Not just about the bugs Pt2
Webinar - Devops platform for the evolving enterprise
IT Operations Consulting
SRE-iously! Defining the Principles, Habits, and Practices of Site Reliabilit...
Cevn Vibert Testimonials
VeeShell presentation
Problem management foundation - Engineering
Telecoms Evangelist no.2
Ad

Similar to PEnDAR: software v&v for complex systems (20)

PPTX
Time-resource v&v for complex systems
PDF
Webinar: Demonstrating Business Value for DevOps & Continuous Delivery
PDF
Get Loose! Microservices and Loosely Coupled Architectures
PDF
Get Loose! Microservices and Loosely Coupled Architectures
PPTX
Getting Started with ThousandEyes Proof of Concepts
PDF
Raising Your Game: Maximizing Uptime in the Multi-cloud
PPTX
Getting Started with ThousandEyes Proof of Concepts
PDF
A Better, Faster Pipeline for Software Delivery
PPTX
Getting Started With ThousandEyes Proof of Concepts: End User Digital Experience
PPTX
Getting Started with ThousandEyes Proof of Concepts
PDF
Introduction to 5w’s of DevOps
PPTX
CWIN17 london delivering devops and release automation in fs - duncan bradf...
PDF
How to build confidence in your release cycle
PDF
DevOps at Crevise Technologies
PDF
Deliver on the Promise of Agile and DevOps Transformations
PDF
OCSL - VMware, vSphere Webinar May 2013
PPTX
Getting Demo & POV Ready
PPTX
How to achieve security, reliability, and productivity in less time
PDF
Performance Testing Cloud-Based Systems
PDF
Using Lean Thinking to identify and address Delivery Pipeline bottlenecks
Time-resource v&v for complex systems
Webinar: Demonstrating Business Value for DevOps & Continuous Delivery
Get Loose! Microservices and Loosely Coupled Architectures
Get Loose! Microservices and Loosely Coupled Architectures
Getting Started with ThousandEyes Proof of Concepts
Raising Your Game: Maximizing Uptime in the Multi-cloud
Getting Started with ThousandEyes Proof of Concepts
A Better, Faster Pipeline for Software Delivery
Getting Started With ThousandEyes Proof of Concepts: End User Digital Experience
Getting Started with ThousandEyes Proof of Concepts
Introduction to 5w’s of DevOps
CWIN17 london delivering devops and release automation in fs - duncan bradf...
How to build confidence in your release cycle
DevOps at Crevise Technologies
Deliver on the Promise of Agile and DevOps Transformations
OCSL - VMware, vSphere Webinar May 2013
Getting Demo & POV Ready
How to achieve security, reliability, and productivity in less time
Performance Testing Cloud-Based Systems
Using Lean Thinking to identify and address Delivery Pipeline bottlenecks
Ad

Recently uploaded (20)

PDF
Design Guidelines and solutions for Plastics parts
PPTX
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PDF
Abrasive, erosive and cavitation wear.pdf
PPT
Total quality management ppt for engineering students
PDF
22EC502-MICROCONTROLLER AND INTERFACING-8051 MICROCONTROLLER.pdf
PDF
EXPLORING LEARNING ENGAGEMENT FACTORS INFLUENCING BEHAVIORAL, COGNITIVE, AND ...
PDF
III.4.1.2_The_Space_Environment.p pdffdf
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PDF
Categorization of Factors Affecting Classification Algorithms Selection
PDF
COURSE DESCRIPTOR OF SURVEYING R24 SYLLABUS
PPTX
Artificial Intelligence
PPTX
Information Storage and Retrieval Techniques Unit III
PDF
August 2025 - Top 10 Read Articles in Network Security & Its Applications
PDF
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
PPTX
communication and presentation skills 01
PDF
Visual Aids for Exploratory Data Analysis.pdf
PDF
distributed database system" (DDBS) is often used to refer to both the distri...
PPTX
"Array and Linked List in Data Structures with Types, Operations, Implementat...
PPTX
Nature of X-rays, X- Ray Equipment, Fluoroscopy
Design Guidelines and solutions for Plastics parts
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
R24 SURVEYING LAB MANUAL for civil enggi
Abrasive, erosive and cavitation wear.pdf
Total quality management ppt for engineering students
22EC502-MICROCONTROLLER AND INTERFACING-8051 MICROCONTROLLER.pdf
EXPLORING LEARNING ENGAGEMENT FACTORS INFLUENCING BEHAVIORAL, COGNITIVE, AND ...
III.4.1.2_The_Space_Environment.p pdffdf
Automation-in-Manufacturing-Chapter-Introduction.pdf
Categorization of Factors Affecting Classification Algorithms Selection
COURSE DESCRIPTOR OF SURVEYING R24 SYLLABUS
Artificial Intelligence
Information Storage and Retrieval Techniques Unit III
August 2025 - Top 10 Read Articles in Network Security & Its Applications
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
communication and presentation skills 01
Visual Aids for Exploratory Data Analysis.pdf
distributed database system" (DDBS) is often used to refer to both the distri...
"Array and Linked List in Data Structures with Types, Operations, Implementat...
Nature of X-rays, X- Ray Equipment, Fluoroscopy

PEnDAR: software v&v for complex systems

  • 1. © Predictable Network Solutions Ltd 2017 www.pnsol.com PEnDAR: Cost/performance-driven V&V for distributed/cyber-physical systems Presentation for Software Validation and Verification for Complex Systems Workshop, May 2017
  • 2. © Predictable Network Solutions Ltd 2017 www.pnsol.com 2 Goals • Consider how to enable Validation & Verification of cost and performance • for distributed and hierarchical systems • using developments of well-tried tools • supporting both initial and ongoing incremental development • Provide early visibility of cost/performance hazards • avoiding costly failures • maximising the chances of successful in-budget delivery of acceptable end-user outcomes Partners Supported by: PEnDAR project
  • 3. © Predictable Network Solutions Ltd 2017 www.pnsol.com Focus on performance Functional correctness is not enough
  • 4. © Predictable Network Solutions Ltd 2017 4 www.pnsol.com Nature of performance • In an ‘ideal world’, systems would always respond instantaneously • and without exceptions/failures/errors • In practice this doesn’t happen • there is always some delay and some chance of failure: some impairment • Thus performance is a privation • the absence of impairment • like ‘darkness’ or ‘silence’ • Quantity also matters • require a certain rate or volume of responses with a given bound on impairment
  • 5. © Predictable Network Solutions Ltd 2017 www.pnsol.com 5Sources of impairment Causality Information: Takes time! Communicated Computed
  • 6. © Predictable Network Solutions Ltd 2017 www.pnsol.com 6Sources of impairment Synchronisation Implicit: resource sharing Exclusive Discrete: Locks etc Long timescales Statistical Continuous: CPU, interface… Short timescales Explicit Communication Data dependency Causality Information: Takes time! Communicated Computed
  • 7. © Predictable Network Solutions Ltd 2017 www.pnsol.com 7Sources of impairment Synchronisation Implicit: resource sharing Exclusive Discrete: Locks etc Long timescales Statistical Continuous: CPU, interface… Short timescales Explicit Communication Data dependency Causality Information: Takes time! Communicated Computed Process algebra
  • 8. © Predictable Network Solutions Ltd 2017 www.pnsol.com 8Sources of impairment Synchronisation Implicit: resource sharing Exclusive Discrete: Locks etc Long timescales Statistical Continuous: CPU, interface… Short timescales Explicit Communication Data dependency Causality Information: Takes time! Communicated Computed Stochastic process algebra
  • 9. © Predictable Network Solutions Ltd 2017 www.pnsol.com 9Sources of impairment Synchronisation Implicit: resource sharing Exclusive Discrete: Locks etc Long timescales Statistical Continuous: CPU, interface… Short timescales Explicit Communication Data dependency Causality Information: Takes time! Communicated Computed Imperfection Discrete Statistical Exceptions Failures Resource exhaustion Stochastic process algebra
  • 10. © Predictable Network Solutions Ltd 2017 www.pnsol.com 10Sources of impairment Synchronisation Implicit: resource sharing Exclusive Discrete: Locks etc Long timescales Statistical Continuous: CPU, interface… Short timescales Explicit Communication Data dependency Causality Information: Takes time! Communicated Computed Imperfection Discrete Statistical Exceptions Failures Resource exhaustion ∆Q Framework
  • 11. © Predictable Network Solutions Ltd 2017 11 www.pnsol.com Measure of performance: ∆Q • ∆Q is a measure of the ‘quality impairment’ of an outcome • The extent of deviation from ‘instantaneous and infallible’ • Nothing in the real world is perfect so ∆Q always exists • ∆Q is conserved • A delayed outcome can’t be ‘undelayed’ • A failed outcome can’t be ‘unfailed’ • ∆Q can be traded • E.g. accept more delay in return for more certainty of completion • ∆Q has an algebra • Can manipulate it mathematically
  • 12. © Predictable Network Solutions Ltd 2017 www.pnsol.com 12 ∆Q can be represented with an improper random variable • Combines continuous and discrete probabilities • Thus encompasses normal behaviour and exceptions/failures in one model ∆Q is composable • Supports hierarchical V&V Representation of ∆Q 0.1 0.2 0.3 0.4 0.6 0.5 0.7 0.8 0.9 1.0 0.0 Cumulativeprobability Response time 2 4 6 8 10 12 14 16 Tangible mass encodes distribution of response time Intangible mass encodes probability of exception/failure
  • 13. © Predictable Network Solutions Ltd 2017 www.pnsol.com Outline of a coherent methodology
  • 14. © Predictable Network Solutions Ltd 2017 www.pnsol.com 14Performance/resource analysis Sub- system Sub- system Sub- system ∆Q Shared resources Starting with a functional decomposition: • Take each subsystem in isolation • analyse performance • modelling remainder of system as ∆Q • quantify resource consumption • may be dependent on the ∆Q • Examine resource sharing • within system – quantify resource costs • between systems – quantify opportunity cost • Successive refinements • consider couplings • iterate to fixed point Quantitative Timeliness Agreement
  • 15. © Predictable Network Solutions Ltd 2017 15 www.pnsol.com Quantifying intent • The key challenge is to establish quantified intentions • For outcomes/resources/costs • Then a variety of mathematical techniques can be applied • queuing theory • large deviation theory • ∆Q algebra • This is “only rocket science” • Not brain surgery! • Set of tools already developed
  • 16. © Predictable Network Solutions Ltd 2017 16 www.pnsol.com 16 Quantifying timeliness Outcome requirement: Suppose we had a specification for how long it’s acceptable to wait for a system outcome: • 50% of responses within 5 seconds • 95% of responses within 10 seconds • 99.9% of responses within 15s • 0.1% failure rate This can be represented by an improper CDF 0.1 0.2 0.3 0.4 0.6 0.5 0.7 0.8 0.9 1.0 0.0 Cumulativeprobability 2 4 6 8 10 12 14 16 Response time
  • 17. © Predictable Network Solutions Ltd 2017 www.pnsol.com Click to edit Master title style 17 • Suppose the black line shows the delivered CDF • From measurement, simulation or analysis • This is everywhere above and to the left of the requirement curve • This means that the timeliness requirement is satisfied • If not, there is a performance hazard Meeting a timeliness requirement 0.1 0.2 0.3 0.4 0.6 0.5 0.7 0.8 0.9 1.0 0.0 2 4 6 8 10 12 14 16 Cumulativeprobability Response time
  • 18. © Predictable Network Solutions Ltd 2017 www.pnsol.com 18 1. Decompose the performance requirement following system structure • Using engineering judgment/best practice/cosmic constraints • Creates initial subsystem requirements 2. Validate the decomposition by re- combining via the behaviour • Formally and automatically checkable • Can be part of continuous integration • Captures interactions and couplings • Necessary and sufficient: • IF all subsystems function correctly and integrate properly • AND all subsystems satisfy their performance requirements • THEN the overall system will meet its performance requirement • Apply this hierarchically until • Behaviour is trivially provable OR • Have a complete set of testable subsystem verification/acceptance criteria Validating performance requirements
  • 19. © Predictable Network Solutions Ltd 2017 www.pnsol.com Capturing interactions
  • 20. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 20 Outcomes Delivered Resources Consumed Variability Exception /failure Externally created Mitigation System impact Propagation Scale Schedulability Capacity Distance Number Time Space Density
  • 21. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 21 Outcomes Delivered Resources Consumed Variability Exception /failure Scale
  • 22. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 22 Outcomes Delivered Resources Consumed Variability Exception /failure Scale Distance Number Time Space Schedulability Capacity Density
  • 23. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 23 Outcomes Delivered Resources Consumed Variability Exception /failure Mitigation Propagation Scale Schedulability Capacity Distance Number Time Space Density
  • 24. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 24 Outcomes Delivered Resources Consumed Variability Exception /failure Externally created Mitigation Propagation Scale Schedulability Capacity Distance Number Time Space Density
  • 25. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 25 Outcomes Delivered Resources Consumed Variability Exception /failure Externally created Mitigation System impact Propagation Scale Schedulability Capacity Distance Number Time Space Density
  • 26. © Predictable Network Solutions Ltd 2017© Predictable Network Solutions Ltd 2017 www.pnsol.com 26 Outcomes Delivered Resources Consumed Variability Exception /failure Externally created Mitigation System impact Propagation Scale Schedulability Capacity Distance Number Time Space Density
  • 27. © Predictable Network Solutions Ltd 2017 27 www.pnsol.com Summary • System performance validation consists of: • Analysing the interaction of system behaviour and subsystem performance requirements • Showing that this ‘adds up’ to meet the quantified requirements (QTA) • System performance verification consists of showing that subsystems meet their QTAs • By analysis and/or measurement of ∆Q of the subsystems’ observable behaviour • This provides acceptance and/or contractual criteria for third-party subsystems or services • Substantially reduces performance integration risks and hence re-work
  • 28. © Predictable Network Solutions Ltd 2017 www.pnsol.com Project findings
  • 29. © Predictable Network Solutions Ltd 2017 29 www.pnsol.com Project methodology • Investigate system-of-system use cases • Extract key aspects by talking to multiple stakeholders inside VF • Consider barriers, costs and benefits of applying performance V&V • Consider application of performance V&V within established methodologies • Automotive etc. • Application with TVS toolchain • Run an industry focus group to explore issues and validate findings • Questionnaire • Webinars • Interviews
  • 30. © Predictable Network Solutions Ltd 2017 30 www.pnsol.com Key questions 1. Can the well-tried tools be adapted to be used outside a consultancy model? • Yes – performance validation and verification is practical in appropriate contexts 2. Can these tools be applied within existing V&V methodologies for automotive etc.? • Yes – current approaches can be thereby extended to include V&V of system performance 3. Do the tools have application beyond V&V? • Yes – they can be used earlier in the SDLC to deliver significant benefits, but there are organisational/process barriers to overcome
  • 31. © Predictable Network Solutions Ltd 2017 www.pnsol.com Click to edit Master title style 31 Support stages of the SDLC • Design • Feasibility analysis • Hierarchical decomposition • Subsystem acceptance criteria • Verification • Checking delivery of quantified outcomes • Evaluating resource usage • Re-verification during system lifetime • Validation • Quantification of performance criteria • Checking coverage and consistency Quantify hazards • Failure to meet outcome requirements • Physical constraints • Schedulability constraints • Supply chain constraints • Failure to meet resource constraints • Scaling • Correlations Interaction with System Development Life Cycle
  • 32. © Predictable Network Solutions Ltd 2017 www.pnsol.com 32 • Avoid infeasible developments • 'fail early’ • prune blind alleys • Address scalability early • including real-world constraints • avoid 'heavy tail’ • See the whole risk landscape • not just the 'first problem’ • Be able to write a safety case • Even for shared-resource distributed systems • Can handle subsystems with undocumented characteristics • Mitigate this with in-life measurement of ∆Q driven by the validation • Forewarned is forearmed • Inexpensive, focused data rather than wide-angle ‘big data’ approach • Can integrate with current V&V toolchains • E.g. automotive Benefits
  • 33. Exploitation: barriers • Need to ‘quantify intent’ • may meet resistance • Effort in adopting new tools and techniques • Processes and procedures may change • Upfront work needed before “real” development starts • may not fit management expectations/metrics of ‘progress’ • V&V process models that leave integration to the end • block the opportunity to validate performance decomposition at the outset
  • 34. Exploitation: opportunities • Create progress metrics • Capturing successive risk reduction • Support management and engineering • Assist customers concerned about: • Risks of bad performance • to reputation • to insurability • to safety • Costs of verifying and scaling up a prototype • Need a performance model • Paper submitted to IEEE Design&Test special issue • Waiting for decision
  • 35. Thank you! If you would like further details or want to discuss potential applications, please contact us at: info@pnsol.com

Editor's Notes

  • #3: PEnDAR – Performance ENsurance by Design, Analysing Requirements (TSB reference: 132304). Why? Cost/performance hazards become visible late in the development process – too late to save some projects! This is a multi-$B problem worldwide. There is pressure to re-purpose commodity infrastructure for safety/mission-critical objectives; we need to be able to articulate a safety case.
  • #15: A Quantitative Timeliness Agreement (QTA) is a relationship between the demand (the applied load, including its pattern) and the delivered quality impairment (as a probability distribution, ∆Q). Opportunity costs between systems sharing the same resources, and successive refinements, won’t be considered in this webinar.
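The ∆Q notion in the note above can be made concrete as an "improper" probability distribution: a delay CDF that plateaus below 1.0, where the missing mass is the probability of failure or loss. The following is a minimal illustrative sketch, not PNSoL's actual tooling; the class name and sample values are invented for the example:

```python
# Illustrative sketch: ∆Q as an "improper" delay distribution -- a CDF
# that never reaches 1.0, with the shortfall being the failure probability.
from bisect import bisect_right

class DeltaQ:
    """Quality impairment: sorted delay samples plus a loss probability."""
    def __init__(self, delays, loss_prob=0.0):
        self.delays = sorted(delays)
        self.loss_prob = loss_prob

    def cdf(self, t):
        """P(outcome delivered within time t); plateaus at 1 - loss_prob."""
        if not self.delays:
            return 0.0
        frac = bisect_right(self.delays, t) / len(self.delays)
        return frac * (1.0 - self.loss_prob)

# Hypothetical example: 1% loss, delay samples in milliseconds
dq = DeltaQ([5, 8, 12, 20, 35], loss_prob=0.01)
```

Because the CDF saturates at 0.99 here, no requirement demanding more than 99% delivery can ever be met, however much delay is tolerated; this is the sense in which loss and delay trade off inside a single ∆Q.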
  • #16: Rocket science used to be something only world superpowers could do – now you only need to be a billionaire! It’s well enough understood to be reproducible, and is just (complex) engineering. Brain surgery requires experience, skill and gut feel – not easy to teach! Outcomes are hard to quantify.
  • #18: Any CDF whose curve is always to the left and above this one represents an outcome that is “acceptable”. If the black line crosses the blue line we have a performance hazard.
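The crossing test described in this note, the observed (black) CDF falling below the required (blue) bound, can be sketched numerically. This is a hedged illustration; the function name, the bound format, and the sample delays are all hypothetical:

```python
# Sketch: flag a performance hazard wherever the observed delay CDF falls
# below a required bound. The bound is a list of (delay, min_probability)
# checkpoints -- e.g. (20, 0.95) means "95% of outcomes within 20 ms".

def hazard_points(observed_delays, bound):
    """Return the (delay, required, achieved) checkpoints that are missed."""
    n = len(observed_delays)
    failures = []
    for t, required in bound:
        achieved = sum(1 for d in observed_delays if d <= t) / n
        if achieved < required:
            failures.append((t, required, achieved))
    return failures

bound = [(10, 0.50), (20, 0.95), (50, 0.999)]      # hypothetical requirement
observed = [3, 7, 9, 14, 18, 22, 24, 31, 48, 60]   # hypothetical measurements
print(hazard_points(observed, bound))
```

An empty result means the observed CDF stays above the bound at every checkpoint; any returned tuple is a performance hazard in the sense of the note.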
  • #19: This can be combined with a corresponding analysis of the resource consumption
  • #20: We’re now going to run through some of the technical dimensions of this challenge
  • #21: This captures what we have learnt about system delivery problems over the last decade. There’s a lot here so we’re going to break it down!
  • #22: The key task with shared-resource systems is to find a way to quantify and manage the performance/resource tradeoff. Quantifying and managing this tradeoff (yellow centre) is specific to each particular system; the issues around it can be dealt with by applying generic techniques. Analysis of the central problem is complemented by a synthesis of other techniques. The three key aspects to consider are: Scale – how are the resource/performance trades affected by the scale of the system? Exception/failure – how are these managed, given that they become inevitable in a shared, distributed system? Variability – how variable are the resources and the demand for outcomes?
  • #23: Scale has two dimensions: Space – either in terms of physical distance, affecting transmission times, or in terms of numbers of users/demands on the system, which together create a notion of ‘density’ that can drive the economics of the solution. Time – on long timescales the question is one of capacity, on short ones of schedulability.
  • #24: Exception and failure are specifically not a question of ‘coding errors’ or hardware faults (although those are a factor) but more one of temporary shortage of resources, resulting, for example, in the loss of a packet or a deadline being missed. Two approaches to handling this are mitigation (re-transmitting a packet, for example) and propagation (packet loss resulting in a failed transfer), requiring handling at a higher layer. These interact, and the optimal approach will depend on the frequency and severity of the failures and the costs of handling them in different ways.
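The mitigation-versus-propagation tradeoff in this note can be illustrated with a toy expected-cost comparison (all numbers are hypothetical): mitigation such as forward error correction pays a small overhead on every outcome, whereas propagation pays a large recovery cost only when a failure actually occurs.

```python
def prefer_mitigation(p_fail, overhead_always, cost_on_failure):
    """True when paying a fixed mitigation overhead on every outcome beats
    paying a large recovery cost only on the (rare) failures."""
    return overhead_always < p_fail * cost_on_failure

# Hypothetical figures: 0.5 ms of FEC overhead per packet, versus a 500 ms
# transfer restart when a lost packet propagates into a failed transfer.
print(prefer_mitigation(0.01, 0.5, 500.0))    # frequent failures favour mitigation
print(prefer_mitigation(0.0001, 0.5, 500.0))  # rare failures favour propagation
```

The crossover point, where the fixed overhead equals the expected recovery cost, is exactly the frequency/severity dependence the note describes.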
  • #25: Variability applies both to resources and to load, and its key aspect is correlation: Positively correlated, e.g. by TV advert breaks Negatively correlated, e.g. use of one part of the system precludes simultaneous use of another Uncorrelated, basically a random effect. Correlations can be externally generated or be a result of the operation of the system
  • #26: We need to consider both the impact on individual outcomes and the impact on the ability of the rest of the system to deliver collective outcomes.
  • #27: Once the core is understood, the rest is manageable with the right tools.
  • #32: Need to support stages in the SDLC. In Design: Feasibility: can you deliver the outcomes with sufficient timeliness with acceptable use of resources Hierarchical decomposition Acceptance criteria Verification requires checking quantified outcomes, in a way that is ‘cheap’ enough to re-apply during the system lifetime.
  • #34: Looking at a more formal approach to managing cost/performance hazards – do the benefits and costs of this balance out? There’s a push to use standard commodity infrastructure for safety/mission critical purposes – saves a lot of costs but also introduces risk. Need to be able to make a safety case! Virtualisation is coming in everywhere – what are the risks? Case studies done inside the project show that getting intentions to be quantified can be hard; however explaining that allowing for some possibility of delay or failure can dramatically reduce the delivery costs may encourage engagement. Even functional verification can be considered ‘too expensive’.